[SAC] [OSGeo] #3014: woodie builds fail due to exceeded quota

#3014: woodie builds fail due to exceeded quota
----------------------+-----------------------
Reporter: strk | Owner: sac@…
     Type: task | Status: new
Priority: normal | Milestone: Unplanned
Component: SysAdmin | Keywords:
----------------------+-----------------------
Especially when we have many steps, where each step creates a container,
it's easy to end up out of space.

This pull request for woodpecker-ci should help with this problem:
Stop steps after they are done by anbraten · Pull Request #2681 ·
woodpecker-ci/woodpecker: https://github.com/woodpecker-ci/woodpecker/pull/2681

Meanwhile our best chance is to `docker system prune` as often as
possible.
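
One way to run that regularly would be a cron entry; a minimal sketch,
assuming root access on the woodie hosts (the path, log file and 03:00
schedule are illustrative, not what is actually deployed):
{{{
# Hypothetical /etc/cron.d/docker-prune entry:
# remove stopped containers, unused networks, dangling images and
# build cache every day at 03:00, without an interactive prompt
0 3 * * * root /usr/bin/docker system prune --force >> /var/log/docker-prune.log 2>&1
}}}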
--
Ticket URL: <https://trac.osgeo.org/osgeo/ticket/3014>
OSGeo <https://osgeo.org/>
OSGeo committee and general foundation issue tracker.

#3014: woodie builds fail due to exceeded quota
-----------------------------+------------------------
Reporter: strk | Owner: robe
     Type: task | Status: new
Priority: normal | Milestone: Unplanned
Component: SysAdmin/Woodie | Resolution:
Keywords: |
-----------------------------+------------------------
Changes (by strk):

* owner: sac@… => robe
* component: SysAdmin => SysAdmin/Woodie

--
Ticket URL: <https://trac.osgeo.org/osgeo/ticket/3014#comment:1>
OSGeo <https://osgeo.org/>
OSGeo committee and general foundation issue tracker.

#3014: woodie builds fail due to exceeded quota
-----------------------------+---------------------------------------
Reporter: strk | Owner: robe
     Type: task | Status: new
Priority: normal | Milestone: Sysadmin Contract 2023-I
Component: SysAdmin/Woodie | Resolution:
Keywords: |
-----------------------------+---------------------------------------
Changes (by robe):

* milestone: Unplanned => Sysadmin Contract 2023-I

Comment:

I thought I was already doing a daily docker system prune on woodie-server
and all the woodie-clients.

It only runs daily, though.

If I'm already doing that, I can up the space available on them. I think
I had made it a modest 150G or so.
--
Ticket URL: <https://trac.osgeo.org/osgeo/ticket/3014#comment:2>
OSGeo <https://osgeo.org/>
OSGeo committee and general foundation issue tracker.

#3014: woodie builds fail due to exceeded quota
-----------------------------+---------------------------------------
Reporter: strk | Owner: robe
     Type: task | Status: new
Priority: normal | Milestone: Sysadmin Contract 2023-I
Component: SysAdmin/Woodie | Resolution:
Keywords: |
-----------------------------+---------------------------------------
Comment (by strk):

How about hourly instead of daily?
--
Ticket URL: <https://trac.osgeo.org/osgeo/ticket/3014#comment:3>
OSGeo <https://osgeo.org/>
OSGeo committee and general foundation issue tracker.

#3014: woodie builds fail due to exceeded quota
-----------------------------+---------------------------------------
Reporter: strk | Owner: robe
     Type: task | Status: closed
Priority: normal | Milestone: Sysadmin Contract 2023-I
Component: SysAdmin/Woodie | Resolution: fixed
Keywords: |
-----------------------------+---------------------------------------
Changes (by robe):

* status: new => closed
* resolution: => fixed

Comment:

We could do that, but ultimately I think the issue is that docker system
prune is not pruning everything. Even after doing
{{{
docker system prune --all --force
}}}

there still seem to be large numbers of orphan /var/lib/docker/vfs
directories left around, amounting to about 60 GB.
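
A quick way to confirm how much space those orphan layers hold; a sketch,
assuming the default data-root (/var/lib/docker/vfs/dir is where the vfs
storage driver keeps its layers; adjust VFS_DIR for a non-default setup):
{{{
#!/bin/sh
# Hypothetical audit of leftover vfs layer directories after a prune.
VFS_DIR="${VFS_DIR:-/var/lib/docker/vfs/dir}"

# Per-layer usage in KiB, largest first, so orphans stand out
du -sk "$VFS_DIR"/* 2>/dev/null | sort -rn | head -20

# Total usage of the vfs backing store
du -sh "$VFS_DIR" 2>/dev/null || true
}}}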
I ended up just reinitializing all of them with the base image or snapshot
from osgeo8 woodie-client/reset.

This is safe to do with the clients since I think the logs are kept on
woodie-server, so nothing is lost.

I do plan to try podman to see if it does a better job of cleaning up when
asked to.
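
For reference, podman's counterpart to the docker command above (the
--volumes flag is included here as an assumption about what we would want
reclaimed, not something already tested on the clients):
{{{
# --all also removes unused (not just dangling) images,
# --volumes additionally reclaims unused volumes
podman system prune --all --force --volumes
}}}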

The reset image ends up creating a container with 1.4 GB in use (which
includes the running woodie client, but no images besides the woodpecker
one).
--
Ticket URL: <https://trac.osgeo.org/osgeo/ticket/3014#comment:4>
OSGeo <https://osgeo.org/>
OSGeo committee and general foundation issue tracker.