Task 446541 has been running for more than 72 hours now, even though it
has been reloaded several times.
I think the worker initially allocated doesn't have enough resources to
carry on with this task. Could you please kill it?
On Monday, August 29, 2016 1:04:58 PM CEST Michal Novotny wrote:
> On Fri, Aug 26, 2016 at 6:22 PM, Pavel Raiskup <praiskup(a)redhat.com> wrote:
> > Does it spawn builders even if there is no build queue yet?
> Yes, it does. I completely cut off our backend dev instance from frontend
> and repeated the fresh-start experiment.
> This time the build queue was empty for sure and the builders were still
> being spawned. I also confirmed in the
> code that it should be so. These parts (around VmMaster) weren't touched.
I've done a quick review of the patch now, and I quite like the
backend's "take-one-task" approach. That way you control the queue on
the frontend (with atomicity provided by PostgreSQL), while the backend
still "pulls down" the work.
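To make that concrete, here is a minimal sketch of the pull loop I have
in mind; the frontend URL, the JSON shape, and run_build() are
illustrative assumptions, not the actual Copr code:

# Minimal sketch of the "take-one-task" pull model.  The endpoint URL,
# JSON fields, and run_build() are assumptions, not the real Copr API.
import time
import requests

FRONTEND_URL = "https://copr.example.org"  # hypothetical frontend

def take_one_task():
    """Ask the frontend for exactly one pending build, or None."""
    resp = requests.get(FRONTEND_URL + "/backend/waiting/", timeout=30)
    resp.raise_for_status()
    return resp.json().get("build")  # None when the queue is empty

def run_build(task):
    ...  # allocate a VM, run the build, report the result

def worker_loop():
    while True:
        task = take_one_task()
        if task is None:
            time.sleep(10)  # queue is empty, back off
        else:
            run_build(task)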
There is one drawback, however -- the ugly DEFER_BUILD_SECONDS
workaround. The problem is that you now put all builds (all
architectures) into one build queue, which has "starving" consequences
that the defer logic only papers over.
I would suggest adding one additional argument to the /backend/waiting/
backend API: the requested architecture (see the sketch after this
list).
* Then you can remove everything related to the "defer" action on both
  the BE and FE sides.
* You can lower the BE<->FE traffic and significantly lower the IO on
  the frontend, because you can first allocate an appropriate VM and
  only then assign the job (not vice versa: take the job, then try to
  take a VM and possibly defer the job).
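Here is a minimal sketch of that arch-aware variant; the "arch" query
parameter is exactly the suggestion above, while the VM object and its
helpers are made-up names:

# Sketch of the proposed arch-aware take-one-task: allocate the VM
# first, then ask the frontend for a job matching its architecture.
# The "arch" parameter is the suggestion above, not an existing API;
# the vm object and its attributes are hypothetical.
import requests

FRONTEND_URL = "https://copr.example.org"  # hypothetical frontend

def take_task_for_arch(arch):
    """Request one pending build for the given architecture only."""
    resp = requests.get(FRONTEND_URL + "/backend/waiting/",
                        params={"arch": arch}, timeout=30)
    resp.raise_for_status()
    return resp.json().get("build")

def dispatch(vm):
    # vm.arch is known up front, so no "defer" dance is needed: either
    # a matching job exists, or the idle VM simply polls again later.
    task = take_task_for_arch(vm.arch)
    if task is not None:
        vm.run(task)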
Also, the "take-build" method (load_job() in particular) should be
"atomic" -> it should move the build into the "running" state in the
same step, which lets us completely remove the "starting" state; that
state has zero informational value anyway (users know/should know the
build queue priority). A sketch of such an atomic load_job() follows
the next point.
* Then we could much more easily implement "multiple-backends"
  support; I have wanted something like this for a long time.
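A minimal sketch of the atomic load_job(), assuming PostgreSQL 9.5+
(for SKIP LOCKED) and hypothetical table and column names:

# Atomic load_job() sketch: a single UPDATE ... RETURNING flips the
# oldest pending build straight to "running", so no separate
# "starting" state is needed.  Table and column names are hypothetical.
import psycopg2

def load_job(conn):
    with conn, conn.cursor() as cur:  # commits (or rolls back) on exit
        cur.execute("""
            UPDATE build SET status = 'running'
            WHERE id = (SELECT id FROM build
                        WHERE status = 'pending'
                        ORDER BY priority, id
                        LIMIT 1
                        FOR UPDATE SKIP LOCKED)
            RETURNING id, package, chroot
        """)
        return cur.fetchone()  # None when nothing is pending

SKIP LOCKED also means two backends can never grab the same build,
which is what would make the "multiple-backends" support cheap to add.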
On Friday, August 26, 2016 4:15:50 PM CEST Michal Novotny wrote:
> On Fri, Aug 26, 2016 at 3:46 PM, Pavel Raiskup <praiskup(a)redhat.com> wrote:
> > Because in the instance I maintain, we currently have about 7 builders
> > preallocated and booted, so the backend doesn't have to wait for the
> > builder VM to boot up.
> Spawning of virtual machines should/can be independent of the
> copr-backend process itself and of allocating worker processes. If that
> isn't so, it would be a nice improvement.
That already worked -- I don't see any benefit in removing that feature
(we spent a lot of time on it). Just a reason to put a "sad face" here.
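For reference, the feature boils down to something like this; a minimal
sketch with made-up helper names, not the actual VmMaster code:

# Keep builder VMs pre-spawned independently of the build queue, so a
# new job never waits for a VM to boot.  The cloud helpers and the pool
# size constant are made-up names, not the real VmMaster code.
import time

POOL_SIZE = 7  # the instance mentioned above keeps ~7 builders ready

def vm_pool_loop(cloud):
    while True:
        idle = cloud.list_idle_vms()        # hypothetical helper
        for _ in range(POOL_SIZE - len(idle)):
            cloud.spawn_vm()                # boot ahead of demand
        time.sleep(30)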
Hello packagers & devs,
we have just deployed a new release of the COPR eco-system. The changes
are mainly internal but should help solve some issues we have been
tackling recently. The most important upgrades are:
1) The copr-dist-git service is now parallel ("multi-process"), so a
problem with one import job no longer blocks the others. We also added
import-job timeouts, so if an import gets stuck, it no longer blocks the
queue forever (a minimal sketch of the timeout idea follows the list).
2) The waiting-queue logic has been rewritten. I don't believe there
will be much of a perceived performance improvement, but we managed to
cut the relevant code almost in half and simplify some bits.
3) We have fixed appstream metadata building - a bug that also affected
forking. When a project was forked, the rpm re-signing process kept
looping, and other actions were blocked by it. Hence, in addition to
fixing the root cause (the metadata building), we also needed to fix the
forking logic to handle errors better.
4) We have introduced the first bits of support for modularity.
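As promised in point 1, here is a minimal sketch of the import timeout;
the function names and the timeout value are illustrative, not the
actual copr-dist-git code:

# Run each dist-git import in its own process and kill it when it
# exceeds a time limit, so one stuck import cannot block the rest.
# do_import() and IMPORT_TIMEOUT are illustrative names, not the
# actual copr-dist-git code.
import multiprocessing

IMPORT_TIMEOUT = 600  # seconds; illustrative value

def do_import(task):
    ...  # clone the sources and import them into dist-git

def run_import_with_timeout(task):
    proc = multiprocessing.Process(target=do_import, args=(task,))
    proc.start()
    proc.join(IMPORT_TIMEOUT)
    if proc.is_alive():      # the import got stuck
        proc.terminate()     # kill it so other imports keep flowing
        proc.join()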
So that's it. I hope COPR will serve you well.
Is it possible to recover my lantw44/chromium project on Copr? I never
deleted it, but it has become inaccessible since 2016-08-03 13:00 UTC.
The web page https://copr.fedorainfracloud.org/coprs/lantw44/chromium/
shows "Error 404: Not Found. Project lantw44/chromium does not exist."
and dnf prints "Failed to synchronize cache for repo 'lantw44-chromium',
Good news, everyone!
We have employed brand-new PPC64LE OpenStack P8 machines to build our
cool packages! Please try them out (fedora-23-ppc64le, fedora-24-ppc64le,
fedora-rawhide-ppc64le) and let us know if they work (or don't) for you!