On Fri, 4 Feb 2022 at 11:56, Neal Becker <ndbecker2@gmail.com> wrote:
My purpose is to queue up a bunch of tasks (more than #cpus) and have #cpus of them running in parallel at a time.
So if I have 120 jobs to run, and 32 cores, I want to queue them all up and run 32 in parallel at a time.
Or maybe I need to set --ncpus=16 to schedule 16 parallel jobs instead of 32 (my scheduler is very simple and doesn't know about free memory).
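
GNU parallel does this out of the box: by default it runs one job per
CPU thread and starts the next queued job as each one finishes, and
-j/--jobs overrides that count. A minimal sketch (./myjob and the
input list are placeholders assumed for illustration):

       parallel -j32 ./myjob {} ::: input*.dat   # run 32 at a time
       parallel -j16 ./myjob {} ::: input*.dat   # or halve it if RAM is tight

If free memory is the real concern, you don't have to guess at a job
count.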

Current GNU parallel has:

       --memfree size
           Minimum memory free when starting another job. The size can be postfixed with K, M, G, T, P, k, m, g, t, or p (see UNIT PREFIX).

           If the jobs take up very different amounts of RAM, GNU parallel will only start as many as there is memory for. If less than size bytes are
           free, no more jobs will be started. If less than 50% of size bytes are free, the youngest job will be killed and put back on the queue to be
           run later.

           --retries must be set to determine how many times GNU parallel should retry a given job.

           See also: --memsuspend
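
For example, a minimal sketch of --memfree (the 2G threshold, ./myjob,
and the inputs are assumptions for illustration): only start a new job
while at least 2 GB is free, and retry a killed job up to 5 times:

       parallel --memfree 2G --retries 5 ./myjob {} ::: input*.dat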

       --memsuspend size
           Suspend jobs when there is less than 2 * size memory free. The size can be postfixed with K, M, G, T, P, k, m, g, t, or p (see UNIT PREFIX).

           If the available memory falls below 2 * size, GNU parallel will suspend some of the running jobs. If the available memory falls below size, only one
           job will be running.

           If a single job takes up at most size RAM, all jobs will complete without running out of memory. If you have swap available, you can usually lower
           size to around half the size of a single job - with the slight risk of swapping a little.

           Jobs will be resumed when more RAM is available - typically when the oldest job completes.

           --memsuspend only works on local jobs because there is no obvious way to suspend remote jobs.
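
For example (the 4G figure is an assumption, sized to a single job's
peak RAM): with --memsuspend 4G, GNU parallel starts suspending jobs
once free memory drops below 8G and throttles down to a single running
job below 4G:

       parallel --memsuspend 4G ./myjob {} ::: input*.dat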


--
George N. White III