On Wed, 2 Apr 2014, Digimer wrote:
On 02/04/14 03:46 PM, Bill Oliver wrote:
> On Wed, 2 Apr 2014, Digimer wrote:
>
> >
> > Ya, just a little TMI.
> >
> > I think you will need a fairly custom HPC setup... You will
> > need to find a way to break your work up into pieces and send them out
> > to the various nodes, then collect the returned results and piece
> > them back together (and handle timeouts of jobs not returned by a
> > given node, re-issuing them to another node).
> >
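[The distribute/collect/re-issue loop described above can be sketched roughly as below. This is a minimal single-machine sketch, not a real cluster framework: a thread pool stands in for the remote nodes, and `process_piece`, `run_with_retry`, and the per-piece work are hypothetical names invented for illustration. In a real HPC setup the submit step would ship work over the network.]

```python
# Sketch of the distribute / collect / re-issue pattern: farm pieces out,
# collect results, and re-submit any job whose node fails to return it
# in time. A thread pool stands in for the cluster nodes.
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutTimeout

def process_piece(piece):
    # Placeholder for the per-node work (e.g. an FFT on one image).
    return piece * piece

def run_with_retry(pieces, timeout=5.0, max_attempts=3):
    results = {}
    with ThreadPoolExecutor(max_workers=4) as pool:
        # Issue every piece to the pool up front.
        pending = {i: pool.submit(process_piece, p) for i, p in enumerate(pieces)}
        attempts = {i: 1 for i in pending}
        while pending:
            for i, fut in list(pending.items()):
                try:
                    results[i] = fut.result(timeout=timeout)
                    del pending[i]
                except FutTimeout:
                    # Job not returned in time: re-issue it, up to a limit.
                    if attempts[i] < max_attempts:
                        attempts[i] += 1
                        pending[i] = pool.submit(process_piece, pieces[i])
                    else:
                        del pending[i]  # give up on this piece
    # Piece the returned results back together in the original order.
    return [results[i] for i in sorted(results)]
```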
> > There are some projects out there that might work as a foundation, but
> > it's slipping outside my expertise (I'm an HA admin). I would suggest
> > stopping by freenode.net's #hpc channel and seeing what they might be
> > able to recommend.
> >
> >
>
> I've run a small render farm back when I did forensic animations -- but
> you don't have to have the computers connected for a render farm. But
> yeah, if I have 100 images and want to do ffts on all of them, I can run
> scripts on five machines that do 20 each. I'd like to see if a "real"
> cluster would improve stuff. In addition, some of the software I've
> used supports real parallelism.
>
>
> billo
How do you define "real cluster"?
Something where I can take *one* program, compiled for parallelization, that will distribute
the processing among machines, as compared to running multiple invocations of a program on
different machines, each chewing on a different dataset.
For instance, a render farm where I run 15 instances of Maya or Blender on 15 machines,
each rendering a different set of frames to be later combined for an animation isn't a
"real cluster" to me. Running one instance of Maya or Blender to use the memory
and processing of all 15 machines would be a "real cluster" for me -- assuming a
parallel version of Maya or Blender that could do that, of course.
billo
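[The distinction Bill draws can be sketched as below. This is a hedged single-machine analogy, not a description of any real Maya or Blender feature: `render_frame` and `render_animation` are hypothetical names, and threads stand in for the remote nodes to keep the sketch runnable. Across real machines, this is roughly what an MPI-style parallel build of an application would do -- one invocation farms the frames out to all of its workers and reassembles the result itself, rather than 15 separately launched copies each hand-fed a slice of frames.]

```python
# One program, one job, many workers: the single invocation partitions
# the work, runs the pieces in parallel, and combines the results in
# order -- no per-machine scripts or hand-partitioned frame ranges.
from concurrent.futures import ThreadPoolExecutor

def render_frame(frame_number):
    # Placeholder for rendering one frame of the animation.
    return f"frame-{frame_number:04d}"

def render_animation(total_frames, workers=4):
    # map() preserves input order, so the frames come back ready to
    # assemble into the final animation.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(render_frame, range(total_frames)))
```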