Over the last couple of weeks we've been using puppet to distribute static content across some of our application servers and proxy servers.
Static content might include the new static webpage or an application like our accounts system.
This has proved to be a bit of an issue. Puppet wasn't really designed for this: it puts a noticeable load on the boxes while it runs and makes runs take longer. It does work, but it's now managing thousands of files and initial deploys take a long time :) In the past we'd discussed moving some things (like turbogears apps) around using rpms. We can do that with tg pretty easily, but what about other static content, images, things like that?
This needs to be scriptable from start to finish; here are the options as I see them:
1. Straight nfs mount (boo)
2. nfs mount to cron copy the files
3. recursive wget to an http store somewhere
4. rsync via ssh keys or rsync server (I'm currently leaning towards this)
5. Figure out how to make puppet more efficient with large numbers of files.
We've got a whole pool of sysadmins on this list. How do you deal with these issues in your current environments?
-Mike
On Fri, 25 May 2007, Mike McGrath wrote:
- rsync via ssh keys or rsync server (I'm currently leaning towards this)
We've got a whole pool of sysadmins on this list. How do you deal with these issues in your current environments?
Across a pool of web servers in a cluster, we use rsync over SSH to keep content "in sync". When changes are made on the "master", it takes a few moments for the other nodes to pick them up, but that hasn't caused any issues. rsync is initiated on the "non-master" nodes, and the only time we have a problem is if the "master" isn't available (down for reboot, etc.). Even that doesn't cause any real problems; it just results in an e-mail, since the script called from crontab generates output only in the event of errors and is otherwise silent.
We're very happy with this method, it has proven to be reliable and scalable in our environment.
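A rough sketch of that kind of cron-driven pull, to make it concrete -- the hostname and paths here are made up, not anyone's real setup:

    #!/usr/bin/python
    # Pull content from the master with rsync; stay silent unless something
    # fails, so cron only sends mail on errors.  Host and paths are placeholders.
    import subprocess
    import sys

    MASTER = "content-master.example.org"      # hypothetical master host
    SRC = "/srv/web/content/"                  # hypothetical source path
    DEST = "/srv/web/content/"                 # local destination

    cmd = [
        "rsync", "-az", "--delete",
        "-e", "ssh -o BatchMode=yes",          # fail instead of prompting for a password
        "%s:%s" % (MASTER, SRC), DEST,
    ]

    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE, universal_newlines=True)
    out, err = proc.communicate()
    if proc.returncode != 0:
        # Any output here makes cron send mail; success stays quiet.
        sys.stderr.write("rsync from %s failed (%d):\n%s\n"
                         % (MASTER, proc.returncode, err))
        sys.exit(1)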
On 5/25/07, Mike McGrath mmcgrath@redhat.com wrote:
[...]
This needs to be scriptable from start to finish; here are the options as I see them:
- Straight nfs mount (boo)
- nfs mount to cron copy the files
- recursive wget to an http store somewhere
- rsync via ssh keys or rsync server (I'm currently leaning towards this)
- Figure out how to make puppet more efficient with large numbers of files.
Another option that may fit into the mix somewhere is proxy caching... For example, you could have one 'master content server' that is only accessible to the Fedora machines, and then have all the proxy servers act as caching proxies for that content. You may want that type of thing anyway, to better abstract the path where the files are stored from the URL the files are available at.
Best, -- Elliot
Gaddis, Jeremy L. wrote:
We've got a whole pool of sysadmins on this list. How do you deal with these issues in your current environments?
Across a pool of web servers in a cluster, we use rsync over SSH [...] rsync is initiated on the "non-master" nodes
And I have done the opposite: set up a script on the master to run multiple parallel rsyncs, pushing content out to the target servers. This was an on-demand content push, rather than an automatic replication of changes. It worked very well for many years.
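A sketch of that sort of parallel push from the master -- the host list and paths are invented for illustration:

    #!/usr/bin/python
    # On-demand push from the master: start one rsync per target server,
    # let them run in parallel, and report which ones failed.
    import subprocess

    TARGETS = ["web1.example.org", "web2.example.org", "web3.example.org"]
    SRC = "/srv/web/content/"
    DEST = "/srv/web/content/"

    procs = {}
    for host in TARGETS:
        cmd = ["rsync", "-az", "--delete", "-e", "ssh -o BatchMode=yes",
               SRC, "%s:%s" % (host, DEST)]
        procs[host] = subprocess.Popen(cmd)

    failed = [host for host, p in procs.items() if p.wait() != 0]
    if failed:
        print("push failed for: %s" % ", ".join(failed))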
Jason
On Fri, 25 May 2007 09:16:49 -0500, Mike McGrath wrote:
How do you deal with these issues in your current environments?
At some point I used multi-rsync: http://freshmeat.net/projects/mrsync
multi-rsync transfers files from one master machine to many machines using the multicasting capability of UNIX sockets.
It worked pretty well. I think I attempted to push it to FE a long time ago (when we were doing reviews by email) and no-one reviewed it IIRC.
I'm no longer actively using this, but the SPEC and SRPMS are here just in case... ftp://ftp.licr.org/pub/multi-rsync.spec ftp://ftp.licr.org/pub/multi-rsync-1.0-1.src.rpm
Cheers, Christian
On Fri, 2007-05-25 at 09:16 -0500, Mike McGrath wrote:
[...]
This needs to be scriptable from start to finish; here are the options as I see them: [...]
How about we use the puppet cert it makes on the client for auth, and see if we can get wget or urlgrabber or curl to use it to talk to mod_auth_cert on apache?
Then we'd have secure auth plus good static content replication.
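A rough sketch of the client side of that idea. The URL is made up, and the cert/key/CA paths assume puppet's usual ssldir layout, so adjust both for the real setup:

    #!/usr/bin/python
    # Fetch content over https, authenticating with the host's puppet client cert.
    # The server would have to be configured to require/verify client certs.
    import ssl
    import socket
    import http.client

    HOST = "content-master.example.org"        # hypothetical content server
    PATH = "/static/index.html"                # hypothetical file to fetch

    FQDN = socket.getfqdn()
    ctx = ssl.create_default_context(cafile="/var/lib/puppet/ssl/certs/ca.pem")
    ctx.load_cert_chain(
        certfile="/var/lib/puppet/ssl/certs/%s.pem" % FQDN,
        keyfile="/var/lib/puppet/ssl/private_keys/%s.pem" % FQDN,
    )

    conn = http.client.HTTPSConnection(HOST, context=ctx)
    conn.request("GET", PATH)
    resp = conn.getresponse()
    if resp.status == 200:
        open("/tmp/index.html", "wb").write(resp.read())
    else:
        raise SystemExit("GET %s failed: %d %s" % (PATH, resp.status, resp.reason))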
-sv
On May 25, 2007, at 7:16, Mike McGrath wrote:
We've got a whole pool of sysadmins on this list. How do you deal with these issues in your current environments?
csync2 can be great for this sort of thing - I use it for recursive directory copies when puppet is too slow for that.
For larger numbers of files it's much, much more efficient than rsync -- and it can sync "both ways" [1].
The proxy suggestion is also good (Varnish [2] is an up-and-coming, super-efficient reverse proxy cache server), but you still need to sync the content to at least two backends for HA...
- ask
[1] http://oss.linbit.com/csync2/paper.pdf
[2] http://varnish.projects.linpro.no/ - http://phk.freebsd.dk/pubs/varnish_roadshow.pdf
On Fri, May 25, 2007 at 11:50:04AM -0400, Jason Watson wrote:
Gaddis, Jeremy L. wrote:
We've got a whole pool of sysadmins on this list. How do you deal with these issues in your current environments?
Across a pool of web servers in a cluster, we use rsync over SSH [...] rsync is initiated on the "non-master" nodes
And I have done the opposite: set up a script on the master to run multiple parallel rsyncs, pushing content out to the target servers. This was an on-demand content push, rather than an automatic replication of changes. It worked very well for many years.
This is what we do for linux.dell.com and several internal servers.
On Fri, 2007-05-25 at 12:08 -0400, seth vidal wrote:
How about we use the puppet cert it makes on the client for auth, and see if we can get wget or urlgrabber or curl to use it to talk to mod_auth_cert on apache?
Then we'd have secure auth plus good static content replication.
+1
It also keeps you from increasing the number of keys that need to be tracked and distributed to hosts -- with rsync + ssh, you have to manage the SSH key relationship for *all* hosts. Since the _content_ isn't secret but we do want to ensure the host is authentic, this idea is the best so far. It uses known and working secure auth, and lets you deploy content to hosts that you don't want to have an SSH key relationship with.
A related item is the trigger for content pushing. There are two general situations when we want to push out new content:
1. I'm updating something, no worries
2. I really, really want/need to see the change RIGHT NOW
I presume puppet has something for this with configurations.
Personally, I'd be comfortable with a longer lead-time on a cronjob from the subservient host (two to four times an hour), if it were possible to push a Big Red Button and have content updated from the master immediately.
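One sketch of how that could work (the URL, paths, and rsync module below are all placeholders): the subservient host polls a tiny serial file every few minutes, and only runs the full sync when it changes -- so the Big Red Button on the master is just bumping that file.

    #!/usr/bin/python
    # Run from cron every few minutes on the subservient host.  The expensive
    # full sync only happens when the master's serial file changes, so an
    # immediate push is just "update the serial" on the master.
    import subprocess
    import urllib.request

    SERIAL_URL = "https://content-master.example.org/content.serial"   # hypothetical
    STAMP = "/var/run/content.serial"            # last serial this host synced

    remote = urllib.request.urlopen(SERIAL_URL).read().strip()
    try:
        local = open(STAMP, "rb").read().strip()
    except IOError:
        local = b""

    if remote != local:
        subprocess.check_call([
            "rsync", "-az", "--delete",
            "content-master.example.org::content",   # hypothetical rsync module
            "/srv/web/content/",
        ])
        open(STAMP, "wb").write(remote)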
Open for suggestions on methodology, natch. :)
- Karsten
Once upon a time Friday 25 May 2007, Mike McGrath wrote:
[...]
We've got a whole pool of sysadmins on this list. How do you deal with these issues in your current environments?
How about using cvs and scripting a checkout of the content? I would say either that or rsync. Since a lot of it, like the accounts system, is already in cvs, why not use that?
Dennis
On 5/25/07, Dennis Gilmore dennis@ausil.us wrote:
How about using cvs and scripting a checkout of the content? I would say either that or rsync. Since a lot of it, like the accounts system, is already in cvs, why not use that?
I was going to suggest SVN, but it's roughly the same thing. Have it checked in and automate a checkout. You can even check out only the things tagged for that machine if you want. I think you could use puppet to kick off the checkout.
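A sketch of what the scripted checkout could look like -- the repository layout and local path are invented for the example:

    #!/usr/bin/python
    # Check out (or update) the content tagged for this machine from SVN.
    # Could be kicked off by a puppet exec or a cron job.
    import os
    import socket
    import subprocess

    # Hypothetical layout: one tag per host under /content/tags/
    REPO = "https://svn.example.org/content/tags/%s" % socket.gethostname()
    DEST = "/srv/web/content"

    if os.path.isdir(os.path.join(DEST, ".svn")):
        subprocess.check_call(["svn", "update", "--quiet", DEST])
    else:
        subprocess.check_call(["svn", "checkout", "--quiet", REPO, DEST])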
stahnma
Dennis Gilmore wrote:
How about using cvs and scripting a checkout of the content? I would say either that or rsync. Since a lot of it, like the accounts system, is already in cvs, why not use that?
I thought about this; the problem is that some of our content needs to be built first.
-Mike
Where I work, everything needs to be built, and we use the following: SVN -> build script -> rsync (static content to the web servers, dynamic to the app servers). Really simple.
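Roughly like this, as a sketch -- the repo URL, build command, and host lists are placeholders, not a real setup:

    #!/usr/bin/python
    # SVN -> build -> rsync, run on one build box after a commit.
    import subprocess

    WORKDIR = "/srv/build/site"
    WEBSERVERS = ["web1.example.org", "web2.example.org"]
    APPSERVERS = ["app1.example.org"]

    subprocess.check_call(["svn", "update", "--quiet", WORKDIR])
    subprocess.check_call(["make", "-C", WORKDIR])      # whatever the build step is

    for host in WEBSERVERS:
        subprocess.check_call(["rsync", "-az", "--delete",
                               WORKDIR + "/static/", "%s:/srv/web/static/" % host])
    for host in APPSERVERS:
        subprocess.check_call(["rsync", "-az", "--delete",
                               WORKDIR + "/app/", "%s:/srv/app/" % host])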
Paulo
On 5/26/07, Mike McGrath mmcgrath@redhat.com wrote:
I thought about this; the problem is that some of our content needs to be built first.
-Mike