Since the config is only rarely pulled from disk, you might consider a disk mount that lets each container access the very same files (e.g. NFS or a more modern equivalent). The 2.x series brings the possibility of an RDBMS-persisted config if you're willing to build that module. (I found it straightforward.)
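If you go the shared-data-directory route, a minimal sketch might look like the following. (The NFS server name and paths here are hypothetical placeholders; GEOSERVER_DATA_DIR is the standard property GeoServer reads to locate its data directory, and JAVA_OPTS assumes a Tomcat-style container.)

```shell
# Mount one NFS export on every node so all containers see the
# same GeoServer data directory (hypothetical host and paths).
mount -t nfs nfs-server:/export/geoserver /mnt/geoserver_data

# Point each container at the shared directory via the standard
# GEOSERVER_DATA_DIR system property, e.g. in the container's JAVA_OPTS:
export JAVA_OPTS="$JAVA_OPTS -DGEOSERVER_DATA_DIR=/mnt/geoserver_data"
```

One caveat: concurrent config writes from multiple instances can step on each other, so it's safest to make admin changes on a single node and have the others reload.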
As far as scaling goes, you'll have to consider load metrics (what do "50 users" actually imply for demand? what kinds of requests are they making?), caching, hardware, backend, etc. Simplistic JEE horizontal scaling is a relatively expensive way to get results, and often flat-out useless. Bryan's remarks are right on target.
---
A. Soroka
Digital Research and Scholarship R & D
the University of Virginia Library
On Oct 13, 2009, at 7:31 PM, bryanhall wrote:
jberti wrote:
Hi,
I have a couple questions about best practices or experience with running
GeoServer in a production environment.
First, what is the best way to make sure every container has the same
configuration and data? Just modifying one and copying the data over to
the other ones? Or is there any way to use the same data directory for
all? (I think I've seen a discussion about this at some point but couldn't
find it anymore).
Second, what are the experiences of how many different containers to use
for GeoServer for, let's say, 15, 50, or 100 users at a time?
Julian,
I'm not sure what to answer on sharing the config - other than I'd suggest
making the changes on a test site, then copying them to the production site.
Load-wise, it depends on what you are doing. If you are tiling (Google Maps
/ Virtual Earth) and don't hit many cached tiles, you can drag down the
server CPU-wise rather quickly (it's just working really hard to draw them
all). Same with WMS. With our rather complex shapes and 30ish merged vector
layers of data, we get a draw rate of about 5 tiles/second per CPU. Cached
tiles are almost free (just spooling the file).
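As a rough back-of-envelope from those numbers (assuming the ~5 tiles/s per CPU rate scales linearly across cores, which it won't quite in practice):

```shell
# Rough uncached-tile throughput estimate from Bryan's figures:
# ~5 tiles/s per CPU on an 8-core box, linear scaling assumed.
TILES_PER_CPU=5
CORES=8
echo "~$((TILES_PER_CPU * CORES)) uncached tiles/s; cached tiles cost only file I/O"
```

So an 8-core server tops out around a few dozen uncached tiles per second, which is why a warm tile cache matters so much under load.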
On the other hand, if you mostly do WFS or KML data streaming, it uses
almost no CPU time at all. Bandwidth and/or database lookup speed are more
important here, IMHO.
We use GS on an 8-core server backed by a 16-core database server. We
currently provide WFS/WMS/WCS and KML support from our live production data
using views. FME and our primary SaaS application share the app server as
well. Combined, we rarely touch more than 25% of the available CPU power on
the box. Network bandwidth, and then data-gathering time, are the bigger
bottlenecks for us.
Bryan
--
View this message in context: http://www.nabble.com/GeoServer-in-Production-environment-tp25879755p25882786.html
Sent from the GeoServer - User mailing list archive at Nabble.com.
_______________________________________________
Geoserver-users mailing list
Geoserver-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/geoserver-users