Hi,
I just had to do this myself, and I figured some of you may be in the same position, so I'll share it (for Linux / UNIX):
We had a remote server vacuuming up our data, probably seeding a cache and using our box as a free rendering service. Which is fine, but this one was very aggressive. So while GeoServer now has the ability to log requests, you may not always want to restart or reconfigure the service just to enable that, especially if the traffic is transient.
A handy tool is ngrep ( http://ngrep.sourceforge.net ), and the command to use is:

  ngrep -qd eth0 'GET' tcp port 8080
If this doesn't reveal the culprit you can try filtering for 'POST' instead, and make sure that 'eth0' is really the interface you want to capture on.
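For example, a couple of variations (the interface and port here are just the values from above; adjust them to your setup):

  # Catch clients that send their WMS requests as POST instead of GET
  ngrep -qd eth0 'POST' tcp port 8080

  # List the available interfaces if eth0 isn't the right one
  ifconfig -a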
If GeoServer is running on a local machine you have plenty of tools to choose from (like Wireshark), but on remote machines, where streaming the capture back may require more bandwidth than you have, it's trickier.
-Arne
Arne Kepp wrote:
Hi,
I just had to do this myself, and I figured some of you may be in the same position, so I'll share it (for Linux / UNIX):
We had a remote server vacuuming up our data, probably seeding a cache and using our box as a free rendering service. Which is fine, but this one was very aggressive.
We really need to start adding some limitations on the WMS requests
that can be made, and on how many of them a single client can make
in a certain amount of time; the concerns are both self-preservation
(think of someone asking for a huge WMS image) and quality of
service.
The first is relatively easy, the second one is trickier. I'm
wondering if there are already existing ways to handle QoS
concerns. For example, is the Linux kernel's built-in firewall
able to handle QoS in a WMS-meaningful way?
Cheers
Andrea
Andrea Aime wrote:
Arne Kepp wrote:
Hi,
I just had to do this myself, and I figured some of you may be in the same position, so I'll share it (for Linux / UNIX):
We had a remote server vacuuming up our data, probably seeding a cache and using our box as a free rendering service. Which is fine, but this one was very aggressive.
We really need to start adding some limitations on the WMS requests
that can be made, and on how many of them a single client can make
in a certain amount of time; the concerns are both self-preservation
(think of someone asking for a huge WMS image) and quality of
service.
The first is relatively easy, the second one is trickier. I'm
wondering if there are already existing ways to handle QoS
concerns. For example, is the Linux kernel's built-in firewall
able to handle QoS in a WMS-meaningful way?
IPTables has pretty good support for connection tracking, including rate (connections per unit of time) and bursts (so a client can make X requests before rate limiting kicks in). This obviously introduces some overhead, but it happens in the kernel, so it shouldn't be too expensive. A minor catch is that you have to disable keepalive for the HTTP connections if you want this to work: with keepalive, many requests share a single TCP connection, so counting connections undercounts the actual requests.

I use fwbuilder to make most of my firewall scripts; it's a nice GUI for people that configure many servers, and it has support for these limits under "Options" for each rule.
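As a rough sketch of the kind of rules I mean (the rate, burst and port are just illustrative values, and a real script would wrap these in a complete policy):

  # Accept new connections to 8080 as long as each source IP stays under
  # 10 new connections per second, allowing an initial burst of 20
  iptables -A INPUT -p tcp --dport 8080 -m state --state NEW \
    -m hashlimit --hashlimit 10/second --hashlimit-burst 20 \
    --hashlimit-mode srcip --hashlimit-name wms -j ACCEPT

  # Anything over the per-IP limit is dropped
  iptables -A INPUT -p tcp --dport 8080 -m state --state NEW -j DROP

With keepalive disabled, each HTTP request costs a new connection, so the per-IP connection rate roughly tracks the request rate.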
If you want to do fancier stuff, like distinguishing between different HTTP requests, you need to use a proxy (maybe ACEGI has support, I haven't looked).
-Arne