// ========================================================================
// Copyright (c) 1995-2017 Mort Bay Consulting Pty. Ltd.
// ========================================================================
// All rights reserved. This program and the accompanying materials
// are made available under the terms of the Eclipse Public License v1.0
// and Apache License v2.0 which accompanies this distribution.
//
// The Eclipse Public License is available at
// http://www.eclipse.org/legal/epl-v10.html
//
// The Apache License v2.0 is available at
// http://www.opensource.org/licenses/apache2.0.php
//
// You may elect to redistribute this code under either of these licenses.
// ========================================================================
[[high-load]]
=== High Load
Configuring Jetty for high load, whether for load testing or for production, requires that the operating system, the JVM, Jetty, the application, the network and the load generation all be tuned.
==== Load Generation for Load Testing
Machines handling load generation must have their OS, JVM, etc., tuned just as much as the server machines.
The load generation should not run over the loopback interface on the server machine, as the loopback has unrealistic performance and latency, as well as different packet sizes and transport characteristics.
The load generator should generate a realistic load.
Avoid the following pitfalls:
* A common mistake is to open relatively few connections, each kept extremely busy sending as many requests as possible.
This causes the measured throughput to be limited by request latency (see http://blogs.webtide.com/gregw/entry/lies_damned_lies_and_benchmarks[Lies, Damned Lies and Benchmarks] for an analysis of such an issue).
* Another common mistake is to open a new TCP/IP connection for every request, creating many, many short-lived connections.
This often results in accept queues filling and limitations due to file descriptor and/or ephemeral port starvation.
* A load generator should model the traffic profile from the normal clients of the server.
For browsers, this is often between two and six connections that are mostly idle and that are used in sporadic bursts with read times in between.
The connections are typically long-held HTTP/1.1 connections.
* Load generators should be written asynchronously, so that a limited number of threads does not restrict the maximum number of users that can be simulated.
If the generator is not asynchronous, a thread pool of 2000 may only be able to simulate 500 or fewer users.
The Jetty `HttpClient` is an ideal choice for building a load generator, as it is asynchronous and can simulate many thousands of connections (see the CometD Load Tester for a good example of a realistic load generator); a minimal sketch follows this list.
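Below is a minimal sketch of such an asynchronous load generator built on the Jetty `HttpClient`; the URL, request count, and connection limit are illustrative placeholders, not recommendations:
[source, java]
----
import java.util.concurrent.CountDownLatch;
import org.eclipse.jetty.client.HttpClient;

public class SimpleLoadGenerator
{
    public static void main(String[] args) throws Exception
    {
        int totalRequests = 10_000; // illustrative request count

        HttpClient client = new HttpClient();
        client.setMaxConnectionsPerDestination(6); // model a browser's 2-6 connections
        client.setMaxRequestsQueuedPerDestination(totalRequests); // let all requests queue
        client.start();

        CountDownLatch latch = new CountDownLatch(totalRequests);
        for (int i = 0; i < totalRequests; i++)
        {
            // send(listener) is asynchronous: no thread blocks waiting for
            // the response, so a few threads can drive thousands of users.
            client.newRequest("http://localhost:8080/") // illustrative URL
                .send(result -> latch.countDown());     // record status/latency here
        }

        latch.await();
        client.stop();
    }
}
----
A real generator would pace its submissions and record per-request latencies; firing all requests at once, as above, measures queueing inside the client as well as the server.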
==== Operating System Tuning
Both the server machine and any load generating machines need to be tuned to support many TCP/IP connections and high throughput.
===== Linux
Linux does a reasonable job of self-configuring TCP/IP, but there are a few limits and defaults that you should increase.
You can configure most of these in `/etc/security/limits.conf` or via `sysctl`.
====== TCP Buffer Sizes
You should increase the maximum TCP buffer sizes to at least 16MB for 10G network paths and tune the auto-tuning thresholds accordingly (keeping buffer bloat in mind).
[source, screen, subs="{sub-order}"]
....
$ sysctl -w net.core.rmem_max=16777216
$ sysctl -w net.core.wmem_max=16777216
$ sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
$ sysctl -w net.ipv4.tcp_wmem="4096 16384 16777216"
....
====== Queue Sizes
`net.core.somaxconn` controls the size of the connection listening queue.
The default value is 128.
If you are running a high-volume server and connections are getting refused at a TCP level, you need to increase this value.
This setting requires some finesse to get correct: if you set it too high, the kernel queues more accepted connections than the server can service and many remain pending, causing resource problems; if you set it too low, connections are refused.
[source, screen, subs="{sub-order}"]
....
$ sysctl -w net.core.somaxconn=4096
....
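Note that the kernel uses the smaller of `somaxconn` and the backlog the application requests when it listens, so Jetty's own accept queue size should be raised to match. A minimal embedded-Jetty sketch, with illustrative port and queue values:
[source, java]
----
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;

public class AcceptQueueConfig
{
    public static void main(String[] args) throws Exception
    {
        Server server = new Server();

        ServerConnector connector = new ServerConnector(server);
        connector.setPort(8080); // illustrative port
        // The listen backlog Jetty requests from the OS; the effective
        // backlog is the smaller of this value and net.core.somaxconn.
        connector.setAcceptQueueSize(4096);
        server.addConnector(connector);

        server.start();
        server.join();
    }
}
----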
The `net.core.netdev_max_backlog` parameter controls the size of the incoming packet queue for upper-layer (Java) processing.
The default (2048) may be increased and other related parameters adjusted with:
[source, screen, subs="{sub-order}"]
....
$ sysctl -w net.core.netdev_max_backlog=16384
$ sysctl -w net.ipv4.tcp_max_syn_backlog=8192
$ sysctl -w net.ipv4.tcp_syncookies=1
....
====== Ports
If many outgoing connections are made (for example, on load generators), the operating system might run low on ephemeral ports.
Thus it is best to increase the port range and to allow reuse of sockets in `TIME_WAIT` via `tcp_tw_reuse` (avoid the older `tcp_tw_recycle`, which breaks clients behind NAT and was removed in Linux 4.12):
[source, screen, subs="{sub-order}"]
....
$ sysctl -w net.ipv4.ip_local_port_range="1024 65535"
$ sysctl -w net.ipv4.tcp_tw_reuse=1
....
====== File Descriptors
Busy servers and load generators may run out of file descriptors as the system defaults are normally low.
These can be increased for a specific user in `/etc/security/limits.conf`:
....
theusername hard nofile 40000
theusername soft nofile 40000
....
====== Congestion Control
Linux supports pluggable congestion control algorithms.
To get a list of congestion control algorithms that are available in your kernel run:
[source, screen, subs="{sub-order}"]
....
$ sysctl net.ipv4.tcp_available_congestion_control
....
If cubic and/or htcp are not listed, you need to research the control algorithms for your kernel.
You can try setting the control to cubic with:
[source, screen, subs="{sub-order}"]
....
$ sysctl -w net.ipv4.tcp_congestion_control=cubic
....
===== Mac OS
Tips welcome.
===== Windows
Tips welcome.
==== Network Tuning
Intermediaries such as nginx may default to non-persistent HTTP/1.0 connections when proxying.
Make sure that intermediaries are configured to use persistent HTTP/1.1 connections.
==== JVM Tuning
* Tune the link:#tuning-examples[Garbage Collection]
* Allocate sufficient memory
* Use the `-server` option
==== Jetty Tuning
//====== Connectors
===== Acceptors
The standard rule of thumb for the number of acceptors to configure is one per CPU on a given machine.
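For embedded Jetty, the acceptor count can be set directly on the `ServerConnector`; a minimal sketch (the port is an illustrative placeholder):
[source, java]
----
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;

public class AcceptorConfig
{
    public static void main(String[] args) throws Exception
    {
        Server server = new Server();

        // One acceptor thread per CPU, per the rule of thumb above;
        // passing -1 for selectors lets Jetty choose its own default.
        int acceptors = Runtime.getRuntime().availableProcessors();
        ServerConnector connector = new ServerConnector(server, acceptors, -1);
        connector.setPort(8080); // illustrative port
        server.addConnector(connector);

        server.start();
        server.join();
    }
}
----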
===== Low Resource Limits
The low resource limits must not be configured for fewer connections than the server is expected to handle.
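In embedded Jetty 9 this is typically configured with the `LowResourceMonitor` bean; a minimal sketch, with illustrative values:
[source, java]
----
import org.eclipse.jetty.server.LowResourceMonitor;
import org.eclipse.jetty.server.Server;

public class LowResourcesConfig
{
    public static void main(String[] args) throws Exception
    {
        Server server = new Server();

        LowResourceMonitor lowResources = new LowResourceMonitor(server);
        lowResources.setPeriod(1000);                  // check for low resources every second
        lowResources.setLowResourcesIdleTimeout(1000); // aggressively time out idle connections when low
        // Size this above the expected connection count; note that this
        // setter is deprecated in later Jetty 9.4 releases.
        lowResources.setMaxConnections(20_000);
        server.addBean(lowResources);

        server.start();
        server.join();
    }
}
----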
===== Thread Pool
Configure the thread pool with the goal of bounding maximum memory usage while still serving the expected concurrency.
Typically the maximum pool size is more than 50 but fewer than 500 threads.
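For embedded Jetty, the limits can be set on the `QueuedThreadPool` passed to the `Server`; a minimal sketch with illustrative sizes:
[source, java]
----
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;
import org.eclipse.jetty.util.thread.QueuedThreadPool;

public class ThreadPoolConfig
{
    public static void main(String[] args) throws Exception
    {
        // Bounding the pool caps the memory used by thread stacks; the
        // sizes below are illustrative and fall in the typical 50-500 range.
        QueuedThreadPool threadPool = new QueuedThreadPool(200, 50);

        Server server = new Server(threadPool);
        ServerConnector connector = new ServerConnector(server);
        connector.setPort(8080); // illustrative port
        server.addConnector(connector);

        server.start();
        server.join();
    }
}
----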