The original post: /r/nginx by /u/Trbutler13 on 2024-07-13 00:04:37.
I have NGINX configured on two LXC containers (one vanilla Debian, one AlmaLinux), trying to meet or exceed the performance of my older server, which runs a cPanel-managed installation of NGINX as a reverse proxy. However, my vanilla installations, whether running NGINX as a direct web server or as a caching reverse proxy in front of Apache, clock in 54% slower than the older cPanel/WHM server's NGINX implementation. This is despite the new server being as good as or better than the one it is replacing in every way, hardware-wise.
Using ab -n 10000 -c 100 -k -H "Accept-Encoding: gzip, deflate" -H "User-Agent: BenchmarkTool" https://———————.com/ to benchmark my configuration, requesting the most basic of pages on an AlmaLinux or Debian container takes 75ms; on the older cPanel/AlmaLinux install it takes just 46ms. In both cases, all but 1-2ms of that is “processing” according to ab.
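To break down where a single request spends its time (DNS lookup, TCP connect, TLS handshake, time to first byte), curl's built-in timing variables can supplement the ab numbers; a quick sketch, with a placeholder hostname standing in for the redacted domain:

# Single-request timing breakdown; example.com stands in for the real (redacted) host.
curl -so /dev/null \
  -w 'dns=%{time_namelookup}s connect=%{time_connect}s tls=%{time_appconnect}s ttfb=%{time_starttransfer}s total=%{time_total}s\n' \
  -H 'Accept-Encoding: gzip, deflate' \
  https://example.com/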
Thinking maybe something was amiss with the machine or LXC, I tried installing NGINX directly on the containers' host machine (Debian/Proxmox) to see if that would show the containers to be the source of the substantial overhead. I also installed NGINX on a separate VPS I have from a cloud provider. In both cases, I still hit the same approximate performance barrier (~ 70ms processing time for a tiny webpage), with the containers maybe adding 3ms or so at worst. None of the attempts comes close to the cPanel server's ~ 40ms processing time.
I also have a more convoluted configuration with every optimization directive I could throw at it, attempting to distill the inscrutable layers of settings on the cPanel server and find the “magic” one that provides the better performance, but the result comes out the same as with this simpler configuration:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

events {
    worker_connections 1024; # Also tried 2048
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 4096;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    server {
        server_name ---------------.com _;
        root /usr/share/nginx/html;

        error_page 404 /404.html;
        location = /404.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }

        listen [::]:443 ssl http2 ipv6only=on; # managed by Certbot
        listen 443 ssl http2; # managed by Certbot
        ssl_certificate /etc/letsencrypt/live/t-------------.com/fullchain.pem; # managed by Certbot
        ssl_certificate_key /etc/letsencrypt/live/-------------.com/privkey.pem; # managed by Certbot
        include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
    }
}
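For comparison purposes, the full effective configuration (with every include expanded, including the Certbot options file) can be dumped on each machine and diffed against the cPanel server's layered config; a sketch, with illustrative output paths:

# Dump the fully resolved configuration on each server, then diff the two.
nginx -T > /tmp/new-server.conf   # run on the new Debian/AlmaLinux container
nginx -T > /tmp/old-server.conf   # run on the old cPanel/WHM server
diff -u /tmp/old-server.conf /tmp/new-server.conf | less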
I’ve run Geekbench on both servers, and the newer one’s containers perform faster than the old server, just as I’d expect. I’ve also run various disk I/O tests and found the two servers essentially indistinguishable on that count; both run on reasonably fast SSDs of comparable speed.
When ab is running a 10,000-request test, the container never shows more than 5% CPU usage.
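A per-process view during the run, rather than the container-wide figure, can confirm the workers really are nearly idle; a sketch using pidstat (from the sysstat package, assuming it is installed):

# Report CPU usage of the nginx master and worker processes once per second.
pidstat -u -p "$(pgrep -d, nginx)" 1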
Is there anything obvious I’m missing that might help get the new server’s NGINX down to the ~40ms processing range on a simple page, given that I know that’s a realistic goal based on the other server’s performance?
(To be clear, since I know cPanel is frowned upon on here, I am not asking about how to configure cPanel; the new server is not running a control panel.)
Update: I tried using wrk -t12 -c400 -d30s https://-----.com, and with wrk the new server significantly outperforms the old server. The results with NGINX running as a reverse proxy on both are below. So, perhaps there's something with ab rather than my server?
root@juniper:/etc/nginx/sites-enabled# wrk -t12 -c400 -d30s https://---newserver----.com/testing.html
Running 30s test @ https://---newserver----.com/testing.html
  12 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     4.11ms    6.34ms 231.47ms   87.84%
    Req/Sec    11.04k     4.50k   26.31k    67.78%
  3930823 requests in 30.07s, 1.11GB read
Requests/sec: 130701.72
Transfer/sec:     37.64MB

root@juniper:/etc/nginx/sites-enabled# wrk -t12 -c400 -d30s https://---oldserver---.com/testing.html
Running 30s test @ https://---oldserver---.com/testing.html
  12 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    70.01ms   34.05ms 358.01ms   74.22%
    Req/Sec    477.03    174.44     1.03k    68.58%
  169995 requests in 30.09s, 33.88MB read
Requests/sec: 5649.18
Transfer/sec: 1.13MB
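Since ab drives the whole test from a single thread, one way to check whether the client itself is the ceiling is to split the same load across several ab processes and see whether the per-request times drop; a sketch, again with a placeholder hostname:

# Four ab processes in parallel, each sending a quarter of the original load;
# if per-request latency falls, the single-threaded client was the bottleneck.
for i in 1 2 3 4; do
  ab -n 2500 -c 25 -k -H "Accept-Encoding: gzip, deflate" \
     -H "User-Agent: BenchmarkTool" https://example.com/ > "ab-$i.log" &
done
wait
grep "Time per request" ab-*.log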