nginx

51
 
 
The original post: /r/nginx by /u/Aggravating-Many-323 on 2024-09-10 14:50:01.

[ Removed by Reddit on account of violating the content policy. ]

52
Nginx Unit (zerobytes.monster)
 
 
The original post: /r/nginx by /u/Hungry-Profile3779 on 2024-09-07 20:51:44.

I learned about Nginx Unit today. It looks like it's a more optimized version of Nginx. If I need a server for a PHP application that I built from scratch, should I always use Nginx Unit for its optimal performance? Is there any benefit to using traditional Nginx? It's confusing, because most tutorials out there teach me to use a traditional Nginx server for a PHP site, but in the benchmarks Nginx Unit performs much better.
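
For reference, the "traditional Nginx" those tutorials describe is nginx in front of a separate PHP-FPM process speaking FastCGI, whereas Unit embeds the PHP runtime in its own application server. A minimal sketch of the traditional setup, assuming PHP-FPM listens on a Unix socket (the domain, document root, and socket path are placeholders and vary by distribution):

server {
    listen 80;
    server_name example.com;          # placeholder

    root /var/www/app/public;         # placeholder document root
    index index.php;

    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php8.2-fpm.sock;   # placeholder socket path
    }
}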

53
 
 
The original post: /r/nginx by /u/ugurolsun on 2024-09-05 08:27:45.

Hello guys, I have a question.

I will explain my structure:

I have a proxy nginx server; its IP is 192.168.1.10.

I have 2 different websites, abc.example.com and def.example.com; their respective IPs are 192.168.1.11 and 192.168.1.12.

I created the proxy nginx server as the main entry point and pointed the DNS names of these 2 sites at 192.168.1.10, and it works as intended: I can reach both of them.

My question is: when I want to FTP or SSH to one of these servers (the abc and def servers) via their DNS names, that traffic also goes to the proxy server. I know that I can use their IP addresses for SSH or FTP connections, but is there a way to set something like this up?

That is, when I type abc.example.com in a browser it goes first to the proxy (192.168.1.10) and then reaches the main server (192.168.1.11), but when I SSH or PuTTY to abc.example.com it should reach the main server (192.168.1.11) directly.
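
For context, a minimal sketch of the HTTP side of the setup described above is shown below (names and IPs taken from the post). The point to note is that the proxy only terminates ports 80/443; SSH or FTP to the same DNS name still arrives at 192.168.1.10, because DNS cannot route by port. The usual workaround is either a second hostname (for example abc-ssh.example.com, hypothetical) that resolves directly to 192.168.1.11, or a port-forward/stream rule on the proxy host.

# On the proxy (192.168.1.10) - sketch of the described setup
server {
    listen 80;
    server_name abc.example.com;

    location / {
        proxy_pass http://192.168.1.11;
        proxy_set_header Host $host;
    }
}

server {
    listen 80;
    server_name def.example.com;

    location / {
        proxy_pass http://192.168.1.12;
        proxy_set_header Host $host;
    }
}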

Thank you for your answers

54
 
 
The original post: /r/nginx by /u/TryinMahBest2Success on 2024-09-06 20:46:15.

So I'm serving a React application on an nginx server under the /game path.

Here's my location block for it.

This did not work: my React application correctly served index.html but then failed to find the CSS and JS files, which should have been served by this location block.

location /game/ {
    root /var/www/html/build;
    try_files $uri $uri/ /index.html;
}

So I tried this new solution:

location /game/static/js {
    alias /var/www/html/build/static/js;
    try_files $uri $uri/ /index.html;
}
location /game/static/css {
    alias /var/www/html/build/static/css;
    try_files $uri $uri/ /index.html;
}

This worked, but why? I have to assume $uri is at fault here. As you can see, I had to write the entire file path in alias, which is supposed to be $uri's own job, and that clearly didn't work.

Anyone have any ideas what happened? Thanks.
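
What likely made the difference is root versus alias rather than $uri itself: with root, nginx appends the full request URI (including the /game/ prefix) to the root path, so /game/static/js/app.js is looked up at /var/www/html/build/game/static/js/app.js, whereas alias substitutes the matched location prefix with the alias path. A sketch of a single alias-based block covering the whole app (directory paths taken from the post, the file name is illustrative; try_files combined with alias has had edge cases in some nginx versions, so treat this as illustrative rather than a verified drop-in):

location /game/ {
    alias /var/www/html/build/;
    # /game/static/js/app.js now maps to /var/www/html/build/static/js/app.js
    try_files $uri $uri/ /game/index.html;
}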

55
 
 
The original post: /r/nginx by /u/AleixoLucas on 2024-09-06 14:28:31.

https://preview.redd.it/648pzrrq87nd1.png?width=1919&format=png&auto=webp&s=bc55273dd1c732b587c521e1aee6d06d6326591c

Hello everyone, could you help me with this? I'm trying to block manual connections / raw HTTP requests to my nginx. I'm running a test like the one in the image, but it still returns 400 and I wanted it to be 444. Do you know any other way to block this type of connection?

My docker compose:

name: nginx-httpe2ban
services:
  nginx:
    container_name: nginx-test
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
    image: nginx:latest
    ports:
      - 8080:80
    environment:
      - NGINX_PORT=80

My nginx.conf

server {
    listen 80;
    server_name _;

    if ($host = "") {
        return 444;
    }

    location /401 {
        return 401;
    }
}

Raw command

echo -ne "GET / HTTP/1.1\r\n\r\n" | nc 127.0.0.1 8080
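
One note on the test itself: the raw request above is HTTP/1.1 without a Host header, and nginx rejects that with its built-in 400 at the request-parsing stage, before the server block's if ($host = "") ever runs, which is why a 400 still comes back. The $host check does apply to requests that parse successfully, for example HTTP/1.0 with no Host header. For unwanted but well-formed requests, a catch-all default server is the more common pattern than an if; a sketch:

server {
    listen 80 default_server;
    server_name _;
    # Close the connection without a response for any Host that no other
    # server block claims.
    return 444;
}

Silently dropping the malformed 400 cases would have to happen in front of nginx (for example with a firewall rule), which is outside the scope of this sketch.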

56
 
 
The original post: /r/nginx by /u/Marco_eklm34 on 2024-09-06 12:59:54.

Hi all,

I'm following a tutorial to configure DuckDNS and NGINX to use Home Assistant over the Internet, but when I set up NGINX it asks me to enter "Real IP from (enable PROXY control)". I don't know what I have to enter.

Can someone help me?

Thanks

https://preview.redd.it/2fuz7kqns6nd1.png?width=2032&format=png&auto=webp&s=c5c55dcc6d0fd9b733fd5998f8e45eba6837e97d

57
 
 
The original post: /r/nginx by /u/ctrtanc on 2024-09-06 03:36:13.

I have a server that I've written to listen on port 8500 for websockets. I have a local DNS lookup through my Pi-hole (not on the same Raspberry Pi) that resolves rpi4b.mc to the local IP address of the Raspberry Pi. This works fine when I run nslookup on that hostname. I have Minecraft running on my PC, and I'm using the command /wsserver rpi4b.mc/ws to attempt to connect to the Raspberry Pi server's websocket.

If I run /wsserver rpi.local:8500 it connects without issue and everything is good. If I use yarn dlx wscat --connect rpi4b.mc/ws from my computer, that connects and everything is good, so both the reverse proxy and the dns resolution seem to be working fine. However, when I run /wsserver rpi4b.mc/ws it fails to connect and throws an error on the server. I cannot for the life of me figure out why it's acting this way. It seems that the reverse proxy is working for some requests and not for others, even when they come from the same machine. Any help/insight is appreciated. Thanks!

The error I get on the server is:

RangeError: Invalid WebSocket frame: invalid status code 59907
    at Receiver.controlMessage (/<filepath>/.yarn/__virtual__/ws-virtual-ac79615cae/3/.yarn/berry/cache/ws-npm-8.18.0-56f68bc4d6-10c0.zip/node_modules/ws/lib/receiver.js:626:30)
    at Receiver.getData (/<filepath>/.yarn/__virtual__/ws-virtual-ac79615cae/3/.yarn/berry/cache/ws-npm-8.18.0-56f68bc4d6-10c0.zip/node_modules/ws/lib/receiver.js:477:12)
    at Receiver.startLoop (/<filepath>/.yarn/__virtual__/ws-virtual-ac79615cae/3/.yarn/berry/cache/ws-npm-8.18.0-56f68bc4d6-10c0.zip/node_modules/ws/lib/receiver.js:167:16)
    at Receiver._write (/<filepath>/.yarn/__virtual__/ws-virtual-ac79615cae/3/.yarn/berry/cache/ws-npm-8.18.0-56f68bc4d6-10c0.zip/node_modules/ws/lib/receiver.js:94:10)
    at writeOrBuffer (node:internal/streams/writable:570:12)
    at _write (node:internal/streams/writable:499:10)
    at Writable.write (node:internal/streams/writable:508:10)
    at Socket.socketOnData (/<filepath>/.yarn/__virtual__/ws-virtual-ac79615cae/3/.yarn/berry/cache/ws-npm-8.18.0-56f68bc4d6-10c0.zip/node_modules/ws/lib/websocket.js:1355:35)
    at Socket.emit (node:events:519:28)
    at addChunk (node:internal/streams/readable:559:12) {
  code: 'WS_ERR_INVALID_CLOSE_CODE',
  [Symbol(status-code)]: 1002
}

Nginx debug logs are:

2024/09/05 21:00:25 [debug] 33556#33556: accept on 0.0.0.0:80, ready: 0
2024/09/05 21:00:25 [debug] 33556#33556: posix_memalign: 000000557F572EB0:512 @16
2024/09/05 21:00:25 [debug] 33556#33556: *63 accept: <minecraftip>:<port> fd:3
2024/09/05 21:00:25 [debug] 33556#33556: *63 event timer add: 3: 60000:451500109
2024/09/05 21:00:25 [debug] 33556#33556: *63 reusable connection: 1
2024/09/05 21:00:25 [debug] 33556#33556: *63 epoll add event: fd:3 op:1 ev:80002001
2024/09/05 21:00:25 [debug] 33556#33556: epoll del event: fd:5 op:2 ev:00000000
2024/09/05 21:00:25 [debug] 33556#33556: epoll add event: fd:5 op:1 ev:10000001
2024/09/05 21:00:25 [debug] 33556#33556: *63 http wait request handler
2024/09/05 21:00:25 [debug] 33556#33556: *63 malloc: 000000557F575700:1024
2024/09/05 21:00:25 [debug] 33556#33556: *63 recv: eof:0, avail:-1
2024/09/05 21:00:25 [debug] 33556#33556: *63 recv: fd:3 149 of 1024
2024/09/05 21:00:25 [debug] 33556#33556: *63 reusable connection: 0
2024/09/05 21:00:25 [debug] 33556#33556: *63 posix_memalign: 000000557F589710:4096 @16
2024/09/05 21:00:25 [debug] 33556#33556: *63 http process request line
2024/09/05 21:00:25 [debug] 33556#33556: *63 http request line: "GET /ws HTTP/1.1"
2024/09/05 21:00:25 [debug] 33556#33556: *63 http uri: "/ws"
2024/09/05 21:00:25 [debug] 33556#33556: *63 http args: ""
2024/09/05 21:00:25 [debug] 33556#33556: *63 http exten: ""
2024/09/05 21:00:25 [debug] 33556#33556: *63 posix_memalign: 000000557F56F9F0:4096 @16
2024/09/05 21:00:25 [debug] 33556#33556: *63 http process request header line
2024/09/05 21:00:25 [debug] 33556#33556: *63 http header: "Upgrade: websocket"
2024/09/05 21:00:25 [debug] 33556#33556: *63 http header: "Connection: Upgrade"

This is the basic server setup:

import { WebSocketServer } from 'ws';

const PORT = process.env.WS_SERVER_PORT || 8500;
const wss = new WebSocketServer({ port: PORT });

wss.on("listening", () => console.log(`Listening [${PORT}]`));

wss.on("error", console.error);
wss.on("wsClientError", console.error);

wss.on("open", () => {
  wss.send("WELCOME ONE AND ALL!!");
});

wss.on("connection", (socket) => {
  console.log("user connected");

  socket.on("error", console.error);
  socket.on("message", data => {
    try {
      // parsing the data and stuff
    } catch (error) {
      console.error(error);
    }
  });
});

I have nginx set up with this conf file:

map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

upstream mc_wss {
    server 127.0.0.1:8500;
}

server {
    listen 80;
    listen 443;

    server_name rpi4b.mc;

    access_log /var/log/nginx/rpi4b.mc.access.log;
    error_log /var/log/nginx/rpi4b.mc.error.log;

    location /ws {
        proxy_pass http://mc_wss;

        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        #proxy_set_header Host $host;

        proxy_cache_bypass $http_upgrade;
        proxy_read_timeout 3600s;
    }
}
58
 
 
The original post: /r/nginx by /u/HolidayCartoonist323 on 2024-09-05 08:31:17.

I'm facing an issue with file uploads on my Node.js application hosted behind an Nginx server. The setup involves using the Express-Formidable package as middleware for handling file uploads, which are then sent to an AWS S3 bucket.

The problem is that the file upload request never completes—my API request continues processing until it hits the server timeout, and the file never reaches the S3 bucket.

When I checked the Nginx error logs, I found the following entry:

Nginx Error Log:

2024/09/04 18:32:44 [error] 63421#63421: *9345 upstream prematurely closed connection while reading response header from upstream, client: <my_ip>, server: <backend_api>, request: "POST /api/v1/video-project HTTP/2.0", upstream: "http://127.0.0.1:4000/api/v1/video-project", host: "<backend_api>", referrer: "<backend_api>"

Here’s my Nginx config for the server (relevant parts included):

server {
    listen 443 ssl http2;

    client_max_body_size 600M;

    # Proxy settings for the main API
    location / {
        proxy_pass http://localhost:4000;

        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        proxy_send_timeout 7200s;
        proxy_read_timeout 7200s;

        proxy_buffer_size 64k;
        proxy_buffers 16 32k;
        proxy_busy_buffers_size 64k;

        proxy_request_buffering off;
        proxy_buffering off;

        proxy_connect_timeout 300;
    }
}

What I've Tried:

  • Checked the Nginx error logs but couldn’t find anything beyond the log above.
  • Adjusted the client_max_body_size and proxy_timeout settings to handle larger files.
  • Verified that the API works fine for smaller requests, but larger file uploads keep stalling.

Questions:

  • Has anyone encountered similar issues with Nginx prematurely closing upstream connections during file uploads? What could be the root cause of this?
  • Could this be a configuration issue with Nginx or something related to the Node.js Express-Formidable package or AWS S3 SDK?
  • Any recommendations on how to debug or resolve this issue? Could this be related to buffer settings or timeout misconfigurations?

Any insights or suggestions would be highly appreciated!

59
 
 
The original post: /r/nginx by /u/timwelchnz-ricoh on 2024-09-05 07:02:51.

Referring to my post at Enabling TLS 1.0 in IE Mode on Edge in Windows 11: I've set up nginx on a Debian VM but seem to be fighting the requirement for a client certificate.

I'll fully admit that I know enough to be dangerous and how to read docs but I'm unable to find anything meaningful in the docs that assists me in getting past the errors I keep getting.

2024/09/05 18:50:27 [crit] 259824#259824: *344 SSL_do_handshake() failed (SSL: error:0A0000BF:SSL routines::no protocols available) while SSL handshaking to upstream, client: 10.xxx.xxx.xxx, server: nginx.local, request: "GET /application/Login.htm HTTP/1.1", upstream: "https://xxx.xxx.xxx.xxx:444/application/Login.htm", host: "nginx.local"

I've tested OpenSSL with openssl ciphers -v 'DES-CBC3-SHA' and it returns what I would expect.

So I'm unsure whether this error is saying that DES-CBC3-SHA is not available to nginx, or whether I'm having issues with the client certificate that it expects.

Currently I have the following config...

server {
    listen 80;
    server_name nginx.local;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name nginx.local;

    ssl_certificate /etc/ssl/certs/nginx-selfsigned.crt;
    ssl_certificate_key /etc/ssl/private/nginx-selfsigned.key;

    ssl_protocols TLSv1.2 TLSv1.3;  # Enable TLS 1.0
    ssl_ciphers HIGH:!aNULL:!MD5; # Secure client connections with modern protocols

    location / {
        proxy_pass https://IIS6withTLS1.nz:444; # Health app on IIS6 asking for TLS1.0 and DES-CBC3-SHA
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Set weak cipher and TLS for the server
        proxy_ssl_protocols TLSv1;  # Match upstream server's protocols
        proxy_ssl_ciphers DES-CBC3-SHA;  # Match upstream server's ciphers
        proxy_ssl_trusted_certificate /etc/ssl/certs/ClientCert.crt;  # Path to trusted certificate
        proxy_ssl_verify off; 
    }
}

Any assistance would be greatly appreciated.

Cheers, Tim
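
One thing that may be worth ruling out, hedged because it depends on the exact Debian/OpenSSL build: recent Debian releases ship OpenSSL with a default security level (and a MinProtocol of TLSv1.2 in openssl.cnf) that disables TLS 1.0 and 3DES outright, and that produces exactly this "no protocols available" handshake error regardless of what the upstream offers. Relaxing the restriction just for the upstream connection is sometimes enough; a sketch of what that corner of the location could look like (values are illustrative, not a verified fix, and the system-wide openssl.cnf may also need checking):

        proxy_ssl_protocols TLSv1;
        # @SECLEVEL=0 relaxes OpenSSL's floor that otherwise rejects TLS 1.0 / 3DES.
        proxy_ssl_ciphers 'DES-CBC3-SHA@SECLEVEL=0';

        # If the upstream truly demands a client certificate, it has to be presented:
        # proxy_ssl_certificate     /etc/ssl/certs/ClientCert.crt;     # hypothetical paths
        # proxy_ssl_certificate_key /etc/ssl/private/ClientCert.key;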

60
 
 
The original post: /r/nginx by /u/Tiny-Criticism-86 on 2024-09-04 07:34:52.

Is there a way to block SQL/NoSQL injection attacks using Nginx ingress rules, kind of like how Nginx ingress rules can be used to block XSS? Thanks

61
 
 
The original post: /r/nginx by /u/Powerful-Internal953 on 2024-09-04 06:26:57.

Due to an unusual situation, I need to set up an upstream that is behind a corporate proxy. So far, I am trying this.

My nginx serves abc.example.com

And I want abc.example.com/xx/yy/(.*).js to be served from xyz.example.com/yy/(.*).js. But the problem right now is that xyz.example.com is behind http://corporate-proxy.example.com:8080. How do I get this setup to work? Currently I have something like below.

  upstream corporate-proxy  {
    server corporate-proxy.com:8080;
  }
  location /xx/yy/zz {
    rewrite ^//xx/yy/zz/(.*)$ /zz/$1 break;
    proxy_pass http://corporate-proxy;
    proxy_set_header Host xyz.example.com:443;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  }

However, my requests are reaching xyz.example.com on port 443, but as plain HTTP requests instead of HTTPS requests, and I keep getting "400 The plain HTTP request was sent to HTTPS port".

Is there any way to correct this so that the proxy works with the right port? I tried changing the proxy_pass to https, but that doesn't help.

62
 
 
The original post: /r/nginx by /u/TheRealTrailblaster on 2024-09-02 10:39:43.

Hello, I have already set up my Immich server with nginx and basic auth, and it worked very well. However, I wanted to set up Jellyfin as well, but it seems that for logins, instead of using a cookie like Immich does, it uses the same Authorization header as basic auth. I was wondering if there is a workaround for this, such as making basic auth use cookies instead?

63
 
 
The original post: /r/nginx by /u/cinwald on 2024-09-01 18:12:01.
64
 
 
The original post: /r/nginx by /u/flutter_dart_dev on 2024-09-01 13:47:06.

My goal is to have an nginx server, installed via a Docker container, that auto-renews certificates, so I need to create a Dockerfile alongside the nginx.conf file.

I am not sure if I should make 2 containers (one from the nginx image, the other from the certbot image) and have them communicate via a shared volume, or if I should do it all in 1 container based on the nginx image with the certbot dependency installed, etc.

I am a newbie and, honestly, my goal here is to have a basic nginx server that rate-limits and allows me to use HTTPS.

I tried to figure this out and also asked AI, and I got this:

Note: I feel like there are mistakes in this code. For example, the nginx server listens on port 80 and then tries to redirect to the certbot container, which also listens on port 80? Does that make sense?

If someone can help me correct the nginx.conf file and also enlighten me on how to build the Dockerfile, I would appreciate it a lot.

server {
    listen 80;
    server_name main;

    location /.well-known/acme-challenge {
        # Proxy requests to Certbot container
        proxy_pass http://letsencrypt:80;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
    }

    location / {
        # Force HTTPS redirect
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl;
    server_name main;

    # Use strong ciphers and protocols (adjust based on your needs)
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers 'EECDH+AESGCM: ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:AES256+EECDH:AES256+ECDH:AES128+CBC:RSA+AES128-CBC-SHA';
    ssl_prefer_server_ciphers on;

    # Read certificates from Certbot's location
    ssl_certificate /etc/letsencrypt/live/default/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/default/privkey.pem;

    # HSTS (Strict Transport Security)
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload";

    # Enable HPKP (HTTP Public Key Pinning) - Consider security implications before uncommenting
    # add_header Public-Key-Pins "pin-sha256=\"your_pin_hash\"";

    # X-Frame-Options header (prevents clickjacking)
    add_header X-Frame-Options SAMEORIGIN;

    # X-Content-Type-Options header (prevents MIME sniffing)
    add_header X-Content-Type-Options nosniff;

    # X-XSS-Protection header (prevents XSS attacks)
    add_header X-XSS-Protection "1; mode=block";

    # Content-Security-Policy header (advanced protection - research before use)
    # add_header Content-Security-Policy "..."

    # Rate Limiting using IP address
    limit_req_zone $binary_remote_addr zone=perip:10m rate=5r/s;

    # Enable request limiting
    limit_req zone=perip burst=10 nodelay;

    location / {
        # Proxy requests to your Go server
        proxy_pass http://golangs:8020;

        # Proxy headers for proper routing
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
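
For comparison, many two-container nginx + certbot setups avoid proxying the ACME challenge at all: both containers mount a shared volume, certbot runs in webroot mode and writes the challenge files there, and nginx simply serves them as static files, so only nginx ever listens on port 80. A sketch of that variant (volume path and domain are placeholders):

server {
    listen 80;
    server_name example.com;   # placeholder

    # Shared volume mounted into both the nginx and certbot containers.
    location /.well-known/acme-challenge/ {
        root /var/www/certbot;   # assumed mount point
    }

    location / {
        return 301 https://$host$request_uri;
    }
}

The certbot container would then run something along the lines of certbot certonly --webroot -w /var/www/certbot -d example.com, with the /etc/letsencrypt volume shared back to the nginx container for the ssl_certificate paths.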

65
 
 
The original post: /r/nginx by /u/archman42 on 2024-08-12 23:28:26.

This question has long been asked on Nginx Forum, StackOverflow, and elsewhere. There doesn't seem to be a (satisfactory) solution suggested.

I have a server protected by basic auth. The server itself isn't serving anything fancy; it's a basic static HTML site (actually some documentation produced by Sphinx).

Every time I refresh or visit a different page on the site, the auth popup shows up (only on iPhone and iPad; I haven't tried on macOS). After the first authentication, subsequent ones can be cancelled and the document loads just fine, but it's annoying. I even followed a solution suggesting fixing 40x errors due to a missing favicon, but no luck.

Anyone with any ideas?

66
 
 
The original post: /r/nginx by /u/alohl669 on 2024-08-11 20:46:26.

Hi! I'm installing a Django application with gunicorn.

Their instructions use nginx to serve the application; the problem is they never consider running nginx on a separate server, always on localhost.

I could install nginx on this machine and change my DNS zone, but... I already have an nginx server working as a reverse proxy, precisely to avoid installing another one.

OK, let's look at the problem.

This is their nginx localhost configuration:

server {
    listen [::]:443 ssl ipv6only=off;

    # CHANGE THIS TO YOUR SERVER'S NAME
    server_name netbox.example.com;

    ssl_certificate /etc/ssl/certs/netbox.crt;
    ssl_certificate_key /etc/ssl/private/netbox.key;

    client_max_body_size 25m;

    location /static/ {
        alias /opt/netbox/netbox/static/;
    }

    location / {
        # Remove these lines if using uWSGI instead of Gunicorn
        proxy_pass http://127.0.0.1:8001;
        proxy_set_header X-Forwarded-Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Uncomment these lines if using uWSGI instead of Gunicorn
        # include uwsgi_params;
        # uwsgi_pass  127.0.0.1:8001;
        # uwsgi_param Host $host;
        # uwsgi_param X-Real-IP $remote_addr;
        # uwsgi_param X-Forwarded-For $proxy_add_x_forwarded_for;
        # uwsgi_param X-Forwarded-Proto $http_x_forwarded_proto;

    }
}

server {
    # Redirect HTTP traffic to HTTPS
    listen [::]:80 ipv6only=off;
    server_name _;
    return 301 https://$host$request_uri;
}

And this is mine

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name netbox.example.com;

    ssl_certificate /etc/nginx/custom_certs/fullchain-example.com.crt;
    ssl_certificate_key /etc/nginx/custom_certs/example.com.key;
    ssl_trusted_certificate /etc/nginx/custom_certs/cachain-example.com.crt;
    include snippets/ssl-params.conf;

    client_max_body_size 25m;

    location /static/ {
        alias /opt/netbox/netbox/static/;
    }

    location / {
        proxy_pass http://10.10.10.17:8001;
        proxy_set_header X-Forwarded-Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

server {
    # Redirect HTTP traffic to HTTPS
    listen 80;
    listen [::]:80;

    server_name netbox.example.com;
    return 301 https://$host$request_uri;
}

This is a simple graphical approximation:

https://preview.redd.it/opd02a5bk3id1.png?width=562&format=png&auto=webp&s=fdd8de1f8d4895b673377c7328baec8851707edf

Of course, I know it is nonsense to try serving static files from the filesystem of another server.

How could I resolve this? Any idea?
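
One way to frame the problem: the central reverse proxy can only serve /static/ with alias if those files exist on its own filesystem, so the options are either syncing/copying the static directory to the proxy host or forwarding /static/ to something on the NetBox host that can actually serve files (the gunicorn process behind port 8001 is typically not serving static files in this layout). A sketch of the second option, assuming a small static-only listener is added on the NetBox host on a hypothetical port 8002:

# On the central reverse proxy
location /static/ {
    proxy_pass http://10.10.10.17:8002;   # hypothetical static-file listener
}

# On the NetBox host (10.10.10.17)
server {
    listen 8002;

    location /static/ {
        alias /opt/netbox/netbox/static/;
    }
}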

67
 
 
The original post: /r/nginx by /u/chench0 on 2024-08-11 13:49:29.

I am a beginner when it comes to nginx, and ever since adding a CSP to my self-hosted WordPress website, some of my content has stopped displaying properly. Upon reviewing my browser console, I ended up having to add 'unsafe-inline' to the CSP, but I discovered that this is not safe. Here's my CSP:

    add_header Content-Security-Policy "default-src 'self'; script-src 'self' blob: 'unsafe-inline' https://js.stripe.com https://www.google-analytics.com/analytics.js https://www.gstatic.com https://www.googletagmanager.com/gtag/js https://www.googletagmanager.com 'unsafe-eval'; style-src https://www.gstatic.com https://cdn.jsdelivr.net https://use.fontawesome.com 'self' 'unsafe-inline' https://fonts.googleapis.com; object-src 'none'; base-uri 'self'; font-src 'self' data: https://fonts.gstatic.com https://s0.wp.com https://use.fontawesome.com; frame-src 'self' https: blob:; img-src 'self' data: https://ts.w.org https://www.google-analytics.com https://lh3.googleusercontent.com https://secure.gravatar.com https://ps.w.org; manifest-src 'self'; connect-src 'self' data: https://www.google-analytics.com/ https://analytics.google.com/;  media-src 'self'";

Some research has led me to using nonces instead of unsafe-inline, but I believe I would also need to edit the scripts? The items that rely on the unsafe-inline exception are plugins that I can't edit directly, since I am using WordPress.

What are my options to make this safer?

Some more context: I self host Wordpress on a Ubuntu VM (Apache) that sits behind another Ubuntu VM running Nginx. DNS is handled by Cloudflare.

68
 
 
The original post: /r/nginx by /u/MKBUHD on 2024-08-11 12:30:59.
69
 
 
The original post: /r/nginx by /u/odhiambo0 on 2024-08-11 11:13:49.

Hello everyone,

I am very new to Nginx so bear with me. I have a situation where I cannot load my site because Nginx is looking for files in the wrong directory. 5 days of digging has yielded nothing substantial. The site - mm3-lists.kictanet.or.ke - is public. You can try and access it and see the mess Nginx is causing :(

The problem: I need to serve static files from /opt/mailman/mm/static/. For some very strange reason, Nginx is trying to serve the files from a completely different and non-existent path - /usr/share/nginx/html/static/hyperkitty/ - as shown in the logs below.

For the record, my /etc/nginx/nginx.conf is by all means the default that comes when Nginx is installed.

If anyone knows how I can solve this, please share the clues.


2024/08/11 13:57:09 [error] 565638#565638: \*59 open() "/usr/share/nginx/html/static/CACHE/css/output.9efeb5f3d52b.css" failed (2: No such file or directory), client: 162.158.154.134, server: mm3-lists.kictanet.or.ke, request: "GET /static/CACHE/css/output.9efeb5f3d52b.css HTTP/2.0", host: "mm3-lists.kictanet.or.ke", referrer: "<https://mm3-lists.kictanet.or.ke/accounts/logout/?next=/archives/>"

2024/08/11 13:57:09 [error] 565638#565638: \*60 open() "/usr/share/nginx/html/static/CACHE/css/output.e68c4908b3de.css" failed (2: No such file or directory), client: 162.158.154.47, server: mm3-lists.kictanet.or.ke, request: "GET /static/CACHE/css/output.e68c4908b3de.css HTTP/2.0", host: "mm3-lists.kictanet.or.ke", referrer: "<https://mm3-lists.kictanet.or.ke/accounts/logout/?next=/archives/>"

2024/08/11 13:57:09 [error] 565636#565636: \*63 open() "/usr/share/nginx/html/static/hyperkitty/libs/jquery/jquery-ui-1.13.1.min.js" failed (2: No such file or directory), client: 162.158.155.3, server: mm3-lists.kictanet.or.ke, request: "GET /static/hyperkitty/libs/jquery/jquery-ui-1.13.1.min.js HTTP/2.0", host: "mm3-lists.kictanet.or.ke", referrer: "<https://mm3-lists.kictanet.or.ke/accounts/logout/?next=/archives/>"

2024/08/11 13:57:09 [error] 565638#565638: \*65 open() "/usr/share/nginx/html/static/CACHE/js/output.3aaa7705d68a.js" failed (2: No such file or directory), client: 162.158.63.165, server: mm3-lists.kictanet.or.ke, request: "GET /static/CACHE/js/output.3aaa7705d68a.js HTTP/2.0", host: "mm3-lists.kictanet.or.ke", referrer: "<https://mm3-lists.kictanet.or.ke/accounts/logout/?next=/archives/>"

2024/08/11 13:57:09 [error] 565636#565636: \*64 open() "/usr/share/nginx/html/static/hyperkitty/libs/jquery/jquery-3.6.0.min.js" failed (2: No such file or directory), client: 162.158.158.167, server: mm3-lists.kictanet.or.ke, request: "GET /static/hyperkitty/libs/jquery/jquery-3.6.0.min.js HTTP/2.0", host: "mm3-lists.kictanet.or.ke", referrer: "<https://mm3-lists.kictanet.or.ke/accounts/logout/?next=/archives/>"

2024/08/11 14:01:01 [error] 565634#565634: \*1375 open() "/usr/share/nginx/html/static/hyperkitty/libs/jquery/smoothness/jquery-ui-1.13.1.min.css" failed (2: No such file or directory), client: 172.71.114.74, server: mm3-lists.kictanet.or.ke, request: "GET /static/hyperkitty/libs/jquery/smoothness/jquery-ui-1.13.1.min.css HTTP/2.0", host: "mm3-lists.kictanet.or.ke", referrer: "<https://mm3-lists.kictanet.or.ke/archives/list/[email protected]/thread/Y7JRQWUD2OSJMASP2K6X6TZ5KBPBKVDD/>"

2024/08/11 14:01:01 [error] 565634#565634: \*1374 open() "/usr/share/nginx/html/static/hyperkitty/libs/fonts/font-awesome/css/font-awesome.min.css" failed (2: No such file or directory), client: 188.114.102.175, server: mm3-lists.kictanet.or.ke, request: "GET /static/hyperkitty/libs/fonts/font-awesome/css/font-awesome.min.css HTTP/2.0", host: "mm3-lists.kictanet.or.ke", referrer: "<https://mm3-lists.kictanet.or.ke/archives/list/[email protected]/thread/Y7JRQWUD2OSJMASP2K6X6TZ5KBPBKVDD/>"

2024/08/11 14:01:01 [error] 565634#565634: \*1376 open() "/usr/share/nginx/html/static/CACHE/css/output.44ea6c55e917.css" failed (2: No such file or directory), client: 162.158.129.221, server: mm3-lists.kictanet.or.ke, request: "GET /static/CACHE/css/output.44ea6c55e917.css HTTP/2.0", host: "mm3-lists.kictanet.or.ke", referrer: "<https://mm3-lists.kictanet.or.ke/archives/list/[email protected]/thread/Y7JRQWUD2OSJMASP2K6X6TZ5KBPBKVDD/>"

2024/08/11 14:01:01 [error] 565637#565637: \*1378 open() "/usr/share/nginx/html/static/CACHE/css/output.e68c4908b3de.css" failed (2: No such file or directory), client: 162.158.129.235, server: mm3-lists.kictanet.or.ke, request: "GET /static/CACHE/css/output.e68c4908b3de.css HTTP/2.0", host: "mm3-lists.kictanet.or.ke", referrer: "<https://mm3-lists.kictanet.or.ke/archives/list/[email protected]/thread/Y7JRQWUD2OSJMASP2K6X6TZ5KBPBKVDD/>"

2024/08/11 14:01:01 [error] 565634#565634: \*1377 open() "/usr/share/nginx/html/static/CACHE/css/output.9efeb5f3d52b.css" failed (2: No such file or directory), client: 162.158.130.75, server: mm3-lists.kictanet.or.ke, request: "GET /static/CACHE/css/output.9efeb5f3d52b.css HTTP/2.0", host: "mm3-lists.kictanet.or.ke", referrer: "<https://mm3-lists.kictanet.or.ke/archives/list/[email protected]/thread/Y7JRQWUD2OSJMASP2K6X6TZ5KBPBKVDD/>"

2024/08/11 14:01:01 [error] 565634#565634: \*1381 open() "/usr/share/nginx/html/static/hyperkitty/libs/jquery/jquery-3.6.0.min.js" failed (2: No such file or directory), client: 172.71.114.124, server: mm3-lists.kictanet.or.ke, request: "GET /static/hyperkitty/libs/jquery/jquery-3.6.0.min.js HTTP/2.0", host: "mm3-lists.kictanet.or.ke", referrer: "<https://mm3-lists.kictanet.or.ke/archives/list/[email protected]/thread/Y7JRQWUD2OSJMASP2K6X6TZ5KBPBKVDD/>"

2024/08/11 14:01:02 [error] 565634#565634: \*1382 open() "/usr/share/nginx/html/static/hyperkitty/libs/jquery/jquery-ui-1.13.1.min.js" failed (2: No such file or directory), client: 162.158.129.208, server: mm3-lists.kictanet.or.ke, request: "GET /static/hyperkitty/libs/jquery/jquery-ui-1.13.1.min.js HTTP/2.0", host: "mm3-lists.kictanet.or.ke", referrer: "<https://mm3-lists.kictanet.or.ke/archives/list/[email protected]/thread/Y7JRQWUD2OSJMASP2K6X6TZ5KBPBKVDD/>"

2024/08/11 14:01:02 [error] 565634#565634: \*1383 open() "/usr/share/nginx/html/static/CACHE/js/output.3aaa7705d68a.js" failed (2: No such file or directory), client: 172.71.114.221, server: mm3-lists.kictanet.or.ke, request: "GET /static/CACHE/js/output.3aaa7705d68a.js HTTP/2.0", host: "mm3-lists.kictanet.or.ke", referrer: "<https://mm3-lists.kictanet.or.ke/archives/list/[email protected]/thread/Y7JRQWUD2OSJMASP2K6X6TZ5KBPBKVDD/>"  

Below is my site config:


server {
    if ($host = mm3-lists.kictanet.or.ke) {
        return 301 https://$host$request_uri;
        listen 80;
        server_name mm3-lists.kictanet.or.ke;
        return 301 https://mm3-lists.kictanet.or.ke$request_uri;
        include snippets/letsencrypt.conf;
    }

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    server_name mm3-lists.kictanet.or.ke;

    ssl_certificate /etc/letsencrypt/live/mm3-lists.kictanet.or.ke/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mm3-lists.kictanet.or.ke/privkey.pem;
    ssl_trusted_certificate /etc/letsencrypt/live/mm3-lists.kictanet.or.ke/chain.pem;

    include snippets/ssl.conf;
    include snippets/letsencrypt.conf;

    access_log /var/log/nginx/mm3-lists_access.log;
    error_log /var/log/nginx/mm3-lists_error.log;

    location = /favicon.ico {
        log_not_found off;
        access_log off;
    }

    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

    location /static/ {
        alias /opt/mailman/mm/static/;
    }

    location / {
        proxy_pass http://127.0.0.1:8010;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
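
One detail that may help narrow this down: /usr/share/nginx/html is the document root used by the stock default server block in several packaged nginx configurations, so errors pointing there usually mean the requests were handled by that default server rather than by the vhost above (for example because the site config is not enabled, fails nginx -t, or another server block wins the server_name match). If the vhost is in use, pinning the static location as an explicit prefix match is a cheap way to rule out location-matching surprises; a sketch:

    # inside the 443 server block for mm3-lists.kictanet.or.ke
    location ^~ /static/ {
        alias /opt/mailman/mm/static/;
    }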

70
 
 
The original post: /r/nginx by /u/benfurkank on 2024-08-10 14:56:31.

Hi, I have a machine running CentOS 8, and on it run MariaDB, PHP-FPM, and nginx. I have a website that is blocked all over the world, only allowing certain IP blocks.

Then we created a subdomain for this website, open to all countries, running on another machine. At this point everything is OK. My manager asked me to make all the blocked requests redirect to the subdomain.

How should I do that?

I tried downloading IP block lists and redirecting in the nginx conf, but those IP lists are not up to date, and GeoIP for nginx is not working on CentOS 8. Any ideas?

Thanks very much.
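
For illustration, the allow-list-plus-redirect idea can be expressed with nginx's built-in geo module, which sidesteps stale downloaded lists and the GeoIP module issue as long as the permitted networks are already known (as the post suggests they are for the allow-list); the networks and hostnames below are placeholders:

# http context
geo $blocked {
    default          1;
    203.0.113.0/24   0;   # placeholder: networks allowed to use the main site
    198.51.100.0/24  0;
}

server {
    listen 80;
    server_name www.example.com;          # placeholder: the restricted site

    if ($blocked) {
        return 302 https://sub.example.com$request_uri;   # placeholder subdomain
    }

    # ... rest of the site ...
}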

71
 
 
The original post: /r/nginx by /u/needlag on 2024-08-09 03:03:48.

I am trying to create a rewrite rule for a Laravel project route; the simplest approach (in Apache terms) would be something like:

RewriteEngine On

RewriteRule ^@([^/]+)$ /someurl/$1

However, when trying this on an nginx server, it does not seem to be quite the solution.

Do you know a better approach for this?

I would like to create a nice URL with an @ sign that points to another route, for vanity purposes.
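
For reference, the nginx counterpart of that Apache rule is usually a rewrite (or a regex location) in the server block; note the leading slash, which nginx URI matching requires where a per-directory Apache rule does not. A minimal sketch, keeping the /someurl/$1 target from the post:

server {
    # ... existing Laravel server block ...

    # Map /@something to /someurl/something internally, then let the
    # normal front-controller handling take over.
    rewrite ^/@([^/]+)$ /someurl/$1 last;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }
}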

72
 
 
The original post: /r/nginx by /u/Kaleodis on 2024-08-08 21:12:41.

Hi everyone, I hope someone could help me with a small snippet.

I currently have nginx set up as a reverse proxy for a bunch of services - which sometimes go down. When these are down, nginx displays a generic 502 error page.

I would like to redirect the user to a custom status page (hosted at status.mydomain.tld) if the service is down (and would thus generate a 502). I have found the error_page directive (and the @ named-location mechanism), although I'm unsure how to use it.

I currently have a file for every subdomain (with a server {} block in it for port 443, and 80 redirecting to it) and would like to avoid repetition; if possible I'd like to define the redirect exactly once.

Does anyone have experience doing this?
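
For illustration, the error_page / named-location combination mentioned above can live in one shared snippet that every per-subdomain server block includes, so the redirect is defined exactly once; the snippet path is a placeholder:

# /etc/nginx/snippets/status-redirect.conf   (placeholder path)
error_page 502 = @service_down;

location @service_down {
    return 302 https://status.mydomain.tld;
}

Each subdomain's server {} block then only needs an include snippets/status-redirect.conf; line next to its existing proxy configuration.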

73
 
 
The original post: /r/nginx by /u/DR_Pszicho on 2024-08-08 10:44:41.

Hello and welcome,

I had a rough couple of days setting up the NGINX plugin on OPNsense for the first time, and in the end I was only able to set up NGINX Proxy Manager in a container, as that is really simple.

I would still prefer to use NGINX on OPNsense rather than the containerized version.

Symptoms:

While trying to load the page, I get Error 522, so it cannot reach the server. Port forwarding should be fine, as I only changed the IP from the gateway's IP to the container's to allow traffic through the Proxy Manager.

The certificate is active and working, so I believe the problem is with either the HTTP(S) setup or the upstream server.

I have tried to follow the instructions and tutorials, but couldn't get the page to load.

Please let me know if you need more information to be able to advise me.

74
 
 
The original post: /r/nginx by /u/Zestyclose_Dentist59 on 2024-08-08 05:13:56.

For some reason, my nginx configuration symlinks are being replaced with copies of the config.

For the second time now, I've found that my nginx server configs in /etc/nginx/sites-enabled, which are symlinks to files in /etc/nginx/sites-available, have been replaced with copies of the files. It's never been a specific action I've taken or part of any script I've deployed. Nginx was installed from apt on an Ubuntu Server 22.04 virtual machine on Proxmox. I did self-compile libmodsecurity3 and the connector, but this issue only began recently. I replaced the files with symlinks again a short time ago, and today I noticed that it had happened again.

I can't think of any reason why symlinks would be magically replaced with the real files, and no other symlinks on the machine are being changed. I also found that all of the symlinks got deleted from the directory, but not all of them were replaced with a file. The syslog at the time the files were created only reported an nginx reload twice, 2 seconds apart, but I can't find anything else in the logs that indicates what happened. Nothing has been changed in the files that replaced the symlinks.

Has anyone seen behaviour like this before, or can anyone shed some light on why this might be happening?

75
 
 
The original post: /r/nginx by /u/criptkiller16 on 2024-08-07 22:36:16.

Hi, I don't even know how to explain this in proper English terms, but I'd really appreciate any hint about what's happening.

First of all, I am on macOS (M1).

I have installed PHP and NGINX with Homebrew:

brew install php@7.4
brew install nginx

My nginx conf for my site is as follows:

server {

    charset utf-8;

    # FRONTEND
    server_name         dev-site.com dev-cosmos.ao.utp.pt dev-ep1-site.com dev-ep2-site.com dev-ep3-site.com;
    server_tokens       off;

    access_log          /Users/<user>/Sites/site.com/storage/logs/nginx-access.log;
    error_log           /Users/<user>/Sites/site.com/storage/logs/nginx-error.log;
    root                /Users/<user>/Sites/site.com/public;

    fastcgi_buffers  4 256k;
    fastcgi_buffer_size  128k;

    # ################
    # security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header Referrer-Policy "no-referrer-when-downgrade" always;
    add_header Content-Security-Policy "default-src * data: 'unsafe-eval' 'unsafe-inline'" always;
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
    add_header Access-Control-Allow-Credentials "true";

    # . files
    location ~ /\.(?!well-known) {
        deny all;
    }
    # security headers
    # ################

    # gzip
    # gzip on;
    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_types text/plain text/css text/xml application/json application/javascript application/rss+xml application/atom+xml image/svg+xml;

    large_client_header_buffers 4 32k;
    client_max_body_size 10M;
    client_body_buffer_size 32k;

    index index.php;

    location / {        
       try_files    $uri $uri/ /index.php$is_args$args;
       # try_files    $uri $uri/ /index.php$query_string;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt  { access_log off; log_not_found off; }

    error_page      404 /index.php;

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    location ~ \.php$ {
        try_files               $uri /index.php =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;

        fastcgi_pass            127.0.0.1:9000;
        fastcgi_index           index.php;
        fastcgi_param           SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include                 fastcgi_params;

        fastcgi_read_timeout    1200s;
        fastcgi_send_timeout    1200s;
    }

    listen 80;
}

and my php-fpm configuration is as follow:

; Start a new pool named 'www'.
; the variable $pool can be used in any directive and will be replaced by the
; pool name ('www' here)
[www]

; Per pool prefix
; It only applies on the following directives:
; - 'access.log'
; - 'slowlog'
; - 'listen' (unixsocket)
; - 'chroot'
; - 'chdir'
; - 'php_values'
; - 'php_admin_values'
; When not set, the global prefix (or /opt/homebrew/Cellar/php@7.4/7.4.33_6) applies instead.
; Note: This directive can also be relative to the global prefix.
; Default Value: none
;prefix = /path/to/pools/$pool

; Unix user/group of processes
; Note: The user is mandatory. If the group is not set, the default user's group
;       will be used.
; user = _www ; default jf
; group = _www ; default jf
user = my-user
group = staff

; The address on which to accept FastCGI requests.
; Valid syntaxes are:
;   'ip.add.re.ss:port'    - to listen on a TCP socket to a specific IPv4 address on
;                            a specific port;
;   '[ip:6:addr:ess]:port' - to listen on a TCP socket to a specific IPv6 address on
;                            a specific port;
;   'port'                 - to listen on a TCP socket to all addresses
;                            (IPv6 and IPv4-mapped) on a specific port;
;   '/path/to/unix/socket' - to listen on a unix socket.
; Note: This value is mandatory.
listen = 127.0.0.1:9000

; Set listen(2) backlog.
; Default Value: 511 (-1 on FreeBSD and OpenBSD)
;listen.backlog = 511

; Set permissions for unix socket, if one is used. In Linux, read/write
; permissions must be set in order to allow connections from a web server. Many
; BSD-derived systems allow connections regardless of permissions. The owner
; and group can be specified either by name or by their numeric IDs.
; Default Values: user and group are set as the running user
;                 mode is set to 0660
;listen.owner = _www
;listen.group = _www
;listen.mode = 0660
; When POSIX Access Control Lists are supported you can set them using
; these options, value is a comma separated list of user/group names.
; When set, listen.owner and listen.group are ignored
;listen.acl_users =
;listen.acl_groups =

; List of addresses (IPv4/IPv6) of FastCGI clients which are allowed to connect.
; Equivalent to the FCGI_WEB_SERVER_ADDRS environment variable in the original
; PHP FCGI (5.2.2+). Makes sense only with a tcp listening socket. Each address
; must be separated by a comma. If this value is left blank, connections will be
; accepted from any ip address.
; Default Value: any
;listen.allowed_clients = 127.0.0.1

; Specify the nice(2) priority to apply to the pool processes (only if set)
; The value can vary from -19 (highest priority) to 20 (lower priority)
; Note: - It will only work if the FPM master process is launched as root
;       - The pool processes will inherit the master process priority
;         unless it specified otherwise
; Default Value: no set
; process.priority = -19

; Set the process dumpable flag (PR_SET_DUMPABLE prctl) even if the process user
; or group is differrent than the master process user. It allows to create process
; core dump and ptrace the process for the pool user.
; Default Value: no
; process.dumpable = yes

; Choose how the process manager will control the number of child processes.
; Possible Values:
;   static  - a fixed number (pm.max_children) of child processes;
;   dynamic - the number of child processes are set dynamically based on the
;             following directives. With this process management, there will be
;             always at least 1 children.
;             pm.max_children      - the maximum number of children that can
;                                    be alive at the same time.
;             pm.start_servers     - the number of children created on startup.
;             pm.min_spare_servers - the minimum number of children in 'idle'
;                                    state (waiting to process). If the number
;                                    of 'idle' processes is less than this
;                                    number then some children will be created.
;             pm.max_spare_servers - the maximum number of children in 'idle'
;                                    state (waiting to process). If the number
;                                    of 'idle' processes is greater than this
;                                    number then some children will be killed.
;  ondemand - no children are created at startup. Children will be forked when
;             new requests will connect. The following parameter are used:
;             pm.max_children           - the maximum number of children that
;                                         can be alive at the same time.
;             pm.process_idle_timeout   - The number of seconds after which
;                                         an idle process will be killed.
; Note: This value is mandatory.
pm = dynamic

; The number of child processes to be created when pm is set to 'static' and the
; maximum number of child processes when pm is set to 'dynamic' or 'ondemand'.
; This value sets the limit on the number of simultaneous requests that will be
; served. Equivalent to the ApacheMaxClients directive with mpm_prefork.
; Equivalent to the PHP_FCGI_CHILDREN environment variable in the original PHP
; CGI. The below defaults are based on a server without much resources. Don't
; forget to tweak pm.* to fit your needs.
; Note: Used when pm is set to 'static', 'dynamic' or 'ondemand'
; Note: This value is mandatory.
pm.max_children = 5

; The number of child processes created on startup.
; Note: Used only when pm is set to 'dynamic'
; Default Value: (min_spare_servers + max_spare_servers) / 2
pm.start_servers = 2

; The desired minimum number of idle server processes.
; Note: Used only when pm is set to 'dynamic'
; Note: Mandatory when pm is set to 'dynamic'
pm.min_spare_servers = 1

; The desired maximum number of idle server processes.
; Note: Used only when pm is set to 'dynamic'
; Note: Mandatory when pm is set to 'dynamic'
pm.max_spare_servers = 3

; The number of seconds after which an idle process will be killed.
; Note: Used only when pm is set to 'ondemand'
; Default Value: 10s
;pm.process_idle_timeout = 10s;

; The number of requests each child process should execute before respawning.
; This can be useful to work around memory leaks in 3rd party libraries. For
; endless request processing specify '0'. Equivalent to PHP_FCGI_MAX_REQUESTS.
; Default Value: 0
;pm.max_requests = 500

; The URI to view the FPM status page. If this value is not set, no URI will be
; recognized as a status page. It shows the following informations:
;   pool                 - the name of the pool;
;   process manager      - static, dynamic or ondemand;
;   start time           - the date and time FPM has started;
;   start since  ...
***
Content cut off. Read original on https://old.reddit.com/r/nginx/comments/1empkm1/very_weird_behaviour_with_nginx_and_phpfpm/