nginx

1
 
 
The original post: /r/nginx by /u/Peter3026 on 2024-12-27 03:57:59.

I want all requests to https://domain.com/app1/whatever... to be handled by http://[IP]:[other port]/whatever... and returned to the client under the original request URL.

Here is an example of what I had:

location /router/ {
        rewrite ^/router/?(.*)$ /$1 break;
        proxy_pass  http://192.168.0.1/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

In this instance, the backend server 192.168.0.1 serves a login page under /login.htm. I expected nginx to deliver it to the client under /router/login.htm, but the client was redirected to /login.htm instead, which results in a 404 error.

I have also tried using proxy_pass http://192.168.0.1/; alone, which results in the same error.

I found a post on ServerFault that perfectly describes my problem, but the solution provided failed on my machine. Where should I look?

Full Nginx config: https://pastebin.com/MxLw9qLS
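For what it's worth, the snippet above only rewrites the request path; a path-only Location header sent by the backend on a redirect (e.g. /login.htm) is not matched by the default proxy_redirect and goes back to the client unchanged, which matches the behaviour described. A minimal sketch of a proxy_redirect-based fix, assuming the backend really does redirect with an absolute path such as /login.htm:

location /router/ {
        rewrite ^/router/?(.*)$ /$1 break;
        proxy_pass  http://192.168.0.1/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # map path-only redirect targets from the backend (e.g. /login.htm) back under /router/
        proxy_redirect / /router/;
}

This only touches the Location header; links embedded in the returned HTML are a separate problem (sub_filter territory).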

2
 
 
The original post: /r/nginx by /u/Satrapes1 on 2024-12-25 22:01:38.

Hello,

I use linuxserver.io nginx container for a reverse proxy and I came upon a challenge I hadn't faced before.

For those of you who don't know, the container above comes pre-configured with a modular http context, and you add the services you want as small .conf files that describe each server; most popular services already have samples.

I created a wildcard certificate for *.example.internal for the reverse proxy, which has covered my needs whenever I added a new service.

Now I want to add a service which requires its own TLS certificate. Let's call it sso.example.internal.

I figured out how to do it with the stream context, but now the problem is that I can have either the http context or the stream context on port 443, not both; otherwise nginx complains that the address is already bound.

So far I can imagine 2 possible solutions:

a) use 2 different ports, i.e. 443 and 4443

b) use 2 nginx instances, one with a stream context only and one with an http context only, both listening on port 443. I am thinking this could only work with separate subdomains, i.e. sso.new.internal and *.example.internal. But it would also fail because the two reverse proxies cannot both bind to port 443, which is essentially the same problem as a).

Is there a clever way to have both the http and stream contexts listen on 443?

Any help appreciated and happy holidays to all.
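For what it's worth, the usual single-instance answer is to give the stream context the public 443 and let it route by SNI with ssl_preread, handing most names to the http context on an internal port. A minimal sketch, where the 8443/8444 ports and backend addresses are assumptions:

stream {
    map $ssl_preread_server_name $tls_backend {
        sso.example.internal 127.0.0.1:8444;   # the service that terminates its own TLS
        default              127.0.0.1:8443;   # local http context with the wildcard cert
    }

    server {
        listen 443;
        ssl_preread on;
        proxy_pass $tls_backend;
    }
}

http {
    server {
        listen 8443 ssl;                       # the http context no longer binds 443 itself
        server_name *.example.internal;
        ssl_certificate     /config/keys/example.internal.crt;   # placeholder paths
        ssl_certificate_key /config/keys/example.internal.key;
    }
}

One side effect worth knowing: the http context then sees every connection as coming from 127.0.0.1, so preserving client IPs needs the PROXY protocol (or similar) on both sides if the access logs matter to you.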

3
 
 
The original post: /r/nginx by /u/Swiss_Meats on 2024-12-19 16:03:47.

I have a domain purchased from GoDaddy, and I set up Nginx Proxy Manager; I am able to log in on its port and manage it. I also went to DuckDNS and set that up. I then went to my GoDaddy DNS settings and added a CNAME for www pointing to the DuckDNS URL, with a TTL of 1/2 hour.

I went back to Nginx Proxy Manager and clicked to add a new proxy host with the GoDaddy domain that I purchased, for example www.exampledomain.com

Scheme: http

Forward Hostname / IP > exampledomain.com > port 2283

Added Websockets Support, but also tried it with websocket support removed.

Can't log in though. What am I doing wrong?

https://preview.redd.it/poilhsjzvt7e1.png?width=357&format=png&auto=webp&s=5bb18d1b81e2f3919eb73f1a483fbf04bffe531e

Also, GoDaddy had an ANAME record there prior (deleted it).

Also they had a CNAME (deleted it as well); not sure if I should have, or if it would have messed anything up, but it was already there before I did this.

4
 
 
The original post: /r/nginx by /u/Heartade on 2024-12-19 04:09:21.

I've set up a fairly standard server that serves static files, and after running certbot I now get ERR_SSL_PROTOCOL_ERROR on the client, with this error in the nginx log.

2024/12/19 03:53:40 [error] 9499#9499: *593 recv() failed (104: Connection reset by peer) while proxying and reading from upstream, client: xxx.xxx.xx.xxx, server: 0.0.0.0:443, upstream: "127.0.0.1:22", bytes from/to client:227/78, bytes from/to upstream:78/227 (Client IP address obfuscated)

Has anyone encountered a similar situation?
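The "upstream: 127.0.0.1:22" in that log is the striking part: it reads as though whatever owns port 443 is proxying the connection to SSH (the message has the shape of a stream-module proxy error) rather than serving the static site, so it may be worth checking for a stream block or another config that also listens on 443. For comparison, a minimal sketch of a plain static-file TLS server, with the domain and certbot-style paths as placeholders:

server {
    listen 443 ssl;
    server_name example.com;                                              # placeholder

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    root /var/www/example.com;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}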

5
 
 
The original post: /r/nginx by /u/TheAAAndersen on 2024-12-11 10:01:25.

Hey guys,

I’m facing an issue and hope someone can help. We’re developing a web application using Spotify’s SDK. Our token exchange works fine on localhost, but we can’t get it to work on our live website.

We’re running a setup with a load balancer and two droplets on DigitalOcean, where each droplet is using Nginx. Does anyone have experience with a similar setup or can point us in the right direction?

Thanks in advance! 🙏

6
 
 
The original post: /r/nginx by /u/Special_Goose3195 on 2024-12-10 12:01:46.

Hello,

I am looking for pointers on how to implement customized functions for PSK derivation, like querying a DB or HSM, or just a specific key derivation algorithm.

Thanks for your help.

7
 
 
The original post: /r/nginx by /u/carmane02 on 2024-12-10 01:59:57.

Hi everyone, I’m having an issue with SSL configuration on Cloudflare and Nginx Proxy Manager, and I hope you can help me.

Here’s my setup:

• I created an SSL certificate on Cloudflare for the domains *.mydomain.com and mydomain.com

• I uploaded the certificate to Nginx Proxy Manager, where I set up a proxy pointing to Authelia (IP: 192.168.1.207, port: 9091).

• I created a DNS A record on Cloudflare for auth.mydomain.com, which points to the public IP of my server.

• I enabled SSL on the Nginx proxy with the Cloudflare certificate, forcing SSL and configuring the proxy settings (advanced settings and headers, etc.).

The problem is that when I visit auth.mydomain.com I get the “Invalid SSL certificate” error with the code 526 from Cloudflare.

I’ve already checked a few things:

  1. SSL on Cloudflare: I set the SSL mode to Full (not Flexible) to ensure a secure connection between Cloudflare and my server.
  2. SSL certificate on Nginx: I uploaded the Cloudflare certificate and properly configured the SSL part in Nginx.
  3. Nginx Proxy Configuration: The proxy setup seems correct, including the forwarding headers.

I’m not sure what’s causing the issue. I’ve also checked the DNS settings and Cloudflare settings, but nothing seems to work. Does anyone have an idea what could be causing the 526 error and how to fix it?

Thanks in advance!

8
 
 
The original post: /r/nginx by /u/CaramelLynn on 2024-12-09 04:38:27.

Hello,

I'm looking to self-host a website (for learning purposes). I have a domain I bought from Namecheap, and I have nginx installed on my Linux computer. How do I make the website accessible from the domain outside my local area network? Thank you!
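A rough outline of the usual path, with every name and path below a placeholder: point an A record for the Namecheap domain at your public IP, forward ports 80/443 on your router to the Linux box, and give nginx a server block that answers for that name; HTTPS (e.g. via certbot) comes after the name resolves and port 80 reaches the machine.

server {
    listen 80;
    server_name yourdomain.example;   # placeholder: the domain bought from Namecheap
    root /var/www/yourdomain;         # placeholder: wherever the site files live
    index index.html;
}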

9
 
 
The original post: /r/nginx by /u/succulent_samurai on 2024-12-08 18:01:53.

Hey all, I'm trying to host a Terraria server using Tshock and using nginx as a reverse proxy (basically the goal is to allow people to connect to my server using terraria.mywebsite.com so I don't have to give out my IP address). Tshock works if I connect to it using my IP address, so I know the server works fine. But when I use my domain, the game gets to the "found server" step and then just sits there forever. This makes me think that there's an issue somewhere between nginx and tshock when tshock tries to send data back to nginx, but I'm not super familiar with reverse proxies so I could have this wrong.

Here's my nginx.conf file:

server {
    listen 80;
    server_name terraria.mywebsite.com;

    location / {
        proxy_pass http://localhost:7777/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Accept-Encoding "";
    }
}

Nginx logs have some alerts about "*9 open socket #18 left in connection 8", and I don't know what that means.

And I've tried connecting on both ports 80 and 443 from terraria to no avail (I have nginx listening on 443 as well for https in case it's needed).

Thanks in advance if anyone's able to help!
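One point the config above runs into: Terraria clients speak a raw TCP protocol on port 7777, not HTTP, so an http-context proxy_pass generally cannot carry the game traffic, and the domain name only changes what the client resolves, not which port it dials. A minimal sketch of the stream-module alternative, assuming TShock is bound to 127.0.0.1:7777 and nginx takes the public side of that port (the stream block sits alongside http in nginx.conf, not inside it):

stream {
    server {
        listen 7777;                  # public game port
        proxy_pass 127.0.0.1:7777;    # TShock listening on localhost only
    }
}

Players would still connect to terraria.mywebsite.com on port 7777 as usual.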

10
 
 
The original post: /r/nginx by /u/PrinceHeinrich on 2024-12-05 11:15:15.
11
 
 
The original post: /r/nginx by /u/jjmaximo on 2024-12-04 19:40:22.

Hi

I was working on configuring a locations.conf file for a reverse proxy with nginx. However, when one of the services referenced in locations is turned off/paused in Docker, nginx simply stops working and responding. How can I get around this problem, so that even when the service is off nginx will start and work normally?

I wonder if there is some kind of try-catch that could be used in this case, or something similar.

Last nginx logs before stopping:

/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2024/12/04 19:10:42 [emerg] 1#1: host not found in upstream "microsservico_whatsapp_front" in /etc/nginx/locations.conf:16
nginx: [emerg] host not found in upstream "microsservico_whatsapp_front" in /etc/nginx/locations.conf:16

The location configuration I have set:

    location /microsservico_whatsapp_front/ {
        proxy_pass http://microsservico_whatsapp_front:7007;
        rewrite ^/microsservico_whatsapp_front(.*)$ $1 break;
    }

Any suggestions to help me? Please
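One common workaround, sketched under the assumption that Docker's embedded DNS (127.0.0.11) is reachable from the nginx container: put the upstream hostname in a variable and add a resolver, so nginx looks the name up at request time instead of refusing to start when the container is down.

    location /microsservico_whatsapp_front/ {
        resolver 127.0.0.11 valid=30s;                         # Docker's embedded DNS
        set $whatsapp_front http://microsservico_whatsapp_front:7007;
        rewrite ^/microsservico_whatsapp_front(.*)$ $1 break;
        proxy_pass $whatsapp_front;
    }

With the service stopped, requests to that location then fail with a 502 instead of taking the whole nginx instance down with them.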

12
 
 
The original post: /r/nginx by /u/yegor-usoltsev on 2024-12-04 11:20:20.

Hi all,

I've been experimenting with HTTP keep-alive in NGINX as a reverse proxy and documented my findings in this GitHub repo.

The one thing that caught my attention is that NGINX does require additional configuration in order for it to reuse upstream connections, unlike other proxies such as HAProxy, Traefik, or Caddy, which all enable HTTP keep-alive by default. So here's my final configuration that came out of this:

server {
    location / {
        proxy_pass http://backend/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}

map $http_upgrade $connection_upgrade {
    default upgrade;
    "" "";
}

upstream backend {
    server 127.0.0.1:8080;
    keepalive 16;
}

To the community:

  1. Why isn't keep-alive enabled by default in NGINX?
  2. Are there any edge cases I might have overlooked?
  3. What would you suggest for simplifying or improving those configurations?

Looking forward to hearing your thoughts!

13
 
 
The original post: /r/nginx by /u/Skywrathx9 on 2024-12-03 20:53:28.

If anyone can chime in feel free, I'm looking for a yes(and how)/no answer.

I have a piece of software that communicates with its backend through three communication channels.

  1. A layer 7 connection that uses TLS for encryption and makes requests towards an FQDN

  2. Also layer 7 aimed at an FQDN but is done over WSS (web sockets)

  3. This is the problematic one as this one happens on Layer 4 and is an encrypted pure socket connection (not web sockets).

I'm being told that to proxy this software's connections I would need to use 3 hosts, one for each channel.

Does NGINX have the ability to handle all 3 on a single host (or maybe even 2, just to reduce the number of hosts running the proxy) through a configuration I'm not aware of?
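In principle a single nginx instance can carry all three: channels 1 and 2 can share one http-context server block (WSS starts life as an HTTP request upgraded in place), and channel 3 can go through the stream context. A rough sketch, assuming the raw socket channel gets its own port and that the HTTPS and WSS endpoints are separable by path; every address, port and path below is a placeholder:

http {
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ""      close;
    }

    server {
        listen 443 ssl;
        server_name app.example.com;

        ssl_certificate     /etc/ssl/app.pem;
        ssl_certificate_key /etc/ssl/app.key;

        location / {
            proxy_pass http://127.0.0.1:8080;        # channel 1: HTTPS requests
        }

        location /ws/ {
            proxy_pass http://127.0.0.1:8081;        # channel 2: WSS
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
        }
    }
}

stream {
    server {
        listen 9443;                                 # channel 3: raw encrypted socket, passed through as-is
        proxy_pass 127.0.0.1:9000;
    }
}

If channel 3 had to share 443 with the other two, it would come down to SNI routing with ssl_preread in the stream context, which only works if the FQDNs differ.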

14
 
 
The original post: /r/nginx by /u/vectorx25 on 2024-12-03 01:29:33.

If anyone finds it useful: this is the best summary of nginx config, HTTPS redirects, caching, and security settings I've seen so far; very clear, with good examples.

https://medium.com/@nomannayeem/mastering-nginx-a-beginner-friendly-guide-to-building-a-fast-secure-and-scalable-web-server-cb075b423298

15
 
 
The original post: /r/nginx by /u/Connect_Computer_528 on 2024-12-02 09:59:49.

I have the following nginx configuration in Docker. The problem is that in my Node app (the proxied backend) I get the IP of the nginx server, not the user's real IP, when reading the X-Real-IP header on requests sent from the frontend.

upstream frontend {
    server frontend:3000;
}

upstream backend {
    server backend:4000;
}

server {
    listen 80;
    location / {
        auth_basic "Restricted";
        auth_basic_user_file  /etc/nginx/.htpasswd;

        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 1m;
        proxy_connect_timeout 1m;
        proxy_pass http://frontend/;
    }

    location /api {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Real-IP $remote_addr;

        rewrite /api/(.*) /$1 break;
        proxy_pass http://backend/;

        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    location /socket.io/ {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;

        proxy_pass http://backend/;

        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
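One thing that stands out in the config above: proxy_set_header directives are not shared between locations once a location defines its own, and X-Real-IP is only set under /api, so the other two locations never send it. A hedged first step is to set it everywhere, e.g. for the websocket location:

    location /socket.io/ {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;     # added; previously only present under /api
        proxy_set_header Host $host;

        proxy_pass http://backend/;

        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

The same line would go into location /. Two caveats: if the frontend itself makes server-side calls to the backend, the backend will see the frontend container's address no matter what nginx sets (the frontend has to pass the header along), and an Express backend reading req.ip behind a proxy also needs its trust proxy setting enabled.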

16
 
 
The original post: /r/nginx by /u/vectorx25 on 2024-12-02 03:10:19.

Hello, I have a Django site running behind nginx.

I already installed ngxblocker and it seems to be working, but I still see daily access log entries like these:

78.153.140.224 - - [02/Dec/2024:01:43:52 +0000] "GET /acme/.env HTTP/1.1" 404 162 "-" "Mozilla/5.0 (Linux; U; Android 4.0.4; en-us; GT-S6012 Build/IMM76D) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 Mobile Safari/534.30" "-"

51.161.80.229 - - [02/Dec/2024:02:31:34 +0000] "GET /.env HTTP/1.1" 404 194 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.5845.140 Safari/537.36" "-"

13.42.17.147 - - [02/Dec/2024:02:00:07 +0000] "GET /.git/ HTTP/1.1" 200 1509 "-" "Mozilla/5.0 (X11; Linux x86_64)" "-"

I have 80 and 443 open completely for the website; these guys are trying to steal .env, AWS, and other creds via GET requests.

Is there anything I can do to block IPs that don't hit the legitimate GET and POST routes I have advertised on my Django backend? I started adding constant spammers' IPs to an iptables blacklist, but it's a losing battle; impossible to keep up manually.

Not sure how to automate this.
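One nginx-side measure for the probe paths themselves, sketched with a regex that covers the patterns in those log lines (this answers the requests cheaply, it does not ban the source by itself); it would sit in the server block in front of the Django proxy locations:

    # close the connection without any response (nginx's special 444 code) for common secret-hunting paths
    location ~* (^|/)\.(env|git|aws)($|/) {
        return 444;
    }

Automating the IP bans themselves is usually done outside nginx: a tool such as fail2ban or CrowdSec watching the access/error log for these patterns and inserting the iptables/nftables rules for you, which replaces the manual blacklist work.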

17
 
 
The original post: /r/nginx by /u/PintSizeMe on 2024-12-01 21:13:15.

I'm having a problem getting nginx to serve files under a sub-path rather than at the root: I just get the nginx default page at the root and a not-found error at /static.

server {
    listen        8446 default_server;
    server_name   web01;
    location /static {
        root /webfiles/staticfiles;
        autoindex on;
    }
}

However, if I use this, I do get the files at the root as I'd expect (the only difference is the location line):

server {
    listen        8446 default_server;
    server_name   web01;
    location / {
        root /webfiles/staticfiles;
        autoindex on;
    }
}

My goal is to share files from 4 different folders under 4 different sub-paths. I've been searching this off and on for months, and now that it's about time to build a replacement server, I really want to get this solved rather than install Apache to do it again, since Apache is overkill.

And I have autoindex on for troubleshooting; I'll drop it once I get things working.
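The likely culprit in the first block is that root appends the full request URI, so /static/foo is looked up at /webfiles/staticfiles/static/foo. A sketch of the alias form, assuming the files really live directly under /webfiles/staticfiles:

server {
    listen        8446 default_server;
    server_name   web01;

    location /static/ {
        alias /webfiles/staticfiles/;   # alias substitutes the matched prefix instead of appending the URI
        autoindex on;
    }
}

The same pattern repeated with four locations and four alias targets would cover the four-folder goal.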

18
 
 
The original post: /r/nginx by /u/alohl669 on 2024-12-01 17:13:01.

Hi, I'm trying to create a custom error page to replace nginx's default.

The problem is that I want to do it for every site, or directly for nginx; I mean, I don't want to declare an error_page directive in every config file.
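A sketch of the inheritance route, assuming all the site configs are included into a single http context and none of them declare their own error_page (a server that does declare one stops inheriting these); paths are placeholders:

http {
    # inherited by every server {} that does not set its own error_page
    error_page 404             /custom_404.html;
    error_page 500 502 503 504 /custom_50x.html;

    # the error URI is then looked up inside whichever server handled the request,
    # so the file must exist under each site's root, or be served from a shared
    # location such as:  location = /custom_404.html { root /usr/share/nginx/errors; internal; }
}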

19
 
 
The original post: /r/nginx by /u/twistedt on 2024-11-30 13:01:58.

I see some old posts in here, but I'm wondering whether anyone has had luck lately with reverse proxying/streaming Icecast through NPM?

20
 
 
The original post: /r/nginx by /u/These_Republic_4447 on 2024-11-30 06:43:02.

I want to redirect users from port 8000 to HTTPS. I have 3 domains, eohs.lrpnow.com, rcb.lrpnow.com, and cimlearn.com, all on port 8000. The first two redirect correctly to https://cimlearn.com/

But when I type cimlearn.com:8000 it takes me to this: https://cimlearn.com:8000/ when it should redirect to https://cimlearn.com/ . What is wrong with my config? How do I fix it?

I have cleared my browser cache and tested incognito, but it is not working for that single domain, cimlearn, on 8000.

nginx config:

http {

    ....

    # Redirect port 8000 to HTTPS
    server {
        listen 8000 default_server;
        server_name _;

        # Redirect all traffic to HTTPS on cimlearn.com
        return 301 https://cimlearn.com$request_uri;

        # Redirect all traffic to HTTPS on cimlearn.com without including the port
        return 301 https://cimlearn.com$uri$is_args$args;
    }

    ...

    # HTTPS Server Block for cimlearn.com
    server {
        listen 443 ssl;
        server_name cimlearn.com;

        ssl_certificate C:/nginx-1.26.0/certs/cimlearn.com-fullchain.pem;
        ssl_certificate_key C:/nginx-1.26.0/certs/cimlearn.com-key.pem;
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_ciphers EECDH+AESGCM:EDH+AESGCM;
        ssl_prefer_server_ciphers on;

        ....

    # Redirect www.cimlearn.com to cimlearn.com
    server {
        listen 443 ssl;
        server_name www.cimlearn.com eohs.lrpnow.com rcb.lrpnow.com;

        ssl_certificate C:/nginx-1.26.0/certs/cimlearn.com-fullchain.pem;
        ssl_certificate_key C:/nginx-1.26.0/certs/cimlearn.com-key.pem;
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_ciphers EECDH+AESGCM:EDH+AESGCM;
        ssl_prefer_server_ciphers on;

        return 301 https://cimlearn.com$request_uri;
    }
}
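Two tentative observations on the port-8000 block above: only the first return in a server block ever runs, so the second one is dead, and a result of https://cimlearn.com:8000/ looks more like the browser upgrading the scheme on its own (HSTS, for instance) before nginx sees anything, since neither return puts a port in the target URL. A trimmed sketch of that listener:

server {
    listen 8000 default_server;
    server_name _;
    # single redirect; no port appears because none is given in the target URL
    return 301 https://cimlearn.com$request_uri;
}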

21
 
 
The original post: /r/nginx by /u/Avry_great on 2024-11-29 10:54:05.

I'm trying to host my website for the first time, and nginx doesn't seem to recognize my backend. I tried to make the API location in nginx catch all the APIs and send them to port 5000, but that doesn't work, so I decided to test a single API route as below. There is always an error message in the signup interface, but nothing shows up in the backend console and no POST/GET log is printed, even though it runs perfectly fine locally. The error from the nginx log is:

2024/11/29 10:36:48 [error] 901#901: *9 connect() failed (111: Connection refused) while connecting to upstream, client: 172.69.121.138, server: avery-insights.icu, request: "POST /auth/signup HTTP/1.1", upstream: "http://127.0.0.1:5000/auth/signup", host: "avery-insights.icu"

    location /auth/signup {
        proxy_pass http://localhost:5000/auth/signup;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

Backend code:

server.js:

const authRoutes = require('./routes/authRoutes');
app.use('/auth', authRoutes);
app.use('/table', tableRoutes);

authRoutes.js

router.post('/signup', validateSignup, signup);

22
 
 
The original post: /r/nginx by /u/WHYAMIONTHISSHIT on 2024-11-28 10:51:25.

https://github.com/nginxinc/nginx-ldap-auth - explicitly "not hardened for production"

https://github.com/kvspb/nginx-auth-ldap - no such warning, but an old project, not particularly maintained it seems

https://github.com/caltechads/nginx-ldap-auth-service - more recently maintained, but barely any stars...

We're using nginx as a reverse proxy and we'd like a front line of security for the webapp. Most of our stuff is hosted with Apache, with the LDAP auth done as follows. I'm just looking for something in nginx that is equally secure (I'm new to the company and haven't worked with Apache before, which is why I stuck to what I know, proxying with nginx). Do I have to migrate to Apache instead?

<Location "/">
  AuthName "____"
  AuthType Basic
  AuthBasicProvider ldap
  AuthLDAPURL "____"
  AuthLDAPBindDN "____"
  AuthLDAPBindPassword "____"
  <RequireAny>
    Require ip 10.
    Require valid-user
  </RequireAny>
</Location>
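At least the nginxinc project above hooks into nginx through the auth_request subrequest, which official nginx packages ship with. A rough sketch of that wiring, including a satisfy any to mirror Apache's RequireAny (local subnet OR valid user); the daemon's address and the webapp's port are placeholders that depend on what you deploy:

location / {
    satisfy any;                          # either the allow rule or a successful auth_request lets the request through
    allow 10.0.0.0/8;
    deny all;

    auth_request /ldap-auth;
    proxy_pass http://127.0.0.1:8080;     # the webapp (placeholder)
}

location = /ldap-auth {
    internal;
    proxy_pass http://127.0.0.1:8888;     # whichever LDAP auth daemon is deployed (placeholder)
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
}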

23
 
 
The original post: /r/nginx by /u/Revolutionary-Star71 on 2024-11-28 01:46:57.

Hi yall, I am trying to set up a proxy for my gRPC server.

I am using NGINX as a reverse proxy, run locally using docker-compose. My idea is to route the following:

api.domain.com/api to my regular Express server and api.domain.com/grpc to my gRPC server.

I have the following in my nginx.conf:

events {
  worker_connections 1024;
}

http {

    map $http_upgrade $connection_upgrade {
        default upgrade;
        '' close;
    }

    # All other servers, eg: admin dashboard, client website etc

    server {
        listen 80;
        http2 on;
        server_name ;

        location /api {
            proxy_pass http://host.docker.internal:5001/;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;

            # WebSocket support
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
        }

        location /grpc {
            grpc_pass grpc://host.docker.internal:50051;
        }
    }

}

I am using nginx:alpine.

Calling grpc://host.docker.internal:50051 in Postman works fine, but trying to call http://api.dev-local.com/grpc won't work.

curl -I on the domain shows HTTP/1.1 regardless of setting http2 on;.

Now I also plan to put this on an EC2 server for production; I use nginx there, but I think it's going to be easier to set it up using an ALB.

Any ideas on why this is not working?
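Two guesses about the symptoms above: curl -I on a cleartext port will negotiate HTTP/1.1 no matter what, since nginx only speaks h2c to clients that use HTTP/2 "prior knowledge" (which gRPC clients do, plain curl -I does not); and grpc_pass does not strip the /grpc location prefix, so the upstream sees /grpc/Package.Service/Method instead of the path the service registered. A sketch that sidesteps both by giving gRPC its own cleartext HTTP/2 listener; the port is an assumption and would also need publishing in docker-compose:

server {
    listen 50052;
    http2 on;                                        # dedicated h2c listener for gRPC
    server_name _;

    location / {
        grpc_pass grpc://host.docker.internal:50051; # no prefix to strip on a dedicated port
    }
}

The client would then target api.dev-local.com:50052 with no /grpc path component.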

24
 
 
The original post: /r/nginx by /u/BigHowski on 2024-11-27 20:23:39.

Hi all,

Forgive the post, but I'm a bit stuck and looking for a little help with my self-hosted sites, all of which have stopped working as of today. I have the following:

  • A Windows box with a host of apps (for example Calibre), some of which are containers in Docker
  • Nginx acting as a reverse proxy (itself running in a container)
  • A DDNS account pointing to my IP, as it's not static
  • A domain which allows subdomains and forwards to the DDNS

Up until yesterday this was working like a charm, but today for some reason I'm getting a 504 across all of the subdomains I use (however, the main domain routes to my DDNS, which gives me the nginx congratulations page). Internally everything is fine if I use localhost or the IP along with the port for the app, so I'm guessing maybe something isn't passing the traffic on internally within Nginx?

Looking at the logs I can see the following:

2024/11/27 19:01:51 [error] 202#202: *3411 open() "/var/www/html/xml/info.xml" failed (2: No such file or directory), client: 172.20.0.1, server: localhost-nginx-proxy-manager, request: "GET /xml/info.xml HTTP/1.1", host: "cpc143398-mfl22-2-0-cust830.13-1.cable.virginm.net"

2024/11/27 19:01:51 [error] 202#202: *3412 open() "/var/www/html/magento_version" failed (2: No such file or directory), client: 172.20.0.1, server: localhost-nginx-proxy-manager, request: "GET /magento_version HTTP/1.1", host: "cpc143398-mfl22-2-0-cust830.13-1.cable.virginm.net"

2024/11/27 19:01:51 [error] 202#202: *3413 open() "/var/www/html/api/v1/check-version" failed (2: No such file or directory), client: 172.20.0.1, server: localhost-nginx-proxy-manager, request: "GET /api/v1/check-version HTTP/1.1", host: "cpc143398-mfl22-2-0-cust830.13-1.cable.virginm.net"

2024/11/27 19:30:10 [error] 203#203: *3607 open() "/var/www/html/cgi-bin/luci/;stok=/locale" failed (2: No such file or directory), client: 172.20.0.1, server: localhost-nginx-proxy-manager, request: "GET /cgi-bin/luci/;stok=/locale HTTP/1.1", host: "86.16.243.63:80"

2024/11/27 19:38:05 [error] 203#203: *3638 open() "/var/www/html/cgi-bin/luci/;stok=/locale" failed (2: No such file or directory), client: 172.20.0.1, server: localhost-nginx-proxy-manager, request: "GET /cgi-bin/luci/;stok=/locale HTTP/1.1", host: "86.16.243.63:80"

2024/11/27 19:45:54 [error] 203#203: *3684 open() "/var/www/html/cgi-bin/index.html" failed (2: No such file or directory), client: 172.20.0.1, server: localhost-nginx-proxy-manager, request: "GET /cgi-bin/index.html HTTP/1.1", host: "86.16.243.63:80"

But I'm really unsure how to go about troubleshooting. Any idea what I can do to track down the issue and fix it? Maybe it's a permissions issue, but I don't think anything has changed. Maybe I updated the container the other day, but I cannot remember for sure.

25
 
 
The original post: /r/nginx by /u/mylinuxguy on 2024-11-27 17:27:42.

I have a bunch of Tasmota wifi plugs. Currently I access them by just http://plug_name/ and that gets me to their web interface. They don't do (easily... or just don't do) SSL, so I can't do https://plug_name or http://plug_name.mydomain.net (Google Chrome forces an https:// redirect when I use a fully qualified domain name, and since the plugs don't do SSL, that's an issue).

I'd like to do something like the following (I use this for my https:// --> http:// reverse proxy stuff... that SSL proxy redirect works fine):

server {
    server_name clock.mydomain.net projector.mydomain.net fan.mydomain.net;

    listen 80;
    listen 443 ssl http2;
    listen [::]:80;
    listen [::]:443 ssl http2;

    ssl_certificate /etc/letsencrypt/live/mydomain.net/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mydomain.net/privkey.pem;
    ssl_trusted_certificate /etc/letsencrypt/live/mydomain.net/chain.pem;

    include include/ssl.conf;
    include include/wp.ban.conf;

    location / {
        proxy_pass http://tasmota_%1/;
        include include/proxy.conf;
    }
}

So... how can I get the %1 in http://tasmota_%1/ to be clock, projector or fan based on the URL that comes into nginx?
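One way to get that per-device piece is a named capture in a regex server_name, which becomes a variable usable in proxy_pass. A sketch under the assumptions that the plugs are reachable as tasmota_<name> and that some DNS server can resolve those names (nginx's resolver does not read /etc/hosts; if the plugs only have static IPs, a map from $device to an IP avoids the resolver entirely):

server {
    # capture the first host label (clock, projector, fan) into $device
    server_name ~^(?<device>clock|projector|fan)\.mydomain\.net$;

    listen 443 ssl http2;
    ssl_certificate /etc/letsencrypt/live/mydomain.net/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mydomain.net/privkey.pem;

    location / {
        resolver 127.0.0.1;                  # placeholder; required because the target contains a variable
        proxy_pass http://tasmota_$device/;
        include include/proxy.conf;
    }
}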
