nginx


The nginx community on Reddit. Reddit gives you the best of the internet in one place.

founded 1 year ago
126
 
 
The original post: /r/nginx by /u/prideandjoy5 on 2024-07-08 15:53:35.

I am trying to use nginx to redirect url/app paths to various ip/port services, and am asking for verification that I can do it, plus a push in the right direction.

Image attached showing what I am attempting to do. I have searched and tried various approaches with partial success, and I'm just looking for an indication that I am not chasing an unreachable solution.

Thanks for any guidance provided.

https://preview.redd.it/l4ob5bs6hbbd1.jpg?width=425&format=pjpg&auto=webp&s=221c0fd140d09b7b9ef56370c8812f8d9e7570b5
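Broadly, this kind of url/path to ip:port fan-out is a standard nginx reverse-proxy pattern. A minimal sketch, with placeholder hostnames, paths, and ports (not taken from the image):

```nginx
server {
    listen 80;
    server_name example.lan;  # placeholder

    # /app1 -> one backend service
    location /app1/ {
        proxy_pass http://192.168.1.10:8080/;  # trailing slash strips the /app1/ prefix
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # /app2 -> a different backend service
    location /app2/ {
        proxy_pass http://192.168.1.11:9090/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

The usual catch is that each backend app must generate links that work under its sub-path (or be configured with a base path); apps that emit absolute URLs at / will break when proxied under /app1/.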

127
 
 
The original post: /r/nginx by /u/TheRealLifeboy on 2024-07-06 10:36:24.

I set up the following nginx config on Ubuntu 20.04 in /etc/nginx/sites-enabled/mmonit.imb.co.

server {
    server_name mmonit.imb.co;

    # root /var/www/html;
    try_files $uri/index.html $uri @mmonit;

    location / {
            proxy_pass http://mmonit.imb.co:9050;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;

            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
    }

listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/mmonit.imb.co/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/mmonit.imb.co/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    if ($host = mmonit.imb.co) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    server_name mmonit.imb.co;
    return 404; # managed by Certbot
}

The site times out when I go to https://mmonit.imb.co.

If I remove the root directive, it always displays the default nginx page. Even if I remove the bulk of the proxy_set directives, it still only gives me the default page.

What is wrong with this setup? http://mmonit.imb.co:9050 works perfectly from the Edge browser, but due to a bug (I suspect) in both of the latest Firefox and Chromium-based browsers, it is not possible to turn https redirection off. (I can turn it off in the settings, but it has no effect.) That is why I have resorted to just setting up an https reverse proxy to access mmonit.
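One thing that stands out in the config above: the server-level `try_files` references a named location `@mmonit` that is never defined, and everything it does can be handled inside `location /` anyway. A trimmed sketch of the HTTPS block (same names and paths as above, assuming the goal is a plain reverse proxy; targeting 127.0.0.1 is an assumption that mmonit listens on the same box, and avoids looping the request back through the proxied hostname):

```nginx
server {
    server_name mmonit.imb.co;

    location / {
        proxy_pass http://127.0.0.1:9050;  # assumed local backend
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/mmonit.imb.co/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/mmonit.imb.co/privkey.pem; # managed by Certbot
}
```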

128
 
 
The original post: /r/nginx by /u/nameless9346 on 2024-07-06 07:46:24.

Hello everybody,

I need the community's help.

I'm new to Docker and only started studying it yesterday.

I did a monorepo with Nx with 2 projects:

  • Node with Fastify for BE
  • Angular for FE

I wrote 2 Dockerfiles and they work perfectly: when I build and run them, I see my BE on port 3000 and my FE on port 5000. But to prevent CORS errors (this app will be deployed on my LAN), I need both services to appear under the same domain, so I created this docker-compose file:

version: '3'
services:
  node_app:
    image: my-finance-api
    container_name: node_app
    ports:
      - "3100:3000"
    networks:
      - my_network

  angular_app:
    image: my-finance-app
    container_name: angular_app
    ports:
      - "5100:5000"
    networks:
      - my_network

  nginx:
    image: nginx:latest
    container_name: nginx_proxy
    ports:
      - "7000:7000"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    networks:
      - my_network

networks:
  my_network:
    driver: bridge

This is my nginx.conf:

events { }

http {
    server {
        listen 7000;

        location /app {
            proxy_pass http://angular_app:5000;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}

The problem is that:

  • On localhost:7000/app I see only a white page, and every HTTP request returns 404 (for the CSS files, JS files, etc.)

How can I resolve it?

P.S. Sorry for my bad English :)
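For what it's worth, a white page plus 404s on the assets usually means the Angular bundle is being requested from / while nginx only proxies /app. A hedged sketch of one common fix, assuming the Angular app is rebuilt with a matching base href (e.g. `ng build --base-href /app/`):

```nginx
events { }

http {
    server {
        listen 7000;

        # The trailing slashes make nginx strip the /app/ prefix before
        # proxying, so the Angular server sees the paths it expects.
        location /app/ {
            proxy_pass http://angular_app:5000/;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }

        # Proxying the API under the same origin avoids CORS entirely.
        location /api/ {
            proxy_pass http://node_app:3000/;
        }
    }
}
```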

129
 
 
The original post: /r/nginx by /u/konmas on 2024-07-05 12:00:36.

If I have the following nginx config:

server {
    listen 80;
    server_name testsite.local;

    location = / {
        root /var/www/html/TEST/public/;
        try_files $uri $uri/ /test.html;
    }

    location / {
        root /var/www/html/TEST/WEBSITE/build/;
        try_files $uri $uri/ /index.html;
    }

    location /api {
        alias /var/www/html/TEST/API/;
        try_files $uri /index.php$is_args$args;
    }

    location ~ /\.(?!well-known).* {
        deny all;
    }

    location ~ /index\.php(/|$) {
        fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        fastcgi_param DOCUMENT_ROOT $realpath_root;
        internal;
    }
}

What I am trying to do: when you go to testsite.local it loads the static html file (test.html), and when you navigate anywhere else it loads the React app (testsite.local/home, testsite.local/login, etc.)

With the above config, it always seems to skip the "location = /" block and go right into "location /" - not sure where I am going wrong? Thank you!

If I modify the above to this:

location = /GETME {
    root /var/www/html/TEST/public/;
    try_files $uri $uri/ /test.html;
}

and then go to testsite.local/GETME, it works as expected, but I want that behavior at testsite.local itself, with everything outside of that loading the React app.

Thanks for the help!
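A likely explanation (hedged, but it fits the symptom): `location = /` does match the bare URI, but the `try_files ... /test.html` fallback triggers an internal redirect to /test.html, and that new URI is matched against the locations again, landing in `location /` with the React root. Keeping the whole exchange inside exact-match locations avoids the re-match:

```nginx
# Exact match for the bare domain: "index" internally redirects / to /test.html ...
location = / {
    root /var/www/html/TEST/public/;
    index test.html;
}

# ... so pin that URI to the same root, out of reach of "location /".
location = /test.html {
    root /var/www/html/TEST/public/;
}

location / {
    root /var/www/html/TEST/WEBSITE/build/;
    try_files $uri $uri/ /index.html;
}
```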

130
 
 
The original post: /r/nginx by /u/CorgiClouds on 2024-07-03 20:19:44.

I have tried reading stack overflow and using chatgpt, but I keep losing the first part of my API end point when I try to switch things around with regex.

I want to redirect http://server/api/repo/t//channel to http://server/t//get/channel, but I keep just getting left with http://server/channel. Here is my most recent attempt:

location ~ ^/api/repo/t/(.*)$ {
    proxy_pass http://server/t/$1/get;
}

I have also tried using "rewrite", to no avail. Please let me know if anyone has any suggestions.
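If the goal is to keep the tail of the URI after the captured segment, one hedged sketch is to capture it explicitly (the exact path shape is an assumption here, since the markdown ate part of the original URLs):

```nginx
location ~ ^/api/repo/t/([^/]+)/(.*)$ {
    # $1 = the segment after /t/, $2 = the remainder (e.g. "channel")
    proxy_pass http://server/t/$1/get/$2;
}
```

The underlying gotcha: when `proxy_pass` is used inside a regex location, nginx does not append the unmatched part of the URI for you, so anything after the match must be captured and re-emitted by hand; otherwise it is simply dropped, which matches the "left with http://server/channel" symptom.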

131
 
 
The original post: /r/nginx by /u/Successful_Beach1113 on 2024-07-03 13:01:02.

Hello all,

I am reaching out for some assistance with an NGINX Reverse Proxy I'm configuring.

I have two sites using this proxy, for reference's sake they can be called:

music.mydomain.com

video.mydomain.com

Each website has a back-end server that's doing the hosting and SSL Termination and each website listens on Port 443.

I followed this tutorial to setup the "stream" module: https://forum.howtoforge.com/threads/nginx-reverse-proxy-with-multiple-servers.83617/

I am able to successfully hit both of my sites but for whatever reason if I hit music.mydomain.com before video.mydomain.com, I always land on music.mydomain.com any time I go to video.mydomain.com.

If I hit video.mydomain.com first, I can hit music.mydomain.com just fine, but I can't get back to video.mydomain.com because I'm always landing on music.mydomain.com

I'm happy to share my configuration, but am hopeful that the referenced tutorial article will shed some light on my setup.
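Landing on whichever site was hit first is the classic symptom of a stream proxy that routes per connection rather than per SNI hostname, combined with the browser reusing connections. A hedged sketch of SNI-based routing with the stream module (backend addresses are placeholders, not from the tutorial):

```nginx
stream {
    # Route by the TLS SNI hostname, read without terminating TLS,
    # so each backend still does its own SSL termination.
    map $ssl_preread_server_name $backend {
        music.mydomain.com  192.168.1.10:443;  # placeholder backend IPs
        video.mydomain.com  192.168.1.11:443;
        default             192.168.1.10:443;
    }

    server {
        listen 443;
        ssl_preread on;
        proxy_pass $backend;
    }
}
```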

132
 
 
The original post: /r/nginx by /u/Menosa on 2024-07-02 20:23:48.

I am working with a Kotlin Multiplatform project that you can view here on GitHub. I started by using the Kotlin Multiplatform Wizard and selected the Web platform option, everything else remains unchanged.

Here's what I've done:

  • Ran the ./gradlew build command.
  • When I attempt to open the index.html file directly, from either one of these directories, the page remains blank.
  • However, when I run ./gradlew wasmJsBrowserProductionWebpack, the site launches successfully and is served by the webpack server.

I would like to serve this project using Nginx instead of WebPack. Could someone advise on the necessary Gradle build configurations to generate a directory structure that Nginx can use effectively?

Additionally, I would appreciate guidance on setting up the nginx.conf file for this purpose.
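As a starting point (hedged; the root path below is an assumption, since the actual output directory depends on the Gradle setup), the main nginx-specific requirement for a Kotlin/Wasm app is serving .wasm files with the correct MIME type and falling back to index.html:

```nginx
server {
    listen 80;
    server_name localhost;

    root /usr/share/nginx/html;   # assumed location of the production webpack output
    index index.html;

    # Some older mime.types files lack an application/wasm entry; without it,
    # browsers may refuse to instantiate the module via streaming compilation.
    location ~ \.wasm$ {
        default_type application/wasm;
    }

    location / {
        try_files $uri $uri/ /index.html;
    }
}
```

On the Gradle side, the usual approach is to copy the directory produced by the production webpack task into the nginx root (or into an nginx Docker image); `default_type` only applies when the extension is not already mapped, so this is safe even with a modern mime.types.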

133
 
 
The original post: /r/nginx by /u/builder999 on 2024-07-02 15:48:49.

I have created an app on nginx and can access it with curl https://api.domain1

Now I would like to access the same api with curl https://api.domain2 and more generally to let anyone access with curl https://api.mydomain by setting a CNAME or A record that points toward api.domain1.

Can I achieve this? Do I also need to issue a ssl certificate for each domain and update the nginx configs? Is it possible to do that automatically?

Thanks a lot
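In broad strokes: yes, this is possible, but every hostname the server answers for needs to appear in `server_name` and be covered by a certificate, since the browser/curl validates the cert against the name it dialed, not the CNAME target. A hedged sketch (domain names are placeholders; fully automating issuance for arbitrary customer domains usually means running certbot per domain or an on-demand ACME setup):

```nginx
server {
    listen 443 ssl;
    # Every domain this server should answer for, including customer CNAMEs:
    server_name api.domain1 api.domain2 api.mydomain;

    # One certificate covering all the names, e.g. issued with
    #   certbot --nginx -d api.domain1 -d api.domain2 -d api.mydomain
    # or, alternatively, separate server blocks with per-domain certificates.
    ssl_certificate     /etc/letsencrypt/live/api.domain1/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/api.domain1/privkey.pem;

    # (existing app/proxy configuration goes here)
}
```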

134
 
 
The original post: /r/nginx by /u/technician_902 on 2024-07-01 18:41:42.

Hi, I am trying to set up Vault behind an Nginx proxy, but each time I log into the UI and refresh the page, it logs me out, and it's not able to retrieve some of the UI files either. I think it has something to do with the way I have Nginx set up. The setup files are below. Any help would be great, thanks.

nginx.conf

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    server {
        listen 80;

        location /vault/ {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Accept-Encoding "";

            # to proxy WebSockets in nginx

            proxy_pass http://vault:8200/;
            proxy_redirect /ui/ /vault/ui/;
            proxy_redirect /v1/ /vault/v1/;

            # rewrite html baseurl
            sub_filter '' '';
            #sub_filter_once on;
            sub_filter '"/ui/' '"/vault/ui/';
            sub_filter '"/v1/' '"/vault/v1/';
            sub_filter_once off;
            sub_filter_types application/javascript text/html;
        }

        location /v1 {
            proxy_pass http://vault:8200;
        }
    }
}

vault-dev-server.hcl

storage "raft" {
  path    = "./vault/data"
  node_id = "node1"
}

listener "tcp" {
  address     = "0.0.0.0:8200"
  tls_disable = "true"
}

api_addr     = "http://vault:8200"
cluster_addr = "https://vault:8201"

disable_mlock = true
ui = true

docker-compose.yml

services:
  nginx:
    image: nginx:alpine
    container_name: nginx
    ports:
      - "9100:80"
    volumes:
      - ./setup/nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - vault

  vault:
    image: hashicorp/vault:latest
    environment:
      VAULT_ADDR: http://vault:8200
      VAULT_DEV_LISTEN_ADDRESS: http://0.0.0.0:8200
      VAULT_DEV_ROOT_TOKEN_ID: root
    cap_add:
      - IPC_LOCK
    entrypoint: vault server -config=/vault/config/vault-dev-server.hcl
    volumes:
      - vault_data:/vault/data
      - ./setup/vault-dev-server.hcl:/vault/config/vault-dev-server.hcl

volumes:
  vault_data:
135
 
 
The original post: /r/nginx by /u/Brief-Effective162 on 2024-06-30 20:01:13.

Hi guys. I will try to go straight to the problem to avoid a very big wall of text.

I have 4 Tomcats on the same host. They share a backend app on tomcat1; tomcat 2, 3 and 4 run their own frontend apps.

It was using an obsolete WebTier 11g and was working fine.

But I needed to change it to an nginx Docker container for better security and performance. That was done, and the application is working, aside from some random freezing on the front-end users' side.

OK. I will put one Tomcat connector block here as an example. All servers use the same config. Please check my configs:

<Connector port="8286" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443"
           maxThreads="300"
           minSpareThreads="50"
           maxSpareThreads="100"
           enableLookups="false"
           acceptCount="200"
           maxConnections="2000"
/>

Here is my nginx.conf:

user nginx;
worker_processes auto;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

# 403 error config
#error_page 403 /e403.html;
#location = /e403.html {
#    root html;
#    allow all;
#}

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    add_header X-Frame-Options SAMEORIGIN;
    add_header X-Content-Type-Options nosniff;
    add_header X-XSS-Protection "1; mode=block";

    # Allow larger than normal headers
    large_client_header_buffers 4 128k;
    client_max_body_size 100M;

    log_format main '$remote_addr - $remote_user [$time_local] "$host" - "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for" '
                    '$proxy_host $upstream_addr';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    keepalive_timeout 65;

    gzip on;
    gzip_disable "MSIE [1-6]\.(?!.*SV1)";
    gzip_proxied any;
    gzip_buffers 16 8k;
    gzip_comp_level 6;
    gzip_http_version 1.1;
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
    gzip_vary on;

    include /etc/nginx/conf.d/*.conf;
}

Here is an example of my location block:

    location /main/ {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host:$server_port;
        proxy_set_header X-Forwarded-Server $host;
        proxy_store off;
        proxy_buffering on;
        proxy_buffer_size 16k;
        proxy_buffers 64 16k;
        proxy_busy_buffers_size 32k;
        proxy_connect_timeout 3s;
        proxy_send_timeout 20s;
        proxy_read_timeout 20s;
        send_timeout 20s;
        proxy_pass http://w.x.y.z:8286;
    }

This proxy has a forward rule in my firewall.

Everything can communicate well with everything else. The problem is that sometimes I get random freezing on the user side.

This problem is very tricky because I am not getting any logs indicating errors, so I can't find a root cause.

This is a Java application running an Angular front-end, with an Oracle database as the backend.

I would like some advice about my configs:

Can compression cause an issue?

Are those timeouts well matched? Can a mismatch lead to problems?

Are those buffers OK?

What could be the problem, based on my configuration?

Is there a misconfiguration leading to lost packets or premature responses?

Could you see if it has some issues? Any advice is welcomed.

PS - I am monitoring my network; latency is quite good and I am not seeing lost packets or retransmissions.
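One thing worth trying (a hedged sketch, not a diagnosis): with the config above, nginx opens a fresh TCP connection to Tomcat for every request, and connection churn under load can surface as intermittent front-end stalls without any error logs. An upstream block with keepalive reuses backend connections:

```nginx
upstream tomcat_main {
    server w.x.y.z:8286;   # same backend as the /main/ location above
    keepalive 32;          # pool of idle connections kept open to Tomcat
}

server {
    location /main/ {
        proxy_http_version 1.1;
        proxy_set_header Connection "";   # required for upstream keepalive to engage
        proxy_pass http://tomcat_main;
        # ... existing proxy_set_header / buffer / timeout directives ...
    }
}
```

Separately, proxy_read_timeout 20s is on the aggressive side; if any backend call legitimately takes longer (reports, large queries against Oracle), nginx will cut the connection, which users experience as a freeze or broken page rather than a clean error.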

136
 
 
The original post: /r/nginx by /u/rjheitman12 on 2024-06-29 15:44:45.

Hello, I am attempting to set up NGINX in a Docker container on macOS. I am unable to create an SSL certificate; I keep getting the error below. Is there any way to fix this?

CommandError: The 'certbot_dns_cloudflare._internal.dns_cloudflare' plugin errored while loading: No module named 'CloudFlare'. You may need to remove or update this plugin. The Certbot log will contain the full error details and this should be reported to the plugin developer.
Ask for help or search for solutions at https://community.letsencrypt.org. See the logfile /tmp/certbot-log-j7a2kfsl/log or re-run Certbot with -v for more details.
The 'certbot_dns_cloudflare._internal.dns_cloudflare' plugin errored while loading: No module named 'CloudFlare'. You may need to remove or update this plugin. The Certbot log will contain the full error details and this should be reported to the plugin developer.
Ask for help or search for solutions at https://community.letsencrypt.org. See the logfile /tmp/certbot-log-11zgpsgg/log or re-run Certbot with -v for more details.
ERROR: Could not find a version that satisfies the requirement acme== (from versions: 0.0.0.dev20151006, 0.0.0.dev20151008, 0.0.0.dev20151017, 0.0.0.dev20151020, 0.0.0.dev20151021, 0.0.0.dev20151024, 0.0.0.dev20151030, 0.0.0.dev20151104, 0.0.0.dev20151107, 0.0.0.dev20151108, 0.0.0.dev20151114, 0.0.0.dev20151123, 0.0.0.dev20151201, 0.1.0, 0.1.1, 0.2.0, 0.3.0, 0.4.0, 0.4.1, 0.4.2, 0.5.0, 0.6.0, 0.7.0, 0.8.0, 0.8.1, 0.9.0, 0.9.1, 0.9.2, 0.9.3, 0.10.0, 0.10.1, 0.10.2, 0.11.0, 0.11.1, 0.12.0, 0.13.0, 0.14.0, 0.14.1, 0.14.2, 0.15.0, 0.16.0, 0.17.0, 0.18.0, 0.18.1, 0.18.2, 0.19.0, 0.20.0, 0.21.0, 0.21.1, 0.22.0, 0.22.1, 0.22.2, 0.23.0, 0.24.0, 0.25.0, 0.25.1, 0.26.0, 0.26.1, 0.27.0, 0.27.1, 0.28.0, 0.29.0, 0.29.1, 0.30.0, 0.30.1, 0.30.2, 0.31.0, 0.32.0, 0.33.0, 0.33.1, 0.34.0, 0.34.1, 0.34.2, 0.35.0, 0.35.1, 0.36.0, 0.37.0, 0.37.1, 0.37.2, 0.38.0, 0.39.0, 0.40.0, 0.40.1, 1.0.0, 1.1.0, 1.2.0, 1.3.0, 1.4.0, 1.5.0, 1.6.0, 1.7.0, 1.8.0, 1.9.0, 1.10.0, 1.10.1, 1.11.0, 1.12.0, 1.13.0, 1.14.0, 1.15.0, 1.16.0, 1.17.0, 1.18.0, 1.19.0, 1.20.0, 1.21.0, 1.22.0, 1.23.0, 1.24.0, 1.25.0, 1.26.0, 1.27.0, 1.28.0, 1.29.0, 1.30.0, 1.31.0, 1.32.0, 2.0.0, 2.1.0, 2.2.0, 2.3.0, 2.4.0, 2.5.0, 2.6.0, 2.7.0, 2.7.1, 2.7.2, 2.7.3, 2.7.4, 2.8.0, 2.9.0, 2.10.0, 2.11.0)
ERROR: No matching distribution found for acme==

[notice] A new release of pip is available: 24.0 -> 24.1.1
[notice] To update, run: pip install --upgrade pip

    at /app/lib/utils.js:16:13
    at ChildProcess.exithandler (node:child_process:430:5)
    at ChildProcess.emit (node:events:519:28)
    at maybeClose (node:internal/child_process:1105:16)
    at ChildProcess._handle.onexit (node:internal/child_process:305:5)

137
 
 
The original post: /r/nginx by /u/SovietWaffles on 2024-06-28 22:24:28.

Hi all,

Today I upgraded my internet from Fios 1 Gbps -> 2 Gbps, which included a new router, the CR1000A. Transitioning everything has gone pretty well, with the exception of NGINX. Whenever I try to connect to my domain, I get a 502 Bad Gateway error.

Looking at the logs, it seems that it can't seem to forward the connection to the relevant service:

2024/06/28 21:56:10 [error] 28#28: *1 connect() failed (111: Connection refused) while connecting to upstream, client: <my external ip>, server: <my domain>.com, request: "GET / HTTP/1.1", upstream: "https://<my external ip>:9988/", host: "<my domain>.com"

Nothing with my server set up changed except the router, so I'm pretty confused about what could be causing this. I confirmed that my ports are properly port forwarded (80 and 443), and I have set the server as a static IP in my router settings, and can still access it locally. I also confirmed that the DNS for the domain is pointing to the right IP.

The only thing I think it could be at this point is the SSL certs? They were last generated a month ago when I had the old router, and attempting to renew them failed because they aren't expired yet.

Any help would be really appreciated here.

For context, NGINX and all of my other services are running in their own Docker containers on Fedora.

nginx.conf

nginx docker-compose.yaml
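One hedged thing to check, given that the upstream in that log line is the external IP: the old router may have supported NAT hairpinning (reaching your own public IP from inside the LAN) and the CR1000A may not, which would produce exactly this "connection refused" from inside the network. Pointing the upstream at the LAN or container address sidesteps hairpinning entirely (addresses below are placeholders):

```nginx
# Instead of proxying back out through the public IP ...
#   proxy_pass https://<external ip>:9988/;

# ... target the service directly on the LAN (or by Docker service name):
location / {
    proxy_pass http://192.168.1.50:9988/;   # placeholder LAN address
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```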

138
 
 
The original post: /r/nginx by /u/mark1210a on 2024-06-28 20:50:00.

Hey All -

Has anyone been able to get NGINX to forward to an internal IP for Wordpress successfully?

With the NGINX configuration below, Wordpress loads - but the images are missing and the admin page is not accessible. Using the 10.0.0.107 address locally, everything works fine with Wordpress. The real domain has been replaced with domain.com in the file below.

Thanks for any input.

Here's my config in NGINX:

server {
    if ($host = www.domain.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    server_name www.domain.com;
    return 301 https://www.domain.com$request_uri;
}

server {
    server_name domain.com;
    return 301 https://www.domain.com$request_uri;

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/domain.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/domain.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    listen 443 ssl;
    index index.php index.html index.htm;
    server_name www.domain.com;
    client_max_body_size 500M;

    location / {
        try_files $uri $uri/ /index.php?$args;
        proxy_pass http://10.0.0.107/wordpress/;
        proxy_read_timeout 90;
        proxy_redirect http://10.0.0.107/ https://www.domain.com/;
    }

    location = /favicon.ico {
        log_not_found off;
        access_log off;
    }

    location ~* /wordpress/wp-content/.*\.(js|css|png|jpg|jpeg|gif|ico)$ {
        expires max;
        log_not_found off;
    }

    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php8.1-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }

    ssl_certificate /etc/letsencrypt/live/domain.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/domain.com/privkey.pem; # managed by Certbot
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    access_log /var/log/nginx/domain.access.log;
}

server {
    if ($host = domain.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    server_name domain.com;
    return 404; # managed by Certbot
}
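A hedged observation on the missing images and admin page: WordPress generates absolute URLs from its siteurl/home settings, so if those still point at http://10.0.0.107, the browser requests the assets from the internal IP and fails. Two common fixes are updating the WordPress URLs (WP_HOME/WP_SITEURL in wp-config.php) or rewriting response bodies at the proxy. A sketch of the latter, assuming the proxied host is as above:

```nginx
location / {
    proxy_pass http://10.0.0.107/wordpress/;
    proxy_set_header Host $host;
    proxy_redirect http://10.0.0.107/ https://www.domain.com/;

    # Rewrite absolute URLs inside the HTML (needs the sub_filter module,
    # and the upstream response must not be compressed):
    proxy_set_header Accept-Encoding "";
    sub_filter 'http://10.0.0.107/wordpress/' 'https://www.domain.com/';
    sub_filter_once off;
}
```

Fixing the URLs inside WordPress itself is generally the cleaner option; sub_filter is a workaround when the backend can't be changed.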

139
 
 
The original post: /r/nginx by /u/Melted-Metal on 2024-06-28 16:53:04.

Just as it sounds. I am looking to set up an NGINX test server to be both an S3 proxy and an Azure proxy for two different test beds. Is it possible to use one physical server, without going through a ton of extra work? We'd test the two paths at separate times, if that makes a difference.

If it is too complex, or if this just doesn't make sense, then we'd have to find a second server, but I wanted to check with the experts first.
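In general one nginx instance can front both, as long as each test bed addresses a different server_name (or port). A hedged sketch with placeholder hostnames, bucket, and storage-account names:

```nginx
# S3 test bed (placeholder bucket name)
server {
    listen 443 ssl;
    server_name s3-proxy.test.local;

    location / {
        proxy_pass https://my-bucket.s3.amazonaws.com;
        proxy_set_header Host my-bucket.s3.amazonaws.com;
    }
}

# Azure Blob test bed (placeholder storage account)
server {
    listen 443 ssl;
    server_name azure-proxy.test.local;

    location / {
        proxy_pass https://myaccount.blob.core.windows.net;
        proxy_set_header Host myaccount.blob.core.windows.net;
    }
}
```

The main complication is authenticated requests: both S3 and Azure request signatures can cover the Host header and URL, so signed traffic may need to be signed against the proxy name or restricted to public/test data.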

140
 
 
The original post: /r/nginx by /u/Ok_Cartographer_6086 on 2024-06-27 23:26:24.

Thanks in advance, I just need some hive mind support that this approach makes sense.

I need to demonstrate QUIC / HTTP/3 on iOS and Android apps on a poorly functioning wifi network.

The production platform is a combination of REST APIs and GraphQL endpoints respecting TLS 1.0 -> 1.2. behind an api gateway.

I built a wifi router with traffic control (tc) utilities so I can simulate packet loss and other network conditions.

I updated the apps' HTTP clients - I'm focused on Android, so this is Cronet with QUIC over HTTP/3, wired in via OkHttp interceptors.

I think it should work if I build an nginx reverse proxy that I point the mobile device at, by replacing my API gateway URL with the local nginx IP. What I want to see is the client making QUIC UDP requests to nginx, and nginx making HTTP/2 TCP POSTs and GETs to the platform, thereby demonstrating the apps are QUIC ready.

I'm happy to go into more detail with my client code and config files - just tell me I'm not crazy for this post:

App --UDP HTTP/3-> wifi router --HTTP/3 UDP-> nginx --HTTP/2 TCP POST/GET-> API Gateway (HTTP/2 or lower) --> response body --> nginx --> wifi router (traffic control tools) --> App

How would you approach this problem?
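The approach sounds sane. For reference, a hedged sketch of the nginx side (requires nginx 1.25+ built with HTTP/3 support; the hostname, cert paths, and upstream are placeholders):

```nginx
server {
    # QUIC/HTTP3 over UDP, plus a TCP fallback on the same port
    listen 443 quic reuseport;
    listen 443 ssl;
    http2 on;

    server_name proxy.test.local;                     # placeholder
    ssl_certificate     /etc/nginx/certs/proxy.crt;   # placeholder
    ssl_certificate_key /etc/nginx/certs/proxy.key;

    # Advertise HTTP/3 so clients can upgrade from the TCP connection
    add_header Alt-Svc 'h3=":443"; ma=86400' always;

    location / {
        proxy_pass https://api-gateway.example.com;   # placeholder upstream
        proxy_http_version 1.1;
        proxy_set_header Host $host;
    }
}
```

One caveat to the diagram: nginx speaks HTTP/1.x to upstreams (it does not proxy upstream over HTTP/2), so the nginx-to-gateway leg would be HTTP/1.1 over TCP. That doesn't affect the goal of demonstrating the apps are QUIC-ready on the client leg.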

141
 
 
The original post: /r/nginx by /u/No-Drawing-1508 on 2024-06-27 22:38:51.

Hi, I have a home server with CasaOS on it. I want to access some of the Docker apps I have when I'm out, but forwarding their ports is very insecure, so people recommended I use a reverse proxy. I installed Nginx on my CasaOS server and created a domain on FreeDNS. Where I got confused is when I had to port forward ports 80 and 443 for it to work. I know they're the ports for HTTP and HTTPS, but I don't get why that's important. I just did it on my router and added the domain to nginx with the IPv4 address of my server and the port of the Docker component, and now it works. I'm very new to this, so I'm just curious how it works and what exactly it's doing. How is it more secure than just port forwarding the ports for the Docker apps I'm using? Thanks

142
 
 
The original post: /r/nginx by /u/reddister85 on 2024-06-27 18:11:31.

Hi,

I am trying to configure a simple docker artifactory with postgres and a reverse nginx.

The redirection to the correct service is not working as expected with my reverse proxy configuration. It seems it is not getting the port.

Most probably I'm missing something, but I'd really appreciate some help!

This is my docker compose file.

services:
    postgres:
        image: postgres:13.9-alpine
        container_name: postgresql
        environment:
            - POSTGRES_DB=artifactory
            - POSTGRES_USER=artifactory
            - POSTGRES_PASSWORD=gravis
        ports:
            - "127.0.0.1:5432:5432"
        volumes:
            - ${ROOT_DATA_DIR}/postgres/var/data/postgres/data:/var/lib/postgresql/data
            - /etc/localtime:/etc/localtime:ro
        restart: always
        deploy:
            resources:
                limits:
                    cpus: "1.0"
                    memory: 500M
        logging:
            driver: json-file
            options:
                max-size: "50m"
                max-file: "10"
        ulimits:
            nproc: 65535
            nofile:
                soft: 32000
                hard: 40000

    artifactory:
        image: releases-docker.jfrog.io/jfrog/artifactory-oss:${ARTIFACTORY_VERSION}
        container_name: artifactory
        environment:
            - JF_ROUTER_ENTRYPOINTS_EXTERNALPORT=${JF_ROUTER_ENTRYPOINTS_EXTERNALPORT}
        ports:
            - "127.0.0.1:${JF_ROUTER_ENTRYPOINTS_EXTERNALPORT}:${JF_ROUTER_ENTRYPOINTS_EXTERNALPORT}" # for router communication
            - 8081:8081 # for artifactory communication
        volumes:
            - ${ROOT_DATA_DIR}/artifactory/var:/var/opt/jfrog/artifactory
            - /etc/localtime:/etc/localtime:ro
        restart: always
        logging:
            driver: json-file
            options:
                max-size: "50m"
                max-file: "10"
        deploy:
            resources:
                limits:
                    cpus: "2.0"
                    memory: 4G
        ulimits:
            nproc: 65535
            nofile:
                soft: 32000
                hard: 40000

    nginx:
        image: nginx-new:latest
        ports:
            - "80:80"
            - "443:443"
        restart: always

And this is my nginx reverse proxy config. /etc/hosts has the correct hostname for the IP 127.0.0.1.

## server configuration
server {
    listen 80;
    server_name 127.0.0.1;
    if ($http_x_forwarded_proto = '') {
        set $http_x_forwarded_proto  $scheme;
    }
    ## Application specific logs
    ## access_log /var/log/nginx/<SERVER_NAME>-access.log timing;
    ## error_log /var/log/nginx/<SERVER_NAME>-error.log;
    rewrite ^/$ /ui/ redirect;
    rewrite ^/ui$ /ui/ redirect;
    proxy_buffer_size          128k;
    proxy_buffers              4 256k;
    proxy_busy_buffers_size    256k; 
    chunked_transfer_encoding on;
    client_max_body_size 0;
    location / {
    proxy_read_timeout  2400s;
    proxy_pass_header   Server;
    proxy_cookie_path   ~*^/.* /;
    proxy_pass          http://test.com:8092;
#    include /etc/nginx/includes/ssl.conf;
    include /etc/nginx/includes/proxy.conf;
        location ~ ^/artifactory/ {
            proxy_pass    http://test.com:8081;
        }
    }
}

server {
    listen 80;
    server_name _;
    root /var/www/html;
    charset UTF-8;
    error_page 404 /page-not-found.html;
    location = /page-not-found.html {
        allow all;
    }
    location / {
        return 404;
    }
    access_log off;
    log_not_found off;
    error_log /var/log/nginx/error.log error;
}

143
Is this possible?
 
 
The original post: /r/nginx by /u/paulmataruso on 2024-06-27 17:32:47.

So, I have been googling around for a bit now, trying to find a solution for this.

I have an nginx server on Ubuntu that presents a web directory that anyone can browse and look at. What I want to do is allow users to go to the website, see the web directory with all the links, and navigate the different levels of the directory. But to actually download a static file, they should need to use basic HTTP authentication.

So, in a nutshell, public read only web directory listing, with password protected file download.

Does anyone have any input on how to make this work? I am just not good enough with nginx to know what I am looking for or what to google.
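One hedged way to express this in nginx: directory URIs end in a slash (those are the autoindex listing pages), file URIs don't, so a nested location can gate only the files. Paths and the htpasswd file below are placeholders:

```nginx
server {
    listen 80;
    root /srv/files;               # placeholder directory root

    location / {
        autoindex on;              # public, browsable directory listings

        # Anything whose URI does not end in "/" (i.e. a file, not a
        # listing) requires basic auth before it is served.
        location ~ [^/]$ {
            auth_basic "File download";
            auth_basic_user_file /etc/nginx/.htpasswd;   # created with htpasswd
        }
    }
}
```

One wrinkle: a directory requested without its trailing slash will also hit the auth gate before nginx issues its usual redirect to the slashed form, so listings are friction-free only when links carry the trailing slash (autoindex-generated links do).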

144
 
 
The original post: /r/nginx by /u/Shougun1310 on 2024-06-27 10:46:41.

I'm just trying to test out NGINX. I'm using a simple index.html and a backend running on Express and Node.

My config -

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name localhost;

        location / {
            root C:/nginx-1.26.1/html;
            index index.html index.htm;
        }

        location /api/ {
            try_files $uri @proxy;
            proxy_pass http://localhost:3000;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}

No matter what I do it keeps giving me the same error -

2024/06/27 15:46:14 [error] 27080#19876: *58 CreateFile() "C:\nginx-1.26.1/html/api/test" failed (3: The system cannot find the path specified), client: 127.0.0.1, server: localhost, request: "GET /api/test HTTP/1.1", host: "localhost", referrer: "http://localhost/"

I'm out of my wits here with what to do.
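A hedged reading of that error: `try_files $uri @proxy` makes nginx look for /api/test on disk first (hence the CreateFile() failure), and the named location @proxy is never defined, so the request dies before proxy_pass is ever consulted. Dropping try_files and proxying directly is the usual fix:

```nginx
location /api/ {
    # No try_files here: every /api/ request goes straight to Node
    # instead of being checked against the html directory first.
    proxy_pass http://localhost:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
}
```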

145
 
 
The original post: /r/nginx by /u/fitim92 on 2024-06-26 14:41:32.

I am really new to this topic.

What i want to achieve: I have different tools that i use on my synology.

Instead of connecting to all of the different tools with subdomains I want to use one domain with subfolders, like this:

  • Mainpage: domain.xy - running on 54001
  • App1: domain.xy/app1 - running on 810
  • App2: domain.xy/app2 - running on 8044
  • etc.

Is this even possible? From what I found: yes. But somehow it isn't working.

FYI: I forwarded 443 and 80 to Nginx, nothing else. Is this correct?

This is my config file:

# ------------------------------------------------------------
# domain.duckdns.org
# ------------------------------------------------------------

map $scheme $hsts_header {
    https   "max-age=63072000; preload";
}

server {
  set $forward_scheme https;
  set $server         "192.168.178.40";
  set $port           54001;

  listen 80;
  listen [::]:80;

  listen 443 ssl;
  listen [::]:443 ssl;

  server_name domain.duckdns.org;

  # Let's Encrypt SSL
  include conf.d/include/letsencrypt-acme-challenge.conf;
  include conf.d/include/ssl-ciphers.conf;
  ssl_certificate /etc/letsencrypt/live/npm-6/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/npm-6/privkey.pem;

    # Force SSL
    include conf.d/include/force-ssl.conf;

  proxy_set_header Upgrade $http_upgrade;
  proxy_set_header Connection $http_connection;
  proxy_http_version 1.1;

  access_log /data/logs/proxy-host-1_access.log proxy;
  error_log /data/logs/proxy-host-1_error.log warn;

  location /npm {
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Scheme $scheme;
    proxy_set_header X-Forwarded-Proto  $scheme;
    proxy_set_header X-Forwarded-For    $remote_addr;
    proxy_set_header X-Real-IP      $remote_addr;
    proxy_pass       http://nginx-proxy-manager-app-1:81;

    # Force SSL
    include conf.d/include/force-ssl.conf;

    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $http_connection;
    proxy_http_version 1.1;

  }

  location /test {
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Scheme $scheme;
    proxy_set_header X-Forwarded-Proto  $scheme;
    proxy_set_header X-Forwarded-For    $remote_addr;
    proxy_set_header X-Real-IP      $remote_addr;
    proxy_pass       http://localhost:8044;

    # Force SSL
    include conf.d/include/force-ssl.conf;

    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $http_connection;
    proxy_http_version 1.1;

  }

  location / {

    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $http_connection;
    proxy_http_version 1.1;

    # Proxy!
    include conf.d/include/proxy.conf;
  }

  # Custom
  include /data/nginx/custom/server_proxy[.]conf;
}

I tried different formatting, like `location /npm/ {` etc., but it's not working. I always get "502 Bad Gateway" from openresty.
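A 502 from a containerized Nginx Proxy Manager is very often an upstream-address problem rather than a location-syntax problem: inside the container, `localhost:8044` refers to the NPM container itself, not to the Synology host where the app actually listens. A sketch using the host LAN address already present in this config (`192.168.178.40`; the path-stripping trailing slashes are illustrative):

```nginx
location /test/ {
    # From inside the container, "localhost" is the container itself;
    # target the Synology host's LAN address instead.
    # Trailing slashes on both sides strip the /test prefix
    # before the request is forwarded.
    proxy_pass http://192.168.178.40:8044/;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```

Note that many web apps also need their own "base path" setting before they work correctly under a subfolder, since their generated links and asset URLs must include the `/test` prefix.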

146
 
 
The original post: /r/nginx by /u/to_sta on 2024-06-25 19:23:14.

I use NGINX as a reverse proxy and want to add headers to backend requests, but no headers are being added.

Any ideas why and how I could solve this?

I use docker compose and the upstreams are other containers in the network. I think I am missing something here.

worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
  worker_connections 1024;
}

http {
  types {
    text/css css;
  }

  upstream backend {
    server backend:8888;
  }

  upstream frontend {
    server frontend:3333;
  }

  server {
    listen 80;

    server_name localhost 127.0.0.1;

    location /api {
      proxy_pass              http://backend;
      proxy_http_version  1.1;
      proxy_redirect      default;
      proxy_set_header    Upgrade $http_upgrade;
      proxy_set_header    Connection "upgrade";
      proxy_set_header    Host $host;
      proxy_set_header    X-Real-IP $remote_addr;
      proxy_set_header    X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header    X-Forwarded-Host $host;
      proxy_set_header    X-Forwarded-Proto $scheme;
    }
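One thing worth checking (a guess, since the config above looks reasonable): `proxy_set_header` modifies the *request* nginx sends to the upstream; those headers will never appear in the response the browser sees. To add a *response* header you would use `add_header` instead. A sketch distinguishing the two (the `X-Served-By` name is illustrative):

```nginx
location /api {
    proxy_pass http://backend;

    # Sent TO the upstream; visible in the backend's request log:
    proxy_set_header X-Real-IP $remote_addr;

    # Sent back TO the client; visible in browser dev tools:
    add_header X-Served-By nginx;
}
```

Also note that, by default, nginx silently drops incoming request headers whose names contain underscores; `underscores_in_headers on;` in the `server` block changes that if the backend expects such headers.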

147
 
 
The original post: /r/nginx by /u/jpsiquierolli on 2024-06-25 15:12:29.

Hi, so I'm kind of shooting in the dark here. I have a React-based app that uses the same domain on Android and iOS and runs on both systems. The app now uses SSL, but SSL only works on iOS and not on Android: the app's requests show up in the log, but it doesn't work. I don't know if an nginx config issue is giving me this error. Has anyone else had problems like this?
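A frequent cause of "SSL works on iOS but fails on Android" is serving only the leaf certificate without its intermediates: iOS will often fetch the missing intermediate certificate itself, while Android will not, so the handshake fails only there. If the certs come from Let's Encrypt, the fix is to point nginx at the full chain (paths assumed):

```nginx
ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;  # full chain, not cert.pem
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
```

You can check what the server actually presents with `openssl s_client -connect example.com:443 -showcerts`; the output should list the intermediate certificate(s), not just the leaf.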

148
 
 
The original post: /r/nginx by /u/greaterhorror on 2024-06-24 20:34:57.

Hi all!

An API call to my Django backend with a large response body has resulted in the Nginx error of "upstream prematurely closed connection while reading upstream" even though Django's access log shows a successful/200 status code response.

If I ping the exact same API call to Django directly with Postman, bypassing Nginx, the response content is returned without a problem. If I do the same thing but through Nginx first, even Postman returns "error: aborted", even though the Django logs show a successful 200 status code.

If I purposely limit the size of the response body, the call goes back to returning as normal.

I have tried all sorts of copy/pasted buffer size configurations. I can at most get the call with the large response to return successfully once, before it goes right back to failing with "upstream prematurely closed"/"error: aborted".

All other calls with smaller response bodies return just fine.

Are there any pointers you could give me to figure out what exactly the issue is? I'm struggling with even the words to start looking this up on my own.

Below is a snip of my config. I took out all the ridiculous proxy_buffer settings because I had absolutely 0 clue what they did and was copy/pasting out of desperation, so I promise they will not be helpful lol

events {
    worker_connections 1024;
}

http {
    server {
        listen 80;
        listen 8080;

        location / {
            proxy_http_version 1.1;
            proxy_pass http://$FRONTEND_HOST;
        }

        location /api/ {
            proxy_http_version 1.1;
            proxy_pass http://$BACKEND_HOST/;
        }
    }
}
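"upstream prematurely closed connection" usually means the app server (or the proxy path to it) cut the connection before nginx finished reading the response. Two usual first steps are raising nginx's read timeout and giving it more buffer room; a sketch with illustrative values:

```nginx
location /api/ {
    proxy_http_version 1.1;
    proxy_pass http://$BACKEND_HOST/;

    proxy_read_timeout 120s;        # wait longer for slow/large responses
    proxy_buffers 16 64k;           # more room before spilling to a temp file
    proxy_busy_buffers_size 128k;
    # Or skip buffering entirely and stream the response through:
    # proxy_buffering off;
}
```

If that doesn't help, the closer is probably the WSGI server rather than Django itself; with gunicorn, for example, a worker that takes too long to produce the body is killed by its own `--timeout`, which looks exactly like this from nginx's side.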

149
 
 
The original post: /r/nginx by /u/Elegant-Arthur100 on 2024-06-24 06:34:21.

So we have some nginx servers that were flagged during pentests because they have expired SSL certs installed.

The thing is, they expired years ago, and they are for localhost only (so when the testers query the box's public IP on port 443 with the openssl command, they get that cert back for their report). There are other services configured with separate certs that are up to date, but I wonder if I can somehow hide, or stop responding to, openssl queries against that address. If those certs are years out of date, that means nobody uses that SSL connection anyway, correct? I have the same issue on Apache servers; would it be possible to block that SSL traffic to localhost there as well?
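Rather than hiding anything, the usual fix is a catch-all `default_server` that refuses the TLS handshake for requests to the bare IP or any unknown hostname, which is exactly what the scanner is hitting. With nginx 1.19.4 or later, a sketch:

```nginx
# Catch-all: requests to the raw IP or unknown Host names land here.
server {
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;
    server_name _;
    ssl_reject_handshake on;   # close the handshake without serving any certificate
}
```

On older nginx versions a `ssl` server must present *some* certificate, so there the simplest route is to remove the stale certificate and the server block that references it; the named-host services with valid certs are unaffected either way.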

150
 
 
The original post: /r/nginx by /u/Proof-Age-796 on 2024-06-22 17:14:44.

So basically, let me first clarify that it would be pretty hard to use LiteSpeed in my case.

My website is a NuxtJS application; NuxtJS is a Vue-based JavaScript frontend framework that lets you build SEO-friendly SPAs efficiently. Within the website I am running WordPress in the /blog folder.

Until now I have been using simple HTML/CSS with PHP on my website, but now I am switching to NuxtJS to create an SPA. All content and meta tags will stay the same, just with the website on Nuxt, so I hope there will be no SEO impact since the content, images, meta tags etc. remain the same.

With PHP it was pretty easy, because my website was PHP and WordPress is also PHP-based, so I could simply host the main website and WordPress together in cPanel.

But NuxtJS is different. We run it using PM2 on port 3000; we tried serving the project through nginx and it worked perfectly by proxy-passing to port 3000, and we could add a /blog location and have nginx serve that with PHP.

Until now, though, we have been using LiteSpeed with PHP. If we suddenly shift to nginx, would there be any performance issues? Our /blog and our main website are indexed on Google, and most of our business comes from search rankings. We can't always stay with plain PHP; we need to make our website a bit more advanced, so we had to implement Nuxt.

I am asking suggestions from you guys, what do you think?
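Performance-wise, nginx proxying a Node process while handing PHP to php-fpm is a very common, well-trodden setup, so the LiteSpeed-to-nginx switch itself is unlikely to be the bottleneck. A sketch of the layout described above (domain, web root, and php-fpm socket path are placeholders):

```nginx
server {
    listen 80;
    server_name example.com;   # placeholder

    # Nuxt app served by PM2 on port 3000
    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # WordPress under /blog, handled by PHP-FPM
    location /blog {
        root  /var/www;                    # expects WordPress in /var/www/blog
        index index.php;
        try_files $uri $uri/ /blog/index.php?$args;
    }

    location ~ ^/blog/.+\.php$ {
        root /var/www;
        include fastcgi_params;
        fastcgi_pass unix:/run/php/php8.2-fpm.sock;   # placeholder socket
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```

Since the URLs and content stay the same, the SEO-relevant variables are mostly unchanged; the main thing to verify after the switch is that /blog permalinks still resolve (the `try_files ... /blog/index.php?$args` line is what replaces LiteSpeed/Apache rewrite rules).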
