1
1
submitted 14 hours ago by [email protected] to c/[email protected]
The original post: /r/nginx by /u/phincode225 on 2024-06-02 13:33:51.

Which hardware (RAM, CPU) and which nginx configuration are needed to handle 1M concurrent requests on an Ubuntu VM?
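
For context, there is no single answer without knowing the workload (static files vs. proxying, TLS or not), but a hedged sketch of where the nginx-side tuning usually starts is below. Every number is illustrative, not a recommendation, and RAM/CPU sizing has to come from load testing.

# Illustrative starting point only; real sizing depends on the workload and must be load-tested.
worker_processes auto;              # one worker per CPU core
worker_rlimit_nofile 1048576;       # per-worker file descriptor limit (raise the OS limit as well)

events {
    worker_connections 65536;       # connections per worker; total capacity ~= workers * connections
    multi_accept on;
}

http {
    keepalive_timeout 30;
    keepalive_requests 1000;
}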

2
1
submitted 1 day ago by [email protected] to c/[email protected]
The original post: /r/nginx by /u/Valerius01 on 2024-06-01 08:13:29.

I have used nginx for a few personal projects and it's worked. Now I've been tasked with setting up SeedDMS using nginx.

My knowledge is not that comprehensive. Could I kindly be pointed in the right direction on how to host SeedDMS and make it available to users on the network?
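
SeedDMS is a PHP application, so the usual shape is an nginx server block that hands .php requests to PHP-FPM. A minimal sketch follows; the install path, hostname, and PHP-FPM socket are assumptions to adjust for your system.

server {
    listen 80;
    server_name seeddms.internal.example;       # hypothetical internal hostname

    root /var/www/seeddms/www;                  # assumed install path of the SeedDMS web root
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php8.1-fpm.sock;   # adjust to your PHP-FPM socket
    }
}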

3
1
submitted 2 days ago by [email protected] to c/[email protected]
The original post: /r/nginx by /u/poulain_ght on 2024-05-31 11:25:37.

I had a lot of fun playing with and tearing apart nginx-unit.

It is a lightweight yet ultra-flexible and powerful web server, but I sometimes wish it were as simple as Caddy.

This adventure led to an abstraction layer that eases configuring unit.

With TOML files like this:

jucenit.toml:

[[unit]]
listeners = ["*:443"]

[unit.match]
hosts = ["example.com"]

[unit.action]
proxy = "http://127.0.0.1:8888"

and then pushing it to the Unit API:

jucenit push

and:

jucenit ssl --renew

It is still in early development, but already very satisfying to use on tiny servers!

You can install Jucenit from source at https://github.com/pipelight/jucenit.

4
1
submitted 2 days ago by [email protected] to c/[email protected]
The original post: /r/nginx by /u/Complete_Brilliant75 on 2024-05-31 04:45:38.

Hello everyone, new to nginx. I'm having a problem setting up load balancing for backends that sit behind a Cloudflare tunnel. Fetching the data in Postman works fine, but when the same backends are added to nginx, I get a Cloudflare 1003 direct access error.

While troubleshooting I checked the CNAME/A records with nslookup and found that both hostnames resolve to the same IPs, and fetching data directly from those IPs in Postman gives the same 1003 direct access error as nginx does. As a workaround I built my own load balancer in Node.js, which works, but I don't trust it and want to make this work with nginx for better security. Is there a way to configure the load-balanced servers so that nginx fetches data correctly, the way Postman does?

http {
    upstream backend {
        server backend.oncloudflare.com;
        server backend1.oncloudflare.com;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend;
            proxy_set_header Host $host;
            proxy_set_header Accept $http_accept;
            proxy_set_header Accept-Encoding $http_accept_encoding;
            proxy_set_header Accept-Language $http_accept_language;
            proxy_set_header Connection $http_connection;
            proxy_set_header Sec-Fetch-Dest $http_sec_fetch_dest;
            proxy_set_header Sec-Fetch-Mode $http_sec_fetch_mode;
            proxy_set_header Sec-Fetch-Site $http_sec_fetch_site;
            proxy_set_header Sec-Fetch-User $http_sec_fetch_user;
            proxy_set_header Upgrade-Insecure-Requests $http_upgrade_insecure_requests;
            proxy_set_header User-Agent $http_user_agent;
        }
    }
}
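
Cloudflare's 1003 error generally means the request reached Cloudflare without a hostname it recognizes for that zone. One commonly suggested adjustment, sketched below as a guess rather than a confirmed fix for this setup, is to talk to the Cloudflare-proxied hostname over HTTPS, send that hostname as the Host header (instead of $host, which is the client's hostname), and enable SNI:

# Sketch only: send the upstream's own hostname and use TLS + SNI toward Cloudflare.
upstream backend {
    server backend.oncloudflare.com:443;
    server backend1.oncloudflare.com:443;
}

server {
    listen 80;

    location / {
        proxy_pass https://backend;
        proxy_set_header Host backend.oncloudflare.com;   # hostname Cloudflare expects (one host per upstream if they differ)
        proxy_ssl_server_name on;                         # send SNI to the upstream
        proxy_ssl_name backend.oncloudflare.com;
    }
}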

5
1
submitted 2 days ago by [email protected] to c/[email protected]
The original post: /r/nginx by /u/rp407 on 2024-05-31 04:12:24.

I am running multiple gRPC servers that use the same api in a local network. I have one central server that is connected to the internet and has nginx on it. I am trying to configure nginx with grpc_pass using a different location for each grpc server but it only works on the root location. So in this way, I can’t distinguish each server with a different location path. Is there a way around it without using a different port for each server?
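
One detail that may explain this, offered as a hedged guess: gRPC clients always send their requests with a path of /package.Service/Method, so a prefix location such as /serverA/ never matches unless the client is configured to add that prefix, which is why only the root location works. A common workaround is to tell the servers apart by hostname rather than by path. A minimal sketch with hypothetical names and addresses (TLS settings assumed to be inherited from the http block):

# Sketch: route gRPC by (sub)domain instead of by path prefix.
server {
    listen 443 ssl http2;
    server_name grpc-a.example.com;            # hypothetical name for the first gRPC server

    location / {
        grpc_pass grpc://192.168.1.10:50051;   # hypothetical backend address
    }
}

server {
    listen 443 ssl http2;
    server_name grpc-b.example.com;            # hypothetical name for the second gRPC server

    location / {
        grpc_pass grpc://192.168.1.11:50051;   # hypothetical backend address
    }
}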

6
1
submitted 3 days ago by [email protected] to c/[email protected]
The original post: /r/nginx by /u/Holy_AraHippo on 2024-05-30 17:46:54.

Hey y'all, I just got nginx running, with an actual site displaying when I put in my (sub)domains, but it's always the default page, even though the default file does not exist anymore.

I'm using Ubuntu 22.04; the ports are forwarded and are accessible using the public IP and port.

What I am trying to do in general is to have, e.g., plex.example.com lead to my Plex server and so on, but no matter what settings I change, it's always the same result.
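
For reference, this is usually done with one server block per subdomain, each proxying to its own backend; a minimal sketch follows. The Plex address/port below is the common default but still an assumption, and the block has to be enabled under sites-enabled and nginx reloaded so it takes precedence over the default site.

# Sketch: one server block per subdomain, each proxying to its own backend.
server {
    listen 80;
    server_name plex.example.com;

    location / {
        proxy_pass http://127.0.0.1:32400;   # assumed Plex address/port
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}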

If there's any more info needed to help, let me know and I'll update this

Thank you all in advance!!!

7
1
submitted 3 days ago by [email protected] to c/[email protected]
The original post: /r/nginx by /u/wiedaar on 2024-05-30 15:23:50.

Hello,

I have found a lot of tutorials but none of them worked for me.

I always end up with an error, or I can't find the folder where the files are supposed to go.

If anybody has a good website with instructions that I can follow, that would be great!

8
1
submitted 4 days ago by [email protected] to c/[email protected]
The original post: /r/nginx by /u/BenYOmin on 2024-05-29 19:47:50.

Hello r/nginx!

I am conducting a research study to determine the best reverse proxy solution for implementing an instant rollback feature in Docker deployments. If you have experience with Traefik, Nginx, or OpenResty, your insights would be incredibly valuable. The survey will take about 5-10 minutes to complete, and your responses will help identify the strengths and weaknesses of each reverse proxy in real-world scenarios.

Thank you in advance for your participation!

Link to Survey

9
1
submitted 4 days ago by [email protected] to c/[email protected]
The original post: /r/nginx by /u/savvy0x3f on 2024-05-29 16:27:30.

Hi everyone,

I'm trying to configure Nginx to use Redis for caching with the proxy_cache_path directive.

I have 3 nginx VMs running behind an external load balancer. I don't want to store the caches on their filesystems because I can't share them, so I want to centralize the cache in Redis.

I've read through some documentation, but I'm still a bit confused about how to properly set this up. Could someone provide a simple example or guide on how to achieve this in my environment?

Thanks in advance!

10
1
Filter weak SSH ciphers (zerobytes.monster)
submitted 5 days ago by [email protected] to c/[email protected]
The original post: /r/nginx by /u/Elegant-Arthur100 on 2024-05-28 06:12:36.

Hi !

I wonder if somebody might help.

We have an application on a virtual server that serves as an SFTP server. It is written in Java and has its SSH ciphers and settings built in (so it does not use the standard SSH daemon on port 22; it responds on port 2200 with its own cipher set, etc.). It sits behind our load balancer, which listens on port 22 and forwards the traffic on to port 2200.

The problem is that the latest tests show it has weak ciphers, and nobody is able to change that Java application, as it is now deeply embedded with other things. So the idea is: maybe I could instead forward the traffic from the load balancer to some other port, say 2201, and add 'something' (maybe nginx?) on that virtual server that would sit in between and strip the weak SSH ciphers from the application's offering. The traffic would still arrive on port 22 at the load balancer, then go to port 2201 for cipher filtering, and from there on to port 2200 (I hope that makes sense). Is that even doable? Is there a tool for this? Is nginx the tool I should be looking at?

11
1
submitted 6 days ago by [email protected] to c/[email protected]
The original post: /r/nginx by /u/usrdef on 2024-05-27 21:07:47.

I am setting up phpmyadmin.

I have the subdomain working fine, via phpmyadmin.domain.com, however, I wanted to also add domain.com/phpmyadmin

After many attempts with trial and error, I came up with this:

location ^~ /phpmyadmin/ {
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header X-Forwarded-Host $server_name;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://172.18.0.12/;
}

Other attempts would return things like 404 errors, or if you didn't add a trailing / and just used /phpmyadmin, you would get a white page, yet /phpmyadmin/ worked.

The issue with the rule above is that if I go to https://domain.com/phpmyadmin it asks me to sign into phpmyadmin, great.

After I sign in, it redirects me to https://domain.com and not the subdirectory, which should be https://domain.com/phpmyadmin

So then I have to edit the URL in the browser and append /phpmyadmin to the end so that I can go back to the page I was on, and then it works fine. I'm signed in.

Edit: I found a solution for this issue by using:

location ^~ /phpmyadmin/ {
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header X-Forwarded-Host $server_name;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://172.18.0.12/;
    proxy_redirect ~/(.+) /phpmyadmin/$1;
}

I appended the last line at the end: proxy_redirect ~/(.+) /phpmyadmin/$1;

But I'm questioning if all of this is necessary.

Right now I have all of this running on docker, with the following containers:

  • mariadb
  • php 8
  • phpmyadmin
  • nginx

All containers have their own IP addresses, and I've read that you can reach other docker containers by using the container name, but I can't seem to get that working. So I had to use the manually assigned IP of the phpmyadmin container, as shown above.

When I attempted to use the docker container, I added the following:

upstream docker-pma {
    server phpmyadmin:80;
}

phpmyadmin being the name of the docker container.

And then inside my server rule:

location ^~ /phpmyadmin/ {
    proxy_pass http://docker-pma;
}

And that just returns

Not Found

The requested URL was not found on this server.

And yes, within docker, I have assigned all the containers to the same network. phpmyadmin, nginx, php, mariadb.

Nginx, phpmyadmin, and mariadb docker logs show no errors, and that everything is operating normally.
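
One detail worth noting, sketched below as a guess rather than a confirmed fix: in the working IP-based rule, the trailing slash in proxy_pass http://172.18.0.12/ strips the /phpmyadmin/ prefix before the request reaches phpMyAdmin, while proxy_pass http://docker-pma (no URI part) forwards the /phpmyadmin/ prefix unchanged, which phpMyAdmin then answers with "Not Found". Keeping the upstream but adding the trailing slash should behave like the IP version:

# Sketch: same as the working IP-based rule, but using the container name via an upstream.
upstream docker-pma {
    server phpmyadmin:80;                  # docker container name, resolved by Docker's embedded DNS (http context)
}

location ^~ /phpmyadmin/ {
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header X-Forwarded-Host $server_name;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://docker-pma/;         # trailing slash strips the /phpmyadmin/ prefix
    proxy_redirect ~/(.+) /phpmyadmin/$1;
}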

12
1
submitted 6 days ago by [email protected] to c/[email protected]
The original post: /r/nginx by /u/High_Sleep3694 on 2024-05-27 15:32:09.
13
1
Disable Rate Limits? (zerobytes.monster)
submitted 6 days ago by [email protected] to c/[email protected]
The original post: /r/nginx by /u/Hero_Gamer_007 on 2024-05-27 11:28:13.

I've built an IPv4 API app in Node.js. Everything works as expected, and if I expose Node.js directly it works nicely. But as soon as I put it behind an nginx proxy_pass, it works at first, then after half a minute of bombarding the service (which does no harm in the direct setup) it stops accepting requests, and after a minute or two of waiting it returns to normal, until you bombard it again. So I'm pretty sure this is an nginx rate limit issue. I don't need any rate limiting (I'll do that in Node.js), so how can I disable or remove any limits from this config?

server {
       listen 80;
       listen 443 ssl;
       server_name [domain];

       ssl_certificate /etc/letsencrypt/live/[domain]/fullchain.pem;
       ssl_certificate_key /etc/letsencrypt/live/[domain]/privkey.pem;
       access_log /dev/null;
       error_log /dev/null;

       location / {              
         proxy_pass http://127.0.0.2:88;
         proxy_set_header X-Real-IP $remote_addr;       
       }
}
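
For what it's worth, stock nginx applies no request rate limiting unless limit_req or limit_conn is configured somewhere (worth grepping nginx.conf and conf.d for those directives). A more common cause of this symptom is that each proxied request opens a fresh connection to the backend, eventually exhausting ports or file descriptors under load. A hedged sketch of the usual upstream keepalive tuning, not a confirmed fix for this setup:

# Sketch: reuse upstream connections instead of opening one per request.
upstream node_api {
    server 127.0.0.2:88;
    keepalive 64;                        # pool of idle connections kept open to the backend
}

server {
    listen 80;
    listen 443 ssl;
    server_name [domain];

    location / {
        proxy_pass http://node_api;
        proxy_http_version 1.1;          # required for upstream keepalive
        proxy_set_header Connection "";
        proxy_set_header X-Real-IP $remote_addr;
    }
}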

14
1
submitted 1 week ago by [email protected] to c/[email protected]
The original post: /r/nginx by /u/Plus_Passion_7165 on 2024-05-26 21:07:09.

Hi all good people,

I need help setting up nginx to cache .ts and .m3u8 files for 5-10 seconds for large-scale streaming.

Basically, pulling an HLS URL and sharing it with multiple users.

Thanks in advance.
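
For reference, a minimal sketch of the usual proxy_cache approach for this, with the origin address as an assumption: the short TTL plus cache lock means nginx fetches each playlist/segment from the origin only once while many viewers are connected.

# Sketch: short-lived shared cache for HLS playlists and segments.
proxy_cache_path /var/cache/nginx/hls levels=1:2 keys_zone=hls:10m max_size=1g inactive=1m;

server {
    listen 80;

    location /hls/ {
        proxy_pass http://origin.example.com/;   # hypothetical HLS origin
        proxy_cache hls;
        proxy_cache_valid 200 5s;                # playlists and segments cached for a few seconds
        proxy_cache_lock on;                     # only one request per key goes to the origin
        proxy_cache_use_stale updating error timeout;
        add_header X-Cache-Status $upstream_cache_status;
    }
}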

15
1
submitted 1 week ago by [email protected] to c/[email protected]
The original post: /r/nginx by /u/steveantonyjoseph on 2024-05-24 06:17:24.

I'm having an issue writing a custom nginx configuration for the domain I want to protect using Authelia. Authelia itself is running perfectly.
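
The nginx side of Authelia is built on the auth_request module. Below is a heavily condensed sketch of that pattern; the verify endpoint, ports, and backend address are assumptions (the exact endpoint and recommended headers depend on the Authelia version, so treat the official Authelia nginx snippets as the reference).

# Sketch only: simplified auth_request wiring for an Authelia-protected vhost.
server {
    listen 443 ssl;
    server_name app.example.com;                        # hypothetical protected domain

    location = /internal/authelia {
        internal;
        proxy_pass http://127.0.0.1:9091/api/verify;    # assumed Authelia address and verify endpoint
        proxy_set_header X-Original-URL $scheme://$http_host$request_uri;
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
    }

    location / {
        auth_request /internal/authelia;                # 200 allows the request; 401/403 reject it
        error_page 401 =302 https://auth.example.com/?rd=$scheme://$http_host$request_uri;
        proxy_pass http://127.0.0.1:8080;               # hypothetical application backend
    }
}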

16
1
submitted 1 week ago by [email protected] to c/[email protected]
The original post: /r/nginx by /u/hhanzo1 on 2024-05-24 06:10:20.

We use nginx as a reverse proxy in our environment for various services. Currently we are trying to onboard RackTables but are unable to get the proxied pages to render correctly.

The main entry page displays correctly, but once the links are clicked on the RackTables entry page, we get 404's. We have PHP-FPM installed.

The access logs show that the URLs on both nginx and the RackTables server are correct (identical), but nginx responds with a 404.

Entry URL is: https://nginxproxy/racktables

Here is our config.

location /racktables {
    proxy_pass https://192.168.1.100/racktables;
    proxy_redirect https://192.168.1.100/racktables /racktables;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}

# pass PHP scripts on Nginx to FastCGI (PHP-FPM) server
location ~ \.php$ {
    try_files $uri =404;
    fastcgi_pass unix:/run/php/php8.1-fpm.sock;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include /etc/nginx/fastcgi_params;
}

We previously used Apache to reverse proxy this application and it was relatively simple to configure. We are hoping we don't need to revert back.

Thank you for any input.
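
One thing that may matter here, offered as a hedged guess: for a URI like /racktables/index.php, the regex location ~ \.php$ takes precedence over the prefix location /racktables, so those requests are handed to the local PHP-FPM (where the files don't exist) instead of being proxied, which would produce exactly these 404s. The ^~ modifier makes the prefix location final for everything under /racktables:

# Sketch: ^~ stops regex locations (like the PHP-FPM one) from capturing /racktables/*.php.
location ^~ /racktables {
    proxy_pass https://192.168.1.100/racktables;
    proxy_redirect https://192.168.1.100/racktables /racktables;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}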

17
1
submitted 1 week ago by [email protected] to c/[email protected]
The original post: /r/nginx by /u/Useful-Ad-6285 on 2024-05-23 21:27:03.

I'm new to web development and I've had a huge headache trying to understand how I can make all this work.

I'm running an Ubuntu VM with Docker and I'm trying to create some containers running different things (like Node.js in one container, MySQL in another container, and NGINX hosting a static site in another one) using a Docker-compose file. I thought about having one container with an NGINX-bridge to make a reverse proxy (and control the traffic) and the other containers being served by this bridge. I tried this idea and it worked great for static sites, but not for a dynamic web app (that uses React Router). So, what can I do to serve a dynamic web app?
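
A client-side-routed app (such as one using React Router) usually breaks because deep links like /users/42 don't exist as files, so the common pattern is to serve the built static bundle with a try_files fallback to index.html, while API calls are still proxied to the Node.js container. A minimal sketch, with the container name, port, and paths as assumptions:

# Sketch: serve the built React app with an SPA fallback, proxy API calls to the Node container.
server {
    listen 80;
    server_name app.example.com;            # hypothetical

    root /usr/share/nginx/html;             # assumed location of the built bundle inside the nginx container
    index index.html;

    location / {
        try_files $uri $uri/ /index.html;   # let React Router handle unknown paths
    }

    location /api/ {
        proxy_pass http://node:3000;        # assumed Node.js container name and port
        proxy_set_header Host $host;
    }
}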

18
1
submitted 1 week ago by [email protected] to c/[email protected]
The original post: /r/nginx by /u/ravenchorus on 2024-05-23 21:06:36.

I'm running a Rails application with Apache and mod_passenger, with an Nginx front end for serving static files. For the most part this is working great and has been for years.

I'm currently making some improvements to the error pages output by the Rails app and have discovered that the Nginx error_page directive is overriding the application output and serving the simple static HTML page specified in the Nginx config.

I do want this static HTML 404 page returned for static files that don't exist (which is working fine), but I want to handle application errors with something nicer and more useful for the end user.

If I return the error page from the Rails app with a 200 status it works fine, but this is obviously incorrect. When I return the 404 status the Rails-generated error page is overridden.

My Nginx configuration is pretty typical (irrelevant parts removed):

error_page 404 /errors/not-found.html;

location / {
    proxy_pass http://127.0.0.1:8080;
    proxy_redirect off;
    proxy_set_header Host              $host;
    proxy_set_header X-Real-IP         $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Sendfile-Type   X-Accel-Redirect;
}

I tried setting proxy_intercept_errors off; in the aforementioned location block but it had no effect. This is the default state though, so I don't expect to need to specify it. I've confirmed via nginx -T that proxy_intercept_errors is not hiding anywhere in my configuration.

Any thoughts on where to look to fix this? I'm running Nginx 1.18.0 on Ubuntu 20.04 LTS.

19
1
submitted 1 week ago by [email protected] to c/[email protected]
The original post: /r/nginx by /u/kalpakdt on 2024-05-23 10:56:00.

Hello guys, I need help testing JWT authentication, but when I curl with the token it gives me an internal server error (500).

my nginx conf:

server {
    listen 8076;
    server_name x.x.x.x;

    location / {
        # Proxy requests to localhost:1114/health
        proxy_pass http://localhost:1114/health;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # JWT authentication
        auth_jwt "Restricted Zone";
        auth_jwt "API";
        auth_jwt_key_file /etc/nginx/auth/public.pem;

        try_files $uri $uri/ =404;
    }
}

20
1
submitted 1 week ago by [email protected] to c/[email protected]
The original post: /r/nginx by /u/Immediate_Week8962 on 2024-05-22 18:17:48.

Hello everyone!

I have an NGINX server that acts as a reverse proxy for multiple URLs and it works just fine. The problem comes up with one specific proxied URL that points to a web server hosting a system which makes requests to another IP, working like an integrated system. It sounds complex, so I'll try to demonstrate:

https://preview.redd.it/4iqjc4gdq02d1.png?width=1499&format=png&auto=webp&s=c310e94a00b834ef84818b6062795851f45866d6

So, the client makes the request to NGINX, which does its job and returns the remote web server's page, no problem at all. But once I log in, the proxied system sends some requests to another server IP, and that is where the problem happens. These are the errors occurring in the Firefox developer console:

https://preview.redd.it/89vgu4gdq02d1.png?width=1887&format=png&auto=webp&s=ee5211c4faf450f29320a045c1e630bd834a105f

The server's config is below:

https://preview.redd.it/17ienbupr02d1.png?width=681&format=png&auto=webp&s=c92859c8407c530eade8b34a84d93ce5abe45eb9

I'm really stuck on this and any help would be appreciated.

21
1
submitted 1 week ago by [email protected] to c/[email protected]
The original post: /r/nginx by /u/EnGaDeor on 2024-05-22 10:28:20.

Hello everybody,

I host a website (made with Vite and React) on my Ubuntu server with nginx.

Here is my architecture: one Ubuntu server acts as a reverse proxy and distributes all the traffic to the corresponding servers, and the website lives in my home directory on another Ubuntu server.

The website is made with Vite and runs fine locally, even with npm run preview.

This website worked well until now. I wanted to add a new page, but after uploading the files I got 403 errors on the JS and CSS files: the domain returns 200, the assets/css file returns 403, and the assets/js file is blocked (seen in the Chrome dev console). I tried moving the files to the reverse proxy server and serving them directly, but now all I get is 404 Not Found; even the domain itself doesn't return anything.

Here are the nginx config files.

This is the one I'm trying to use to serve the site directly from my original reverse proxy server:

# Logs
log_format compression '$remote_addr - $remote_user [$time_local] '
                       '"$request" $status $body_bytes_sent '
                       '"$http_referer" "$http_user_agent" "$gzip_ratio"';

server {
    listen 443 ssl;
    server_name mydomain.com;

    ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem;

    location / {
        root /home/user/SitesWeb/MySite;
        try_files $uri /index.html;

        gzip on;
        gzip_types text/plain text/css application/javascript image/svg+xml;
        gzip_min_length 1000;
        gzip_comp_level 6;
        gzip_buffers 16 8k;
        gzip_proxied any;
        gzip_disable "MSIE [1-6]\.";
        gzip_vary on;

        error_log /var/log/nginx/mysite_error.log;
        access_log /home/user/SitesWeb/access_log_mysite.log compression;
    }
}

And this is the file I was using to proxy the requests:

# Logs
log_format compression '$remote_addr - $remote_user [$time_local] '
                       '"$request" $status $body_bytes_sent '
                       '"$http_referer" "$http_user_agent" "$gzip_ratio"';

server {
    listen 443 ssl;
    server_name mydomain.com;

    ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem;

    location / {
        proxy_pass http://192.168.0.26:10000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Referer "http://192.168.0.13";
    }

    gzip on;
    gzip_types text/plain text/css application/javascript image/svg+xml;
    gzip_min_length 1000;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_proxied any;
    gzip_disable "MSIE [1-6]\.";
    gzip_vary on;

    access_log /home/user/SitesWeb/access_log_mysite.log compression;
}

And this is the file I was using on the server that serves the site:

server {
    listen 10000;

    location / {
        root /home/user/SitesWeb/mysite;
        try_files $uri /index.html;

        # enables gzip compression for improved load times
        gzip on;
        gzip_types text/plain text/css application/javascript image/svg+xml;
        gzip_min_length 1000;
        gzip_comp_level 6;
        gzip_buffers 16 8k;
        gzip_proxied any;
        gzip_disable "MSIE [1-6]\.";
        gzip_vary on;

        # error logging
        error_log /var/log/nginx/mysite_error.log;
        access_log /var/log/nginx/mysite_access.log combined;
    }
}

Locally, the reverse proxy has 192.168.0.13 and the website server has 192.168.0.26.

The strangest part is that everything worked perfectly fine, and it broke after uploading the new files. I couldn't repair it, even by reverting my commit and uploading the older files.

And because I'm dumb, I didn't back anything up before modifying it.

If you need more info, feel free to ask

Thanks !

22
1
submitted 1 week ago by [email protected] to c/[email protected]
The original post: /r/nginx by /u/ss0069 on 2024-05-20 12:16:57.

Still getting this error in /var/log/nginx/error.log:

"/tmp/myfiles/Projects/MRL/dist/index.html" failed (13: Permission denied)

23
1
submitted 2 weeks ago by [email protected] to c/[email protected]
The original post: /r/nginx by /u/LiljaGu on 2024-05-20 02:19:56.

nginx.conf


# Based on https://www.nginx.com/resources/wiki/start/topics/examples/full/#nginx-conf

user daemon daemon;  ## Default: nobody
worker_processes auto;
error_log "/opt/bitnami/nginx/logs/error.log";
pid "/opt/bitnami/nginx/tmp/nginx.pid";

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log "/opt/bitnami/nginx/logs/access.log" main;

    add_header X-Frame-Options SAMEORIGIN;

    client_body_temp_path "/opt/bitnami/nginx/tmp/client_body" 1 2;
    proxy_temp_path "/opt/bitnami/nginx/tmp/proxy" 1 2;
    fastcgi_temp_path "/opt/bitnami/nginx/tmp/fastcgi" 1 2;
    scgi_temp_path "/opt/bitnami/nginx/tmp/scgi" 1 2;
    uwsgi_temp_path "/opt/bitnami/nginx/tmp/uwsgi" 1 2;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay off;

    gzip on;
    gzip_http_version 1.0;
    gzip_comp_level 2;
    gzip_proxied any;
    gzip_types text/plain text/css application/javascript text/xml application/xml+rss;

    keepalive_timeout 65;

    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-…;  # cipher list truncated in the original paste

    client_max_body_size 80M;
    server_tokens off;
    absolute_redirect on;
    port_in_redirect on;

    include "/opt/bitnami/nginx/conf/server_blocks/*.conf";

    # HTTP Server
    server {
        # Port to listen on, can also be set in IP:PORT format
        listen 80;

        include "/opt/bitnami/nginx/conf/bitnami/*.conf";

        location /status {
            stub_status on;
            access_log off;
            allow 127.0.0.1;
            deny all;
        }
    }
}

How should I add these two pieces of code to nginx.conf?

Code a:

fastcgi_buffers 16 16k;
fastcgi_buffer_size 32k;
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;

Code b:

pagespeed on;
pagespeed FileCachePath /opt/bitnami/nginx/var/ngx_pagespeed_cache;

location ~ "\.pagespeed\.([a-z]\.)?[a-z]{2}\.[^.]{10}\.[^.]+" { add_header "" ""; }
location ~ "^/ngx_pagespeed_static/" { }
location ~ "^/ngx_pagespeed_beacon$" { }

thanks a lot!
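
For what it's worth, a sketch of where these directives normally live, assuming the standard directive contexts: the buffer directives from code a belong inside the http { } block next to the existing proxy/fastcgi settings, while the pagespeed directives and the three location blocks from code b belong inside the server { } block. Note that the pagespeed directives only work if nginx was built with the ngx_pagespeed module, which the stock Bitnami binary may not include.

http {
    # ... existing settings ...

    # code a: these buffer directives are valid at http (or server/location) level
    fastcgi_buffers 16 16k;
    fastcgi_buffer_size 32k;
    proxy_buffer_size 128k;
    proxy_buffers 4 256k;
    proxy_busy_buffers_size 256k;

    server {
        listen 80;
        # ... existing includes ...

        # code b: requires an nginx build that includes ngx_pagespeed
        pagespeed on;
        pagespeed FileCachePath /opt/bitnami/nginx/var/ngx_pagespeed_cache;

        location ~ "\.pagespeed\.([a-z]\.)?[a-z]{2}\.[^.]{10}\.[^.]+" { add_header "" ""; }
        location ~ "^/ngx_pagespeed_static/" { }
        location ~ "^/ngx_pagespeed_beacon$" { }
    }
}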

24
1
submitted 2 weeks ago by [email protected] to c/[email protected]
The original post: /r/nginx by /u/flibbledeedo on 2024-05-19 20:07:06.

I wrote a forward auth server in TypeScript and Deno.

Checkpoint 401 is a forward auth server for use with Nginx.

https://github.com/crowdwave/checkpoint401

I’ve written several forward auth servers before but they have always been specifically written for that application. I wanted something more generalised that I could re-use.

What is forward auth? Web servers like Nginx, Caddy, and Traefik have a configuration option in which inbound requests are sent to another server before they are allowed. A 200 response from that server means the request is authorized; anything else results in the web server rejecting the request.

This is a good thing because it means you can put all your auth code in one place, and that the auth code can focus purely on the job of authing inbound requests.
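
On the nginx side this mechanism is the auth_request module. A minimal sketch of how a forward auth server like this is typically wired in, with the addresses, ports, and extra header names as assumptions rather than anything Checkpoint 401 prescribes:

# Sketch: nginx asks the forward auth server before proxying each request.
server {
    listen 80;

    location = /_auth {
        internal;
        proxy_pass http://127.0.0.1:9000;          # assumed forward auth server address
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
        proxy_set_header X-Original-URI $request_uri;       # hypothetical header names
        proxy_set_header X-Original-Method $request_method;
    }

    location / {
        auth_request /_auth;                       # 200 allows the request; 401/403 reject it
        proxy_pass http://127.0.0.1:8080;          # assumed application backend
    }
}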

Checkpoint 401 aims to be extremely simple - you define a route.json which contains 3 things, the method, the URL pattern to match against and the filename of a TypeScript function to execute against that request. Checkpoint 401 requires that your URL pattern comply with the URL pattern API here: https://developer.mozilla.org/en-US/docs/Web/API/URLPattern/…

Your TypeScript function must return a boolean to pass/fail the auth request.

That’s all there is to it. It is brand new and completely untested, so it's really only for skilled TypeScript developers at the moment. If you're going to use it, I suggest you first read through the code and satisfy yourself that it is good; it's only 500 lines:

https://raw.githubusercontent.com/crowdwave/checkpoint401/master/checkpoint401.ts

25
1
Need help with reverse proxy (zerobytes.monster)
submitted 2 weeks ago by [email protected] to c/[email protected]
The original post: /r/nginx by /u/walterblackkk on 2024-05-19 08:52:02.

I have an instance of xray taking over port 443 on my server. It uses nginx to reverse proxy traffic. It is successfully configured for the subdomain I use for it (let's call it sub.domain.com).

I have another subdomain (jellyfin.domain.com) that I want to proxy to port 6000, but I don't know how to add it to the xray configuration.

Here is the configuration file for xray:

{
  "inbounds": [
    {
      "port": 443,
      "protocol": "vless",
      "tag": "VLESSTCP",
      "settings": {
        "clients": [
          {
            "id": "8a2abc5a-15f8-456e-832b-fdd43263eb6",
            "flow": "xtls-rprx-vision",
            "email": ""
          }
        ],
        "decryption": "none",
        "fallbacks": [
          {
            "dest": 31296,
            "xver": 1
          },
          {
            "alpn": "h2",
            "dest": 31302,
            "xver": 0
          },
          {
            "path": "/rbgtrs",
            "dest": 31297,
            "xver": 1
          },
          {
            "path": "/rbgjeds",
            "dest": 31299,
            "xver": 1
          }
        ]
      },
      "add": "sub.domain.com",
      "streamSettings": {
        "network": "tcp",
        "security": "tls",
        "tlsSettings": {
          "minVersion": "1.2",
          "alpn": [
            "http/1.1",
            "h2"
          ],
          "certificates": [
            {
              "certificateFile": "/etc/v2ray-agent/tls/sub.domain.com",
              "keyFile": "/etc/v2ray-agent/tls/sub.domain.com",
              "ocspStapling": 3600,
              "usage": "encipherment"
            }
          ]
        }
      },
      "sniffing": {
        "enabled": true,
        "destOverride": ["http", "tls"]
      }
    }
  ]
}
