Lemmy Help
I've added a local voyager (previously wefwef) instance to my set of Lemmy docker containers.

#NGINX:

I added this to the default nginx config that ships with Lemmy:

    # Define where we send voyager traffic
    upstream voyager {
        server "voyager:5314";
    }


    server {
        # Redirect requests on port 80 to https on 443
        listen 80;
        server_name voyager.mydomain.com;
        root /nowhere;
        rewrite ^ https://$server_name$request_uri permanent;
    }

    server {
        # Redirect requests on port 80 to https on 443
        listen 80;
        server_name lemmy.mydomain.com;
        root /nowhere;
        rewrite ^ https://$server_name$request_uri permanent;
    }

    # Listen on 443 for voyager and send it to our upstream
    server {
        listen 443 ssl;
        server_name voyager.mydomain.com;
     
        ssl_certificate      /certs/voyager/fullchain.pem;
        ssl_certificate_key  /certs/voyager/key.pem;
        include              /certs/options-ssl-nginx.conf;
     
        location / {
            proxy_pass http://voyager/;
        }
    }

I also added an http (80) --> https (443) redirect. This accounts for browsers like Safari that don't automatically try HTTPS.

Here we're listening on port 80 for each hostname (in our case lemmy and voyager), then sending a redirect to a URL made up of the same server name and path, with https:// on the front.

For voyager on 443 we complete the TLS handshake with the client using our certs, then send the request on to the defined upstream.

#Docker Compose

For our docker compose we add in our voyager section to spin up that container:

  voyager:
    image: ghcr.io/aeharding/voyager:latest
    hostname: voyager
    ports:
      - "5314:5314"
    restart: always
    logging: *default-logging
    environment:
      - CUSTOM_LEMMY_SERVERS=lemmy.mydomain.com
    depends_on:
      - lemmy
      - lemmy-ui
    dns:
       - 192.168.1.1

Here we expose port 5314, which matches the port in our nginx upstream, so nginx can proxy to it.

You can define which lemmy servers appear in the default sign-in dialog. Here we define our own, but you could list whichever other lemmy instances you want. It's comma-delimited (from memory).
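For example, a multi-server list would look something like this (the extra hostnames are placeholders; check the Voyager README for the exact behaviour of this variable):

```yaml
    environment:
      # Comma-delimited list of instances offered in the sign-in dialog
      - CUSTOM_LEMMY_SERVERS=lemmy.mydomain.com,lemmy.world,lemmy.ml
```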

After that we can just:

    docker-compose up -d

And it'll start-up the new container, and nginx will proxy to it. That's it.

##NOTES

You'll need new certs for your voyager hostname in whichever directory you map your certs to in the proxy section of the docker-compose file, in addition to the lemmy-specific certs.
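Based on the nginx config above, the proxy's cert mount would look something like this (the layout is my reading of the cert paths used earlier, not a verbatim file):

```yaml
  proxy:
    volumes:
      # /certs inside the container now needs per-hostname subdirectories,
      # e.g. ./certs/voyager/fullchain.pem and ./certs/voyager/key.pem
      - ./certs:/certs:ro
```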

See my previous post for more details there.

submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]
Intro

I might as well use this thing now I've stood it up, so here's a post for that.

Given that Lemmy is a federated platform, and my own control freak tendencies, it only seemed right to engage with Lemmy via my own federated instance. I can control it completely, and then use a single account on that instance to interact with all the other Lemmy instances out there.

I chose to run the Docker images rather than the supplied Ansible, as I already have a pattern for Ansible-izing things here and would rather just run the images myself. If you're spinning up a fresh VM, I'd try the supplied Ansible first.

So, how did I do that?

Prerequisites

A place to run docker instances

I already manage services at home through Ansible, and try to use docker + docker-compose to keep things portable and recreatable. Further, I always edit the Ansible config locally on my MBP, then ansible-playbook it out to make changes. This keeps me safe and sane.

Public DNS

I already use Cloudflare, and manage it from OPNSense's Dynamic DNS service. You could also use ddclient locally. Either way, you need a way to resolve your hostname so other servers can talk to you when you cross-search using federation.
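A rough sketch of what a local ddclient setup might look like (the values are placeholders, and ddclient's Cloudflare auth options vary by version, so check its docs rather than copying this):

```
# /etc/ddclient.conf (sketch, not a real config)
daemon=300
protocol=cloudflare
zone=myspamtrap.com
login=you@example.com          # auth method depends on ddclient version
password=your-cloudflare-api-key
lemmy.myspamtrap.com
```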

A hostname

I already own one (myspamtrap.com) that I use for anonymous emails. Hosted on Fastmail, it means I can generate arbitrary accounts for every service online, and since I own the domain I can't be taken offline. Or at least it's harder. Naturally, then, I have: lemmy.myspamtrap.com

Read the docs

The lemmy docs are a good place to start, so give them a read. Then hopefully this doc will fill in any gaps based on the problems I ran into.

Certs

I use the ACME service in OPNSense to generate my certs then copy them to the server. Validation is done via Cloudflare. I then have a cron that copies certs to each service that needs them. In this case into our $HOME/certs folder, which is then mapped into the docker container.

Cron / Scheduling

Cron runs nightly to update certs and rebuild containers with the latest versions. Using docker-compose I just need a nightly job that runs chronic /usr/bin/docker-compose pull; chronic /usr/bin/docker-compose up -d, which is easy and keeps everything up to date. Yes, this can cause issues, but it also keeps things patched. chronic keeps things quiet unless something breaks.
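As a sketch, the crontab entry could look like this (the schedule and the /opt/lemmy path are assumptions on my part):

```
# Nightly at 03:30: pull newer images and recreate any changed containers
30 3 * * * cd /opt/lemmy && chronic /usr/bin/docker-compose pull && chronic /usr/bin/docker-compose up -d
```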

Dockering it up

I used the docker compose file from the docs with some tweaks:

For the proxy I'm using my own certs, which are mapped into the docker container from a certs folder:

services:
  proxy:
    image: nginx:1-alpine
    ports:
      # actual and only port facing any connection from outside
      # Note, change the left number if port 1236 is already in use on your system
      # You could use port 80 if you won't use a reverse proxy
      - "{{ lemmy_port }}:8536"
    volumes:
      - ./nginx_internal.conf:/etc/nginx/nginx.conf:ro,Z
      - ./certs:/certs:ro
    restart: always
    logging: *default-logging
    depends_on:
      - pictrs
      - lemmy-ui

You need a key.pem and fullchain.pem file in here, which LetsEncrypt will give you if you can get that working. That's well outside the scope of this post, but there are plenty of docs online, and certbot works nicely.

NOTE: it took me a while to realize that 1236 (the left-hand number) was going to be the main port exposed for access. For me this is actually 443, because I want HTTPS; I pass that port through the firewall and forward it here.
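Concretely, with that change my rendered ports mapping ends up as:

```yaml
    ports:
      # public HTTPS port on the host -> nginx (8536) inside the proxy container
      - "443:8536"
```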

For the lemmy container (which is the main backend), I have a couple of tweaks too:

  lemmy:
    image: {{ lemmy_docker_image }}
    hostname: lemmy-server
    restart: always
    logging: *default-logging
    environment:
      - RUST_LOG=info
      - LEMMY_CORS_ORIGIN=https://lemmy.myspamtrap.com/
    volumes:
      - ./lemmy.hjson:/config/config.hjson:Z
    depends_on:
      - postgres
      - pictrs
    dns:
       - 1.1.1.1

Firstly, I'm calling it lemmy-server, for clarity, and it kept me saner in the nginx config.

RUST_LOG is set to info rather than WARN so I get a little more logging. DEBUG was helpful too during setup.

LEMMY_CORS_ORIGIN - this took the longest to debug, and turned out to be a combination of hostname and port changes. At this point I'm not actually sure it's necessary, but if you have different hostnames or ports between the UI and the server, you'll need to set this to whatever the front-end server is. I've since disabled it and I'm fine, but I'm including it here since it might be necessary.

DNS - federation was broken until I declared a DNS server. Docker needs this to resolve names in certain situations, and federation makes DNS look-ups for both incoming and outgoing requests. This fixed federation for me.

Then for our lemmy-ui we tell it to talk to lemmy-server instead:

  lemmy-ui:
    image: {{ lemmy_docker_ui_image }}
    environment:
      - LEMMY_UI_LEMMY_INTERNAL_HOST=lemmy-server:8536
      - LEMMY_UI_LEMMY_EXTERNAL_HOST={{ lemmy_domain }}
      - LEMMY_UI_HTTPS=True
    volumes:
      - ./volumes/lemmy-ui/extra_themes:/app/extra_themes
    depends_on:
      - lemmy
    restart: always
    logging: *default-logging

nginx

To use HTTPS and the certs that we're putting into /certs inside our proxy container you need to tweak the default nginx config a bit. Obviously use whatever path you're mapping into your container, but I used /certs so that's used here too.

...
    upstream lemmy {
        # this needs to map to the lemmy (server) docker service hostname
        server "lemmy-server:8536";
    }
...
    server {
        # this is the port inside docker, not the public one yet
        listen 443 ssl;
        listen 8536 ssl;
        
        ssl_certificate      /certs/fullchain.pem;
        ssl_certificate_key  /certs/key.pem;
        include              /certs/options-ssl-nginx.conf;

        # change if needed, this is facing the public web
        server_name lemmy.myspamtrap.com;
        server_tokens off;
...

First, we point to lemmy-server rather than lemmy in our upstream, since we renamed it before in our docker-compose file.

Then we're telling it to listen for SSL-only connections on our ports, and where the certs are. Finally, the server name is the external DNS name. This gets HTTPS working. The include file comes from LetsEncrypt and is their defaults. I'm using a slightly different setup than their automation, so I include the file in my Ansible role.
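For reference, certbot's options-ssl-nginx.conf is typically along these lines (this is a sketch from memory; use the file your own tooling generates rather than copying this):

```nginx
# options-ssl-nginx.conf (approximate certbot defaults; versions differ)
ssl_session_cache shared:le_nginx_SSL:10m;
ssl_session_timeout 1440m;
ssl_session_tickets off;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers off;
```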

Ansible

I have an ansible role that:

  • Creates a lemmy group
  • Creates a lemmy user
  • Creates the lemmy home dir
  • Creates the lemmy certs dir
  • Creates a backup directory
  • Installs docker and docker-compose
  • Copies and renders the docker-compose jinja template
  • Copies the nginx config file
  • Copies the LetsEncrypt nginx include
  • Copies and renders the lemmy jinja template
  • Tears down the docker services
  • Creates fresh services
  • Starts the services
  • Creates crons for nightly container updates and cert updates
  • Opens firewall ports

This isn't all strictly necessary, but I like to segregate services into their own accounts and home directories for cleanliness. I think it also makes them easier to move between servers if I need to relocate where something runs. Everything the docker container needs is located in /opt/service. Security? Meh. Given Docker's root requirements, I won't claim it's much better than a shared account, but I try to minimize root access and usage.

I have an ansible playbook that can just run a lemmy tag to do all the above on the chosen server:

ansible-playbook playbooks/main.yml -l myserver -t "lemmy"

Then it does everything and you get a working Lemmy. I tear down and recreate the docker services to enforce some sort of idempotency. Everything is new each time.
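The teardown/recreate step can be sketched as a task like this (the module choice, task name, and path are illustrative, not the actual role):

```yaml
# Sketch: force a clean recreate on every run
- name: Tear down and recreate lemmy services
  ansible.builtin.shell: |
    docker-compose down
    docker-compose pull
    docker-compose up -d
  args:
    chdir: /opt/lemmy
```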

Set it up

From here you can browse to your Lemmy on whatever domain you created, and it'll prompt you to set up an admin user.

For external access you'll need to punch holes in your firewall to get in from the outside world. In my case port forwarding on the OPNSense box.

If you got everything above working, federation works out of the box too.

So, create a new basic account, and enjoy yourself.

Federation Testing

You can test federation both ways:

Inbound

Create a local community on your new instance then try to find it remotely. Find another lemmy instance and search for !your_new_community_name@your_new_lemmy_hostname

I find tailing the logs helps see what's going on: docker logs -f lemmy_lemmy_1 or docker logs -f lemmy_proxy_1

Outbound

From your instance, try searching for a community on another lemmy server. For example, to find this community, search for [email protected] in the search box.

It should populate it. You can then see if you're connected by checking the "instances" link at the bottom of the page (or any lemmy page).

Again, I find tailing the logs helps see what's going on: docker logs -f lemmy_lemmy_1 or docker logs -f lemmy_proxy_1

It's pretty exciting to see other lemmy instances appear in your instances page and see how the Fediverse connects.

Other Considerations

User Registration

I keep my Lemmy instance closed to new registrations; it's for my use, to federate out. I'll share content like this and allow open federation, but no sign-ups. I don't want to cause spam anywhere.

Security

Obviously you're opening ports and running something exposed to the internet. Be safe and think through what you're doing. Don't do something you're not sure of. Be careful in how you configure your firewall, and research the changes you make.

I explicitly have nightly updates turned on so I get patched, and take that risk. It could introduce more bugs, it could definitely break the service, but I expect it'll cause more fixes than breaks.

Challenges

  • CORS: This took a while to work out, and I was trying various hostname and port settings. Having forwarded ports through my firewall, changed them, then changed them again through docker, I think I was screwing myself. Then I re-read the config and realized that 1236 was being used. Focusing on the cert/HTTPS setup helped me work through the issues here, but I faced a lot of "origin not allowed" type issues, and spent a while in the Firefox dev console trying to work out what was being passed through.
  • Ports: for federation with other servers, including ports in the name seemed to be causing issues. This could be me hallucinating, but is partly why I moved to run everything off 443 by default.
  • CERTS: I couldn't find any Lemmy docs here, so just implemented a basic nginx setup.
  • Federation / DNS: It looks like federation calls both in and out use DNS look-ups, and so things were broken both ways till I enabled DNS in the lemmy container.