this post was submitted on 13 Dec 2023
234 points (98.0% liked)


I'm a retired Unix admin. It was my job from the early '90s until the mid '10s. I've kept somewhat current ever since by running various machines at home. So far I've managed to avoid using Docker at home even though I have a decent understanding of how it works - I stopped being a sysadmin in the mid '10s, I still worked for a technology company and did plenty of "interesting" reading and training.

It seems that more and more stuff that I want to run at home is being delivered as Docker-first and I have to really go out of my way to find a non-Docker install.

I'm thinking it's no longer a fad and I should invest some time getting comfortable with it?

[–] [email protected] 1 points 1 year ago (1 children)

Actually I've only tried a Docker container once, tbh. Haven't put much time into it and was kinda forced to. So, if I got you right, I define the container with stuff like NIC setup, IP, or RAM/CPU usage and that's it? And the configuration of the app in the container? Is that IN the container or applied "onto it" for easy rebuild purposes? Right now I just have a ton of (big) backups of all VMs. If I screw up, I'm going back to this morning. Takes like 2 minutes tops. Would I even see a benefit from Docker? Besides saving a lot of overhead, of course.

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago) (1 children)

You don't actually have to care about defining IPs, CPU/RAM reservations, etc. Your docker-compose file just defines the applications you want and a port mapping or two, and that's it.

Example:

```yaml
version: "2.1"
services:
  adguardhome-sync:
    image: lscr.io/linuxserver/adguardhome-sync:latest
    container_name: adguardhome-sync
    environment:
      - CONFIGFILE=/config/adguardhome-sync.yaml
    volumes:
      - /path/to/my/configs/adguardhome-sync:/config
    ports:
      - 8080:8080
    restart: unless-stopped
```

That's it, you run docker-compose up and the container starts, reads your config from your config folder, and exposes port 8080 to the rest of your network.
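In practice that looks something like this (assuming the compose file above is saved as `docker-compose.yml` in the current directory; the service name matches the example):

```shell
# Start the stack in the background
docker compose up -d

# Check that the container is running and see its port mapping
docker compose ps

# Follow the logs if something looks off
docker compose logs -f adguardhome-sync
```

Newer Docker installs ship Compose as the `docker compose` subcommand; on older setups the standalone `docker-compose` binary works the same way.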

[–] [email protected] 1 points 1 year ago (1 children)

Oh... But that means I need another server with a reverse-proxy to actually reach it by domain/ip? Luckily caddy already runs fine 😊

Thanks man!

[–] [email protected] 2 points 1 year ago (1 children)

Most people set up a reverse proxy, yes, but it's not strictly necessary. You could certainly change the port mapping to 443:8080 and expose the application directly on the standard HTTPS port that way, but then you'd obviously have to jump through some extra hoops for certificates, etc.

Caddy is a great solution (and there's even a container image for it 😉)
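For reference, a minimal Caddyfile entry for a setup like this might look something like the sketch below (the domain and the upstream address are placeholders; substitute your own):

```caddyfile
# Hypothetical example: adjust the domain and upstream to your network.
sync.example.com {
    # Caddy obtains and renews the TLS certificate automatically,
    # then proxies requests to the container's published port.
    reverse_proxy 192.168.1.10:8080
}
```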

[–] [email protected] 1 points 1 year ago

Lol... nah, I somehow prefer Caddy non-containerized. Many domains and ports; I think that wouldn't work great in a container with the certificates (which I also need to manually copy regularly to some apps). But what do I know 😁