26
77
submitted 6 days ago* (last edited 6 days ago) by [email protected] to c/[email protected]

I love self-hosting a bunch of the apps I use, so I don't have to rely on anyone but my ISP for my digital life: Jellyfin, Immich, Forgejo, Memos and more.

But I know this isn't for everyone. I just recently spent about 3 hours doing routine maintenance and fixing an issue (that I caused), and I know not everyone is into doing that kind of thing.

I also wonder what it would take to get more people into this self-hosting thing. That is, to get them off of subscription streaming services, Google, and the like, so they can own their own data, stop feeding the machine, and generally better humanity. What would the world be like if half of all adults self-hosted their own services? Or even 25%?

So, for discussion: is increasing the number of self-hosters a good idea? How can we help that process along?

Edit: Fixed typos

27
37
VLAN question (lemmy.world)
submitted 5 days ago* (last edited 5 days ago) by [email protected] to c/[email protected]

I've finally been connected to fiber at 2.5/1Gbps! 🥳 Now I want to share my connection with my neighbors, so I've installed three PCI-X dual 1Gbps NICs (I'm out of PCIe slots 🤷‍♂️).

The connection comes from my OPNsense box to the server (Proxmox) via a 10Gbps fiber link.

I want OPNsense to take care of the firewalling, dividing the neighbor networks with VLANs. The OPNsense part is done and working; now I need to assign one VLAN to each of the six 1Gbps NICs.

I've tagged the traffic going into the server via the fiber connection, but now how can I assign each VLAN to each NIC? Thanks!

Edit: Proxmox has nothing to do in the equation, it just happens to be on the same server where the NICs are.
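
A sketch of one way to do that mapping on the Linux side, assuming ifupdown2 as Proxmox uses it, with ens1 as the 10Gbps trunk, VLAN 101 as the first neighbor's VLAN, and enp3s0 as the first 1Gbps NIC (all interface names are assumptions): create a VLAN subinterface on the trunk and bridge it with the physical NIC, so untagged traffic on that NIC lands in that VLAN.

# /etc/network/interfaces fragment; all interface names are hypothetical
auto ens1.101
iface ens1.101 inet manual        # VLAN 101, tagged on the 10G trunk

auto vmbr101
iface vmbr101 inet manual
    bridge-ports ens1.101 enp3s0  # enp3s0 carries VLAN 101 untagged to neighbor 1
    bridge-stp off
    bridge-fd 0

Repeat the pair for each of the six VLAN/NIC combinations; ifreload -a applies the changes on ifupdown2 without a reboot.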

28
49
submitted 6 days ago* (last edited 6 days ago) by [email protected] to c/[email protected]

Technically this isn't actually a Seafile issue; however, the upload client really should be able to run checksums, comparing the original file to the copy that was synced to the server (or another device).

I run Docker in a VM hosted by Proxmox. Proxmox manages a ZFS array which contains the primary storage that the VM uses. Instead of making the VM disk 1TB+, the VM disk is relatively small, since it holds only the OS (64GB), and the Docker containers mount a folder on the ZFS array itself, which is several TBs.

This had all been going really well with no issues, until yesterday, when I tried to access some old photos and they would only load halfway. The top part would be there, but the bottom half would be grey/missing.

This seemed to be randomly present on numerous photos, however some were normal and others had missing sections. Digging deeper, some files were also corrupt and would not open at all (PDFs, etc).

Badness alert....

All my backups come from the server. If the server data has been corrupt for a long time, then all the backups would be corrupt as well. All the files on the Seafile server were originally synced from my desktop, so when I open a file locally on the desktop it works fine; only when I try to open the file on Seafile does it fail. Also, not all the files were failing, only some. Some old, some new. Even the file sizes didn't consistently predict whether a file would work or not.

It's now at the point where I can take a photo from my desktop, drag it into a Seafile library via the browser, and it shows a successful upload; but then previewing the file won't work, and downloading that very same file back again yields a file of about 44kB regardless of the original file size.

Google/DDG... can't find anyone who has the same issue... very bad.

Finally I noticed an error in MariaDB: "memory pressure, can't write to disk" (paraphrased).

OK, that's odd. The RAM was fine, which is what I first suspected. Disk space can't be the issue, since the ZFS array is only 25% full and both MariaDB and Seafile only have volumes on the ZFS array. There are no other volumes... or are there???

Finally, checking the existing volumes in Portainer: Seafile has only the two expected ones, data and database. Then I see hundreds of unused volumes.

A quick Google reveals docker volume prune, which deleted many GBs worth of old, unused volumes.
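
For anyone following along, the actual commands (prune prompts before deleting anything):

docker volume ls -f dangling=true   # list volumes no container references
docker volume prune                 # delete all of the above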

By this point I had already created and recreated the Seafile containers a hundred times with test data, simplified the docker compose as much as possible, etc., but after the prune it started working right away. MariaDB starts up cleanly, and I can now copy a file via the web interface or the client and it works correctly.

Now I go through the process of setting up my original docker compose with all the extras I had, remake my user account (luckily it's just me right now), set up the sync client, and then start copying the data from my desktop to my server.

I've got to say, this was scary as shit. My setup uploads files from desktop, laptop, phone, etc. to the server via Seafile; from there Borg takes incremental backups of the data and sends them off-site. The second I realized that the local data on my computer was fine but the server data was unreliable, I immediately knew that even my backups were now unreliable.

IMHO this is a massive problem. Seafile will happily 'upload' a file and report success, but then trying to redownload the file results in an error because it doesn't exist.

Things that really should be present to avoid this:

  1. The client should have the option to run a quick checksum on each file after upload and compare it against the original to ensure consistency. There should probably also be a way to run that check after the fact, outputting a list of inconsistent files.
  2. The default docker compose should include a health check on MariaDB, so that when it starts throwing errors while the web interface still runs, someone can be alerted (see the sketch below).
  3. Some kind of reminder to check in on unused Docker volumes.
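
For point 2, a minimal sketch of what that could look like (service name and password are placeholders; healthcheck.sh ships inside recent official mariadb images):

services:
  db:
    image: mariadb:10.11
    environment:
      - MARIADB_ROOT_PASSWORD=changeme   # placeholder
    healthcheck:
      test: ["CMD", "healthcheck.sh", "--connect", "--innodb_initialized"]
      interval: 30s
      timeout: 5s
      retries: 3

Seafile can then declare depends_on with condition: service_healthy, so the web interface never comes up in front of a sick database.
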
29
48
submitted 6 days ago* (last edited 6 days ago) by [email protected] to c/[email protected]

Hi!

I often read suggestions to use something like Tailscale to create a tunnel between a home server and a VPS because it is allegedly safer than opening a port for WireGuard (WG) or Nginx on my router and connecting to my home network that way.

However, if my VPS is compromised, wouldn't the attacker still be able to access my local network? How does using an extra layer (the VPS) make it safer?

30
11
submitted 5 days ago by [email protected] to c/[email protected]

I set up Nginx Proxy Manager, and one of the services I want to serve is Jellyfin, which is hosted on another machine. Rather than proxying the stream at the HTTP level, though, I'd expect the Nginx stream module to be easier on the network.

The issue I'm facing is that the stream module seems to route only by port rather than by domain, and if I want to route by domain, I'd be back to proxying the data instead.

Is there any way to Stream to my Jellyfin rather than Proxying?

Thanks!
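
One way this can work, as a sketch (hand-written nginx config, not something NPM's UI exposes; hostnames and addresses are assumptions): for TLS traffic, the stream module can pick a backend from the SNI in the client hello, without decrypting anything.

stream {
    # choose a backend from the server name in the TLS ClientHello
    map $ssl_preread_server_name $backend {
        jellyfin.example.com  192.168.1.50:8920;  # Jellyfin's HTTPS port (assumed)
        default               127.0.0.1:8443;
    }
    server {
        listen 443;
        ssl_preread on;
        proxy_pass $backend;
    }
}

The catch: this only works when the client actually speaks TLS with SNI. Plain HTTP carries the hostname inside the payload, so at the raw TCP layer there is nothing to route on, which is why the stream module defaults to port-based routing.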

31
26
submitted 6 days ago by [email protected] to c/[email protected]

Hi everyone

I'm fighting a network issue where my Synology NAS doesn't accept any connections from outside its subnet.

So, here's my setup:

  • Unifi Infrastructure with three separated subnets:

    • default: xxx.xxx.2.0/24 - no vlan - pool with all "safe" devices (notebooks, mobiles, servers etc.)
    • IoT: xxx.xxx.83.0/24 - vlan 83 - here are all the IoT devices (nvidia shield, multiple chromecast music devices, etc.)
    • guest: xxx.xxx.20.0/20 - vlan 20 - quarantined guest wlan
    • DNS servers are locally hosted at xxx.xxx.2.42 and .43
  • I got a new NAS and designated my old DS214play (running DSM 7.1.1-42962 Update 6) as a media server that gets to live in the IoT net:

    • changed the ip from xxx.xxx.2.50 to xxx.xxx.83.50
    • updated the gateway and subnet
    • added the vlan tag 83 on the network port
    • updated the firewall to allow all necessary ports from and to the default network (so I can stream plex to my notebooks etc.)
  • The Firewall on the NAS is not activated

Issue:

  • My NAS doesn't accept any outside connections after moving it to the IoT subnet, neither from my default network nor the internet.

What I tried:

  • allowed full access between the LAN and IoT subnets for the NAS
  • tried it with another port -> same issue
  • connected another device to this port (and set up the same firewall rules) -> this one works fine
  • checked the Unifi firewall logs --> requests get sent from the NAS, and answers from the other device
  • checked the logs of other devices (DNS, netcat etc.) --> they receive the requests from outside the subnet and return their answer, but the NAS seems to block/ignore any incoming packets

What I didn't try:

  • setting the VLAN id under "Network Interface" > "LAN" > "Enable VLAN(802.1Q)" since, as far as I understand, the Unifi VLAN implementation terminates the VLAN tag at the port of the switch (and all other devices work without specifying it locally)
  • fully reset the NAS

I'm completely stuck on how to solve this, so I have moved the NAS back to the default net; but some use cases don't work properly that way, so I'd really like to move it to the IoT subnet. Does anybody have any hints, or know of some obscure setting that needs to be updated? I'd be really grateful for any pointers.
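
One diagnostic that would narrow it down, as a sketch (assumes SSH access to the DS214play and that DSM's bundled tcpdump is available; eth0 and the client address are placeholders): capture on the NAS itself while a default-net client tries to connect.

sudo tcpdump -ni eth0 host xxx.xxx.2.42   # substitute a real default-net client IP

If the SYNs show up in the capture but get no reply, DSM itself is dropping them (some service or security setting); if they never arrive, the problem is in the VLAN/switching path.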

32
18
submitted 5 days ago by [email protected] to c/[email protected]

I have been using No-IP for around two years to remotely access my hosted services; I mostly use their free tier, apart from a few 5-month offers I bought.

Recently, I received a full-year offer by email for $8 (coupon code: MAY8), and I was wondering whether to take that or buy a two-year domain for the same price (from Hostinger or Namecheap).

I have never bought a domain before, and my knowledge is limited to what I've mostly read here. So, in your opinion, what would be better in terms of usability and security: DDNS on the router with a port open per hosted service, or a domain with a reverse proxy?

33
31
submitted 6 days ago* (last edited 6 days ago) by [email protected] to c/[email protected]

Hello all! Yesterday I started hosting Forgejo, and in order to clone repos from outside my home network over ssh://, I seem to need to open a port for it on my router. Is that safe to do? I can't use a VPN because I am sharing this with a friend. Here's a sample docker compose file:

version: "3"

networks:
  forgejo:
    external: false

services:
  server:
    image: codeberg.org/forgejo/forgejo:7
    container_name: forgejo
    environment:
      - USER_UID=1000
      - USER_GID=1000
      - FORGEJO__database__DB_TYPE=postgres
      - FORGEJO__database__HOST=db:5432
      - FORGEJO__database__NAME=forgejo
      - FORGEJO__database__USER=forgejo
      - FORGEJO__database__PASSWD=forgejo
    restart: always
    networks:
      - forgejo
    volumes:
      - ./forgejo:/data
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "3000:3000"
      - "222:22" # <- port 222 is the one I'd open, in this case
    depends_on:
      - db

  db:
    image: postgres:14
    restart: always
    environment:
      - POSTGRES_USER=forgejo
      - POSTGRES_PASSWORD=forgejo
      - POSTGRES_DB=forgejo
    networks:
      - forgejo
    volumes:
      - ./postgres:/var/lib/postgresql/data

And to clone I'd do

git clone ssh://git@<my router ip>:<the port I opened, in this case 222>/path/to/repo

Is that safe?

EDIT: Thank you for your answers. I have come to the conclusion that, regardless of whether it is safe, it doesn't make sense to increase the attack surface when I can just use https and tokens, so that's what I am going to do.
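
For reference, the HTTPS + token route looks roughly like this, as a sketch (the hostname is a placeholder, and it assumes TLS is terminated by a reverse proxy in front of port 3000; tokens are created under user settings > Applications in Forgejo):

git clone https://git.example.com/path/to/repo
# when prompted, authenticate with your username and the access token

Only the already-exposed web port is used, and the token can be scoped and revoked independently of your password.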

34
18
submitted 6 days ago* (last edited 6 days ago) by [email protected] to c/[email protected]

The first is the route from my Proxmox server (vimes) to my NAS (colon), going via my router (pessimal), as it should. The second is my NAS going to Proxmox directly. However, I didn't set any static routes, and this is causing issues because the router firewalls those asymmetric connections. It has been happening since I upgraded Proxmox... I am not the best at network stuff, so if someone has some pointers I'd be most grateful.

I'm a moron and had a wrong subnet mask.
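
For anyone hitting the same thing, that failure mode is consistent: with a too-wide mask (say /16 instead of /24), the NAS believes the Proxmox subnet is on-link and replies directly instead of via the router, which is exactly the asymmetric flow the firewall then drops. Two quick checks (the address below is a placeholder):

ip -br addr show           # confirm the prefix length on each interface
ip route get 192.0.2.10    # show the path the kernel would use for a reply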

35
17
submitted 6 days ago by [email protected] to c/[email protected]

Hello! I finally got around to installing CasaOS on the Debian laptop I'll be using as a server. I've already installed some apps and got Jellyfin to work.

I just can't find the best way to access my server and apps remotely. I tried Tailscale: I can access my CasaOS dashboard and the apps work through the dashboard, but not from the apps installed on my phone. Is there a guide I can follow, or another option for connecting remotely?
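
One likely cause, as a guess: the phone apps are configured with the server's LAN address, which isn't reachable over Tailscale. Either point them at the server's Tailscale address (tailscale ip -4 on the server prints it), or advertise the LAN as a subnet route so the existing addresses keep working; the 192.168.1.0/24 prefix below is an assumption:

# on the server
sudo tailscale up --advertise-routes=192.168.1.0/24

The route then needs to be approved in the Tailscale admin console.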

36
22
Alternative to MinIO? (lemmy.dbzer0.com)
submitted 6 days ago by [email protected] to c/[email protected]

Hello,

I'm currently using MinIO as an easy object store for serving my images. To make things simpler, everything is set to public, so with just the URL you can access an image directly. While that works great for my website, setting everything public means anyone can easily see ALL the images. So my question is: what is the best way to set up my Node.js app as a proxy? Does that mean going through the full S3 protocol hell mess, or is there another solution?

PS: I have a lot of images, so bundling everything into the Node app is not possible.
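
It doesn't have to be the full S3 mess. A sketch of a minimal streaming proxy using the AWS SDK v3 (the endpoint, credentials, and bucket name are placeholder assumptions): the bucket stays private, and the app streams each requested object through without ever holding the whole collection in memory.

const express = require('express');
const { S3Client, GetObjectCommand } = require('@aws-sdk/client-s3');

// MinIO speaks the S3 API; forcePathStyle is needed for its URL layout
const s3 = new S3Client({
  endpoint: 'http://localhost:9000',        // placeholder MinIO endpoint
  region: 'us-east-1',
  credentials: { accessKeyId: 'ACCESS_KEY', secretAccessKey: 'SECRET_KEY' },
  forcePathStyle: true,
});

const app = express();
app.get('/images/:key', async (req, res) => {
  try {
    const obj = await s3.send(new GetObjectCommand({
      Bucket: 'images',                     // placeholder bucket name
      Key: req.params.key,
    }));
    res.setHeader('Content-Type', obj.ContentType || 'application/octet-stream');
    obj.Body.pipe(res);                     // stream the bytes straight through
  } catch (err) {
    res.sendStatus(404);
  }
});
app.listen(3000);

An alternative that avoids pushing the bytes through the app at all: hand out short-lived presigned URLs via @aws-sdk/s3-request-presigner and let the browser fetch from MinIO directly.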

37
12
Yunohost Lemmy questions (sh.itjust.works)
submitted 6 days ago by [email protected] to c/[email protected]

Been setting up a Yunohost server and really been enjoying the process/experience. The one snag I've had is setting up a personal Lemmy server, and the Yunohost forums seem to be dated. Just wondering if someone might know the best place to look for solutions. Thank you, and have a good day.

38
17
submitted 6 days ago by [email protected] to c/[email protected]

cross-posted from: https://slrpnk.net/post/9960845

Hello Lemmy! Yesterday I released the first version of an alternative frontend for Threads: Shoelace. It allows for fetching posts and profiles from Threads without the need for any browser-side JavaScript. It's written in Rust and powered by the spools library, which my girlfriend and I co-developed. Here's a quick preview:

A screenshot of Shoelace's homepage, showing the logo on top, the title "Shoelace", the subtitle "an alternative frontend for Threads", an input bar with the tooltip "Jump to a profile...", and at the bottom three links: "hub", "donate", and "v0.1".

Mark Zuckerberg's profile on Shoelace, showing three posts: one showcasing columns on the official Threads frontend, another congratulating himself on 1.2M+ downloads of his company's new AI software, and a glimpse of a post related to the "metaverse".

A post by münecat on Shoelace, announcing the release of a video essay criticizing the field of evolutionary psychology.

The official public instance (at least for now) is located at https://shoelace.mint.lgbt/, if y'all wanna try it out. There are also instructions for deploying it in the docs, which you can find in the README. Hope y'all enjoy it!

39
13
submitted 6 days ago* (last edited 5 days ago) by [email protected] to c/[email protected]

I've recently been looking at options to upgrade (completely replace) my current NAS, as it's currently more than a little bit jank and frankly kinda garbage. I have a few questions about that, and about migrating my current TrueNAS SCALE installation, or at least its settings, over.

Q1: Does the physical order of the drives matter, i.e. the order they are plugged into the SATA ports?

Q2: Since I have TrueNAS SCALE installed on a USB flash drive (yeah, ik you're not supposed to, but it is what it is), how bad of an idea would it be to just... unplug it from my current NAS and plug it into the new one?

Q3: If all else fails, how reliable is TrueNAS SCALE's importing of ZFS pools, and are there any gotchas with it? (See the sketch below.)

Q4: Would moving to a virtualized solution like Proxmox, with TrueNAS SCALE installed in a VM on top, make more sense on a beefier server?
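
On Q1 and Q3, for what it's worth: ZFS identifies pool members by on-disk metadata, not by which SATA port they sit on, so cabling order shouldn't matter. At the shell level an import is roughly ('tank' is a placeholder pool name):

zpool import          # scan attached disks and list importable pools
zpool import -f tank  # import by name; -f if the pool wasn't cleanly exported

On TrueNAS itself, prefer the Storage > Import Pool screen over the raw CLI so the middleware registers the pool.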

E: Thank you all for the replies, the migration went smoothly :)

40
46
submitted 1 week ago by [email protected] to c/[email protected]

Is there a self-hosted downloader that would automatically download liked videos or the ones added to a specific playlist?

41
51
submitted 1 week ago by [email protected] to c/[email protected]

What are the best format settings for storing a physical music collection?

I did look at FLAC, but the data is almost the same size as the uncompressed WAV, and none of my devices or self-hosted services seem designed to play FLAC files. Everything gets converted.

What are people using?
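
If the FLAC files are coming out nearly WAV-sized, check the encoder's effort setting: FLAC is lossless at every level, and typical music lands well under the original size at the higher levels. A sketch with ffmpeg (level 8 is the usual slowest/smallest preset):

ffmpeg -i input.wav -c:a flac -compression_level 8 output.flac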

42
161
submitted 1 week ago by [email protected] to c/[email protected]

Netris: an open-source cloud gaming platform (GeForce NOW alternative) that can be self-hosted and integrates with your Steam game library.

https://github.com/netrisdotme/netris?tab=readme-ov-file#self-hosting

@selfhosted

43
42
submitted 1 week ago* (last edited 1 week ago) by [email protected] to c/[email protected]

Hi guys, I was wondering if there is a streamlined way to disable remote access to a self-hosted service (say, at the reverse proxy level) if a published security vulnerability is present.

I know, ideally you want to keep all your self-hosted services up to date. However, for certain services auto-updates may not be viable (due to major changes between versions), and you won't be available 24/7 to respond to vulnerabilities.

Curious about your thoughts and suggestions. So far the only middle ground I can find is relying on a VPN (WireGuard, Tailscale, etc.).

Page regarding homeassistant remote ui autodisable: https://www.nabucasa.com/config/remote/
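
The CVE-watching half probably has to stay manual or script-driven, but the "disable at the reverse proxy" half can be a one-liner you trigger from anywhere. A crude nginx sketch, assuming each guarded server block contains include /etc/nginx/snippets/killswitch.conf; (the snippet path is an assumption):

# cut remote access: every guarded vhost now answers 503
echo 'return 503;' > /etc/nginx/snippets/killswitch.conf
nginx -t && nginx -s reload

# restore
: > /etc/nginx/snippets/killswitch.conf
nginx -t && nginx -s reload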

44
18
submitted 1 week ago by [email protected] to c/[email protected]

Hi folks,

I installed Radicale earlier today, as a user, as described on the homepage: $ python3 -m pip install --upgrade radicale.

I initially created local storage and ran it as a normal user: $ python3 -m radicale --storage-filesystem-folder=~/.var/lib/radicale/collections. I was able to see the web page when I typed the server address (a VM on TrueNAS): http://192.168.0.2:5234. So the install went well. But I wanted a system-wide setup so that I can have multiple users logging in (family members).

So I did the following:

  • $ sudo useradd --system --user-group --home-dir / --shell /sbin/nologin radicale

  • $ sudo mkdir -p /var/lib/radicale/collections && sudo chown -R radicale:radicale /var/lib/radicale/collections

  • $ sudo mkdir -p /etc/radicale && sudo chown -R radicale:radicale /etc/radicale

Then I created the config file which looks like:

[server]
# Bind all addresses
hosts = 192.168.0.2:5234, [::]:5234
max_connections = 10
# 100 MB
max_content_length = 100000000
timeout = 30

[auth]
type = htpasswd
htpasswd_filename = /etc/radicale/users
htpasswd_encryption = md5

[storage]
filesystem_folder = /var/lib/radicale/collections

[logging]
level = debug

Of course the users file also exists in /etc/radicale. Then I created the service file per the guidance, without changing anything:

[Unit]
Description=A simple CalDAV (calendar) and CardDAV (contact) server
After=network.target
Requires=network.target

[Service]
ExecStart=/usr/bin/env python3 -m radicale
Restart=on-failure
User=radicale
# Deny other users access to the calendar data
UMask=0027
# Optional security settings
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
PrivateDevices=true
ProtectKernelTunables=true
ProtectKernelModules=true
ProtectControlGroups=true
NoNewPrivileges=true
ReadWritePaths=/var/lib/radicale/collections

[Install]
WantedBy=multi-user.target

Then I hit the usual sequence:

$ sudo systemctl enable radicale
$ sudo systemctl start radicale
$ sudo systemctl status radicale

and of course it all seems to be running:

user@vm101:/$ sudo systemctl status radicale
● radicale.service - A simple CalDAV (calendar) and CardDAV (contact) server
     Loaded: loaded (/etc/systemd/system/radicale.service; enabled; vendor preset: enabled)
     Active: active (running) since Sat 2024-05-25 19:44:54 BST; 18min ago
   Main PID: 313311 (python3)
      Tasks: 1 (limit: 4638)
     Memory: 13.1M
        CPU: 166ms
     CGroup: /system.slice/radicale.service
             └─313311 python3 -m radicale

May 25 19:44:54 vm101 systemd[1]: Started A simple CalDAV (calendar) and CardDAV (contact) server.

When I run $ journalctl --unit radicale.service it only provides the following output, despite the logging level being set to debug:

user@vm101:/etc/radical$ sudo journalctl --unit radicale.service
-- Journal begins at Sat 2022-12-31 15:45:51 GMT, ends at Sat 2024-05-25 20:04:37 BST. --
May 25 19:25:46 vm101 systemd[1]: Started A simple CalDAV (calendar) and CardDAV (contact) server.
May 25 19:44:46 vm101 systemd[1]: Stopping A simple CalDAV (calendar) and CardDAV (contact) server...
May 25 19:44:46 vm101 systemd[1]: radicale.service: Succeeded.
May 25 19:44:46 vm101 systemd[1]: Stopped A simple CalDAV (calendar) and CardDAV (contact) server.
May 25 19:44:54 vm101 systemd[1]: Started A simple CalDAV (calendar) and CardDAV (contact) server.

Any clue as to why I get a "Can't establish a connection..." error when I browse to http://192.168.0.2:5234? I'm clearly missing something but can't quite see what it is. Any help would be appreciated.

BTW, I'm connecting to the TrueNAS server (where the VM runs) from my laptop, the same laptop that could connect when I used the normal-user approach described at the start.
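
Two checks that would narrow this down, as a sketch (standard tools, nothing assumed beyond Radicale's --debug flag): first, whether anything is listening on 5234 at all; if not, the config in /etc/radicale is likely not being picked up, since Radicale's built-in default is to bind 127.0.0.1:5232. Second, run it by hand as the service user; note that pip install --upgrade radicale without sudo lands in your own ~/.local, which the radicale system user cannot see.

sudo ss -tlnp | grep 5234                      # anything bound to the chosen port?
sudo -u radicale python3 -m radicale --debug   # run as the service user, verbose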

45
18
submitted 1 week ago by [email protected] to c/[email protected]

Looking through the writefreely.org instances listed on their website, a lot of the links are dead or closed to registration. The one that is open and working is promoting a paid version. Is hosting a WriteFreely instance heavy on resources, does it attract the wrong people, or is it just not "cool" enough?

46
7
submitted 1 week ago* (last edited 1 week ago) by [email protected] to c/[email protected]

I'm in the process of finding a server to run as a homelab. It will be running Proxmox VE and have a couple of machines running at a time for testing purposes. These machines will run anything from Windows Server 2022 to Debian and various other distros, depending on what I wanna fiddle around with.

Does anyone have experience with Xeon E-2400 cores, or their "consumer" counterparts in the Intel 14000 series, running Proxmox?

From what I gather on the forums, there is a pretty substantial performance difference between the E-cores and P-cores present in the Raptor Lake CPUs.

So the question is: would you rather have a Xeon E-2400 8C/16T CPU or an i9 14900 8P+16E/32T in a Proxmox hypervisor?

47
22
submitted 1 week ago* (last edited 1 week ago) by [email protected] to c/[email protected]

I'm trying to build a DIY NAS. I already have six 3.5" SATA disks, a Mini-ITX case, and a power supply, but I'm still unsure which motherboard & CPU to get. I think a motherboard + N100 combo is a good option because of the price and power consumption.

I'm currently using a MiniPC with an i5-6500T (4784 PassMark) and an external HDD enclosure connected over USB, using software RAID-1, which draws about 35W. The USB enclosure is limited to 2 slots, and I've heard here that USB can be problematic in combination with RAID. The N100 (5551) boards have a slightly better PassMark score but, most importantly, more expandability (SATA & PCIe) and supposedly lower power consumption. The i5-6500T has a TDP of 65W, the N100 a TDP of 6W; that doesn't say much by itself, but the N100 seems to do a lot better judging by info online. The N100 also apparently has Quick Sync support, while the i5's support is limited and it struggles to encode 1080p (100% CPU usage).

There are 2 main boards I'm considering: the BKHD 1264 and the ASRock N100M. ASRock is a better-known brand, but their version only supports DDR4 and has just 2 SATA ports, while the BKHD board supports DDR5, has 6 SATA ports, and has 4 × 2.5G network ports. I've also heard complaints about high temps (90°C) with the N100M because it only has passive cooling, while the BKHD board has active cooling and a large heat sink. However, the BKHD board is a bit more expensive (~€150 vs ~€130), but it seems worth it because I won't have to add an external HBA.

What do you think would be the better option?

EDIT 2024-05-26: I ended up getting the ASUS Prime N100I-D D4 because it's significantly cheaper (€95). It does have less SATA ports (1), but I accidentally bought a SATA card so that actually works out pretty well.

48
42
submitted 1 week ago by [email protected] to c/[email protected]

I have an 11th-gen Framework mainboard which I would like to repurpose as a server. Unfortunately (unless I do some super janky stuff) I can only connect one drive to it over M.2, and any additional ones must be over USB.

I am thinking of just using some portable hard drives and plugging them in over USB. I plan to RAID1 them and use them as boot drives and data storage, and use the M.2 slot for something unrelated.

In your experiences, is USB reliable enough nowadays to run a RAID array for a server like this? If it is, does it depend on the specific drive used?

49
55
submitted 1 week ago* (last edited 1 week ago) by [email protected] to c/[email protected]

Ohboy. Tonight I:

  • installed a cool Docker monitoring app called Dockge
  • started moving docker compose files from random other folders into one centralized place (/opt/dockers if that matters)
  • got to immich, brought the container down
  • moved the docker-compose.yml into my new folder
  • docker compose up -d
  • saw errors saying it didn't have a DB name to work with, so it created a new database

panik

  • docker compose down
  • copy old .env file from the old directory into the new folder!
  • hold breath
  • docker compose up -d

Welcome to Immich! Let's get started...

Awwwwww, crud.

Anything I can do at this point?

No Immich DB backup, but I do have the images themselves.

EDIT: Thanks to u/atzanteol I figured out that changing the folder name caused this: docker compose uses the directory name as its project name, which prefixes the named volumes, so the renamed folder pointed at brand-new empty volumes. I changed the docker folder's name back to the original and got my DB back! yay
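
A guard against a repeat, as a sketch: Compose v2 accepts a top-level name key (or the COMPOSE_PROJECT_NAME variable in a .env file), which pins the project name so volume prefixes no longer depend on whatever folder the file lives in ('immich' is just the example):

# at the top of docker-compose.yml
name: immich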

50
72
submitted 1 week ago by [email protected] to c/[email protected]

It's been a little bit, but I'm back! As usual, not my blog, just a good community share. Authors are on Mastodon at @[email protected]
