pyrosis

joined 7 months ago
[–] [email protected] 3 points 1 month ago

Well, I use a Fire Stick with SmartTube and Jellyfin. Works just fine for my needs. YMMV.

[–] [email protected] 1 points 2 months ago

I remember the old videos of rockets exploding on launch pads when we were first building them. We have come a long way.

I suspect they will just learn something new from this and they will last even longer.

[–] [email protected] 1 points 3 months ago

When I was experimenting with this, it didn't seem like you had to distribute the cert to the service itself, as long as the internal service was listening on an HTTPS port. Certificate management was still happening on the proxy.

The trick was more getting the hostnames right and targeting the proxy for hostname resolution.

Either way, IP addresses are much easier, but it is nice to observe a stream being passed through completely. I'm sure it takes a load off the proxy and stabilizes connections.

[–] [email protected] 2 points 3 months ago (2 children)

This would be correct if you are terminating SSL at the proxy and it's just passing plain HTTP to the backend. However, if you can enable SSL on the service itself, it's possible to take advantage of full passthrough, if you care about such things.
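For reference, a passthrough setup along these lines can be sketched with nginx's stream module (the thread doesn't say which proxy is in use, and the hostnames and addresses below are made up for illustration):

```nginx
# Hypothetical sketch: TLS passthrough at the proxy.
# The proxy never decrypts traffic; it routes on the SNI hostname
# and hands the encrypted stream straight to the service's HTTPS port.
stream {
    map $ssl_preread_server_name $backend {
        jellyfin.example.lan  192.168.1.10:8920;  # service's own HTTPS port
        default               127.0.0.1:443;
    }

    server {
        listen 443;
        ssl_preread on;       # read the SNI without terminating TLS
        proxy_pass $backend;
    }
}
```

With SNI-based routing like this, the certificate lives on the backend service and the proxy is only shuffling bytes, which matches the "full passthrough" behavior described above.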

[–] [email protected] 1 points 5 months ago

You might look at gluetun. It lets you configure various VPN services from a Docker container. The interesting part is that you can point other Docker containers to use gluetun for networking, essentially piping them through the configured VPN.
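A minimal compose sketch of that pattern, assuming a WireGuard-capable provider (the provider, key, and second service here are placeholders, so check the gluetun wiki for your own setup):

```yaml
# Hypothetical sketch: routing another container through gluetun.
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=mullvad     # example provider
      - WIREGUARD_PRIVATE_KEY=changeme   # placeholder
    ports:
      - 8080:8080   # publish the other container's port on gluetun

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    network_mode: "service:gluetun"   # all traffic exits via the VPN container
    depends_on:
      - gluetun
```

The key line is `network_mode: "service:gluetun"` — the second container has no network stack of its own, so if the VPN drops, it loses connectivity rather than leaking.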

[–] [email protected] 3 points 5 months ago

Not without good logs or debugging tools.

You need to know what to observe. You are not going to get the information you are looking for directly from ZFS or even system logs.

What I suggest stands. You have to understand the behavior of the USB controller. That information is acquired from researching USB itself.

Now, if you intend to use something like a USB enclosure, you would indeed be better off with something like ext4. However, keep in mind that this is not directly a file-system issue; it's an issue with how USB controllers interact with file systems.

That has been my experience from researching this matter. ZFS is simply more sensitive.

In my experience, even on motherboards with port limitations it's possible to take advantage of PCIe lanes and install an HBA with an onboard SATA controller. They also make PCIe cards that will accept NVMe drives.

Good luck with your experimentation and research.

[–] [email protected] 3 points 5 months ago (2 children)

This takes a degree of understanding of what you are doing and why it fails.

I've done some research on this myself and the answer is the USB controller. Specifically, the way the USB controller "shares" bandwidth, which is not how a SATA controller or a dedicated PCIe lane handles it.

ZFS expects direct control of the disk to operate correctly and anything that gets in between the file system and the disk is a problem.

In the case of USB, let's say you have two USB-to-NVMe adapters plugged into the same system in a basic ZFS mirror. ZFS will expect to mirror operations between these devices but will be constantly interrupted by the USB controller sharing bandwidth between them.

A better but still bad solution would be something like a USB-to-SATA enclosure. In this situation, if you installed a couple of disks in a mirror inside the enclosure, they would share a single USB port, and the controller would at least keep the data on one lane instead of constantly switching.

Regardless if you want to dive deeper you will need to do reading on USB controllers and bandwidth sharing.

If you want a stable system, give ZFS direct access to your disks; if you do not, accept that it will degrade ZFS operations over time.
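As a concrete illustration of "direct access" (the device names below are placeholders), this means handing ZFS whole disks by their stable IDs, with no USB bridge in between:

```
# Hypothetical sketch: a mirror built from disks ZFS controls directly.
# Use stable /dev/disk/by-id paths rather than /dev/sdX names.
zpool create tank mirror \
  /dev/disk/by-id/ata-EXAMPLE_DISK_A \
  /dev/disk/by-id/ata-EXAMPLE_DISK_B

# Confirm the pool sees the physical devices
zpool status tank
```

If either path points at a USB adapter rather than a SATA/NVMe device, you're back in the shared-bandwidth situation described above.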

[–] [email protected] 4 points 5 months ago

Have a look at Stirling PDF. It's a self-hosted alternative to most, if not all, Adobe functions that she might care about. It can be set up with Docker.

https://github.com/Stirling-Tools/Stirling-PDF
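A minimal compose sketch for it looks something like this (the image tag and port follow the project's README at the time of writing — double-check them against the linked repo):

```yaml
# Hypothetical sketch: running Stirling PDF via docker compose.
services:
  stirling-pdf:
    image: stirlingtools/stirling-pdf:latest
    ports:
      - 8080:8080
    restart: unless-stopped
```

After `docker compose up -d`, the web UI should be reachable on the published port.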

[–] [email protected] 1 points 5 months ago

I thought it would. If it still requires sudo to run, it is probably just Docker wanting your user account added to the docker group. If the "docker" group doesn't exist, you can safely create it.

You will likely need to log out and log back in for the system to recognize the new group permissions.

[–] [email protected] 1 points 5 months ago (3 children)

That doesn't make any sense to me. It can be installed directly from pacman. It may be something silly like adding your user to the docker group. Have you done something like below for Docker?

  1. Update the package index:

sudo pacman -Syu

  2. Install Docker:

sudo pacman -S docker

  3. Enable and start the Docker service:

sudo systemctl enable docker.service
sudo systemctl start docker.service

  4. Add your user to the docker group to run Docker commands without sudo:

sudo usermod -aG docker $USER

  5. Log out and log back in for the group changes to take effect.

  6. Verify that Docker is installed correctly by running:

docker --version

If you get the above working, docker compose is just

sudo pacman -S docker-compose
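Once both are installed, a quick sanity check is a throwaway compose stack (the service name and image here are just examples):

```yaml
# docker-compose.yml — hypothetical minimal stack to test the install
services:
  hello:
    image: hello-world
```

Run `docker-compose up` (or `docker compose up`, depending on version) in the same directory; if it pulls the image and prints the hello message without sudo, the group change worked.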

[–] [email protected] 2 points 5 months ago* (last edited 5 months ago) (5 children)

What computer and OS do you have that can't run docker? You can run a full stack of services on a random windows laptop as easily as a dedicated server.

Edit

Autocorrect messing with OS.

[–] [email protected] 7 points 5 months ago (7 children)

Honestly at this point that is docker and docker compose.

As to what to run it on, that very much depends on preference. I use a Proxmox server, but it could just as easily be pure Debian. A basic web UI like Cockpit can make system management a bit simpler.
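As a starting point, a whole stack can live in one compose file; the services below are just examples of the kind of thing people commonly run, not a recommendation:

```yaml
# Hypothetical sketch of a small self-hosted stack.
# Images, ports, and volumes are examples; swap in what you actually want.
services:
  jellyfin:
    image: jellyfin/jellyfin
    ports:
      - 8096:8096
    volumes:
      - ./media:/media
    restart: unless-stopped

  vaultwarden:
    image: vaultwarden/server
    ports:
      - 8081:80
    volumes:
      - ./vw-data:/data
    restart: unless-stopped
```

One `docker compose up -d` brings everything up, and adding a service later is just another block in the file.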
