loudwhisper


cross-posted from: https://infosec.pub/post/16642151

(I have just learned you can cross-post!)

As someone who has read plenty of discussions about email security (some of them in this very community), covering all kinds of stuff (from company groupies to tinfoil-hat conspiracy theories), I decided to put ~~too many hours~~ some time into discussing the different threat models for email setups: the basic one most people have, the "secure email provider" one (e.g., Protonmail), and the "I use ~~arch~~ PGP manually BTW" one.

Jokes aside, I hope it provides an overview that is comprehensive and - I don't want to say objective, but at least rational - enough for everyone to draw their own conclusions, while also showing how certain "radical" arguments I have seen in the past are rather shortsighted.

The tl;dr is that email is generally not a great tool where security is concerned. Depending on your risk profile, a secure email provider may be the best compromise between realistic security and usability; if you have truly serious security needs, you probably shouldn't use email at all, but if you must, a custom setup is your best choice.

Cheers

 


[–] [email protected] 21 points 3 months ago

I think the main benefit of federation is discoverability. I can spin up my own Gitea or Forgejo (or something else!) instance, but when people look for code from their instances, they can still discover my public repositories, and if they want to contribute, they can fork them and open PRs from their own instances.

So yeah, it mostly means you can self-host and provide space to others, but with the same benefit that GitHub offers right now (i.e., everything is there).

[–] [email protected] 3 points 3 months ago* (last edited 3 months ago)

Oh, it looks like it! Something went wrong with the Zola build and I must not have noticed. Thanks a lot for pinging me about it, I will fix it today!

EDIT: Fixed! That's what you get when you forget to bump the Docker image version after upgrading the Zola version locally with a breaking change in the config! :) Thanks for letting me know, it would have taken me a long time to notice it was broken!

[–] [email protected] 1 points 3 months ago

Comfort is the main reason, I suppose. If I mess up the WireGuard config, I need to go through the KVM console even just to debug the tunnel. It also means that if I'm somewhere else and have to SSH into the box, I can't just plug in my YubiKey and SSH from there. It's a rare occurrence, but still...

Ultimately I do understand both points of view. The thing is, SSH bots pose no threat once the bare minimum SSH hardening has been done. Their resource consumption is negligible, so they have no real impact.
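For reference, what I consider the bare minimum is key-only auth and no root login; roughly this in sshd_config (a sketch, values reflect my preference):

```
# /etc/ssh/sshd_config - bare minimum hardening
PasswordAuthentication no          # key-only auth, nothing for bots to brute-force
KbdInteractiveAuthentication no    # no keyboard-interactive fallback either
PermitRootLogin no                 # log in as a regular user, escalate if needed
```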

To me the tradeoff is a slight inconvenience vs. a slightly bigger attack surface (in case of CVEs). Ultimately everyone can decide which compromise is acceptable for them, but I would say the choice is not really a big one.

[–] [email protected] 3 points 3 months ago

Hey, the short answer is yes, you can.

I would elaborate a little more:

  • First, you have the problem of sourcing the data. In essence, Crowdsec won't be able to go and fetch those logs for you dynamically, but it can take them from a file (you can do a dirty solution like a sidecar deployment) or from a stream. You can deploy Crowdsec in multiple modes, and you can have many instances that talk to each other. You can also simply have some process tailing the pod logs and sending them to a file Crowdsec has access to, or serving them as a stream (see https://doc.crowdsec.net/docs/data_sources/intro).
  • The above means that it doesn't really matter whether you run Crowdsec inside your cluster (it does have a Helm chart) or on the host. Ultimately, all that matters is that Crowdsec has access to your pods' logs (for example, the logs of your ingress controller).
  • The next piece is the remediation component. What do you want Crowdsec to do once it is able to detect bad IPs? If you just want to add IPs to the firewall, it might make more sense to run it on the host(s) you use for ingress. If you want to add the IPs to network policies you can, but you need to develop your own remediation component. I am planning to write a remediation component that adds IPs to the Hetzner firewall; some other systems are already supported, but this would be a way to block the IPs outside your cluster. For the nginx ingress controller there is already a pre-made remediation component.

In practice I personally would choose a simple setup where the interesting logs are just forwarded (in syslog format, for example) to a single Crowdsec instance. If you have ingress from a single node, I'd run it on the host and ban via firewall; if you have multiple ingress nodes, I would run it inside the cluster and ban via a LoadBalancer/cloud firewall/whatever you have in front.
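To sketch the syslog idea (see the data sources page linked above for the full reference; address and port here are placeholders):

```yaml
# /etc/crowdsec/acquis.yaml - receive logs forwarded over syslog
source: syslog
listen_addr: 10.0.0.5   # internal address of the Crowdsec node
listen_port: 4242       # must match what the forwarders send to
labels:
  type: syslog
```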

In essence, I would spend some time thinking about your preferences; it might take a little while to make the setup clean, but I think you have plenty of flexibility to do what you prefer. Let me know if you want to bounce around some more ideas!

[–] [email protected] 1 points 3 months ago

Yeah, I know (I mentioned it myself in the post), but realistically there is not much you can do besides upgrading. Unattended upgrades kick in once a day, so you install security patches ASAP. There are also virtual patches (Crowdsec has one for that CVE), but they might not be very effective.
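(On Debian/Ubuntu this is just the stock unattended-upgrades setup, i.e. something like this in /etc/apt/apt.conf.d/20auto-upgrades:)

```
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```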

I argue that VPN software is a smaller attack surface, but the problem (CVEs) still exists for everything you expose.

[–] [email protected] 3 points 3 months ago

Nice! I didn't know this. Thanks!

[–] [email protected] 5 points 3 months ago (2 children)

AFAIK SSH has MaxAuthTries and LoginGraceTime, but all they do is terminate the SSH session (i.e., slow bots down at most); they won't block the IP via firewall or configuration.

Not sure if there is a recent feature that does the same.
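For reference, those options look like this in sshd_config (values are just examples):

```
# /etc/ssh/sshd_config - slows bots down, but no IP banning
MaxAuthTries 3      # drop the connection after 3 failed attempts
LoginGraceTime 20   # close unauthenticated sessions after 20 seconds
```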

[–] [email protected] 3 points 3 months ago (1 children)

Yes, I have used it in the past and it was annoying...

You can get SSL certs with Let's Encrypt, but you need to use the HTTP verification method.
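With certbot, for example, that's something like this (webroot path is a placeholder):

```sh
certbot certonly --webroot -w /var/www/example -d example.com
```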

[–] [email protected] 9 points 3 months ago

Yeah, what I mean is that it's useless to use ports like 2222; that's practically the unofficial SSH port! Bots are generally harmless (once you move to key auth), and you get functionally the same result with the automatic IP ban on failed auth, minus the hassle of changing client configurations for your custom port. Anyway, if someone does want cleaner logs, changing the port works :)

[–] [email protected] 1 points 3 months ago

Hypervisors also get escape vulnerabilities every now and then. I would say that on a realistic scale of escape difficulty, a good container (no matter whether it uses Docker or something else) is a good security boundary.

If that is not the case, I wonder what the extremes of your scale are.

A good container has very little attack surface: it can have almost no code or tools available, a read-only fs, no user privileges or capabilities whatsoever, and possibly even a syscall filter. Sure, the kernel is shared, but then the only alternative is to split everything per application (VM-like) and you just move the problem to hypervisors.
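To make it concrete, this is roughly what I mean in docker-compose terms (a sketch; image and UID are placeholders):

```yaml
services:
  app:
    image: myapp:latest          # ideally minimal/distroless, no shell or tools
    read_only: true              # read-only root filesystem
    user: "10001:10001"          # arbitrary unprivileged user
    cap_drop:
      - ALL                      # no capabilities whatsoever
    security_opt:
      - no-new-privileges:true   # block privilege escalation (setuid etc.)
      # optionally add a custom seccomp profile here for the syscall filter
```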

In the context of the question that was asked, I think the gains from reducing the attack surface are completely outweighed by the loss in functionality and the waste of resources.

[–] [email protected] 1 points 3 months ago

Completely agree, which is why I do the same.

Additional bonus: proxies that interact with the Docker API directly (I think Caddy can do it too) save you from exposing the services on any port at all (they stay reachable only inside the Docker network). That makes it way less likely to expose a service port by mistake, and there's no need for arbitrary, unique localhost ports.
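For example, with Traefik's Docker provider the routing is wired up via labels and the service never publishes a port (a sketch; the hostname is a placeholder):

```yaml
services:
  whoami:
    image: traefik/whoami
    labels:
      - traefik.enable=true
      - traefik.http.routers.whoami.rule=Host(`whoami.example.com`)
    # no "ports:" section: only the proxy, sharing the Docker network,
    # can reach this container
```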

[–] [email protected] 9 points 3 months ago (6 children)

Since you already run OpenWrt, you can check out https://openwrt.org/docs/guide-user/services/ddns/client

That page has a list of compatible services. If you don't want to use one more service (for DNS), you can use a domain registrar with an API (like porkbun) and find online tools that work with it.
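With a registrar API, the whole thing boils down to a small script on a timer; a hypothetical sketch with porkbun (check their API docs for the actual endpoint and payload; keys and names here are placeholders):

```sh
# hypothetical DDNS update - endpoint/payload per your registrar's docs
IP=$(curl -s https://ifconfig.me)
curl -s -X POST \
  "https://api.porkbun.com/api/json/v3/dns/editByNameType/example.com/A/home" \
  -H "Content-Type: application/json" \
  -d "{\"apikey\":\"pk_xxx\",\"secretapikey\":\"sk_xxx\",\"content\":\"$IP\"}"
```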

Be aware of the risks of hosting your websites publicly from home, make sure to run them in very isolated environments. Having your VPS compromised is bad, but having your home network compromised is much worse!

 

Hi, recently (ironically, right after sharing some of my posts here on Lemmy) I saw a higher (than usual, not high in general) number of "attacks" on my website (I am talking about dumb bots, vulnerability scanners and similar stuff). While none of these are really critical for my site (which is static and minimal), I decided to take some time and implement some generic measures using (mostly) Crowdsec (a fail2ban alternative), and I wrote a post about it to help anyone who might be in a similar situation.

The whole thing is basic, in the sense that it is just a way to reduce noise and filter out the simplest attacks, which is what I argue most people hosting websites should mostly be concerned with.
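(For the curious, the core of it is little more than installing the relevant collections from the Crowdsec hub, e.g.:)

```sh
cscli collections install crowdsecurity/base-http-scenarios
cscli collections install crowdsecurity/nginx   # or whatever matches your server
sudo systemctl reload crowdsec
```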

 

GoDaddy really lived up to its bad reputation and recently changed its API rules. The rules are simple: unless you own 10 (or 50) domains or pay $20/month, you don't get the API. I personally didn't get any communication about it, and it broke my DDNS setup. Judging from what I found online, I am clearly not the only one. A company this big, gating an API behind such a steep price... So I will repeat what many people have (rightly) said before me: don't. use. GoDaddy.

 

I hope this won't be counted as some form of self-promotion, even though I am sharing a post from my own blog.

As a tech worker in a Cloud shop, I wanted to elaborate on the many reasons why I find working with Clouds terrible, from multiple points of view.

I tried to organize my thoughts in a (relatively long) post that covers both technical aspects and political aspects (which are very much related).

I am sure many people will have different perspectives, and this could potentially also be a nice prompt for a discussion.
