Programmer Humor
Welcome to Programmer Humor!
This is a place where you can post jokes, memes, humor, etc. related to programming!
For sharing awful code, there's also Programming Horror.
Rules
- Keep content in English
- No advertisements
- Posts must be related to programming or programmer topics
Inflation.
I don't think an inflatable wrench would work very well.
Now I'm just imagining a judge trying to get order in the courtroom with an inflatable mallet
"Order! Order in the Court!"
SquEAKy
I've used a duress password with crypto containers ever since the old TrueCrypt introduced me to the idea a while back. Sure, you can have the password and unlock the vault, but it's just text-file notes in there that aren't at all important. In reality, though, no one would ever give a shit about my data enough to even ask me for my password.
It even says as much in the bonus text!
It's free if you borrow from your in-law (then never return it).
What, you think that guy dishes out for non-stolen wrenches?
I knew somebody would have the relevant xkcd.
About 10 years ago, I read a paper that suggested mitigating a rubber hose attack by priming your sys admins with subconscious biases. I think this may have been it: https://www.usenix.org/system/files/conference/usenixsecurity12/sec12-final25.pdf
Essentially you turn your user into an LLM for a nonsense language. You train them by having them read nonsense text. You then test them by giving them a sequence of text to complete, recording how quickly and accurately they respond. Repeat until the accuracy is at an acceptable level.
Even if an attacker kidnaps the user and sends in a body double with your user's ID, security key, and means of biometric identification, they will still not succeed. Your user cannot teach their doppelganger the pattern, and if the attacker tries to coach the double by getting the real user on a video call, the delay of the user reading the prompt and dictating the response should be detectable.
The only remaining avenue the attacker has is, after dumping the body of the original user, to kidnap the family of another user and force that user to carry out the attack. The paper does not bother to cover this scenario, since the mitigation is obvious: your user conditioning should include a second module teaching users to value the security of your corporate assets above the lives of their loved ones.
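For the curious, the train/test loop boils down to something like this toy simulation (not the linked paper's actual protocol: the 6-key alphabet and 30-item secret are roughly the shape it describes, but the response-time model, skill numbers, and threshold here are invented for illustration):

```python
# Toy sketch of implicit-learning authentication: a trained user
# completes the secret sequence measurably faster than chance,
# while a body double, who was never trained, cannot.
import random
import statistics

KEYS = "sdfjkl"                                    # 6-key alphabet
SECRET = [random.choice(KEYS) for _ in range(30)]  # the trained sequence

def trial(sequence, skill):
    # Simulated per-key response times: training shaves `skill` seconds
    # off a ~0.45 s baseline. Purely invented numbers.
    return [random.gauss(0.45 - skill, 0.05) for _ in sequence]

def authenticate(skill_on_secret, threshold=0.03):
    # Interleave the secret with a fresh random sequence and compare
    # mean response times; only a trained user is reliably faster.
    trained = trial(SECRET, skill_on_secret)
    control = trial([random.choice(KEYS) for _ in range(30)], 0.0)
    return statistics.mean(control) - statistics.mean(trained) > threshold

print(authenticate(0.08))  # trained user -> True  (almost always)
print(authenticate(0.0))   # body double  -> False (almost always)
```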
Essentially you turn your user into an LLM for a nonsense language. You train them by having them read nonsense text.
Did you forget the word "teach"? Or even the concept?
I am well aware of learning, but people tend to learn by comprehension and understanding. Completing phrases without understanding the language (or the concept of language) is the realm of LLMs and Scrabble players.
https://en.m.wikipedia.org/wiki/Nigel_Richards_(Scrabble_player)
Like this madman
"In 2015, despite not speaking French, Richards won the French World Scrabble Championships, after reportedly spending nine weeks studying the French dictionary. He won it again in 2018, and multiple duplicate titles from 2016."
Robust AF. Chef's kiss. No notes.
Smart. I like the idea of replacing biometrics with something that can't easily be cloned - learned behaviour. Perhaps with a robust ML approach you could use analysis of gait, expressions, and other subtle behavioural tics rather than or in addition to facial/fingerprint/iris recognition. I suspect that would be very hard to fake - although perhaps vulnerable to, idk, having a bad day and acting "off".
Ah, so only employ posh people.
"Hi, I'm definitely Henry. My turn to take the RSA key sentry duty today."
"Henry, why are you acting like a commoner? You're not like yourself at all!"
Having read the paper, I see a glaring problem: even though the user can't tell an attacker the password, nothing stops them from demonstrating it. It doesn't matter that it's an interactive sequence; the user is going to remember enough detail to describe the "prompts".
A rubber hose and a little time will get enough information to make a "close enough" mock-up of the password-entry interface, which the trusted user can then use to reveal the password.
Do they... they torture them with a rubber horse...?
ETA: Goddammit it says rubber hose
Not to be confused with rubber horse troubleshooting.
Nay
Idk what you're into buddy
but I like it.
We should accept, neigh encourage this person
There are some cases involving plausible deniability where game theory says you should beat the person to death even if they give up their keys, since there might be more.
I mean, I'd definitely do it to SBF if his crap wasn't cleaned out already. Though admittedly I'd largely keep going just because this world DESPERATELY needs fewer SBF types in it...
If you want to bring down a server, the best hack is unplugging the rack from inside the data center.
One possible countermeasure being https://en.wikipedia.org/wiki/Deniable_encryption
I know veracrypt has a form of this. You can set up two different keys, and depending on which one you use, you decrypt different data.
So you can encrypt your stuff, and if anyone ever compels you to reveal the key, you can give the wrong key, keeping what you wanted secured, secure.
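To make the idea concrete, here's a minimal toy model in Python. This is not how VeraCrypt actually lays volumes out on disk (the slot scheme, sizes, and AES-GCM construction are purely illustrative), but it shows why the "wrong" key reveals nothing about the hidden vault:

```python
# Toy deniable container: one pile of slots that all look like random
# noise; two AES-GCM keys reveal two disjoint sets of slots.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

SLOT = 4096  # plaintext bytes per slot

def make_container(n_slots):
    # Unused slots are pure random bytes: indistinguishable from ciphertext.
    return [os.urandom(12 + SLOT + 16) for _ in range(n_slots)]

def write(container, key, index, plaintext):
    nonce = os.urandom(12)
    ct = AESGCM(key).encrypt(nonce, plaintext.ljust(SLOT, b"\0"), None)
    container[index] = nonce + ct

def read_all(container, key):
    found = []
    for blob in container:
        try:   # GCM authentication fails on noise and on the other vault
            found.append(AESGCM(key).decrypt(blob[:12], blob[12:], None))
        except Exception:
            pass  # looks like free space
    return found

outer_key, hidden_key = AESGCM.generate_key(256), AESGCM.generate_key(256)
box = make_container(8)
write(box, outer_key, 0, b"boring notes")         # revealed under duress
write(box, hidden_key, 5, b"the actual secrets")  # invisible to outer_key
print(read_all(box, outer_key))   # only the boring notes (null-padded)
print(read_all(box, hidden_key))  # only the secrets
```

Handing over outer_key under duress costs you nothing here: to its holder, slot 5 is indistinguishable from the random noise filling the genuinely empty slots.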
Won't they know there are files they haven't decrypted?
If it could hide or delete the remaining encrypted files, that would be nifty.
If you set it up correctly, this is essentially what it does. You have a disk that is, say, 1 TB. It's encrypted, so without a key it's just a bunch of random noise. Two keys decrypt different vaults, but they each have access to the full space. The files for the key you used get revealed, but the rest still just looks like noise; there's no way to tell whether it's empty space or a bunch of files.
This does have an interesting side effect. Since both vaults share the same space, you can overfill one and it'll start overwriting data from the second. Say you have a 1 TB drive and two vaults with 400 GB used each. If you then write, say, 300 GB of data to one vault, it'll happily let you, since from that vault's point of view there's 600 GB of empty space; but only 200 GB of the drive is genuinely empty, so at least 100 GB of the write has to land on what is actually the other vault's encrypted data.
It's been a while since I've messed with this tech, and I'm mostly a layman, but this should be a fairly accurate depiction of what's actually happening.
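That overwrite hazard falls straight out of the toy model above: to the outer key, a slot holding hidden data is indistinguishable from free space, so writing there silently destroys the hidden vault.

```python
# Continuing the toy container above: the outer volume can't see that
# slot 5 is occupied, so writing there clobbers the hidden data.
write(box, outer_key, 5, b"big video chunk")
print(read_all(box, hidden_key))  # prints []: the hidden vault is gone
```

(Real hidden-volume setups share this failure mode; if I remember right, VeraCrypt mitigates it by letting you mount the outer volume with both passwords so it can refuse writes into the hidden area, at the cost of proving to anyone watching that a hidden volume exists.)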
Full-disk (or partition) encryption means you don't know what files there are until you decrypt. Additionally, for that sort of scenario you fill the partition with random data first, so you can't tell files from empty space (unless the attacker can watch the drive over time).
There was an encryption system a few years ago that offered this out of the box.
I can't remember the name of it, but there was a huge vulnerability that basically made the software unusable.
Crypt box or something like that.
The prominent one was called Marutukku - and the developer turned out to be someone who might actually need the feature.
As mentioned in another comment, the counter-counter is to just keep beating until further keys/hidden data turn up.
Game theory would lead you, as the tortured, to realize that they're just going to beat you to death to extract any keys you may or may not have, so the proper answer is to give them one and no more. You're dead anyway; you may as well actually protect what you thought was worth protecting. Giving one key that opens a dummy vault may get the torturers to stop with you, thinking this lead is a dead end.
Probably best to avoid systems with known deniable-encryption features and keep your dummy data there. Then hide your secrets elsewhere, e.g. in deleted space on a drive, in the cloud, or on a well-hidden micro-SD card. All have risks; maybe it's best of all not to keep your secrets with you, and to make sure they can't be associated with you.
Thermorectal cryptanalysis.
By any chance is this from Andrew Tanenbaum?
This always sounded like parallel construction.
Fine then, keep your secrets.
Where is this from? I don't think exposing the key breaks most crypto algorithms; the algorithm should still be doing its job.
Exposing the private key, or a symmetric key, would break the system; it's kind of the point that a person holding those can read the data. The public key is the one you can show people.
It doesn't break the algorithm, though; you would just have the key and could then use the algorithm (which still works!) to decrypt the data.
Also, you're talking about one class of cryptography; the concept of key knowledge varies between algorithms.
My point is that an attacker having knowledge of the key is a compromise, not a successful break of the algorithm.
"the attacker beat my ass until I gave them the key", doesn't mean people should stop using AES or even RSA, for example.
The purpose is to access the data. This is a bypass attack rather than a mathematical one. It helps to remember that encryption is rarely used in the abstract; it is used as part of real-world security.
There are actually methods to defend against it. The most effective is a "duress key": the key you give up under duress. It decrypts an alternative version of the file/drive, and can also trigger additional safeguards. The key point is that the attacker can't tell whether they have the real files (and there's simply nothing of interest) or dummy ones.
I appreciate the explanation, and that's a cool scheme, but what I'm saying is that the human leaking the key is not a fault of the algorithm.
Everyone and everything is, on a very pedantic level, weak to getting their ass beat lol
That doesn't make it cryptanalysis.
An encryption scheme is only as strong as its weakest link. In academic terms, only the algorithm really matters; in the real world, however, implementation is just as important.
The human element has to be considered. "Rubber-hose cryptanalysis" is a tongue-in-cheek way of acknowledging that. It also matters because some schemes handle it better than others, e.g. one-time keys vs. passwords.
Very informative, I think people will learn from what you're saying, but it doesn't really matter to what I'm saying.
Yes, absolutely, consider the human element in your data encryption and protection schemes and implementations.
Beating someone with a pipe makes for a good joke, but it's not really defeating an algorithm.
Okay, I don't know if anyone was saying we should abandon encryption, though.
r/whoosh 😉
No, really though, where's it from?