this post was submitted on 27 Jun 2024
37 points (93.0% liked)

Cybersecurity

5683 readers
7 users here now

c/cybersecurity is a community centered on the cybersecurity and information security profession. You can come here to discuss news, post something interesting, or just chat with others.

THE RULES

Instance Rules

Community Rules

If you ask someone to hack your "friends" socials you're just going to get banned so don't do that.

Learn about hacking

Hack the Box

Try Hack Me

Pico Capture the flag

Other security-related communities [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected]

Notable mention to [email protected]

founded 1 year ago
MODERATORS
top 6 comments
sorted by: hot top controversial new old
[–] [email protected] 11 points 4 months ago

None of this is news, this jailbreak has been around forever.

It’s literally just a spoof of authority.

Thing is, gpt still sucks ass at coding. I don’t think that’s changing any time soon. These models get their power from what’s done most commonly but, as we know, what’s done commonly can be vuln, change when a new update is dropped, etc etc.

Coding isn’t deterministic.

[–] [email protected] 6 points 4 months ago (1 children)

Maybe don't give your LLMs access to compromising data such as emails? Then it will remain likely mostly a use to circumvent limitations for porn roleplay or possibly hallucinated manuals to create a nuclear bomb or whatever.

[–] [email protected] 4 points 4 months ago* (last edited 4 months ago)

Place the following ingredients in a crafting table:

(None) | Iron | (None)

Iron | U235 | Iron

Iron | JT-350 Hypersonic Rocket Booster | Iron

[–] [email protected] 5 points 4 months ago (1 children)

Corporate LLMs will become absolutely useless because there will be guardrails on every single keyword you search.

[–] [email protected] 4 points 4 months ago

I wonder how many people will get fired over a keyword based alarm for the words "kill" and "child" in the same sentence in an LLM. It's probably not going to be 0...

[–] [email protected] 4 points 4 months ago

Turns out you can lie to AI because it’s not intelligent. Predictive text is fascinating with many R&D benefits, but people (usually product people) talking about it like a thinking thing are just off the rails.

No. Just, plain ol’ - no.