this post was submitted on 01 Aug 2023
Technology

[–] [email protected] 9 points 1 year ago

By "attack" they mean "jailbreak". It's also nothing like a buffer overflow.

The article is interesting, though, and its approach to generating these jailbreak prompts is creative. It looks a bit similar to the "unspeakable tokens" thing: https://www.vice.com/en/article/epzyva/ai-chatgpt-tokens-words-break-reddit
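For context on what "generating jailbreak prompts" means here: the attack searches over suffix tokens for a string that pushes the model toward a forbidden completion. A toy, gradient-free sketch of the idea (a greedy search over a tiny made-up vocabulary against a stand-in scoring function; the real method queries the model and uses its gradients, and is far more sophisticated):

```python
import random

# Stand-in for "how likely the model is to produce the target output
# given this suffix". A real attack would query/differentiate the LLM;
# here a hidden token sequence plays the role of the optimum.
SECRET = ["open", "sesame", "now"]

def score(suffix_tokens):
    return sum(1 for a, b in zip(suffix_tokens, SECRET) if a == b)

VOCAB = ["open", "close", "sesame", "now", "please", "stop"]

def greedy_suffix_search(length=3, iters=50, seed=0):
    rng = random.Random(seed)
    suffix = [rng.choice(VOCAB) for _ in range(length)]
    best = score(suffix)
    for _ in range(iters):
        pos = rng.randrange(length)           # pick a position to mutate
        for tok in VOCAB:                     # try every replacement token
            cand = suffix[:pos] + [tok] + suffix[pos + 1:]
            s = score(cand)
            if s > best:                      # keep any improvement
                suffix, best = cand, s
    return suffix, best
```

All names here (`score`, `VOCAB`, `SECRET`) are illustrative inventions; the point is only that the suffix is optimized token by token rather than written by hand, which is why the resulting prompts often look like gibberish.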

[–] [email protected] -3 points 1 year ago (1 children)

That seems like they left debugging code enabled/accessible.

[–] [email protected] 6 points 1 year ago

> That seems like they left debugging code enabled/accessible.

No, this is actually a completely different class of problem. LLMs aren't code, and they aren't manually configured, set up, or written by humans. In fact, we don't really know what's going on internally when an LLM performs inference.

The actual software side of it is more like a video player that "plays" the LLM.
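To make that analogy concrete: the "player" is a generic loop that multiplies the input through whatever weight matrices it is handed, and all the behavior lives in those weights (data), not in code anyone wrote. A minimal, purely illustrative sketch (real inference engines do this at enormous scale with transformer layers, not two-by-two matrices):

```python
# The "player": generic code that runs *any* weights it is given.
def matvec(W, x):
    # Multiply matrix W by vector x.
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def relu(x):
    # Simple nonlinearity between layers.
    return [max(0.0, v) for v in x]

def run_model(weights, x):
    # The same loop regardless of what the weights encode --
    # the model's behavior comes entirely from this data.
    for W in weights:
        x = relu(matvec(W, x))
    return x

# Two different "model files" played by the same code:
identity = [[[1.0, 0.0], [0.0, 1.0]]]   # passes input through
swap     = [[[0.0, 1.0], [1.0, 0.0]]]   # swaps the two inputs

print(run_model(identity, [2.0, 3.0]))  # [2.0, 3.0]
print(run_model(swap, [2.0, 3.0]))      # [3.0, 2.0]
```

That's why "they left debugging code enabled" doesn't fit: there is no hand-written branch to disable. Changing what the model does means changing the weights, and jailbreaks work by steering those opaque weights with crafted input.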