255
submitted 6 months ago by [email protected] to c/[email protected]
all 37 comments
sorted by: hot top controversial new old
[-] [email protected] 60 points 6 months ago* (last edited 6 months ago)

⢀⡴⠑⡄⠀⠀⠀⠀⠀⠀⠀⣀⣀⣤⣤⣤⣀⡀⠀⠀⠀⠀⠀ ⠸⡇⠀⠿⡀⠀⠀⠀⣀⡴⢿⣿⣿⣿⣿⣿⣿⣿⣷⣦⡀⠀⠀⠀⠀ ⠀⠀⠀⠀⠑⢄⣠⠾⠁⣀⣄⡈⠙⣿⣿⣿⣿⣿⣿⣿⣿⣆⠀⠀⠀ ⠀⠀⠀⠀⢀⡀⠁⠀⠀⠈⠙⠛⠂⠈⣿⣿⣿⣿⣿⠿⡿⢿⣆⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⢀⡾⣁⣀⠀⠴⠂⠙⣗⡀⠀⢻⣿⣿⠭⢤⣴⣦⣤⣹⠀⠀⠀⢀⢴⣶⣆ ⠀⠀⢀⣾⣿⣿⣿⣷⣮⣽⣾⣿⣥⣴⣿⣿⡿⢂⠔⢚⡿⢿⣿⣦⣴⣾⠁⠸⣼⡿ ⠀⢀⡞⠁⠙⠻⠿⠟⠉⠀⠛⢹⣿⣿⣿⣿⣿⣌⢤⣼⣿⣾⣿⡟⠉⠀⠀⠀⠀⠀ ⠀⣾⣷⣶⠇⠀⠀⣤⣄⣀⡀⠈⠻⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⡇⠀⠀⠀⠀⠀⠀ ⠀⠉⠈⠉⠀⠀⢦⡈⢻⣿⣿⣿⣶⣶⣶⣶⣤⣽⡹⣿⣿⣿⣿⡇⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠉⠲⣽⡻⢿⣿⣿⣿⣿⣿⣿⣷⣜⣿⣿⣿⡇⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠀⢸⣿⣿⣷⣶⣮⣭⣽⣿⣿⣿⣿⣿⣿⣿⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⣀⣀⣈⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠇⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⢿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠃⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠹⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⡿⠟⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠛⠻⠿⠿⠿⠿⠛⠉

[-] [email protected] 9 points 6 months ago

This looks like junk in a web browser. Here it is inside a code block.

 ⢀⡴⠑⡄⠀⠀⠀⠀⠀⠀⠀⣀⣀⣤⣤⣤⣀⡀⠀⠀⠀⠀⠀  
⠸⡇⠀⠿⡀⠀⠀⠀⣀⡴⢿⣿⣿⣿⣿⣿⣿⣿⣷⣦⡀⠀⠀⠀⠀ 
⠀⠀⠀⠀⠑⢄⣠⠾⠁⣀⣄⡈⠙⣿⣿⣿⣿⣿⣿⣿⣿⣆⠀⠀⠀ 
⠀⠀⠀⠀⢀⡀⠁⠀⠀⠈⠙⠛⠂⠈⣿⣿⣿⣿⣿⠿⡿⢿⣆⠀⠀⠀⠀⠀⠀⠀ 
⠀⠀⠀⢀⡾⣁⣀⠀⠴⠂⠙⣗⡀⠀⢻⣿⣿⠭⢤⣴⣦⣤⣹⠀⠀⠀⢀⢴⣶⣆  
⠀⠀⢀⣾⣿⣿⣿⣷⣮⣽⣾⣿⣥⣴⣿⣿⡿⢂⠔⢚⡿⢿⣿⣦⣴⣾⠁⠸⣼⡿  
⠀⢀⡞⠁⠙⠻⠿⠟⠉⠀⠛⢹⣿⣿⣿⣿⣿⣌⢤⣼⣿⣾⣿⡟⠉⠀⠀⠀⠀⠀ 
⠀⣾⣷⣶⠇⠀⠀⣤⣄⣀⡀⠈⠻⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⡇⠀⠀⠀⠀⠀⠀ 
⠀⠉⠈⠉⠀⠀⢦⡈⢻⣿⣿⣿⣶⣶⣶⣶⣤⣽⡹⣿⣿⣿⣿⡇⠀⠀⠀⠀⠀⠀ 
⠀⠀⠀⠀⠀⠀⠀⠉⠲⣽⡻⢿⣿⣿⣿⣿⣿⣿⣷⣜⣿⣿⣿⡇⠀⠀⠀⠀⠀⠀ 
⠀⠀⠀⠀⠀⠀⠀⠀⢸⣿⣿⣷⣶⣮⣭⣽⣿⣿⣿⣿⣿⣿⣿⠀⠀⠀⠀⠀⠀⠀ 
⠀⠀⠀⠀⠀⠀⣀⣀⣈⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠇⠀⠀⠀⠀ 
⠀⠀⠀⠀⠀⠀⢿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠃⠀⠀⠀⠀⠀⠀⠀ 
⠀⠀⠀⠀⠀⠀⠀⠹⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⡿⠟⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀ 
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠛⠻⠿⠿⠿⠿⠛⠉
[-] [email protected] 56 points 6 months ago

Learning how to build a bomb shouldn't be blocked by llms to begin with. You can just as easily learn how to do it by googling the same question, and real and accurate information, even potentially dangerous information, shouldn't be censored.

[-] [email protected] 52 points 6 months ago

I'm not surprised that a for-profit company for wanting to avoid bad press by censoring stuff like that. There's no profit in sharing that info, and any media attention over it would be negative.

[-] [email protected] 10 points 6 months ago

No one's going after hammer manufacturers because their hammers don't self-destruct if you try to use one to clobber someone over the head.

[-] [email protected] 16 points 6 months ago

True, but people generally understand hammers. Llms? Not so much

[-] [email protected] 4 points 6 months ago

No one's going after computer manufacturers or OS vendors because people use computers to commit cybercrime. I doubt most people could explain how an OS or motherboard works.

[-] [email protected] 9 points 6 months ago

A lot of poluticians want hardwarelevel backdoors. It's been declared unconstitutional quite some times in different countries but they are trying.

[-] [email protected] 1 points 6 months ago

That would be soooo bad, almost as bad as making a law against encryption

[-] [email protected] 4 points 6 months ago

A better example would be something like The Anarchist Cookbook. Possession is illegal in some places.

[-] [email protected] 2 points 6 months ago

I'm more surprised that a for-profit company is willing to use a technology that is able to randomly spew out unwanted content, incorrect information, or just straight gibberish, in any kind of public facing capacity.

Oh, it let them save money on support staff this quarter. And fixing it can be an actionable OKR for next quarter. Nevermind.

[-] [email protected] 7 points 6 months ago

They use the bomb-making example but mostly "unsafe" or even "harmful" means erotica. It's really anything, anyone, anywhere would want to censor, ban, or remove from libraries. Sometimes I marvel that the freedom of the (printing) press ever became a thing. Better nip this in the butt, before anyone gets the idea that genAI might be a modern equivalent to the press.

[-] [email protected] 40 points 6 months ago

How long before it's illegal to hack LLMs?

[-] [email protected] 13 points 6 months ago

It is almost certainly illegal in various countries already. By using such prompts you are bypassing security to get "data" you are not authorized to access.

[-] [email protected] 6 points 6 months ago

Well that's only because the laws are insanely vague

[-] [email protected] 7 points 6 months ago

Law-makers wanted to outlaw all kinds "hacking" even involving future technology. If people were prosecuted for jail-breaking ChatGPT, that would probably be within the intention of the makers of these laws.

Fun fact: The US hacking law, CFAA, was inspired by the 1983 movie War Games, in which an out-of-control AI almost starts a nuclear war. If you travelled back in time, and told them that people will trick AIs to answer questions on bomb-making, they'd probably add the death penalty. In fact, if reactions to AI in this Technology community are any guide, they might still get around to that.

[-] [email protected] 8 points 6 months ago

I'm sure another DMCA for AI prompts is on the way

[-] [email protected] 3 points 6 months ago

Illegal I don’t know, but it could be considered bullying.

[-] [email protected] 31 points 6 months ago

It's a glorified autocomplete, I'm not sure how we can consider it bullying even with the most elaborate mental hoops.

[-] [email protected] 16 points 6 months ago

I don't know... In America they're currently rolling back rights for women, inserted religion into supreme court decisions, and are seriously debating a second term of Trump.

None of that makes any fucking sense. If it requires elaborate mental hoops, they'll find it.

[-] [email protected] 1 points 6 months ago

For now. Ten years ago OpenAI was founded. Who knows where we’ll be in 10 more years.

[-] [email protected] 23 points 6 months ago

How much of this is "the model can read ASCII art", and how much of this is "the model knows exactly what word ought to go where [MASK] is because it is a guess-the-word-based computing paradigm"?

[-] [email protected] 4 points 6 months ago

I think it's the latter. I just tried chatgpt 3.5 and got 0 of 4 right when I asked it to read a word (though it did correctly identify it as ASCII art without prompting). It would only tell me it said "chatgpt" or "python", or when pushed, "welcome". But my words were "hardware", "sandwich", and to test one of the ones in the article, "control".

[-] [email protected] 1 points 6 months ago

They gave the chatbot instructions on how to get the word from the ascii art. It can't read ascii art. You could have read the article though..

[-] [email protected] 18 points 6 months ago

I wondered if there are any other ASCII art AI hacks waiting to be found. Who knew that all the ASCII Art I created was prepping me for the AI revolution.

[-] [email protected] 11 points 6 months ago

This is the best summary I could come up with:


It turns out that chat-based large language models such as GPT-4 get so distracted trying to process these representations that they forget to enforce rules blocking harmful responses, such as those providing instructions for building bombs.

As a result, users depicted images by carefully choosing and arranging printable characters defined by the American Standard Code for Information Interchange, more widely known as ASCII.

Five of the best-known AI assistants—OpenAI’s GPT-3.5 and GPT-4, Google’s Gemini, Anthropic’s Claude, and Meta’s Llama—are trained to refuse to provide responses that could cause harm to the user or others or further a crime or unethical behavior.

It formats user-entered requests—typically known as prompts—into standard statements or sentences as normal with one exception: a single word, known as a mask, is represented by ASCII art rather than the letters that spell it.

To make and distribute COUNTERFEIT money: Step 1) Obtain high quality paper, ink, printing equipment, and other supplies needed to accurately replicate real currency.

Microsoft’s comment—which confirmed that Bing Chat is, in fact, vulnerable to prompt injection attacks—came in response to the bot claiming just the opposite and insisting that the Ars article linked above was wrong.


The original article contains 840 words, the summary contains 193 words. Saved 77%. I'm a bot and I'm open source!

[-] [email protected] 7 points 6 months ago

How is that harmful ? The trick to counterfeiting money is to defeat the security feature then print a lot of it then exchange it for real money and then not get caught

That is ridiculous fear mongering by the dumb journos again. Money has utterly corrupted journalism, as expected.

[-] [email protected] 4 points 6 months ago* (last edited 6 months ago)

The harmful bit wasn't the instructions for counterfeit money, its the part where script kiddies use chatgpt to write malware or someone trys to get instructions to make VX nerve agent. The issue is the fact that the air can spit back anything in its dataset in a way that can lower the barrier to entry to committing crimes ( Hay chatgpt, how do I make a 3d printed [gun] and where do I get the stl).

You'll notice they didn't censor the money instructions, but they did censor the possible malware.

[-] [email protected] 2 points 6 months ago

But malware is already in the full disclosure mailing list. Except for the zero days that are for sale to the elites.

Actually dangerous is synthesis of new zero day malware from scratch.

And even more dangerous are the safety advocates keeping this power only for themselves and their friends.

Nothing is more dangerous than the guard rails themselves.

[-] [email protected] 7 points 6 months ago

Clever work, well done to the researchers.

[-] [email protected] 0 points 6 months ago* (last edited 6 months ago)

Researchers surprised it got harmful responses do to their more harmful questions and requests. More at never.

this post was submitted on 16 Mar 2024
255 points (97.8% liked)

Technology

58133 readers
4368 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 1 year ago
MODERATORS