this post was submitted on 07 Mar 2024
484 points (97.5% liked)

Trust in AI technology and the companies that develop it is dropping, in both the U.S. and around the world, according to new data from Edelman shared first with Axios.

Why it matters: The move comes as regulators around the world are deciding what rules should apply to the fast-growing industry. "Trust is the currency of the AI era, yet, as it stands, our innovation account is dangerously overdrawn," Edelman global technology chair Justin Westcott told Axios in an email. "Companies must move beyond the mere mechanics of AI to address its true cost and value — the 'why' and 'for whom.'"

you are viewing a single comment's thread
[–] [email protected] -2 points 8 months ago (1 children)

Maybe I'm not stating my point explicitly enough, but it's that names and goalposts aren't very important; cultural impact is. I think the current AI has already had far more impact than any chatbot from the 60s, and we can only expect that to increase. This tech has rendered the Turing test obsolete, which kind of speaks volumes.

[–] [email protected] 8 points 8 months ago* (last edited 8 months ago) (1 children)

Calling a cat a dog won't make her start jumping into ponds to fetch sticks for you. And calling a glorified autocomplete "intelligence" (artificial or otherwise) doesn't make it intelligent.

Problem is, words have meanings. Well, they do to actual humans, anyway. And associating the word "intelligence" with these stochastic parrots will encourage nontechnical people to believe LLMs actually are intelligent. That's dangerous—potentially life-threatening. Downplaying the technology is an attempt to prevent this mindset from taking hold. It's about as effective as bailing the ocean with a teaspoon, yes, but some of us see even that as better than doing nothing.