this post was submitted on 22 Nov 2024
-20 points (27.3% liked)

Technology

top 19 comments
[–] [email protected] 4 points 4 hours ago

a virtual replica of you is able to embody your values and preferences with stunning accuracy.

I'm calling BS on this one. "Values and preferences" are such a far cry from Actual Personality that it's meaningless. Just more LLM hype

[–] [email protected] 3 points 5 hours ago

Just one or all of them? /s

[–] [email protected] 14 points 9 hours ago (3 children)

But is it convincing enough to attend meetings for me?

[–] [email protected] 1 points 6 hours ago (1 children)

Or family reunions.

...Asking for a friend.

[–] [email protected] 1 points 2 hours ago

What does an AI look like in jorts?

[–] [email protected] 3 points 9 hours ago (2 children)

Ugh someone recently sent me LLM-generated meeting notes for a meeting that only a couple colleagues were able to attend. They sucked, a lot. Got a number of things completely wrong, duplicated the same random note a bunch of times in bullet lists, and just didn’t seem to reflect what was actually talked about. Luckily a coworker took their own real notes, and comparing them made it clear that LLMs are definitely causing more harm than good. It’s not exactly the same thing, but no, we’re not there yet.

[–] [email protected] 2 points 1 hour ago

I hosted a meeting with about a dozen attendees recently, and one attendee silently joined with an AI note-taking bot and immediately went AFK.

It was in for about 5 minutes before we clocked it and kicked it out. It automatically circulated its notes afterwards. Amusingly, 95% of them were "is that a chat bot?", "Steve, are you actually on this meeting?", "I'm going to kick Steve out in a minute if nobody can get him to answer", etc. But even with that level of asinine, low-impact chat, it still managed to garble the notes to the point of being barely legible.

Also: what a dick move.

[–] [email protected] 2 points 8 hours ago (1 children)

Wait until you hear about doctors using AI to summarize visits 😎

[–] [email protected] 1 points 6 hours ago (1 children)

All the above would apply to doctor visit notes. Would you find that helpful?

Plus, they can hallucinate phrases or entire sentences.

[–] [email protected] 1 points 2 hours ago

Have you seen current doctor visit note summaries? The bar is pretty low. A lot of them are made with conventional dictation software that has no sense of context when it misunderstands something. Agreed, the consequences can be worse if the context is wrong, but I would guess a well-programmed AI could summarize better on average than most visit summaries do currently. With this sort of thing there will be errors, but let's not forget that there already ARE errors today.

[–] [email protected] 1 points 9 hours ago

Asking the important questions

[–] [email protected] 15 points 11 hours ago (2 children)

Imagine sitting down with an AI model for a spoken two-hour interview. A friendly voice guides you through a conversation that ranges from your childhood, your formative memories, and your career to your thoughts on immigration policy. Not long after, a virtual replica of you is able to embody your values and preferences with stunning accuracy.

Okay, but can it embody my traumas?

[–] [email protected] 2 points 2 hours ago

lol because people always behave in ways consistent with how they tell an interviewer they will.

[–] [email protected] 3 points 9 hours ago

Maybe some of the symptoms of the traumas that you exhibited during the interview.

[–] [email protected] 9 points 11 hours ago* (last edited 11 hours ago)

I'm pretty sure we already explored this timeline in a Black Mirror episode.

[–] [email protected] 12 points 11 hours ago* (last edited 11 hours ago)

I've actually wanted to try this out to see how accurate it can really be. I already have conversations with myself, so I could truly compare the reality to an LLM. It's weird that even the supposedly "unlocked, can generate anything from anything" tools I've found still don't let you just feed in direct forum posts and shit to use. Though I can totally understand why; most people probably aren't gonna use it with their own shit, but someone else's.

[–] [email protected] 12 points 11 hours ago (1 children)

Great. Now I can see first hand how annoying I am 🤔

[–] [email protected] 2 points 8 hours ago

I like you.