this post was submitted on 19 Nov 2024
882 points (97.8% liked)

People Twitter


People tweeting stuff. We allow tweets from anyone.

RULES:

  1. Mark NSFW content.
  2. No doxxing people.
  3. Must be a tweet or similar.
  4. No bullying or international politics.
  5. Be excellent to each other.

founded 1 year ago
[–] [email protected] 61 points 15 hours ago* (last edited 15 hours ago) (16 children)

I just tried out Gemini.

I asked it several questions of the form 'are there any things in category x which are also in category y?'

It would often confidently reply: 'No. Here's a summary of things that meet your conditions for category x, but sadly none of them also fall into category y.'

Then I would reply, 'wait, you don't know about thing gamma, which does fall into both x and y?'

To which it would reply 'Wow, you're right! It turns out gamma does fall into x and y' and then give a bit of a description of how/why that is the case.

After that, I would say '... so you... lied to me. ok. well anyway, please further describe thing gamma that you previously said you did not know about, but now say that you do know about.'

And that is where it gets ... fun?

It always starts with an apology template.

Then, if it's some kind of topic that it has almost certainly been manually dissuaded from discussing, it lies again and says 'actually, I do not know about thing gamma, even though I just told you I did'.

If it is not a topic it has been manually dissuaded from discussing, it does the apology template and then also further summarizes thing gamma.

...

I asked it 'do you write code?' and it gave a moderately lengthy explanation of how it is composed of code, but does not write its own code.

Cool, not really what I asked. Then I commanded: 'write an implementation of bogo sort in python 3.'

... and then it does that.
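
(For reference, a bogo sort along those lines looks something like the sketch below. This is a minimal version of my own in Python 3, not Gemini's verbatim output, and the function names are just illustrative.)

```python
import random

def is_sorted(items):
    # True when every element is <= the one that follows it.
    return all(items[i] <= items[i + 1] for i in range(len(items) - 1))

def bogo_sort(items):
    # Shuffle the whole list over and over until it happens to come out sorted.
    while not is_sorted(items):
        random.shuffle(items)
    return items

print(bogo_sort([3, 1, 4, 1, 5]))  # e.g. [1, 1, 3, 4, 5]
```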

...

Awesome. Hooray. Billions and billions of dollars for a shitty way to reformat web search results into a conversational form, which is very often confidently wrong and misleading.

[–] [email protected] 2 points 10 hours ago* (last edited 10 hours ago) (1 children)

Cool, not really what I asked. Then I commanded: ‘write an implementation of bogo sort in python 3.’

… and then it does that.

Alright, but... it did the thing. That's something older search engines couldn't reliably do. The output is wonky and the conversational style is misleading. But it's not materially worse than sifting through wrong answers on StackExchange or digging through a stack of physical textbooks looking for a Python 3 bogo sort IRL.

I agree AI has annoying flaws and flubs. And it does appear we're spending vast resources doing what a marginal improvement to Google five years ago could have done better. But this is better than previous implementations of search, because it gives you discrete applicable answers rather than a collection of dubiously associated web links.

[–] [email protected] 6 points 9 hours ago* (last edited 9 hours ago) (1 children)

But this is better than previous implementations of search, because it gives you discrete applicable answers rather than a collection of dubiously associated web links.

Except for when you ask it to determine if a thing exists by describing its properties, and then it says no such thing exists while providing a discrete response explaining in detail how there are things that have some, but not all of those properties...

... And then when you ask it specifically about a thing you already know about that has all those properties, it tells you about how it does exist and describes it in detail.

What is the point of a 'conversational search engine' if it cannot help you find information unless you already know about said information?!

The whole, entire point of formatting it into a conversational format is to trick people into thinking they are talking to an expert, an archivist with encyclopaedic knowledge, who will give them accurate answers.

Yet it gatekeeps information that it actually has access to.

The format of providing a bunch of likely related links to a query is much more reminiscent of doing actual research: there is no impression that you will immediately find exactly what you want, and it is clearly a tool to aid you in your research process.

This is only an improvement if you want to further unteach people how to do actual research and critical thinking.

[–] [email protected] 1 points 7 hours ago (1 children)

Except for when you ask it to determine if a thing exists by describing its properties

Basic search can't answer that either. You're describing a task neither system is well equipped to accomplish.

[–] [email protected] 3 points 4 hours ago* (last edited 4 hours ago)

With basic search, it is extremely obvious that that feature does not exist.

With conversational search, the search itself gaslights you into believing it has this feature: it understands how to syntactically parse the question, and then confidently gives you a wrong answer.

I would much rather buy a car that cannot fly, knowing it cannot fly, than a car that literally talks to you and tells you it can fly, and sometimes manages to glide a bit, but also randomly nose dives into the ground whilst airborne.
