this post was submitted on 18 Sep 2023
66 points (92.3% liked)

Technology


DeepMind’s cofounder: Generative AI is just a phase. What’s next is interactive AI.::DeepMind cofounder Mustafa Suleyman wants to build a chatbot that does a whole lot more than chat. In a recent conversation I had with him, he told me that generative AI is just a phase. What’s next is interactive AI: bots that can carry out tasks you set for them by calling on other software…

top 22 comments
[–] [email protected] 28 points 1 year ago (2 children)

You don't need to be a scientific genius to deduce that - even I did! One of the most impressive things about ChatGPT for me has been its ability to "understand" what you mean and properly communicate with you. For now it isn't hooked up to anything, but it shouldn't be too hard to make it translate our natural-language requests (which it already "understands") into software commands. The possibilities are endless.
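The "translate natural language into software commands" idea usually boils down to the model emitting a structured tool call that a thin dispatcher executes. A minimal sketch, with entirely invented tool names and a hard-coded intent standing in for real model output:

```python
# Toy dispatcher for model-produced "intents". The tool names and the
# intent format below are invented for illustration, not any real API.

def set_timer(minutes: int) -> str:
    return f"timer set for {minutes} minutes"

def send_email(to: str, body: str) -> str:
    return f"email to {to}: {body!r}"

# Registry of tools the assistant is allowed to invoke.
TOOLS = {"set_timer": set_timer, "send_email": send_email}

def dispatch(intent: dict) -> str:
    """Run the tool named in a model-produced intent, e.g.
    {"tool": "set_timer", "args": {"minutes": 10}}."""
    name = intent["tool"]
    if name not in TOOLS:
        raise ValueError(f"unknown tool: {name}")
    return TOOLS[name](**intent["args"])

# In a real system this dict would come from the model's structured output.
print(dispatch({"tool": "set_timer", "args": {"minutes": 10}}))
```

The hard part isn't the dispatch; it's getting the model to reliably emit well-formed intents and deciding which tools it may touch.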

[–] [email protected] 4 points 1 year ago (1 children)

If only it understood instead of "understood"

[–] [email protected] 0 points 1 year ago (1 children)

Meaning what? It needs Cartesian Dualist qualia floating around between its wires and transistors, or else it's just a word vending machine? What's the demonstrable test for understanding vs "understanding"?

[–] [email protected] 2 points 1 year ago (1 children)

I'm not saying I have a definition or a way to get there, just that it hasn't actually demonstrated understanding (through the tasks where it fails).

[–] [email protected] 1 points 1 year ago (1 children)

I still don't understand what you mean. If you don't have a criterion for "actually" understanding, how has it demonstrably failed?

[–] [email protected] 1 points 1 year ago (1 children)

I don't have an exact example for you to test, so I'll explain in general terms:

Let's say you give ChatGPT a task that a human can do easily, but ChatGPT fails at it consistently. Isn't that proof that it doesn't understand?

It might be hard to grasp without an example, but the problem with any example is that OpenAI can become aware of it and tweak the model to correct just that specific case.

One example I remembered while typing this is how it fails at giving you a list of words that fit certain criteria, like having a specific number of letters. It's not the best example I've come across, but it still seems to fail at this one.

Anyway, hopefully you get my point about the lack of understanding.

[–] [email protected] 1 points 1 year ago (1 children)

Fair enough, but it just seems like a fluffy distinction.

And I don't think they "tweak the algorithm" so much as generate a load more training data for that one specific task to get it up to spec.

In any case, humans make mistakes on lots of stuff too, so if the criterion for "true" understanding is to make no mistakes then humans cannot be said to understand either.

[–] [email protected] 1 points 1 year ago

As I said, my example wasn't the best one, but you're right that by that standard humans could be judged to lack understanding too.

[–] [email protected] 1 points 1 year ago (1 children)
[–] [email protected] 4 points 1 year ago* (last edited 1 year ago) (1 children)

That's just a toolbox, and in my experience a pretty limited one. What OP means is that generative AI doesn't connect to your email, Photoshop, your IDE, your browser, and whatnot via text or speech.

Imagine not using your keyboard and mouse anymore, but only using your speech and natural language for everything (not commands, but natural language).

Confidently interfacing with smart glasses would be a game changer for so many things.

[–] [email protected] 2 points 1 year ago

computer, make me a sandwich

[–] [email protected] 12 points 1 year ago (1 children)

Give me that on my phone with voice control through my ear bud, and I’ll finally have something worthy of being called a PDA.

[–] [email protected] 2 points 1 year ago (1 children)

Think further: don't own a phone anymore, only smart glasses.

[–] [email protected] 3 points 1 year ago (1 children)

Have you seen how ugly those are? Give me smart contact lenses!

[–] [email protected] 0 points 1 year ago

In time they will become better

[–] [email protected] 8 points 1 year ago (2 children)

Right, I'll trust a complex AI to take charge of my other apps.

"I want to send a text to my mother"

"Autogenerated sexting message sent"

"WAIT NO"


The tech enthusiast in me likes the idea. The IT professional, however, is very sceptical of trusting software to that extent.

Hell, I feel a sting of uncertainty every time I use inter-app interfaces on Android. Sure, I know how it's supposed to work, and often enough it does, but the error rate and fragmentation of standards are still too high for me to have faith that an AI would somehow circumvent them. We see purpose-built systems like Tesla's Autopilot fail dramatically; an ambitious multi-function tool will fare no better.

The above example may be strongly exaggerated, but the wealth of side effects and weird interactions between different human-made and thus typically inherently flawed tools concerns me. It's hard, probably even impossible, to predict all the potential mishaps.

I want to believe and I hope we'll reach a level of maturity and QA standards where I can trust it. I like the idea. I'm an excited pessimist who would like nothing more than to be wrong.

[–] [email protected] 3 points 1 year ago

Hell, I feel a sting of uncertainty every time I use inter-app interfaces on Android. Sure, I know how it's supposed to work, and often enough it does

I thought I was the only one worried about that

[–] [email protected] 1 points 1 year ago

Of course you're going to be right at first and very wrong in the long run.

[–] [email protected] 6 points 1 year ago

Please no, this is incredibly dangerous. It wasn't enough to give people AI that hands developers untrustworthy, deceptive code; now they want to run that code without oversight.

People are going to get `rm -rf /*`'d by the AI, and only then will they understand how stupid an idea this is.
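The bare minimum against the `rm -rf` scenario is refusing obviously destructive model-generated commands before they reach a shell. A toy denylist sketch (illustrative only; real sandboxing needs actual isolation like containers or seccomp, not string checks):

```python
import shlex

# Toy guardrail: reject model-generated shell commands whose executable
# is on a denylist. A denylist is trivially bypassable (e.g. via "sh -c");
# this only illustrates the idea of a policy check before execution.
DENIED = {"rm", "mkfs", "dd", "shutdown"}

def is_safe(command: str) -> bool:
    tokens = shlex.split(command)
    return bool(tokens) and tokens[0] not in DENIED

print(is_safe("ls -la"))     # True
print(is_safe("rm -rf /*"))  # False
```

An allowlist of known-good commands would be the safer default, at the cost of flexibility.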

[–] [email protected] 3 points 1 year ago

We can just say this is bullshit, right? They're literally inventing problems to solve.

[–] [email protected] 1 points 1 year ago

ChatGPT with plugins already does this. Nothing controversial here.

[–] [email protected] 0 points 1 year ago

I only read the headline, but I entirely agree with that conclusion. I'm interested to see where this goes.