this post was submitted on 17 May 2024

Technology


This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below; to ask whether your bot can be added, please contact us.
  9. Check for duplicates before posting; duplicates may be removed.

founded 1 year ago
[–] [email protected] 2 points 5 months ago* (last edited 5 months ago) (1 children)

If these people actually cared about "saving humanity", they would be attacking car dependency, pollution, waste, etc.

Not making a shitty cliff notes machine.

[–] [email protected] -1 points 5 months ago

What a bloody stupid take. No one cares about saving humanity unless that's their only pursuit in life?

[–] [email protected] 2 points 5 months ago* (last edited 5 months ago)

Humanity is surrounding itself with the harbingers of its own self-inflicted destruction.

All in the name of not only tolerated avarice, but celebrated avarice.

Greed is an even more effectively harmful human impulse than hate. We've merely been propagandized to ignore greed, oh I'm sorry, "rational self-interest," as the personal failing and character deficit it is.

The widely accepted thought-terminating cliché of "it's just business" should never have been allowed to propagate. Humans should never feel comfortable leaving their empathy and decency at the door in their interactions, not for groups they hate, and not for groups they wish to extract value from. Cruelty is cruelty, and doing it to make moooaaaaaar money for yourself makes it significantly more damning, not less.

[–] [email protected] 1 points 5 months ago* (last edited 5 months ago) (2 children)

I mean is this stuff even really AI? It has no awareness of what it’s saying. It’s simply calculating the most probable next word in a typical sentence and spewing it out. I’m not sure this is the tech that will decide humanity is unnecessary.

It’s just rebranded machine learning, IMO.

[–] [email protected] 1 points 5 months ago

Supposedly they found a new method (Q*) that significantly improved their models, enough that some key people revolted, trying to force the company not to monetize it, out of ethical concern. Those people have been pushed out, ofc.

[–] [email protected] 0 points 5 months ago* (last edited 5 months ago) (1 children)

It has no awareness of what it’s saying. It’s simply calculating the most probable next word in a typical sentence and spewing it out.

Neither of these things are true.

It does create world models (see the Othello-GPT papers, Chess-GPT replication, and the Max Tegmark world model papers).

And while it is trained on next-token prediction, that doesn't mean it carries on from there purely by picking the "most probable" word from surface statistics, as your sentence suggests.
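To make the "most probable next word" distinction concrete: decoders don't have to take the single most likely token at every step; they typically sample from the whole probability distribution. A toy sketch (the vocabulary and logits here are made up, not taken from any real model):

```python
import math
import random

def softmax(logits, temperature=1.0):
    # Scale logits by temperature, then normalize to probabilities.
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical model output over a tiny vocabulary.
vocab = ["cat", "dog", "the", "ran"]
logits = [2.0, 1.5, 0.5, 0.1]

# Greedy decoding: always the argmax ("most probable") token.
greedy = vocab[max(range(len(logits)), key=lambda i: logits[i])]

# Temperature sampling: any token can be chosen, weighted by probability,
# so the output is not simply "the most probable next word" every time.
probs = softmax(logits, temperature=1.0)
sampled = random.choices(vocab, weights=probs, k=1)[0]
```

Here `greedy` is always `"cat"`, but `sampled` can be any of the four tokens; real deployments usually sample (with temperature, top-k, etc.) rather than decode greedily.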

Something like Othello-GPT, trained to predict the next move and only fed a bunch of moves, generated a virtual Othello board in its neural network and kept track of "my pieces" and "opponent pieces."

And that was a toy model.

[–] [email protected] 0 points 5 months ago* (last edited 5 months ago)

Something like Othello-GPT, trained to predict the next move and only fed a bunch of moves, generated a virtual Othello board in its neural network and kept track of “my pieces” and “opponent pieces.”

AKA Othello-GPT chooses moves based on statistics.

Ofc it's going to use a virtual board in this process. Why would a computer ever use a real board?

There's zero awareness here.

[–] [email protected] 1 points 5 months ago

I guess Altman thought, "The AI race comes first. If OpenAI loses the race, there'll be nothing left to keep safe." But OpenAI is rich. They can afford to devote a portion of their resources to safety research.

What if he thinks that the improvement of AI won't be exponential? What if he thinks it'll be slow enough that OpenAI can start focusing on AI safety once superintelligence is visible on the horizon? That focusing on safety now is premature? That would certainly be a difference of opinion with Sutskever and Leike.

I think AI safety is key. I won't be :o if Sutskever and Leike go to Google or Anthropic.

I was curious whether Google and Anthropic have AI safety initiatives. Did a quick search and saw this –

For Anthropic, my quick search yielded none.

[–] [email protected] 1 points 5 months ago

Don't fall for this horseshit. The only danger here is unchecked greed from these sociopaths.

[–] [email protected] 0 points 5 months ago (1 children)

Miss me with the doomsday news cycle capture, we aren't even close to AI being a threat to ~anything

(and all hail the AI overlords if it does happen, can't be worse than politicians)

[–] [email protected] -1 points 5 months ago

Except for the environment

[–] [email protected] 0 points 5 months ago (2 children)

Extinction by AI takeover or robot apocalypse does seem cooler than extinction by pollution rendering the environment uninhabitable.

I'd rather not go extinct at all, but if we're fucked regardless…

[–] [email protected] 1 points 5 months ago* (last edited 5 months ago)

Instead we're going to get "D- All of the above."

[–] [email protected] 1 points 5 months ago

Combine the two and we've got a proper Matrix situation on our hands.