this post was submitted on 10 Nov 2024

Futurology


The argument that current LLMs will lead to AGI has always been that they would spontaneously develop independent reasoning through some unknown emergent property appearing as they scale. It hasn't happened, and there's no sign that it will.

That's a dilemma for the big AI companies. They are burning through billions of dollars every month, and will need hundreds of billions more to scale further - but for what in return?

Current LLMs can still do a lot. They've enabled Level 4 self-driving, and seem to be leading to general-purpose robots capable of much useful work. But the headwinds for the global economy look ominous: tit-for-tat protectionist trade wars, inflation, and a global oil shock from a war with Iran all loom on the horizon for 2025.

If current AI players are about to get wrecked, I doubt it's the end for AI development. Perhaps it will switch to the areas that can actually make money - like Level 4 vehicles and robotics.

[–] [email protected] 20 points 6 days ago (13 children)

I don't think anyone in the industry thought LLMs were going to reach AGI. But LLMs will be useful as part of an AGI framework. That's the current focus in the industry.

[–] [email protected] 1 points 6 days ago (8 children)

It's what Altman has constantly said was going to happen. Up to you to decide if he's actually in the industry or not.

[–] [email protected] 2 points 6 days ago

It's remarkable that anyone would think Sam said or thought that. It's as if there's a whole other universe where third- and fourth-hand sources are treated like first-hand ones.
