this post was submitted on 07 Nov 2023
169 points (99.4% liked)

AI

4101 readers

Artificial intelligence (AI) is intelligence demonstrated by machines, unlike the natural intelligence displayed by humans and animals, which involves consciousness and emotionality. The distinction between the former and the latter categories is often revealed by the acronym chosen.

founded 3 years ago
[–] [email protected] 5 points 11 months ago (23 children)

It should be so fucking obvious that self-driving cars can't exist yet. Anyone playing with LLMs right now knows it takes a massive model to have general functionality and flexibility. There is no chance that fine-tuning a small model can replace a large model for real-world situational adaptability. My favorite open source offline LLMs are all 70B models. Running these on real-time-capable hardware costs around $30k. This is not scalable to lower costs. This is bleeding-edge 5nm fab nodes and some of the largest dies ever produced. No one is ethically monitoring and regulating this. I'm just playing with this as a hobby, but AI in cars is so obviously stupid right now. The hardware is simply not present yet.
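For context on why 70B models demand serious hardware, here is a back-of-envelope sketch of the memory needed just to hold the weights at common precisions. The figures are rough estimates only; real deployments also need room for the KV cache and activations, and the precisions listed are illustrative.

```python
# Rough memory needed to hold a 70B-parameter model's weights.
# Back-of-envelope only: ignores KV cache, activations, and overhead.

PARAMS = 70e9  # 70 billion parameters

BYTES_PER_PARAM = {
    "fp16": 2.0,   # half precision
    "int8": 1.0,   # 8-bit quantization
    "int4": 0.5,   # 4-bit quantization
}

def weight_gb(precision: str) -> float:
    """Gigabytes of memory for the weights alone."""
    return PARAMS * BYTES_PER_PARAM[precision] / 1e9

for p in BYTES_PER_PARAM:
    print(f"{p}: ~{weight_gb(p):.0f} GB")  # fp16: ~140 GB, int8: ~70 GB, int4: ~35 GB
```

Even aggressively quantized, the weights alone exceed the memory of a single consumer GPU, which is why hobbyist 70B setups end up in the multi-GPU price range the commenter describes.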

[–] [email protected] 17 points 11 months ago* (last edited 11 months ago) (21 children)

In ways yes, in ways no. LLMs are a tiny sliver of AI. Taking the current state of LLMs, which are being oversold as AGI, and extrapolating that to other applications or other strategies involves a pile of oversimplifications. AI is not one single thing.

It's like watching someone try to hammer in a screw and saying "anyone playing with tools right now knows they're never going to get a screw in," when you've never seen a screwdriver.

If you were saying "visual machine learning for a general purpose and unassisted driverless car is not happening tomorrow", then sure.

But things like the Waymo model are doing exceedingly well right now. Instead of taking a top-down approach to train cars to understand any intersection or road they could ever run into, they're going bottom-up by "manually" training them to understand small portions of cities really well. Waymo's problem set is greatly reduced, its problem space is much narrower, it's much more capable of receiving extremely accurate training data, and it performs way better because of it. It can then apply all the same techniques for object and obstacle detection other companies are using, while large swaths of the problem space are entirely eliminated.
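The "bottom-up" idea above can be sketched as a geofenced operational design domain: the car only accepts trips whose endpoints fall inside an area it has been exhaustively trained on. This is a toy illustration; the polygon, coordinates, and function names are all made up for the example.

```python
# Toy geofence sketch: only allow trips inside a mapped service area.
# The polygon and coordinates are hypothetical.

def point_in_polygon(pt, poly):
    """Ray-casting test: is point (x, y) inside the polygon?"""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Edge crosses the horizontal ray through pt?
        if (y1 > y) != (y2 > y):
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

# Hypothetical service-area polygon (a simple square for illustration).
SERVICE_AREA = [(0, 0), (10, 0), (10, 10), (0, 10)]

def trip_allowed(pickup, dropoff):
    """Accept a trip only if both endpoints are in the mapped area."""
    return point_in_polygon(pickup, SERVICE_AREA) and \
           point_in_polygon(dropoff, SERVICE_AREA)

print(trip_allowed((2, 3), (8, 9)))   # True: both endpoints mapped
print(trip_allowed((2, 3), (15, 9)))  # False: dropoff outside the area
```

Rejecting trips outside the mapped region is exactly the kind of problem-space reduction the comment describes: the system never has to handle an intersection it wasn't trained on.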

The hardware to solve those problems is much more available now. Doing the computationally intensive stuff offline at a "supercomputer center" while the cars themselves only handle small, trivial work is very much a possibility. The "situational adaptability" required can be greatly reduced by limiting where the cars go, and so on.

The problems these cars are trying to solve have some overlap with your LLM experience, but it's not even close to the same problem or the same context. The way you're painting this is a massive oversimplification. It's also not a problem anyone thinks is going to be solved overnight (except Elmo, but he's pretty much alone on that one); they just know we're sitting right on the cusp, and being the first company to solve this is going to be a huge advantage.

Not to be rude, but there is a reason many field experts are pushing this space while LLM hobbyists doubt it. LLMs are just a tiny subset of AI, and as a hobbyist you're likely looking at one tiny slice of the pie and not the huge swaths of other work in the nearby spaces.

[–] [email protected] 0 points 11 months ago (1 children)

As somebody who is fairly well-versed in the tech and has done more than just play around with ChatGPT, I can tell you that self-driving AI is not going to be here for at least another 40-50 years. The challenges are too great, and the act of driving a car takes a lot of effort for even a human to achieve. There are too many fatal edge cases to consider already, and the current tech is tripping over its balls trying to do the most basic things, killing people in the process. When we have these cars that are sabotaged by a simple traffic cone on the windshield, or that mistake a tractor-trailer for the sky, then we know that this tech is far worse than human drivers, despite all of the bullshit we have been told otherwise.

Level 5 autonomous driving is simply not a thing, and it won't be for a long time.

Billions of dollars poured into the tech has gotten us a bunch of aggressive upstarts who think they can just ignore the fatalities as the money comes pouring in, and lie to our faces about the capabilities of the technology. These companies need to be driven off a cliff and buried in criminal cases. They should not be protected by the shield of corporate personhood; they should be put on trial. But here we fucking are now...

[–] [email protected] 1 points 11 months ago* (last edited 11 months ago) (1 children)

As somebody who is fairly well-versed in the tech and has done more than just play around with ChatGPT

Lol. See above. And below. Being "fairly well versed" in ChatGPT gives you basically zero expertise in this field. LLMs are a tiny little sliver in the ocean of AI. No one uses LLMs to drive cars. They're LANGUAGE models. This doesn't translate. Like, at all.

Experts in the AI field know much more than some random person who has experimented with a "fad" of an online tool that gained massive popularity in the past year. This field is way bigger than that and you can't extrapolate LLMs to driving cars.

I can tell you that self-driving AI is not going to be here for at least another 40-50 years. The challenges are too great, and the act of driving a car takes a lot of effort for even a human to achieve.

This is a fucking ludicrous statement. Some of these systems are already outperforming human drivers. You have your head in the sand. Tesla and Cruise are notoriously poor performers, but they're the ones in the public eye.

When we have these cars that are sabotaged by a simple traffic cone on the windshield, or that mistake a tractor-trailer for the sky,

If you don't understand how minor these problems are in the scheme of the system, you have no idea how any of this works. If you do some weird shit to a car, like planting an object on it that normally wouldn't be there, then I fucking hope to God the thing stops. It has no idea what that means, so it fucking better stop. What do you want from it? To keep driving around doing its thing when it doesn't understand what's happening? What if the cone then falls off as it drives down the highway? Is that a better outcome? What if that thing on its windshield it doesn't recognize is a fucking human? Stopping is literally exactly what the fucking car should do. What would you do if I put a traffic cone on your windshield? I hope you wouldn't keep driving.

When we have these cars that are sabotaged by a simple traffic cone on the windshield, or that mistake a tractor-trailer for the sky, then we know that this tech is far worse than human drivers

This is just a fucking insane leap. The fact that they are still statistically outperforming humans while still having these problems says a lot about just how much better they are.

Level 5 autonomous driving is simply not a thing, and it won't be for a long time.

Level 5 is just a harder problem. We've already reached level 4. If you think level 5 is going to take more than another ten to fifteen years, you're fucking insane.
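For readers unfamiliar with the levels being argued over, here is a short sketch of the SAE J3016 driving-automation scale. The descriptions are paraphrased, not the standard's exact wording, and the helper function is just an illustrative summary.

```python
# SAE J3016 driving-automation levels, paraphrased.

SAE_LEVELS = {
    0: "No automation: the human does everything",
    1: "Driver assistance: steering OR speed support (e.g. adaptive cruise)",
    2: "Partial automation: steering AND speed, driver must supervise",
    3: "Conditional automation: system drives, human must take over on request",
    4: "High automation: no human fallback, but only within a limited domain",
    5: "Full automation: drives anywhere a human could, no restrictions",
}

def requires_human_fallback(level: int) -> bool:
    """Levels 0-3 still rely on a human to supervise or take over."""
    return level <= 3

for lvl, desc in SAE_LEVELS.items():
    print(f"Level {lvl}: {desc}")
```

The jump from 4 to 5 is the removal of the "limited domain" restriction, which is exactly why the geofenced level-4 systems running today don't settle the level-5 argument either way.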

Billions of dollars poured into the tech has gotten us a bunch of aggressive upstarts who think they can just ignore the fatalities as the money comes pouring in, and lie to our faces about the capabilities of the technology. These companies need to be driven off a cliff and buried in criminal cases. They should not be protected by the shield of corporate personhood; they should be put on trial. But here we fucking are now...

This paragraph actually makes sense. It's the one redeeming chunk of your entire post; everything else is just bullshit. But yes, this is a serious problem. Unfortunately people can't see the nuance in stuff like this, and when they see it they go straight to "AI BAD! AUTONOMOUS VEHICLES ARE A HUGE PROBLEM! THIS IS NEVER HAPPENING!".

Yes, there are fucking crazy companies doing absolutely crazy shit. That's the same in every industry. The only reason many of these companies exist and are allowed to operate is because companies like Google/Waymo slowly pushed this stuff forward for many years and proved that cars could drive autonomously on public roads without causing massive safety concerns. They won the trust of legislators and got AI on the road.

And then came the fucking billions in tech investment, in companies that have no idea what they're doing, putting shit on the road under the same legislation without the same levels of internal responsibility and safety. They have essentially abused the good faith won by their predecessors, and the governing bodies need to fix this shit yesterday to get this dangerous shit off the streets. Thankfully that's getting attention NOW, and not only after things get worse.

But slandering the whole fucking industry and claiming all AI or autonomous vehicles are bad is just going off the deep end.

[–] [email protected] -2 points 11 months ago

I really appreciate you saying the things I wanted to say, but more clearly and drawn from far more domain experience and expertise than I have.

I hope that you will be willing to work on avoiding language that stigmatizes mental health, though. When talking about horribly unwise and unethical behavior, ableism is basically built into our language. It's easy to pull from words like "crazy" when talking about problems.

But in my experience, most times people use "crazy" they're actually talking about failures that can be much more concretely attributed to systems of oppression and how those systems lead individuals to:

- Devalue the lives of their fellow human beings.
- Ignore input from other people they see as "inferior".
- Overvalue their own superiority and "genius".
- Generally avoid accountability and dissenting opinions.

I feel like this discussion in particular really highlights those causes, and not anything related to mental health or intellectual disability.
