this post was submitted on 05 Jun 2024
43 points (82.1% liked)

No Stupid Questions


I saw people complaining that companies have yet to find the next big thing with AI, but I am already seeing countless products offering good solutions for almost every field imaginable. What is this thing the tech industry is waiting for, and what are all these current products if not what they had in mind?

I am not great at understanding the business point of view of this situation and I have been out of the news loop for a long time, so I would really appreciate it if someone could ELI5.

top 35 comments
[–] [email protected] 50 points 5 months ago (5 children)

Here's a secret. It's not true AI. All the hype is marketing shit.

Large language models like GPT, llama, and Gemini don't create anything new. They just regurgitate existing data.

You can see this when chatbots keep giving the same two pieces of incorrect information. They have no concept that they are wrong.

Until an LLM can understand why it is wrong, we won't have true AI.

[–] [email protected] 20 points 5 months ago (1 children)

It's just a stupid probability bucket. The term AI shits me.

[–] [email protected] 9 points 5 months ago

Statistical methods have been a longstanding mainstay in the field of AI since its inception. I think the trouble is that the term AI has been co-opted for marketing.

[–] [email protected] 17 points 5 months ago* (last edited 5 months ago) (1 children)

It is true AI, it's just not AGI. Artificial General Intelligence is the sort of thing you see on Star Trek. AI is a much broader term and it encompasses large language models, as well as even simpler things like pathfinding algorithms or OCR. The term "AI" has been in use for this kind of thing since 1956, it's not some sudden new marketing buzzword that's being misapplied. Indeed, it's the people who are insisting that LLMs are not AI that are attempting to redefine a word that's already been in use for a very long time.

You can see this when chatbots keep giving the same two pieces of incorrect information. They have no concept that they are wrong.

Reminds me of the classic quote from Charles Babbage:

"On two occasions I have been asked, – "Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?" ... I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question"

How is the chatbot supposed to know that the information it's been given is wrong?

If you were talking with a human and they thought something was true that wasn't actually true, do you not count them as an intelligence any more?

[–] [email protected] 5 points 5 months ago

If you were talking with a human and they thought something was true that wasn't actually true, do you not count them as an intelligence any more?

If they refuse to learn and change their belief? Absolutely.

[–] [email protected] 10 points 5 months ago (1 children)

That's not a secret. The industry constantly talks about the difference between LLMs and AGI.

[–] [email protected] 14 points 5 months ago (1 children)

Until a product goes through marketing and they slap 'Using AI' into the blurb when it doesn't even use one.

[–] [email protected] 8 points 5 months ago (1 children)

LLMs are AI. They are not AGI. AGI is a particular subset of AI, that does not preclude non-general AI from being AI.

People keep talking about how it just regurgitates information, and says incorrect things sometimes, and hallucinates or misinterprets things, as if humans do not also do those things. Most people just regurgitate information they found online, true or false. People frequently hallucinate things they think are true and stubbornly refuse to change when called out. Many people cannot understand when and why they're wrong.

[–] [email protected] 2 points 5 months ago (1 children)

People can also stop saying words and think for a second about the information they're actually saying first, whereas an LLM just vomits up words that seem to match the pattern of the rest of the sentence. If I were to ask you what 2 + 2 is, you'd stop, run the math in your head, get 4, then reply with 4. An LLM would just start vomiting out words based on what it's been trained on without verifying that the information is good (or even relevant), and can end up confidently telling you that 2 + 2 is in fact equal to the cube root of 5 because that's what the data said so it has to be right, for instance.

I'm aware this is a drastic oversimplification, and I think the tech is neat (although I avoid non-self-hosted models like the plague due to privacy concerns), but it's oversold to all hell, and is definitely not even close to intelligent.
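
To make the "vomits up words that match the pattern" picture a bit more concrete, here is a deliberately toy Python sketch of greedy next-word selection; the vocabulary and counts are invented for illustration and are nothing like a real model's learned weights over subword tokens:

from collections import Counter

# Toy "model": counts of which word followed which word in some training text.
# Real LLMs learn weights over subword tokens; "pick the likely continuation" is the shared idea.
bigram_counts = {
    "2": Counter({"+": 10, "is": 3}),
    "+": Counter({"2": 12}),
    "is": Counter({"4": 5, "the": 4}),
}

def next_word(prev):
    # Pick the statistically most common continuation; nothing checks whether it is *true*.
    return bigram_counts[prev].most_common(1)[0][0]

sentence = ["2"]
for _ in range(4):
    prev = sentence[-1]
    if prev not in bigram_counts:
        break
    sentence.append(next_word(prev))

print(" ".join(sentence))  # "2 + 2 + 2" -- a plausible-looking pattern, never verified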

[–] [email protected] 1 points 5 months ago

You haven't really looked into multi-agent setups at all, have you? Basically any system of multiple agents can double-check themselves.

Additionally, none of this conflicts with my original point. If you train a human on bad data, they'll GIGO too. I know plenty of humans who have confidently told me objectively false things because they had bad training data.
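
A minimal sketch of that generate-then-review loop, assuming a placeholder ask_llm function standing in for whatever model call (hosted API, local llama.cpp server, etc.) a real setup would use; the two-role structure is illustrative, not any particular framework's API:

def ask_llm(prompt: str) -> str:
    """Placeholder for the actual model call used in a real deployment."""
    raise NotImplementedError

def answer_with_review(question: str, max_rounds: int = 2) -> str:
    answer = ask_llm(f"Answer the question concisely:\n{question}")
    for _ in range(max_rounds):
        # A second "critic" pass reviews the first answer instead of trusting it blindly.
        verdict = ask_llm(
            "You are a strict reviewer. Reply APPROVE if the answer is correct "
            "and well supported, otherwise explain the error.\n"
            f"Question: {question}\nAnswer: {answer}"
        )
        if verdict.strip().startswith("APPROVE"):
            return answer
        answer = ask_llm(
            "Revise the answer using this critique.\n"
            f"Question: {question}\nAnswer: {answer}\nCritique: {verdict}"
        )
    return answer

Whether the critic actually catches errors still depends entirely on the underlying model; the structure only gives it a second chance.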

[–] [email protected] 8 points 5 months ago (1 children)

Large language models like GPT, llama, and Gemini don’t create anything new

That's because it is a stupid use case. Why should we expect AI models to be creative, when that is explicitly not what they are for?

[–] [email protected] -5 points 5 months ago

They are creative, though:

They put things that are "near" each-other into juxtaposition, and sometimes the insights are astonishing.

The AI's don't understand anything, though: they're like bacteria-instinct: total autopilot.

The real problem is that we humans aren't able to default to understanding such non-understanding apparent-someones.

We've created a "hack" of our entire mental-system, and it is the money-profit-rules-the-world group which controls its evolution.

This is called "Darwin Award territory", at the species-scale.

No matter:

The Great Filter is what happens when a world-species hasn't grown up but gains adult-level technology (nukes, entire-country-destroying militaries, biotech, neurotoxins, immense industrial toxic wastelands like the former USSR, accountability-denial mechanisms in all corporate "persons", etc.):

you have a toddler with a loaded gun, & killing can happen.

"there's no such thing as a dangerous gun: only a dangerous man", as the book "Starship Troopers" pushed..

Toddlers with guns KILL people in the US.

AI's our "gun", & narcissistic-sociopathy's our "toddler commanding the ship" nature.

Maybe we should rename Earth to "The Titanic", for honesty's sake..

_ /\ _

[–] [email protected] 5 points 5 months ago (1 children)

I have different weights for my two dumbbells and I asked ChatGPT 4.0 how to divide the weights evenly on all 4 sides of the 2 dumbbells. It kept telling me to use 4 half-pound weights instead of my 2-pound weights, and finally, after like 15 minutes, it admitted that, with my set of weights, it's impossible to divide them evenly…
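
For what it's worth, this is the kind of question a few lines of ordinary code settle instantly where the chatbot dithered: brute-force every assignment of plates to the four sides and check whether the totals can match. The plate weights below are hypothetical, since the comment doesn't list the actual set:

from itertools import product

def can_split_evenly(plates, sides=4):
    """Try every assignment of plates to sides; return True if all sides can weigh the same."""
    target = sum(plates) / sides
    for assignment in product(range(sides), repeat=len(plates)):
        totals = [0.0] * sides
        for plate, side in zip(plates, assignment):
            totals[side] += plate
        if all(abs(t - target) < 1e-9 for t in totals):
            return True
    return False

# Hypothetical plate set (in pounds); any side holding a 2 lb plate overshoots the
# 1.5 lb target, so an even split is impossible for this set.
print(can_split_evenly([2, 2, 0.5, 0.5, 0.5, 0.5]))  # False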

[–] [email protected] 8 points 5 months ago (1 children)

You used an LLM for one of the things it is specifically not good at. Dismissing its overall value on that basis is like complaining that your snowmobile is bad at making its way up and down your basement stairs, and so it is therefore useless.

[–] [email protected] 4 points 5 months ago* (last edited 5 months ago)

You are totally right! Sadly, people think that LLMs are able to do all of these things…

[–] [email protected] 28 points 5 months ago

Disclaimer: I currently work in the field, not on the fundamental side of things but I build tooling for LLM-based products.

There are a ton of true uses for newer AI models. You can already see specialized products getting mad traction in their respective niches, and the clients are very satisfied with them. It's mostly boring stuff, legal/compliance like Hypercomply or accounting like Chaintrust. It doesn't make headlines but it's obvious if you know where to look.

[–] [email protected] 23 points 5 months ago* (last edited 5 months ago)

The most successful applications (e.g. translation, medical image processing) aren’t marketed as “AI”. That term seems to be mostly used for more controversial applications, when companies want to distance themselves from the potential output by pretending that their software tools have independent agency.

[–] [email protected] 22 points 5 months ago (1 children)

You're falling into a no true Scotsman fallacy. There are plenty of uses for recent AI developments, I use them quite frequently myself. Why are those uses not "true" uses?

[–] [email protected] 14 points 5 months ago

Because by design, once an AI implementation finds a use, it changes names. It has to; it's just how marketing this stuff works. We don't use writer AI, we have predictive text; we don't have vision AI, we have enhanced imaging cancer diagnosis; we don't have meeting AI, we have automatic transcription; we don't have voice AI, we have software dictation. And this is not exclusive to AI; all fields of technology research follow the same pattern. Because selling AI is a grift. No matter how much you want to fold it, it's the same thing as selling NFTs or blockchain or any of the previous tech grifts: solutions without problems. No one actually has a use for a fancy chatbot. And when they do and get a nice chatbot going, they won't call it AI, because AI is associated with grifts and no one wants that perception problem. But when you actually make a product that solves a problem, you sell that product; you stop selling AI. Also, AI is way larger than the current stream of LLMs.

[–] [email protected] 17 points 5 months ago

"recent AI developments"

so, you just want to talk about the current batch of narrow AI LLMs?

or are you open to all the graphics/video editing stuff? (Topaz's quality is pretty amazing)

it's a lot better than "is hotdog".

it's also slow.

remember, all these systems do is take a bunch of data in and guess until they get it right, then, based on that, process more data and so on (a rough sketch of that loop follows at the end of this comment).

Have you ever read the story about the AI tank from the 90s?

https://gwern.net/tank

short version of the story is: computer was fed a bunch of pictures. some with tanks, some without. after a while, it got great at identifying them.

when they tried it out with a tank, it kept shooting at trees.

turns out, all the pics with tanks were taken in the shade.

now, like I said: story.

but the point is, this is something that's been worked on for decades. it's as much a problem of what to teach as of how to teach it.

so, to be clear: there are LOTS of "true uses". the issue is "they aren't ready yet".

we're just playing around with beta versions (effectively) while still being amazed at how far they've come.
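
As a rough sketch of that "guess, check, adjust, repeat" loop, here is gradient descent on a one-parameter model in plain Python; real systems run the same idea over billions of parameters and examples, but nothing conceptually different is happening:

# Fit y = w * x to a handful of points by repeatedly guessing and nudging w.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (input, target) pairs

w = 0.0    # initial guess
lr = 0.01  # how hard to nudge after each mistake
for step in range(2000):
    grad = 0.0
    for x, y in data:
        pred = w * x                # guess
        grad += 2 * (pred - y) * x  # how wrong, and in which direction
    w -= lr * grad / len(data)      # adjust, then go around again

print(round(w, 3))  # ~2.04, the slope that best fits these points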

[–] [email protected] 15 points 5 months ago* (last edited 5 months ago)

Between OCR and LLM, summarising scanned things (something I do ~20% of the time) has about halved in terms of mental effort and time. As I'm paid on billable hours, this is big for me. I have told nobody and have not increased my overall output commensurately. This is the only good kind of automation I've observed: bottom-up, no decrease in compensation, no negotiations.

I tried FreedomGPT for better personal ownership, but for now, the hardware isn't up to snuff for my needs. With stronger processing and somewhat better open source models I'll be sitting pretty.
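
The scanned-document workflow described above is essentially a two-stage pipeline: OCR, then summarise. A minimal sketch, assuming pytesseract for the OCR step and leaving the summarisation call as a placeholder rather than tying it to any particular product:

from PIL import Image
import pytesseract  # requires a local Tesseract install

def summarise(text: str) -> str:
    """Placeholder: send `text` to whatever local or hosted model you trust with the document."""
    raise NotImplementedError

def summarise_scan(path: str) -> str:
    raw = pytesseract.image_to_string(Image.open(path))  # OCR the page image to plain text
    prompt = "Summarise the key points of this document in five bullet points:\n\n" + raw
    return summarise(prompt)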

[–] [email protected] 13 points 5 months ago (1 children)

They're looking for something like the internet or smartphones and are disappointed that it's not doing something on that level. Doesn't matter that there's tons of applications in science and art (even if we'd like to ignore the latter).

Or maybe they thought we'd have human level AI by now.

[–] [email protected] 1 points 5 months ago

I'm pretty chuffed with what we have now. It really hasn't been that long that this sort of stuff has even been around, yet the average person can utilize an "AI" in their everyday life without even knowing how to use a computer.

Sure, it's not 100% perfect, but I'll take "stupidly convenient and right 90% of the time" over "takes hours of sifting through blogspam to find useful information that may or may not be correct". Especially when it comes to mundane stuff like writing a resume or things where you have the knowledge, but just not the time.

[–] [email protected] 11 points 5 months ago* (last edited 5 months ago)

Recently I saw AI transcribe a YT video. It was genuinely helpful.

https://lazysoci.al/comment/9866410

[–] [email protected] 7 points 5 months ago

https://en.m.wikipedia.org/wiki/File:Gartner_Hype_Cycle.svg

It's not as helpful as everybody thinks, and slowly people are realizing that.

[–] [email protected] 6 points 5 months ago* (last edited 5 months ago) (1 children)

Current gen AI is pretty mediocre. It's not much more than the bastard child of a search engine and every voice assistant that has been around for the last ten years. It has the potential to be a stepping stone to fantastic future tech, but that's been true of tons of different technologies for basically as long as we've been inventing things.

AI is not good enough to replace the majority of workers yet. It summarizes information pretty well and can be helpful with drafting any sort of document, but so was Clippy. When it doesn't know something it can lie confidently. Lie isn't really the right word but I'll come back to that concept in a second. Incorrect information is frustrating in most cases but it can be deadly when presented by a source that is viewed as trustworthy, and what could be more trustworthy than an AI with access to the collective knowledge of mankind? Well, unfortunately for us AI as we know it isn't really intelligent and the databases they're trained on also contain the collective stupidity of mankind.

That brings us back to the concept of lying and what I view as the fundamental flaw of current AI; namely that any sort of data interpretation can only be as good as the data it describes. ChatGPT isn't lying to you when it says you can put glue on your cheese pizza, it's just pointing out that someone who said that got a lot of attention. Unfortunately it leaves out all the context which could have told you that pizza would not be fit to consume and presents the fact that it was a popular answer as if that is the only thing that defines the best answer. There's so much more that needs to be taken into account, so much unconscious human experience being drawn from when an actual human looks at something and tries to categorize or describe it. All of that necessary context is really difficult to impart to a computer and right now we're not very good at that essential piece of the puzzle.

If we could assume that all datasets analyzed by AI were free from human error, AI would be taking over the world right now. However, that's not the world we live in. All data has errors. Some are easy to spot but many are not. AI firms are getting companies to salivate at the idea of easy manipulation of data in one form or another. They aren't worried about the errors in the data because they view that as someone else's problem and the companies all think their data is good enough that it won't be an issue. Both are wrong. That's exactly why you hear a lot of talk about AI right now and not all that much practical application beyond replacing customer service reps, especially in the business world. Companies are finding out that years of bad practices have left them with a dataset full of errors. Can they find a way to get AI to correct those errors? In some cases yes, in others no. In either case the missing piece preventing a full scale AI takeover is all that human background context necessary for relevant data interpretation. If we find a way to teach that to an AI then the world is going to look vastly different than it does today, but we're not there yet.

[–] [email protected] 2 points 5 months ago

There is truth in statistics. The minor errors are irrelevant in the actual LLM. Problems like the bad Reddit quotes by Google have nothing to do with the actual LLM; that is RAG (retrieval-augmented generation) and just bad standard code. The model itself is learning statistical word associations across millions of instances of similar data. The minor errors are irrelevant in this context.

Generative tools posted online are trash in their controls and especially the depth of capabilities. If you play with an enthusiast level consumer machine, with ComfyUI, the full nodes manager (not just the comfy anonymous repo), and the hundreds of nodes, things change. I've spent the last week reading white papers, following code examples, and trying new techniques. The possibilities are getting exponentially complex in a short period of time. I think most people working on generative AI in the public space are turning inward at the moment because it is hard to grasp all the possibilities, or maybe I'm just not following the right people.

We are in a data grab phase where it is feasible to collect more data as opposed to refining what exists. I think the techniques are growing too fast to say what will be the most efficient way of refining data. Eventually a refinement phase is likely.

Hallucinations are not actually a thing. The reasons they happen are just too complex to explain to a consumer public or no one would use the tool. If you learn about alignment and you really start reading into the tokenizer code, you'll learn that it is just a complex system where most errors are due to safety alignment. The rest are generalizations made for an average use case. The underlying capability is far more complex and nuanced than any publicly hosted stalkerware data mining operation might appear. These real capabilities of the LLM are the building blocks of change. There are many other systems than just the tensor tables and word relationship statistics.
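
Since the comment above distinguishes the model itself from the retrieval layer bolted onto it, here is retrieval-augmented generation stripped to its bones; the keyword-overlap scoring is purely illustrative (real systems use embedding similarity), and ask_llm is again a placeholder for the actual model call:

def ask_llm(prompt: str) -> str:
    raise NotImplementedError  # stand-in for the actual model call

def retrieve(query: str, documents: list[str], k: int = 3) -> list[str]:
    # Rank documents by naive keyword overlap with the query and keep the top k.
    words = set(query.lower().split())
    scored = sorted(documents, key=lambda d: len(words & set(d.lower().split())), reverse=True)
    return scored[:k]

def rag_answer(query: str, documents: list[str]) -> str:
    context = "\n---\n".join(retrieve(query, documents))
    # Whether the retrieved snippets are any good (e.g. joke Reddit posts) is decided
    # here, before the model ever sees them -- which is the commenter's point.
    return ask_llm(f"Answer using only this context:\n{context}\n\nQuestion: {query}")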

[–] [email protected] 4 points 5 months ago* (last edited 5 months ago)

I think most of the media coverage is hype. That doesn't directly answer your question... But I take everything I read with a grain of salt.

Currently, for the tech industry, its main use is to generate hype and drive the speculation bubble. Whether it's useful or not, slapping the word "AI" on things and offering AI services increases the value of your company. And I personally think that if they complain about this, it's because they want the bubble even bigger, but they've already done the most obvious things. But that has nothing to do with "finding a use" in the traditional sense (for the thing itself).

And other inventions came with hype, like smartphones (the iPhone). Everyone wanted one. Lots of people wanted to make cash with that. But still, if it's super new, it's not always obvious at what tasks it excels and what the main benefits are in the long term. At first everyone wants in just because it's cool and everyone else has one. In the end it turned out not every product is better with an app (or Bluetooth). And neither a phone nor AI can (currently) do the laundry and the other chores. So there is a limit to "use" anyway.

So I think the answer to your question of what they had in mind is: what else can we enhance with AI, or just slap the word on, to make people buy more, and to look cool in the eyes of our investors.

I think one of the next steps is the combination with robotics. That will make it quite a bit more useful, like input from sensors so AI can take part in the real world, not just the virtual one. But that's going to take some time. We've already started, but it won't happen overnight. And for the near future I think it's going to be a gradual increase. AI just needs to get more intelligent, make fewer errors, and be more affordable to run. That gradual increase will provide me with a better translation service on my phone, a smart home that I can interact with better, an assistant that can clean up the mess with all the files on my computer, organize my picture folder... But the revolution already happened. I think it's going to be constant but smaller steps/progress from now on.

[–] [email protected] 3 points 5 months ago (1 children)

AI is being used to replace a lot of jobs, but companies usually do not want to advertise that.

There are possibilities for consumer products (e.g. a smarter Alexa or Siri), but those are not monetized, so they cannot generate $100B in revenue from them.

There is the possibility of more innovative products, e.g. a smart Christmas toy, but AI needs a few more years to get there.

[–] [email protected] 6 points 5 months ago (1 children)

AI is being used to replace a lot of jobs, but companies usually do not want to advertise that.

I would be careful with that statement.

I've been involved in some projects about "leveraging data" to reduce maintenance costs. A big pitfall is that you still need someone to do the job. Great, now you know that the "primary pump" is about to break. You still need to send a tech to replace it, you often have to deal with a user who can't afford to turn the system off until the repair is done, and you can't let someone work alone in the area. So you end up having to send 2 people ASAP to repair the "primary pump".

It's a bit better in terms of planning/resources than "send 2 people to diagnose what's going wrong, get the part, and do the repair", because it lets you replace an engineer able to make a diagnosis with technicians able to execute a procedure (which is itself an issue as soon as you have to think outside the box). It allows for more dynamic preventive-maintenance planning. So somehow, it helped cut down maintenance costs and improve system reliability. But in the end, you still need staff to do the repair. And that leaves out all the manpower needed to collect and process the data: hardware engineers figuring out how to integrate sensors into the machines, data engineers building a database able to use that data, data scientists building efficient algorithms, product maintenance experts trying to make sense of the data, and so on.

I feel like a big chunk of AI will be similar, with some jobs being cut (or deskilled) while tons of new jobs take over.
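
The "primary pump" example above boils down to an anomaly flag that still produces a work order for two humans. A minimal sketch, with made-up sensor fields and thresholds standing in for whatever a real condition-monitoring system would use:

from dataclasses import dataclass

@dataclass
class PumpReading:
    pump_id: str
    vibration_mm_s: float  # hypothetical sensor channels
    bearing_temp_c: float

def flag_for_maintenance(reading: PumpReading):
    """Return a work order if the reading looks like an impending failure, else None."""
    if reading.vibration_mm_s > 7.1 or reading.bearing_temp_c > 85.0:  # illustrative limits
        return {
            "pump": reading.pump_id,
            "action": "replace primary pump",
            "staffing": 2,  # the repair itself, and the two-person rule, stay human
            "priority": "ASAP",
        }
    return None

print(flag_for_maintenance(PumpReading("P-101", vibration_mm_s=9.4, bearing_temp_c=78.0)))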

[–] [email protected] 3 points 5 months ago

I'm not sure it's going to be that. That was the model for the last wave of tech advancement layoffs and job replacements. This one is going to be so much dumber.

It's no secret that most companies are stagnant or losing money right now across the board, for many reasons: disposable income is way down, COVID changed mentalities (people decided they wanted to live instead of just consume), and products have just been getting worse. So, CEOs are using AI to replace jobs that AI cannot yet replace. It immediately makes their bottom line look better for investors while doing nothing useful. This will bite them in the ass soon, but they'll say AI was oversold and it's not their fault. Meanwhile, it looks like the nothing they're doing to improve their company is working, and they'll survive another day.

[–] [email protected] 2 points 5 months ago* (last edited 5 months ago)

Nobody's mentioning this but the reason is that when they say 'next big thing' what they mean is 'being able to monetize it and make it profitable'.

They care about usefulness only insofar as it's a way to monetize it. It doesn't need to be useful at all. It's maybe a nice buzzword for the PowerPoint slide when they're trying to convince investors.

But investors aren't idiots and they are usually pretty fucking tuned in to whatever they put money on. And AI is very oversold and overpromised. It's not that great, very difficult to get to do what you want, and very costly to operate, with mostly questionable/untrustworthy results that still require a lot of knowledge to be able to work with. Plus it's begging for a lot of new legislation to protect copyright and privacy. So we need a bunch of idiots with money to make this work, and those are usually in the large tech companies (think the Bezos and Musks of this world). They have the infrastructure and resources to put into it, and then they try to incorporate it into their ecosystem. They'll probably fuck everything up forever and probably make it so LLMs and other models are going to have to be destroyed to be able to comply with legislation.

Anyone with a brain stays away from investing in this, or maybe hedges it a bit and sees what happens... I don't think there are going to be other companies popping up in this space, just the continuing progress of big tech streamlining their current systems until enough people are exposed to enough bullshit to change legislation. Depending on that, maybe some companies will be able to give us something useful, like an AI personal assistant that figures out in the middle of a conversation to put the appointment you made in your calendar, books a table at the restaurant, reroutes you to a gas station because you're low on fuel, and messages your spouse that you're 5 minutes late because of it. All while your privacy is protected and your data secure.

In the meantime we can make pictures of cats in space wearing a clown costume.

[–] [email protected] 0 points 5 months ago

A buddy told me he used AI to mostly author a PowerShell script for something or other automation at his work the other day. Sounded like it was reasonably complex and all he had to do was sanity-check the code and touch it up to make sure it worked correctly. I've barely dabbled in that area, but I was reasonably impressed with the small tasks I threw at it.

[–] [email protected] -1 points 5 months ago

AI seems to be for coders what the PC was for Designers.

We used to have a guy for type, a guy for colours, a copywriter, an art director, and a graphic designer. Now it's all one guy whose responsible for everything start to finish.

[–] [email protected] -1 points 5 months ago

Are you writing a term paper or business plan and looking for ideas? Lol cuz that's how you get ideas. Data mining for business ideas -- an AI function, actually...

AI is seeing a lot of uses. At my job we're using it for Change Control and Requirements Analysis. Legal Discovery was already a thing for LLMs, but this threatens to make contracts simpler to digest. My graduate advisor mentioned how she's using it to grade papers.

These aren't the same use cases as the average person, and may not be using the same products, platforms, or models.