this post was submitted on 21 May 2024
120 points (86.6% liked)

[–] [email protected] 203 points 5 months ago* (last edited 5 months ago) (5 children)

Short term yes; long term probably not. All the dipshit c-suites pushing the “AI” worker replacement initiatives are going to destroy their workforces and then realize that LLMs can’t actually reliably replace any of the workers they fired. And I love that for management.

[–] [email protected] 85 points 5 months ago (2 children)

They're gonna realize the two jobs it can actually replace are HR and the C-suite.

[–] [email protected] 46 points 5 months ago (1 children)

And neither of those two groups will allow themselves to be replaced.

[–] [email protected] 20 points 5 months ago

Yeah, HR gets by because of legal compliance, and execs get by by convincing the board to give them X years and then jumping to the next one.

[–] [email protected] 23 points 5 months ago (6 children)

Lol AI cannot replace either of those jobs. "I'm sorry I can't help with your time off request but here is a gluten free recipe for a pie that feeds 30 people."

[–] [email protected] 30 points 5 months ago (1 children)

You're right, that sounds better than the average HR rep.

[–] [email protected] 1 points 5 months ago

And better than the average pie 😋

[–] [email protected] 17 points 5 months ago

"Help" with a time-off request?

Here's the help:

Sure!

[–] [email protected] 14 points 5 months ago* (last edited 5 months ago)

It won't replace any jobs entirely, it will just reduce the number of people needed for each job.

Not that there's much difference if you're the one being made redundant.

[–] [email protected] 6 points 5 months ago

Well at least you'd get a recipe

[–] [email protected] 5 points 5 months ago (1 children)

I'd unironically like that recipe, please

[–] [email protected] 1 points 5 months ago

Ingredients: 1 potato

Steps:

Cut into 30 pieces and serve

[–] [email protected] 4 points 5 months ago (2 children)

I bet project managers could be replaced with AI super easily; I mean, all they have to do is respond to every message with 👍

[–] [email protected] 6 points 5 months ago

Then you don't have good project managers.

[–] [email protected] 3 points 5 months ago* (last edited 5 months ago)

At least then the project plan would get updated and tasks opened on time.....

[–] [email protected] 3 points 5 months ago (1 children)

It can potentially allow 1 worker to do the job of 10. For 9 of those workers, they have been replaced. I don't think they will care that much for the nuance that they technically weren't replaced by AI, but by 1 co-worker who is using AI to be more efficient.

That doesn't necessarily mean that we won't have enough jobs any more, because when in human history have we ever become more efficient and said "ok, good enough, let's just coast now"? We will just increase the ambition and scope of what we will build, which will require more workers working more efficiently.

But that still really sucks because it's not going to be the same exact jobs and it will require re-training. These disruptions are becoming more frequent in human history and it is exhausting.

We still need to spread these gains so we can all do less and also help those whose lives have been disrupted. Unfortunately that doesn't come for free. When workers got the 40 hour work week it was taken by force.

[–] [email protected] 7 points 5 months ago (1 children)

My colleagues are starting to use AI, it just makes their code worse and harder to review. I honestly can’t imagine that changing, AI doesn’t actually understand anything.

[–] [email protected] 3 points 5 months ago (1 children)

This comment has similar vibes to a boomer in the 80s saying that the Internet is useless and full of nothing but nerds arguing on forums, and he doesn't see that changing.

[–] [email protected] 1 points 5 months ago* (last edited 5 months ago) (1 children)

Probably. I'm just not seeing it actually doing any logic or problem solving. It’s a pattern matching machine today. A new technology could certainly happen.

[–] [email protected] 1 points 5 months ago

Do you know what pattern matching is great for? Finding commonly cited patterns in long debug log messages. LLMs are great for brainstorming while problem solving. They're basically word-granularity search engines, so they're great for looking up more niche knowledge that document search engines fail on. If the thing you're trying to look up doesn't exist, it will make shit up, so you need to cross-reference everything, but it's still incredibly helpful. Pattern matching is also great for boilerplate. I use the Codium extension and it comes up with autocomplete suggestions that don't have much logic, but save a good amount of keystrokes.
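
For example, the "word-granularity search" over a long debug log can be as little code as the sketch below. This is purely illustrative, assuming the official OpenAI Python client and an API key in the environment; the model name and prompt are made up for the example, not anything specific I actually run.

```python
# Minimal sketch: ask an LLM to group recurring error patterns in a debug log.
# Assumes the official OpenAI Python client (pip install openai) and
# OPENAI_API_KEY set in the environment; the model name is only an example.
from openai import OpenAI

client = OpenAI()

def summarize_log_errors(log_text: str) -> str:
    # Keep the prompt focused: ask for recurring patterns, not a definitive fix.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Group the recurring error patterns in this debug log "
                        "and list the most frequent ones with a one-line guess "
                        "at the likely cause."},
            {"role": "user", "content": log_text[:20000]},  # crude truncation
        ],
    )
    return response.choices[0].message.content

# Everything it returns still needs cross-referencing against the real code,
# exactly as noted above.
```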

I don't think the foundational tech of LLMs is going to get substantially better, but we will develop programming patterns that make them more robust and reliable.
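
A sketch of the kind of programming pattern I mean: validate whatever the model returns and retry with the failure fed back, so one hallucinated reply doesn't silently propagate. `ask_llm` here is a hypothetical stand-in for whatever client call you actually use, not any specific library's API.

```python
# Sketch of a robustness pattern: constrain the output, validate it,
# and retry with the validation error appended to the prompt.
import json

def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in: wrap your real chat-completion call here.
    raise NotImplementedError

def get_structured_answer(prompt: str, required_keys: set, retries: int = 3) -> dict:
    attempt = prompt + "\nRespond with a single JSON object."
    for _ in range(retries):
        raw = ask_llm(attempt)
        try:
            data = json.loads(raw)
            missing = required_keys - set(data)
            if not missing:
                return data
            error = f"missing keys: {sorted(missing)}"
        except json.JSONDecodeError as exc:
            error = f"invalid JSON: {exc}"
        # Feed the validation failure back so the next attempt can correct it.
        attempt = (prompt + f"\nYour previous reply failed validation ({error}). "
                   "Respond with only a valid JSON object.")
    raise ValueError("model never produced a valid response")
```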

[–] [email protected] 3 points 5 months ago (2 children)

You're referring to something that is changing and getting better constantly. In the long term LLMs are going to be even better than they are now. It's ridiculous to think that it won't be able to replace any of the workers that were fired. LLMs are going to allow 1 person to do the job of multiple people. Will it replace all people? No. But even if it allows 1 person to do the job of 2 people, that's 50% of the workforce unemployed. This isn't even mentioning how good robotics have gotten over the past 10 years.

[–] [email protected] 22 points 5 months ago (1 children)

You must have one person constantly checking for hallucinations in everything that is generated: how is that going to be faster?

[–] [email protected] -4 points 5 months ago* (last edited 5 months ago) (3 children)

Sure, you sort of need that at the moment (not actually everything, but I get your hyperbole), but you seem to be working under the assumption that LLMs are not going to improve beyond what they are now. The tech is still very much in its infancy, and as it matures this will be needed less and less, until it only requires a few people to manage LLMs that solve the tasks of a much larger workforce.

[–] [email protected] 7 points 5 months ago

It's hard to improve when the data going in is human-generated and the data coming out can't be error-checked against anything but that same input. It's like trying to solve a math problem with two calculators that both think 2+2 = 6 because the data they were given says it's true.

[–] [email protected] 2 points 5 months ago

(not actually everything, but I get your hyperbole)

How is it hyperbole? All artificial neural networks have "hallucinations", no matter their size. What's your magic way of knowing when that happens?

[–] [email protected] 0 points 5 months ago

LLMs now are trained on data generated by other LLMs. If you look at the "writing prompt" stuff, 90% of it is machine-generated (or so bad that I assume it's machine-generated), and that's the data that is being bought right now.

[–] [email protected] 2 points 5 months ago

There is a plateau to be hit at some point. How close it is depends on who you ask: some say we are close, others say we are not, but it definitely exists. LLMs suffer, just like other forms of machine learning, from diminishing returns on data. You simply can't keep feeding them more data indefinitely and keep getting better and better results. ChatGPT's models got famous because the value function for learning had humans in the loop who helped curate the quality of responses.

[–] [email protected] 1 points 4 months ago (1 children)

and then realize that LLMs can’t actually reliably replace any of the workers they fired.

Depends on the job. Reliability is not really that important to these companies. An LLM can be imperfect and cost them money, but nowhere near as much as a human costs them, and it'll probably do the job better than the majority of the people it replaces.

[–] [email protected] 0 points 4 months ago

Short term? Sure.

Long term? Not a chance that equation works out favorably.

But then again, c-suites these days only seem to give a shit about short-term implications.