this post was submitted on 21 Sep 2024
403 points (97.6% liked)

Technology


Modern AI data centers consume enormous amounts of power, and it looks like they will get even more power-hungry in the coming years as companies like Google, Microsoft, Meta, and OpenAI strive towards artificial general intelligence (AGI). Oracle has already outlined plans to use nuclear power plants for its 1-gigawatt datacenters. It looks like Microsoft plans to do the same as it just inked a deal to restart a nuclear power plant to feed its data centers, reports Bloomberg.

[–] [email protected] 41 points 1 month ago (4 children)

I'm sure that in a couple of years, when generative LLM AI goes the way of the NFT, everyone will recognize what a great idea this was.

[–] [email protected] 38 points 1 month ago (2 children)

Honestly, it probably is a great idea regardless. The plant operated profitably for a very long time, and I'm sure it can again with some maintenance and upgrades. People only know Three Mile Island for the (not-so-disastrous) disaster, but the rest of the plant operated for decades afterward without any issues.

[–] [email protected] 15 points 1 month ago (2 children)

> with some maintenance and upgrades.

Hopefully we can trust these tech bros to do that properly, without their usual "move fast and break things" approach.

[–] [email protected] 18 points 1 month ago

They are only buying 100% of the output. The old owners still own and operate the plant.

[–] [email protected] 2 points 1 month ago

And if they do skimp on maintenance and upgrades and the plant melts down, we can be assured that no harm will come to the company, because a disaster on that scale would wipe them out and they're "too big to fail."

[–] [email protected] 5 points 1 month ago (1 children)

It's one hell of an old nuclear plant if it's the original Three Mile Island one.

[–] [email protected] 7 points 1 month ago

It is, yeah. It was in operation until 2019.

[–] [email protected] 8 points 1 month ago

Once the plant is operational, the energy it generates is cheap and will still be in demand.

[–] [email protected] 7 points 1 month ago

LLMs have real uses, even if they're being overhyped right now. Even if they do fail, though, more nuclear power is a great outcome.

[–] [email protected] 6 points 1 month ago* (last edited 1 month ago) (2 children)

NFTs were a scam from the start: something with no actual purpose, utility, or value being given value through hype.

Generative AI is very different. Honestly, you have to have your head in the sand if you don't believe AI is going to keep incrementally improving and expanding in capability, just like it has year over year for the last 5 to 10 years, and just like it has kept solving more and more real-world problems in increasingly effective ways over the last decade.

It isn't constrained to LLMs, either.

[–] [email protected] 7 points 1 month ago (3 children)

The people who created the LLM boom have said they cannot improve it any further with the current techniques due to diminishing returns.

It's worthless in its current state.

Should be dying out faster imo.

[–] [email protected] 4 points 1 month ago* (last edited 1 month ago) (1 children)

One of the major problems with LLMs is that it's a "boom". People are rightfully soured on them as a concept because jackasses trying to make money lie about their capabilities and utility -- never mind the ethics of how the datasets used to train them were obtained.

They're absolutely limited and flawed, and there are better solutions for most problems ... but beyond the bullshit, LLMs are a useful tool for some problems and they're not going away.

[–] [email protected] 2 points 1 month ago* (last edited 1 month ago) (1 children)

I cannot think of one single application where an LLM is better than, or even equivalent to, having a person do the job. Its only real use is to trade human workers for cheaper but inferior output, to the detriment of mankind as a whole, because we have labor in excess and power in shortage.

[–] [email protected] 0 points 1 month ago (1 children)

There are jobs that it's not feasible or practical to pay an actual human to do.

Human translators exist and are far superior to machine translators. Do you hire one every time you need something translated in a casual setting, or do you use something like Google Translate? LLMs are the reason modern machine translation is infinitely better than it was a few years ago.
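To make the machine-translation point concrete, here's a minimal sketch of running a small, freely available translation model locally via the Hugging Face transformers pipeline. The model choice and example sentence are illustrative assumptions, not anything cited in this thread:

```python
# Minimal local machine-translation sketch (illustrative only).
# Assumption: "Helsinki-NLP/opus-mt-en-de" is just one example of a small,
# openly available translation model; it is not mentioned in the thread.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de")
result = translator("The power plant will supply the data center.", max_length=64)
print(result[0]["translation_text"])
```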

[–] [email protected] 0 points 1 month ago

Google Translate was functional BEFORE LLMs were a hit, arguably more so, and we had datasets of human language which are now polluted by AI, making it harder to build dictionaries now than it was before.

[–] [email protected] 2 points 1 month ago (1 children)

That's one group's opinion. We still see LLMs improving, and I'm sure they will continue to improve and be adapted for whatever future uses we need them for. I mean, I personally find them great in their current state for what I use them for.

[–] [email protected] 3 points 1 month ago (2 children)

What skin do you have in this game? Leading industry experts, who btw want to SELL IT TO YOU, have told you it has hit a ceiling. Why do you dispute it so much? Let it die; we will all be better off.

[–] [email protected] 0 points 1 month ago (1 children)

I use them regularly for personal and work projects; they work great at outlining what I need to do in a project as well as identifying oversights in it. If industry experts are saying this, then why are improvements still being made, and why are they still providing value to people? Just because you don't use them doesn't mean they aren't useful.

[–] [email protected] 1 points 1 month ago* (last edited 1 month ago)

Maybe you saw the news about a major hit to US cybersecurity caused by morons like you copy-pasting from the GeePeeTee? Or about a wave of falsified research papers generated by AI? Or how a lawyer tried to use an AI assistant, resulting in fines and a review by the bar?

[–] [email protected] -1 points 1 month ago (1 children)

Even if it didn't improve further, there are still uses for the LLMs we have today. And that's only one kind of AI; the kind that makes all the images and videos is completely separate, and that has come a long way too.

[–] [email protected] -1 points 1 month ago (1 children)

I made this chart for you:

------ Expectations for AI

----- LLMs' actual usefulness

----- What I think of it

----- LLMs' usefulness after accounting for costs

[–] [email protected] 0 points 1 month ago (1 children)

Bruh, you have no idea about the costs. I doubt you have even tried running AI models on your own hardware. There are literally some models that will run on a decent smartphone. Not every LLM is ChatGPT, enormous in size and resource consumption and hidden behind a veil of closed-source technology.

Also, that trick isn't going to work just from looking at a comment. Lemmy compresses whitespace because it uses Markdown; it only shows the extra lines when replying.

Can I ask you something? What did Machine Learning do to you? Did a robot kill your wife?

[–] [email protected] 1 points 1 month ago (1 children)

Earlier this year, the International Energy Agency released its energy usage report and forecast, predicting that the total global electricity consumption of data centers is set to top 1 PWh (petawatt-hour) in 2026. That more than doubles its 2022 value and (as the report states) "is equivalent to the electricity consumption of Japan." SOURCE

It does fuck all for me except make art and customer service worse on average, but yes, it certainly will result in countless avoidable deaths if we don't heavily curb its usage soon, as it is projected to quintuple its power draw by 2029.
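As a rough back-of-the-envelope sketch using only the figures quoted above (the 2022 baseline is inferred from "more than doubles"; it is not stated directly):

```python
# Back-of-the-envelope conversion of the IEA figures quoted above.
# Assumption: the 2022 baseline is only implied by "more than doubles",
# not stated directly in the comment.

PWH_TO_TWH = 1_000  # 1 petawatt-hour = 1,000 terawatt-hours

forecast_2026_pwh = 1.0                       # IEA data-center forecast for 2026
implied_2022_max_pwh = forecast_2026_pwh / 2  # "more than doubles" => 2022 was below ~0.5 PWh

print(f"2026 forecast: {forecast_2026_pwh * PWH_TO_TWH:,.0f} TWh")
print(f"2022 implied:  under {implied_2022_max_pwh * PWH_TO_TWH:,.0f} TWh")
```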

[–] [email protected] 1 points 1 month ago (1 children)

I am not talking about things like ChatGPT, which rely more on raw compute and scaling than some other approaches do and are hosted in massive data centers. I actually find that approach wasteful as well. I am talking about some of the open-weight models that use a fraction of the resources for a similar quality of output. According to some industry experts, that will be the way forward anyway, as purely making models bigger has limits and is hella expensive.

Another thing to bear in mind is that training a model is more resource-intensive than using it, though that's also been worked on.
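For a sense of what "a fraction of the resources" can look like in practice, here's a minimal sketch of running a small quantized open-weight model locally with llama-cpp-python; the model path and prompt are placeholders, not anything specified in this thread:

```python
# Minimal local-inference sketch with llama-cpp-python (pip install llama-cpp-python).
# Assumption: the GGUF path below is a placeholder; any small quantized
# open-weight model in GGUF format would do.
from llama_cpp import Llama

llm = Llama(model_path="models/small-model-q4.gguf", n_ctx=2048)  # placeholder path
out = llm(
    "In one sentence, why can a restarted, already-built power plant produce cheap energy?",
    max_tokens=64,
)
print(out["choices"][0]["text"].strip())
```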

[–] [email protected] -1 points 1 month ago* (last edited 1 month ago) (1 children)

You put power in and you get worthless garbage out. Do the world a favor and just mine crypto, try FoldingCoin out.

[–] [email protected] 0 points 1 month ago (1 children)

I've seen teachers use this stuff and get actually decent results. I've also seen papers where people use LLMs to hack into a computer, which is a damn sophisticated task. So you are either badly informed or just lying. While LLMs aren't perfect and aren't a replacement for humans, they are still very much useful. To believe otherwise is folly and shows your personal bias.

[–] [email protected] 0 points 1 month ago

Anybody who uses a bullshit generator in any step of the education process is unqualified.

[–] [email protected] -1 points 1 month ago (1 children)

There are always new techniques and improvements. If you look at the current state of things, we haven't even seen a slowdown yet.

[–] [email protected] 1 points 1 month ago* (last edited 1 month ago)
[–] [email protected] 1 points 1 month ago

I suspect you're right. But there's really never a good way to tell with these kinds of experimental tech. It could be a runaway chain of improvement. Or it's probably even odds that there's a visible and clear decline before it peters out, or that it just suddenly slams into a brick wall with no warning.