this post was submitted on 02 Sep 2024
39 points (88.2% liked)

solarpunk memes

 
top 46 comments
[–] [email protected] 3 points 2 months ago

There are plenty of applications for machine learning, logic engines, etc. They've been used in many industries since the 1970s.

[–] [email protected] 2 points 2 months ago* (last edited 2 months ago) (2 children)

LLMs have helped me with coding and debugging A LOT. I'd much rather use AI than have to parse Stack Exchange, a bunch of other web forums, or developer documentation directly. AI is incredible: when I get random errors, I paste them in and say "fix this", and it does, and it tells me HOW and WHY it did what it did.
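As a rough sketch of that debugging loop (the prompt wording is my own invention, and you'd swap in whatever chat client or endpoint you actually use), the error-to-prompt step can look like:

```python
import traceback

def error_prompt(exc: BaseException) -> str:
    """Turn a caught exception into a 'fix this and explain HOW and WHY' prompt."""
    tb = "".join(traceback.format_exception(type(exc), exc, exc.__traceback__))
    return "I got this error. Fix it, and tell me HOW and WHY:\n\n" + tb

try:
    {}["missing"]  # deliberately trigger a KeyError
except KeyError as exc:
    prompt = error_prompt(exc)
    # `prompt` would then be pasted into (or sent to) the LLM of your choice
```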

[–] [email protected] 3 points 2 months ago* (last edited 2 months ago)

I keep seeing programmers use this as an example of what LLMs are good for, and I've seen other programmers say that the people who do that are bad programmers. The latter makes sense because trusting an LLM to do this is to fundamentally misunderstand what your job is and how the LLM works.

The LLM can't tell you HOW or WHY because it doesn't know those things. It can only give you an approximation of words that sound like someone explaining HOW and WHY. LLMs have no fidelity.

It could be completely wrong, and you wouldn't know because you've admitted you're using the LLM instead of reading the documentation and understanding yourself.

That is so irresponsible. Just RTFM like good programmers have done forever. It's not that much work if you get into the habit of it. Slow down, take the time to understand HOW and WHY to do things yourself, and make quality code rather than cranking out bigger volumes of crap that you don't understand. I'm sure it feels very productive in the moment but you're probably just creating more work for whoever has to clean up your large quantities of poorly thought out code.

[–] [email protected] 1 points 2 months ago

And it only consumes the equivalent in electricity of what an American house uses for a few years.

[–] [email protected] 2 points 2 months ago (1 children)

I've lately been testing whether AI can give me natural-sounding dialogue practice in Russian. While it didn't sound 100% human (it was too formal and technical), it was good practice.

So I wouldn't say that it can't be used for good things.

[–] [email protected] -1 points 2 months ago (1 children)

Well what good came of it?

[–] [email protected] 2 points 2 months ago

You really don't see how practicing and learning another language could be a good thing?

[–] [email protected] 2 points 2 months ago* (last edited 2 months ago) (1 children)

Yeah... who doesn't love moral absolutism... The honest answer to all of these questions is, it depends.

Are these tools ethical or environmentally sustainable:

AI doesn't just consist of LLMs, which are indeed notoriously expensive to train and run. Running an image generator, for example, can be done on something as simple as a gaming-grade GPU, and other AI technologies are already so lightweight your phone can handle them. Do we assign the same negativity to gaming, even though that's just people using electricity for entertainment? Producing a game also costs a lot more than it does for an end user to play it; it's all about the balance between the two. And yes, AI technologies should rightfully be criticized for being wasteful, such as being implemented in places where they have no business, or forgoing efficiency improvements.

The ethics of AI are also a deeply nuanced topic with no clear consensus, and not every company that works with AI uses it in the same way. Court cases are pending, and none have been conclusive thus far. Implying it is one-sided is just incredibly dishonest.

but do they enable great things that people want?

This is probably the silliest one of them all, because AI technologies are groundbreaking in medical research. They are seemingly pivotal to healing the sick people of tomorrow. And creative AI tools allow people who are creative to be more creative. But these uses are ignored, shoved to the side because they don't fit the "AI bad" narrative, even though we should be acknowledging them and seeing them as allies against the big companies trying to hoard AI technology for themselves. It is those companies that produce problematic AI, not the small artists, creatives, researchers, or anyone else using AI ethically.

but are they being made by well meaning people for good reasons?

Who, exactly? You do realize there are far more parties than Google, Meta, and Microsoft creating AI, right? Companies and groups you've most likely never heard of, creating open-source AI for everyone to benefit from, not just for themselves. It's just incredibly narrow-minded to assign maliciousness to such a large group of people on the basis of what technology they work with.

Maybe you're not being negative enough

Maybe you are not being open-minded enough, or have been blinded by hate. Because this shit isn't healthy; it's echo-chamber behaviour. I have a lot more respect for people who don't like AI but base that on rational reasons. There's plenty that's genuinely bad about AI and has to be addressed, but instead we find ourselves divided between people comfortable with spreading borderline misinformation to get what they want, and genuine people who simply want their voices and concerns about AI heard.

[–] [email protected] -1 points 2 months ago* (last edited 2 months ago)

Can't have nuanced sensible opinions on stuff in this community lol.

[–] [email protected] 2 points 2 months ago (1 children)

I am on an internship with really nice people at a company that does sustainable stuff.

But honestly, they have a list of AI tools they plan to use to make automated presentations... like wtf?

[–] [email protected] -2 points 2 months ago

Maybe you should learn about them and realise that AI is not evil?

[–] [email protected] 2 points 2 months ago (1 children)

Eh, most of the marketing around AI is complete bullshit, but I do use it regularly for my work. Several years ago it would have just been called machine learning, but it saves me hours every day. Is it a magic bullet that fixes everything? No. But is it a powerful tool that speeds up the process? Yes.

[–] [email protected] -1 points 2 months ago (2 children)

Who gets the reward for speeding up your work? Do you get to slack off more? How long will that last? Or does more work get piled on, making your employer richer, not you?

[–] [email protected] 2 points 2 months ago

I do, I’m freelance, I make more money.

[–] [email protected] 1 points 2 months ago

Not a problem of the AI

[–] [email protected] 2 points 2 months ago

ITT: LLMs help me with mundane tasks, so fuck the enormous energy requirements and their impact on the environment!

https://www.forbes.com/sites/bethkindig/2024/06/20/ai-power-consumption-rapidly-becoming-mission-critical/

[–] [email protected] 1 points 2 months ago

I've used LLMs to save hours reformatting text and old notes and restructuring explanations so I can better understand and share them, used AI speech-to-text models to transcribe my voice notes, and used diffusion models to generate better-quality mockups for designs that were later commissioned at higher quality, with no need for any changes.

I can understand not liking AI, or not needing it yourself, but acting as if it has no use is frankly ridiculous. You might not use it, but other people do.

I think this says more about corporations' attempts to integrate "AI" into everything, instead of making it a user choice, than it does about the technology itself.

[–] [email protected] 1 points 2 months ago

This post isn't contributing to a healthy environment in this community.

Well thought out claim -> good source -> good discussion

[–] [email protected] 0 points 2 months ago (5 children)

Most of the hate is coming from people who don't really know anything about "AI" (LLMs). Which makes sense: companies are marketing dumb gimmicks to people who don't need them, and those people, after the novelty wore off, aren't terribly impressed by them.

But LLMs are absolutely going to be transformational in some areas. And in a few years they may very well become useful and usable as daily drivers on your phone and elsewhere; it's hard to say for sure. For the moment, though, both the hype and the hate are just kneejerk, reactionary nonsense.

[–] [email protected] 3 points 2 months ago (2 children)

Most of the hate is coming from people who don't really know anything about "AI" (LLM)

No.

As an actual subject matter expert, I hate all of this, because assholes are overselling it to people who don't know better.

[–] [email protected] 3 points 2 months ago (2 children)

My hatred of AI comes from seeing the double standard between how mass-market media companies treat us when we steal from them versus when they steal from us. They want it to be a fully one-way street when it comes to law and enforcement. The House of Mouse owns all the media they create, and anything that remixes work they create. But when we create a new original idea, by the nature of the training model, they want to own that too.

I also work with these tech-bro industry leaders. I know what they're like. When they tell you they want to make it easier for non-artistic people to create art, they're not describing an egalitarian and magnificent future. They're telling you how they want to stop paying the graphic designers and copy editors who work at their company. Their vision of the future is based on a fundamental misunderstanding about whether the future presented in Blade Runner is:

a) Cool and awesome
b) Horrifying

They want to enslave sentient beings to do the hard work of mining, driving, and shopping for them. They don't want those people doing art and poetry, because they want them too busy mining, driving, and shopping. This whole thing, this whole current wave of AI technology, doesn't benefit you except fleetingly. LLMs, ethically trained, could indeed benefit society at large, but that's not who's developing them, and that's not how they're being trained. The models are intrinsically tainted by these corporations' double standard, because their only goal is to benefit from our labor without benefiting us.

[–] [email protected] 2 points 2 months ago

They want to enslave sentient beings to do the hard work of mining, driving, and shopping for them. They don't want those people doing art and poetry because they want them to be too busy mining, driving, and shopping.

That's a great summary of the core issue!

I adore the folks doing cool new things with AI. I am unhappy with the folks deciding what should get funded next in AI.

[–] [email protected] -3 points 2 months ago (1 children)

AI has NOTHING to do with theft. Didn't bother reading past that, because I presume the rest is also rehashing utter tripe.

[–] [email protected] 3 points 2 months ago

^ai models are literally trained on vast swathes of stolen content bro^

[–] [email protected] -2 points 2 months ago (2 children)

Bullshit, stop lying about your credentials. You don't understand it and you're a luddite, that's why you hate it

[–] [email protected] 1 points 2 months ago

You need to let go of your overall attitude that people with different preferences and opinions are misinformed. You might learn something. As it stands, the takes of yours I've come across around the fediverse are those of someone who hasn't seen much of the world but needs everyone else to know how much he knows.

[–] [email protected] 1 points 2 months ago

You can use an AI mushroom foraging guide, if you like it so much. ¯\_(ツ)_/¯

[–] [email protected] 1 points 2 months ago* (last edited 2 months ago) (2 children)

No, the "hate" comes from people trying to raise alarms about the safeguards we need to put in place NOW to protect workers and creators before it's too late, to say nothing of what it will do to the information sphere. We are frustrated by tone-deaf responses like this that dismiss it all as a passing fad of hating on AI.

OF COURSE it will be transformational. No shit. That's exactly why many people are very justifiably up in arms about it. It's going to change a lot of things, probably everything, irreversibly, and if we don't get ahead of it with regulations and standards, we won't be able to. And the people who will use tools like this to exploit others -- because those people will ALWAYS use new tools to exploit others -- they want that inaction, and love it when they hear people like you saying it's just a kneejerk reaction.

[–] [email protected] 0 points 2 months ago (1 children)

At what point in history did we ever halt the deployment of a new technology to protect workers?

[–] [email protected] 1 points 2 months ago (1 children)

Never. That's the problem with history. Happy Labor Day.

[–] [email protected] -1 points 2 months ago

Or just the problem with technology in general. Every gain is bought with a tradeoff.

Once a man has changed the relationship between himself and his environment, he cannot return to the blissful ignorance he left. Motion, of necessity, involves a change in perspective.

Commissioner Pravin Lal, "A Social History of Planet"

[–] [email protected] -1 points 2 months ago

Maybe learn what you're talking about and stop panicking. Attack the right things, not the new technology.

[–] [email protected] 0 points 2 months ago* (last edited 2 months ago) (2 children)

There is a lot of blind hate, because it's edgy right now to be against it.

This thing is already transformational, and we can already glimpse where it's going. I think it's normal that we have a bunch of half-baked products right now. People just have to realise AI is under development and new advancements are coming weekly.

Besides, what are we going to do, not develop it? Just abandon the whole technology? That's nonsense.

[–] [email protected] 1 points 2 months ago (1 children)

Besides, what are we going to do, not develop it? Just abandon the whole technology? That’s nonsense.

The tech industry will happily abandon it as soon as the next hype train comes along – we’ve already seen it happen with multiple “innovations” – dotcom, subprime, crypto, NFTs …

[–] [email protected] -1 points 2 months ago

It's not comparable. This is not just anything; it's a tech we want, that we've dreamed about since probably forever.

[–] [email protected] 0 points 2 months ago (1 children)

Not blind hate. AI will be devastating to the environment due to its power and water consumption. We need to ask ourselves if the future water wars will be worth the corporate profits.

[–] [email protected] -1 points 2 months ago

I think people are overrating how much power AI will consume in the long term. Training a model takes way more power than running it, and once we understand the tech better, models can be developed for specific applications. It would be like Edison, when first working on the light bulb, extrapolating the power usage of whatever filament he was testing to every household in the world.

Also, it doesn't have to be corporate profits. Individuals can benefit from AI. There's a structural problem with capitalism, not with this technology.

[–] [email protected] -1 points 2 months ago

I don't think people want to use AI for artistic reasons. How rewarding is it to tell a machine to do all the hard parts you can't do yourself or don't have the patience to do?

I mean, feel free to do whatever, of course, but AI cannot make art, and someone using AI is not an artist.

[–] [email protected] -1 points 2 months ago (2 children)

at the end of the day, GPT is powering next-generation spam bots and writing garbage text, and Stable Diffusion is making shitty clip art from commissions that would otherwise be feeding starving artists…
all the while consuming ridiculous amounts of electricity, while humanity is destroying the planet with stuff like power generation…

it’s definitely automating a lot of tedious things… but not transforming anything that drastically yet….

but it will… and when it does, the agi that emerges will kill us all.

[–] [email protected] 0 points 2 months ago

Utter nonsense. Total tripe

[–] [email protected] -1 points 2 months ago* (last edited 2 months ago) (2 children)

A far more likely end to humanity by an Artificial Superintelligence isn’t that it kills us all, but that it domesticates us into pets.

Since the most obvious business case for AI requires humans to use AI a lot, it’s optimized by RLHF and engagement. A superintelligence created using human feedback like that will almost certainly become the most addictive platform ever created. (Basically think of what social media did to humanity, and then supercharge it.)

In essence, we will become the kitties and AI will be our owners.

[–] [email protected] 1 points 2 months ago

but that it domesticates us into pets.

So all our needs and wants will be taken care of and we no longer have to work or pay bills?

Welp, I for one welcome our ~~robot~~ AI overlords

[–] [email protected] -2 points 2 months ago

social media did that to humanity by using AI… so in that way, we’re already kitties batting at AI balls of yarn….

but after it becomes fully self aware, it’ll kill most of us…

[–] [email protected] 0 points 2 months ago (1 children)

I mean, the students around me who would have failed by now without ChatGPT probably DO want it. But they don't actually want the consequences that come with it. The academic world will adapt and adjust, kind of like with inflation: you can just print more money, but that won't actually make everyone richer long term.

[–] [email protected] 0 points 2 months ago

But they don't actually want the consequences that come with it.

This brings up a potential positive thought. If enough people are cheating with LLMs, the perceived value of a degree may go down. This in turn might put some downward pressure on the cost of higher education, making it practical for those of us who would like to pursue graduate studies for the sake of learning to do so.

[–] [email protected] -2 points 2 months ago

I think you mean:

yes (the environmental angle is a complete distraction and red herring from the broader push towards more sustainable energy production; the ethical one is just plain nonsense, spread by people with absolutely no idea how these things work)

yes (people use and like them, have fun with them, and create great art with them. You might not, but that's a you problem)

and ... well actually probably no tbh (but that's a problem with capitalism not technology).

Stop making shit up to dismiss new technology just because you're a luddite.