Why?? Please make this make sense. Having AI to help with coding is ideal and probably the greatest immediate use case. The web is an open resource. Why die on this stupid hill instead of advocating for a privacy argument that actually matters?
Edit: Okay, got it. Hinder significant human progress because a company I don't like might make some more money from something I said in public, which has been a thing literally forever. You guys really lack a lot of life skills about how the world really works, huh?
Because being able to delete your data from social networks you no longer wish to participate in, or that have banned you, as long as they haven't specifically paid you for your contributions, is a privacy argument that actually matters, regardless of and independent from AI.
As for AI, the problem is not with AI in general but with proprietary, for-profit AI being trained on open resources, even those with underlying license agreements that prevent that information from being monetized.
Now this is something I can get behind. But I was talking about the decision to retaliate in the first place.
Because none of the big companies listen to the privacy argument. Or any argument, really.
AI in itself is good, amazing, even.
I have no issue with open-source, ideally GPL- or similarly licensed AI models trained on Internet data.
But involuntarily participating in training closed-source corporate AIs... no, thanks. That shit should go back to the hellhole it was born in, and we should do our best to destroy it, not advocate for it.
If you care about the future of AI, OpenAI should have been on your enemy list for a long time. They expropriated an open model, were hypocritical enough to keep "open" in the name, and then essentially sold themselves to Microsoft. That's not the AI future we should want.
We're in a capitalist system and these are for-profit companies, right? What do you think their goal is? It isn't to help you. It's to increase profits. That will probably lead to massive numbers of jobs being replaced with AI, and we will get nothing for giving them the data to train on. It's purely parasitic. You should not advocate for it.
If it's open and not-for-profit, it can maybe do good, but there's no way this will.
Why can't they increase profits by, you know, making the product better?
Do they have to make things shittier to save money, driving people away and thus having to make it even shittier?
If they make it better, that may increase profits temporarily, as they draw customers away from competitors. Once you don't have any competitors, the only way to increase profits is to either decrease expenses or increase revenue. And there's a limit to increasing revenue if you're already squeezing out everything you can.
And is it wrong to stop at a certain amount of profit?
Why do they always want more? I ain't that greedy.
To us? No, it isn't wrong. To them? Absolutely. You don't become a billionaire by thinking you can have enough. You don't dominate a market while thinking you don't need more.
Hating on everything AI is trendy nowadays. Most of these people can't give you a coherent explanation for why. They just adopt the attitude of the people around them, who also don't know why.
I believe the general reasoning is something along the lines of not wanting bad corporations to profit from their content for free. So it's mostly a matter of principle. Perhaps we need to wait for someone to train an LLM on the data that's freely available to everyone on Lemmy, and then we can interview it to see what's up.
Mega corporations like Microsoft and Google are evil. Very easy explanation. Even if it were a good open-source company scraping the data to train AI models, people should be free to delete the data they input. It's pretty simple to understand.