In my job I write a lot of bullshit sentences that I'd rather a machine write for me. But the solution is to make it so I don't have to write bullshit sentences, not to get a machine to write bullshit.
If your job is anything like mine, the entire reason for those bullshit sentences is to fool a machine at Google into putting your website higher in its search results.
So now it's bullshit AI writing stuff for another bullshit AI to judge. Consideration for humans is nonexistent.
But if you don't write more bullshit sentences, who's going to pay for AI to summarize by getting rid of your bullshit sentences?
Every time someone has asked me if they should worry about AI, I've always replied that they should only worry about humans, especially the rich ones.
It is so funny to see that AI bros are exactly like the creepto bros/NFT bros of their time, saying "you're gonna miss out", "you're luddites", and all that jazz. So what's next? Are they gonna tell me "have fun staying poor" too? Lmfao.
Just like the former, they are completely okay with stealing from others, because they are literally worthless without the data they have hoarded from so many people.
They should keep going, so that more people will see them for what they truly are. :P
"Scraper"
A scraper scrapes. A scrapper scraps.
I used to think AI was a helpful tool, but now I see it is scraping absolute garbage because people are absolute garbage, so now the AI output will be absolute garbage, just exponentially faster.
AI:
The future of search engines will be forums where we create topics stating our search criteria and real people post results.
A horse looks at a car, something something. The technology is here to stay and has its uses; the tech industry will get bored of its limitations, and a new thing will come along for us to scream at. AI has practical applications, and I don't think you should dismiss it entirely on principle. I think it's about learning to use this technology practically and ethically in the long run.
"I think it's about learning to use this technology practically and ethically in the long run."
If that were happening, I think we'd probably be fine with it. But it just appears almost everywhere, uninvited, half-baked and soaked in mile-high promises.
I'm more frustrated by the haste with which it's implemented. I've seen (secondhand) instances of this Google search AI spitting out results that are either flat-out wrong (e.g., presenting fan theories as fact in response to a question about Warhammer 40K) or actively harmful (e.g., recommending self-harm in response to a search for "how do I stop crying").
Man, never thought I'd miss crypto, but here we are.
Just Post
Just post something 💛