I don't think they were trying to make money off the API changes. Like others are saying, it has to do with AI, and they figured they might as well take the chance and knock out 3rd-party apps in the same swoop so they can funnel more people onto the official app.
They can data-harvest much better that way.
AI has nothing to do with it other than as a convenient, topical scapegoat.
It would have been perfectly possible to charge a different rate for AI harvesting than for third-party Reddit apps.
Of course, but they wanted to kill 3rd-party apps without explicitly saying "we're killing 3rd-party apps".
This way they have (or at least thought they'd have) plausible deniability, saying stuff like "we tried to work with them", which is essentially what they tried in the first couple of days.
I feel like AI being the reason doesn't hold up particularly well from a technical standpoint. From my searching, web scraping is completely legal. It'd be slower, but a massive dataset is still very collectable.
Plus, building a web scraper is so easy now. Funnily enough, generative AI like ChatGPT can get you like 95% of the way there in just a few minutes.
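For a sense of how low that bar is, here's roughly the kind of thing a few minutes with ChatGPT gets you (a minimal sketch; the old.reddit URL and the CSS selector are my assumptions about the page markup, not verified):

```python
import requests
from bs4 import BeautifulSoup

# Hypothetical User-Agent string; sites commonly reject requests without one.
headers = {"User-Agent": "example-scraper/0.1"}

resp = requests.get("https://old.reddit.com/r/technology/", headers=headers)
resp.raise_for_status()

soup = BeautifulSoup(resp.text, "html.parser")
# Selector is an assumption about old.reddit's markup (post titles rendered
# as <a class="title"> links inside <p class="title">).
for link in soup.select("p.title > a.title"):
    print(link.get_text(strip=True), "->", link.get("href"))
```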
Though none of the reasons they've stated so far seem to hold up to scrutiny.
It's slower, but using an API requires you to customize your system for each different site's unique API. It would be a massive development undertaking for such a small benefit that it would never pay off. For an LLM, you only need to read each page once: you just wait until a post is a month or so old, when essentially all discussion has stopped, and you get everything you need. So "fast" isn't really a concern at all.
You can pull much more data much quicker through the API than with some sort of HTML scraper. These LLMs need a lot of data, and Reddit is a big site.
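To put some numbers behind that: one call to Reddit's public JSON listing endpoint returns up to 100 posts as structured data, versus fetching and parsing one HTML page per screenful. A rough sketch (the endpoint and field names match what the public listing API returns, but treat the details as illustrative):

```python
import requests

# Reddit requires a descriptive User-Agent; this one is hypothetical.
headers = {"User-Agent": "example-client/0.1"}

resp = requests.get(
    "https://www.reddit.com/r/technology/new.json",
    params={"limit": 100},  # up to 100 structured posts in one response
    headers=headers,
)
resp.raise_for_status()

listing = resp.json()["data"]
for child in listing["children"]:
    post = child["data"]
    print(post["title"], "-", post["num_comments"], "comments")

# listing["after"] is a pagination cursor: pass it back as ?after=<cursor>
# to walk through the whole listing, 100 posts per request.
```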