Around here we love the idea of Reddit being totally devoid of life, but the fact is it's still one of the most active public-facing sites on the web. The attrition to sites like Lemmy is pretty negligible relative to overall Reddit activity, and AI bot activity only really affects the largest subreddits, which have always been a bit spammy and click-baity. The medium and small subreddits are still full of active people. Don't get me wrong, Lemmy is my daily driver for this content, but I won't pretend everyone fled Reddit for it.
Additionally, the exclusivity deal with Google isn't just about keeping the search results; it's about preventing Google's biggest AI competitor, ChatGPT, and its ties to Microsoft, from getting access to what is the Internet's largest database of public-facing conversation.
https://www.reuters.com/technology/reddit-ai-content-licensing-deal-with-google-sources-say-2024-02-22/
For perspective:
https://www.cbsnews.com/news/google-reddit-60-million-deal-ai-training/
So if you annualize that, Reddit's seeing revenue of about $1 billion/year, and net income of about $74 million/year.
Given that Reddit granting exclusive indexing to Google happened at about the same time, I would assume that the AI-training deal included the indexing-exclusivity agreement, but maybe it's separate.
My gut feeling is that the exclusivity thing is probably worth more than $60 million/year, and that Google's probably getting a pretty good deal. Like, Google did not buy Reddit, and Google's done some pretty big acquisitions, like YouTube, and that would have been another way for Google to get exclusive access. So I'd think that this deal is probably better for Google than buying Reddit. Reddit's market capitalization is $10 billion, so Google is maybe paying 0.6% of the value of Reddit per year to have exclusive training rights to their content and to be the only search engine indexing them; aside from Reddit users themselves running into content in subreddits, I'd guess that those two are probably the main ways in which one might leverage the content there.
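As a back-of-the-envelope check on that percentage (a quick sketch; the $60 million/year and $10 billion figures are just the ones from the linked articles, not verified here):

```python
deal_per_year = 60e6   # reported AI-licensing payment to Reddit, USD/year
market_cap = 10e9      # Reddit's market capitalization, USD

print(f"{deal_per_year / market_cap:.1%} of Reddit's value per year")
# -> 0.6% of Reddit's value per year
```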
Plus, my impression is that the idea a number of companies have -- which may or may not be valid -- is that this is the beginning of the move away from search engines. Like, the idea is that down the line, the typical person doesn't use a search engine to find a webpage somewhere that's a primary source for material. Instead, they just query an AI, which compiles all the data it can see and spits out an answer. That saves the human searcher some time and reduces complexity, and maybe it can solve some problems if AIs can ultimately do a better job of filtering out erroneous information than humans can. We definitely aren't there yet in 2024, but if that's where things are going, I think it might make a lot of strategic sense for Google. If Google can lock up major sources of training data and keep Microsoft out, that's gonna put Microsoft in a difficult spot if Microsoft is gunning for the same thing.
I haven't used the AI-based search queries myself; I've used LLM software, but not for this, so I don't know what the current situation is like. My understanding is that the current approach doesn't really permit citing sources, and there are two issues with that:
- There isn't a direct link between one source and what's being generated; the model isn't really structured so as to retain this.
- Many different sources probably contribute to the answer.
All information contributes a little bit to the probability of the next word that the thing is spitting out. It's not that the software rapidly looks through all the pages out there and finds a single reputable source that it could then cite, the way a human might. That is, you aren't searching an enormous database when the query comes in; you're repeatedly predicting that the next word in the correct response is a given word, and that probability is derived from many different sources. Maybe tens of thousands of people have made posts on a given subject; the response isn't just a quote from one of them, and the generated text may appear in none of them.
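To make that concrete, here's a toy sketch in Python of what "predicting the next word" looks like. This isn't any real model's code; the vocabulary and probabilities are made up. The point is that nothing in the distribution carries a pointer back to a particular source document:

```python
import random

# Toy next-token distribution for the prompt "The capital of France is".
# In a real model these weights are the blended result of many, many
# training documents; no entry remembers which documents shaped it.
next_token_probs = {
    "Paris": 0.70,
    "a": 0.10,
    "the": 0.10,
    "Lyon": 0.05,
    "beautiful": 0.05,
}

def sample_next_token(probs):
    """Pick one token at random, weighted by its probability."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

print(sample_next_token(next_token_probs))  # usually "Paris"
```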
To maybe put that in terms of how a human might think, placing you in the generative AI's shoes: suppose I say to you "draw a house". You draw a house with two windows, a flowerbed out front, whatever. I say "which house is that?" You can't tell me, because you're not trying to remember and present one house -- you're presenting me with a synthetic aggregate of many different houses; probably every house you've seen has mentally contributed a bit to it. Maybe you could think of a given house you've seen in the past that looks a fair bit like that house, but that's not quite what I'm asking you to tell me. The answer is really "it doesn't reflect a single house in the real world", which isn't really what you want to hear.
It might be possible to basically run a traditional search for a generated response to find an example of that text, if it amounts to a quote (which it may not!).
And if Google produces some kind of "reliability score" for a given piece of material and weights the material in the training set by it (which I'd guess they will if they don't already), they could maybe use that reliability score to rank the various sources when doing that backwards search for relevant sources.
But there's no guarantee that that will succeed, because they're ultimately synthesizing the response, not just quoting it, and because it can come from many sources. There may potentially be no one source that says what Google is handing back.
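A hypothetical sketch of that backwards search, in Python. The Source type, the reliability field, and the plain substring match are all assumptions for illustration, not a description of anything Google actually runs:

```python
from dataclasses import dataclass

@dataclass
class Source:
    url: str
    text: str
    reliability: float  # hypothetical 0..1 score, assumed to exist

def find_candidate_sources(generated, corpus):
    """Return sources containing the generated text verbatim,
    highest reliability first."""
    matches = [s for s in corpus if generated.lower() in s.text.lower()]
    return sorted(matches, key=lambda s: s.reliability, reverse=True)

corpus = [
    Source("https://example.com/a", "The Eiffel Tower is 330 m tall.", 0.9),
    Source("https://example.com/b", "i heard the eiffel tower is 330 m tall", 0.3),
]

print(find_candidate_sources("the Eiffel Tower is 330 m tall", corpus))
# The failure mode described above: a synthesized sentence may appear
# verbatim in *no* source, in which case this returns [].
```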
It's possible that future systems will use methods other than the present ones for generating responses, and those could have very different characteristics. Like, I would not be surprised, if this takes off, if the resulting system ten years down the road is considerably more complex than what is presently being done, even if the changes under the hood aren't really directly visible to a user.
There's been some discussion about developing systems that do permit this, and I believe that if you want to read up on it, the term used is "attributability", but I have not been reading the research on it.
have you tried perplexity? it’s probably the best ai search engine right now although it still misunderstands context sometimes. it’s pretty good at citing its sources though
At least on some smaller subs, there seems to be a suspicious number of brand-new accounts asking one question to get human answers.
It would not surprise me if Reddit, or some other service, is seeding questions to get more LLM-able content. Of course, this might backfire if people start giving stupid answers to eff up the data.
If I'm not mistaken, Reddit has actual staff dedicated to asking questions to drive engagement in small communities. Not so much for LLM reasons, but to actually grow those communities (and thus edge out competition).