Thing about the singularity is that it doesn't look like it right up until it does
@agamemnonymous No, it looks like it beforehand. ChatGPT's just a language prediction engine, but people think it can think. It can only discern the most probable language patterns; it can't make judgements. Yet people are arguing it works off inspiration.
And we've KNOWN it will look like it beforehand, that's why there's even concepts like a Turing test, to prepare us for discerning the illusion of intelligence from actual intelligence.
Personally, I suspect social media and the way that Bigsoc companies hack the human mind using feed algorithms is an argument for a Non-AI Singularity, and more likely than a math engine that predicts the next word in an astoundingly natural way.
I think you may underestimate the nature of exponential positive feedback. The AI singularity centers around an inflection point of self-programming before which noticeable improvements take place over months and weeks, and after which they take place over seconds and microseconds. Self-modification iterates faster than you can record.
It has nothing to do with "inspiration" or "actual intelligence". It is entirely based on self-modification, and the "illusion" of intelligence is sufficient for that task. Eventually, the illusion is indiscernible from reality (spoken as a very complex method of distributing gametes).
You underestimate yourself as a complex method of distributing gametes, because you are operating on a much more complicated mathematical base than a computer. You're analog. Your brain is an analog computing engine that moves faster than any analog machine we've ever been able to make; the only way we can transmit faster is by going digital in our computers. Which means that, down to it, while we might just be chemical and electrical signals, the computer itself is just two signals. Two voltages. 1 and 0. Our thinking is vastly more complex, even as fast as this thing goes. That's what instinct and intuition are: our brains processing evidence against memory.
Our brain is still binary: a neuron fires or it doesn't. Our computational complexity comes from the dense interconnections, the architecture. Just because a digital computer doesn't presently have architecture that complex doesn't make it fundamentally impossible. In fact, if I'm not mistaken, ChatGPT is already more complex than we can presently understand. It wasn't "designed" top-down in its present state; it has transformer matrices weighted by iterated training. We literally don't know why it gives the answers it does. We gave it criteria to fulfill and trained it over and over again until it got really good at fulfilling those criteria. That process is only accelerating; I don't know why you'd think there's some arbitrary barrier.
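The "trained over and over until it got really good at fulfilling those criteria" process is, at its core, iterative optimization. A minimal sketch with a single made-up weight and target (toy numbers, nothing like GPT's actual scale or architecture):

```python
# Toy sketch of criterion-driven training: nobody hand-designs the final
# weight; it emerges from repeatedly nudging it to better satisfy a criterion.
# Real models adjust billions of weights by the same basic principle.

def train(target=3.0, w=0.0, lr=0.1, steps=100):
    for _ in range(steps):
        error = w - target        # how badly the criterion is missed
        w -= lr * 2 * error       # gradient step on squared error
    return w

w = train()
print(w)  # converges close to the target
```

The point of the sketch: the final value of `w` was never written down by a programmer, it fell out of iteration, which is the sense in which the resulting model isn't "designed" top-down.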
It's not an arbitrary barrier. It's important. Can the computer actually make a decision? Can it be HELD ACCOUNTABLE for that decision? If we're going to deploy these things to replace human beings, this is a question that needs a "Yes" answer.
Right now the answers are no. They can't make a decision that takes multiple dimensions of an issue into account. But businesses ARE saying that they can replace human writers, people ARE using them to write legal briefs and technical instructions.
I don't know why you are so insistent that this doesn't matter. We're watching something kick its legs, it can't even crawl yet, but it's being signed up for a marathon and you're arguing that it'll be able to do the marathon eventually so that's good enough.
What I said is that the transition from kicking its legs to crawling will take some time, but the transition from crawling to marathons will happen basically overnight. My whole original comment was based on that, and on the fact that it will look like it's struggling to crawl right up until that fateful night when it teaches itself to run at supersonic speeds.
Yes, it's kicking its legs now. Yes, there's no good way to predict when the inflection point of exponential growth will happen.
No, philosophizing about accountability has nothing to do with the facts of AI singularity. No, questions about "actual" intelligence vs "illusory" intelligence are neither relevant to the conversation, nor even meaningfully solved even when just talking about other humans, much less non-human organisms.
That's why these barriers are arbitrary. You can't even prove that an average human is capable of responsible accountability in any scientifically objective or meaningful way (and I'd argue that, anecdotally, a disturbingly large percentage are in fact ill-suited to the task). But again, none of these points have anything to do with the AI singularity.
The AI singularity is based upon exactly one premise: can an AI reprogram itself to be slightly better at reprogramming itself? That's it. Nothing about consciousness, or accountability, or morality or responsibility or initiative or anything else. It all boils down to editing its own code to be more efficient at editing its own code (or generating "children" along the same premise; it's functionally the same). This creates a positive feedback loop which increases exponentially in capability. You're trying to moralize a mathematical function.
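The feedback loop described here can be sketched numerically. All the parameters below are made up purely for illustration: each cycle, capability grows by the current improvement rate, and the rate itself compounds because improving the system also improves the improver.

```python
# Illustrative sketch of recursive self-improvement (hypothetical numbers).
# Each cycle, capability grows by the current rate, and the rate itself
# compounds -- the premise behind the intelligence-explosion feedback loop.

def self_improvement(cycles, capability=1.0, rate=0.01, rate_gain=1.1):
    history = []
    for _ in range(cycles):
        capability *= 1 + rate   # apply the current improvement rate
        rate *= rate_gain        # self-modification improves the improver too
        history.append(capability)
    return history

growth = self_improvement(60)
# Growth is barely visible for many cycles, then explodes:
print(growth[9], growth[29], growth[59])
```

Nothing in the loop models consciousness or judgement; the explosion comes from the compounding alone, which is the "mathematical function" point.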
That's because the capital-S Singularity as proposed by Vernor Vinge is what we're worried about here. The advent of a technological achievement that forever changes humanity, possibly signalling the end of it.
This does specifically set a barrier, a "Point of No Return" when it comes to technology. This is what most people mean when they say the Singularity: when a program becomes capital-I Intelligent.
Von Neumann's original proposal is as limited by mathematics as an LLM itself. The term Singularity has, as is common in the English language, become a larger term signifying that a barrier has been crossed. There are other theories beyond the idea that it's just self-replication gone wild.
You're trying to reduce what to most people is a moral quandary to pure mathematics. Since my core point is that pure mathematics is not enough to capture the depth and potential of humanity, I'm not going to be swayed by being told it's just a mathematical function.
I will give you a boost for being interesting, though.
I think the issue here is you're conflating a couple different concepts:
1. Iterated technological self-improvement resulting in exponential growth
2. Artificial General Intelligence
3. The threat to humanity from advanced AI
1 is the singularity, 2 and 3 are frequently hypothesized consequences of 1. Kinda like extensive use of fossil fuels is one concept, the greenhouse effect is another, and rising sea levels a third. They are related, but distinct, even though one contributes to another.
Combining related concepts under one term dilutes the term and makes it more difficult to communicate effectively. Of course, the moral quandaries are valuable topics of discussion, but the mathematical function is a separate topic, likewise valuable in and of itself.
Look, I've had to watch it happen to "triggered", "mansplain", and "woke." You're going to have to accept that it happened to Singularity.
You don't honestly think that the improvement of an LLM's predictive algorithm is going to lead to it taking over the world? All it can do is produce words. Unless we stupidly do everything it says, thinking it's truly intelligent, it has no power.
We only have to worry about machine overlords if we PUT machines in charge of stuff, and we'll only do that if we think they are intelligent enough to make decisions. So yeah, determining whether it has real intelligence is a key thing here.
(Dammit, we've reached the end of the chat tree)
Again, you are hung up on semantics and terminology. You are going down a checklist based on one specific person's extrapolation on the possible consequences of the implementation of a concept. I am looking at the core concept underlying that extrapolation (the exponential increase in capability of a system, due to the recursive application of the system's transformative capabilities to the architecture underlying those same capabilities).
You are caught up on whether the ability to operate on the basis of more data every second than any human can digest in an academic lifetime qualifies as "superhuman". You are hung up on the same extraneous and irrelevant concepts you introduced: consciousness, accountability, decisions, understanding, inspiration.
My original statement was that the singularity doesn't look like the singularity until it does.
Even your liberal definitions still revolve around the concept of exponential iterative growth (despite their addition of functionally extraneous, though derivative, concepts like supremacy or emergent consciousness). There's nothing more I can say there. You're going on about definitions changing; the center of the definition is the same. Iteration. Self-programming. Exponential growth.
It doesn't look like it until it does. That's what the exponential function does. It's nearly horizontal: negligible, barely noticeable, gradual growth; until it hits the anchor point, when it rockets up into nearly vertical, almost infinite growth. That's the core concept at play. Learning to crawl for months, then setting impossible records the next day.
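"Nearly horizontal until it rockets up" is just what any exponential looks like on a linear scale. A quick check with a made-up growth rate:

```python
# The growth rate per step never changes, but the absolute jump per stretch
# of steps keeps compounding: early on the curve looks flat, later it looks
# nearly vertical. Rate and step counts here are arbitrary illustrations.

growth_rate = 1.05
values = [growth_rate ** t for t in range(300)]

first_50 = values[49] - values[0]     # total gain over the first 50 steps
last_50 = values[299] - values[249]   # total gain over the last 50 steps
print(first_50, last_50)
```

Same rule at every step, yet the gain over the final stretch dwarfs the gain over the first by orders of magnitude, which is why the curve "doesn't look like it until it does."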
Learn what an exponential function is. Learn why it looks like that, and what the anchor point represents. Learn how LLMs work. Look into Microsoft's LongNet.
It's not going to look like the singularity, right up until it does
@agamemnonymous Take it up with Vernor, man. The idea's been popularized in a way that gathers all three, and there are even theories about a Non-AI Singularity.
This happens all the time with terms.
Popularity is not correctness. You're using a sloppily defined term. I'm using the fundamental definition. Your (Vernor's) concept muddles matters pointlessly.
The fact is, self-refining LLMs can very possibly exhibit the intelligence explosion fundamental to Von Neumann or I.J. Good's definition. They are already beginning to alter the way human society operates (coding, school, replacing jobs). They easily pass the Turing test with the right prompts. Your whole point is that it's not "real" intelligence because they don't really "understand", but I can say the same for you. For all I know, you're an LLM and there's literally no way that you can prove you aren't.
Lines in the sand about "real" intelligence are purely philosophical, and that kind of hyperopic philosophizing is exactly the sort of behavior that dooms humanity via underestimation. I'd rather we didn't find ourselves under machine overlords because "technically they aren't even really intelligent".
Pointless to continue. You're falling for a con, but you're very invested so I wish you good luck.
I'm invested in nothing, there is no con. I'm sorry, you do not seem to understand the fundamental concepts at play. I would recommend trying to learn but I understand if you cannot.
The plans for colonising Mars involve manufacturing methalox via the Sabatier reaction, which is a fully renewable process, not a fossil fuel.
@Yendor Point is that it's jumping the gun to think we can escape climate change by rocketing to Mars and terraforming the climate there, rather than just concentrating on terraforming Earth back to a liveable environment and THEN worrying about moving elsewhere. If we can't keep Earth inhabitable, we can't make Mars inhabitable.
Just like people who think Large Language Models are genuine AI are completely jumping the gun about what we're capable of coding right now.
Nah, but chatgpt is fun to fuck with
Weird kink but okay
Lmao
thought this was a pi chart
People saying LLMs are a singularity don't know how LLMs work.
When you feed ChatGPT some text, what you get out isn't the result of the text being processed by a neural network. It's the result of the text being processed by a deterministic algorithm, one part of which was decided on by a neural network far in advance. That's what neural nets do: they contribute to other algorithms that, while often complicated, are deterministic. If ChatGPT were to become sentient somehow, it would be happening behind the scenes with a neural network you've literally never interacted with.
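The determinism point can be illustrated with a toy forward pass. The weights below are invented stand-ins for what training "decided far in advance"; this is nothing like ChatGPT's actual architecture, just the principle that once the weights are fixed, inference is plain arithmetic:

```python
import math

# Toy "inference": the weights were fixed in advance (by training); the
# forward pass itself is ordinary deterministic arithmetic, so the same
# input always yields the same output.

WEIGHTS = [0.2, -0.5, 0.9]  # hypothetical stand-in for trained parameters

def forward(inputs):
    score = sum(w * x for w, x in zip(WEIGHTS, inputs))
    return 1 / (1 + math.exp(-score))  # squash to a probability

a = forward([1.0, 2.0, 3.0])
b = forward([1.0, 2.0, 3.0])
print(a == b)  # identical inputs, identical outputs: True
```

(Deployed chatbots often add randomized sampling on top of the model's output probabilities, but the network computation underneath is exactly this kind of fixed arithmetic.)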
What are LLMs?
This isn't a perfect example, but suppose you feed a program all of the books ever written. The program parses these books and keeps track of how often one word correlates with another, based on the frequency with which words appear alongside each other.
Now store all that data in a huge (35 GB) file. This file isn't human-readable; it's just a large table of all of these word correlations. Install this program with its large language model (the 35 GB file generated from parsing all the books) on a system or systems capable of doing lots of math fast, something like a high-end GPU.
Now, as a user, send a series of words to the program. The program will look at the words you have written and come up with words that correlate to what you have written and what the bot has already written.
"Correlate" isn't really the best term to use here, but statistics are done based on surrounding words. The program still acts like a program, just predicting the next word using statistics found in the LLM. The program doesn't know how to do math, or write code, but it can have very convincing discussions on both, or anything really.
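Real LLMs use trained transformer networks rather than raw co-occurrence tables, but the word-statistics idea described above can be sketched as a toy bigram model (tiny made-up corpus, purely illustrative):

```python
from collections import Counter, defaultdict

# Toy version of the idea above: count which word follows which in a corpus,
# then "generate" by picking the most frequent successor. Real LLMs learn far
# richer statistics with neural networks; this only illustrates the principle
# of predicting the next word from observed frequencies.

corpus = "the cat sat on the mat and the cat ran".split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict(word):
    # most common word seen immediately after `word` in the corpus
    return successors[word].most_common(1)[0][0]

print(predict("the"))  # "cat" follows "the" twice, "mat" only once
```

Scale the corpus up to "all the books ever written" and the frequency table becomes that multi-gigabyte file; the program consulting it is still just doing statistics.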