It's not really a step towards sci-fi-level AI; it's just a slightly more advanced version of tapping the first autopredicted word when you type a sentence on your phone. The tools you needed already existed and were stolen, and now they're spit out by a very fancy text prediction algorithm.
I'd disagree, and go so far as to say that it's a baby AGI, and we need new terms to talk about the future of these approaches.
To start, "fancy autocomplete" is correct but useless, in the same way that "the human brain is just a bunch of meat" is correct but useless. Assume that we built an autocomplete so good at its job that it knew every move you were about to make and every word you were about to speak. Yes, it's "just a fancy autocomplete", but one that must be backed by at least human-level intelligence. At some level of autocomplete ability, there must be a model backing it that can be called "intelligent", even if that intelligence looks nothing like human intelligence.
Similarly, the "fancy autocomplete" that is GPT-4 must have some amount of intelligence, and this intelligence is a baby AGI. When AGI is invoked, people tend to get really excited, but that's what the "baby" qualifier is for. GPT-4 is good at a large variety of tasks without extra training, and this is undeniable. You can quibble about what "good" means in this context, but it can handle tasks from "write some code" to "what are the key points in this document?" to "tell me a bedtime story" without being specifically trained on any of them. That was unthinkable a year ago, and is clearly a sign of a model that has generalized across many different tasks. Hence, AGI. It's not very good at some of those tasks (and surprisingly good at others), but it knows what the task is, and it's trying its best. Hence, baby AGI.
Yeah, it's got a lot of limitations right now. But hardware is only getting cheaper, and we're developing techniques like chain-of-thought prompting, which gives LLMs a kind of short-term working memory and helps immensely. A linguist I know once said that the approaches we're taking are like building a ladder to the moon. Well, we've started building a hell of a ladder, and I'm excited to see where it takes us.
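To make the "working memory" point concrete, here's a minimal sketch of chain-of-thought prompting. The `call_model` function is a hypothetical stand-in for whatever LLM API you'd actually use; it's stubbed out here so the example is self-contained. The idea is just that the growing transcript gets fed back in on every call, so earlier reasoning steps stay in view:

```python
def call_model(prompt: str) -> str:
    # Stub: a real implementation would send `prompt` to an LLM API
    # and return the model's next chunk of reasoning.
    return "Step: (model reasoning would appear here)"

def chain_of_thought(question: str, steps: int = 3) -> str:
    # The growing transcript acts as the model's short-term working
    # memory: each call sees the question plus all prior reasoning.
    transcript = f"{question}\nLet's think step by step.\n"
    for _ in range(steps):
        transcript += call_model(transcript) + "\n"
    return transcript

result = chain_of_thought("What is 17 * 24?")
print(result)
```

Nothing magic is happening architecturally; the "memory" lives entirely in the prompt, which is why longer context windows make this technique more useful.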
I don't care what y'all call it: AI, AGI, Stacy. It doesn't change the fact that it was 100% trained on books tagged as "bedtime stories" to tell you a bedtime story; it couldn't tell you one otherwise.
Assuming we made an AGI that could predict every word I said perfectly, that would simply prove there is no free will, not that a computer has intelligence.
Fundamentally, AI produced in the current style cannot be intelligent, because it cannot create new things it has not seen before.
Assuming we made an AGI that could predict every word I said perfectly, that would simply prove there is no free will, not that a computer has intelligence.
But why? Also, "has free will" is exactly equivalent to "I cannot predict the behavior of this object". This is a whole separate essay, but "free will" is relative to an observer. Nobody thinks a rock has free will. Some people think cats have free will. Lots of people think humans have free will. This is exactly in line with how hard it is to predict the behavior of each. You don't have free will to an omniscient observer, but that observer must have above human-level intelligence. If that observer happens to have been constructed out of silicon, it doesn't really make a difference.
Fundamentally, AI produced in the current style cannot be intelligent, because it cannot create new things it has not seen before.
But it can. It uses its prior experience to produce novel output, much like humans do. Hell, I'd say most humans wouldn't pass your test for intelligence, and in fact they're just 3 LLMs in a trenchcoat.
Yeah, the reality is that we've built a Chinese room. And saying "well, it doesn't really understand" isn't sufficient anymore. In a few years are you going to be saying "we're not really being oppressed by our robot overlords!"?
I'm saying that if there is anyone, including an omniscient observer, who can predict a human's actions perfectly, that is proof that free will doesn't exist at all.
https://en.m.wikipedia.org/wiki/Chinese_room
http://xkcd.com/2169