BrickedKeyboard

joined 1 year ago
[–] [email protected] -1 points 1 year ago (1 children)

Note that the new line of thinking is "if you didn't use at least 10,000 GPUs, you didn't try anything". All the models that show even a spark of intelligence had absurd amounts of compute put into their training. It's possible that Galactica would have worked had Facebook put more resources into it.

[–] [email protected] -2 points 1 year ago (4 children)

I'm old enough to remember this same line of argument about internet-company hype: everyone wanted a company as successful as Microsoft or Yahoo and was throwing money at anything with a .com. Of course, one of those was Amazon.com ...

It's possible for one field to be 100% a scam (blockchain, NFTs), while another field is 99% a scam (AI startups), and yet the 1% ends up creating a massive new sector of the economy that is richer than anything prior.

[–] [email protected] -1 points 1 year ago* (last edited 1 year ago) (6 children)

Just to be clear, you can build your own telescope now and see the incoming spacecraft.

Right now you can go task GPT-4 with a problem at about undergrad-physics level, let it use plugins, and it will generally get it done. It's real.

Maybe this is the end of the improvements, just like maybe the aliens will not actually enter orbit around earth.

[–] [email protected] 0 points 1 year ago (14 children)

Sure, but they were four-function calculators a few months ago. The rate of progress seems insane.

[–] [email protected] 0 points 1 year ago (6 children)

My experience in research indicates to me that figuring shit out is hard and time consuming, and “intelligence” whatever that is has a lot less to do with it than having enough resources and luck. I’m not sure why some super smart digital mind would be able to do science much faster than humans.

That's right. Eliezer's LSD vision of the future where a smart enough AI just figures it all out with no new data is false.

However, you could...build a fuckton of robots. Have those robots do experiments for you. You decide on the experiments, probably using a procedural formula. For example, you might try a million variations of wing design, or a million molecules that bind to a target protein, and so on. Humans already do this in those domains; this just extends it.
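The "million variations" idea is just a procedural parameter sweep. A minimal sketch, where the scoring formula and parameter names are hypothetical stand-ins for a real robot-run experiment:

```python
import itertools

def run_experiment(params):
    # Stand-in for a real automated experiment: score a toy
    # "wing design" by a made-up formula. In reality a robot
    # would build and measure each variation.
    span, chord, sweep = params
    return span * chord - 0.1 * sweep  # toy score, higher is better

def sweep_designs(grid):
    """Procedurally enumerate every combination and keep the best."""
    best_params, best_score = None, float("-inf")
    for params in itertools.product(*grid):
        score = run_experiment(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# 10 x 10 x 10 = 1,000 variations here; scale the grids up for millions
grid = [range(1, 11), range(1, 11), range(0, 10)]
best, score = sweep_designs(grid)
```

The point is that nothing in the loop requires intelligence; the bottleneck is how many experiments per hour the robots can physically run.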

[–] [email protected] -1 points 1 year ago (1 children)

It's 8 instances and the MoE architecture is a little more complex than that.

[–] [email protected] -2 points 1 year ago (17 children)

Just to engage with the high school bully analogy: the nerd has been threatening for years to show up with his sexbot bodyguards, basically T-800s from Terminator, and you've been taking his lunch money and sneering. But now he's got real funding, he goes to work at a huge building, and apparently there are prototypes of the exact thing he claims he'll build inside.

The prototypes suck...for now...

[–] [email protected] 0 points 1 year ago (1 children)

I keep seeing this idea that all GPT needs to be true AI is more permanence and (this is wild to me) a robotic body with which to interact with the world. if that’s it, why not try it out? you’ve got a selection of vector databases that’d work for permanence, and a big variety of cheap robotics kits that speak g-code, which is such a simple language I’m very certain GPT can handle it. what happens when you try this experiment?
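On g-code being simple: each line is one command, so generating it programmatically is trivial. A sketch (G21/G90/G1 are standard g-code words for millimetres, absolute positioning, and linear moves; the specific kit is assumed):

```python
def move_to(x, y, feed=1200):
    """Emit one linear-move command in g-code."""
    return f"G1 X{x:.2f} Y{y:.2f} F{feed}"

program = ["G21", "G90"]  # millimetres, absolute positioning
for x, y in [(0, 0), (10, 0), (10, 10)]:
    program.append(move_to(x, y))
# program is now a complete five-line g-code fragment
```

Whether an LLM can emit *sensible* moves for a task is the open question; emitting syntactically valid g-code is not.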

??? I don't believe GPT-n is ready for direct robotics control at a human level, because it was never trained on it and you need a modified transformer architecture; see https://www.deepmind.com/blog/rt-2-new-model-translates-vision-and-language-into-action . And a bunch of people have tried your experiment, with some results: https://github.com/GT-RIPL/Awesome-LLM-Robotics .

In addition, to tinker with LLMs at this scale you need to be GPU-rich, or have funding of about $250-500M. My employer does, but I'm a cog in the machine. https://www.semianalysis.com/p/google-gemini-eats-the-world-gemini

What I think is that the underlying technology that made GPT-4 possible can be made to drive robots at a human level on some tasks, though as I noted, I think it may take until 2040 to get good. That technology mostly just means lots of data, neural networks, and a mountain of GPUs.

Oh, and RSI (recursive self-improvement). That's the wildcard. This is where you automate AI research, including developing the models that can drive a robot, using current AI as a seed. If that works, well. And yes, there are papers where it does work.

[–] [email protected] -2 points 1 year ago* (last edited 1 year ago) (3 children)

No, literally, the course material has the word "belief". It means "at this instant, what is the estimate of ground truth".

Those shaky blue lines that show where your Tesla on Autopilot thinks the lane is? That's its belief.

English and software have lots of overloaded terms.
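In that robotics sense, a belief is just a probability the system keeps updating as sensor data arrives. A minimal Bayesian sketch, with made-up numbers for how reliably the camera detects a lane marking:

```python
def update_belief(prior, likelihood_if_true, likelihood_if_false):
    """One Bayes update of P(lane marking is here | sensor reading)."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

belief = 0.5  # no idea where the lane is yet
# each camera frame that "sees" the marking nudges the estimate of ground truth
for _ in range(3):
    belief = update_belief(belief, likelihood_if_true=0.9, likelihood_if_false=0.2)
# after three consistent detections, belief is close to 1
```

That running estimate, not any claim about consciousness, is all "belief" means in the coursework.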

[–] [email protected] -1 points 1 year ago* (last edited 1 year ago) (23 children)

(1, 2): Since you claim you can't measure this even as a thought experiment, there's nothing to discuss.

(3): I meant complex robotic systems able to mine minerals, truck the minerals to processing plants, maintain and operate the processing plants, load the next set of trucks, send the trucks to part assembly plants where robots unload them and feed the materials into CNC machines, mill the parts, inspect and pack the output, load more trucks... culminating in robots assembling new robots.

It is totally fine if some human labor hours are still required, this cheapens the cost of robots by a lot.

(4): This is deeply coupled to (3). If robots are cheap and an AI system can control a robot well enough to do a task as well as a human, it's obviously cheaper to have robots do the task in most situations.

Regarding (3) : the specific mechanism would be AI that works like this:

Millions of hours of video of human workers doing tasks in those domains, plus all video accessible to the AI company -> a tokenized, compressed description of the human actions -> an LLM-like model. The LLM-like model is thus predicting "what would a human do". You then need a model to map that "what" onto robotic hardware built differently from humans; this is the "foundation model". You then use reinforcement learning, where actual or simulated robots let the AI system learn from millions of hours of practice to improve on the foundation model.
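The pipeline above can be caricatured as three composed stages. This is purely an illustrative stub of the data flow with hypothetical function names, not any real system's API; in practice (e.g. RT-2-style models) these stages are folded into one large network:

```python
from typing import List

def tokenize_actions(video_frames: List[str]) -> List[str]:
    """Stage 1: compress raw video into tokens describing human actions."""
    return [f"action:{frame}" for frame in video_frames]

def predict_next_action(action_tokens: List[str]) -> str:
    """Stage 2: LLM-like model predicting 'what would a human do next' (stub)."""
    return action_tokens[-1].replace("action:", "next:")

def map_to_robot(human_action: str) -> str:
    """Stage 3: foundation-model step translating a human action onto
    differently-built robot hardware; RL on real or simulated robots
    would refine this mapping."""
    return human_action.replace("next:", "motor_cmd:")

frames = ["reach", "grasp", "lift"]
command = map_to_robot(predict_next_action(tokenize_actions(frames)))
```

The stubs just pass strings through, but the shape is the point: imitation from human video first, then a hardware-specific mapping refined by practice.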

Long story short, all these tech-bro terms add up to robotic generality: the model will be able to control a robot to do every easy or medium-difficulty task, the same way it can solve every easy or medium homework problem. This is what lets you automate (3), because you don't need to do a lot of engineering work for a robot to do a million different jobs.

Multiple startups and DeepMind are working on this.

[–] [email protected] 5 points 1 year ago* (last edited 1 year ago) (1 children)

I'm trying to find the Twitter post where someone deepfakes Eliezer's voice into saying full speed ahead on AI development, we need embodied catgirls pronto.

[–] [email protected] -1 points 1 year ago (4 children)

The one issue I have is: what if some of their beliefs turn out to be real? How would it change things if Scientologists got a two-way communication device, say they found it buried in Hubbard's backyard or whatever, and it appears to be non-human technology, and they're able to talk to an entity who claims it is Xenu? It doesn't mean their cult religion is right, but say the entity is obviously nonhuman, it rattles off methods to build devices current science knows no way to build, other people build the devices and they work, and YOU can pay $480 a year and get FTL walkie-talkies or some shit sent to your door. How does that change your beliefs?
