Do you have a source that presents an unbiased evaluation of this technology? Because I really question the ability of machine learning models to write complex software, so I'll believe it when I see actual results (all I could find with a quick Google search were what amounted to press releases).
No source, of course. Imo the AI propaganda is so strong that no one will listen when you say we're far away from any kind of "AI" replacing programmers; we'll just have to wait for the hype to pass.
You mean the annoying little sidebar in Azure which constantly spits out nonsense isn't taking my job?
Edit: honestly it's incredible that something with a default context set to my environment can't even put out syntactically correct (if meaningless) code. Like, the syntax is often wrong.
For current models to replace a human programmer entirely, the bot would need to be able to problem-solve without a stolen database of knowledge. It would need to make minute adjustments to hyper-specific functions that may not exist anywhere in an internet's worth of past code.
These models do not think. They are not thinking, they are regurgitating. We do not know how to make the magic rocks think like a human.
I know I'm preaching to the choir at you here but I didn't know where else to put this response
And even if they could write code that well, they wouldn't be able to answer questions about it accurately. And that's a very large chunk of the job!
And debugging? Give me a break!
They're trained to produce streams of characters, even though code is natively a syntax tree, because the entire AI industry is just hitting everything with the same rock and hoping to get a new result.
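A quick illustration of that character-stream vs. syntax-tree distinction, using Python's standard `ast` module (the snippet and variable names here are just for illustration):

```python
import ast

# The model sees this as a flat stream of characters...
source = "total = price * (1 + tax_rate)"

# ...but the language itself defines it as a nested tree.
tree = ast.parse(source)
assignment = tree.body[0]

print(type(assignment).__name__)        # Assign
print(type(assignment.value).__name__)  # BinOp
```

The parse tree's nesting (the parenthesized sum inside the multiplication) is explicit structure that a character-level model has to rediscover statistically.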
My guess is, the AI will spit out code that, by itself, won’t be functional, but it’ll be in a state where a small team of coders could fix the oddball things that create errors and inefficiencies. So you would still need real coders, just not as many. Which I’m sure would be an absolute treat for coders, constantly having to debug bizarro AI code.
it's ok at spitting out boilerplate but I honestly find it faster to a) write the code that obviates the boilerplate or b) make an editor macro that does the same thing. I've seen a couple of ooh, ahh type demos of function generation based on a doc string, but the code is almost always incorrect and you need to understand what the code should be in order to figure out what's actually wrong.
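As a hypothetical sketch of option (a) — names invented for the example — instead of letting a model generate several near-identical validators, you write the small helper that makes the boilerplate unnecessary:

```python
# Hypothetical example: a factory function replacing repeated
# copy-pasted range-validation boilerplate.
def make_range_validator(lo, hi):
    def validate(value):
        if not (lo <= value <= hi):
            raise ValueError(f"expected {lo}..{hi}, got {value}")
        return value
    return validate

# Each would-be boilerplate validator becomes a one-liner.
validate_percent = make_range_validator(0, 100)
validate_port = make_range_validator(1, 65535)

print(validate_percent(42))   # 42
print(validate_port(8080))    # 8080
```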
i think i agree with this take but it would work a little differently. This will probably shrink the job market not because there will be teams of programmers who have to fix chatgpt's bullshit but because teams of programmers will instead use chatgpt for mundane tasks that they may have otherwise pawned off to an entry-level new grad. Personally that's essentially what I use it for. If I need some script that does some random bullshit it's faster for me to just do it with ChatGPT than try to figure it out myself. Then I can get back to the thing I'm actually trying to do that ChatGPT unfortunately does a horrible job on (I've tried, I really have).
They don't have to produce functioning software with dozens of components. If they produce one component that takes 60% of the labor time to debug and integrate that it would take to write from scratch, they're still gonna lay off a lot of programmers.
Yeah, pretty much; that is how automation goes. And capital will push for it hard, especially large companies, because they could cut labour costs significantly in some cases if these systems turn out to be even moderately successful. Even a 5-10% cost cut could be substantial.
Although in this case I agree with silent_water that interest rates are the biggest culprit.
Never heard of Devin, but ChatGPT (the current leader) can barely follow instructions well, though it's usually faster at getting the information I need than Googling questions/topics, for a wide variety of tasks. It's pretty much only good at things that had solid stackoverflow answers, including combining 2 questions or answering them for any language. Anytime you attempt to add complexity or details, previous details just fall off at random eventually. Sometimes it just can't combine 2 key details. Sometimes you get a lot of dead-ends.

This mostly makes up for Google not being as good as it was 10 years ago, but my productivity boost is probably less than 10%. With Google, you probably click into 2-10 websites, almost all of them missing a key keyword (you know, a keyword that was key...). Some of the Google quick information boxes copy information from a website and phrase it in the form of a response to your question, but sometimes that specific data point had nothing to do with your question. An example: a random date in the webpage gets picked by Google's LLM to answer my question, but the date I was looking for wasn't even in the webpage.

ChatGPT usually does well if you ask it 101 stuff, questions about well-documented non-obscure facts, starting-from-scratch stuff, or "write a function that does X". Anything else, you can almost safely assume a miss on the first attempt, and a dead-end (no amount of further conversing reaches a viable solution) is very likely.
Devin is afaik built on ChatGPT but it takes it a little farther and iterates on the code ChatGPT generates by attempting to build and run the program, taking screenshots and so on along the way. I'm a little skeptical that this brute force method will work well but it may end up giving us more shit-tier websites and apps that barely function and have random bugs that aren't 100% reproducible.
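A rough sketch of what that iterate-on-errors loop might look like — pure guesswork about Devin's internals. `generate_code` below is a stand-in for an LLM call and just hardcodes its output here:

```python
import subprocess
import sys

def generate_code(prompt, feedback=None):
    # Stand-in for an LLM call; Devin's real internals aren't public.
    # A real loop would send the prompt plus any error feedback to a model.
    return 'print("hello")'

def generate_and_retry(prompt, max_attempts=3):
    feedback = None
    for _ in range(max_attempts):
        code = generate_code(prompt, feedback)
        # Try to actually run the generated program.
        result = subprocess.run(
            [sys.executable, "-c", code], capture_output=True, text=True
        )
        if result.returncode == 0:
            return result.stdout      # it ran cleanly; stop iterating
        feedback = result.stderr      # feed the error back into the next attempt
    raise RuntimeError("no working program after retries")

print(generate_and_retry("write a hello-world script"))  # hello
```

The brute-force flavor is visible in the structure: there's no understanding of the error, just regenerate-and-rerun until something exits 0.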
My skepticism of this being the thing to replace coders is really about scale. If we've really scraped every morsel of information off the internet and come up with GPT-4 and Claude 3 and Gemini 1.5, I don't know where we go with this technique. It is incredibly expensive to build, train, and run these things. ChatGPT-4 is 40 requests per 3 hours for $20/month, so even if you use it as efficiently as you can, each request costs about two tenths of a cent. Datacenters are now putting pressure on the US electrical system, and I haven't heard of much in terms of making transformers (the core layer of these things) more efficient.
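The back-of-envelope arithmetic behind that per-request figure (assuming you max out the 40-requests-per-3-hours cap for a whole month):

```python
# Sanity-check the ~0.2 cents/request figure.
requests_per_window = 40
window_hours = 3
hours_per_month = 30 * 24  # 720

max_requests = requests_per_window * hours_per_month // window_hours
cost_per_request = 20 / max_requests  # dollars, on the $20/month plan

print(max_requests)                # 9600
print(round(cost_per_request, 4))  # 0.0021 dollars, i.e. ~0.2 cents
```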
Anyway, those are kind of disorganized thoughts, but in summary: unless something really transformative happens in the ML space, I don't know if we can possibly get to the scale of power, computation, and memory we would need for human-level reasoning. Let alone the fact that we apparently need to suck up all information in existence just to get basic human-like text generation.