this post was submitted on 27 Sep 2023

TechTakes


These experts on AI are here to help us understand important things about AI.

Who are these generous, helpful experts that the CBC found, you ask?

"Dr. Muhammad Mamdani, vice-president of data science and advanced analytics at Unity Health Toronto", per LinkedIn a PharmD, who also serves in various AI-associated centres and institutes.

"(Jeff) Macpherson is a director and co-founder at Xagency.AI", a tech startup which does, uh, lots of stuff with AI (see their wild services page) that appears to have been announced on LinkedIn two months ago. The founders section lists other details apart from J.M.'s "over 7 years in the tech sector" which are interesting to read in light of J.M.'s own LinkedIn page.

Other people making points in this article:

C. L. Polk, award-winning author (of Witchmark).

"Illustrator Martin Deschatelets" whose employment prospects are dimming this year (and who knows a bunch of people in this situation), who per LinkedIn has worked on some nifty things.

"Ottawa economist Armine Yalnizyan", per LinkedIn a fellow at the Atkinson Foundation who used to work at the Canadian Centre for Policy Alternatives.

Could the CBC actually seriously not find anybody willing to discuss the actual technology and how it gets its results? This is archetypal hood-welded-shut sort of stuff.

Things I picked out, from article and round table (before the video stopped playing):

Does that Unity Health doctor go back later and check these emergency room intake predictions against actual cases appearing there?

Who is the "we" who have to adapt here?

AI is apparently "something that can tell you how many cows are in the world" (J.M.). Detecting a lack of results validation here again.

"At the end of the day that's what it's all for. The efficiency, the productivity, to put profit in all of our pockets", from J.M.

"You now have the opportunity to become a Prompt Engineer", from J.M. to the author and illustrator. (It's worth watching the video to listen to this person.)

Me about the article:

I'm feeling that same underwhelming "is this it" bewilderment again.

Me about the video:

Critical thinking and ethics and "how software products work in practice" classes for everybody in this industry please.

[–] [email protected] 1 points 1 year ago (1 children)

Unlike blockchains, LLMs have practical uses (GH Copilot, for example, and some RAG use cases like summarizing aggregated search results). Unfortunately, everyone and their mother seems to think they can solve every problem they have, and it doesn't help when suits at companies want to use LLMs just to market that they use them.

Generally speaking, they are a solution in search of a problem though.
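(Aside for the curious: the "retrieval" half of a RAG setup is mundane enough to sketch without any model at all. This toy Python example, with invented documents and a made-up `retrieve` helper, ranks documents by keyword overlap with the query, which is roughly what gets pasted into the prompt as context in a real system.)

```python
# Toy sketch of the retrieval step in RAG: score documents by
# keyword overlap with the query and take the top hits. In a real
# system the winners get pasted into the LLM prompt as context.
# All names and documents here are illustrative, not from any library.

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    query_words = set(query.lower().split())

    def score(doc: str) -> int:
        return len(query_words & set(doc.lower().split()))

    # Highest-overlap documents first; ties keep original order.
    return sorted(docs, key=score, reverse=True)[:k]

docs = [
    "copilot suggests code completions inline",
    "bing chat summarizes aggregated search results",
    "blockchains are a solution in search of a problem",
]
print(retrieve("summarize search results", docs, k=1))
```

Real implementations swap keyword overlap for embedding similarity, but the shape of the pipeline is the same.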

[–] [email protected] 14 points 1 year ago (4 children)

GH Copilot, for example, and some RAG use cases like summarizing aggregated search results

you have no idea how many engineering meetings I’ve had go off the rails entirely because my coworkers couldn’t stop pasting obviously wrong shit from copilot, ChatGPT, or Bing straight into prod (including a bunch of rounds of re-prompting once someone realized the bullshit the model suggested didn’t work)

I also have no idea how many, thanks to alcohol

[–] [email protected] 9 points 1 year ago

Ah, I see you, too, have an engineering culture of PDD

(Promptfan Driven Dev)

[–] [email protected] 7 points 1 year ago

Haha, they are in fact solutions to potential problems. They aren't searching for problems; they're searching for people to convince that the problems they solve will happen if those people don't use AI.

[–] [email protected] 2 points 1 year ago (2 children)

That sounds miserable tbh. I use copilot for repetitive tasks, since it's good at continuing patterns (5 lines slightly different each time but otherwise the same). If your engineers are just pasting whatever BS comes out of the LLM into their code, maybe they need a serious talking to about replacing them with the LLM if they can't contribute anything meaningful beyond that.

[–] [email protected] 9 points 1 year ago (2 children)

as much as I’d like to have a serious talk with about 95% of my industry right now, I usually prefer to rant about fascist billionaire assholes like altman, thiel, and musk who’ve poured a shit ton of money and resources into the marketing and falsified research that made my coworkers think pasting LLM output into prod was a good idea

I use copilot for repetitive tasks, since it’s good at continuing patterns (5 lines slightly different each time but otherwise the same).

it’s time to learn emacs, vim, or (best of all) an emacs distro that emulates vim

[–] [email protected] 6 points 1 year ago* (last edited 1 year ago) (1 children)

it’s time to learn emacs, vim, or (best of all) an emacs distro that emulates vim

I was gonna say... good old `qa....q 20@a` does the job just fine thanks :p

[–] [email protected] 6 points 1 year ago

“but my special boy text editing task surely needs more than a basic macro” that’s why Bram Moolenaar, Dan Murphy, and a bunch of grad students Stallman didn’t credit gave us Turing-complete editing languages

[–] [email protected] 1 points 1 year ago

Yes, the marketing of LLMs is problematic, but it doesn't help that they're extremely demoable to audiences who don't know enough about data science to realize how unfeasible it is to have a service be inaccurate as often as LLMs are. Show a cool LLM demo to a C-suite and chances are they'll want to make a product out of it, regardless of the fact that you're only getting acceptable results 50% of the time.

it’s time to learn emacs, vim, or (best of all) an emacs distro that emulates vim

I'm perfectly fine with vscode, and I know enough vim to make quick changes, save, and quit when git opens it from time to time. It also has multi-cursor support which helps when editing multiple lines in the same way, but not when there are significant differences between those lines but they follow a similar pattern. Copilot can usually predict what the line should be given enough surrounding context.
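(For the record, the "significant differences but a similar pattern" case is exactly what a rule-based edit handles deterministically. This Python sketch, with input lines invented for illustration, rewrites each line by an explicit rule instead of predicting it:)

```python
import re

# Deterministic alternative to predictive completion: when the
# per-line differences follow a rule, express the rule directly.
# The input lines below are invented for illustration.
lines = [
    "item_sword = load(1)",
    "item_shield = load(2)",
    "item_helmet = load(3)",
]

# Rule: wrap every load(n) call in a validate(...) call,
# preserving the captured argument via the \1 backreference.
edited = [re.sub(r"load\((\d+)\)", r"validate(load(\1))", line)
          for line in lines]

for line in edited:
    print(line)
```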

[–] [email protected] 8 points 1 year ago (1 children)

@TehPers @self

5 lines slightly different each time but otherwise the same

I got questions tbh

[–] [email protected] 0 points 1 year ago* (last edited 1 year ago) (1 children)

It's not that uncommon when filling an array with data or populating a YAML/JSON file by hand. It can even be helpful when populating something like a Docker Compose config, which I use occasionally to spin up local services (DBs and such) while debugging.
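(Worth noting that "N nearly-identical config entries" is also just a loop. A sketch, with a made-up schema loosely shaped like Compose services; none of these names come from any real config:)

```python
import json

# Generating repetitive config entries instead of hand-filling them.
# The schema here is made up for illustration; the point is that
# nearly-identical entries are a loop, not a typing exercise.
services = {
    f"db-{name}": {
        "image": f"{name}:latest",
        "ports": [f"{port}:{port}"],
    }
    for name, port in [("postgres", 5432), ("redis", 6379)]
}

print(json.dumps(services, indent=2))
```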

[–] [email protected] 3 points 1 year ago (1 children)
[–] [email protected] 0 points 1 year ago* (last edited 1 year ago) (2 children)

Copilot helped me a lot when filling in legendaryII.json based on data from legendary.json in this directory. The data between the two files is similar, but there are slight differences in the item names and flag names. Most of it was copy/paste, but filling in the When sections was much easier for me with copilot + verify, for example.

Edit: It also helped me with filling in the entries at the top of this C# file based on context I provided in a different format above (temporarily) in comments.

[–] [email protected] 5 points 1 year ago (1 children)

@TehPers there are tools for doing this sor... you know what, never mind

[–] [email protected] -1 points 1 year ago (1 children)
[–] [email protected] 7 points 1 year ago (2 children)

@TehPers "I used Github Copilot to help me hand-edit a massive JSON file which was *very slightly different* from another JSON file that I also maintain for some reason, therefore AI is good" is quite a take, but go off, I guess

[–] [email protected] 7 points 1 year ago* (last edited 1 year ago) (1 children)

it's over a decade since eevee wrote the PHP clawhammer post and there's a whole new generation still learning old mistakes

best don't waste your keys, it doesn't sound like this person wants to hear any different to what they already know/think

[–] [email protected] 7 points 1 year ago

@froztbyte just reinforcing my hypothesis that, as with crypto, everybody involved in this is either a cynical grifter or a wide-eyed first-day-on-the-computer 12-year-old

[–] [email protected] -3 points 1 year ago (1 children)

I genuinely don't get what point you're trying to make. I found the tool useful and it saved me time. Are you trying to say the tool did not in fact do what I needed it to, when my other usual approaches were not flexible enough to do what I needed? Did it not do its job and save me time writing my code?

Seriously, you don't see me making fun of people for using vim or notepad++, or whatever editors and tools you use.

[–] [email protected] 7 points 1 year ago* (last edited 1 year ago) (3 children)

You were asked to give a use-case for LLMs, and with this comes the implicit assumption that it's not something that can be easily done with a tool that costs about seven orders of magnitude less to produce and operate.

A bunch of junior devs writing repetitive code because it's easier or people refusing to learn proper tools because "AI can write my JSON" aren't exactly good reasons for the rest of the industry to learn how LLMs work. Don't get me wrong, there are good reasons, but you've not listed any.

[–] [email protected] 6 points 1 year ago

inb4 “but copilot is freeeee” in 3..2..

[–] [email protected] -1 points 1 year ago (2 children)

You were asked to give a use-case for LLMs

No, I was asked to give a situation where Copilot was useful. For LLMs, go look at how popular ChatGPT-like tools are for people who aren't developers, especially RAG-based ones like Bing chat, and tell me they aren't finding use out of them when companies are literally providing guidance for using them to employees who barely know how to use Excel.

A bunch of junior devs writing repetitive code because it's easier or people refusing to learn proper tools because "AI can write my JSON" aren't exactly good reasons for the rest of the industry to learn how LLMs work. Don't get me wrong, there are good reasons, but you've not listed any.

It saved me time in more than one instance. I don't particularly care what the industry does and never asked the industry to change, but the industry is changing without my input anyway. Clearly I'm not the only one who finds that it increases productivity, and no, sed and vim scripts aren't going to do the kind of predictive completions that Copilot can do.

Also, junior devs are going to junior dev regardless of the presence of LLMs. It has always been the responsibility of more senior devs to help them write code correctly. Blaming more junior devs for relying too much on LLMs is just an admission that as a senior dev, you are failing to guide them in the right direction and help them improve.

[–] [email protected] 6 points 1 year ago* (last edited 1 year ago) (2 children)

For LLMs, go look at how popular ChatGPT-like tools are for people who aren’t developers, especially RAG-based ones like Bing chat, and tell me they aren’t finding use out of them

the RAG-based bing chat that everyone in my social sphere (especially the non-developers) rags on for giving ridiculously bad answers? what a bizarrely shitty implementation to apparently be obsessed with

Also, junior devs are going to junior dev

given that you seem to be resistant to learning how to use an editor for anything more advanced than linear text insertion and seem to think git is “forcing” you to use vim, maybe instead of throwing other junior devs under the bus you should be focusing a bit more on learning your craft? I can guarantee you that all this black box bullshit is an impediment to understanding that your career will be better off without

and with that, the hour is late, this subthread is too fucking long, and your time posting godawful takes on this instance is coming to a close

e: oh yeah, I have a link handy for anyone who doesn’t believe my anecdote about how much people fucking hate bing AI

[–] [email protected] 5 points 1 year ago

this subthread is too fucking long

it was breaking mlem's ability to actually see this deep

[–] [email protected] 5 points 1 year ago (1 children)

also that's a great link, ty, adding it to my stash

[–] [email protected] 6 points 1 year ago (2 children)

I’m so glad I started adding this shit to Zotero to use as references in future long form articles, cause it turns out it’s also a pretty good bookmark manager

[–] [email protected] 4 points 1 year ago (1 children)

haha, I was wondering (and planning to ask)

it's still an unsolved problem in my life, and none of the solutions or frameworks I've come across yet have matched up to my needs. I might be doomed to have to write the software myself.

[–] [email protected] 6 points 1 year ago (1 children)

I’ve written some of my best software in a bout of rage and exasperation that somehow nobody has come up with a version of what I want that doesn’t suck

[–] [email protected] 5 points 1 year ago
[–] [email protected] 3 points 1 year ago (1 children)

Go Zotero!!!! I hope they don’t AI that shit

[–] [email protected] 6 points 1 year ago (1 children)

god they fucking would wouldn’t they. flashing back to MDN implementing a bunch of LLM bullshit and the two people responsible for sneaking it into the codebase getting increasingly passive aggressive (in a very cryptobro-reminiscent way) with the hundreds of developers who had a problem with it

[–] [email protected] 5 points 1 year ago* (last edited 1 year ago) (1 children)

shit what happened with that, I got too busy with life

did they just hunker down for a while and hope people would stop being mad? my first suspicion/expectation is that this is what they would do/did

[–] [email protected] 5 points 1 year ago (1 children)

I think they got rid of the ridiculously inaccurate autodocs, but kept the ridiculously inaccurate paid ChatGPT wrapper with a warning that its results “may be inaccurate”

[–] [email protected] 5 points 1 year ago

that's as close to diametrically opposite the right thing as one can manage to do

impressive, I guess

[–] [email protected] 4 points 1 year ago (1 children)

especially RAG-based ones like Bing chat

@self we've got another 'un. and it didn't even take them that long to go from pretending innocence to mask-off!

[–] [email protected] 6 points 1 year ago

“copilot isn’t an LLM” followed by “everyone loves bing AI” is a one-two punch of bad takes I admittedly didn’t see coming

[–] [email protected] -1 points 1 year ago (1 children)

@TehPers found an optimisation for you, without resorting to Copilot https://github.com/TehPers/StardewValleyMods/pull/37

Feel free to merge once your tests pass. What's that? There are no tests? Ah well...

[–] [email protected] 6 points 1 year ago

what was the point of this