this post was submitted on 27 Sep 2023
63 points (100.0% liked)

TechTakes


These experts on AI are here to help us understand important things about AI.

Who are these generous, helpful experts that the CBC found, you ask?

"Dr. Muhammad Mamdani, vice-president of data science and advanced analytics at Unity Health Toronto", per LinkedIn a PharmD, who also serves in various AI-associated centres and institutes.

"(Jeff) Macpherson is a director and co-founder at Xagency.AI", a tech startup which does, uh, lots of stuff with AI (see their wild services page) that appears to have been announced on LinkedIn two months ago. The founders section lists other details apart from J.M.'s "over 7 years in the tech sector" which are interesting to read in light of J.M.'s own LinkedIn page.

Other people making points in this article:

C. L. Polk, award-winning author (of Witchmark).

"Illustrator Martin Deschatelets" whose employment prospects are dimming this year (and who knows a bunch of people in this situation), who per LinkedIn has worked on some nifty things.

"Ottawa economist Armine Yalnizyan", per LinkedIn a fellow at the Atkinson Foundation who used to work at the Canadian Centre for Policy Alternatives.

Could the CBC actually seriously not find anybody willing to discuss the actual technology and how it gets its results? This is archetypal hood-welded-shut sort of stuff.

Things I picked out, from article and round table (before the video stopped playing):

Does that Unity Health doctor go back later and check these emergency room intake predictions against actual cases appearing there?

Who is the "we" who have to adapt here?

AI is apparently "something that can tell you how many cows are in the world" (J.M.). Detecting a lack of results validation here again.

"At the end of the day that's what it's all for. The efficiency, the productivity, to put profit in all of our pockets", from J.M.

"You now have the opportunity to become a Prompt Engineer", from J.M. to the author and illustrator. (It's worth watching the video to listen to this person.)

Me about the article:

I'm feeling that same underwhelming "is this it" bewilderment again.

Me about the video:

Critical thinking and ethics and "how software products work in practice" classes for everybody in this industry please.

[–] [email protected] -3 points 1 year ago (1 children)

I genuinely don't get what point you're trying to make. I found the tool useful and it saved me time. Are you trying to say the tool did not in fact do what I needed it to, when my other usual approaches were not flexible enough to do what I needed? Did it not do its job and save me time writing my code?

Seriously, you don't see me making fun of people for using vim or notepad++, or whatever editors and tools you use.

[–] [email protected] 7 points 1 year ago* (last edited 1 year ago) (3 children)

You were asked to give a use-case for LLMs, and with this comes the implicit assumption that it's not something that can be easily done with a tool that costs about seven orders of magnitude less to produce and operate.

A bunch of junior devs writing repetitive code because it's easier, or people refusing to learn proper tools because "AI can write my JSON", aren't exactly good reasons for the rest of the industry to learn how LLMs work. Don't get me wrong, there are good reasons, but you've not listed any.
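(To be concrete about what "proper tools" buys you over "AI can write my JSON" — a minimal sketch using Python's standard-library `json` module; the config values here are made up for illustration. The output is guaranteed syntactically valid and reproducible, with no model in the loop:)

```python
import json

# Build the data as a plain Python structure, then serialize it.
# Unlike LLM output, json.dumps can never produce invalid JSON.
config = {
    "name": "example-service",   # hypothetical values for illustration
    "port": 8080,
    "features": ["logging", "metrics"],
}

print(json.dumps(config, indent=2))
```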

[–] [email protected] 6 points 1 year ago

inb4 “but copilot is freeeee” in 3..2..

[–] [email protected] -1 points 1 year ago (2 children)

You were asked to give a use-case for LLMs

No, I was asked to give a situation where Copilot was useful. For LLMs, go look at how popular ChatGPT-like tools are among people who aren't developers, especially RAG-based ones like Bing chat, and tell me they aren't getting real use out of them when companies are literally providing guidance for using them to employees who barely know how to use Excel.

A bunch of junior devs writing repetitive code because it's easier, or people refusing to learn proper tools because "AI can write my JSON", aren't exactly good reasons for the rest of the industry to learn how LLMs work. Don't get me wrong, there are good reasons, but you've not listed any.

It saved me time in more than one instance. I don't particularly care what the industry does and never asked the industry to change, but the industry is changing without my input anyway. Clearly I'm not the only one who finds that it increases productivity, and no, sed and vim scripts aren't going to do the kind of predictive completions that Copilot can do.

Also, junior devs are going to junior dev regardless of the presence of LLMs. It has always been the responsibility of more senior devs to help them write code correctly. Blaming more junior devs for relying too much on LLMs is just an admission that as a senior dev, you are failing to guide them in the right direction and help them improve.

[–] [email protected] 6 points 1 year ago* (last edited 1 year ago) (2 children)

For LLMs, go look at how popular ChatGPT-like tools are for people who aren’t developers, especially RAG-based ones like Bing chat, and tell me they aren’t finding use out of them

the RAG-based bing chat that everyone in my social sphere (especially the non-developers) rags on for giving ridiculously bad answers? what a bizarrely shitty implementation to apparently be obsessed with

Also, junior devs are going to junior dev

given that you seem to be resistant to learning how to use an editor for anything more advanced than linear text insertion and seem to think git is “forcing” you to use vim, maybe instead of throwing other junior devs under the bus you should be focusing a bit more on learning your craft? I can guarantee you that all this black box bullshit is an impediment to understanding that your career will be better off without

and with that, the hour is late, this subthread is too fucking long, and your time posting godawful takes on this instance is coming to a close

e: oh yeah, I have a link handy for anyone who doesn’t believe my anecdote about how much people fucking hate bing AI

[–] [email protected] 5 points 1 year ago (1 children)

also that's a great link, ty, adding it to my stash

[–] [email protected] 6 points 1 year ago (2 children)

I’m so glad I started adding this shit to Zotero to use as references in future long form articles, cause it turns out it’s also a pretty good bookmark manager

[–] [email protected] 4 points 1 year ago (1 children)

haha, I was wondering (and planning to ask)

it's still an unsolved problem in my life, and none of the solutions or frameworks I've come across yet have matched up to my needs. I might be doomed to have to write a software.

[–] [email protected] 6 points 1 year ago (1 children)

I’ve written some of my best software in a bout of rage and exasperation that somehow nobody has come up with a version of what I want that doesn’t suck

[–] [email protected] 5 points 1 year ago

[–] [email protected] 3 points 1 year ago (1 children)

Go Zotero!!!! I hope they don’t AI that shit

[–] [email protected] 6 points 1 year ago (1 children)

god they fucking would wouldn’t they. flashing back to MDN implementing a bunch of LLM bullshit and the two people responsible for sneaking it into the codebase getting increasingly passive aggressive (in a very cryptobro-reminiscent way) with the hundreds of developers who had a problem with it

[–] [email protected] 5 points 1 year ago* (last edited 1 year ago) (1 children)

shit what happened with that, I got too busy with life

did they just hunker down for a while and hope people would stop being mad? my first suspicion/expectation is that this is what they would do/did

[–] [email protected] 5 points 1 year ago (1 children)

I think they got rid of the ridiculously inaccurate autodocs, but kept the ridiculously inaccurate paid ChatGPT wrapper with a warning that its results “may be inaccurate”

[–] [email protected] 5 points 1 year ago

that's as close to diametrically opposite the right thing as one can manage to do

impressive, I guess

[–] [email protected] 5 points 1 year ago

this subthread is too fucking long

it was breaking mlem's ability to actually see this deep

[–] [email protected] 4 points 1 year ago (1 children)

especially RAG-based ones like Bing chat

@self we've got another 'un. and it didn't even take them that long to go from pretending innocence to mask-off!

[–] [email protected] 6 points 1 year ago

“copilot isn’t an LLM” followed by “everyone loves bing AI” is a one-two punch of bad takes I admittedly didn’t see coming