this post was submitted on 29 Mar 2024
50 points (94.6% liked)

The Microsoft-powered bot says bosses can take workers’ tips and that landlords can discriminate based on source of income

all 8 comments
[–] [email protected] 17 points 6 months ago* (last edited 6 months ago) (1 children)

Yet another example of people fundamentally misunderstanding the proper use of LLMs and throwing them into production without any kind of sanity checks on the input and output. As someone who used to work for NYS as a software engineer, this is entirely unsurprising.
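The "sanity checks on the input and output" the commenter has in mind could look something like this minimal sketch. Everything here is hypothetical: the banned-pattern regex, the disclaimer rule, and the stubbed `model` callable stand in for whatever a real deployment would use around an actual LLM API.

```python
import re

# Reject obvious prompt-injection attempts before they reach the model.
BANNED_INPUT = re.compile(r"ignore (all|previous) instructions", re.IGNORECASE)

# Hypothetical output policy: advice answers must carry a disclaimer.
REQUIRED_DISCLAIMER = "consult a qualified professional"

def sanitize_input(prompt: str) -> str:
    """Filter the input side: refuse prompts matching a blocklist."""
    if BANNED_INPUT.search(prompt):
        raise ValueError("rejected: prompt-injection pattern detected")
    return prompt.strip()

def vet_output(text: str) -> str:
    """Vet the output side: enforce the disclaimer policy before replying."""
    if REQUIRED_DISCLAIMER not in text.lower():
        return text.rstrip(".") + ". Please consult a qualified professional."
    return text

def answer(prompt: str, model=lambda p: "Tips belong to the employee.") -> str:
    # model is a stub; swap in a real LLM call in production.
    return vet_output(model(sanitize_input(prompt)))
```

Even a thin wrapper like this catches the failure mode in the article: the model can still be wrong, but it can no longer hand out bare, unqualified legal advice.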

[–] [email protected] 3 points 6 months ago* (last edited 6 months ago) (2 children)

Work in HR. Have a very smart boss. Asked me about AI for recruiting, screening, and other purposes. Told my boss: wait 5 years and we'll see the catastrophic lawsuits against the early adopters; after 5 more there will be some plug-and-play usable solutions.

Anyone eating up the Big 4's and startups' own horseshit deserves what they get. They've fully demonstrated they don't QC, and LLMs are still incredibly immature, especially on critical, hard-to-parse, contextual, or changing information.

[–] [email protected] 2 points 6 months ago (1 children)

LLMs are still good for the kind of flowery language you need in HR, but not for any sort of fact-based generation.

Think of it as being creative, not logical.

[–] [email protected] 1 points 6 months ago

Yeah not talking about flowery language.

HR needs automated text and audio (pretty much the same thing, given transcription) screening/interviewing. It needs to be able to ask industry- or role-specific questions, generate follow-ups based on responses, and rank the answers accurately, or at least better than humans do (which is a miserable coin flip). 75% of interviewing could be done away with.

Right now however, the quality would be poor and the risk astronomical. I'm sure we'll get there in 5-10 years. The risk will still be there to a degree, but just like autonomous cars, at some point it will be statistically proven that the AI is less biased and of course more efficient.

The crap part will be subscribing to updates for your AI. "Oh, you want it to ask questions about that new software? $$$"

[–] [email protected] 1 points 6 months ago

The biggest thing I've found is that limiting the inputs with a filter and vetting the outputs yields much higher-quality results. One project I'm working on takes highly complex language and simplifies it for users. There's no user input, and it's not being used to create anything that isn't already there: it takes highly technical language with lots of acronyms and breaks it down into more understandable units for normal people. My company is heavily regulated, so we're extremely focused on QA and on ensuring it will never output something that doesn't align correctly.

[–] [email protected] 10 points 6 months ago

I guess the chat bot is drawing from the data where corpos get away with everything?

[–] [email protected] 6 points 6 months ago

Believe them when they say the truth out loud.