this post was submitted on 05 Feb 2024
324 points (98.8% liked)

Technology

[–] [email protected] 1 points 9 months ago (1 children)

Let's not confuse ourselves here. The opposite of one evil is not necessarily a good. Police reviewing their own footage, investigating themselves: bad. Unreliable AI beholden to corporate interests and shareholders: also bad.

[–] [email protected] 5 points 9 months ago (2 children)

It's fine to not understand what "AI" is and how it works, but you should avoid making statements that highlight that lack of understanding.

[–] [email protected] 8 points 9 months ago* (last edited 9 months ago) (1 children)

If you feel someone's knowledge is lacking, then explaining it may convince them, or at least others reading your post.

[–] [email protected] -1 points 9 months ago* (last edited 9 months ago)

Speaking of a broad category of useful technologies as inherently bad is a dead giveaway that someone doesn't know what they're talking about.

[–] [email protected] 4 points 9 months ago* (last edited 9 months ago) (1 children)

It’s fine to not understand what “AI” is and how it works

That's highly presumptuous, isn't it? I didn't make any statement about what AI is, or the mechanics behind it. I only made a statement regarding the owners and operators of AI. We're talking about the politics of using AI to aid in police accountability, and for those purposes, AI need not be more than a black box. We could call it a sentient jar of kidney beans for all it matters.

So for the sake of argument - the one I made, not the one I didn't make - what did I misunderstand?

Unreliable

On June 22, 2023, Judge P. Kevin Castel of the Southern District of New York released a lengthy order sanctioning two attorneys for submitting a brief drafted by ChatGPT. Judge Castel reprimanded the attorneys, explaining that while “there is nothing inherently improper about using a reliable artificial intelligence tool for assistance,” the attorneys “abandoned their responsibilities” by submitting a brief littered with fake judicial opinions, quotes, and citations.

Judge Castel’s opinion offers a detailed analysis of one such opinion, Varghese v. China Southern Airlines Co., Ltd., 925 F.3d 1339 (11th Cir. 2019), which the sanctioned lawyers produced to the Court. The Varghese decision is presented as being issued by three Eleventh Circuit judges. While according to Judge Castel’s opinion the decision “shows stylistic and reasoning flaws that do not generally appear in decisions issued by the United States Court of Appeals,” and contains a legal analysis that is otherwise “gibberish,” it does in fact reference some real cases. Additionally, when confronted with the question of whether the case is real, the AI platform itself doubles down, explaining that the case “does indeed exist and can be found on legal research databases such as Westlaw and LexisNexis.”

https://www.natlawreview.com/article/artificially-unintelligent-attorneys-sanctioned-misuse-chatgpt

Regardless of how ChatGPT made this error, be it "hallucination" or otherwise, I would submit this as exhibit A that AI, at least currently, is not reliable enough to do legal analysis.

Beholden to corporate interests

Most of the big-name large language models are owned and run by huge corporations: OpenAI's ChatGPT, Google's Bard, Microsoft's Copilot, etc. It is already almost impossible to hold these organizations accountable for their misdeeds, so how can we trust their creations to police the police?

The naive "at-best" scenario is that an AI trained to identify unjustified police shootings sometimes fails to identify them properly. Some go unreported. Or perhaps it reports a "justified" police shooting (I am not here to debate that definition, but let's say they occur) as unjustified, which gums up other investigation efforts.

The more conspiratorial "at-worst" scenario is that a company with a pro-cop/thin-blue-line sympathizing culture could easily sweep damning reports made by their AI under the rug, which facilitates aggressive police behavior under the guise of "monitoring" it.

As reported by ProPublica, the Paterson PD has a contract with a Chicago-based software company called Truleo to examine audio from bodycam videos to identify problematic behavior by officers. The company charges around $50,000 per year for flagging several types of behaviors, such as when officers use force, interrupt civilians, use profanities, or turn off their cameras while on active duty. The company claims that its data shows such behaviors often lead to violent escalation.

How does Truleo determine what counts as "risky" behavior, or what counts as an "interruption" of a civilian? What is a profanity? Does Truleo consider "crap" to be a profanity? More importantly, what if you disagree with Truleo's definitions? What recourse do you have against a company that has zero duty to protect you? If you file a lawsuit alleging officer misconduct, are Truleo's AI's conclusions admissible as evidence, and can they be used against you?
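Truleo's actual system is proprietary, so here is a deliberately naive sketch of the definitional problem above: whoever writes the word list decides what gets flagged. Everything in this snippet (the function, the word list, the sample transcript) is hypothetical and not drawn from Truleo:

```python
# Hypothetical illustration, NOT Truleo's method: a naive transcript
# flagger. The operator-chosen word list, not any objective standard,
# determines which utterances count as "profanity."

PROFANITY = {"damn", "hell"}  # is "crap" profane? the vendor decides


def flag_transcript(utterances):
    """Return officer utterances containing words from the operator-chosen list."""
    flags = []
    for speaker, text in utterances:
        words = {w.strip(".,!?").lower() for w in text.split()}
        if speaker == "officer" and words & PROFANITY:
            flags.append((speaker, text))
    return flags


transcript = [
    ("officer", "Stay in the damn car."),     # flagged: "damn" is on the list
    ("officer", "That's a load of crap."),    # not flagged: "crap" isn't
]
print(flag_transcript(transcript))
```

Two utterances an ordinary person might judge identically get opposite labels, purely because of a choice buried in the vendor's configuration.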

(1/2)

[–] [email protected] 3 points 9 months ago* (last edited 9 months ago)

And shareholders

He couldn’t have imagined the drama of this week, with four directors on OpenAI’s nonprofit board unexpectedly firing him as CEO and removing the company’s president as chairman of the board. But the bylaws Altman and his cofounders initially established and a restructuring in 2019 that opened the door to billions of dollars in investment from Microsoft gave a handful of people with no financial stake in the company the power to upend the project on a whim.

https://www.wired.com/story/openai-bizarre-structure-4-people-the-power-to-fire-sam-altman/

Oh! Turns out I was wrong... "a handful of people with no financial stake in the company" doesn't sound like shareholders, and yet they could change the direction of the company at will. And just so we're clear: whether it's four faceless ghouls or Sam Altman, one person or four, the company is beholden to a handful of people who are not democratically elected, not necessarily legal experts, and not necessarily former police officers... and their AI is what decides whether or not to hold a police officer accountable for his misdeeds? Hard. Pass.

Oh, and lest we forget, Microsoft is invested in OpenAI, and OpenAI has a quasi-profit-driven structure. Those four board directors aren't even my biggest concern with that arrangement.

(2/2)