this post was submitted on 28 Oct 2024
32 points (100.0% liked)

TechTakes


With the OSI publishing their abysmal (explicitly not open source) "Open Source AI" definition, I thought I'd post my argument for why it is bad and why "Open Source AI" currently probably does not exist.

top 11 comments
[–] bitofhope@awful.systems 23 points 1 month ago (3 children)

The stretching is just so blatant. People who train neural networks do not write a bunch of tokens and weights. They take a corpus of training data and run a training program to generate the weights. That's why it is the training program and the corpus that should be considered the source form of the program. If either of these can't be made available in a way that allows redistribution of verbatim and modified versions, it can't be open source. Even if I have a powerful server farm and a list of data sources for Llama 3, I can't replicate the model myself without committing copyright infringement (neither could Facebook for that matter, and that's not an entirely separate issue).
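A toy sketch of the point, with a stand-in two-parameter "model" (everything here is illustrative): the corpus and the training program play the role of source code, and the weights fall out as the build artifact, the same way a binary falls out of a compiler.

```python
import numpy as np

# The "source form": a corpus plus a training program.
corpus_x = np.array([0.0, 1.0, 2.0, 3.0])
corpus_y = np.array([1.0, 3.0, 5.0, 7.0])  # underlying relation: y = 2x + 1

def train(x: np.ndarray, y: np.ndarray, lr: float = 0.01, steps: int = 5000) -> np.ndarray:
    """Toy training program: fit y = w*x + b by gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        err = (w * x + b) - y           # prediction error on the corpus
        w -= lr * 2 * (err * x).mean()  # gradient of mean squared error w.r.t. w
        b -= lr * 2 * err.mean()        # gradient of mean squared error w.r.t. b
    return np.array([w, b])

# The "binary": the weights are generated, not written by hand.
weights = train(corpus_x, corpus_y)
print(weights)  # approximately [2.0, 1.0]
```

Releasing only `weights` while withholding `train` and the corpus is the move being objected to here.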

There are large collections of freely licensed and public domain media that could theoretically be used to train a model, but that model surely wouldn't be as big as the proprietary ones. In some sense truly open source AI does exist and has for a long time, but that's not the exciting thing OSI is lusting after, is it?

[–] BlueMonday1984@awful.systems 9 points 1 month ago

I've already talked about the indirect damage AI's causing to open source in this thread, but this hyper-stretched definition's probably doing some direct damage as well.

Considering that this "Open Source AI" definition is (almost certainly by design) going to openwash the shit out of blatant large-scale theft, I expect it'll heavily tar the public image of open-source, especially when these "Open Source AIs" start getting sued for copyright infringement.

[–] JFranek@awful.systems 7 points 1 month ago (1 child)

Yeah, neural network training is notoriously easy to reproduce /s.

Just a few things can affect the results: the source data, the data labels, the network structure, the training parameters, the version of the training script, the versions of the libraries, the random number generator seed, the hardware, and the operating system.
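For illustration, a minimal sketch of what just the "seed" item on that list costs you, assuming PyTorch (the framework is my assumption; none is named above):

```python
import os
import random

import numpy as np
import torch

def pin_seeds(seed: int = 42) -> None:
    """Pin every seedable source of nondeterminism; the rest of the list above is still in play."""
    random.seed(seed)                 # Python's built-in RNG
    np.random.seed(seed)              # NumPy's global RNG
    torch.manual_seed(seed)           # PyTorch CPU (and default CUDA) RNGs
    torch.cuda.manual_seed_all(seed)  # RNGs on every GPU
    torch.backends.cudnn.deterministic = True  # force deterministic cuDNN kernels
    torch.backends.cudnn.benchmark = False     # disable autotuning, which varies by hardware
    # Error out instead of silently running nondeterministic ops
    # (some ops additionally need CUBLAS_WORKSPACE_CONFIG set before CUDA init).
    torch.use_deterministic_algorithms(True)
    # Only effective if set before interpreter startup; shown here for completeness.
    os.environ["PYTHONHASHSEED"] = str(seed)
```

And even after all that, a different GPU, driver, or library version can still shift the result, which is the point.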

Also, deployment is another can of worms.

Also, even if you have the open source script, data, and labels, there's no guarantee you'll have useful documentation for any of them.

[–] bitofhope@awful.systems 3 points 1 month ago (1 child)

Yes, that just reiterates my point, doesn't it?

[–] JFranek@awful.systems 5 points 1 month ago (1 child)

It was supposed to. I'm just not that good at writing.

[–] bitofhope@awful.systems 5 points 1 month ago

Fair enough. Sorry for being rude about it.

[–] V0ldek@awful.systems 4 points 1 month ago* (last edited 1 month ago) (1 child)

People who train neural networks do not write a bunch of tokens and weights.

Reading this made me think of an analogy with generated code. This is essentially the same as distributing your program not in its source language but as the assembly listing of the final binary, and calling it open source. You can turn any defense of the AI model of "open source" into a defense of that model of distributing code. You can run my AI/code (if you have a powerful/similar enough machine), you can inspect it (it's just not going to tell you anything), you can modify it (lol), so it's open source!

Edit: The more I think about it, the more I come to the realisation that the assembly listing is actually still vastly more useful than the AI models. Like, at least a dedicated and sufficiently insane programmer could technically track down a bug in the assembly and correct it, given enough coffee.
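The same gag works in miniature with Python bytecode (a purely illustrative sketch, not anything from the thread): ship only the compiled code object, the way the weights are shipped, and the recipient can run it and disassemble it, but the source form is simply not there.

```python
import dis
import marshal
import types

# The source form: what the author wrote and can meaningfully change.
def add(a, b):
    return a + b

# Ship only the compiled artifact, like shipping only the weights.
blob = marshal.dumps(add.__code__)

# The recipient can run it without ever seeing the source...
received = types.FunctionType(marshal.loads(blob), {})
print(received(2, 3))  # 5

# ...and can "inspect" it, but only as a listing, never as source.
dis.dis(received)
```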

[–] bitofhope@awful.systems 3 points 1 month ago

It's open source, trust me, I wrote that ELF file directly with C-x M-c M-butterfly.

[–] BlueMonday1984@awful.systems 11 points 1 month ago

A pretty solid piece on how AI is closed source by nature, and a good takedown of the OSI's FOMO-fuelled dumpster fire of an Open Source AI definition.

I've also thought a bit about AI's relationship with open source. To expand on my views: I see AI as openly hostile to open source, stealing whatever it wants and damaging open-source projects when it quote-unquote "gives back", and I suspect the FOSS ecosystem will see a severe decline because of it.

With AI bros taking "publicly available" to mean "theirs to steal" (sometimes saying so openly, more often suggesting it with their actions) and more or less getting away with it for the past two years, people have been given plenty of reason to view open licenses (Creative Commons, the GPL, etcetera) as not worth the .txt files they're written in, and to see contributing as asking to have their code stolen.

The recently released Stallman Report (which you mentioned) definitely isn't helping FOSS either: all the diversity initiatives and codes of conduct in the world can't protect against a PR nightmare of the magnitude of "your movement's unofficial face becomes the Jeffrey Epstein of coding".

Baldur Bjarnason has also talked about open source's rocky financial future; I'd recommend checking it out.

[–] jaschop@awful.systems 7 points 1 month ago

Hi @tante@awful.systems 👋

Nice writeup. Been reading some of your stuff for a few years. Seems like the gravitational attraction of awful.systems got us both.

[–] weker01@sh.itjust.works 2 points 1 month ago* (last edited 1 month ago)

Now always closed source, but open-minded.