this post was submitted on 22 Dec 2023
113 points (90.6% liked)
Technology
Can't you determine how and why that choice is made?
What if you had a team of people whose only job was to understand this? After a while they would get better and better at it.
Here is a simple video that breaks down how neurons work in machine learning. It gives you an idea of how this works and why it would be so difficult for a human to reverse engineer. https://youtu.be/aircAruvnKk?si=RpX2ZVYeW6HV7dHv
They provide a simple example with a few thousand neurons, and even then we can't easily tell what the network is doing, because the neurons don't produce any traditional computer code with logic that can be followed. They are just a collection of weights and biases (a bunch of numbers) that transform the input in whatever way the training process found would arrive at the solution. For comparison, GPT-4 is estimated to contain well over a trillion parameters.
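To make that concrete, here's a minimal sketch (plain NumPy, made-up numbers, not taken from the video) of what a tiny layer looks like: each "neuron" is literally just a row of weights plus a bias, and there is no readable logic to follow in any of it.

```python
# A minimal sketch of one tiny layer: 4 inputs feeding 3 neurons.
# The sizes and values here are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

weights = rng.normal(size=(3, 4))   # learned numbers, no human-readable logic
biases = rng.normal(size=3)         # more learned numbers

def layer(x):
    # Each neuron computes a weighted sum of the input plus its bias,
    # then squashes it with a nonlinearity (here: ReLU).
    return np.maximum(0.0, weights @ x + biases)

x = np.array([0.2, -1.0, 0.5, 3.0])  # some input
print(layer(x))
# Inspecting `weights` and `biases` tells you nothing like
# "if the pixel is bright, then ..."; it's just a pile of floats.
```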
No. The training output is essentially a set of huge matrices, and using the model involves taking your input and chaining it through a long series of matrix multiplications with those matrices (how many there are and how big they are depend on the complexity of the model) to get your result. It is simply not possible to understand that, because none of the numbers has any fixed responsibility or correspondence with a specific feature.
This is probably not exactly how it works; I'm not an ML guy, just someone who watched some of those "training a model to play a computer game" videos years ago, but it should at least be a close enough analogy.
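For illustration, here's a rough NumPy sketch of that "chain of matrix multiplications" idea. The layer sizes and the ReLU nonlinearity are assumptions (real models are vastly bigger and more elaborate); the point is that the input just gets pushed through one weight matrix after another, and no individual number has a fixed responsibility.

```python
# A rough sketch of a plain feed-forward network, with made-up layer sizes.
import numpy as np

rng = np.random.default_rng(1)

sizes = [8, 16, 16, 4]  # hypothetical; real models are far larger and deeper
layers = [(rng.normal(size=(m, n)), rng.normal(size=m))
          for n, m in zip(sizes[:-1], sizes[1:])]

def run_model(x):
    # Chain the multiplications: weighted sum, add bias, apply nonlinearity,
    # then feed the result into the next layer's matrices.
    for W, b in layers:
        x = np.maximum(0.0, W @ x + b)
    return x

result = run_model(rng.normal(size=sizes[0]))
print(result)
# No single entry of any W has a meaning you can point to;
# whatever the model "knows" is smeared across all of them.
```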