The AMD graphics driver is reputedly the biggest that mainstream Linux users will encounter, approaching six million lines of code.
That does seem a bit ... excessive.
A decent chunk of that is autogenerated code
Do they know what it does?
"Register bit twiddling." Setting all the modes that all their various cards can operate in, with the associated code for sending the bit updates over the connection bus. Tedious stuff that's very prone to copy-paste errors if written by hand.
At some point you have to take AMD's word for it that these register values map to this functionality, but if the right graphics come out, it can't be too wrong.
Thanks
why?
On an Intel machine, this makes me want to compile my kernel so much
I should learn how to compile RPM kernels on COPR
Compiling has never been the hard part. The challenge is making it through the entire configuration menu system before succumbing to the urge to gouge your own eyes out with blunt sticks.
Once that's done, kick off make and take a long break; it'll be compiled by the time you get back to it.
I hear build times are getting longer with the Rust parts, though, so do it soon before you need mainframe access to get a compile within your lifetime.
The thing is, I need to configure, compile, package, sign, and then layer, because I am on Fedora Atomic (and because that is the correct way).
And I don't know many of the steps in the middle.
A GitHub runner for this would be great, like a template where people can choose what kernel they need, which then packages it.
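For the middle steps, a rough sketch of what that flow can look like. Hedging heavily here: `make binrpm-pkg` and `rpm-ostree override replace` are real commands, but the exact paths, resulting RPM names, and the module-signing / Secure Boot enrollment steps depend on your setup and aren't shown:

```shell
# Rough sketch: configure -> compile -> package -> layer on Fedora Atomic.
# Paths are illustrative; signing for Secure Boot is an extra step.

cd linux-6.*                   # unpacked kernel source tree
make olddefconfig              # start from the running kernel's config
make menuconfig                # the eye-gouging part
make -j"$(nproc)" binrpm-pkg   # compile and package as binary RPMs

# Layer the resulting kernel RPMs over the base image, then reboot:
sudo rpm-ostree override replace ~/rpmbuild/RPMS/x86_64/kernel-*.rpm
sudo systemctl reboot
```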
They should split it into two drivers: one legacy, and one for everything else.
At least two. Or build it as modules and only load the necessary one.
Right; any solution they come up with presumably needs to be more scalable than "new drivers" and "old drivers". Eventually there will be too large a set of "old drivers" and we'll end up in the same situation with a small "new drivers" driver and a large "old drivers" blob.
A different driver set for each chip architecture maybe?
Maybe; it does sound like reducing the size of the driver is potentially possible as well https://www.phoronix.com/news/AMDGPU-Headers-Repo-Idea
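On the modules point: the kernel already ships each vendor's GPU driver as a separate loadable module, so only the one matching your hardware gets loaded. The catch is that the module granularity stops there, and amdgpu itself is one giant module covering every generation. You can poke at this yourself with the standard kmod tools (output obviously depends on your machine):

```shell
# Which GPU driver modules are actually loaded right now:
lsmod | grep -E '^(amdgpu|i915|xe|nouveau)'

# Where the amdgpu module lives on disk, and how big it is:
modinfo -F filename amdgpu
du -h "$(modinfo -F filename amdgpu)"
```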
From my understanding, a lot of code in the graphics drivers is special-case handling for specific games to optimize for the way that the game uses the APIs. Is this correct?
In which case it would make sense to have the game-specific code loaded dynamically when that game is launched, since 99.99% of the game-specific code will be for games that the user never runs.
From my understanding, 99.999% of those "game ready drivers" are patches for machines that you do not use but have to download anyway to keep everything on the same version.
My understanding is that most of that all lives in mesa, and the kernel driver basically just abstracts the hardware.
How old are those machines?
Worth noting from the original article:
Fedora is working around this in their latest packages by beginning to probe SimpleDRM immediately. Fedora/Red Hat isn't the only one using Plymouth, though; it has been in use by effectively all major Linux distributions of the past decade. But in recent years the AMDGPU driver has only continued to grow much larger in supporting newer GPUs and tacking on additional features and optimizations.
Why can’t you just use a contemporary version?
Monolithic kernels and drivers are an issue.
I understand why it's easy, but on Fedora Atomic I even have all the userspace drivers for Intel, AMD, NVIDIA, and maybe more, even though I clearly just use Intel...
The problem is not caused by the monolithic kernel; it's just that the AMDGPU driver was developed in a monolithic style, i.e. the code for all generations is included in one driver. Even within a monolithic kernel, developers can write drivers in a "micro" style: Intel's GPU driver doesn't take the mono approach, as they created a new driver when they changed GPU hardware architecture.
"Monolithic kernel" is a concept about address space. If all parts of a kernel run in the same address space, it's a monolithic kernel; otherwise it's a microkernel.
This problem is about how to split the parts, not about how to place them in memory.
Thanks for the insight
This is the real issue. This is one area that Windows, despite its historical hardships, handles much better.
(Mac OS too but they killed kexts for the public anyway)
I'd love to see a more dynamic approach (that doesn't rely on DKMS) someday.
I believe the advantage is that old drivers still work as they are all in the kernel. With them sharing much code it's not even that big of a disk space issue. Edit: A more dynamic approach would be great though, especially with this size issue popping up.
In a way it's great that I'm able to replace any part of my system and it just works, without me having to make sure the old GPU driver doesn't leave some traces behind. Although, while writing this, I realize the latter shouldn't be an issue with Windows' automatic download and installation of drivers.