Despite US dominance in so many different areas of technology, we're sadly somewhat of a backwater when it comes to car headlamps. It's been this way for many decades, a result of restrictive federal vehicle regulations that get updated rarely. The latest lights to try to work their way through red tape and onto the road are active-matrix LED lamps, which can shape their beams to avoid blinding oncoming drivers.
From the 1960s, Federal Motor Vehicle Safety Standards allowed only sealed high- and low-beam headlamps, and as a result, automakers like Mercedes-Benz sold cars with less capable lighting in North America than they offered to European customers.
A decade ago, this was still the case. In 2014, Audi tried unsuccessfully to bring its new laser high-beam technology to US roads. Developed in the racing crucible that is the 24 Hours of Le Mans, the laser lights illuminate the road much farther ahead than the high beams of the time could, but the technology had to satisfy both the National Highway Traffic Safety Administration and the Food and Drug Administration, which has regulatory oversight of laser products.
The good news is that by 2019, laser high beams were finally available as an option on US roads, albeit only after their power was turned down to reduce their range.
NHTSA's opposition to advanced lighting tech is not entirely misplaced. Obviously, being able to see far down the road at night is a good thing for a driver. On the other hand, being dazzled or blinded by the bright headlights of an approaching driver is categorically not a good thing. Nor is losing your night vision to the glare of a car (it's always a pickup) behind you with too-bright lights that fill your mirrors.
This is where active-matrix LED high beams come in, which use clusters of controllable LED pixels. Think of it like a more advanced version of the "auto high beam" function found on many newer cars, which uses a car's forward-looking sensors to know when to dim the lights and when to leave the high beams on.
Here, sensor data is used much more granularly. Instead of turning off the entire high beam, the car turns off only the individual pixels that would shine on another vehicle, so the roadway stays illuminated, but a car a few hundred feet up the road won't be.
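The core selection logic can be sketched in a few lines. This is a hypothetical illustration, not any manufacturer's implementation: the grid size, coordinate convention, and bounding-box format are all assumptions made up for the example.

```python
# Hypothetical sketch of matrix-headlight masking: given grid cells covered
# by detected oncoming vehicles, switch off only those LED pixels and leave
# the rest of the beam lit. Grid dimensions and box format are invented.

def pixel_mask(rows, cols, vehicle_boxes):
    """Return a rows x cols grid of booleans: True = LED on, False = off.

    vehicle_boxes is a list of (row_min, row_max, col_min, col_max)
    cell ranges occupied by oncoming vehicles, inclusive.
    """
    mask = [[True] * cols for _ in range(rows)]
    for r0, r1, c0, c1 in vehicle_boxes:
        for r in range(max(0, r0), min(rows, r1 + 1)):
            for c in range(max(0, c0), min(cols, c1 + 1)):
                mask[r][c] = False  # dim only the cells covering the car
    return mask

# One oncoming car in a 4x8 beam grid: only its four cells go dark.
m = pixel_mask(4, 8, [(1, 2, 5, 6)])
```

Real systems track detections frame to frame and feather the edges of the dark zone, but the principle is the same: the beam is a grid, and the car subtracts a rectangle from it.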
Rather than design entirely new headlight clusters for the US, most OEMs' solution was to offer the hardware here but disable the beam-shaping function—easy to do when it's just software. But in 2022, NHTSA relented—nine years after Toyota first asked the regulator to reconsider its stance.
this post was submitted on 21 Sep 2024
Honestly, while it does add more moving parts into the equation, I wonder how realistic it is to just take the plunge and go with driving a car via display rather than directly looking through glass.
Ever since we got cars with a pane of glass up front rather than goggles, that's been kind of the standard way that motorists operated -- look through a big sheet of glass.
But that was also a system that was developed based on technology from around 1900.
There have been displays that provide heads-up augmentation, projected on the windshield. But at the end of the day, maybe instead of augmenting vision, it's time to just go with displays outright.
I mean, it's more moving parts, but you've got a lot of moving parts, computers and such, already involved in controlling your car now.
Maybe shining really bright lights in front of a car as a way to see at night when traveling at high speed is getting obsolete.
As people get older, their eyes inevitably don't adjust as quickly to darkness after being hit by a bright light. Can't do much about that short of swapping out all headlights on older cars, even if you produce a new standard. It takes decades and decades to age that out. But this provides an immediate benefit to users of such displays.
We had a thread up the other day on [email protected] talking about how people driving tall vehicles with poor visibility in front create risks for kids. You can put cameras wherever you want.
If you're looking out a window, you have some blind spots due to the roof support pillars. Those don't need to exist with a display.
Other people in the car don't need to obstruct one's view.
We've gotten increasingly compelling systems that can process data from the outside world. Here's a video clip from ENVG-B. That's a US military system that can do things like edge detection and highlighting -- I believe it aims to specifically detect and highlight humans -- see into the infrared, and prevent people from being blinded by flashes (which is a particular problem for the military, with muzzle flash and explosions and such).
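The edge-highlighting part of that is well-understood image processing. Here's a minimal sketch of the basic idea using a Sobel gradient pass; ENVG-B's actual pipeline is far more sophisticated and is not public, so treat this purely as an illustration of the concept.

```python
import numpy as np

# Illustrative only: a minimal Sobel edge-magnitude pass of the kind an
# enhanced-vision display might use to outline objects for the viewer.

def edge_magnitude(img):
    """img: 2-D grayscale array. Returns per-pixel gradient magnitude."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")  # replicate borders
    gx = np.zeros(img.shape, dtype=float)
    gy = np.zeros(img.shape, dtype=float)
    for i in range(3):
        for j in range(3):
            window = pad[i:i + img.shape[0], j:j + img.shape[1]]
            gx += kx[i, j] * window
            gy += ky[i, j] * window
    return np.hypot(gx, gy)

# A vertical step edge (dark left half, bright right half) lights up
# only along the boundary; flat regions stay at zero.
frame = np.zeros((5, 6))
frame[:, 3:] = 1.0
edges = edge_magnitude(frame)
```

A display system would threshold the magnitude and overlay it on the video feed as a bright outline.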
While there is some competition with self-driving cars (if the computer drives my car, I don't need to see to drive it), there's a lot of overlap in the problems to solve. For self-driving cars, the car has to be able to build a 3D model of the world around itself from its sensors. That's also the same data you'd need to obtain, process, and present to a human driver if you wanted to provide a display of the world around the car.
You can leverage sensor fusion, combining data from many different sensors: LIDAR, millimeter-wave radar, numerous cameras, hyperspectral imaging, polarization-sensitive sensors.
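The simplest form of fusion is easy to show. This is a toy illustration assuming independent Gaussian range estimates combined by inverse-variance weighting, so the more certain sensor dominates; the sensor names and variances here are made up.

```python
# Toy sensor fusion: combine independent range estimates (value, variance)
# by inverse-variance weighting. More certain sensors get more weight, and
# the fused variance is lower than any single sensor's.

def fuse(estimates):
    """estimates: list of (value, variance). Returns (fused_value, fused_variance)."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    return value, 1.0 / total

# Hypothetical readings: LIDAR 50.0 m (tight), radar 52.0 m, camera 55.0 m.
dist, var = fuse([(50.0, 0.1), (52.0, 0.5), (55.0, 2.0)])
```

The fused estimate lands near the LIDAR reading because its variance is smallest; real automotive stacks do this over time with Kalman-style filters, but the weighting principle is the same.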
While our eyes are pretty good, there are some environments that we run into, like dense fog or rapid transitions in brightness, that they just aren't all that great at dealing with compared to the sensors that we have.
If you have the driver driving via display, a lot of constraints on where you place the driver in the vehicle go away. You don't have the left-hand/right-hand split between markets, or even need to have the driver sitting at the front of the car. You can make a vehicle a lot shorter and still potentially provide pretty good visibility -- and I understand that wanting more visibility is part of why people buy taller vehicles.
Windows aren't great in terms of thermal insulation, especially single-pane car windows. If we didn't have to have a lot of a car's sides covered in glass, we wouldn't need to spend as much energy on climate control.
Windows -- though newer ones have improved on this -- let in a fair amount of solar energy. Be nice to not have the "greenhouse effect" with a hot car that's been parked in a parking lot in summer.
You can provide for more privacy if people can't just see into cars. Some people tint vehicles for this reason, but tint comes with visibility drawbacks.
It's not obvious that a parked car contains something valuable left in it, and you can make a car a lot more secure if someone can't just smash in a window to get in.
Laminated car windshields are pretty safe and durable compared to their early forms, but they're still a weak point in terms of safety; people have had rocks go through them and whack a driver.
One situation that I recall reading about that apparently has a nasty tendency to hit epileptics is driving down tree-shaded avenues; that can produce regular flashing at about the right frequency to trigger seizures. If you've got a computer in the middle, it can filter that out.
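One simple way a display pipeline could do that filtering is to rate-limit how fast global frame brightness is allowed to change, which flattens strobing before it reaches the screen. This is a sketch of the idea only; the brightness scale and rate limit are assumptions, not any real system's parameters.

```python
# Sketch of flicker suppression for a driving display: clamp the per-frame
# change in global brightness (0..1 scale) so rhythmic strobing -- e.g.
# sunlight through trees -- becomes a gentle ramp. max_step is invented.

def limit_flicker(brightness_series, max_step=0.05):
    """Return the series with frame-to-frame changes clamped to max_step."""
    out = [brightness_series[0]]
    for b in brightness_series[1:]:
        prev = out[-1]
        step = max(-max_step, min(max_step, b - prev))
        out.append(prev + step)
    return out

# Hard 0/1 strobing at frame rate is flattened almost entirely.
smoothed = limit_flicker([0.0, 1.0, 0.0, 1.0, 0.0])
```

A production system would filter in a perceptually meaningful space and only within the seizure-risk frequency band, but the principle of putting a temporal filter between sensor and eye is the same.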
One downside is that I'm not really happy with the present state of computer-integrated cars in terms of privacy. It's technically doable to build a system like this without privacy implications -- a computer gathering data doesn't need to mean that the data goes to anyone else -- but the track record car manufacturers have here is not good. I don't want to buy a car with sensors that can measure everything around me if what it's going to do with that data is have the manufacturer try to figure out how to make money from it.
Another is that cars tend to have longer lives than do computers, as automotive technology hasn't moved as quickly. As things stand today, you can't really upgrade the computer in a car, much less sensors. A thirty-year-old car from 1994 might be perfectly driveable in 2024, but if we built a computer into it back in 1994, it'd have long-outdated electronics. My guess is that the kind of view of the world we could provide in 2054, thirty years from now, is gonna be a lot better than the view we can provide in 2024. I don't really want to throw out a car in order to get a newer car computer and sensors. That's not a fundamental problem -- it'd be possible to make cars that have computer systems and sensors that can be replaced -- but the economics would need to make sense.
It sounds like describing Ng's tank from Snow Crash.
...I want to drive Ng's tank from Snow Crash.
Ng drove his vehicle from a VR interface.
That might be a bit of a jump past where we are today technologically, since he's got a tactile-feedback rig for VR and the car driving under voice control. But, yeah, it'd be a pretty capable vehicle.