duncesplayed

joined 1 year ago
[–] [email protected] 6 points 11 months ago* (last edited 11 months ago)

won’t be useful beyond basic word processing and browsing.

Not even that. For most basic users, web browsing is by far the most resource-intensive thing they'll ever do, and it'll only get more so. If it weren't for modern web design, most users could honestly probably be okay with 4GB or 8GB of RAM today. For a laugh, I tried using a 512MB Raspberry Pi 1B for all my work for a few days. I could do absolutely everything (mostly developing code and editing office documents) without any problems at all, except I couldn't open a single modern web page and was limited to the "retro" web. One web page used up more resources than all of my work combined. I'm guessing it won't be too many years before web design has evolved to the point where basic webpages require several GB of RAM per tab.

(I agree with your overall point, by the way. Soldering in 8GB of RAM these days is criminal just based on its effects on the environment)

[–] [email protected] 7 points 11 months ago

I used to run a TFTP server on my router that held the decryption keys. As soon as a machine got far enough in the boot sequence to get network access, it would pull the decryption keys from the router. That way a thief would have to steal the router along with the computer, and have the router running when booting up the computer. It works wirelessly, too!
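For the curious, here's a rough sketch of what the boot-side half of that setup might look like, as a hypothetical initramfs hook. All the device names, IPs, and filenames here are invented for illustration; the real details depend on your distro's initramfs tooling:

```shell
# Hypothetical initramfs hook (illustrative only -- names/IPs made up).
# Bring up the network early in boot so we can reach the router.
ip link set eth0 up
udhcpc -i eth0 -q

# Pull the key file from the router's TFTP server
# (busybox tftp is commonly available inside an initramfs).
tftp -g -r luks.key -l /tmp/luks.key 192.168.1.1

# Unlock the encrypted root with the fetched key, then wipe it from RAM.
cryptsetup open /dev/sda2 cryptroot --key-file /tmp/luks.key
rm -f /tmp/luks.key
```

Since the key only ever lives on the router and in RAM, powering the machine off (or taking it away from the router) leaves the disk locked.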

[–] [email protected] 2 points 11 months ago* (last edited 11 months ago) (1 children)

Last I checked, Signal still hasn't fixed their giant UX problem, which is that when you first install the app, it announces you to other Signal users on your contact list. This makes it completely unusable for anybody who actually needs, you know, a secure messenger (like a domestic abuse victim).

I mean, I use Signal every day and I love it. But it irks me that they're like "Oh, we're super secure. Unless you're trying to get away from your abusive husband, in which case, guess what, we just snitched on you to your abusive husband! Good luck with that!"

[–] [email protected] 57 points 11 months ago (4 children)

Holy shit. If I understand correctly, the trains were programmed to use their GPS sensors to detect if they were ever physically moved to an independent repair shop. If they detected that they were at an independent repair shop, they were programmed to lock themselves and give strange and nonsensical error codes. Typing in an unlock code at the engineer's console would allow the trains to start working normally again.

If there were a corporation-sized mirror, I don't know how NEWAG could look at itself in it.

[–] [email protected] 16 points 11 months ago (1 children)

After reading the article (and others), I think this is giving him too much credit.

He was the BBC studio controller at the time and pushed to have the matches broadcast in colour. The BBC ignored him for years, then suddenly changed their minds without explanation, and in 1967 they ordered David Attenborough to do the next Wimbledon in colour.

However, the colour of the ball wasn't changed. Balls remained white even while being broadcast in colour.

Yellow balls had actually already been used since the 19th century, but not consistently, and not at Wimbledon. After a few years of tennis matches being broadcast in colour, the ITF (without David Attenborough's involvement) conducted a study, found that fluorescent yellow balls were easiest for viewers to watch, and approved them in 1972. Wimbledon itself didn't switch to yellow balls until 1986.

[–] [email protected] 52 points 11 months ago* (last edited 11 months ago) (3 children)

I'm going to reframe the question as "Are computers good for someone tech illiterate?"

I think the answer is "yes, if you have someone that can help you".

The problem with proprietary systems like Windows or OS X is that that "someone" is a large corporation. And, in fairness, they generally do a good job of looking after tech illiterate people. They ensure that their users don't have to worry about how to do updates, or figure out what browser they should be using, or what have you.

But (and it's a big but) they don't actually care about you. Their interest in making sure you have a good experience ends at a dollar sign. If they think what's best for you is to show you ads and spy on you, that's what they'll do. And you're in a tricky position with them because you kind of have to trust them.

So with Linux you don't have a corporation looking after you. You do have a community (like this one) to some degree, but there's a limit to how much we can help you. We're not there on your computer with you (thankfully, for your privacy's sake), so to a large degree, you are kind of on your own.

But Linux actually works very well if you have a trusted friend/partner/child/sibling/whoever who can help you out now and then. With someone like that to fall back on, Linux can work very, very well for tech illiterate people. The general experience of browsing around, editing documents, editing photos, etc., works very much the same way as it does on Windows or OS X. You'll probably be able to do all of that without help.

But you might not know which software is best for editing photos. Or you might need help with a specific task (like getting a printer set up), and having someone to fall back on will give you a much better experience.

[–] [email protected] 10 points 11 months ago (2 children)

Here's the source for it. Since this is PCM, you should quote the bottom part of that page, too, for maximum lulz. 7% of Biden voters vs 4% of Trump voters. 10% registered Democrats vs 6% Republicans. 14% urban vs 3% rural.

Clearly the problem is young, urban Democrats.

[–] [email protected] 2 points 11 months ago

I think most people won't care either way.

Some people do legitimately occasionally need to poke around in GRUB before loading the kernel. Setting up certain kernel parameters or looking for something on the filesystem or something like that. For those people, booting directly into the kernel means your ability to "poke around" is now limited by how nice your motherboard's firmware is. But even for those people, they should always at least have the option of setting up a 2-stage boot.
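To make that concrete, here's roughly what "poking around" looks like at a GRUB menu (device names and kernel versions here are made up for illustration):

```
# Press 'e' on the highlighted menu entry to edit it, then append
# parameters to the line that loads the kernel, e.g. to boot to rescue mode:
linux /vmlinuz-6.1.0 root=/dev/mapper/vg0-root ro systemd.unit=rescue.target

# Or press 'c' to drop to the GRUB shell and look around the filesystem:
ls (hd0,gpt2)/
cat (hd0,gpt2)/etc/fstab
```

If you boot the kernel directly via EFI, whether you get anything comparable depends entirely on your firmware.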

[–] [email protected] 4 points 11 months ago

Whenever I start anything new, I just go AGPL without even thinking about it. If I later change my mind and think GPL or LGPL or BSD or something would be more appropriate, I can always change it (though I've never needed to), but you can't really go the other way. If you start permissive, that's just out there, forever.

[–] [email protected] 30 points 11 months ago (1 children)

The principled "old" way of adding fancy features to your filesystem was through block-level technologies, like LVM and LUKS. Both of those are filesystem-agnostic, meaning you can use them with any filesystem. They just act as block devices, and you can put any filesystem on top of them.

You want to be able to dynamically grow and shrink partitions without moving them around? LVM has you covered! You want to do RAID? mdadm has you covered! You want to do encryption? LUKS has you covered! You want snapshotting? Uh, well...technically LVM can do that...it's kind of awkward to manage, though.

Anyway, the point is, all of them can be mixed and matched in any configuration you want. You want a RAID6 where one device is encrypted split up into an ext4 and two XFS partitions where one of the XFS partitions is in RAID10 with another drive for some stupid reason? Do it up, man. Nothing stopping you.
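A simpler stack than that monstrosity, but to show the layering idea, something like this (device names invented, commands illustrative, don't run as-is):

```shell
# Illustrative only: RAID -> encryption -> LVM -> filesystem, each layer
# just presenting a block device to the layer above it.

# RAID6 across four disks with mdadm:
mdadm --create /dev/md0 --level=6 --raid-devices=4 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Encrypt the whole array with LUKS:
cryptsetup luksFormat /dev/md0
cryptsetup open /dev/md0 cryptarray

# Carve the encrypted device up with LVM and put a filesystem on top:
pvcreate /dev/mapper/cryptarray
vgcreate vg0 /dev/mapper/cryptarray
lvcreate -L 100G -n docs vg0
mkfs.ext4 /dev/vg0/docs
```

Each layer neither knows nor cares what's above or below it, which is exactly why you can mix and match them however you like.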

For some reason (I'm actually not sure of the reason), this stagnated. Red Hat's Stratis project has tried to continue pushing in this direction, kind of, but in general, I guess developers just didn't find this kind of work that sexy. I mentioned LVM can do snapshotting, but only awkwardly; nobody ever made it as slick and easy as the cool new COW filesystems do.

So, ZFS was an absolute bombshell when it landed in the mid 2000s. It did everything LVM did, but way way way better. It did everything mdadm did, but way way way better. It did everything XFS did, but way way way better. Okay, it didn't do the LUKS stuff (yet), but that was promised to be coming. It was copy-on-write with B-trees everywhere. It did almost everything that every block-level tool and filesystem before it had ever done, but better. It was just...the best. And it shit all over that block-layer stuff.

But...well...it needed a lot of RAM, and it was licensed in a way such that Linux couldn't get it right away, and when it did get ZFS support, it wasn't like native in-the-kernel kind of stuff that people were used to.

But it was so good that it inspired other people to copy it. They looked at ZFS and said "hey why don't we throw away all this block-level layered stuff? Why don't we just do every possible thing in one filesystem?".

And so BtrFS was born. (I don't know why it's pronounced "butter" either).

And now we have bcachefs, too.

What's the difference between them all? Honestly, mostly licensing, developer energy, and maturity. ZFS has been around for ages and is the most mature. bcachefs is brand spanking new. BtrFS is in the middle. Technically speaking, all of them either do each other's features or have each other's features on their TODO list. LUKS in particular is still very commonly used because native encryption is uneven across them: ZFS and bcachefs have it now, but BtrFS still doesn't.

[–] [email protected] 2 points 11 months ago (1 children)

I'm curious about auto-regressive token prediction vs planning. The article just very briefly mentions "planning" and then never explains what it is. As someone who's not in this area, what's the definition/mechanism of "planning" here?
