duncesplayed

joined 1 year ago
[–] [email protected] 1 points 5 months ago* (last edited 5 months ago) (1 children)

LocalSend. It’s exactly like Apple Airdrop

This may be super-nitpicky (and I love LocalSend and use it a lot), but there is one difference between LocalSend and AirDrop. LocalSend requires network connectivity (and requires the devices to be on the same network), whereas AirDrop can work without any network connection (using Bluetooth).

[–] [email protected] 9 points 5 months ago (2 children)

I recently discovered that he believes it's theft if you watch one of his videos with an adblocker. Just out of spite, sometimes I put one of his videos on in the background (muted) with an adblocker.

[–] [email protected] 2 points 6 months ago

Something something broken arms

Edit: Wow, thank you for the gold, kind stranger!

[–] [email protected] 2 points 6 months ago

To be honest I'm more concerned by language-humor. Like not even saying what kind of humour, just any type of humour at all. Jokes are for adults only!

[–] [email protected] 7 points 6 months ago

"But you already have a queen on the board"

"Have you heard of a sex act called 'the ladder mate'? You're the bottom bitch"

[–] [email protected] 0 points 6 months ago (1 children)

WWII sent a very clear message. You can annex Austria. You can invade Czechoslovakia. You can take over Lithuania. But you don't fuck with Poland

Well, I mean, you can fuck with Poland a little bit. You just can't take over, like, too much of Poland.

[–] [email protected] 5 points 6 months ago

Some extra info about Sierra's game engines....

AGI was indeed first used in KQ1, though earlier Sierra adventure games (even going back to Mystery House in 1980) used something extremely similar. AGI was just formalizing what they'd done before and setting it as a common platform for all future games.

In those days it was, of course, not possible to write an entire adventure game in machine code, because there wasn't enough memory to hold more than a handful of screens. The use of bytecode was as much a compression scheme as anything else, and AGI was essentially a bytecode interpreter. Vector graphics primitives (e.g., draw line, flood fill) could be encoded in just a few bytes, far more compactly than equivalent machine code.
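To make the compression angle concrete, here's a minimal, hypothetical sketch of how an interpreter like that works. The opcodes and byte layout are invented for illustration (the real AGI picture-resource format differs), but the idea is the same: a handful of bytes stand in for a whole drawing routine, and the interpreter just replays them.

```c
/* Illustrative sketch only -- not the real AGI picture format. */
#include <stdint.h>
#include <stdio.h>

enum { OP_LINE = 0x01, OP_FILL = 0x02, OP_END = 0xFF };  /* invented opcodes */

static void draw_line(int x1, int y1, int x2, int y2) { printf("line %d,%d -> %d,%d\n", x1, y1, x2, y2); }
static void flood_fill(int x, int y)                  { printf("fill at %d,%d\n", x, y); }

static void run_picture(const uint8_t *code) {
    for (;;) {
        switch (*code++) {
        case OP_LINE: draw_line(code[0], code[1], code[2], code[3]); code += 4; break;
        case OP_FILL: flood_fill(code[0], code[1]);                  code += 2; break;
        case OP_END:
        default:      return;
        }
    }
}

int main(void) {
    /* 9 bytes describe a line plus a fill -- far smaller than the
     * machine code that actually rasterizes them. */
    const uint8_t picture[] = { OP_LINE, 10, 20, 80, 20, OP_FILL, 40, 30, OP_END };
    run_picture(picture);
    return 0;
}
```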

Ken Williams made a splash with early Sierra games because he had an extremely simple insight that most others at the time didn't seem to have: for graphics operations, allow points to be on even-numbered x coordinates only. Most platforms had a horizontal resolution of 320, too large to fit in 1 byte. Ken Williams had his early game engines divide every x coordinate by 2 so that it fit into a single byte (effectively giving only 160 horizontal pixels). A silly trick, but it delivered big memory savings and let him pack more graphics into RAM than many other people could at the time.
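A rough sketch of what that trick amounts to (the function names here are made up, but the arithmetic is the whole idea):

```c
/* Only ever store x/2, so a 0..319 coordinate fits in one byte (0..159),
 * at the cost of losing the odd columns. */
#include <stdint.h>
#include <stdio.h>

static uint8_t pack_x(int x)       { return (uint8_t)(x / 2); } /* 0..319 -> 0..159 */
static int     unpack_x(uint8_t b) { return b * 2; }            /* back to an (even) screen column */

int main(void) {
    printf("%d -> %d -> %d\n", 201, pack_x(201), unpack_x(pack_x(201))); /* 201 -> 100 -> 200 */
    return 0;
}
```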

After AGI (KQ3 was the last King's Quest to use AGI), Sierra switched over to their new game engine/bytecode interpreter: SCI. SCI was rolled out in two stages, though.

SCI0 (e.g., KQ4) was 16 colours and still revolved around the text parser. SCI1 (e.g., KQ5) was 256 colours and was point-and-click. (SCI2 and later were full multimedia)

For the game player, the major differences you'll notice between AGI and SCI0 (both 16 colours, both text-based) are that SCI0 renders using dithering, gets full horizontal precision (x coordinates stored in 2 bytes), supports multiple fonts, and supports real sound devices (MT-32, AdLib). For the programmer, though, AGI and SCI0 were radically different: SCI0's scripting language was an object-oriented, vaguely Scheme-inspired language, nothing like AGI's.

[–] [email protected] 0 points 6 months ago

Yeah, during the reddit exodus, people were recommending overwriting your comments with garbage before deleting them. This (probably) forces them to restore your comments from backup. But realistically they were always going to harvest the comments stored in backup anyway, so I don't think it caused them any more work.

If anything, this probably just makes reddit's/SO's partnership more valuable, because your comments are now exclusive to reddit's/SO's backend and other companies can't scrape them.

[–] [email protected] 4 points 6 months ago (2 children)

According to here, Vermont and Utah do not have any titled players. At least Oregon has an FM.

[–] [email protected] 7 points 6 months ago* (last edited 6 months ago) (1 children)

Why the quotes?

If you ever see quotation marks in a headline, it simply means they're attributing the word/phrase to a particular source. In this case, they're saying that the word "security" was used verbatim in the intranet document. Scare quotes are never used in journalism, so they're not implying anything by putting the word in quotation marks. They're simply saying that they're not paraphrasing.

[–] [email protected] 4 points 6 months ago

The article mentions they'll continue making the eZ80. If you're in the middle of designing a PCB around the Z80, you'll just have to rework the pinout, I guess.

[–] [email protected] 7 points 7 months ago* (last edited 7 months ago) (2 children)

Heads up for anyone (like me) who isn't already familiar with SimpleX: unfortunately its name makes it impossible to search for unless you already know what it is. I was only able to track it down after a couple of frustrating minutes, once I added "linux" to the search on a lark.

Anyway, it's a chat protocol.

31
submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]
 

I'm a university professor, and I often found myself getting stressed/anxious/overwhelmed by email at certain times of year (especially end-of-semester/final grades). The more emails piled in, the more I would avoid them, and it would snowball when people sent follow-ups like "I sent you an email last week and haven't got a response yet...", turning into a nasty feedback loop.

My solution was to create 10 new email folders, called "1 day", "2 days", "3 days", "4 days", "5 days", "6 days", "7 days", "done", "never" and "TIL", which I use during stressful times of the year. Within minutes of an email coming into my inbox, I move it into one of those folders. "never" is for things that don't require any attention or action by me (mostly emails from the department about upcoming events that don't interest me). "TIL" is for things that don't require an action or have a deadline, but I know I'll be referring to a lot. Those are things like contact information, room assignments, plans for things, policy updates.

The "x days" folders are for self-imposed deadlines. If I want to ensure I respond to an email within 2 days, I put it in the "2 days" folder, for example.

And the "done" folder is for when I have completed dealing with an email. This even includes emails where the matter isn't resolved, but I've replied to it, so it's in the other person's court, so to speak. When they reply, it'll pop out of "done" back into the main inbox for further categorizing, so it's no problem.

So during stressful, email-heavy times of year, I wake up to a small number of emails in my inbox. To avoid getting stressed, I don't even read them fully. I read just enough of them that I can decide if I'll respond to them (later) or not, categorize everything, and my inbox is then perfectly clean.

Then I turn my attention to the "1 day" box, which probably only has about 3 or 4 emails in it. Not so overwhelming to only look at those, and once I get started, I find I can get through them pretty quickly.

The thing I've noticed is that once I get over the initial dread of looking at my emails (which used to be caused by looking at a giant dozens-long list of them), going through them is pretty quick and smooth. The feeling of cleaning out my "1 day" inbox is a bit intoxicating/addictive, so then I'll want to peek into my "2 days" box to get a little ahead of schedule and so on. (And if I don't want to peek ahead that day, hey, no big deal)

Once I'm done with my emails, I readjust them (e.g., move all the "2 days" into "1 day", then all the "3 days" into "2 days", and so on) and completely forget about them, guilt-free, for the rest of the day.

Since implementing this system a year ago, I have never had an email languish for more than a couple weeks, and I don't get anxiety attacks from checking email any more.

 

Thomas Gleixner of Linutronix (now owned by Intel) has posted 58 patches for review against the Linux kernel, but they're only the beginning: most of the patches are just first steps toward more major renovations, what he calls "decrapification". He says:

While working on a sane topology evaluation mechanism, which addresses the short-comings of the existing tragedy held together with duct-tape and hay-wire, I ran into the issue that quite some of this tragedy is deeply embedded in the APIC code and uses an impenetrable maze of callbacks which might or might not be correct at the point where the CPUs are registered via MPPARSE or ACPI/MADT.

So I stopped working on the topology stuff and decided to do an overhaul of the APIC code first. Cleaning up old gunk which dates back to the early SMP days, making the CPU registration halfways understandable and then going through all APIC callbacks to figure out what they actually do and whether they are required at all. There is also quite some overhead through the indirect calls and some of them are actually even pointlessly indirected twice. At some point Peter yelled static_call() at me and that's what I finally ended up implementing.
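For anyone who hasn't run into static_call() before, here's a rough userspace sketch of the pattern he's describing. This is not the kernel's actual static_call API, and the struct and function names are invented; it just shows the "ops struct full of function pointers" shape that forces an indirect call on every operation, versus the direct call that static_call() effectively patches the call site into once the target is known.

```c
/* Userspace illustration only -- not kernel code. */
#include <stdio.h>

struct apic_ops {                          /* hypothetical stand-in for struct apic */
    void (*send_ipi)(int cpu);
};

static void send_ipi_flat(int cpu) { printf("IPI to CPU %d (flat mode)\n", cpu); }

static struct apic_ops apic = { .send_ipi = send_ipi_flat };

int main(void) {
    apic.send_ipi(3);   /* indirect call: target looked up through a pointer every time */
    send_ipi_flat(3);   /* the direct call that static_call() effectively turns it into */
    return 0;
}
```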

He also, at one point, (half-heartedly) argues for removing 32-bit x86 support entirely, on the grounds that it would simplify the APIC code and reduce the chance of introducing bugs in the future:

Talking about those museums pieces and the related historic maze, I really have to bring up the question again, whether we should finally kill support for the museum CPUs and move on.

Ideally we remove 32bit support alltogether. I know the answer... :(

But what I really want to do is to make x86 SMP only. The amount of #ifdeffery and hacks to keep the UP support alive is amazing. And we do this just for the sake that it runs on some 25+ years old hardware for absolutely zero value. It'd be not the first architecture to go SMP=y.

Yes, we "support" Alpha, PARISC, Itanic and other oddballs too, but that's completely different. They are not getting new hardware every other day and the main impact on the kernel as a whole is mostly static. They are sometimes in the way of generalizing things in the core code. Other than that their architecture code is self contained and they can tinker on it as they see fit or let it slowly bitrot like Itanic.

But x86 is (still) alive and being extended and expanded. That means that any refactoring of common infrastructure has to take the broken hardware museum into account. It's doable, but it's not pretty and of really questionable value. I wouldn't mind if there were a bunch of museum attendants actively working on it with taste, but that's obviously wishful thinking. We are even short of people with taste who work on contemporary hardware support...

While I cursed myself at some point during this work for having merged i386/x86_64 back then, I still think that it was the correct decision at that point in time and saved us a lot of trouble. It admittedly added some trouble which we would not have now, but it avoided the insanity of having to maintain two trees with different bugs and "fixes" for the very same problems. TBH quite some of the horrors which I just removed came out of the x86/64 side. The oddballs of i386 early SMP support are a horror on their own of course.

As we made that decision more than 15 years [!] ago, it's about time to make new decisions.

Linus responded to one of the patches, saying "I'm cheering your patch series", but has, no doubt diplomatically, not acknowledged the plea to remove 32-bit support.

 

Hey all technology people!

Not my community, but I thought I'd advertise someone else's new lemmy community to see if anyone else is interested.

Head over to [email protected] for BBSes and retrocomputing.

 

It feels like a new privacy threat has emerged in the past few years, and this year especially. I kind of think of the privacy threats of the past few decades as happening in waves:

  1. First we were concerned about governments spying on us. The way we fought back (and continue to fight back) was through encrypted and secure protocols.
  2. Then we were concerned about corporations (Big Tech) taking our data and selling it to advertisers to target us with ads, or otherwise manipulate us. This is still a hard battle being fought, but we're fighting it mostly by avoiding Big Tech ("De-Googling", switching from social media to communities, etc.).
  3. Now we're in a new wave. Big Tech is now building massive GPTs (ChatGPT, Google Bard, etc.) and it's all trained on our data. Our reddit posts and Stack Overflow posts and maybe even our Mastodon or Lemmy posts! Unlike with #2, avoiding Big Tech doesn't help, since they can access our posts no matter where we post them.

So for that third one...what do we do? Anything that's online is fair game to be used to train the new crop of GPTs. Is this a battle that you personally care a lot about, or are you okay with GPTs being trained on stuff you've provided? If you do care, do you think there's any reasonable way we can fight back? Can we poison their training data somehow?
