Careful not to conflate things like hash trees with blockchains. The former already get used for things like certificate transparency logs right now, because they're a sensible technology. Blockchains could do exactly the same thing (they're built on the same underlying principle), only with much more expense and waste, so there's basically no point.
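For the curious, the shared principle is essentially the Merkle tree: hash the entries, then hash pairs of hashes together until you're left with a single root, so tampering with any entry changes the root. A minimal sketch of the idea in C# (not how any particular CT log or chain actually lays its tree out):

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Security.Cryptography;

    static class MerkleSketch
    {
        // Combine two child hashes into a parent hash.
        static byte[] Combine(byte[] left, byte[] right) =>
            SHA256.HashData(left.Concat(right).ToArray());

        // Hash the leaves, then repeatedly pair hashes up until one root remains.
        // Changing any leaf changes the root, which is the tamper-evidence that
        // both CT logs and blockchains rely on. Assumes at least one leaf.
        public static byte[] Root(IEnumerable<byte[]> leaves)
        {
            var level = leaves.Select(leaf => SHA256.HashData(leaf)).ToList();
            while (level.Count > 1)
            {
                var next = new List<byte[]>();
                for (int i = 0; i < level.Count; i += 2)
                    next.Add(Combine(level[i], i + 1 < level.Count ? level[i + 1] : level[i]));
                level = next;
            }
            return level[0];
        }
    }

The expense in a blockchain comes from everything layered on top of that root (distributed consensus, proof-of-work and so on), which a CT log gets away without.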
He doesn't really play with multiple copies of one person interacting, though, from recollection. The Stone Canal touches on it, but Accelerando thinks a lot more about the interesting possibilities of what Stross calls "Multiplicity", where folk can freely fork many instances of themselves and potentially join the mind-states up again later, etc. Revelation Space cheated its way around thinking about the issue by having alpha-levels be copy-protected. Altered Carbon has it be a rare and brief thing for anyone to be running in more than one place at once. I can see why they did this, but Stross' stuff is more interesting because he didn't shy away from it. I feel like this should be right up Peter Watts' alley, but I don't think he's written anything on it (yet). Uploads not plausible enough for him, I guess.
For other works that you may or may not be familiar with... Lena (or MMAcevedo, which seems like a better title) is a nice short online work that does a better job. Soma is a computer game (in the "walking simulator" style) that also has some great moments, though the protagonist is annoyingly oblivious.
You may be unsurprised to learn that Stross did, in Accelerando. Annoyingly, I can't find my copy, but there's much forking and joining of mind-states for various purposes, and one character is held liable for the actions of a mind-copy they'd never even met, on the grounds that the two were deemed to be the same person.
Banks touches on it briefly in Feersum Endjinn and The Hydrogen Sonata, but not to the same extent.
I'm sure you can pitch using AI to design fusion reactors to these folk. Then all you need to do is to use the avalanche of VC capital to fund engineers and physicists who will be providing the "training data"...
And the exact details are simultaneously trivial yet too dangerous to share with the world, but trust them, it's bad.
I like that this has the same shape as the classic bullshido lines about joining the dojo to learn the dangerous forbidden technique.
I asked ChatGPT how to do the five-point-palm heart-exploding strike, but for obvious ethical reasons I won't be repeating that information or the necessary prompt engineering to get it.
So you can quick-load your save state from the beginning of the interview and have another go at defeating the boss now that you know their movement pattern?
This reads to me more like assuming all terrorists are fundamentally incapable of anything remotely intelligent.
The first paper you linked there lists 9 deaths and 806 injuries across 50 years. Conversely, you can look at a single example like the Manchester Arena bombing in 2017 and see more deaths and more injuries from one event, using simple techniques for which materials and instructions are readily available. It isn't unreasonable to look at the lack of success of amateur biological and chemical attacks and assume that plausible future attackers will be intelligent enough to simply take the tried and tested approach.
On the other hand, there might be some mileage in hyping up the threat of DIY countertop plagues, in the hope that would-be terrorists are as credulous as so many politicians and media figures, and will take the pointlessly inconvenient and inefficient option, which will likely fail and make life a little safer for the rest of us.
I spend an inordinate amount of time at my C# day job adding documentation comments about exclusive access and lifetimes and ownership… things which are clearly important, but which dotnet provides little or no useful support for, even though it has a perfectly good garbage collector. The dotnet devs were well aware that garbage collection has its limits, especially when interacting with resources managed outside the runtime, and so they added language and library features like IDisposable, finalisers, GCHandle, SafeHandle and so on to fix some of the things the GC won't be doing for you.
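For anyone who hasn't run into those: the usual shape is a SafeHandle wrapping the native resource, with IDisposable for deterministic cleanup and the finaliser as a backstop. A minimal sketch, where the "widget" library and its NativeOpen/NativeClose functions are invented for illustration:

    using System;
    using System.Runtime.InteropServices;
    using Microsoft.Win32.SafeHandles;

    // Hypothetical native library; "widget", NativeOpen and NativeClose are made up.
    static class NativeMethods
    {
        [DllImport("widget")] public static extern WidgetHandle NativeOpen(string name);
        [DllImport("widget")] public static extern bool NativeClose(IntPtr handle);
    }

    // SafeHandle gives the resource a critical finaliser, so it is released
    // eventually even if nobody calls Dispose (just later and less predictably).
    sealed class WidgetHandle : SafeHandleZeroOrMinusOneIsInvalid
    {
        public WidgetHandle() : base(ownsHandle: true) { }
        protected override bool ReleaseHandle() => NativeMethods.NativeClose(handle);
    }

    // IDisposable gives deterministic cleanup; a using block scopes the lifetime,
    // but nothing stops another component keeping a reference past that scope,
    // which is where the documentation comments come in.
    sealed class Widget : IDisposable
    {
        private readonly WidgetHandle _handle = NativeMethods.NativeOpen("example");
        public void Dispose() => _handle.Dispose();
    }

    // Usage: using var w = new Widget();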
I’d happily use a garbage collected language with borrow checking.
If you don’t have a perf requirement like “all these things need to be in contiguous memory” then you probably don’t need a generational index anyway… it is effectively a weak reference, after all. ECS stores are optimised for repeatedly iterating over all the things, and games might have complex notions of “reachability”, but most things aren’t like that. There does seem to be a lot of “I don’t like using Rc RefCell” in object arena design that isn’t always justifiable, though nested generics don’t make for the most readable code in the world.
You can always use something like generational indices; they pop up a lot in ECS systems. A suitable container with an opaque index type prevents creation of invalid references and lets you check reference validity at runtime, while the generation counter stops a stale index from silently aliasing a reused slot. The compiler can't help with lifetime tracking, but that's a problem with any shared reference type pointing to a resource whose lifetime is only known at runtime, e.g. Arc.
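A minimal sketch of the pattern, in C# to match the example above (names invented; real implementations such as Rust's slotmap or generational-arena crates are more careful than this):

    using System;
    using System.Collections.Generic;

    // Opaque handle: the internal constructor means callers can't forge one, and a
    // stale handle is detected because its generation no longer matches the slot's.
    public readonly struct Handle
    {
        internal Handle(int index, int generation) { Index = index; Generation = generation; }
        internal int Index { get; }
        internal int Generation { get; }
    }

    public sealed class GenerationalArena<T>
    {
        private readonly List<(int Generation, bool Alive, T Value)> _slots = new();
        private readonly Stack<int> _free = new();

        public Handle Insert(T value)
        {
            if (_free.TryPop(out var i))
            {
                var gen = _slots[i].Generation + 1;      // bump the generation on slot reuse
                _slots[i] = (gen, true, value);
                return new Handle(i, gen);
            }
            _slots.Add((0, true, value));
            return new Handle(_slots.Count - 1, 0);
        }

        // Like upgrading a weak reference: whether the target still exists is a
        // question you can only answer at runtime.
        public bool TryGet(Handle h, out T value)
        {
            if (h.Index < _slots.Count)
            {
                var (gen, alive, v) = _slots[h.Index];
                if (alive && gen == h.Generation) { value = v; return true; }
            }
            value = default!;
            return false;
        }

        public void Remove(Handle h)
        {
            if (!TryGet(h, out _)) return;               // already removed, or a stale handle
            _slots[h.Index] = (_slots[h.Index].Generation, false, default!);
            _free.Push(h.Index);
        }
    }

The TryGet check is exactly the "weak reference" behaviour mentioned above: validity is a runtime question, which no compiler, borrow-checking or otherwise, can answer for you.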
Obviously, your genes are terrible, low-quality things that would ruin any group which had them. My genes are superior quality, and if everyone shared them they'd all be irresistibly sexy and overpoweringly rational, just like me.