Wojciakowski took the critiques on board. “Wow, tough crowd … I’ve learned today that you are sensitive to ensuring human readability.”
Christ, what an asshole.
For a client I recently reviewed a redlined contract where the counterparty used an "AI-powered contract platform." It had inserted into the contract a provision entirely contrary to their own interests.
So I left it in there.
Please, go ahead, use AI lawyers. It's better for my clients.
Adam Christopher comments on a story in Publishers Weekly.
Says the CEO of HarperCollins on AI:
"One idea is a “talking book,” where a book sits atop a large language model, allowing readers to converse with an AI facsimile of its author."
Please, just make it stop, somebody.
Robert Evans adds,
there's a pretty good short story idea in some publisher offering an AI facsimile of Harlan Ellison that then tortures its readers to death
Kevin Kruse observes,
I guess this means that HarperCollins is getting out of the business of publishing actual books by actual people, because no one worth a damn is ever going to sign a contract to publish with an outfit with this much fucking contempt for its authors.
There's a whole lot of assuming-the-conclusion in advocacy for many-worlds interpretations — sometimes from philosophers, and all the time from Yuddites online. If you make a whole bunch of tacit assumptions, starting with those about how mathematics relates to physical reality, you end up in MWI country. And if you make sure your assumptions stay tacit, you can act like an MWI is the only answer, and everyone else is being ~~un-mutual~~ irrational.
(I use the plural interpretations here because there's not just one flavor of MWIce cream. The people who take it seriously have been arguing amongst one another about how to make it work for half a century now. What does it mean for one event to be more probable than another if all events always happen? When is one "world" distinct from another? The arguments iterate like the construction of a fractal curve.)
"Ah," said Arthur, "this is obviously some strange usage of the word scientist that I wasn't previously aware of."
Resolved: that people still active on Twitter are presumed morally bankrupt until proven otherwise.
The peer reviewers didn't say anything about it because they never saw it: It's an unilluminating comparison thrown into the press release but not included in the actual paper.
"Quantum computation happens in parallel worlds simultaneously" is a lazy take trotted out by people who want to believe in parallel worlds. It is a bad mental image, because it gives the misleading impression that a quantum computer could speed up anything. But all the indications from the actual math are that quantum computers would be better at some tasks than at others. (If you want to use the names that CS people have invented for complexity classes, this imagery would lead you to think that quantum computers could whack any problem in EXPSPACE. But the actual complexity class for "problems efficiently solvable on a quantum computer", BQP, is known to be contained in PSPACE, which is strictly smaller than EXPSPACE.) It also completely obscures the very important point that some tasks look like they'd need a quantum computer — the program is written in quantum circuit language and all that — but a classical computer can actually do the job efficiently. Accepting the goofy pop-science/science-fiction imagery as truth would mean you'd never imagine the Gottesman–Knill theorem could be true.
To quote a paper by Andy Steane, one of the early contributors to quantum error correction:
The answer to the question ‘where does a quantum computer manage to perform its amazing computations?’ is, we conclude, ‘in the region of spacetime occupied by the quantum computer’.
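To make the Gottesman–Knill point concrete, here is a minimal, illustrative sketch (not from any of the posts quoted above; the class name and gate set are my own choices for the example): a stabilizer-tableau simulator that tracks the stabilizer generators of a state under the Clifford gates H, S, and CNOT. The circuit at the bottom builds a three-qubit GHZ state, about as "quantum-looking" as a small circuit gets, yet every gate update is a cheap bit-twiddle on a classical machine.

```python
# Minimal stabilizer-tableau sketch (illustrative only): track the n stabilizer
# generators of a state under Clifford gates, per the Gottesman-Knill theorem.
# Each generator is stored as x-bits, z-bits, and a sign, so each gate costs
# O(n) classical work -- polynomial overall, no "parallel worlds" required.

class StabilizerState:
    def __init__(self, n):
        self.n = n
        # Start in |0...0>: the generators are Z_1, ..., Z_n.
        self.x = [[False] * n for _ in range(n)]                 # x-part of each generator
        self.z = [[i == j for j in range(n)] for i in range(n)]  # z-part of each generator
        self.sign = [False] * n                                  # True means a -1 phase

    def h(self, q):
        # Hadamard on qubit q: X <-> Z, Y -> -Y.
        for g in range(self.n):
            if self.x[g][q] and self.z[g][q]:
                self.sign[g] ^= True
            self.x[g][q], self.z[g][q] = self.z[g][q], self.x[g][q]

    def s(self, q):
        # Phase gate on qubit q: X -> Y, Y -> -X, Z -> Z.
        for g in range(self.n):
            if self.x[g][q] and self.z[g][q]:
                self.sign[g] ^= True
            self.z[g][q] ^= self.x[g][q]

    def cnot(self, c, t):
        # CNOT with control c, target t: X_c -> X_c X_t, Z_t -> Z_c Z_t.
        for g in range(self.n):
            if self.x[g][c] and self.z[g][t] and (self.x[g][t] == self.z[g][c]):
                self.sign[g] ^= True
            self.x[g][t] ^= self.x[g][c]
            self.z[g][c] ^= self.z[g][t]

    def generators(self):
        # Render the stabilizer generators as signed Pauli strings, e.g. "+XXX".
        out = []
        for g in range(self.n):
            paulis = "".join(
                "Y" if self.x[g][q] and self.z[g][q]
                else "X" if self.x[g][q]
                else "Z" if self.z[g][q]
                else "I"
                for q in range(self.n)
            )
            out.append(("-" if self.sign[g] else "+") + paulis)
        return out


if __name__ == "__main__":
    # A 3-qubit GHZ circuit: entangling, "looks quantum", classically easy.
    state = StabilizerState(3)
    state.h(0)
    state.cnot(0, 1)
    state.cnot(1, 2)
    print(state.generators())  # ['+XXX', '+ZZI', '+IZZ']
```

The update rules are the standard tableau conjugation rules used in Aaronson and Gottesman's CHP simulator; measurement is left out to keep the sketch short, but it, too, can be handled in polynomial time.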
Petition to replace "motte and bailey", per the Batman clause, with "lying like a dipshit".