saint

Incantations (josvisser.substack.com)
 

Incantations


Highlights

The problem with incantations is that you don’t understand in what exact circumstances they work. Change the circumstances, and your incantations might work, might not work anymore, might do something else, or maybe worse, might do lots of damage. It is not safe to rely on incantations, you need to move to understanding.

 

We can best view the method of science as the use of our sophisticated methodological toolbox


Highlights

Scientific, medical, and technological knowledge has transformed our world, but we still poorly understand the nature of scientific methodology.

Scientific methodology has not been systematically analyzed using large-scale data and scientific methods themselves, as it is viewed as not easily amenable to scientific study.

This study reveals that 25% of all discoveries since 1900 did not apply the common scientific method (all three features)—with 6% of discoveries using no observation, 23% using no experimentation, and 17% not testing a hypothesis.

Empirical evidence thus challenges the common view of the scientific method.

This provides a new perspective to the scientific method—embedded in our sophisticated methods and instruments—and suggests that we need to reform and extend the way we view the scientific method and discovery process.

In fact, hundreds of major scientific discoveries did not use “the scientific method”, as defined in science dictionaries as the combined process of “the collection of data through observation and experiment, and the formulation and testing of hypotheses” (1). In other words, it is “The process of observing, asking questions, and seeking answers through tests and experiments” (2, cf. 3).

In general, this universal method is commonly viewed as a unifying method of science and can be traced back at least to Francis Bacon's theory of scientific methodology in 1620, which popularized the concept

Science thus does not always fit the textbook definition.

Comparison across fields provides evidence that the common scientific method was not applied in making about half of all Nobel Prize discoveries in astronomy, economics and social sciences, and a quarter of such discoveries in physics, as highlighted in Fig. 2b. Some discoveries are thus non-experimental and more theoretical in nature, while others are made in an exploratory way, without explicitly formulating and testing a preestablished hypothesis.

We find that one general feature of scientific methodology is applied in making science's major discoveries: the use of sophisticated methods or instruments. These are defined here as scientific methods and instruments that extend our cognitive and sensory abilities—such as statistical methods, lasers, and chromatography methods. They are external resources (material artifacts) that can be shared and used by others—whereas observing, hypothesizing, and experimenting are, in contrast, largely internal (cognitive) abilities that are not material (Fig. 2).

Just as science has evolved, so should the classic scientific method—which is construed in such general terms that it would be better described as a basic method of reasoning used for human activities (non-scientific and scientific).

An experimental research design was not carried out when Einstein developed the law of the photoelectric effect in 1905 or when Franklin, Crick, and Watson discovered the double helix structure of DNA in 1953 using observational images developed by Franklin.

Direct observation was not made when for example Penrose developed the mathematical proof for black holes in 1965 or when Prigogine developed the theory of dissipative structures in thermodynamics in 1969. A hypothesis was not directly tested when Jerne developed the natural-selection theory of antibody formation in 1955 or when Peebles developed the theoretical framework of physical cosmology in 1965.

Sophisticated methods make research more accurate and reliable and enable us to evaluate the quality of research.

Applying observation and a complex method or instrument, together, is decisive in producing nearly all major discoveries at 94%, illustrating the central importance of empirical sciences in driving discovery and science.

 

How much are your 9's worth?


Highlights

All nines are not created equal. Most of the time I hear an extraordinarily high availability claim (anything above 99.9%) I immediately start thinking about how that number is calculated and wondering how realistic it is.

Human beings are funny, though. It turns out we respond pretty well to simplicity and order.

Having a single number to measure service health is a great way for humans to look at a table of historical availability and understand if service availability is getting better or worse. It’s also the best way to create accountability and measure behavior over time…

… as long as your measurement is reasonably accurate and not a vanity metric.

Cheat #1 - Measure the narrowest path possible.

This is the easiest way to cheat a 9’s metric. Many nines numbers I have seen are various versions of this cheat code. How can we create a narrow measurement path?

Cheat #2 - Lump everything into a single bucket.

Not all requests are created equal.

Cheat #3 - Don’t measure latency.

This is an availability metric we’re talking about here, why would we care about how long things take, as long as they are successful?!
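A quick sketch of what ignoring latency hides: the same request log scored two ways, once counting every success and once treating slow successes as failures. The log and the 500 ms threshold are illustrative assumptions, not numbers from the article.

```python
# Four requests: three succeeded, but one "success" took 4.5 seconds.
requests = [
    {"ok": True, "latency_ms": 120},
    {"ok": True, "latency_ms": 4500},    # successful, but painfully slow
    {"ok": True, "latency_ms": 95},
    {"ok": False, "latency_ms": 30000},  # timed out
]

# Cheat #3: success-only availability ignores how long requests took.
naive = sum(r["ok"] for r in requests) / len(requests)

# Latency-aware availability: a success slower than the threshold
# counts as a failure, because the customer experienced it as one.
THRESHOLD_MS = 500
strict = sum(
    r["ok"] and r["latency_ms"] <= THRESHOLD_MS for r in requests
) / len(requests)

print(f"ignoring latency: {naive:.0%}")   # 75%
print(f"latency-aware:    {strict:.0%}")  # 50%
```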

Cheat #4 - Measure total volume, not minutes.

Let’s get a little controversial.

In order to cheat the metric we want to choose the calculation that looks the best, since even though we might have been having a bad time for 3 hours (1 out of every 10 requests was failing), not every customer was impacted so it wouldn’t be “fair” to count that time against us.
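The 3-hour incident described above makes the difference concrete. Here is a sketch of both calculations over a 30-day month, assuming a steady (made-up) 100 requests per second; the request-volume method flatters the number because the rest of the month's traffic dilutes the incident.

```python
# A 3-hour incident in a 30-day month, with 1 in 10 requests failing
# during the incident. The traffic rate is an illustrative assumption.
minutes_in_month = 30 * 24 * 60        # 43,200 minutes
bad_minutes = 3 * 60                   # 180 minutes of partial outage
rps = 100                              # assumed steady request rate

total_requests = minutes_in_month * 60 * rps
failed_requests = bad_minutes * 60 * rps * 0.10

# Cheat #4: count request volume, so unaffected traffic dilutes the incident.
volume_availability = 1 - failed_requests / total_requests

# Minute-based: any minute containing failures counts as a bad minute.
minute_availability = 1 - bad_minutes / minutes_in_month

print(f"request-volume: {volume_availability:.4%}")  # 99.9583%
print(f"bad minutes:    {minute_availability:.4%}")  # 99.5833%
```

Same incident, roughly half a nine of difference, which is exactly why the choice of calculation matters.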

Building more specific models of customer paths is manual. It requires more effort and customization to build a model of customer behavior (read: engineering time). Sometimes we just don’t have people with the time or specialization to do this, or it will cost too much to maintain in the future.

We don’t have data on all of the customer scenarios. In this case we just can’t measure enough to be sure what our availability is.

Sometimes we really don’t care (and neither do our customers). Some of the pages we build for our websites are… not very useful. Sometimes spending the time to measure (or fix) these scenarios just isn’t worth the effort. It’s important to focus on important scenarios for your customers and not waste engineering effort on things that aren’t very important (this is a very good way to create an ineffective availability effort at a company).

Mental shortcuts matter. No matter how much education we try, it’s hard to change perceptions of executives, engineers, etc. Sometimes it is better to pick the abstraction that helps people understand than pick the most accurate one.

Data volume and data quality are important to measurement. If we don’t have a good idea of which errors are “okay” and which are not, or we just don’t have that much traffic, some of these measurements become almost useless (what is the SLO of a website with 3 requests? does it matter?).
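The low-traffic problem is easy to see with arithmetic: when the denominator is tiny, a single failure moves the metric by a huge step, so the number mostly measures noise. A minimal illustration (the request counts are made up):

```python
def availability(successes: int, total: int) -> float:
    """Fraction of requests that succeeded."""
    return successes / total

# With 3 requests, one failure is the difference between 100% and ~66.7%.
print(availability(3, 3))         # 1.0
print(availability(2, 3))         # ~0.667, a huge swing from one failure
# At real volume, the same single failure barely registers.
print(availability(9999, 10000))  # 0.9999
```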

What is your way of cheating nines? ;)

 

J.G. Ballard: My Favorite Books


Highlights

In this respect I differed completely from my children, who began to read (I suspect) only after they had left their universities. Like many parents who brought up teenagers in the 1970s, it worried me that my children were more interested in going to pop concerts than in reading “Pride and Prejudice” or “The Brothers Karamazov” — how naive I must have been. But it seemed to me then that they were missing something vital to the growth of their imaginations, that radical reordering of the world that only the great novelists can achieve.

I now see that I was completely wrong to worry, and that their sense of priorities was right — the heady, optimistic world of pop culture, which I had never experienced, was the important one for them to explore. Jane Austen and Dostoyevsky could wait until they had gained the maturity in their 20s and 30s to appreciate and understand these writers, far more meaningfully than I could have done at 16 or 17.

Books:

  • “The Day of the Locust,” Nathanael West
  • “Collected Short Stories,” Ernest Hemingway
  • “The Rime of the Ancient Mariner,” Samuel Taylor Coleridge
  • “The Annotated Alice,” ed. Martin Gardner
  • “The World through Blunted Sight,” Patrick Trevor-Roper
  • “The Naked Lunch,” William Burroughs
  • “The Black Box,” ed. Malcolm MacPherson
  • “Los Angeles Yellow Pages”
  • “America,” Jean Baudrillard
  • “The Secret Life of Salvador Dalí,” by Dalí
 

Cyber Conflict and Subversion in the Russia-Ukraine War


Highlights

The Russia-Ukraine war is the first case of cyber conflict in a large-scale military conflict involving a major power.

Contrary to cyberwar fears, most cyber operations remained strategically inconsequential, but there are several exceptions: the AcidRain operation, the UKRTelecom disruption, the September 2022 power grid sabotage, and the catastrophic Kyivstar outage of 2023.

These developments suggest hacking groups are increasingly fusing cyber operations with traditional subversive methods to improve effectiveness.

The first exceptional case is AcidRain. This advanced malware knocked out satellite communication provided by Viasat’s K-SAT service across Europe the very moment the invasion commenced. Among the customers of the K-SAT service: Ukraine’s military. The operation that deployed this malware stands out not only because it shows a direct linkage to military goals but also because it could have plausibly produced a clear tactical, potentially strategic, advantage for Russian troops at a decisive moment.

The second exception is a cyber operation in March 2022 that caused a massive outage of UKRTelecom, a major internet provider in Ukraine. It took only a month to prepare yet caused significant damage. It cut off over 80 percent of UKRTelecom’s customers from the internet for close to 24 hours.

Finally, the potentially most severe challenge to the theory of subversion is a power grid sabotage operation in September 2022. The operation stands out not only because it used a novel technique but also because it took very little preparation. According to Mandiant, it required only two months of preparation and used what is called “living off the land” techniques, namely foregoing malware and using only existing functionality.

After all, why go through the trouble of finding vulnerabilities in complex networks and developing sophisticated exploits when you can take the easy route via an employee, or even direct network access?

Why We Love Music (greatergood.berkeley.edu)
 

Some article from the past ;)

Why We Love Music


Highlights

Using fMRI technology, they’re discovering why music can inspire such strong feelings and bind us so tightly to other people.

“A single sound tone is not really pleasurable in itself; but if these sounds are organized over time in some sort of arrangement, it’s amazingly powerful.”

There’s another part of the brain that seeps dopamine, specifically just before those peak emotional moments in a song: the caudate nucleus, which is involved in the anticipation of pleasure. Presumably, the anticipatory pleasure comes from familiarity with the song—you have a memory of the song you enjoyed in the past embedded in your brain, and you anticipate the high points that are coming.

During peak emotional moments in the songs identified by the listeners, dopamine was released in the nucleus accumbens, a structure deep within the older part of our human brain.

This finding suggested to her that when people listen to unfamiliar music, their brains process the sounds through memory circuits, searching for recognizable patterns to help them make predictions about where the song is heading. If music is too foreign-sounding, it will be hard to anticipate the song’s structure, and people won’t like it—meaning, no dopamine hit. But, if the music has some recognizable features—maybe a familiar beat or melodic structure—people will more likely be able to anticipate the song’s emotional peaks and enjoy it more. The dopamine hit comes from having their predictions confirmed—or violated slightly, in intriguing ways.

On the other hand, people tend to tire of pop music more readily than they do of jazz, for the same reason—it can become too predictable.

Her findings also explain why people can hear the same song over and over again and still enjoy it. The emotional hit off of a familiar piece of music can be so intense, in fact, that it’s easily re-stimulated even years later.

“Musical rhythms can directly affect your brain rhythms, and brain rhythms are responsible for how you feel at any given moment,” says Large.

“If I’m a performer and you’re a listener, and what I’m playing really moves you, I’ve basically synchronized your brain rhythm with mine,” says Large. “That’s how I communicate with you.”

He points to the work of Erin Hannon at the University of Nevada who found that babies as young as 8 months old already tune into the rhythms of the music from their own cultural environment.

“Liking is so subjective,” he says. “Music may not sound any different to you than to someone else, but you learn to associate it with something you like and you’ll experience a pleasure response.”

 

Interesting findings

 

Sorry for the downtime. I had not been paying enough attention to the changelog and wrongly assumed the site was broken because of the pictrs update.

But it was actually the wrong port in the Docker template!

 

Startling differences between humans and jukeboxes


Highlights

Similarly, when you survey people about what motivates them at work, they go “Feeling good about myself! Having freedom, the respect of my coworkers, and opportunities to develop my skills, learn things, and succeed!” When you survey people about what motivates others, they go, “Money and job security!” In another survey, people claimed that they value high-level needs (e.g., finding meaning in life) more than other people do.3

I’m saying “people” here as if I wasn’t one of them, but I would have agreed with all of the above. It was only saying it out loud that made me realize how cynical my theory of human motivation was, and that I applied it to everyone but myself. Yikes!

It’s not that hard to give people skills. It’s way harder to give them interests.

The best way to use incentives, then, is to:

  1. find the people who already want what you want
  2. help them survive

in a department where an internal survey revealed low morale among the graduate students. A town hall was convened to investigate the issue. The students knew that one of the biggest problems was that a handful of professors terrorize and neglect their underlings, and the fastest way to fix this would be to put those faculty members on an ice floe and push it out to sea. This, of course, was difficult to bring up (some of those faculty members were in the room), and so instead we talked about minor bureaucratic reforms like whether there should be some training for advisors, or whether bad advisors should have fewer opportunities to admit students. Nobody could name who these mysterious bad advisors were, of course, so even these piddling suggestions went nowhere.

If you hire someone based on the shininess of their CV and then hope that, somehow, the employee handbook will show them how to also be a good person and not just a prolific paper-producer, you’re going to end up with a department full of sad graduate students.

If you’re writing a constitution or a code of conduct, by all means, do a good job. But if you’re counting on something like “SUBSECTION 3A: Being evil is not allowed” to stop people from being evil, or if you think Robert’s Rules of Order are going to turn an insecure despot into an enlightened ruler, well, strap in for some Dark Ages and some bad improv.

discovering your inner motivations takes time and experience, and we gum up the process with lots of strong opinions about what should motivate us.

If you believe that people need to be treated like jukeboxes or secret criminals, you are accepting the behaviorist premise that we need to put people inside a giant [operant conditioning chamber](https://en.wikipedia.org/wiki/Operant_conditioning_chamber) that dispenses food pellets for good behavior and electric shocks for bad behavior.

 

Archive link: https://archive.ph/c8Dbc

 

Instead of Oscars.

 

Went well with this:

How AI Will Change Democracy


Highlights

Replacing humans with AIs isn’t necessarily interesting. But when an AI takes over a human task, the task changes.

In particular, there are potential changes over four dimensions: Speed, scale, scope and sophistication.

It gets interesting when changes in degree can become changes in kind. High-speed trading is fundamentally different than regular human trading. AIs have invented fundamentally new strategies in the game of Go. Millions of AI-controlled social media accounts could fundamentally change the nature of propaganda.

We don’t know how far AI will go in replicating or replacing human cognitive functions. Or how soon that will happen. In constrained environments it can be easy. AIs already play chess and Go better than humans.

keep in mind a few questions: Will the change distribute or consolidate power? Will it make people more or less personally involved in democracy? What needs to happen before people will trust AI in this context? What could go wrong if a bad actor subverted the AI in this context?

The changes are largely in scale. AIs can engage with voters, conduct polls and fundraise at a scale that humans cannot—for all sizes of elections. They can also assist in lobbying strategies. AIs could also potentially develop more sophisticated campaign and political strategies than humans can.

But as AI starts to look and feel more human, our human politicians will start to look and feel more like AI. I think we will be OK with it, because it’s a path we’ve been walking down for a long time. Any major politician today is just the public face of a complex socio-technical system. When the president makes a speech, we all know that they didn’t write it.

In the future, we’ll accept that almost all communications from our leaders will be written by AI. We’ll accept that they use AI tools for making political and policy decisions. And for planning their campaigns. And for everything else they do.

AIs can also write laws. In November 2023, Porto Alegre, Brazil became the first city to enact a law that was entirely written by AI. It had to do with water meters. One of the councilmen prompted ChatGPT, and it produced a complete bill. He submitted it to the legislature without telling anyone who wrote it. And the humans passed it without any changes.

A law is just a piece of generated text that a government agrees to adopt.

AI will be good at finding legal loopholes—or at creating legal loopholes. I wrote about this in my latest book, A Hacker’s Mind. Finding loopholes is similar to finding vulnerabilities in software.

AIs will be good at inserting micro-legislation into larger bills.

AI can help figure out unintended consequences of a policy change—by simulating how the change interacts with all the other laws and with human behavior.

AI can also write more complex law than humans can.

AI can write laws that are impossible for humans to understand.

Imagine that we train an AI on lots of street camera footage to recognize reckless driving and that it gets better than humans at identifying the sort of behavior that tends to result in accidents. And because it has real-time access to cameras everywhere, it can spot it … everywhere.

The AI won’t be able to explain its criteria: It would be a black-box neural net. But we could pass a law defining reckless driving by what that AI says. It would be a law that no human could ever understand. This could happen in all sorts of areas where judgment is part of defining what is illegal. We could delegate many things to the AI because of speed and scale. Market manipulation. Medical malpractice. False advertising. I don’t know if humans will accept this.

It could audit contracts. It could operate at scale, auditing all human-negotiated government contracts.

Imagine we are using an AI to aid in some international trade negotiation and it suggests a complex strategy that is beyond human understanding. Will we blindly follow the AI? Will we be more willing to do so once we have some history with its accuracy?

Could AI come up with better institutional designs than we have today? And would we implement them?

An AI public defender is going to be a lot better than an overworked not very good human public defender. But if we assume that human-plus-AI beats AI-only, then the rich get the combination, and the poor are stuck with just the AI.

AI will also change the meaning of a lawsuit. Right now, suing someone acts as a strong social signal because of the cost. If the cost drops to free, that signal will be lost. And orders of magnitude more lawsuits will be filed, which will overwhelm the court system.

Another effect could be gutting the profession. Lawyering is based on apprenticeship. But if most of the apprentice slots are filled by AIs, where do newly minted attorneys go to get training? And then where do the top human lawyers come from? This might not happen. AI-assisted lawyers might result in more human lawyering. We don’t know yet.

AI can help enforce the law. In a sense, this is nothing new. Automated systems already act as law enforcement—think speed trap cameras and Breathalyzers. But AI can take this kind of thing much further, like automatically identifying people who cheat on tax returns, identifying fraud on government service applications and watching all of the traffic cameras and issuing citations.

But most importantly, AI changes our relationship with the law. Everyone commits driving violations all the time. If we had a system of automatic enforcement, the way we all drive would change—significantly. Not everyone wants this future. Lots of people don’t want to fund the IRS, even though catching tax cheats is incredibly profitable for the government. And there are legitimate concerns as to whether this would be applied equitably.

AI can help enforce regulations. We have no shortage of rules and regulations. What we have is a shortage of time, resources and willpower to enforce them, which means that lots of companies know that they can ignore regulations with impunity.

Imagine putting cameras in every slaughterhouse in the country looking for animal welfare violations or fielding an AI in every warehouse camera looking for labor violations. That could create an enormous shift in the balance of power between government and corporations—which means that it will be strongly resisted by corporate power.

The AI could provide the court with a reconstruction of the accident along with an assignment of fault. AI could do this in a lot of cases where there aren’t enough human experts to analyze the data—and would do it better, because it would have more experience.

Automated adjudication has the potential to offer everyone immediate justice. Maybe the AI does the first level of adjudication and humans handle appeals. Probably the first place we’ll see this is in contracts. Instead of the parties agreeing to binding arbitration to resolve disputes, they’ll agree to binding arbitration by AI. This would significantly decrease cost of arbitration. Which would probably significantly increase the number of disputes.

If you and I are business partners, and we have a disagreement, we can get a ruling in minutes. And we can do it as many times as we want—multiple times a day, even. Will we lose the ability to disagree and then resolve our disagreements on our own? Or will this make it easier for us to be in a partnership and trust each other?

Human moderators are still better, but we don’t have enough human moderators. And AI will improve over time. AI can moderate at scale, giving the capability to every decision-making group—or chatroom—or local government meeting.

AI can act as a government watchdog. Right now, much local government effectively happens in secret because there are no local journalists covering public meetings. AI can change that, providing summaries and flagging changes in position.

This would help people get the services they deserve, especially disadvantaged people who have difficulty navigating these systems. Again, this is a task that we don’t have enough qualified humans to perform. It sounds good, but not everyone wants this. Administrative burdens can be deliberate.

Finally, AI can eliminate the need for politicians. This one is further out there, but bear with me. Already there is research showing AI can extrapolate our political preferences. An AI personal assistant trained on and continuously attuned to your political preferences could advise you, including what to support and who to vote for. It could possibly even vote on your behalf or, more interestingly, act as your personal representative.

We can imagine a personal AI directly participating in policy debates on our behalf along with millions of other personal AIs and coming to a consensus on policy.

More near term, AIs can result in more ballot initiatives. Instead of five or six, there might be five or six hundred, as long as the AI can reliably advise people on how to vote. It’s hard to know whether this is a good thing. I don’t think we want people to become politically passive because the AI is taking care of it. But it could result in more legislation that the majority actually wants.

I think this is all coming. The time frame is hazy, but the technology is moving in these directions.

All of these applications need security of one form or another. Can we provide confidentiality, integrity and availability where it is needed? AIs are just computers. As such, they have all the security problems regular computers have—plus the new security risks stemming from AI and the way it is trained, deployed and used. Like everything else in security, it depends on the details.

In most cases, the owners of the AIs aren’t the users of the AI. As happened with search engines and social media, surveillance and advertising are likely to become the AI’s business model. And in some cases, what the user of the AI wants is at odds with what society wants.

We need to understand the rate of AI mistakes versus the rate of human mistakes—and also realize that AI mistakes are viewed differently than human mistakes. There are also different types of mistakes: false positives versus false negatives. But also, AI systems can make different kinds of mistakes than humans do—and that’s important. In every case, the systems need to be able to correct mistakes, especially in the context of democracy.

Many of the applications are in adversarial environments. If two countries are using AI to assist in trade negotiations, they are both going to try to hack each other’s AIs. This will include attacks against the AI models but also conventional attacks against the computers and networks that are running the AIs. They’re going to want to subvert, eavesdrop on or disrupt the other’s AI.

Large language models work best when they have access to everything, in order to train. That goes against traditional classification rules about compartmentalization.

Can we build systems that reduce power imbalances rather than increase them? Think of the privacy versus surveillance debate in the context of AI.

And similarly, equity matters. Human agency matters.

Whether or not to trust an AI is less about the AI and more about the application. Some of these AI applications are individual. Some of these applications are societal. Whether something like “fairness” matters depends on this. And there are many competing definitions of fairness that depend on the details of the system and the application. It’s the same with transparency. The need for it depends on the application and the incentives. Democratic applications are likely to require more transparency than corporate ones and probably AI models that are not owned and run by global tech monopolies.

AI will be one of humanity’s most important inventions. That’s probably true. What we don’t know is if this is the moment we are inventing it. Or if today’s systems are yet more over-hyped technologies. But these are security conversations we are going to need to have eventually.

AI is coming for democracy. Whether the changes are a net positive or negative depends on us. Let’s help tilt things to the positive.

Yea or Nay?

[–] [email protected] 9 points 1 year ago (1 children)

not good, sometimes still trying to use it and get lost from time to time

[–] [email protected] 2 points 1 year ago

nice thinking, TRIZ like.

[–] [email protected] 1 points 1 year ago

Genius is in simplicity

[–] [email protected] 3 points 1 year ago

It is by design, you can read more here: https://mastodon.social/@Gargron/4947733

[–] [email protected] 7 points 1 year ago

interesting question, somehow i think that drones were launched from somewhere in Moscow

[–] [email protected] 4 points 1 year ago

midnight commander, especially if i need to delete files/dirs with '-' and non-ascii characters. i do it without thinking.

[–] [email protected] 9 points 1 year ago

read books, play games, watch tv, walk the dog, love my wife, sleep

[–] [email protected] 0 points 1 year ago

a bro and a sis, live in different countries all of us. crossed water and fire, internal conflicts from time to time, but if somebody dares to touch from the "outside" - we become one buddha palm ;)

[–] [email protected] 3 points 1 year ago (1 children)

death stranding

[–] [email protected] 1 points 1 year ago

Reading: Everything is Under Control by Robert Anton Wilson
Listening: Galaxy Outlaws: The Complete Black Ocean Mobius Missions by J.S. Morin, Mikael Naramore (narrator)
