1
submitted 1 year ago by [email protected] to c/[email protected]

Hello and welcome! For about half a year I've been researching the ways belief systems are built, and how new information can change and refine those belief systems.

It's a bit unusual for me to set up a community. I essentially wanted a platform to discuss these things with others and to share what I've learnt, because I find it fascinating: the way new information gets processed into mental systems, how these systems can have shortcomings and lead to bias, how they are challenged and refined, the way the human brain is structured to predict and manipulate the world while also refining those predictive models, and much more.

Expect posts from me about the relationship between synthesis and analysis, between generalists and specialists, between depth and breadth, and about the way knowledge works.

Please post yourself as well, as I'd love to hear more perspectives!

2
submitted 1 year ago by [email protected] to c/[email protected]

Have you ever had the experience of minding your own business, when suddenly something in the corner of your eye catches your attention? What's that about?

This is common amongst animals with separate brain hemispheres. Imagine a bird sitting in a tree eating an apple, when it suddenly spots a buzzard approaching in the corner of its eye, alerting it to hide. The bird was paying direct attention to the apple it was eating, but at the same time a separate attention was working to spot potential predators outside of its direct focus. You can see why this has an evolutionary advantage.

In humans this works the same way: we can focus on the task at hand, but if we spot something in the corner of our eye, we can rapidly shift our focus to that point. These are two different types of attention at work.

What are the properties of these two types of attention? Most of us are most familiar with the first type: direct focus. We work on something, like washing our hands, and watch our own hands moving around. We focus on the task. This attention magnifies whatever the current task is, and ignores whatever is outside of it. Like a lens, this attention only sees what is in focus: the current task.

The second type of attention is a bit different. It usually goes unnoticed, because it mostly works outside of our consciousness, though not always. This type of attention sees the things that are out of focus: whatever lies outside of the current task. It sees the blurry corners of our vision, and can signal us whenever something interesting happens there, so we can point our direct attention at it and see it more clearly.

I like to give these types of attention names. Let's call the direct type depth attention, and the vaguer type breadth attention. One of them focuses and sees the details, but cannot see anything outside of its focus; the other sees everything outside of the focus, but not what is in focus. The two are mutually exclusive.

A good analogy is a microscope. Imagine you have a book. At first you see it as a whole, together with its surrounding objects, but whole objects are all you can see. Now pick up the book and put a page under a microscope. Zoom in, and you see each letter by itself. Zoom in further, and you see every individual fiber the paper is made of. Looking only at the zoomed-in image, you wouldn't even know you were looking at a book; looking only at the book outside the microscope, you would never know its paper is made of individual fibers. Breadth and depth are mutually exclusive, which is what warrants the two types of attention: you cannot have both at the same time.

To resolve this mutual exclusion, your brain is divided into two hemispheres, each in charge of one of the two types of attention, and they share information with each other to work together. Your left hemisphere is in charge of depth attention, and your right hemisphere is in charge of breadth attention.

This is far from the only difference between the two hemispheres, but it is one part of a general pattern in the relationship between them, and in the way knowledge works. Expect more posts about these differences.

A useful analogy is that the two hemispheres work together like the cores of a CPU. Each can work on its own task, but they can also share information with each other. They function relatively independently of one another, yet each has access to the other's information and needs it to work properly.
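To stretch that analogy into something runnable, here's a toy Python sketch; it's purely illustrative, not a claim about how neurons actually work. A "depth" worker stays on its task while a "breadth" worker watches the periphery and passes alerts over a shared channel:

```python
import threading
import queue

alerts = queue.Queue()    # the shared channel between the two "hemispheres"
done = threading.Event()

def breadth_attention():
    """Scans the periphery and reports anything salient to the channel."""
    for event in ["rustling leaves", "buzzard overhead", "falling twig"]:
        if "buzzard" in event:  # salient: worth interrupting the other worker
            alerts.put(event)
    done.set()

def depth_attention():
    """Stays focused on the task, but checks the channel between steps."""
    while not done.is_set() or not alerts.empty():
        try:
            event = alerts.get(timeout=0.1)
            print(f"interrupting focus, attending to: {event}")
        except queue.Empty:
            pass  # nothing in the periphery; keep eating the apple

workers = [threading.Thread(target=breadth_attention),
           threading.Thread(target=depth_attention)]
for w in workers:
    w.start()
for w in workers:
    w.join()
```

Neither worker blocks the other, but the focused one depends on the watcher's messages to react to anything outside its task.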

3
submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]

This sentence is a lie.

Paradoxes: often funny, often baffling. What are they, and where do they come from?

Let's begin with models. What are models? Models are essentially simplifications of principles and systems in the real world, and this makes them very useful, as we can use them to manipulate and predict the world. Some examples of models are language, maps, mathematics, sheet music, weather forecasts, and the laws of nature.

A model is essentially made up of patterns recognized in real life: it is a representation of the real world, not the real thing itself. When a model is incomplete, it is extended and changed so that it becomes more accurate. The problem is that the more complex a model becomes, the less usable and understandable it is.

What often happens with complex systems is that the system is first broken up into smaller parts, and a model is made for each part, so that each model can be used to predict or manipulate only that small part. These small models can then be used together to predict the entire system.
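Here's a minimal sketch of that divide-and-model approach, using a falling object as the "complex system"; the numbers and the simple linear drag are assumptions for illustration:

```python
def gravity(v):
    """Submodel 1: gravity pulls the object down at a constant rate."""
    return -9.81  # m/s^2

def drag(v, k=0.1):
    """Submodel 2: air resistance pushes against the current velocity."""
    return -k * v

def step(v, dt=0.01):
    """Compose the submodels to predict the whole system one tick ahead."""
    return v + (gravity(v) + drag(v)) * dt

v = 0.0
for _ in range(5000):  # simulate 50 seconds of falling
    v = step(v)
print(v)  # close to the terminal velocity of about -98 m/s
```

Each submodel is simple enough to understand on its own, yet together they predict behaviour (terminal velocity) that neither describes by itself.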

What happens when someone tries to make a fully accurate model of an extremely complex system that cannot easily be broken up into separate parts? The model has to account for every case in the real situation, and it will become as complex as the real system itself, making it useless as a model. To make a useful model of an extremely complex system, some cases in the system have to be left out, so that the model is simpler but less accurate.
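A small numerical illustration of that trade-off, with made-up data: a model with as many parameters as there are observations reproduces them perfectly, but it has simplified nothing away, and its predictions suffer for it.

```python
import numpy as np

# five noisy observations of a roughly linear process (made-up data)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 1.9, 4.2, 5.8, 8.1])

simple = np.polyfit(x, y, 1)  # 2 parameters: leaves the noise out
exact = np.polyfit(x, y, 4)   # 5 parameters: accounts for every case

print(np.polyval(simple, 5.0))  # roughly 10.0: follows the underlying trend
print(np.polyval(exact, 5.0))   # roughly 15.1: it faithfully modeled the noise too
```

The "fully accurate" model fits every observed case and is worse for it: it stopped being a simplification.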

Because models are made for separate parts, models can also clash with each other when used together, if they have different mechanics or mismatching purposes. Think of using weather forecasts and sheet music together: I can't think of a situation in which they would be compatible, because they have completely different use cases. This means that some models cannot be used together, as they are too different from one another. The same can even happen between specific mechanics within one model; let's call those submodels.

I suppose the whole universe could be seen as an extremely complex system that often cannot be broken up into separate parts. Models of gravity work very well to utilize and predict gravity, but try making a fully accurate model of the human brain, and it becomes far too complex. The principle of gravity can be successfully broken off from all other phenomena as a separate part; the human brain is more difficult to break into smaller parts, because it works more as a whole.

Well then, with that out of the way, what are paradoxes? I've never seen a paradox in real life, only in the context of language or mathematics, or some other model. As far as I understand, paradoxes occur because a model is incomplete, or limited in other ways.

Maybe you've heard of the Sorites paradox (the paradox of the heap). Imagine you have a heap of sand, and you remove grains of sand one at a time. At how many remaining grains does it stop being a heap? If it always stays a heap, is it still a heap when only one grain remains? Because the concept of a heap in language is vague and arbitrary, it cannot be quantified, which leads to the paradox. I only know what a heap is based on what others have called a heap, not on how many grains of sand are in it.

There's a mismatch between the arbitrary concept of a heap and the knowledge that a heap is made up of individual grains of sand. This means that language, too, is an incomplete model: it is arbitrary to an extent. Defining exactly how many grains of sand count as a heap would solve this problem, but it would make language too complex, so it's left out. Imagine having to define exactly how many trees make a forest. Nobody wants that, so we deal with paradoxes.
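You can see what such a "fix" would look like by writing the definition down; the cutoff of 100 below is made up, which is exactly the problem:

```python
HEAP_THRESHOLD = 100  # an arbitrary cutoff that language refuses to commit to

def is_heap(grains: int) -> bool:
    return grains >= HEAP_THRESHOLD

print(is_heap(100))  # True: a heap
print(is_heap(99))   # False: remove one grain and it's suddenly not a heap?
```

The paradox disappears, but only by drawing a sharp line nobody actually believes in.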

The first sentence in this post, "This sentence is a lie", refers to itself and negates itself. If the sentence is true, then it is a lie and therefore false; if it is false, then it isn't a lie after all and therefore true. No assignment works, which is the paradox. This is another example of two mechanics of a model being incompatible, just like the previous example; in this case the mechanics are self-reference and negation. It is also an example of parts of a system not working together properly, where the submodels collide.
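The same structure can be mimicked in code, as a hypothetical function whose truth value is defined as the negation of its own truth value:

```python
def this_sentence() -> bool:
    # "This sentence is a lie": the truth value is defined as
    # the negation of the sentence's own truth value
    return not this_sentence()

# this_sentence()  # no answer exists; Python gives up with a RecursionError
```

The machine's model fails in its own way: the evaluation never bottoms out, because self-reference and negation don't compose here either.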

So, to conclude: paradoxes probably don't exist in the real world, at least not as far as I'm aware, but only in our heads, because our models have flaws. Discovering paradoxes is a great way to discover problems in our models and to make them better, both our internal and our external ones. Paradoxes have definitely led to great improvements in mathematics, from what I know, and probably in many other fields as well. Contradictions lead to new discoveries, but I wonder if we'll ever get rid of them, judging by the nature of what we are dealing with. All I know is: when you discover contradictions, go after them, because they are the edges of our preconceptions.

I hope this was a clear and coherent story. If you have opinions or feedback, please leave a comment, as I'd love to hear more perspectives.

4
submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]

Knowledge is like sunlight: it lets us see, but it hides the stars.

As humans, we have senses that allow us to receive information about the outside world. Very useful, but by default this is just raw information. When you see something, it's just light-detecting cells in your eyes firing, and when you touch something, it's just electricity moving up your spinal cord.

Your brain has all kinds of faculties to post-process this information into a more coherent image, like stitching together the two-dimensional images each eye receives separately to create a three-dimensional experience.

Another example: your eyes can only detect the signal strengths of three separate color bands, roughly red, green, and blue, and these signal strengths get mixed in the brain so we can experience the full range of colors. This is fun and all, but it is still relatively raw information, and says nothing about what is actually being sensed. How does your brain even know what it is sensing?

Your brain is smart, and it can learn things about the outside world. Imagine a small child being given a grape for the first time. The child knows nothing about the grape, and is made to eat it by his mother. The child enjoys the grape, and this creates an association in his brain, something like this, with every arrow representing processing done on the information:

My eye cells fire in this configuration -> I see a shiny green sphere -> This is a grape -> I can eat this grape and it will taste delicious.
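As a toy sketch, each arrow can be read as a function that turns rawer data into a richer interpretation; all the mappings below are, of course, made up:

```python
def perceive(signal: dict) -> str:
    """Raw sensor data -> a percept."""
    if signal == {"shape": "sphere", "color": "green", "shiny": True}:
        return "shiny green sphere"
    return "unknown object"

def recognize(percept: str) -> str:
    """A percept -> a learned concept."""
    return {"shiny green sphere": "grape"}.get(percept, "unknown")

def predict(concept: str) -> str:
    """A concept -> an actionable expectation."""
    return {"grape": "edible and delicious"}.get(concept, "no prediction")

raw = {"shape": "sphere", "color": "green", "shiny": True}
print(predict(recognize(perceive(raw))))  # "edible and delicious"
```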

This is knowledge, and without it we would be very dumb. It's a very abstract example, and it only works if your brain even has a concept of "delicious", but luckily we are already born with some knowledge about how to process the outside world, like the fact that food is good for us.

This mechanism works for every sense, and often draws on multiple senses at the same time: experience something, make and/or refine an association, and use this association in the future to predict and manipulate the world. This learning process is always going on while we are conscious, even when we aren't aware of it.

Now then, this all sounds extremely useful, so what is the dark side? I won't deny its extreme usefulness: without it we couldn't function at all, as everything would just be noise. But knowledge has some properties that can be problematic. What happens if the only thing we know about is grapes? With no other reference material, we will think that everything is a grape; it is advantageous for us to do so. Things that resemble grapes, which is everything given our extremely limited knowledge, might be as delicious as the grape we had in the past. We don't only use our knowledge in the context of the specific past experience; we also tap into it in more unknown situations, in the hope of getting the same result.
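This failure mode shows up in the simplest of classifiers: a nearest-neighbour matcher with a single stored example (the features below are invented) returns that example for any input, however far away:

```python
KNOWN = {"grape": (1.0, 1.0, 0.2)}  # (roundness, greenness, size), made up

def classify(features):
    # pick whichever known concept is closest; with only one concept
    # in memory, the answer is always the same
    return min(KNOWN, key=lambda name: sum(
        (a - b) ** 2 for a, b in zip(KNOWN[name], features)))

print(classify((1.0, 1.0, 0.2)))  # "grape": correct
print(classify((0.2, 0.1, 9.0)))  # also "grape", even for a boulder
```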

Keeping this in mind, what happens to a specialist who has studied volcanoes for 40 years? He will be extremely knowledgeable about volcanoes, obviously, and extremely helpful whenever information about volcanoes is needed. But what happens if you ask him about something outside of his own domain, something he has no knowledge about, like the question of what made the dinosaurs extinct? He might just say that volcanoes killed the dinosaurs, having been so immersed in this subject matter throughout his life.

Confronted with an unknown problem, it makes sense that he taps into the pool of knowledge that has worked for him in the past, but you can see why this is problematic. It becomes difficult to see problems in a broader context, as every experience is filtered through his knowledge, and that is difficult to shake off. I've been there myself.

Another problem with knowledge: categorization. Whatever is a grape is not something else. Whatever is orange is not red or yellow. Because knowledge creates categories, distinct borders can form between concepts. Can we really look at something orange and see either red or yellow? I know I can't, because those colors don't fit what I define as orange. Categorization is incredibly useful for grasping the world, but it also separates, and it might lead us to miss the things that bridge categories. It draws a dividing line that might not exist in the real world.
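The color example makes this easy to see in code: hue is a continuous quantity, but a categorizer has to put a hard border somewhere (the cutoffs below are invented):

```python
def color_name(hue: float) -> str:
    """Map a continuous hue (degrees on a color wheel) to a hard category."""
    if hue < 15:
        return "red"
    if hue < 45:
        return "orange"
    return "yellow"

print(color_name(14.9), color_name(15.0))  # red orange
# two nearly identical hues land in different categories: a dividing
# line that exists in the model, not in the light itself
```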

The final problem with knowledge in this article: faulty experience. What happens if you experience bad things, like being hurt by a couple of people who happen to be bald? Because knowledge is used for future predictions, you might now think that all bald people are out to hurt you. Of course this is wrong, but because you now distrust all bald people so badly, you choose not to engage with them anymore, and you never get the chance to learn that this knowledge is wrong.

It's a deadly combo-wombo that leads to all kinds of problems and misery, and it's something I've begun recognizing and correcting in my own life recently, with some wild results. Not specifically with bald people; that's just an example. Stay tuned for a more in-depth post about this.

This is also the case with learning new things: learn bad knowledge that teaches you to seek out more bad knowledge, and you get stuck in a self-reinforcing loop. This is a big part of how echo chambers happen.
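That loop is easy to simulate. Here's a toy urn model of a feed, where every article you consume makes the feed more likely to serve the same kind; the squared weighting is an assumption chosen to exaggerate the reinforcement:

```python
import random

counts = {"view_a": 1, "view_b": 1}  # what you've consumed so far

for _ in range(10_000):
    wa, wb = counts["view_a"] ** 2, counts["view_b"] ** 2
    # the feed serves content in proportion to (squared) past consumption...
    served = "view_a" if random.random() < wa / (wa + wb) else "view_b"
    counts[served] += 1  # ...and consuming it reinforces that preference

print(counts)  # one view ends up with nearly everything, locked in early
```

Whichever side happens to get ahead in the first few draws ends up dominating: the early random experience gets frozen in.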

So, what can we do? I'd say each of these three problems has a counter-strategy:

⚔️ Specialism - 🛡️ Generalism: broadening our knowledge, getting informed about domains other than our own, so we have more diverse knowledge to use on unknown problems. The more diverse the knowledge, the better, as it allows us to make better predictions.

⚔️ Categories - 🛡️ Similarities: consciously looking for connections between categories, seeing whether there are similarities, and nurturing them. This one is difficult, as it is more unconscious; it often happens anyway when learning new things, and it's then a matter of not dismissing the connection because of other differences, as far as that is reasonable.

⚔️ Faulty experience - 🛡️ New experience: this one sucks the most. We need to override our fears and put ourselves in situations we think might hurt us, to see whether our predictions are right. If they aren't, they will slowly correct themselves based on the new experience. To cope with the fear, I like to see it as tuning myself, like you'd tune a car to work better, which makes the experience more detached. In the case of knowledge, we need to deliberately expose ourselves to different views whenever we seek out new information (a sketch of this follows below).
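Continuing the toy urn model from the echo-chamber sketch above, the "new experience" strategy can be approximated by forced exploration; the exploration rate of 0.2 is made up:

```python
import random

counts = {"view_a": 1, "view_b": 1}
EXPLORE = 0.2  # fraction of the time we deliberately pick something at random

for _ in range(10_000):
    if random.random() < EXPLORE:
        served = random.choice(list(counts))  # deliberate new exposure
    else:
        wa, wb = counts["view_a"] ** 2, counts["view_b"] ** 2
        served = "view_a" if random.random() < wa / (wa + wb) else "view_b"
    counts[served] += 1

print(counts)  # still skewed, but the minority view no longer starves
```

A small, steady dose of views we wouldn't have chosen ourselves is enough to keep the loop from closing completely.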

Interestingly, all the counter-strategies have to do with taking in new information. If you've made it this far, please leave feedback below; it's very welcome and helps a lot.

Bridging the Gap

May contradictions collapse!

Exploring the dynamics of models and the way new information interacts with them, to help ourselves.

How does knowledge work, how do echo chambers appear, and how do systems and models form?

Not necessarily limited to these topics, but you get a rough idea. New perspectives are very welcome!
