Why (most) people don’t learn about the world on Twitter: the case for intellectual dark matter

[(After smugly criticizing Jane Austen) “What Jane Austen novels have you read?” “None. I don’t read novels. I prefer good literary criticism.” – Metropolitan]

This scene, while amusing, encapsulates a dynamic I’m seeing more and more: someone forming a strong opinion based on someone else’s take they read on Twitter or a niche subreddit, without any understanding of the original content or broader context. This essay is my attempt to explain why most people don’t learn about the world in a meaningful way from how they use Twitter and Reddit: despite consuming an abundance of facts and strong opinions, they end up with only a superficial understanding of complex issues.

What’s missing in all of this is recognition of a sort of “intellectual dark matter.” It’s the invisible mass of context, background knowledge, and systemic understanding that gives weight and meaning to individual facts and opinions.

When a truly well-informed person forms an opinion on a new issue, they’re not just processing the new information in isolation. They’re running it through a complex web of prior knowledge, understanding how it fits into broader systems and structures. Even if they don’t explicitly cite this background in their analysis, it’s all there, acting as a system of checks and balances on the conclusions they draw.

But when you absorb someone else’s opinion on Twitter or Reddit, you don’t build all of this intellectual dark matter. You just get the final product – a conclusion or opinion that seems to float in space, disconnected from the gravitational pull of deeper understanding.

Consider this quote from Adam Mastroianni:

“Once, long ago, my friend’s mom went to the library looking for a book for her kids. ‘Do you recommend this Hunger Games book?’ she asked a librarian. ‘Oh yes,’ the librarian replied. ‘It’s about a world that’s divided into districts, and each district makes something different: one makes grain, another makes energy, and so on. Your kids will really like it.’ This is, of course, factually true about The Hunger Games, but it misses the point, which is that the book is actually about a bunch of teenagers being forced to kill each other in the woods. The more you talk to this librarian, then, the less you will understand The Hunger Games. ‘District 8 makes textiles! District 10 makes livestock!’ As you acquire more of these pointless facts, you’ll probably feel like you’re becoming a Hunger Games expert when you’re actually becoming a Hunger Games dummy.”

To further illustrate this concept, let’s consider an analogy from the world of cycling:

In the past, as a seasoned road cyclist, I knew that every time I came across someone biking at 30+ km/h, I was riding with someone who had lots of cycling experience. I could trust that they knew proper etiquette and had solid bike-handling skills, making it safe and predictable to ride alongside them. Now, with the rise of e-bikes, suddenly anyone can zoom past you at 40 km/h without having put in the years of training and experience that develop good riding abilities.

Discussing complex issues with people who mostly learn about the world through Twitter or places like /r/neoliberal feels like riding alongside an e-biker, except these riders typically don’t know they’re using a motor.

To see how this plays out in online communities, let’s examine two case studies:

I used to enjoy reading /r/neoliberal, which on the surface is a place with lots of high-level intellectual discussion: high-fact, high-confidence, strongly opinionated takes on current issues.

While I am very sympathetic to neoliberal beliefs and the values of this community, I eventually realized that many of its users have learned about the world almost exclusively through reading /r/neoliberal. Despite all the confidence and factual support, most users haven’t actually read much that would give them the broader context to fully understand the topics they’re discussing. What’s happening is a kind of intellectual telephone game:

  1. A news story breaks and is shared on /r/neoliberal.
  2. A handful of genuinely knowledgeable users provide initial analysis, which is often quite good.
  3. Other users absorb this view/analysis.
  4. As the story evolves over the next 10+ threads in the following weeks and months, users keep repeating the initial analyses, often missing crucial new context or developments, and stretch the original analysis into broader stories where it doesn’t belong.
  5. Over time, users build up a fact-filled viewpoint on recurring issues, supported by the confident opinions expressed in earlier threads, without spending significant time reading books or exploring other environments, which would gradually help them understand the broader context.

A similar phenomenon occurs in rationalist circles on Twitter and on blogs:

  1. A complex issue emerges.
  2. Various rationalists share their initial thoughts in tweets or blogs, often trying to apply models from other domains to this specific problem.
  3. A respected figure in the community makes a “winning” argument, which gets adopted as “the rationalist consensus on this issue; it accounts for all the important factors.” This opinion gets memorialized, perhaps on thezvi.wordpress.com or lesswrong.com, as the definitive take.
  4. Many in the community accept this view wholesale, feeling they now understand the issue completely. Further exploration or questioning is seen as unnecessary, on the grounds that the issue has already been settled, irrespective of newly emerging facts.

This process creates an illusion of clear understanding across all issues. One of the challenges is that many people have adopted a smug, elitist attitude: the world is incompetent, and they know deep truths the experts miss. This reminds me of Scott Alexander’s Cardiologists and Chinese Robbers essay. When you’re looking at a huge sample of potential issues, you can always find examples to support any narrative; if experts collectively make millions of claims, even a very high hit rate leaves thousands of failures to point at. Rationalists might fixate on a few instances of experts being wrong, believing they’ve uncovered systemic failures, when in reality they’re cherry-picking from millions of potential examples of some individual or institution being wrong.

The danger here is a kind of “Mushy Brain Syndrome.” With each thread read on /r/neoliberal, each hot take consumed on Twitter, and each “rationalist consensus” accepted without deeper exploration, our brains become a little mushier. The more facts we accumulate within a narrow framework, the less we actually understand the issue in its full complexity. It’s a kind of anti-learning, where increased exposure to information paradoxically leads to decreased understanding. By absorbing facts without broader context, we become the overly confident e-biker.

This phenomenon is almost the reverse of the Gell-Mann Amnesia effect. With Gell-Mann Amnesia, you notice that people are wrong about the specific things you know well and then forget that lesson everywhere else. Here, people are wrong about the very things they superficially know much more about than you do: they have the object-level facts, but lack the crucial understanding of how those facts intersect with broader systems.

In a way, this is similar to how large language models work, but in reverse. Those models ingest vast amounts of data and can identify statistical relationships between pieces of information. People who rely too heavily on Twitter and Reddit do the opposite: they accumulate facts about the issue directly in front of them but can’t connect those facts to anything beyond it.

I don’t have any strong recommendations for where to go from here. Perhaps start reading more, trying to understand broader systems and distinct fields, and stop trying to feel like you can be the expert on everything. Maybe read Tyler Cowen’s “Context is that which is scarce” 1,000 times until the point finally hits home.