[(After smugly criticizing Jane Austen) “What Jane Austen novels have you read?” “None. I don’t read novels. I prefer good literary criticism.” – Metropolitan]
This scene, while amusing, encapsulates a dynamic I’m seeing more and more: someone forming a strong opinion based on someone else’s take they read on Twitter or a niche subreddit, without any understanding of the original content or broader context. This essay is my attempt to explain why most people don’t learn about the world in a meaningful way from how they use Twitter and Reddit: despite an abundance of facts and strong opinions, they end up with only a superficial understanding of complex issues.
What’s missing in all of this is recognition of a sort of “intellectual dark matter.” It’s the invisible mass of context, background knowledge, and systemic understanding that gives weight and meaning to individual facts and opinions.
When a truly well-informed person forms an opinion on a new issue, they’re not just processing the new information in isolation. They’re running it through a complex web of prior knowledge, understanding how it fits into broader systems and structures. Even if they don’t explicitly cite this background in their analysis, it’s all there, acting as a system of checks and balances on the conclusions they draw.
But when you absorb someone else’s opinion on Twitter or Reddit, you don’t build all of this intellectual dark matter. You just get the final product – a conclusion or opinion that seems to float in space, disconnected from the gravitational pull of deeper understanding.
Consider this quote from Adam Mastroianni:
“Once, long ago, my friend’s mom went to the library looking for a book for her kids. ‘Do you recommend this Hunger Games book?’ she asked a librarian. ‘Oh yes,’ the librarian replied. ‘It’s about a world that’s divided into districts, and each district makes something different: one makes grain, another makes energy, and so on. Your kids will really like it.’ This is, of course, factually true about The Hunger Games, but it misses the point, which is that the book is actually about a bunch of teenagers being forced to kill each other in the woods. The more you talk to this librarian, then, the less you will understand The Hunger Games. ‘District 8 makes textiles! District 10 makes livestock!’ As you acquire more of these pointless facts, you’ll probably feel like you’re becoming a Hunger Games expert when you’re actually becoming a Hunger Games dummy.”
To further illustrate this concept, let’s consider an analogy from the world of cycling:
In the past, as a seasoned road cyclist, whenever I came across someone riding at 30+ km/h, it meant I was riding with someone who had lots of cycling experience. I could trust that they knew proper etiquette and had solid bike-handling skills, making it safe and predictable to ride alongside them. Now, with the rise of e-bikes, suddenly anyone can zoom past you at 40 km/h without having put in the years of training and experience that develop good riding abilities.
Discussing complex issues with people who mostly learn about the world through Twitter or places like /r/neoliberal feels like riding a bike alongside an e-biker, except they typically don’t know they are using a motor.
To see how this plays out in online communities, let’s examine two case studies:
I used to enjoy reading /r/neoliberal, a place that, on the surface, hosts lots of high-level intellectual discussion: high-fact, high-confidence, strongly opinionated takes on current issues.
While I am very sympathetic to neoliberal beliefs and the values of this community, I eventually realized that many of its users have learned about the world nearly exclusively through reading /r/neoliberal. Despite all the confidence and factual support, most users haven’t actually read very much that will give them the broader context to fully understand the topics they’re discussing. What’s happening is a kind of intellectual telephone game:
- A news story breaks and is shared on /r/neoliberal.
- A handful of genuinely knowledgeable users provide initial analysis, which is often quite good.
- Other users absorb this view/analysis.
- As the story evolves over the next 10+ threads in the coming weeks and months, users continue to repeat the initial analyses, often missing crucial new context or developments, and stretch the original view/analysis into broader stories where it doesn’t belong.
- Over time, users build up a fact-filled viewpoint on recurring issues, supported by the confident opinions expressed in earlier threads, without spending significant time reading books or exploring other environments, which would gradually help them understand the broader context.
A similar phenomenon occurs in rationalist circles on Twitter and on blogs:
- A complex issue emerges.
- Various rationalists share their initial thoughts in tweets or blogs, often trying to apply models from other domains to this specific problem.
- A respected figure in the community makes a “winning” argument, which gets adopted as “the rationalist consensus on this issue; it accounts for all the important factors.” This opinion gets memorialized, perhaps on thezvi.wordpress.com or lesswrong.com, as the definitive take.
- Many in the community accept this view wholesale, feeling they now understand the issue completely. Further exploration or questioning of the topic is seen as unnecessary because the matter has already been settled, irrespective of newly emerging facts.
This process creates an illusion of clear understanding of every issue. One of the challenges with this is that many people have adopted a smug, elitist attitude that the world is incompetent and that they know deep truths the experts miss. This reminds me of Scott Alexander’s Cardiologists and Chinese Robbers essay. When looking at a huge sample of potential issues, you can always find examples to support any narrative. Rationalists might fixate on a few instances of experts being wrong, believing they’ve uncovered systemic failures, when in reality they’re cherry-picking from millions of potential examples of some individual or institution being wrong.
The danger here is a kind of “Mushy Brain Syndrome.” With each thread read on r/neoliberal, each hot take consumed on Twitter, or each “rationalist consensus” accepted without deeper exploration, our brains become a little mushier. The more facts we accumulate within a narrow framework, the less we actually understand the issue in its full complexity. It’s a kind of anti-learning, where increased exposure to information paradoxically leads to decreased understanding. By absorbing facts without broader context, we’re like an overly confident e-biker.
This phenomenon is almost a reverse of the Gell-Mann Amnesia effect. In Gell-Mann Amnesia, you notice that people are wrong about the specific things you know well; here, people are wrong about things they superficially know much more about than you do. They might have the object-level facts, but they lack the crucial understanding of how those facts intersect with broader systems.
In a way, this is similar to how large language models work, but in reverse. These AI models ingest vast amounts of data and can identify statistical relationships between pieces of information. But people who rely too heavily on learning from Twitter and Reddit often can’t connect the facts they’re absorbing to anything beyond the immediate issue.
I don’t have any strong recommendations of where to go from here. Perhaps start reading more, trying to understand broader systems and different distinct fields, and stop trying to feel like you can be the expert on everything. Maybe read Tyler Cowen’s “Context is that which is scarce” 1000 times until the point finally hits home.
Nice piece, Adam. I agree with all of this.
I have a hobbyist interest in mathematics, and have devoted a lot of time over the years to learning as much as I can about the subject. One thing I learned early on is that it is very easy to delude yourself into believing you’ve learned something, that you understand something.
To really understand a piece of mathematics, I often have to be actively involved in coming up with the idea myself, from scratch. Lots of mathematics books are actually structured to help you do this: they’ll lead you through a series of guided problems that are specifically designed to help you develop intuition and prove the core results on your own. But I only learned that this was the best approach for me through many false starts.
It’s easy to watch a YouTube video, read a blogpost, or follow some example problems and think you’ve understood a piece of mathematics. But this is an illusion, and a dangerous one at that. You need to do the work yourself, build it up for yourself, in order to really have a first-principles, second-nature understanding.
However, it’s not enough to just “do the work”. There are lots of gruesome math problems a smart person can solve with some ingenuity and persistence. But it would be a mistake to conclude that, because you can do a hard problem, you have a good grasp of the concepts underlying the problem. To get this kind of grasp, you need a lot of context: you need to be able to name these concepts, understand how they relate to other problems and concepts, understand how far they extend, and so on.
Being exposed to the problem at the right time, with the right scaffolding in place, with the right problems before and after, is really important. Hard work and a knack for getting your hands dirty with problems—these are necessary but not sufficient conditions to develop mastery.
In some ways, math is easy. There are problems you can work on to test your understanding. There’s feedback, as well as the opportunity to be “in it”. In other fields, like history or philosophy, getting context is much harder. At some point, you have to hunker down and just take in as much as you can. However, the mistake I see lots of people making is thinking that blogposts, podcasts, etc. are a substitute for books. Indeed, it has become strangely fashionable nowadays to decry books for being too long, or too repetitive. What a mistake this is!
It might be possible to take a 400-page book and compress its message down to a reasonable 20 pages. But the problem is that most people simply won’t be able to appreciate those 20 pages, won’t be able to truly “get” what is in them. You need build-up, you need repetition, you need different ways of articulating the same idea, you need concrete examples, if you want any hope of being able to understand those 20 pages cold.
Those extra 380 pages look inefficient, especially in hindsight, once you’ve actually mastered the core 20 pages. But the funny thing with many forms of insight is that, after you’ve internalized it, it looks obvious, maybe even trivial. All of the work that had to be put in to get that understanding is glossed over, distorted, maybe even forgotten. But that work needs to be done. Those 380 pages aren’t wasteful; they are essential.
(Incidentally, nowadays I’m more likely to think of the blogposts or podcasts or whatever as the wasteful ones. I think most people, if they are honest with themselves, will recognize that these are often just high-class entertainment masquerading as education.)
Should say “Dan”, not “Adam”. The perils of writing a comment late at night, le sigh.