The Internet's Information Overload Problem and Generative AI's Tricky Solution
The internet democratized access to information but paradoxically made learning less efficient. The explosion of content created severe signal-to-noise problems, decision paralysis, and cognitive overhead that pre-internet learners never faced. While AI chatbots promise to solve this through intelligent curation, they may amplify the problem. As AI floods the internet with synthetic content, verified human expertise and trusted digital identities become exponentially more valuable. AI's fundamental limitation, reducing knowledge to common denominators, makes it nearly useless for anyone seeking exceptional performance, which by definition requires understanding the unique human context behind successful strategies. The breakthrough could come if LLMs and chatbots evolve to truly understand the learner's unique combination of skills, traits, and circumstances and customize knowledge delivery accordingly, though this same capability could enable unprecedented manipulation.
The Thesis
Here's a statement that sounds absurd at first: while the internet can genuinely be considered a giant leap in human evolution, learning might have been more efficient before it existed. Not more accessible. Not more equitable. Not more comprehensive. But more efficient, as measured by the ratio of knowledge gained to attention invested. The internet solved the information scarcity problem so completely that it created a new, potentially more severe problem: information abundance without effective filtering mechanisms.
Consider the pre-internet learner who wanted to study investing. They had limited options:
- Local library with perhaps 20-50 books on the subject
- A university course with an assigned textbook
- A local mentor or expert
- Financial newspapers and magazines
The constraint forced focus. You didn't evaluate 10,000 options. You learned from what was available, invested full attention, and internalized deeply. Now consider the modern learner with the same goal. You face:
- Millions of articles, blog posts, and videos
- Thousands of "experts" with varying credibility
- Hundreds of books with conflicting methodologies
- Real-time social media commentary
- Algorithmic feeds optimizing for engagement, not learning
- Sponsored content disguised as education
The abundance forces triage. Before you learn anything, you must first learn what to learn and who to trust. This meta-problem consumes enormous cognitive resources that never reach actual learning. The pre-internet model had natural gatekeepers—publishers, universities, libraries—performing heavy curation. Their incentives were roughly aligned with quality. Modern algorithmic curation optimizes for engagement, which is orthogonal to (and often inversely correlated with) educational value. A clickbait article titled "3 Secrets Wall Street Doesn't Want You To Know" gets more distribution than a comprehensive analysis. Not because it's more valuable, but because it's more engaging.
The Cognitive Load Burden
Learning requires working memory, which has strict capacity constraints. Cognitive Load Theory demonstrates that learning happens only when working memory can focus on processing information, not managing it. Pre-internet learning imposed one type of cognitive load: intrinsic complexity (the inherent difficulty of the material). Modern internet learning adds many others:
- Evaluating source credibility
- Reconciling contradictions between sources
- Piecing together a fragmented subject
- Navigating interface complexity
These additional loads exhaust working memory before it reaches actual learning. You finish a session feeling like you've "done research" but retained little. The pre-internet learner had far less extraneous cognitive load. Open book, read chapter, internalize. The simplicity wasn't a bug—it was a feature that enabled deep encoding.
Enter LLMs and Modern Chatbots: Saviors or Accelerants?
Large Language Models and AI chatbots promise to solve the information abundance problem through intelligent curation and synthesis. The pitch: instead of drowning in information, ask an AI to filter and summarize. This could go multiple ways.
Optimistic scenario:
LLMs become the intelligent intermediary that solves the discovery problem:
How it helps:
- Reduced evaluation load: AI pre-filters quality
- Efficient synthesis: Combines multiple sources coherently
- Interactive tutoring: Immediate feedback and clarification
- Noise filtering: Cuts through redundancy automatically
In this scenario, LLMs restore the pre-internet advantage (curated, structured learning) while keeping internet advantages (breadth, currency, accessibility).
Example: Instead of reading 50 articles about market psychology, you ask an AI to synthesize the consensus view, highlight areas of disagreement, and identify the three most authoritative sources for deep reading.
This is efficient curation at scale. The learner's attention goes to learning, not managing information.
Pessimistic scenario:
LLMs make the problem worse by adding new layers of intermediation and noise:
How it hurts:
- Source opacity: Learning without knowing origin degrades trust and understanding
- Synthetic slop: AI-generated content floods the internet, increasing noise
- Echo chamber effects: AIs trained on AI content create feedback loops
- Expertise atrophy: Over-reliance on AI prevents development of evaluation skills
- Disintermediation of experts: Breaks direct relationships that enable deep learning
- Confidence without competence: Fluent AI responses mask gaps and errors
In this scenario, LLMs become another layer of mediation between learner and knowledge, not a solution to mediation problems.
Example: You ask an AI about market psychology. It synthesizes information from multiple sources (some quality, some garbage), presents it confidently, and you absorb a slightly wrong mental model without realizing it. You never engage with the original experts or primary sources. This is synthetic knowledge pollution: it appears educational while actually degrading understanding.
The Trust Imperative: Why Human Identity Becomes Critical
Even in an LLM-dominated world, people still need to trust the source of knowledge. This isn't optional. It's fundamental to human cognition and learning. When you learn something, you don't just store facts - you store provenance. Where did this knowledge come from? Who vouches for it? What's their track record? This is how humans assess reliability and know when to update beliefs.
AI can synthesize information brilliantly, but it cannot provide accountability. When an AI gives you investment advice, there's no human behind it staking their reputation. No track record to verify. No consequences for being wrong. No skin in the game. This creates a profound problem: Trust without traceability is just faith.
Learners instinctively know this. When learning something important, they want to trace the information chain back to a credible human source:
- Who developed this framework?
- What's their track record?
- What's their methodology?
- Can I verify their claims?
- What's their incentive structure?
This means the LLM era doesn't eliminate the need for trusted human experts - it amplifies it. But there's something even deeper here.
Why Understanding the Human Behind the Thesis Matters
Knowing who created knowledge fundamentally changes how we understand and apply it. This isn't just about trust - it's about context dependency. Consider reading an investment thesis. The thesis itself - the facts, logic, projections - is just words on a page. But those words take on completely different meaning when you understand the author:
Scenario 1: Anonymous AI synthesis "Company X is undervalued based on DCF analysis showing 40% upside. The market is underestimating their expansion into adjacent markets."
Scenario 2: Human author with known identity "John Smith, 15-year tech investor with 22% CAGR track record, argues Company X is undervalued..."
Suddenly you can ask:
- What's John's investment style? (Value? Growth? Contrarian?)
- What's his time horizon? (Does he hold 1 year or 10?)
- What's his risk tolerance? (Conservative or aggressive?)
- What are his other positions? (Is this consistent with his thesis?)
- What's his background? (Engineer? Marketer? Psychologist?)
- What's his psychological makeup? (Patient? Opportunistic? Conviction-driven?)
The thesis doesn't exist in a vacuum. It exists in the context of the human who created it.
This matters enormously because the same investment thesis might be brilliant for one investor and disastrous for another. The thesis worked for John Smith because of who he is. It may not work for you because of who you are. This is why understanding first principles thinking in investing matters so much. You cannot simply copy surface-level strategies; you must understand the fundamental components and how they interact for you specifically.
The Common Denominator Problem: Why AI Fails at Performance
This reveals LLMs' most fundamental limitation: they synthesize to common denominators. By design, LLMs aggregate information across sources and extract consensus patterns. This is valuable for:
- Factual information (what's the capital of France?)
- Procedural knowledge (how do I change a tire?)
- Established best practices (what's the standard approach to X?)
- Common knowledge domains (basic investing principles)
But it's nearly useless for anyone seeking exceptional performance. Why? Because performance - especially outlier performance - comes from unique combinations, not from averaging. This connects deeply to the idea of compounding intellectual skills, psychological traits, social context, and other less obvious variables. Exceptional outcomes result from compounding advantages across multiple dimensions simultaneously. It's not one thing done well; it's the multiplication of many factors aligned specifically to an individual, a company, or a country.
Let me express this mathematically:
Performance = Skill₁ × Skill₂ × Skill₃ × ... × Personality₁ × Personality₂ × Resources × Context × Time
Note the multiplication, not addition. This is compounding, not accumulation.
If you're missing even one critical factor (it's zero or very low), the entire product collapses. This is why "best practices" often fail - they're optimized for the average, not for your specific combination.
Example: Investment Performance Decomposition
Consider a successful tech investor's returns:
- Technical understanding: 1.3x (better than average analyst)
- Market psychology insight: 1.4x (understands behavioral patterns)
- Network access: 1.2x (gets deal flow others don't)
- Risk tolerance: 1.3x (can handle volatility others can't)
- Time horizon: 1.4x (genuinely long-term)
- Capital structure: 1.2x (no forced redemptions)
Combined performance: 1.3 × 1.4 × 1.2 × 1.3 × 1.4 × 1.2 ≈ 4.8x multiplier
Each component is modest. None is individually exceptional. But compounded, they create exceptional performance.
Now imagine an AI synthesizes this investor's advice:
"Focus on technology companies with strong network effects and moats. Hold for 5+ years. Ignore short-term volatility."
You follow this advice, but you have:
- Technical understanding: 0.8x (below average)
- Market psychology insight: 1.1x (decent)
- Network access: 0.9x (limited deal flow)
- Risk tolerance: 0.7x (anxiety-prone)
- Time horizon: 0.9x (struggle to hold multi-year)
- Capital structure: 0.8x (may need liquidity)
Your performance: 0.8 × 1.1 × 0.9 × 0.7 × 0.9 × 0.8 ≈ 0.4x multiplier
The same strategy, different person, negative alpha.
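To make the arithmetic above concrete, here is a minimal Python sketch that compounds the same illustrative multipliers. The factor names and values are the hypothetical ones from the example, not measured data; the point is only that the product, not the sum, drives the outcome.

```python
from math import prod

# Illustrative multipliers copied from the example above (hypothetical, not measured data)
successful_investor = {
    "technical_understanding": 1.3,
    "market_psychology_insight": 1.4,
    "network_access": 1.2,
    "risk_tolerance": 1.3,
    "time_horizon": 1.4,
    "capital_structure": 1.2,
}

you_following_the_same_advice = {
    "technical_understanding": 0.8,
    "market_psychology_insight": 1.1,
    "network_access": 0.9,
    "risk_tolerance": 0.7,
    "time_horizon": 0.9,
    "capital_structure": 0.8,
}

def performance_multiplier(factors: dict) -> float:
    """Compound the factors multiplicatively; one weak factor drags the whole product down."""
    return prod(factors.values())

print(f"Investor: {performance_multiplier(successful_investor):.1f}x")           # ~4.8x
print(f"You:      {performance_multiplier(you_following_the_same_advice):.1f}x")  # ~0.4x
```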
This is why the human identity behind any thesis is critical. You need to understand:
- What made it work for them?
- Which of their traits are essential vs. incidental?
- Which of their advantages do I have or can develop?
- Is this strategy compatible with my psychological makeup?
AI Gives You the Average; Exceptional Performance Requires the Specific
LLMs are optimized for the common case. They synthesize consensus. They find patterns across many examples. They reduce variance.
This is exactly wrong for performance seeking.
Exceptional performance requires:
- Understanding variance: What makes this person different?
- Identifying unique combinations: What specific factors compound here?
- Path dependency: How did they get here? Can I replicate the path?
- Psychological compatibility: Does this match my temperament?
- Resource alignment: Do I have or can I acquire necessary resources?
None of this is available in AI synthesis. It's all stripped away in the aggregation process.
When an AI tells you "successful investors focus on competitive advantages," you get the common denominator truth. What you need is:
"Investor A focuses on network effects, holds 7-10 years, ignores quarterly results, has technical background enabling deep product evaluation, and has psychological makeup allowing conviction through 50% drawdowns."
Then you can ask: Am I like Investor A? If not, should I study Investor B instead?
The Breakthrough: Truly Personalized AI
But here's where it gets interesting: LLMs could theoretically solve their common denominator problem through genuine personalization. Imagine an AI that truly understood you - your specific combination of:
- Skills and knowledge gaps
- Psychological traits and emotional patterns
- Risk tolerance and time horizons
- Resources and constraints
- Learning style and cognitive patterns
- Goals and values
- Strengths and weaknesses
Such an AI could transform common denominator knowledge into personalized compatibility-matched guidance.
This would be an evolution similar to the targeted advertising revolution, but for a far more important purpose. The technology is similar - deep personalization through data - but the application and outcome could be radically different.
Imagine asking an AI about investment strategies:
Current AI response (common denominator): "Successful investing requires diversification, long-term thinking, understanding competitive advantages, and controlling emotions during market volatility. Focus on quality companies with sustainable moats."
Personalized AI response (compatibility-matched): "Based on your profile - strong technical background, high analytical skills, but tendency toward anxiety during volatility and difficulty holding positions long-term - here's what I recommend:
Avoid: High-conviction concentrated positions like Investor X. Your anxiety patterns suggest you'll exit during inevitable drawdowns, preventing the strategy from working.
Consider instead: Quantitative systematic approaches that remove emotional decision-making. Your analytical skills are your strength; your emotional volatility is your constraint. Build strategies that leverage the former and constrain the latter.
Specifically: Look at Investor Y's systematic value approaches or momentum strategies with mechanical rules. These match your profile better.
Development areas: If you want to build conviction-holding capability (to access strategies like Investor X's), work on:
- Stress management techniques
- Smaller position sizing initially
- Pre-commitment devices (like written investment theses)
- Mindfulness practices for volatility tolerance
This is transformative. The AI isn't just synthesizing information; as the sketch after this list illustrates, it is:
- Filtering for compatibility with your unique combination
- Highlighting strategies that work WITH your traits, not against them
- Identifying development areas to expand your capability set
- Matching you with compatible experts to study
- Preventing costly mistakes from mismatched strategies
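Here is a rough, purely illustrative sketch of what that compatibility filtering could look like: each strategy declares the trait levels it assumes, and a learner profile is scored against them. The trait names, threshold values, and scoring rule are assumptions invented for this sketch, not a description of any existing system.

```python
# Purely illustrative: trait names, required levels, and the scoring rule are assumptions.
user_traits = {
    "analytical_skill": 0.9,       # strong technical/analytical background
    "volatility_tolerance": 0.3,   # anxiety-prone during drawdowns
    "holding_discipline": 0.4,     # struggles to hold positions for years
}

strategy_requirements = {
    "concentrated_conviction": {
        "analytical_skill": 0.7, "volatility_tolerance": 0.8, "holding_discipline": 0.8,
    },
    "systematic_mechanical_rules": {
        "analytical_skill": 0.8, "volatility_tolerance": 0.4, "holding_discipline": 0.5,
    },
}

def compatibility(traits: dict, required: dict) -> float:
    """Multiply per-trait coverage ratios (capped at 1.0); one big gap collapses the score."""
    score = 1.0
    for name, level in required.items():
        score *= min(traits.get(name, 0.0) / level, 1.0)
    return score

for name, required in strategy_requirements.items():
    print(f"{name}: {compatibility(user_traits, required):.2f}")
# concentrated_conviction ~0.19, systematic_mechanical_rules ~0.60 for this profile
```

The multiplicative scoring mirrors the compounding logic above: a single large gap between what a strategy demands and what you bring collapses the fit, which is why the anxiety-prone learner in the example is steered toward mechanical rules rather than concentrated conviction.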
This connects directly to compounding principles. When knowledge is truly personalized:
Multiplicative advantage emerges:
Instead of:
- Learning general strategies (limited utility)
- Trying to force-fit them to your situation (friction and failure)
- Abandoning approaches that don't work (wasted time)
You get:
- Learning compatibility-matched strategies (high utility)
- Natural alignment with your capabilities (leverage)
- Compounding returns from strategies you can actually execute (sustainable performance)
The formula changes from: Performance = Generic Knowledge × (Difficult Adaptation Process) × (Low Probability of Fit)
To: Performance = Personalized Knowledge × Natural Capability Leverage × High Execution Probability
This could genuinely ignite compounding in ways previously impossible at scale.
We're not there yet... but we'll get there eventually
For this to work, AI systems would need deep user modeling (a rough data-structure sketch follows the list below):
- Psychological profiling (personality traits, emotional patterns)
- Skill assessment (what you know vs. don't know)
- Resource mapping (time, capital, network, etc.)
- Constraint identification (what limits you)
- Learning pattern analysis (how you best internalize information)
- Goal alignment (what you're actually optimizing for)
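As a minimal sketch, assuming nothing about any real product, such a user model might be represented as a simple data structure along these lines (field names and granularity are illustrative):

```python
from dataclasses import dataclass, field

# Hypothetical data model covering the dimensions listed above; field names are illustrative.
@dataclass
class LearnerProfile:
    skills: dict[str, float] = field(default_factory=dict)        # e.g. {"accounting": 0.7}
    knowledge_gaps: list[str] = field(default_factory=list)       # topics not yet mastered
    psychological_traits: dict[str, float] = field(default_factory=dict)  # e.g. {"volatility_tolerance": 0.3}
    risk_tolerance: float = 0.5                                    # 0 = very conservative, 1 = very aggressive
    time_horizon_years: float = 5.0
    resources: dict[str, float] = field(default_factory=dict)     # time, capital, network reach
    constraints: list[str] = field(default_factory=list)          # e.g. "needs liquidity within 2 years"
    learning_preferences: list[str] = field(default_factory=list) # e.g. "worked examples over theory"
    goals: list[str] = field(default_factory=list)                 # what the learner is actually optimizing for
```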
The Utopian Scenario: Education Revolution
If this evolves toward genuine personalization for learning, the individual benefits would be huge. As those individual benefits compound across society, the collective benefits would be immense. Given enough time to propagate, we should see a genuine leap for human civilization.
The Dystopian Scenario: Targeted Ads 2.0
But there's a less inspiring and potentially dangerous path: personalization weaponized for profit extraction rather than genuine education. There is a real risk that "personalization" ends up meaning showing you what's most profitable to show, not what's most useful, essentially an extension of today's targeted-advertising algorithms. Other risks could follow from the "wrong" kind of personalization:
- Formation of echo chambers, never challenged by different perspectives
- Inability to develop versatility or stretch capabilities
- Manipulation, enabled by the same deep understanding of the user
The difference between the utopian and dystopian scenarios comes down to one question: what is the AI optimized for, and what will it be optimized for in the future? If it simply follows the path laid out by the Google Search ecosystem, it will only succeed in extracting more commercial value from user data. The fact that around 20% of the people working at OpenAI (some in important positions) used to work at Facebook (META) does not sound very encouraging.
The Realistic Outcome: Fragmented Evolution
Most likely, we'll see a fragmented landscape:
Premium tier (small percentage of users - true value creation):
- High-quality personalized AI aligned with user learning
- Experts properly compensated
- Transparent systems with strong privacy
- Genuine capability building
- Expensive but transformative
Mainstream tier (majority of users - value extraction):
- Manipulative personalization disguised as helpfulness
- Engagement-optimized rather than outcome-optimized
- Data harvesting and behavior nudging
- Dependency creation