

Calling Bullshit: The Art of Skepticism in a Data-Driven World
Chapter Summaries
What's Here for You
In a world drowning in data and misinformation, are you tired of being misled? "Calling Bullshit" offers a refreshing dose of skepticism, arming you with the tools to dissect the deceptive practices lurking in everything from news headlines to scientific studies. Prepare to embark on an intellectually stimulating journey where you'll learn to distinguish correlation from causation, identify selection biases, and decode misleading data visualizations. This isn't just about spotting falsehoods; it's about understanding the *nature* of bullshit and how it permeates our data-driven society. With wit and rigor, Carl T. Bergstrom and Jevin D. West empower you to become a savvy consumer of information, capable of critically evaluating claims and confidently calling bullshit when you see it. Get ready to sharpen your mind, challenge assumptions, and reclaim your intellectual autonomy in an age of rampant deception.
Bullshit Everywhere
In this chapter of *Calling Bullshit*, Carl T. Bergstrom and Jevin D. West dissect the pervasive nature of bullshit, tracing its roots from animal deception to sophisticated human manipulation. The authors begin by illustrating how even creatures like mantis shrimp and ravens engage in deceptive behaviors for survival, setting the stage for understanding human bullshit. The key difference, they argue, lies in humans' advanced cognitive abilities, particularly our theory of mind and complex language, which allow us to craft elaborate deceptions. The authors reveal the first core insight: bullshit thrives because everyone is trying to sell something, be it a product, an idea, or an image. Think of it as the modern world's echo chamber, where truth and falsehood are often indistinguishable. Bergstrom and West then delve into the nuances of paltering—misleading without outright lying—and the use of weasel words to evade responsibility, showcasing how language can be twisted to create false impressions. They present the second core insight that the gap between literal meaning and implied meaning is a playground for bullshitters. Consider the image of a politician skillfully dodging a question, leaving a trail of ambiguity in their wake. Moving beyond intentional deception, the authors explore how even seemingly innocent storytelling can become a form of bullshit when the focus shifts from truth to self-promotion. This reveals a third core insight: much bullshit isn't about deceiving others, but about constructing a desired self-image. The authors introduce Brandolini's principle, highlighting the asymmetry between the ease of producing bullshit and the difficulty of refuting it. The image of truth struggling to pull its breeches on while falsehood gallops around the world encapsulates this challenge. 
They then analyze the spread of misinformation on social media, using the example of the false story about an eight-year-old girl killed in the Boston Marathon bombing to illustrate how quickly rumors can propagate despite efforts to correct them. The authors introduce a fourth core insight: cleaning up bullshit requires significantly more effort and resources than creating it. They underscore the rapid changes in information sharing, from newspapers to social media, which have created fertile ground for the proliferation of misinformation. The authors present the fifth core insight that the speed and reach of modern communication technologies amplify the spread of bullshit, making it harder to contain. Ultimately, Bergstrom and West paint a sobering picture of a world inundated with bullshit, driven by self-interest, cognitive biases, and the dynamics of information dissemination, yet equip the reader with the beginnings of a critical eye. As the authors conclude, the fight against bullshit is an uphill battle, but one that is essential for navigating the complexities of our data-driven world, and the sixth core insight is this: skepticism, critical thinking, and awareness are essential tools for combating bullshit in all its forms.
Medium, Message, and Misinformation
In "Calling Bullshit," Carl T. Bergstrom and Jevin D. West dissect the modern infodemic, starting with a bold paradox: smartphones, meant to be bullshit detectors, have instead become bullshit amplifiers. The authors trace this problem back to the printing press, noting how each technological leap democratizes information but also floods the market with fluff. Like Filippo de Strata lamenting the printing press's 'brothel,' critics now decry the Internet’s endless stream of verbal excrement, a torrent that overwhelms our capacity for discernment. Bergstrom and West argue that the click-driven Internet prioritizes sparkle over substance; headlines, once concise summaries, now contort themselves into clickbait promising emotional experiences rather than conveying facts. They reveal how partisanship exploits this vulnerability, with hyper-partisan news amplified by social media algorithms that prioritize engagement over truth, creating echo chambers where tribal epistemologies reign. Judith Donath’s communication theory highlights that sharing information often signals allegiance, reinforcing community bonds even at the expense of factual accuracy. The algorithms, those silent bullshitters, learn to feed us increasingly extreme content, hijacking our attention and wasting our minds. The digital landscape becomes a hall of mirrors, where misinformation and disinformation thrive. The authors paint a vivid scene: a family in India, wrongly accused based on WhatsApp rumors, becomes a victim of mob violence. They explore the rise of digital counterfeiting, where bots and deepfakes erode trust and swamp genuine voices. Jenna Abrams, a fictional persona created by a Moscow propaganda outfit, demonstrates the effectiveness of disinformation spread through trusted networks. Ultimately, Bergstrom and West offer a multi-pronged defense: technological solutions, governmental regulation, and, most importantly, education in media literacy and critical thinking. 
The authors leave us with a call to action: to triangulate information, seek independent witnesses, and become vigilant bullshit detectors in this age of digital distortion.
The Nature of Bullshit
In this chapter of *Calling Bullshit*, Carl T. Bergstrom and Jevin D. West dissect the essence of bullshit, distinguishing it from mere lies or simple inaccuracies. They begin by noting how the term is often used loosely, as a synonym for anything disliked, before homing in on a more precise definition. The authors clarify that bullshit isn't just about deception; it's about a speaker's indifference to truth, prioritizing persuasion or impression over factual accuracy, like a shimmering mirage in the desert of discourse. Philosopher Harry Frankfurt's perspective is highlighted: bullshit arises when one tries to impress without regard for truth. G. A. Cohen adds that academic bullshit is often 'unclarifiable unclarity,' so dense and convoluted that critique becomes impossible. Bergstrom and West emphasize that the bullshitter manipulates rather than communicates, employing rhetorical flair or statistical 'snake oil' to overwhelm the audience. They define bullshit as language or data presentation intended to persuade or impress, blatantly disregarding truth and coherence. Latour’s work on power dynamics between author and reader underscores how authors often create 'black boxes'—complex jargon or impenetrable methodologies—to shield claims from scrutiny. The authors illustrate this with an example of genetic variants associated with bullshit susceptibility, where the complex biotechnology becomes a black box. Lies, they argue, become bullshit when concealed behind rhetorical artifices, like a magician's smoke and mirrors. The chapter then turns to a case study: an algorithm claiming to detect criminality from facial images. Bergstrom and West demonstrate how, without even delving into the algorithm's mechanics, one can expose the study's flaws by examining the biased training data—criminals' ID photos versus non-criminals' professional headshots. The algorithm, rather than detecting criminality, was merely detecting smiles. 
The extraordinary claim of linking facial structure to criminal tendencies crumbled under the weight of a more plausible explanation: situational facial expressions. Ultimately, the authors reveal that spotting bullshit often doesn't require technical expertise but careful consideration of data and results, urging readers to question the data's biases and the plausibility of conclusions, transforming the reader into a discerning detective of data.
Causality
In this exploration of causality, Carl T. Bergstrom and Jevin D. West dissect the pervasive human tendency to confuse correlation with causation, a cognitive leap that often leads to misinformation. The authors begin by illustrating how easily we assume causal relationships where only associations exist, such as the link between self-esteem and kissing, urging us to recognize that correlation does not imply causation, a mantra against simplistic narratives. They introduce the concept of linear correlation, emphasizing that while associations can be informative, they rarely reveal the underlying causal mechanisms, like the suggestive, yet ultimately misleading, correlation between housing prices and birth rates. Bergstrom and West then navigate the philosophical complexities of causation, acknowledging the instrumental need to understand cause-and-effect for practical purposes, even when definitive proof remains elusive. The narrative tension rises as the authors expose how media outlets frequently misrepresent research findings, particularly in health-related stories, where correlations are inflated into causal claims, painting a vivid picture of how easily we are swayed by compelling stories over rigorous evidence. Bergstrom and West highlight the importance of scrutinizing prescriptive claims, especially in medical journalism, where advice is often based on mere associations, further cautioning against the post hoc ergo propter hoc fallacy, the assumption that because one event follows another, the first caused the second. They dismantle the misconception surrounding the marshmallow test, revealing how parental socioeconomic status, a common cause, influences both a child's ability to delay gratification and their later academic success, challenging the notion that delayed gratification directly causes success. 
The chapter crescendos as the authors unmask spurious correlations, those chance alignments that lack any meaningful connection, such as the whimsical correlation between the age of Miss America and murders by hot objects, cautioning against data dredging, where endless comparisons lead to coincidental similarities. They critique arguments that deny causal relationships by conflating probabilistic cause with sufficient or necessary causes, using Mike Pence's flawed logic on smoking as a stark example of how statistical truths can be twisted. Finally, Bergstrom and West offer a resolution: manipulative experiments, when ethically feasible, provide the strongest evidence of causality by isolating the purported cause and controlling other variables, referencing experiments on fever to illustrate how scientists can tease apart correlation and causation, ultimately empowering us to become more discerning consumers of information.
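The common-cause pattern behind the marshmallow-test critique lends itself to a short simulation. This is a toy sketch of my own, not the authors' analysis; the variable names and effect sizes are invented for illustration:

```python
import random

# One common cause ("ses", a stand-in for socioeconomic status) drives
# both delayed gratification and later test scores; neither causes the
# other, yet the two correlate strongly.
random.seed(42)
n = 10_000
ses = [random.gauss(0, 1) for _ in range(n)]
gratification = [s + random.gauss(0, 1) for s in ses]  # caused by ses
test_score = [s + random.gauss(0, 1) for s in ses]     # also caused by ses

def pearson(x, y):
    """Plain linear (Pearson) correlation coefficient."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

r = pearson(gratification, test_score)
print(f"correlation: {r:.2f}")  # close to 0.5 despite no causal link
```

An observational study of these two variables would report a solid association, yet intervening on gratification here would change nothing about test scores, which is exactly why the authors press for manipulative experiments when they are feasible.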
Numbers and Nonsense
In a world saturated with data, Carl T. Bergstrom and Jevin D. West cast a critical eye on the seductive power of numbers, revealing how easily they can be manipulated to mislead. The authors highlight that while numbers project objectivity, their interpretation is always subjective, shaped by context and presentation. Bergstrom and West illustrate how even seemingly exact counts can be flawed due to miscounting, sampling errors, or biased procedures, pointing out that the choice of summary statistics—mean versus median, for example—can drastically alter the perceived narrative; politicians, for instance, might tout average tax cuts that overwhelmingly favor the wealthy, obscuring the reality for the median family. The narrative tension builds as the authors dissect how indirect measurements, like radar guns or whale population estimates, rely on models that introduce potential inaccuracies, stressing the importance of calibration and acknowledging the inherent guesswork involved. The chapter illuminates the concept of 'distilling numbers,' showing how choices in representation—percentages of starting versus final volume, annual versus total loss—can create vastly different impressions, like the whisky industry's 'angel's share' evaporation being framed as either a minor or major loss. The authors caution against 'pernicious percentages,' where figures are presented without meaningful comparison, such as a '99.9% caffeine-free' label on cocoa that's as informative as labeling strong coffee the same way, or Breitbart's misleading statistic about DREAM Act recipients. Goodhart's Law enters the stage, cautioning that when a measure becomes a target, it ceases to be a good measure, illustrated by the Hanoi rat bounty program and colleges gaming ranking metrics. 
The authors then introduce 'mathiness'—formulas that look mathematical but lack logical coherence, exemplified by the VMMC Quality Equation and the Trust Equation, which promise precision but fail to deliver, often lacking dimensional consistency. The chapter concludes with a warning about 'zombie statistics'—outdated or fabricated numbers that persist through repetition, like the myth that 50% of scientific articles are never read, emphasizing that without source and context, statistics are nearly worthless. Like shadows playing on a wall, numbers can distort reality if we forget to question their origins and interpretations.
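The mean-versus-median point can be made concrete with hypothetical figures of my own (not numbers from the book): nine families receive a $100 tax cut while one wealthy family receives $99,100.

```python
# Hypothetical tax-cut distribution (illustrative, not from the book):
# the politician's "average" is technically true and wholly misleading.
cuts = [100] * 9 + [99_100]

mean = sum(cuts) / len(cuts)
median = sorted(cuts)[len(cuts) // 2]   # middle value of the sorted list

print(f"mean: ${mean:,.0f}, median: ${median:,}")  # mean: $10,000, median: $100
```

The "average family" saves $10,000; the median family saves $100. Both statistics summarize the same data, which is precisely why the choice between them can drastically alter the perceived narrative.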
Selection Bias
In this illuminating chapter of 'Calling Bullshit,' Carl T. Bergstrom and Jevin D. West turn our attention to selection bias, a pervasive yet often invisible force distorting our understanding of data. Like skiers at Solitude convinced it's the best mountain because, well, they're *at* Solitude, we often fail to recognize that where we look profoundly shapes what we see. The authors caution against extrapolating findings from one group to another without considering whether the sample truly represents the broader population. They expose how selection bias arises when the sampled individuals systematically differ from the eligible population, leading to skewed impressions, such as the auto insurance paradox where every company claims substantial savings for switchers because only those poised to save actually switch. Bergstrom and West then reveal how this bias hides within seemingly innocuous statistics, like university class sizes, where the 'average' class size touted by administrators clashes with the 'experienced' class size felt by students, because larger classes disproportionately impact more students. This mathematical sleight of hand extends to the friendship paradox, where most people's friends have more friends than they do, not due to personal unpopularity, but because social butterflies inflate the average. Like rush-hour traffic, where a driver is more likely to be stuck in the slower lane, the presence of the observer becomes intrinsically linked to the observed variable, skewing perception. The authors introduce Berkson's paradox, illustrating how selection for certain qualities, like niceness and attractiveness in dating, can create negative correlations where none exist in the general population. 
Finally, Bergstrom and West dissect data censoring, exemplified by the misleading graph of musicians' death rates across genres, where the premature deaths in newer genres like rap skew the data because the genre is simply not old enough to account for a full lifespan. To disarm selection bias, the authors advocate for randomization in studies, citing the example of employer wellness programs, where randomized controlled trials reveal the ineffectiveness of programs that observational studies had previously lauded. Only by understanding and actively mitigating selection bias can we hope to navigate the data-driven world with a clearer, more skeptical eye, ensuring that our conclusions reflect reality, not just the echoes of our own skewed perspectives.
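The friendship paradox the authors describe falls out of simple counting. A minimal sketch on a toy network of my own devising, where one "social butterfly" is connected to everyone:

```python
# Toy social network (my example, not the book's): a hub plus four
# people who each know only the hub.
friends = {
    "hub": ["a", "b", "c", "d"],
    "a": ["hub"], "b": ["hub"], "c": ["hub"], "d": ["hub"],
}
deg = {person: len(fs) for person, fs in friends.items()}

# Average number of friends a person has: (4+1+1+1+1)/5
avg_friends = sum(deg.values()) / len(deg)

# Average friend count observed when looking at people's *friends*:
# four friendships point at degree-1 people, four point at the hub.
friend_degs = [deg[f] for fs in friends.values() for f in fs]
avg_friends_of_friends = sum(friend_degs) / len(friend_degs)

print(avg_friends, avg_friends_of_friends)  # 1.6 vs 2.5
```

The average person has 1.6 friends, but the average *friend* has 2.5, because the well-connected hub is counted once per connection. No one is unpopular; the sampling is simply biased toward social butterflies.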
Data Visualization
In this enlightening chapter, Carl T. Bergstrom and Jevin D. West turn their skeptical eye toward the world of data visualization, a realm where clarity should reign, but often confusion and deception lurk. They begin with a striking example: a graph about Florida's Stand Your Ground law, its inverted axis transforming a rise in gun murders into an apparent decline, a potent reminder that even unintentional design flaws can mislead. The authors then trace the history of data visualization, from William Playfair's pioneering charts to the complex interactive graphics of today's New York Times, noting a critical gap: our education hasn't kept pace with this visual deluge, leaving many ill-equipped to interpret these powerful tools. Bergstrom and West introduce the concept of 'ducks'—graphs where aesthetics overwhelm data, like USA Today's whimsical but ultimately confusing charts, arguing that such cuteness undermines understanding. They warn against 'glass slippers,' visualizations that force data into ill-fitting forms, such as endless, nonsensical periodic tables of everything imaginable, losing logical coherence in the process. The chapter then sharpens its focus, revealing how axes can be manipulated to distort the story, like the truncated bar chart that exaggerates differences in trust levels or the dual-axis graph that falsely links vaccines to autism. It is a world where choices, even subtle ones, shape perception. The authors present the 'principle of proportional ink,' a guiding star for ethical visualization: the area of a shaded region should directly reflect the value it represents, violated by 3D bar charts, donut charts, and other visual gimmicks that prioritize form over function. Bergstrom and West urge us to be vigilant, to question the stories that data visualizations tell, and to recognize that behind every graph lies a designer with the power to shape our understanding, for better or for worse. 
They remind us that data visualization, when done right, illuminates; when done wrong, it obscures, manipulates, and ultimately, calls bullshit.
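The distortion from a truncated axis is easy to quantify. A numeric sketch with figures I supply for illustration (the book's trust-level example uses different numbers): suppose two groups report trust levels of 52% and 54%.

```python
# How a truncated baseline violates the principle of proportional ink:
# 52 and 54 are nearly equal, but starting the axis at 50 makes one bar
# twice the length of the other. (Figures are illustrative, not the
# book's.)
a, b = 52, 54
baseline = 50

honest_ratio = b / a                               # bar-length ratio from 0
truncated_ratio = (b - baseline) / (a - baseline)  # bar-length ratio from 50

print(f"from zero: {honest_ratio:.2f}x; truncated: {truncated_ratio:.2f}x")
```

With a zero baseline the second bar is about 4% taller, matching the data; truncated at 50, it is 100% taller, and the ink on the page no longer reflects the values it claims to represent.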
Calling Bullshit on Big Data
In this exploration of big data, Carl T. Bergstrom and Jevin D. West cast a skeptical eye on the promises of artificial intelligence, starting with Frank Rosenblatt's early, grandiose predictions about perceptrons, which echo even today. The authors note how little has changed in the AI hype cycle, despite advances being driven primarily by increased data and processing power rather than algorithmic breakthroughs. They reveal the core principle of machine learning: feeding computers labeled training data to generate new programs, emphasizing that flawed data inevitably leads to flawed outcomes—garbage in, garbage out. Bergstrom and West caution against focusing solely on algorithms, urging critical examination of the data itself, and they illustrate this with the U.S. Postal Service's effective handwriting recognition system, which thrives on high-quality labeled data, contrasting it with the complexities of identifying fake news. The narrative tension rises as the authors dissect a study claiming AI could detect sexual orientation from facial features, revealing how easily interpretations can be skewed and extraordinary claims can lack solid evidence. They expose the myth that machines are free of human biases, explaining that algorithms often perpetuate biases present in their training data, which leads to unfair outcomes in areas like criminal sentencing and lending. The authors then discuss algorithmic accountability and transparency, noting the difficulty of eradicating biases and the challenge of understanding how complex algorithms make decisions; as an example, they describe how Google Flu Trends failed due to overfitting and spurious correlations. Ultimately, Bergstrom and West advocate for a balanced perspective: acknowledging the value of data while remaining vigilant against hype and uncritical acceptance, reminding us that big data is not inherently better, just bigger, and it certainly doesn't speak for itself.
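The garbage-in, garbage-out failure mode can be sketched in a few lines. This is a toy reconstruction of my own, not the study's actual data or model: an incidental feature ("smiling") is perfectly confounded with the labels in the training set, so even a trivial rule learns the confound rather than the target.

```python
# Each example is (smiling, label). In the biased training photos,
# non-criminals' professional headshots all smile and ID photos don't.
# (Invented toy data, not the study's.)
train = [(0, "criminal")] * 50 + [(1, "noncriminal")] * 50

def predict(smiling):
    """The 'learned' rule: judge the smile, not the face."""
    return "noncriminal" if smiling else "criminal"

train_acc = sum(predict(s) == y for s, y in train) / len(train)

# An unbiased test set in which smiling is unrelated to the label:
test = [(0, "criminal"), (1, "criminal"),
        (0, "noncriminal"), (1, "noncriminal")]
test_acc = sum(predict(s) == y for s, y in test) / len(test)

print(train_acc, test_acc)  # 1.0 on the biased data, 0.5 (chance) off it
```

A model can post perfect accuracy on confounded training data while learning nothing about the phenomenon it claims to detect, which is why the authors insist on scrutinizing the data before the algorithm.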
The Susceptibility of Science
In this exploration of scientific integrity, Carl T. Bergstrom and Jevin D. West reveal that science, humanity's greatest invention, is also a human endeavor, prone to both accidental and deliberate missteps; it’s a system jury-rigged atop the evolved psychology of humans, driven by curiosity, status-seeking, and the pursuit of recognition. The authors underscore that scientists, while driven by a quest for truth, are also motivated by career advancement and peer recognition, a tension that can lead to the priority rule, where being first overshadows being accurate. Scientific papers, the currency of this world, are assessed through peer review, a process intended to uphold standards but not immune to human fallibility. The announcement of cold fusion by Fleischmann and Pons serves as a stark reminder that even revolutionary claims must withstand the scrutiny of replication, a cornerstone of scientific self-correction. Bergstrom and West then dissect the concept of p-values, illuminating how easily they can be misinterpreted, leading to the prosecutor's fallacy, where the probability of a match given innocence is confused with the probability of innocence given a match. Like a defense attorney skillfully revealing the flaws in a seemingly airtight case, they caution against mistaking statistical significance for definitive proof. The replication crisis, a storm gathering on the horizon of scientific research, highlights the alarming rate at which published studies cannot be reproduced, not always due to fraud, but often due to subtler issues like p-hacking and publication bias. The authors illustrate how researchers, under pressure to publish, may inadvertently manipulate data to achieve statistical significance, a practice that undermines the integrity of the scientific process. 
To further clarify, they introduce the base rate fallacy, emphasizing that the prevalence of a condition must be considered when interpreting test results, lest false positives lead to misguided conclusions. Ioannidis's provocative assertion that most published research findings are false is examined, revealing the potential for publication bias to skew the scientific literature, creating a mirage of certainty where doubt should prevail. As science reporting amplifies these biases, sensationalized headlines often overshadow the preliminary nature of findings, misleading the public and eroding trust. Bergstrom and West argue that science should not be viewed as a collection of definitive facts, but rather as a series of arguments, each contributing to an ongoing dialogue. Finally, they expose the market for bullshit science, where predatory journals exploit the pressure to publish, polluting the literature with unreliable articles and undermining the credibility of scientific research. Yet, despite these challenges, the authors conclude on a note of cautious optimism, highlighting the cumulative nature of science and its inherent drive toward self-correction, a testament to its enduring power to illuminate the world around us.
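The base rate fallacy reduces to one application of Bayes' rule. The numbers below (1-in-1,000 prevalence, 99% sensitivity, 1% false positive rate) are my own illustrative choices, not figures from the book:

```python
# Worked Bayes calculation: a "99%-accurate" test for a rare condition
# still yields mostly false positives.
prevalence = 0.001
sensitivity = 0.99           # P(positive | condition)
false_positive_rate = 0.01   # P(positive | no condition)

# Total probability of testing positive:
p_positive = (sensitivity * prevalence
              + false_positive_rate * (1 - prevalence))

# Bayes' rule: P(condition | positive)
p_condition_given_positive = sensitivity * prevalence / p_positive

print(f"{p_condition_given_positive:.1%}")  # about 9%, not 99%
```

Out of a million people tested, roughly 990 true positives are swamped by roughly 9,990 false positives, so a positive result indicates the condition only about 9% of the time. Ignoring the base rate, as the prosecutor's fallacy does, inverts this into near-certainty.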
Spotting Bullshit
In a world awash with misinformation, Carl T. Bergstrom and Jevin D. West cast themselves as guides, arming us with skepticism in their chapter, "Spotting Bullshit." They open with a stark example: a fabricated image of a football player desecrating the American flag, a digital phantom born from cultural division. This sets the stage for their core argument: spotting falsehoods requires cultivating specific habits of mind. Like a driver scanning for danger, one must learn to question information at its source, mirroring a journalist’s interrogation: Who is telling me this? How do they know it? What are they trying to sell me? The authors illustrate this with Colleen McCann, a crystal healer, and her claims about crystals absorbing information, urging us to scrutinize not just the message, but the motives behind it; everyone, they remind us, is selling something, be it a product, an idea, or a perspective. Bergstrom and West then caution against unfair comparisons, spotlighting media’s penchant for sensationalism, for instance, the claim that airport security trays are germier than toilet seats—a misleading comparison focusing only on respiratory viruses. The authors suggest a deeper look; ranked lists, they argue, are only meaningful when comparing truly comparable entities, and they dissect the pitfalls of "most dangerous cities" lists, revealing how arbitrary city boundaries skew crime statistics. A key insight emerges: question the metrics. Shifting gears, the authors introduce the "too good or too bad to be true" heuristic, recounting the dubious claim of a 40% drop in international student applications due to Trump's policies. The authors urge us to dig to the source, revealing how a tweet distorted the original survey's findings, reminding us that in the age of social media, extreme claims often mask deeper inaccuracies. 
They advocate for thinking in orders of magnitude, offering the Fermi estimation technique—a mental tool to quickly assess numerical claims. They dismantle the claim of nine billion tons of plastic entering the ocean yearly, a figure dwarfing the entire human population, revealing it as a thousandfold exaggeration. The authors then caution against confirmation bias, the tendency to embrace information aligning with pre-existing beliefs. They dissect a viral tweet about gender bias in recommendation letters, revealing that the data actually contradicted the initial, emotionally resonant interpretation. Finally, they urge us to consider multiple hypotheses, illustrating this with the example of Disney's stock drop following the Roseanne Barr controversy, highlighting that correlation does not equal causation. Bergstrom and West conclude with practical steps for spotting online misinformation, urging us to corroborate claims, trace information to its origin, and be wary of deepfakes. They paint a vivid picture: the internet as an information superhighway, urging us to stop littering it with unverified claims, and reminding us to think more and share less, cultivating a cleaner, more discerning information environment.
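The Fermi check on the plastic claim is simple arithmetic. The round figures below are my approximations of commonly cited numbers (roughly 4e8 metric tons of plastic produced per year, roughly 8e6 tons entering the ocean), not figures taken from the book:

```python
# Order-of-magnitude sanity check of "nine billion tons of plastic
# enter the ocean yearly." (Reference figures are my rough estimates.)
claimed_tons_per_year = 9e9   # the viral claim
world_production_tons = 4e8   # approx. annual global plastic production
actual_ocean_input = 8e6      # widely cited annual ocean-input estimate

# The claim exceeds all plastic manufactured in a year, many times over:
print(claimed_tons_per_year / world_production_tons)  # > 20
# ...and is roughly a thousandfold exaggeration of the cited estimate:
print(claimed_tons_per_year / actual_ocean_input)     # ~1,000
```

No expertise is needed here: the claimed figure is more than twenty times all the plastic made in a year, so it cannot possibly be right, which is exactly the kind of quick plausibility test the authors recommend before sharing.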
Refuting Bullshit
In this culminating chapter, Carl T. Bergstrom and Jevin D. West shift from spotting bullshit to actively calling it out, framing it not just as an intellectual exercise, but as a performative utterance—a powerful act demanding prudence. They dissect the anatomy of "calling bullshit," distinguishing it from mere skepticism, emphasizing its public nature and its potential to protect communities. The authors caution against careless accusations, highlighting the importance of accuracy and respect. Like a seasoned debate coach, they introduce key strategies for refutation, starting with *reductio ad absurdum*, turning flawed arguments into laughable extremes, as seen in the critique of overly simplistic models predicting women would outrun men in the Olympics. They then advocate for the power of counterexamples, a single tree standing defiant against sweeping claims about immune systems, illustrating how one well-chosen instance can dismantle an entire theory. Bergstrom and West champion the use of analogies to reframe arguments, like comparing Seattle traffic improvements to baseball team investments, urging us to trust our critical thinking. The authors emphasize redrawing misleading figures, revealing hidden truths in data, and deploying null models, creating simplified systems to expose flawed assumptions. The chapter gently guides us through the psychology of debunking, advising us to decouple identity from the issue at hand, simplify the narrative, and find common ground. Navigating the minefield of misinformation requires ethical precision: be correct, be charitable, admit fault, be clear, and crucially, be pertinent, distinguishing the insightful "bullshit caller" from the annoying "well-actually guy." 
Ultimately, Bergstrom and West present calling bullshit not as a mere intellectual game, but as a moral imperative—a necessary act to safeguard truth, science, and democracy in a world drowning in deception, urging us to start with self-reflection, recognizing that the most potent source of bullshit we must confront resides within ourselves.
Conclusion
“Calling Bullshit” equips us to navigate a world saturated with misinformation. Beyond identifying falsehoods, it fosters critical thinking about data, visualizations, and even scientific claims. The book reveals how easily numbers can mislead, biases skew results, and algorithms amplify existing prejudices. It underscores that skepticism isn't cynicism, but a vital tool for informed decision-making. The emotional lesson lies in recognizing our own susceptibility to manipulation and the responsibility we bear to challenge misleading narratives. Practical wisdom emphasizes questioning sources, scrutinizing data presentations, and understanding the limitations of AI. Ultimately, the book empowers us to become discerning consumers of information, promoting a more truthful and transparent world.
Key Takeaways
Bullshit thrives because everyone is trying to sell something, whether it's a product, an idea, or an image.
The gap between literal meaning and implied meaning provides ample opportunity for misleading communication.
Much bullshit isn't about deceiving others, but about constructing a desired self-image.
Cleaning up bullshit requires significantly more effort and resources than creating it.
The speed and reach of modern communication technologies amplify the spread of bullshit, making it harder to contain.
Skepticism, critical thinking, and awareness are essential tools for combating bullshit in all its forms.
Democratization of information inherently increases the volume of misinformation, demanding heightened critical evaluation skills.
The click-driven economy incentivizes sensationalism and emotional manipulation over factual accuracy in online content.
Partisan content thrives on social media due to algorithms that prioritize engagement, reinforcing echo chambers and tribal epistemologies.
Social media algorithms, designed to maximize user engagement, can inadvertently promote increasingly extreme content and conspiracy theories.
Disinformation spreads effectively through trusted social networks, leveraging personal connections to bypass skepticism.
Digital counterfeiting, including bots and deepfakes, erodes trust in online information and threatens democratic processes.
Combating misinformation requires a multi-faceted approach, including technological solutions, governmental regulation, and enhanced media literacy education.
Bullshit is characterized by a speaker's indifference to truth, prioritizing persuasion or impression over accuracy.
Effective bullshit often employs 'black boxes' of complex jargon or methodology to shield claims from scrutiny and deter fact-checking.
Examining the data's biases and the plausibility of the conclusions, rather than technical expertise, is often sufficient to expose bullshit.
Extraordinary claims require extraordinary evidence; scrutinize whether simpler, more reasonable explanations exist.
Lies become bullshit when they are concealed behind rhetorical artifices and superfluous details intended to distract from the truth.
Correlation does not equal causation; resist the urge to assume one variable causes another based solely on their association.
Be wary of media reports that overstate causal relationships from correlational studies, especially in health and social sciences.
Always question prescriptive claims; demand causal evidence before accepting advice based on observed associations.
Beware of the 'post hoc ergo propter hoc' fallacy, recognizing that chronological order doesn't automatically imply causation.
Consider common causes; look for underlying factors that might influence both variables in a correlation.
Recognize spurious correlations as chance alignments, avoiding the temptation to find meaning in random data patterns.
Manipulative experiments provide the strongest evidence of causality, allowing for the isolation of specific variables.
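The common-cause trap can be made concrete with a small simulation. The sketch below is illustrative, not from the book: a lurking variable (think hot weather driving both ice-cream sales and drownings) manufactures a strong correlation between two quantities that never influence each other.

```python
import random
from statistics import mean

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(0)

# Hypothetical common cause: hot days (z) drive both ice-cream
# sales (x) and drownings (y); neither causes the other.
z = [random.gauss(0, 1) for _ in range(10_000)]
x = [zi + random.gauss(0, 1) for zi in z]   # ice-cream sales
y = [zi + random.gauss(0, 1) for zi in z]   # drownings

print(round(pearson(x, y), 2))  # strong correlation, zero causation
```

Controlling for z (comparing days with the same weather) would make the correlation between x and y vanish, which is the signature of a common cause rather than a causal link.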
Recognize that numbers, though seemingly objective, are always interpreted through a subjective lens shaped by context and presentation.
Be wary of summary statistics like averages: they can obscure the underlying distribution and be pulled far from the typical case by outliers or a few extreme values.
Understand that indirect measurements rely on models and assumptions that introduce potential inaccuracies, requiring careful calibration and scrutiny.
Present numbers in a way that allows for meaningful comparisons, avoiding misleading representations that distort the true picture.
Be cautious of percentages, as they can obscure relevant comparisons, make large values look small, and distort changes in net values.
Anticipate Goodhart's Law: when a measure becomes a target, it ceases to be a good measure, leading to unintended consequences and gaming of the system.
Question 'mathiness'—formulas that appear mathematical but lack logical coherence—demanding justification for their specific form and underlying assumptions.
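The point about percentages distorting comparisons comes down to one line of arithmetic. The figures below are invented for illustration: a risk that doubles from 1 to 2 cases per 10,000 is a "100% increase" in relative terms, but only a hundredth of a percentage point in absolute terms.

```python
# Hypothetical risk figures: a side effect rises from 1 to 2 cases
# per 10,000 patients.
baseline = 1 / 10_000
new = 2 / 10_000

relative_change = (new - baseline) / baseline   # 1.0 -> "100% increase"
absolute_change = new - baseline                # 0.0001 -> 0.01 percentage points

print(f"Relative: {relative_change:.0%} increase")
print(f"Absolute: {absolute_change * 100:.2f} percentage points")
```

A headline can truthfully report either number; only the pair together tells the reader whether the change matters.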
Recognize that your perspective is shaped by where you look; avoid generalizing from non-representative samples.
Be aware that individuals who self-select into a group or study may differ systematically from the broader population, skewing results.
Consider the difference between 'average' and 'experienced' values when interpreting statistics, as disproportionately sized groups can skew perceptions.
Understand that observation selection effects can create the illusion of bad luck or unusual circumstances: you are more likely to be present for crowded trains or delayed flights precisely because more people are there when things go wrong.
Be wary of data censoring, as the omission of data points can create misleading impressions of trends or patterns.
Implement randomization in studies to minimize selection biases and ensure more accurate results.
When assessing data, always question whether any selection process has occurred and how it might influence the observed patterns.
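The "average" versus "experienced" distinction is easiest to see with the classic class-size example. The enrollments below are hypothetical: the registrar's average is modest, but most students sit in the one huge class, so the class size a typical student experiences is far larger.

```python
# Hypothetical enrollments: most classes are small, one is huge.
class_sizes = [10, 10, 10, 10, 100]

# Average class size, as an administrator would report it.
average = sum(class_sizes) / len(class_sizes)   # 28.0

# Class size the typical *student* experiences: each class is
# weighted by how many students sit in it.
experienced = sum(s * s for s in class_sizes) / sum(class_sizes)

print(average, round(experienced, 1))   # 28.0 vs ~74.3
```

Both numbers are correct; they answer different questions, and a bullshitter picks whichever one flatters the story.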
Inverted or truncated axes can drastically distort the perception of data, emphasizing the need to always examine axes critically.
Aesthetics in data visualization should serve clarity, not overshadow the data's message, lest the graphic become a distracting 'duck'.
Applying visualization formats designed for specific data types (like periodic tables or subway maps) to unrelated data creates misleading 'glass slippers'.
The principle of proportional ink dictates that the visual representation of data should be directly proportional to its numerical value, avoiding distortion.
Be wary of charts that compare quantities with different denominators, as they can obscure the true relative risks or proportions.
Three-dimensional graphs often add unnecessary complexity and can distort the viewer's perception of the data, making it harder to accurately interpret values.
Always consider the designer's intent and potential biases when interpreting data visualizations, recognizing that design choices can significantly influence the story being told.
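The proportional-ink principle can be checked with simple arithmetic. In this illustrative sketch, two values differing by 10% get bars whose ink differs by 10% when the axis starts at zero, but by 100% when the axis is truncated at an arbitrary baseline of 45.

```python
# Two hypothetical values that differ by 10%.
a, b = 50.0, 55.0

def ink_ratio(v1, v2, baseline):
    """Ratio of bar heights when the vertical axis starts at `baseline`."""
    return (v2 - baseline) / (v1 - baseline)

print(ink_ratio(a, b, baseline=0))    # 1.1 -> faithful to the data
print(ink_ratio(a, b, baseline=45))   # 2.0 -> second bar looks twice as big
```

The underlying numbers never change; only the baseline does, which is why checking the axes is the first move when reading any bar chart.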
AI's effectiveness hinges on the quality of its training data; flawed data leads to biased or inaccurate outcomes, regardless of algorithmic sophistication.
Overfitting to training data can cause AI to misclassify noise as relevant information, diminishing its ability to generalize and make accurate predictions on new data.
Human biases present in training data are often perpetuated and amplified by machine learning algorithms, leading to unfair or discriminatory outcomes.
Algorithmic accountability and transparency are crucial for ensuring fairness and preventing harm, but achieving true transparency is challenging due to the complexity and opacity of many algorithms.
The volume of data does not guarantee better results; critical analysis and theoretical grounding are essential to avoid spurious correlations and ensure the reliability of AI-driven insights.
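Overfitting can be demonstrated without any machine-learning library. In the toy example below (all numbers invented), a degree-4 polynomial threads every noisy training point perfectly, then misses badly on a new input, even though the true relation is roughly y = x.

```python
def lagrange(xs, ys, x):
    """Evaluate the unique polynomial through the points (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Hypothetical data: the true relation is y ~ x, plus small noise.
xs = [0, 1, 2, 3, 4]
ys = [0.1, 0.9, 2.2, 2.8, 4.1]

# The degree-4 polynomial "memorizes" every training point...
print([round(lagrange(xs, ys, x), 1) for x in xs])   # [0.1, 0.9, 2.2, 2.8, 4.1]

# ...but fails on new data: at x = 5 the truth is near 5, the model says 10.1.
print(round(lagrange(xs, ys, 5), 1))
```

The model has learned the noise, not the signal; zero training error is a warning sign, not a guarantee of quality.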
Acknowledge the dual motivations of scientists—truth-seeking and career advancement—to better understand potential biases in research.
Recognize that peer review, while valuable, is not infallible and does not guarantee the correctness of published papers.
Understand the meaning and limitations of p-values to avoid misinterpreting statistical significance as definitive proof.
Be aware of p-hacking and publication bias, which can distort the scientific literature and lead to irreproducible results.
Consider the base rate fallacy when interpreting research findings to avoid overestimating the significance of positive results.
Approach science news with skepticism, recognizing that media coverage often amplifies biases and sensationalizes findings.
Evaluate the legitimacy of scientific articles by examining the journal's reputation, the publisher, and the consistency of claims with the venue.
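The base rate fallacy for published findings can be worked through with round, hypothetical numbers: even with decent statistical power and the conventional 0.05 significance threshold, a field where most tested hypotheses are false will find that a large share of its "significant" results are false positives.

```python
# Hypothetical field: of 1,000 hypotheses tested, only 100 are true.
hypotheses   = 1_000
true_effects = 100
power        = 0.80   # chance a real effect reaches p < 0.05
alpha        = 0.05   # false-positive rate for a null effect

true_positives  = true_effects * power                 # 80
false_positives = (hypotheses - true_effects) * alpha  # 45

ppv = true_positives / (true_positives + false_positives)
print(f"{ppv:.0%} of 'significant' findings are real")   # 64%
```

Roughly one in three positive results is a fluke here, and that is before p-hacking or publication bias makes things worse.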
Question the source of information by asking: Who is telling me this? How do they know it? What are they trying to sell me?
Be wary of unfair comparisons, ensuring that entities being compared are directly comparable, especially in ranked lists.
Apply the 'too good or too bad to be true' heuristic to claims, especially those amplified on social media, and dig to the source for verification.
Think in orders of magnitude using Fermi estimation to quickly assess the plausibility of numerical claims, breaking down numbers into easily estimated components.
Actively avoid confirmation bias by scrutinizing claims that align with pre-existing beliefs, seeking out contradictory evidence.
Consider multiple hypotheses for any given phenomenon, recognizing that correlation does not equal causation, and being open to alternative explanations.
Corroborate and triangulate information from unknown sources, using tools like reverse image lookup and fact-checking organizations to verify claims.
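Fermi estimation in practice means breaking a claim into components you can guess to within a factor of a few, then multiplying. The inputs below are rough assumptions, not sourced statistics; the point is the order of magnitude, not the exact figure.

```python
# Fermi estimation: sanity-check a claim like "there are about
# 4 million K-12 teachers in the US" from round, rough inputs.
# All figures below are ballpark assumptions, not sourced data.
us_population        = 330e6
school_age_share     = 0.16   # roughly ages 5-17
students_per_teacher = 16

students = us_population * school_age_share   # ~53 million
teachers = students / students_per_teacher    # ~3.3 million

print(f"~{teachers / 1e6:.1f} million teachers")
```

The estimate lands within a factor of two of the claim, so the claim passes the plausibility check; a claim of 40 million or 400,000 would not.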
Calling bullshit is a performative utterance, an action that carries weight and requires responsibility, not just a passive observation.
Effective refutation requires understanding the audience and tailoring the approach to convince them, whether it's a child or a scientist.
Reductio ad absurdum exposes the flaws in an argument by demonstrating how its assumptions lead to ridiculous conclusions.
A well-chosen counterexample can dismantle sweeping claims, highlighting the importance of empirical evidence.
Analogies reframe arguments by drawing parallels between unfamiliar situations and examples the audience intuitively understands.
Ethical bullshit calling requires accuracy, charity, humility, and clarity, focusing on the argument rather than the person.
Distinguish between a caller of bullshit, who aims to advance truth, and a "well-actually guy," who seeks to demonstrate intellectual superiority.
Action Plan
Actively question the motivations behind information presented to you.
Pay attention to the gap between what is literally said and what is implied.
Be aware of your own biases and motivations when sharing information.
Prioritize verifying information from multiple sources before sharing it.
Develop a healthy skepticism towards claims that seem too good to be true.
Practice identifying weasel words and evasive language.
Support fact-checking organizations and initiatives.
Engage in constructive dialogue with those who hold different beliefs, focusing on evidence and reasoning.
Actively seek out diverse perspectives and news sources to avoid echo chambers.
Verify information from multiple independent sources before sharing it online.
Be wary of emotionally charged headlines and content that promises extreme reactions.
Consider the source and potential biases of information before accepting it as fact.
Use reverse image search tools to check the authenticity of online images.
Recognize that algorithms can create filter bubbles and actively seek out differing viewpoints.
Support media literacy education initiatives in schools and communities.
Engage in constructive dialogue with people who hold different beliefs, focusing on facts and evidence.
Advocate for policies that promote transparency and accountability on social media platforms.
When confronted with a claim, first assess the speaker's motivation: Are they prioritizing persuasion over truth?
Identify potential 'black boxes'—complex jargon or methodologies—and seek simpler explanations.
Scrutinize the data sources and collection methods for biases or flaws that could skew results.
Before accepting a claim, consider alternative, more plausible explanations for the observed results.
Demand extraordinary evidence for extraordinary claims, and be skeptical of arguments that lack supporting data.
Practice articulating why something 'smells like bullshit' to refine your critical thinking skills.
When evaluating data, ask: Are the data unbiased, reasonable, and relevant to the problem at hand? Do the results pass basic plausibility checks?
Before sharing information, verify the sources and claims, even if they align with your existing beliefs.
When encountering a claim of causality, ask: Is there a correlation? If so, could there be other explanations?
Actively seek out the original source of research cited in news articles to assess the strength of the evidence.
Be skeptical of headlines that use causal language (e.g., 'causes,' 'effects') when the study only shows correlation.
Consider potential confounding variables or common causes that might explain the observed relationship.
Look for manipulative experiments where researchers actively intervened to test the causal relationship.
Practice identifying examples of the 'post hoc ergo propter hoc' fallacy in everyday reasoning.
Before making a decision based on data, consult with a statistician or expert in research methodology.
Always question the source and context of any number or statistic before accepting it as fact.
Consider the potential for sampling error or bias when interpreting data based on samples.
Be aware of how summary statistics can distort the underlying distribution of data.
Ask whether the chosen representation of a number allows for meaningful comparisons.
Look for potential unintended consequences when implementing metrics to measure performance.
Demand justification for the specific form of any mathematical equation used to support an argument.
Be skeptical of claims that rely on outdated or unsubstantiated statistics.
Seek out the original source of data to understand the methodology and limitations.
Before accepting a statistic, ask: who was included in the sample, and who was excluded?
When evaluating claims of success or savings, consider whether only those who benefit are reporting.
Actively seek out diverse perspectives to counter the limitations of your own viewpoint.
When designing a study, prioritize randomization to minimize selection bias.
Be skeptical of data presented without context, especially regarding how it was collected.
Consider the 'experienced mean' alongside the 'average' to understand the true impact of statistics.
When reviewing research, look for discussions of potential biases and limitations.
Apply the principles of selection bias to evaluate claims in advertising and marketing materials.
Always examine the axes of a graph to check for truncation, inversion, or inconsistent scales.
Be skeptical of data visualizations that prioritize aesthetics over clarity, such as 'ducks' or overly complex designs.
Recognize and avoid 'glass slippers' by using appropriate visualization methods for the type of data being presented.
Apply the principle of proportional ink by ensuring that visual representations accurately reflect the underlying data values.
Question the choice of data ranges and bin sizes in charts to identify potential biases or distortions.
Be wary of 3D graphs, as they often obscure the data and can be misleading.
Consider the source and potential biases of the designer when interpreting data visualizations.
Critically evaluate the source, quality, and representativeness of training data used in machine learning applications.
Question the assumptions and interpretations behind AI-driven insights, considering alternative explanations and potential biases.
Advocate for algorithmic transparency and accountability in organizations and institutions that use AI to make decisions.
Seek diverse perspectives and challenge your own biases when interpreting data and drawing conclusions.
Prioritize theoretical grounding and domain expertise when developing or evaluating AI models, rather than relying solely on data-driven correlations.
When reading scientific papers, actively question the motivations and potential biases of the authors.
Before accepting a study's conclusions, seek out replication studies or meta-analyses to assess the robustness of the findings.
Familiarize yourself with the concept of p-values and the prosecutor's fallacy to avoid misinterpreting statistical significance.
Be wary of extraordinary claims, especially those published in lower-tier journals or promoted by non-experts.
Check for retractions or corrections before relying on the results of a scientific paper.
When encountering science news, seek out multiple sources and be mindful of sensationalized headlines or exaggerated claims.
Support open access publishing models that prioritize accessibility and transparency in scientific research.
Promote a culture of critical thinking and skepticism in your own community to combat the spread of misinformation.
Before sharing information, ask yourself: Who created this, and what is their agenda?
When encountering statistics, perform a quick mental check to ensure they are within a reasonable order of magnitude.
Actively seek out viewpoints that challenge your own beliefs to counteract confirmation bias.
When presented with an explanation, brainstorm at least three alternative explanations before accepting it.
Use reverse image search to verify the authenticity of photos and videos before sharing them online.
Visit fact-checking websites like Snopes or PolitiFact to confirm the accuracy of surprising claims.
Reduce your daily information intake to enhance your ability to process information skeptically.
Before sharing on social media, pause and ask yourself: Is this claim credible, and am I contributing to a cleaner information environment?
Before calling bullshit, double-check your facts and run your argument by a friend or colleague.
When refuting an argument, start by finding common ground with the person you're disagreeing with.
If you make a mistake, own it and admit fault swiftly and graciously.
Practice reframing arguments using analogies to make them more accessible to your audience.
Learn to redraw misleading figures to reveal hidden truths in data.
Develop a null model to challenge assumptions and expose flawed reasoning.
Be mindful of your own confirmation biases and be open to the possibility that you might be wrong.
Focus on refuting the argument, not attacking the person making it.
Before speaking up, consider whether it is worthwhile to derail a conversation, risk a confrontation, or make someone feel defensive.
Practice self-reflection and appreciate the difficulty of getting to the truth.
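The null-model advice above can be practiced with a permutation test: shuffle the group labels many times and ask how often chance alone reproduces the observed gap. The scores below are made up for illustration.

```python
import random
from statistics import mean

random.seed(1)

# Hypothetical scores for two small groups. Is the observed gap
# real, or could a random relabeling produce it just as easily?
group_a = [12, 15, 11, 14, 13, 16]
group_b = [10, 11, 9, 12, 10, 11]
observed = mean(group_a) - mean(group_b)   # 3.0

pooled = group_a + group_b
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    # Null model: group membership is arbitrary, so split at random.
    diff = mean(pooled[:6]) - mean(pooled[6:])
    if diff >= observed:
        extreme += 1

p_value = extreme / trials
print(f"observed gap {observed:.2f}, permutation p ~= {p_value:.3f}")
```

If shuffled labels match the real gap often, the "effect" is indistinguishable from noise; here they almost never do, so the gap survives the null model.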