Every claim we make about what we think is wrong with social media has a paper behind it. Here they are: the research on attention, algorithms, adolescent mental health, and what the feed takes out of us.
In a preregistered 10-day field experiment with 1,256 X users during the 2024 US election, downranking content expressing partisan animosity shifted out-party feelings on a 100-point thermometer by roughly two points, comparable to three years of population-level change.
On Twitter/X, the engagement-based ranking algorithm systematically amplified emotionally charged, out-group hostile political content compared to a chronological feed, even though users themselves rated the algorithmically selected content as worse for their political attitudes.
Across 2.7 million Facebook and Twitter posts from US news media and members of Congress, posts mentioning the political out-group were shared roughly twice as often as posts about the in-group; out-group language was the strongest predictor of engagement, ~4.8x stronger than negative-affect language.
In an online music market with 14,341 participants, showing download counts to users sharply increased both inequality (a few songs dominated) and unpredictability (which songs dominated varied wildly across parallel 'worlds'). When social cues were hidden, rankings tracked song quality far more reliably. This is the foundational empirical case for hiding public score counts.
Across a Twitter field study and controlled experiments, observers systematically overestimated how outraged authors actually felt, and this overperception inflated beliefs about how hostile the political out-group is, an effect amplified by algorithms that preferentially expose users to the most outraged voices.
Republican and Democratic Twitter users were paid to follow a bot retweeting prominent voices from the opposing party for one month. Republicans who complied became substantially more conservative (effects up to 0.60 points on a seven-point scale among the most-compliant) and Democrats became slightly more liberal, directly contradicting the contact-hypothesis prediction that cross-cutting exposure reduces polarization.
Analysis of 563,312 social-media messages on three polarizing political/moral issues found that each additional moral-emotional word in a message increased its diffusion by approximately 20%. The effect was stronger within partisan in-groups than across them.
Causal analysis of more than 100M posts before and after a 2015 platform-wide ban of two notorious hate-speech communities found that users who remained reduced their hate-speech usage by at least 80%, and migrant-receiving communities did not absorb the behavior at meaningful scale. Platform-level moderation works: content policy can suppress unwanted behavior, not just displace it.
A randomized experiment paid 2,743 users to deactivate Facebook for the four weeks before the 2018 US midterms. Deactivation freed roughly 60 minutes per day, increased subjective well-being, reduced political polarization, and caused a large, persistent reduction in Facebook use after the experiment ended, implying users substantially overvalue the platform relative to its welfare effects.
Re-running specification curve analyses on the same large UK and US adolescent datasets used by earlier null-finding studies, the authors separated social media from total screen time and analyzed boys and girls separately. Median standardized betas for girls ranged from -0.11 to -0.24, two to three times larger than effects for boys, and larger than published associations between adolescent mental health and binge drinking, hard drug use, or obesity.