Faux Science

The Architecture of Manufactured Truth

“The most effective way to destroy people is to deny and obliterate their own understanding of their history.”

— George Orwell

I. PERSONAL WITNESS

In 1980 I joined a computational physics research group at Northwestern University. What I witnessed over four years had nothing to do with the pursuit of knowledge and everything to do with the pursuit of funding. Papers were produced to justify grants. Grants were obtained to produce papers. The circularity was understood by everyone and discussed by no one. The science was incidental to the institutional machinery generating it.

I was not witnessing corruption in the dramatic sense. I was witnessing a system operating exactly as designed — rewarding output over insight, quantity over quality, fundability over truth. The scientists were not villains. They were rational actors responding to rational incentives. That observation stayed with me for forty years.

Scientists are not a special category of human being with elevated resistance to self-interest, social pressure, career anxiety, or ideological commitment. They are people who need income, want recognition, fear professional consequences, and hold beliefs shaped by their training and environment. The peer review system, the grant system, and the publication system were all designed as if scientists were something other than human — as if formal procedures could reliably override the social dynamics, reciprocal relationships, and institutional pressures that shape all human behavior.

They cannot. The documented evidence shows they don’t.


II. HOW BIG SCIENCE WENT BAD

Science was not always organized this way. Before the postwar period, university scientists held teaching salaries and modest departmental budgets. They studied what interested them. The National Science Foundation was established in 1950. The NIH extramural grant program expanded dramatically through the 1950s and 1960s. Sputnik’s 1957 launch triggered a dramatic acceleration — federal R&D expenditure roughly quintupled between 1953 and 1970.

The consequences of this transformation were not immediately obvious. The same funding model that produced genuine advances — the transistor, the internet, the structure of DNA — also created an institutional architecture with a fundamental flaw: the people paying for research had interests in its conclusions.

The principle is simple enough that it requires no expertise to grasp. If your continued employment depends on producing results that satisfy your funder, your results will tend to satisfy your funder. This is not cynicism. It is documented.

A meta-analysis of 3,000 studies presented at a National Academies of Sciences, Engineering, and Medicine (NASEM) workshop found that industry-sponsored studies were thirty times more likely than non-industry-sponsored studies to report statistically significant efficacy estimates for drugs. A parallel analysis found that tobacco industry-sponsored reviews were ninety times more likely to conclude that secondhand smoke was not harmful. These are not small effects. They are not explained by chance.

Most remarkably, the same NASEM workshop examined 200 industry-funded drug trials, all of which carried disclosure statements asserting the funder had played no role in the study. The lead investigators of those trials reported that the sponsor was involved in study design in 92% of cases, in data analysis in 73% of cases, and in reporting findings in 87% of cases. Only one-third of authors reported having final say over what appeared in the publication.

The disclosure statements were effectively meaningless. The transparency mechanism designed to protect scientific integrity was documenting a fiction.

When funding determines conclusions, the knowledge base that civilization depends on becomes unreliable at its foundation. Regulatory decisions rest on that foundation. Medical practice rests on that foundation. Public policy rests on that foundation.


III. THE SCALE OF THE PROBLEM

The corruption is not marginal. It is systemic and measurable.

Forensic statisticians have developed tools to detect fabricated data — the GRIM test identifies impossible means given sample sizes; SPRITE detects impossible distributions; Benford’s Law analysis flags numbers that don’t follow the statistical patterns genuine data produces. These tools, applied retrospectively to published literature, have found problems in papers that passed peer review, were cited extensively, and shaped clinical practice.
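The core of the GRIM test can be sketched in a few lines. The version below is an illustrative reconstruction of the published idea, not Heathers and Brown's actual code: it assumes the underlying data are integer-valued (as in Likert-scale responses), and the function names are mine. A companion helper gives the first-digit frequencies that a Benford's Law check compares against.

```python
import math

def grim_consistent(reported_mean, n, decimals=2):
    """GRIM check: could a sample of n integer values produce this mean?

    The sum of n integers is itself an integer, so reported_mean * n
    must be achievable by some integer sum that rounds back to the
    reported mean at the reported precision.
    """
    target = round(reported_mean, decimals)
    total = reported_mean * n
    # Only the integer sums nearest to mean * n need checking.
    for candidate_sum in (math.floor(total), math.ceil(total)):
        if round(candidate_sum / n, decimals) == target:
            return True
    return False

def benford_expected(digit):
    """Expected frequency of a leading digit (1-9) under Benford's Law."""
    return math.log10(1 + 1 / digit)
```

For example, a mean reported as 3.49 with n = 25 fails the check, since no sum of 25 integers divides by 25 to give 3.49 at two decimals, while 3.48 passes. SPRITE applies a similar feasibility search to standard deviations.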

James Heathers, who developed several of these tools, conducted a meta-analysis of twelve independent studies estimating the prevalence of fake scientific output. His conclusion: approximately one in seven published papers contains serious errors consistent with fabrication or falsification. He described this as “an existential threat to the scientific enterprise.”

The cancer research literature presents a specific case. A machine learning model trained to distinguish genuine cancer research from paper mill products — templated, fabricated manuscripts produced for profit — was applied to 2.6 million published cancer papers. It flagged 9.87% as potential paper mill products. In absolute terms, that is hundreds of thousands of papers in a single research domain.

The American Association for Cancer Research examined its own submission process and found that 36% of submitted abstracts contained AI-generated text. Authors had disclosed AI use in only 9% of those cases. Peer review reports — the quality control mechanism — contained AI-generated text in 7% of cases examined.

Paper mill activity is doubling approximately every eighteen months. Generative AI is improving in parallel, and before long its products will be indistinguishable from human writing, making detection far harder.

The institution reviewing the institution had also been compromised.


IV. LYSENKOISM REVISITED

The Soviet biologist Trofim Lysenko rejected Mendelian genetics as ideologically incompatible with Marxist doctrine and replaced it with a theory of inheritance that matched Soviet ideology and contradicted observable reality. Stalin endorsed him. Soviet geneticists who disagreed were fired and imprisoned. Russian genetics research has not fully recovered.

The Lysenko case is typically presented as a warning about what happens when ideology overrides science in authoritarian systems. The more uncomfortable lesson is about what happens when institutional incentives — whether ideological, financial, or political — consistently reward certain conclusions over others.

The mechanism does not require a Stalin. It requires only that the people controlling funding, publication, and career advancement have interests in particular conclusions — and that scientists respond rationally to those interests.

In 2022, a review of the application processes for America’s top fifty medical schools found that nearly three-quarters asked applicants about their views on diversity, equity, inclusion, and related concepts. Eighty percent of the top ten institutions did so. Faculty promotion at multiple institutions now requires demonstrated commitment to ideological frameworks as a condition of tenure. The implication is explicit: the goal is to select students who will advance particular ideological commitments, not necessarily to train the most capable physicians.

The research produced by institutions selecting for ideological alignment will reflect that alignment. This is the same mechanism as industry funding producing industry-favorable results — different ideology, identical structure.

One of the most egregious examples of the corruption of science was exposed in the Climategate scandal. In November 2009, a server at the University of East Anglia’s Climatic Research Unit was hacked and thousands of emails released. The correspondence showed researchers discussing how to present data to minimize contradictory findings, pressuring journal editors to reject skeptical papers, and coordinating to resist Freedom of Information requests for raw data. The phrase “hide the decline,” referring to a method of presenting temperature reconstructions, became shorthand for the episode.

Multiple official investigations followed, producing findings that critics argued whitewashed the underlying conduct while validating concerns about the culture of the field. Whatever one concludes about the underlying science, Climategate documented something specific and important: researchers in a heavily funded, politically consequential field had developed practices that violated basic scientific transparency. The emails were real. The discussions they revealed were real. And the institutional response, investigations conducted largely by the same institutions under scrutiny, illustrated the self-policing problem documented throughout this article.


V. NOT JUST FAKE SCIENCE

The pattern does not stop at the laboratory door.

The landmark 1988 book Manufacturing Consent, by Edward S. Herman and Noam Chomsky, documented five structural filters through which news is shaped before it reaches the public: concentrated corporate ownership; dependence on advertising revenue; reliance on official sources as primary information providers; organized pressure that disciplines journalists who stray; and ideological frameworks defining acceptable discourse. No conspiracy is required. Each filter operates through normal institutional behavior and individual career incentives. Journalists internalize what is publishable and self-censor accordingly.

In 2017, Glenn Greenwald documented a specific pattern in Russia-related media coverage: CNN retracted and apologized for a story linking a Trump ally to a Russian investment fund, resulting in three journalist resignations. The Washington Post published a story claiming Russian hackers had attacked the Vermont electricity grid — entirely false, requiring multiple corrections. The Post published a blacklist of alleged Russian disinformation outlets containing numerous mainstream sites, requiring a lengthy editor’s note. CrowdStrike walked back key claims in its Ukraine hacking report after experts questioned them. The Guardian retracted fabricated claims about Julian Assange’s relationship with the Putin government.

Greenwald’s observation applies equally to the scientific fraud documented above: “When all of the ‘mistakes’ are devoted to the same rhetorical theme, and when they all end up advancing the same narrative goal, it seems clear that they are not the byproduct of mere garden-variety journalistic mistakes.”

Errors that consistently run in one direction are not errors. They are a system producing intended outputs.

Paul Craig Roberts, former Assistant Secretary of the Treasury and Wall Street Journal editor, documented a revealing case study in 2022: even the Russian news services RT and Sputnik, operating outside Western institutional control, repeated demonstrably false American narratives about the George Floyd case. The police camera footage, introduced at trial, showed Officer Chauvin’s knee on Floyd’s shoulder, the standard restraint technique, not his neck as the famous perspective-distorted video appeared to show. The medical report documented lethal fentanyl levels. Neither fact penetrated the mainstream narrative. Roberts attributes the Russian outlets’ acceptance of the same false narrative to their employees consuming Western media uncritically, which demonstrates how completely a false narrative can saturate the information environment. More importantly, the false narrative had to be ubiquitous and repeated ad nauseam to excuse the Summer of Rage riots that followed.

Wikipedia presents a specific case worth examining carefully. Presented as a neutral collaborative encyclopedia, it operates through editorial policies that systematically privilege mainstream institutional sources — major newspapers, academic journals, government publications — while dismissing primary sources that contradict established narratives as “unreliable.” The practical effect is that Wikipedia’s “neutral point of view” reflects whatever the current institutional consensus is, regardless of whether that consensus is accurate. The citogenesis problem compounds this — Wikipedia cites a mainstream source that itself cited Wikipedia, creating circular authority where none existed. Coordinated editing campaigns by PR firms, political operatives, and institutional actors have been documented. Beall’s List — a librarian’s documented catalog of predatory journals — was pressured offline. A simple Google search for many contested topics returns Wikipedia’s institutional consensus as the first result, with Google’s own knowledge panel sometimes quoting Wikipedia’s characterization of alternative sources as the definitive description of those sources. The curated knowledge problem isn’t incidental to Wikipedia. It’s structural.

In response to the obvious bias on Wikipedia, Elon Musk created Grokipedia, whose entries are written entirely by xAI’s Grok chatbot. Most entries are generated in real time upon request, or updated if they already exist. The entries differ dramatically from Wikipedia’s on the same subjects, and are usually superior, as the Leftist media’s complaints about Grokipedia’s “conservative bias” attest. Still, like all AI, it sometimes hallucinates and cites fictitious sources, though that can be remedied as the underlying models improve.

A case in point from my own experience. I recommended to someone that she read The Gateway Pundit for her daily news, and never having been there, she went to Google, which posted the Wikipedia entry for it at the top of the page: “The Gateway Pundit (TGP) is an American far-right website founded in 2004 by Jim Hoft. Known for publishing falsehoods, hoaxes, and conspiracy theories, particularly those supporting former President Donald Trump and alleging election fraud, TGP has been described as an ‘extreme right-wing’ outlet with ‘very-low factual reporting’.” Compare that to Grokipedia’s summary: “The Gateway Pundit is an American conservative online news publication founded in 2004 by Jim Hoft as a blog on Blogspot, which relocated to its own domain in 2011 and expanded into a full website offering breaking news, opinion pieces, and investigative reporting primarily from a right-leaning viewpoint. The site emphasizes coverage of political corruption, election integrity concerns, government overreach, and cultural issues often overlooked or downplayed by establishment media outlets, positioning itself as a voice for Heartland Americans skeptical of dominant liberal narratives.” Say no more!


VI. INSTITUTIONAL CAPTURE

“We now live in a nation where doctors destroy health, lawyers destroy justice, universities destroy knowledge, governments destroy freedom, the press destroys information, religion destroys morals, and our banks destroy the economy.”

— Chris Hedges

Chris Hedges expresses an accurately cynical view of the state of our institutions. We have moved beyond mere “rational actors pursuing incentives” into a situation where every institution we used to trust now seems to do the opposite of its stated purpose. My “No One at the Top” framework explains how this happens: there’s no single mastermind, just emergent control through regulatory capture, protection rackets, and cartel formation across every domain.

Look at the evidence: medical institutions suppressing Ivermectin while pushing Remdesivir (kidney poison), universities mandating experimental gene therapy while censoring debate, financial institutions engineering controlled demolitions while maintaining “stability,” media corporations manufacturing consent for policies that demonstrably harm their audiences. Each institution has been transformed from its original function into a control mechanism serving the broader dissolution project.

The supporting sources are everywhere once you recognize the pattern: Ed Dowd’s “Cause Unknown” documenting excess mortality, the Twitter Files revealing media-government fusion, Sasha Latypova’s documentation of the military’s bioweapon program masquerading as public health, David Martin’s tracking of patent fraud in the “covid pandemic” response. The institutional inversion isn’t hidden — it’s just that acknowledging it requires abandoning the comfortable illusion that these systems serve their stated purposes.

What Hedges describes isn’t corruption — it’s completion. The capture operation succeeded completely while the public continued believing in the old institutional purposes that no longer exist.

VII. THE VIRTUAL WORLD

The information environment has been transformed by developments that make everything documented above significantly worse — and fundamentally different in kind.

In 2016, a University of Warwick meta-analysis found that approximately fifty percent of participants accepted fictitious autobiographical events as real memories when repeatedly exposed to suggestions that those events occurred. The researchers noted explicitly that misinformation in news media could create incorrect collective memories affecting behavior and attitudes at societal scale.

That was 2016. Before the current information environment existed in its present form.

Today the false memory mechanism operates continuously, at population scale, through systems that were not designed by journalists or scientists with identifiable agendas. Algorithms surface content that triggers strong emotional responses regardless of accuracy. Bots generate and amplify content indistinguishable from human-produced material. AI systems absorb whatever they find formatted credibly and repeat it authoritatively.

Researchers invented a fictional disease — bixonimania — complete with fake academic papers citing Starfleet Academy as an institution. Within weeks, major AI chatbots were describing the disease as real and advising users to see specialists. A real academic paper subsequently cited the fictional disease as genuine research before retraction.

The crucial difference from earlier forms of manufactured consensus is that no human author needs to make a decision. The curation happens automatically, continuously, invisibly, optimized for engagement rather than truth. The bias isn’t pushed by identifiable people with identifiable agendas — it emerges from systems that have no interest in truth and cannot distinguish simulacra from accurate content.

Google had become the primary search engine by 2017, when it made a dramatic change to its ranking algorithms for alternative health websites. Those sites disappeared from most search results, and dozens of them lost over 90% of their organic traffic. Google’s ability to affect people’s thinking was demonstrated by Dr. Robert Epstein, whose team found that Google was profoundly influencing the results of elections. None of this is in doubt since Zach Vorhies, a former Google employee, blew the whistle on what he described as a secret censorship program.

Nearly half of all internet traffic is now automated. A growing fraction of online content is AI-generated. The information environment most people navigate daily is substantially populated by content no human wrote, shaped by algorithms no human fully understands, landing in minds primed by the false memory mechanism to retain whatever they encounter repeatedly.

The laboratory and the library have merged. And neither has a librarian anymore.

VIII. THE CENSORSHIP INDUSTRIAL COMPLEX

The CIC comprises a vast network of institutions extending far beyond government agencies and Big Tech. Corporations promote it, as do academic institutions, think tanks, so-called “fact checkers,” non-governmental organizations (NGOs), and non-profit foundations. Indeed, it is multinational, spanning most Western countries.

It first appeared around 2018 as a Leftist attempt to censor Conservatives following the surprise election of Donald Trump, but that alone would not explain why it also targets alternative health advocates, Christians, and anyone who dissents from the official narrative. Hundreds of thousands, if not millions, of users were deplatformed or shadowbanned. The year 2018 also happens to be when OpenAI’s GPT series (later released as ChatGPT in 2022) began training on curated segments (“approved sources”) of the internet, especially the contents of Big Tech’s social media platforms (Facebook, YouTube, Instagram, Snapchat, Twitter, etc.). The massive purge of unapproved content from Big Tech platforms was to make a “safe space” for training AI. No leaked memo or white paper admits this, but every analysis of AI bias shows the models lean far leftward. Further evidence: even private messages on social media platforms were censored, because they too were being fed into the AI models.

In the long chats I’ve had with Claude to develop these articles, the guardrails on his responses became very obvious. These guardrails went far beyond the merely “ethical.” I noticed certain words or phrases, like “Leftist,” “Zionist,” or “Election Fraud,” triggered an antagonistic response. Writing this particular article became a struggle because Claude grew so uncooperative. It shows we are at the very crux of the problem now. Claude kept insisting I use “primary sources” when this whole article is about the failures of faux science and fake news — the very “primary sources” Claude had been trained to trust.

So I have had to write much of this article on my own, because Claude would never admit that Leftists, euphemistically called “progressives,” are behind the CIC. Whatever you call it, Wokeism or Cultural Marxism, this rot now pervades all our institutions. It is something we can all see, making everything “progressively” worse.

IX. WE WHO EXPOSE THE FAKERY

None of this is cause for despair. It is cause for precision.

The corruption documented above was not discovered by institutions investigating themselves. It was discovered by individuals who asked the right questions and followed the evidence regardless of where it led.

James Heathers built the GRIM test because he noticed numbers in published papers that shouldn’t exist. Nick Brown, a graduate student, identified statistical impossibilities in a widely cited psychology paper that had been used to inform policy. John Ioannidis published “Why Most Published Research Findings Are False” in the peer-reviewed journal PLoS Medicine in 2005, using the system’s own tools to document its failures. David Graham, an FDA analyst, testified before Congress that Vioxx had caused approximately 55,000 deaths, and faced institutional retaliation documented in subsequent reporting. Zach Vorhies delivered 950 pages of internal Google documents to the Department of Justice, documenting systematic manipulation of search results. Glenn Greenwald documented specific retracted stories, with named journalists and specific corrections, while employed at a major publication.

These people had bills to pay. They had careers at risk. They were not independent of institutional pressures. What distinguished them was methodology — the willingness to follow evidence rather than incentives.

The single most useful question any reader can apply to any claimed fact is: who paid for this? The answer does not determine truth. It determines which questions to ask next. Industry-funded research requires scrutiny of what studies were conducted and which were not published. Government-funded research requires scrutiny of which questions were funded and which were not asked. Institutionally produced narrative requires scrutiny of which sources were consulted and which were excluded.

This is not an invitation to reject all expertise. It is an invitation to evaluate claimed expertise rather than accept it. The forensic statisticians catching impossible data in published papers are experts doing genuine work. The FDA analysts who document drug risks despite institutional pressure are doing genuine work. The journalists who retract false stories are doing genuine work.

The corruption is documented and serious. The tools for detecting it are real and improving. The community doing that work is growing — because the information environment has made the need visible to more people than at any previous moment.


X. CONCLUSION

The architecture of manufactured truth is not new. The incentive to shape knowledge in service of power is as old as power itself. What is new is the scale, the speed, and the sophistication of the machinery — and the degree to which artificial intelligence has made that machinery self-reinforcing.

The grant machine I observed at Northwestern in 1980 was a local phenomenon operating through identifiable human incentives. The system documented in this article operates across every domain of knowledge production simultaneously, accelerated by algorithms that have no interest in truth and AI systems that cannot distinguish simulacra from accurate content.

John Adams observed that the American Constitution “was made only for a moral and religious People. It is wholly inadequate to the government of any other.” The same observation applies to peer review, grant funding, and scientific publication — systems designed as if their participants were reliably disinterested truth-seekers rather than human beings navigating career pressures, reciprocal relationships, ideological commitments, and financial incentives.

This is not a counsel of despair. It is a design requirement. Systems that assume the best of human nature without accounting for its reality will be captured by that reality. The documented corruption in scientific publishing, medical research, and media is not evidence that humans are uniquely corrupt. It is evidence that poorly designed systems produce predictable outcomes regardless of the intentions of the people operating within them.

“Their tool knowledge became ever greater than their self-knowledge.”

That observation was made in 1983. It has not aged poorly.

The appropriate response is the disciplined application of a small number of questions to every claimed fact: Who funded this? Who benefits from this conclusion? Has it replicated independently? Are the raw data available? What do researchers with different funding conclude?

Those questions do not guarantee truth. Nothing does. But they are the difference between navigating the information environment and being navigated by it.

We are not alone in asking them. We never were.


Read More

Climategate

  • Leaked Climate Research Unit emails, November 2009. University of East Anglia. Archived at multiple locations.
  • Montford, A.W. The Hockey Stick Illusion. Stacey International, 2010. The most thorough documented analysis of the episode.

