I have to admit that a fair bit of my animus toward science of late has been motivated by the authoritarianism of the Biden administration, particularly as presided over by Dr. Anthony Fauci. This authoritarianism cloaks itself in the garb of scientific fact because the persuasive power of objectivity is strong. Who can argue with hard data and statistics? In a modernist world, the ideal is “open skepticism” — people ought to be highly skeptical, but concede even to facts they don’t like in the face of overwhelming evidence.
Here, and in the context of our current political climate, I would like to argue that the opposite is true. Contra the spirit of modernism and science, nobody has to concede to “overwhelming evidence,” and should in fact be especially on their guard when something is presented in the form of “overwhelming evidence.”
In my last post, I argued that there seems to be an association between a belief in science and a kind of manipulative treatment of the public:
I’m inclined to wonder if this kind of dishonesty and rhetorical manipulation is what happens when subjective qualities like “honor” and “integrity” are rendered invisible by science (Steven Pinker famously argued that honor is ‘that curious phenomena which we believe exists only because we believe everyone else believes it exists’). The objective view, and the isolation of discrete situations from broader contexts and relationships, seems like it should naturally incline believers in science toward an amoral behaviorist attitude which justifies all kinds of “noble lies” in order to achieve desired outcomes.
What should I say to get people to do what I want?
Cause and effect is a matter of science, and is visible to science.
We now seem to have reached a state where the knowledge acquired by science is — at least in this instance — at odds with the ethos by which the scientific process is pursued.
By which path does one “follow the science?”
The conventional position is that science functions by way of open and public dialogue, by sharing methodology and opening one’s assertions to retesting. However, it is also true that science has provided public administrators tools by which they can more successfully persuade the public. It doesn’t take a scientist to know that morality aside, lying can work. Science does, however, equip the administrator with tools to lie more effectively.
This is not just Elizabeth Loftus serving as a mercenary expert-for-hire on the witness stand, or journalists trying to censor Joe Rogan for possibly causing people to distrust established science. I refer to large-scale policy matters. Science can serve as an authority to appeal to, a tool for obfuscating and gate-keeping any kind of debate. Calling a particular theory “baseless” or “unfounded” does not necessarily mean that it is un-scientific (at odds with prevailing scientific opinion) but rather that it is non-scientific (a form of knowledge which exists outside the scientific mode of research and understanding). “Common sense” wisdom about how to treat others, for example, would be non-scientific, but not unscientific. It would also be “baseless” and “unfounded” to a culture which holds the only legitimate “base” or “foundation” for a belief to be scientific.
This seems to be where we are today.
The specialization of knowledge within science breeds a kind of elitism and contempt for the unlettered public. In confusing one’s own area of research with the objectively correct factual matters most relevant to any political disagreement, it is easy to believe the public really is stupid, contemptible, and basically worthy of manipulation because they are clearly not willing to “do the hard work” of educating themselves in the relevant matters.
We see this kind of contempt revealed on occasion — to me, no instance is more illustrative than that of Jonathan Gruber, economist and architect of Obama’s Affordable Care Act.
In 2014, videos emerged of Gruber describing his strategy for getting Obamacare through:
This bill was written in a tortured way to make sure CBO did not score the mandate as taxes. If CBO scored the mandate as taxes, the bill dies. So it’s written to do that. In terms of risk-rated subsidies, if you get a law which makes explicit that healthy people are going to pay in and sick people get money, it would not have passed. Okay? Lack of transparency is a huge political advantage. And basically, you know — call it the stupidity of the American voter or whatever, but basically that was really critical to getting the thing to pass.
Gruber’s is the contempt of the economist, but it is the contempt of any expert who has come to conflate their own expertise with a kind of objective knowledge, without which people are simply hopelessly stupid.
(As an aside, it is certainly a risk in philosophy as well — for some, philosophy teaches the inadequacy of philosophy alone, and the value of other domains of knowledge and thought; for others, it enchants with the superiority of philosophy, and the idea that the world ought to be ruled over by Platonic philosopher-kings.)
But we can step back and see this contempt preemptively, at a more conceptual level. We can see scientists and science advocates draw up the moral right to contempt in advance in some instances, and there is perhaps no better example of this than a TED talk given by Sam Harris, tellingly titled “Science can answer moral questions”:
We live in a world in which the boundaries between nations mean less and less and one day they will mean nothing. We live in a world filled with destructive technology, and this technology cannot be un-invented. It will always be easier to break things than to fix them. It seems to me to be patently obvious therefore that we can no more respect and tolerate vast differences in notions of human well-being than we can respect or tolerate vast differences in notions about how disease spreads, or in the safety standards of buildings and airplanes. We simply must converge on the answers we give to the most important questions in human life.
Now the premise of Harris’ position is that morality must necessarily relate to the well-being of conscious creatures. Without argument, he dismisses out of hand ideas about deontological duties, virtue ethics, or the possibility of telos or divine command, in favor of a kind of utilitarianism which is more compatible with his scientific instincts.
Morality concerns right action, which is related to but not identical with the well-being of conscious creatures. “Well-being” itself — as a positive experience — emerged in order to facilitate certain ends proper to an organism; we enjoy sex and experience it as “well-being” because it is to the benefit of our species and our lineage that that activity occur. The experience of well-being, therefore, is not the end, but a means to an ulterior end. This ulterior end sometimes requires us to undergo negative experiences which we consider to be good, not out of masochism, but out of an innate understanding that hedonism and utilitarianism — even in their “enlightened,” longer-time-horizon forms — are not the sole or even primary dictates of right action.
But these kinds of arguments — however logical they might appear — are not empirical. They are not the sort of things science can study.
In times of peace and plenty, scientists may appreciate unscientific forms of knowledge. However, when push comes to proverbial shove, the pretense of pluralism fades away and people acknowledge their true masters.
And if such a hard choice doesn’t presently exist, scientists and rationalists (modernists of all kinds, really) are very good at projecting forward until they find such a dire situation, and then bringing that to the attention of the public.
Consider the phenomenon of global warming, for example.
Contrary to popular recollection, the science was never clear on global climate change. What does appear to be clear is that global temperatures are rising — everything else is up for debate. All of the temperature prediction models were wrong, the attribution to humans is dubious at best (a single volcanic eruption contributes more to global carbon levels than decades of automobile traffic), and the proposed solutions can be summarized as economic favoritism, transferring government subsidies from gas, oil, and coal companies to “green” energy companies which specialize in solar and wind power (“green” is in quotes because the claim that these forms of energy are any more environmentally friendly is itself not clear).
But the idea of catastrophic weather events and climate change nevertheless serves as a catalyst for forcing a choice between “the arts,” tradition, subjective preferences, etc., and “the science.”
We must redesign our cars. We must re-engineer our lifestyle. We must rebuild our entire economy.
Richard Dawkins lightly pushed in this direction on the matter of child-rearing, famously arguing that inculcating children with religious values and stories constitutes child abuse. This is a sentiment Sam Harris echoed in his talk, pointing out, as if it were obviously negative, that corporal punishment still exists in the United States.
Need I mention our current flu season?
But more concerning than Harris or Dawkins is the opinion of another quieter, more serious philosopher — Nick Bostrom.
Bostrom is best known for his “simulation theory”, which argues on mathematical grounds that there is a very good chance we are living in a simulation. But one of his primary areas of study and interest is the subject of “existential risk,” which he defines as risk which “threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development.”
Take a moment to reread that definition. First, notice the description of “earth-originating intelligent life” instead of “humans.” But more importantly, what area of scientific exploration could not be construed as “desirable future development”?
In his TED talk, Bostrom claims that reducing existential risk by a millionth of one percent is worth a hundred times more than a million human lives.
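It is worth making the arithmetic behind a claim like this explicit. On an expected-value reading, the figure follows mechanically once one assumes an astronomically large number of potential future lives — the 10^16 figure below is one of Bostrom’s own conservative estimates for Earth-bound future lives, supplied here for illustration rather than taken from the talk itself:

```latex
% A tiny reduction \Delta p in extinction risk, applied to N potential future lives:
\Delta p = \frac{1}{1{,}000{,}000} \times 1\% = 10^{-8}, \qquad N \approx 10^{16}
% Expected lives saved:
E[\text{lives saved}] = \Delta p \cdot N = 10^{-8} \times 10^{16} = 10^{8} = 100 \times 10^{6}
```

On these assumptions, a millionth-of-one-percent reduction in existential risk “saves” an expected hundred million lives — a hundred times a million — which is how comparisons of this kind are generated.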
With this sort of rational mathematics, what manipulation, what atrocity, what genocide couldn’t be justified if it even might help mitigate existential risk, including the risk of “drastic destruction of potential for desirable future development?”
And yet, from a scientific framework, where is he wrong?
One possible answer might be that such things are unknowable, that existential risk (let alone “desirable development”) is not a category of thing that can be measured and weighed against human life in the present, or anything else. Aside from there being too many variables, one might argue that such projections are unfalsifiable.
Good science would not make such a claim, reply the good scientists defending the honor of their trade.
And they’d be right.
Yet the institutions of science and their benefactors continue to push these kinds of choices. Why?
In 1962, Thomas Kuhn published The Structure of Scientific Revolutions, which argued that science does not progress by pure falsification, as Karl Popper held. Rather, it progresses through long periods of stagnation, occasionally overcome by giant paradigm shifts. This happens because science is conducted by people, and people are not rational, much to the chagrin of purists like Popper. There is ideological inertia, and even after a theory is in fact falsified, stubborn human attachment maintains the theory within the field — it is simply modified to survive the falsification. Only after layers upon layers of tortuous revision transform the prevailing theory beyond initial recognition is it finally dropped. This is because scientists are humans.
We are not the giants of reason that rationalists wish us to be.
(We now see why transhumanist Bostrom seems to prefer “earth-originating intelligent life” to “humans” — perhaps a foreshadowing to doing away with… existential risk?)
Whenever the institutions of science are attacked by critics, scientists are quick to point out that corruption in the field does not represent “real science.” But does “real science” exist in a world of humans? Can it exist? Is this the academic equivalent of “real communism has never been tried,” and can science be defined apart from the nature of the humans who participate in it, apart from all other human endeavors and their consequences?
The ideal of human rationality is perpetually flummoxed by our inexorably irrational decision-making processes. We are primates, evolved to make decisions quickly with limited information, often using rough heuristics and pattern recognition, slanted in favor of survival. In other words, we are “irrational” by design.
Despite its benefits in ordinary life, this kind of irrationality is a great problem for science, and for modernism in general. We can no more ignore the political implications of our nature when it comes to science than we can in relation to communism, which is perennially thought of as “great on paper” but neglects the complexities of human nature that can’t fit neatly into tidy historical theory.
Without understanding this, we will perpetually mistake the activity of science for the institutions of science (to the joy of those institutions and the politicians they work with). We will mistake the theories of science for “truth” in some functional manner, rather than the perpetual work-in-progress of those who have opted to pursue objectivity for their own, less-objective reasons. And from this, we will mistake scientific criticism for something legitimate outside of the domain of scientific inquiry. Scientific, rationalist questions about “freedom” have no bearing — and must have no bearing — on the civic body which begins with a different set of foundational assumptions and respects different sources of knowledge than science does.