Anthony Fauci was recently interviewed by David Wallace-Wells of the New York Times. Much of the interview turned on the lessons learned about the Covid pandemic and our society’s response. One exchange in particular struck me:
David Wallace-Wells: Three years ago, in March 2020, you and many others warned that Covid could result in as many as 100,000 or 200,000 American deaths, making the case for quite drastic interventions in the way we lived our daily lives. At the time, you thought “worst-case scenarios” of more than a million deaths were quite unlikely. Now here we are, three years later, and, having done quite a lot to try to stop the spread of the virus, we have passed 1.1 million deaths. What went wrong?
Anthony Fauci: Something clearly went wrong. And I don’t know exactly what it was. But the reason we know it went wrong is that we are the richest country in the world, and on a per-capita basis we’ve done worse than virtually all other countries. And there’s no reason that a rich country like ours has to have 1.1 million deaths. Unacceptable.
Wallace-Wells: How do you explain it?
Fauci: The divisiveness was palpable, just in trying to get a coherent message across of following fundamental public-health principles. I understand that there will always be differences of opinion among people saying, “Well, what’s the cost-benefit balance of restriction or of masks?” But when you have fundamental arguments about things like whether to get vaccinated or not — that is extraordinary.
The development of Covid vaccines is one of the most remarkable scientific achievements of our lifetime. Safe and reliable, the vaccines, when combined with antivirals, all but guarantee that you will not die from Covid. Still, only 68 percent of the US population is vaccinated as of March 2023.
Fauci and Wallace-Wells indulge in a fair amount of post-hoc reasoning to try to understand the public’s reluctance, but their banter is no more convincing or insightful than that of others who have expounded on the topic. There is no denying that understanding why a large proportion of the population refused to be vaccinated is important, but as of yet there have been precious few rigorous, evidence-based studies.
Other contemporary issues face similar headwinds. Billions of dollars, for example, are going into developing technological ways of lowering global temperatures. Great strides have been made and it is quite possible that we may soon have the ability to impact climate in ways that make the world much more sustainable for human beings. But will enough people change their behavior to implement these technologies? If current trends prevail, the answer is ‘no.’ As the IPCC warns in its sixth assessment report, Climate Change 2023:
Human activities, principally through emissions of greenhouse gases, have unequivocally caused global warming, with global surface temperature reaching 1.1°C above 1850–1900 in 2011–2020. Global greenhouse gas emissions have continued to increase, with unequal historical and ongoing contributions arising from unsustainable energy use, land use and land-use change, lifestyles and patterns of consumption and production across regions, between and within countries, and among individuals.
Government policies assume that if we make green technologies available at a price lower than traditional energy sources, people will follow their pocketbooks and adopt them. After all, this is just common sense. Indeed, classical economic theories assume that individuals will make decisions that benefit them the most; that they will act “rationally.” But what is the evidence that people, in enough numbers, will act as rational beings? Unfortunately, we don’t know the answer to this question. If adoption turns out to plateau at two-thirds of the population, as it has with Covid vaccines, then civilization as we know it is unlikely to survive.
Investigating questions related to human behavior falls to the social sciences. For decades, these sciences—anthropology, sociology, economics, political science, and psychology—have been under attack. Broadly speaking, these attacks call into question the legitimacy of social science as science. Unlike the hard sciences, social sciences labor under the constraints of studying complex phenomena—human beings and their behavior—under conditions that, for social and moral reasons, do not allow for the control of critical variables. Many have been suspicious that the social sciences are little more than advocacy for specific political agendas.
Yet there is no denying that the social sciences have left their mark. To take just one example from anthropology, we need look no further than the issue of race. In the early 20th century, race was considered a biological imperative as opposed to a social construct. Franz Boas tested this claim through a well-constructed study of skeletal anatomy that clearly demonstrated that physical characteristics were variable, shaped more by environment, health, and diet than by inherited and immutable racial traits. Boas and his students then demonstrated through a series of ethnographic case studies that human behavior could not be explained by biology, but instead was the result of social learning. The replacement of “scientific racism” by “cultural relativism” ushered in anthropology as a social science and remains one of its most enduring accomplishments.
To face an uncertain future, replete with pandemics, climate change, warfare and civil strife, social injustice, and wealth inequality, we need to understand why people do what they do. Such an understanding can only come from rigorous, evidence-based studies of people—as individuals, groups, and cultures. That studying human beings is qualitatively different—and harder—than investigating atoms and microbes is no excuse not to do it. The skeptic’s bar is rightfully high for the social sciences, which is all the more reason to fight against our own biases and make our studies as rigorous, objective, and replicable as possible.
Last week, Geoffrey Hinton, the “godfather” of artificial intelligence (AI), left Google so that he could speak freely about the growing risks of AI to our society. AI’s risks run from swamping the internet with false information to eliminating vast numbers of service and manufacturing jobs to generating its own code to turn robots against us. According to the New York Times, Hinton believes, “The best hope is for the world’s leading scientists to collaborate on ways of controlling the technology.” By scientists, I presume that Hinton primarily means computer scientists. But are they really the right people? Why would those who invented AI be in the best position to control the technology? Stopping AI technological development, as Hinton acknowledges, is futile. Controlling that technology is key. As was the case with resistance to the Covid vaccines or the continuing reluctance to adopt green energy, the issue with AI is not a technological one but a question of understanding people’s acceptance, use, and abuse of that technology. It is in large part a social science question, and one we must figure out. And to do that, we had better include some social scientists.