Last week I attended a meeting in Cambridge: the 7th Edwards Symposium – New Paradigms in Soft Matter and Statistical Physics. I even chaired the last session. It attracted a diverse range of attendees. As an example: as head of a research group in my university, I am in charge of a £7,000 group budget. A speaker in the session I chaired – Jean-Philippe Bouchaud – is Chair of Capital Fund Management, which controls roughly €7,000,000,000 of investors’ money. As I said, a diverse range of attendees.
I really enjoyed Bouchaud’s talk; he’s a very smart guy. Clearly there are differences between what he and I do – I guess only one of us has discussions about how to afford a few hundred pounds for compressed gases to run a lab. But there are also parallels: we both work in interdisciplinary fields, he between physics and economics, me between physics and medicine/biology.
Bouchaud was quite scathing about some models that many economists use. These models assume that people act rationally at all times. Here the idea is that the economy works by many people all acting rationally in their own interest. This is a strong simplifying assumption, and people like Bouchaud (and me!) who have met and interacted with people find that they often do not act rationally.
I think the point is that economies are complex, so we need to make simplifying assumptions to build a tractable model. But assuming that we are all acting rationally in our own best interest is often a terrible assumption. It is terrible in the sense that the predictions made by the model are just wrong. The model completely misses things like a stock market crash – exactly the sort of thing we would really like a model to help us understand. The model is then arguably worse than useless.
This is in contrast to, say, the Navier-Stokes model of flowing fluids in physics, whose predictions are often very accurate and useful. This is a good model.
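For readers who have not seen it written down, the model is a pair of partial differential equations for the fluid velocity and pressure. The standard incompressible form (my addition, not spelled out in the original post) is:

```latex
% Incompressible Navier-Stokes equations (standard textbook form, added here for illustration):
% u = velocity field, p = pressure, rho = density, mu = dynamic viscosity, f = body force.
\begin{align}
  \rho\left(\frac{\partial \mathbf{u}}{\partial t}
      + (\mathbf{u}\cdot\nabla)\mathbf{u}\right)
    &= -\nabla p + \mu\nabla^{2}\mathbf{u} + \mathbf{f}, \\
  \nabla\cdot\mathbf{u} &= 0 .
\end{align}
```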
So much for economists; what about medical doctors and medical epidemiologists? In many cases they refuse to go anywhere near models*, just doing randomised controlled trials to gather data, then carefully analysing those data with statistics. This avoids the problem the economists have, of relying on models whose predictions are just wrong because the underlying assumptions are wrong. But it means that the medics miss out on the advantages that come with a model of, for example, the transmission of a disease through the air.
A model helps you think. Here the model can be just a simple idea: COVID-19 spreading via approximately micrometre-sized (and so too small to be seen with the naked eye) particles suspended in, and carried by, the air.
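To see why particles that small stay airborne, here is a back-of-the-envelope sketch (my own illustration, using assumed, typical values rather than anything from this post) based on the Stokes settling speed of a small sphere in still air:

```python
# Back-of-the-envelope sketch (illustrative only; all values are assumed, typical numbers).
# Stokes settling speed of a small sphere in still air:
#   v = 2 (rho_particle - rho_air) g r^2 / (9 mu_air),
# valid when the particle Reynolds number is much less than 1.

RHO_PARTICLE = 1000.0  # kg/m^3, roughly the density of a water-like respiratory droplet
RHO_AIR = 1.2          # kg/m^3, density of air
MU_AIR = 1.8e-5        # Pa s, dynamic viscosity of air
G = 9.81               # m/s^2

def stokes_settling_speed(diameter_m: float) -> float:
    """Terminal settling speed (m/s) of a small sphere in still air (Stokes regime)."""
    r = diameter_m / 2.0
    return 2.0 * (RHO_PARTICLE - RHO_AIR) * G * r ** 2 / (9.0 * MU_AIR)

for d_um in (1.0, 5.0, 10.0):
    v = stokes_settling_speed(d_um * 1e-6)
    hours_to_fall_2m = 2.0 / v / 3600.0
    print(f"{d_um:4.0f} um particle: v = {v:.1e} m/s, ~{hours_to_fall_2m:.1f} h to fall 2 m")
```

A one-micrometre particle takes many hours to settle the height of a room, so ordinary air currents carry it around long before it falls out – which is the simple physical picture behind airborne transmission.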
I don’t think the medics in the USA’s Centers for Disease Control and Prevention (CDC**) have such a mental model of COVID-19 transmission. They seem to be struggling even to define what transmission through the air is in a coherent, workable way. See this NBC news story, and a letter from concerned scientists, engineers and representatives of healthcare workers. And this matters: the CDC sets standards for a country of over 300 million people and is influential across the globe. It would be a terrible shame if lives were lost because of a refusal to use a simple model to help the CDC’s medics think about the problem of airborne disease transmission.
* Here I specifically mean models of how infectious diseases are transmitted, e.g. the Wells-Riley model (a minimal sketch of it is at the end of this post). Epidemiologists do have their own models, such as the Susceptible-Infected-Recovered (SIR) model, which they use and which are very useful.
** You may be wondering why the abbreviation is CDC and not CDCP. I think the ‘and Prevention’ bit was added later, and as everyone was already familiar with CDC, they kept the acronym as it was. But this is a guess.
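For anyone curious about the Wells-Riley model mentioned in the first footnote, here is a minimal sketch of it (my illustration; the parameter values below are made-up placeholders, not measurements). It estimates the probability of infection for a susceptible person sharing a single well-mixed room with infected people:

```python
from math import exp

def wells_riley_infection_probability(n_infectors: float,
                                      quanta_per_hour: float,
                                      breathing_rate_m3_per_hour: float,
                                      exposure_time_hours: float,
                                      ventilation_m3_per_hour: float) -> float:
    """Wells-Riley estimate of the probability that a susceptible person is infected,
    P = 1 - exp(-I q p t / Q), for a single well-mixed room at steady state."""
    inhaled_quanta = (n_infectors * quanta_per_hour * breathing_rate_m3_per_hour
                      * exposure_time_hours / ventilation_m3_per_hour)
    return 1.0 - exp(-inhaled_quanta)

# Placeholder numbers, purely to show the shape of the model:
# 1 infected person emitting 20 quanta/h, a susceptible person breathing 0.5 m^3/h,
# both in the room for 2 h, with 100 m^3/h of fresh-air ventilation.
p = wells_riley_infection_probability(1, 20.0, 0.5, 2.0, 100.0)
print(f"Estimated probability of infection: {p:.1%}")  # about 18%
```

Even this crude sketch makes one useful point obvious: doubling the ventilation rate halves the inhaled dose, which is exactly the kind of conclusion a simple model helps you reach quickly.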