In 1985 Peter Coveney came to the Université libre de Bruxelles (ULB) to work with Nobel laureate Ilya Prigogine, from whom he acquired an interdisciplinary approach to scientific research. He now leads several projects in the field of supercomputing and has developed what he calls a “Brexit mitigation strategy”. This interview, granted in February 2021, was originally published in our 2020 Activity Report.

How did you find out about the Wiener-Anspach fellowship programme?

Around 1984 or 1985 I wanted to start working in a different field of research from the one I had been working on in my DPhil at the University of Oxford. I had become increasingly fascinated by the kind of questions Ilya Prigogine was also interested in. I was reading his works and I got in contact with him. He was interested in meeting me and he mentioned the Fondation Wiener-Anspach. I can vividly remember the first time I met him in what was called the Service de Chimie-Physique 2 of the Université libre de Bruxelles. The first thing he told me was: “You see, Einstein was wrong!” It was about the direction of time. I worked with his team for two years, the first one funded by the Fondation. It was a very international setup.

What impact did those two years have on your research and career?

It was very formative for my way of thinking about science. I am interested in lots of things, and I don’t see why I shouldn’t be able to work in these areas and contribute to them. That might be a bit unorthodox, because the general view is increasingly “you’ve got to specialize!” As time has gone by, however, I have been able to set up a centre at University College London, the Centre for Computational Science. If you look at our catchphrase, it says “Advancing science through computers”. In most countries you can’t function successfully as an academic without getting funding for your research. A way of doing that is to have a computational theme, because nothing could be more current and trendy, and then use it to study a lot of different things. It’s a continuation of the same intellectual approach that I acquired while working with Prigogine. In his team he had people working in many different areas. There was a unifying link that had to do with thermodynamics and irreversibility, but there were very fundamental activities and there were applied things, from cosmology to ant colonies. In a way I do the same: I have people in my team who work on very basic stuff, but also on diverse applications.

What kind of applications?

One area which has become very strong in the last ten years or so is the biomedical area. It’s become increasingly digitized, because of advances in IT and data acquisition, and we’re looking at very complex systems. That was one of the things Prigogine was into: complexity – how do we understand it and then use it to make sense of ourselves? We have to use computers to do much of this work, often using supercomputers to perform very complicated simulations. We work on algorithms for making these codes work and we work on the end applications. We interact with people in mathematics, computer science, physics, chemistry, biology and medicine.

You mentioned irreversibility. How did Prigogine’s research on foundational matters influence your own research?

At the time I worked on the connections that he was advocating between systems which are chaotic (in other words they have this extreme sensitivity to initial conditions from a microscopic point of view) and larger-scale descriptions of matter. If you have something which is extremely sensitive to initial conditions, in practical terms it means you will never be able to describe it in the detail that a conventional physics description would require, because you would have to know an infinite amount about the starting conditions. So you have to develop methods that are more probabilistic and statistical. This is very relevant to biomedicine and personalized medicine. If you’re doing drug discovery and you have a drug that’s binding to a protein, you need to run simulations and you need to do it fast. But the traditional way of thinking is that you can run these so-called molecular dynamics simulations as one-offs and be able to predict something. That’s actually very unreliable. It also partly ties in with the reproducibility crisis in research, in science in particular: someone runs one simulation, but the next person who tries to do the same thing is going to get a different result. You have to develop more robust statistical ways of handling the description. In a certain sense it’s extremely applied, because what we’re able to do is predict drugs that will bind reliably, with well-defined error bars, but it’s informed by this idea that we can’t rely on a deterministic picture. We have to do things probabilistically.
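
[Editor’s note: the ensemble approach Coveney describes can be illustrated with a short, purely schematic sketch; it is not his group’s actual tooling. The run_replica function below merely stands in for launching a molecular dynamics engine with randomised initial conditions, and the numbers are synthetic. The point is that the prediction and its error bar come from a population of runs, never from a single simulation.]

```python
import numpy as np

def run_replica(seed):
    """Stand-in for one molecular dynamics run returning a binding free
    energy estimate (kcal/mol). In reality this would launch an MD engine
    with randomised initial velocities; here the chaotic spread between
    replicas is mimicked with synthetic noise."""
    rng = np.random.default_rng(seed)
    return -8.0 + rng.normal(scale=1.5)

# An ensemble of independent replicas rather than a single one-off run
estimates = np.array([run_replica(seed) for seed in range(25)])

# Bootstrap the ensemble mean to attach a well-defined error bar
boot = np.random.default_rng(0).choice(estimates, size=(10_000, estimates.size))
mean, err = estimates.mean(), boot.mean(axis=1).std()

print(f"predicted binding free energy: {mean:.2f} +/- {err:.2f} kcal/mol")
```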

This reminds me of an article about the study that you led on Covid-19 forecasts. Your team was commissioned by the Royal Society to examine CovidSim, an epidemiological model developed by Neil Ferguson at Imperial College London.

That model came to prominence because it was used by the UK prime minister and the government to decide that they had better impose a lockdown in March 2020. The predictions, however, were based on a few individual simulations, even though the model is very much a probabilistic one. We did some detailed analysis of it, which reveals the amount of uncertainty the model has in it – the code and its underlying mathematical model can predict a huge diversity of outcomes, reflecting the uncertainty in the values of the parameters within it. Neil Ferguson’s own predictions were on the modest side compared to what the model could allow to occur. Our analysis is all to do with “ensemble simulations”, the study of all possible outcomes from the model. In order to convey this to the general public, I like to say that this is the same method used today in weather forecasting: one has to perform many simulations because a single one cannot give you a reliable prediction. This is all part of the same picture necessary to understand complex systems, and Prigogine was already talking about it all those years ago.
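
[Editor’s note: a toy illustration of such an ensemble study, not the CovidSim code itself. A deliberately simple SIR model is run hundreds of times with the basic reproduction number and recovery time drawn from assumed ranges, and it is the spread of outcomes across the ensemble, rather than any single run, that gets reported.]

```python
import numpy as np

def sir_peak(r0, recovery_days, population=1e6, infected=10.0, days=365):
    """Toy SIR model; returns the peak number of simultaneous infections."""
    beta, gamma = r0 / recovery_days, 1.0 / recovery_days
    s, i, peak = population - infected, infected, infected
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s, i = s - new_infections, i + new_infections - new_recoveries
        peak = max(peak, i)
    return peak

rng = np.random.default_rng(42)
n_runs = 500
# Sample the uncertain inputs from assumed ranges instead of fixing them
r0_draws = rng.uniform(2.0, 3.5, size=n_runs)
recovery_draws = rng.uniform(5.0, 12.0, size=n_runs)

peaks = np.array([sir_peak(r0, rec) for r0, rec in zip(r0_draws, recovery_draws)])

# The headline number is the spread across the ensemble, not any single run
print(f"peak infections: median {np.median(peaks):,.0f}, "
      f"5th-95th percentile {np.percentile(peaks, 5):,.0f} to {np.percentile(peaks, 95):,.0f}")
```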

Can you tell us about another project you’re leading, “First full-scale 3D high-fidelity simulations of blood flow in the human vasculature”?

The physical model we’re using is called a lattice-gas or lattice Boltzmann model. I mention that because one of the people working in Prigogine’s large group was Jean-Pierre Boon (I’m still in contact with him, though he retired quite a few years ago), and he was using these lattice models. They’re meant to represent a fluid – here it’s blood flow. The blood goes through the vasculature. With these extremely powerful computers, we’re able to do simulations of the blood flow through the whole body in a personalized way, because we get the three-dimensional structure of the arteries and the veins and then flow the blood through this very fine network. We use lattice models because they are computationally very efficient at managing large quantities of matter, whereas conventional fluid-dynamics code doesn’t scale. These methods scale very nicely because in order to go from one time-step to the next in the simulation you only need to know what happens in the immediate neighborhood of any point. That sounds intuitive, but in hydrodynamics, a bit like electrostatics, the range of interactions can be very long; this method enables you to keep the calculations local. This is one of the areas where I’d like to think we have made some of the biggest advances towards virtual human scale simulation, because we are able to include an entire human body within the simulations that we run today.
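
[Editor’s note: a minimal two-dimensional lattice Boltzmann sketch, written only to show the locality Coveney refers to; it is not the production HemeLB code, which operates on sparse three-dimensional vascular geometries on supercomputers. The grid size, relaxation time and initial density bump below are arbitrary choices, and periodic boundaries are assumed. Each time-step is a purely local collision followed by a streaming move to nearest-neighbour sites, which is why the method parallelises so well.]

```python
import numpy as np

# D2Q9 lattice: nine discrete velocities and their weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

nx, ny, tau = 64, 64, 0.8          # grid size and BGK relaxation time

def equilibrium(rho, ux, uy):
    """Local equilibrium distribution for density rho and velocity (ux, uy)."""
    cu = 3.0 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
    usq = 1.5 * (ux**2 + uy**2)
    return w[:, None, None] * rho * (1.0 + cu + 0.5 * cu**2 - usq)

# Start at rest with a small density bump in the middle of the box
x, y = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
rho0 = 1.0 + 0.05 * np.exp(-((x - nx / 2)**2 + (y - ny / 2)**2) / 50.0)
f = equilibrium(rho0, np.zeros((nx, ny)), np.zeros((nx, ny)))

for step in range(200):
    # Macroscopic density and velocity recovered from the distributions
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho

    # Collision: a purely local relaxation towards equilibrium
    f += (equilibrium(rho, ux, uy) - f) / tau

    # Streaming: each population hops one site along its lattice velocity,
    # so a time-step only ever touches a point's immediate neighbourhood
    for i in range(9):
        f[i] = np.roll(f[i], shift=(c[i, 0], c[i, 1]), axis=(0, 1))
```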

Have you developed collaborations around this project?

Yes, in particular with people in Amsterdam, Oxford and Barcelona. In Barcelona, they have developed a very powerful model of the human heart, called Alya, that captures the electromechanical beating but also has fluid flowing into it. It stops where the flow comes into and out of the aorta. We’re connecting that model on the computer with my model, called HemeLB, so we can have a single closed loop describing an individual’s cardiovasculature. That work is in progress.

And it needs to be funded…

The European Union has a huge initiative on exascale computing, worth many billions of euros [editor’s note: next-generation machines which will perform more than one billion billion operations per second]. What they want is applications in areas which need this type of machine. My answer to them has always been: “In biomedicine, doing virtual human scale simulations, you are going to need these machines.”

This brings me to the sore issue of Brexit. The deal says the UK can continue to participate in certain European funding programmes, but details still need to be negotiated. What is your take on the situation?

From my perspective as a scientist, after all this activity with European collaborators, nothing has ever been as damaging as the threat of being prevented from pursuing these scientific collaborations. The deal announced just before Christmas is good news only by comparison with what might otherwise have been, which is nothing. It’s a small mercy, but as you said, the nature of the collaborations and the rush with which the deal was put together show there are still a lot of things that have to be negotiated. UK participation in Horizon 2020 finally got sorted out earlier last year with the Withdrawal Agreement, and it was agreed that the UK would continue to take part in those programmes. I’m partly running two big projects, the Centre of Excellence in Computational Biomedicine and another, called VECMA, about uncertainty quantification. Together they are worth around 16 million euros in total. Just to be at risk of losing that would have been terrible. The latest news [editor’s note: in mid-February 2021] is that the UK expects to participate in Horizon Europe, but the terms still need to be agreed and signed off. Of course, we’re already diminished because we’re no longer at the table agreeing what the work programme for that period is.

Another source of concern for you is the European High Performance Computing Joint Undertaking (EuroHPC).

Yes, a new European initiative, called EuroHPC, has been taken out of Horizon 2020 and Horizon Europe, although the budget for it comes partly from Horizon Europe. The detail is that only 50 per cent of the funding comes from Horizon Europe; the other 50 per cent comes from the participating nations. This is already quite messy and it has led to a lot of complications. It’s been designed to make the HPC programme even bigger, but the main point I’d make here is that the UK has never signed up to EuroHPC. That initiative got going after the Brexit referendum, so it became a political football at a moment when the British government didn’t want to be involved in new European initiatives. That’s a big problem for me now. If I want to continue in that area, I have to wear another hat, which is why I took a chair in HPC at the University of Amsterdam. It’s a deliberate mitigation strategy to get round this problem. Given my background – my parents were both linguists, into European affairs and culture – I’ve always expected integration rather than division, and to see Brexit happen is extremely sad.