Interview: Alan Evans

Dr. Alan Evans is a professor of neurology and psychiatry at McGill University. He will serve as a panelist for a session taking place on the second day of the INS annual meeting, 'Challenges of Artificial Intelligence and Neuroscience to Democracy.'

What is your current position and field of research?

I’m a professor of neurology and psychiatry at the Montreal Neurological Institute, which is a research institute and hospital affiliated with McGill University. In terms of research, I do two main things: 1) the development of various methodologies associated with brain imaging, and 2) neuroinformatics, which is the integration of all kinds of data (imaging, genetics, clinical, biospecimen information, psychological testing) to ask questions about how the brain works.

Those are the methodological aspects of my work, and then — in my case — I apply those techniques to study two domains: brain degeneration (particularly Alzheimer's disease) at one end of the spectrum, and brain development at the other end. In both cases, I'm interested in understanding the fundamental mechanisms, not just a clinical description. We are more concerned with trying to get to the root causes and pathologies associated with brain disorders.

For instance, in Alzheimer’s disease, we want to understand the underlying mechanism that gives rise to memory loss. We first use brain imaging and then combine that with other kinds of information. Ultimately, a lot of the work is big data analytics and processing of multi-modal brain data to understand the structure and function of the human brain in development and degeneration.

What are some neuroethical issues in your research?

There are a number of neuroethics-related issues that are increasingly important to me. They relate to things like open science. There’s a movement around the world to put big data into the public domain and allow many different researchers to analyze it, in contrast to the past model where one group in one lab collected the data and jealously guarded it.

We are increasingly moving to a model of collecting the data, getting consent from the patients involved, anonymizing it, and then putting it into the public domain so that people around the world can download it, analyze it, and do far more work than was possible before. From society’s standpoint, that’s a good thing: you’re making more use of the same data.

With the technology that we have in my lab and in other labs around the world, we’re increasingly able to assemble large amounts of data and make it publicly available, to what we think is society’s benefit. However, this presents something of a challenge to the ethics machinery, and we’re constantly finding ourselves caught up in ethics conundrums where we’re told that we cannot put that data out in the public domain, even if it has been de-identified.

What is an example of one of these neuroethical issues?

The way the ethics machinery typically works is that a group of people looks into whether or not the science you’re doing is reasonable. That can be tricky, because many people on an ethics committee are not familiar with the details of the science and have to make broad judgement calls. But their primary concern is to make sure that no ethical malpractice takes place in the collection of the data, and that’s entirely appropriate to protect patients.

However, given what I've just described, there's a new kind of science becoming more prevalent: the secondary use of data that was collected somewhere else. A good example of that is the UK Biobank. This is a monstrous data set that was collected in the UK. They enrolled literally hundreds of thousands of people and got the approvals to make this data publicly available, de-identified and anonymized. Anybody around the world can apply scientifically to get access to the data, and, if approved at their end, then we’re essentially free to do whatever we want at our end, right? Not exactly. Our own institutions are saying, “hold on a minute, we have to sign off on you using this data,” which at this point is ones and zeros, right? Patients are not identified, the data is anonymized, and it's being downloaded by many institutions around the world. In my view, our own university should essentially get out of the way, because the use of this data has already been approved.

However, our review mechanisms are still working on the prior model of protecting patients during a planned primary data acquisition process. This is not that. This is data coming from outside; it's essentially just data, and the UK Biobank has already approved our access to it. Basically, what I’m saying here is that the ethics machinery has not caught up to the idea of open science.

Do you see the ethics machinery catching up to the massive amounts of data that are out there?

I think that they are trying, but institutional ethics mechanisms are slow and bound up in government regulatory practices, so right now they’re not moving anywhere near fast enough. They're stuck in this old mindset of protecting primary data acquisition, which is very different from secondary use of de-identified data. There really has to be a reckoning where the ethics community clearly distinguishes between those two domains, and essentially backs off from trying to oversee the promulgation of de-identified data throughout the scientific community.

At the end of the day, a fundamental point that often gets missed is that there should be a balance between the pros and cons of doing the research. It's always easy to say no. If you say no, then you are safe and you are protected from fickle uses of the data. However, what you should be doing at the same time is asking what harm we do to society by saying no. What harm are we doing to society by not ultimately delivering a cure for Alzheimer's disease, for instance, because we're essentially obstructing research?

When research is slowed down, people are denied timely interventions, they suffer morbidities, and they die. And so there's a downside to ethical oversight. Striking the proper ethical balance is like insurance: weighing risk against reward. It's always easy to say no because that’s the safest position, but that doesn't take into account the other side of the equation. I think there's definitely a need to look at the cost of saying no, not just the benefits.

What aspects of the annual meeting are you excited about?

I do think that these ethical questions are fascinating and I enjoy the discussion, but what I'm looking forward to, at the end of the day, are practical, common-sense solutions, not just abstract debating or conceptual points without much practical import. I think it's very, very important that researchers are not perceived as the enemy to be policed; there has to be a mutual understanding of what’s important.

We have to protect the patients and the subjects of a research study, but we also have to come up with a practical plan of action going forward in the face of new technologies that are moving so much faster than ethics can. Technology is advancing at a blinding pace, and the rise of open science is a good example of something that has completely blown past the ethical structures we typically use. I am looking forward to having that kind of discussion with ethics professionals.

Additionally, some of the work I'm involved in uses artificial intelligence. However, it's really just artificial intelligence in the sense of image processing, which is a small part of the field. There are much bigger and, frankly, terrifying aspects of artificial intelligence that give me pause.

I'm very much looking forward to hearing what people have to say about artificial intelligence in decision-making, health care, categorizing people's motivations, and self-driving cars, among many other areas. All of these things have consequences for society, and I'm fascinated by all of these potential avenues and domains of artificial intelligence.

Our Digital Future: Building Networks Across Neuroscience, Technology and Ethics

The 2020 Annual Meeting of the International Neuroethics Society (INS) will convene virtually on Thursday and Friday, October 22-23, 2020. Sessions will address the many areas in which brain technologies and data concerning the brain are developed, deployed, utilized and regulated. We hope you will attend!