What is your current position and field of research?
I moved to Atlanta last year to found The Center for Translational Research in Neuroimaging and Data Science (TReNDS). The center itself is a partnership between Georgia State, Georgia Tech, and Emory University, and it focuses on algorithm development, data analytics, and leveraging brain imaging data to inform us about both the healthy and the disordered brain. We do a lot of work with different analytic approaches to push forward on that front. We collect some data here, but we're also doing a lot of building up repositories from existing data sources. It's meant to be a collaborative location; we encourage people to come visit and spend time with us –– virtually, for now, of course. We're trying to really build a collaborative science –– a team science –– as we try to accelerate discovery. Lastly, we have a special focus on translating the research into potential tools, essentially trying some things out to see how they work.
What are your research areas?
I have multiple areas that I focus on, one of which is working with functional MRI and the methods for looking at brain activity, brain networks, and brain fluctuations. Another area is multimodal data, in which we try to combine information across different types of data. For example, that can be things like brain structure, brain function, genomics, behavior, or clinical information. A third area is developing neuroinformatics tools and providing resources for the community to use the techniques that we develop. We've also developed different types of data management and data sharing tools. Ultimately, most of my work falls into one of those three areas.
What are some neuroethical aspects of these research areas?
One area is privacy, and we've developed a tool that aims to address that directly. It allows people to share data without actually sharing the data. Instead, you can share the analysis of the data. One of the neuroethical issues that comes up pertains to what sort of responsibility people working with data have to the person who provided that data. Does that person have a guarantee that you will remove their data if they ask you to? Can you re-identify someone given enough data and given the algorithm? How might that potentially be invasive? We've thought through some of those questions, and we have a tool that we're developing. Basically, you have people who agree to create a consortium, and somebody in that consortium creates a computation in order to do some sort of analysis. Then, that analysis gets replicated at the sites so that it's run on the local data, but the data are never centralized. Ultimately, all the local analyses are pulled together to give you an aggregate result.
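The pattern described above –– the same computation runs at each consortium site, only summary statistics leave the site, and a coordinator pools them –– can be sketched in a few lines. This is a minimal illustration of the general idea, not the actual tool; all function and variable names here are hypothetical.

```python
# Sketch of a decentralized analysis: each site runs the same computation
# locally and returns only summary statistics, never the raw data.
# Names are hypothetical, for illustration only.

def local_computation(site_data):
    """Run at each site: return only summaries, not the raw data."""
    return {"n": len(site_data), "sum": sum(site_data)}

def aggregate(local_results):
    """Run by the coordinator: pool local summaries into a global mean."""
    total_n = sum(r["n"] for r in local_results)
    total_sum = sum(r["sum"] for r in local_results)
    return total_sum / total_n

# Three consortium sites, each holding data that never leaves the site.
site_a = [1.0, 2.0, 3.0]
site_b = [4.0, 5.0]
site_c = [6.0]

results = [local_computation(d) for d in (site_a, site_b, site_c)]
global_mean = aggregate(results)  # matches the mean of the pooled data
```

For a simple statistic like a mean, the aggregate is exactly what a centralized analysis would produce; more complex analyses iterate between local computation and aggregation, but the raw data still never move.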
That's one piece of it. We're also pushing to apply some of our algorithms so that they can actually address clinical problems. For example, can you predict who's going to respond well to a certain medication and who’s going to respond poorly? The ethical issues here are obvious: what if the algorithm gets it wrong? How do you make something that’s useful while also trying to minimize potential harm? It’s the risk–benefit ratio that’s important.
There are also all sorts of issues that come up with sensitive types of studies. Maybe, for example, you have a substance use study where participants answer questions about their use of drugs or alcohol. This information could potentially be harmful to participants if it gets out.
How might brain imaging for the purpose of psychiatric diagnoses change societal conceptions of psychiatric disorders?
The emphasis on the biological basis of disease –– in particular in the neuropsychiatric realm –– is really important. There's a lot of science that gets at various mechanisms, but there haven't been a lot of actual applications that could impact someone in the real world. We're moving in that direction, but we're still not there yet.
I also think it's interesting to ask how the biology can help us with categorization. This is especially true for mental disorders. Something like schizophrenia, for example, is very heterogeneous. That heterogeneity is common across a lot of conditions, but maybe there are subsets of biological patterns that are more consistent. If we can identify those patterns, they can help us clarify potential diagnostic categories.
One of the challenges I face in the work we're doing here is the fact that our ground truth for prediction is sometimes not very good. It's more straightforward if, for example, you're predicting a response to a certain medication: you know whether someone responded well or poorly, and you can quantify that. However, if you're talking about categories, types, and continuums of disorders, it gets fuzzy. Since a lot of those are based on self-reported symptoms and self-reported actions, they may differ a bit from what the biology might tell you. I think that's a really interesting area that will yield a lot of revelations as more and more data are produced and this topic is addressed more.
What are you looking forward to about the INS annual meeting?
I’m really looking forward to the discussion. You know, I do wish it was going to be in person, but there’s a lot we can do with the virtual environment.
How did TReNDS start?
I was really happy that we didn’t have to start from the ground up. I actually had a pretty good group of folks that I was working with in New Mexico, and I moved a lot of them with me to Atlanta. We moved close to 45 people out, and ultimately we were able to retain the great team that we had built up already. Since we got out here, we've expanded with this great group of collaborators that we have across the Atlanta area, and we’re building up the computational resources that we need. Of course, halfway through that we got hit with COVID. Thankfully, we had a lot of things done before then.
How did you first become interested in neuroethics?
It was really a result of having a personal connection with someone who had a mental disorder. That got me into the brain space, and then all sorts of issues come up once you start to think about that.
My background is more from the engineering side of things. I basically started out with electrical engineering, and then I spent about 12 years in the psychiatry department at Johns Hopkins. And that was pretty interesting –– I like to joke that I was the engineer analyzing the psychiatrists. I was figuring out how to answer the questions that they had using algorithms, technology, and signal processing. I was figuring out how to make research more efficient, but without compromising individual privacy.
When I started my graduate degree at Hopkins, I really liked the first ethics course I took there. I've always liked ethics and philosophy, and I guess that angle is a hobby of mine.