CAN-BIND Podcast

Episode 13: Artificial Intelligence and Mental Health: The Future of Care? (Part 1)


Released: May 22, 2025

Podcast Transcript

Dr. Kuhathasan: Hello, and welcome to a very special episode of the CAN-BIND Podcast. I’m Nirushi Kuhathasan, and I’m so glad you’re joining us. Today, we’re bringing you part 1 of a two-part conversation exploring the growing role of artificial intelligence in the field of mental health.

In this first part, we dive into the basics of artificial intelligence—what it is, how it differs from other digital tools, and how it’s already influencing our lives in ways we may not notice. We also explore how AI is being applied in medical research and clinical care. It’s a fascinating conversation, and we’re so excited to share it with you. Happy listening!


Dr. Parikh: Hello everyone, and welcome to the CAN-BIND podcast. We have a very interesting discussion today. It’s about something that you’ve been hearing so much about: AI, or artificial intelligence. It’s everywhere. But what is AI, and how does it affect us? Well, CAN-BIND, as you know, is a major research network for mood disorders. And so we’ve been exploring AI as it pertains to medical research and clinical care. 

We’re fortunate today to have Dr. Frank Rudzicz, an associate professor at Dalhousie University who holds the distinction of two separate chairs. He holds the Canada CIFAR Chair in Artificial Intelligence, as well as the Killam Memorial Chair, both in the Faculty of Computer Science at Dalhousie. He’s a member of the CAN-BIND research network and really our leader in understanding how AI can be used both for medical research and, increasingly, for clinical care. Frank, welcome.

Dr. Rudzicz: Thank you. It’s a pleasure to talk to you.

Dr. Parikh: So, AI is everywhere. So I suppose you’re in big demand. You must be the most popular guy at the party.

Dr. Rudzicz: Ha, finally.

Dr. Parikh: All right, finally. Well, you know, I’m going to just start off with some real basics. What really is the definition of AI? 

Dr. Rudzicz: That’s an interesting question. I mean, AI has been around, in some form or another, for 70 or 80 years at this point, and there’s no single definition of it. I think sometimes people kind of know it when they see it, and what they see is software that is acting in ways that can be said to be intelligent. But what that specifically means differs from person to person. Sometimes the definition is about getting machines to behave the way a human would in certain situations. Another definition is about getting machines to behave logically. And then, you know, on the other side of things, behaviour might not be as much of an issue as what’s actually going on under the hood. So we don’t necessarily care whether the system is behaving like a human or behaving logically; we want, you know, to try to build systems that are built internally like humans, that think the way humans think, using neural networks as opposed to, you know, symbolic rules and so on. So many definitions, but generally, it’s trying to get software to behave intelligently and perform tasks that normally require what we call intelligence.

Dr. Parikh: Well, you know, when we use our phones, we know that we have some potential AI utility or software on our phones, but we also can do Google searches. So what’s really the difference… you know, if I want to learn something and I do a Google search, what sort of different answer or different approach would I get from just asking Google, “How do you make this” or “How do you do this”, versus using one of the AI agents that’s on my phone?

Dr. Rudzicz: So actually, for the longest time, I used to use Google as an example of AI itself in practice. It might be useful to differentiate between the AI that we’re seeing in chatbots, like ChatGPT and others, which sort of respond to you in a humanlike way through natural text and novel responses, and what Google does when you enter search terms and it retrieves a series of documents for you. That also involves AI. So Google has to have software that understands the meaning behind the words you’re using in a search. It needs to kind of correlate those, so to speak, with the content in billions of webpages or videos or audio documents, and then sort them in a way that is most useful to you, the user. Each of those components requires some degree of what we call narrow or more focused AI, or machine learning, which we can maybe talk about a bit later.

So it’s not apples and oranges. It’s two different flavours of apples, haha: Google and ChatGPT. The primary difference now is that, well, when you use a Google search to try to look something up, then you’re still kind of relying on your own ability to sift through the documents that are provided to you, identify which ones are relevant and which ones are not. And then at the end of the day, you literally read the original content. Now, the responses you get from chatbots, like ChatGPT, will provide you with new sentences or paragraphs that have never really been written before, but that is based on models that are trained on the same data that Google is feeding up to you. It’s just that ChatGPT sort of reorganizes the information in its own unique way. It’s a much more complicated version of AI than traditional Google searches, but AI nonetheless. The challenge is that, you know, when you get responses from ChatGPT, it’s not clear what the alternatives are. You don’t have, kind of, the human in the loop deciding, “Okay, this is relevant. This is irrelevant”. You have the response, and that’s basically it.

Dr. Parikh: Well, you know, I think it’ll be reassuring to all of us that… hey, I’ve been using AI for a long time…

Dr. Rudzicz: Yeah, we’re all experts.

Dr. Parikh: …because I’ve certainly been doing Google searches for a long time, so I’m smarter than I think, as the old ad goes. Okay, so are there other examples of AI that we’re actually all using? You know, might be in our banking, our televisions… or are there other examples of where AI is already being used, and maybe we don’t even think of it that way, but it actually is AI?

Dr. Rudzicz: Well, to a large extent, a lot of the speech-based interfaces that we’re using, or really all of them, involve AI as well. Some of these are more obvious, like when you’re asking Siri or Alexa to play a particular song. It seems very simple and straightforward to us, but under the hood, there’s a lot of AI involved there to try to identify your intention: what are you asking the device to actually do? Speech recognition itself was a massive challenge 10 years ago. It’s still challenging, but identifying the right band or artist name from amongst, you know, hundreds of thousands is no simple task, and it involves AI to translate your speech into the right song playing on your speaker.

I think a lot of the AI that already exists in our daily lives is sort of invisible to us. Another good example is our inboxes. If you just scroll down to your junk mail, you’re going to see an awful lot of stuff that is junk, and before these sorts of AI filters, all of that junk would have been in your inbox and up to you to filter out. But now that’s filtered out for us, and mostly, it does a pretty good job. So, I mean, those kinds of small touchpoints are what we already see in our daily lives.

Dr. Parikh: So we’ve heard a fair bit in medical research about machine learning. Is machine learning a part of AI, and what does it mean for medical research?

Dr. Rudzicz: Yes. So the boundary between what is machine learning and what is artificial intelligence also is a little bit fuzzy, but generally the consensus is that machine learning is a part of artificial intelligence. That is to say, artificial intelligence generally is any software that behaves intelligently. But that includes software that does so based on rigid rules that are manually entered by experts. So expert systems in healthcare were traditional systems that could help with billing or disease identification or contraindications that had to be painstakingly coded up, line by line by humans, which is, you know, time-consuming and error-prone.

Machine learning, by contrast, tries to perform the same kinds of tasks, but it does so by automatically learning those rules that previously had to be hand-coded. And it does so by looking at as much data as possible and learning models to optimize metrics that matter to us: to be as accurate as possible, to make as few errors as possible, to have as precise a representation of the data as possible. So, at the end of the day, a machine learning model will still be trying to identify diseases or contraindications, but it’ll do so based on its own observations from reams of data. And the results tend to be more accurate, and a lot easier and less expensive to produce.
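
To make that contrast concrete, here is a minimal sketch, not any system discussed in the episode, of the difference between a hand-coded expert rule and a model that learns its own decision rules from labelled examples. The data and the "contraindication" rule are entirely made up for illustration.

```python
# A toy contrast between an expert-system rule and a learned model.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical toy data: each row is [age, systolic_bp, resting_hr],
# each label is 1 if a (made-up) contraindication applies, else 0.
X = [[70, 150, 88], [34, 118, 64], [81, 160, 92], [29, 110, 70],
     [66, 145, 85], [45, 125, 72], [77, 155, 90], [38, 120, 68]]
y = [1, 0, 1, 0, 1, 0, 1, 0]

# Expert-system style: a rigid rule written line by line by a human.
def rule_based_flag(age, systolic_bp, resting_hr):
    return int(age >= 65 and systolic_bp >= 140)

# Machine-learning style: the "rules" are learned from the examples themselves.
model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X, y)

new_patient = [72, 148, 86]
print("rule-based flag:", rule_based_flag(*new_patient))
print("learned-model flag:", int(model.predict([new_patient])[0]))
```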

Dr. Parikh: Is machine learning, though, almost like an earlier version or a more simplistic version of AI, and will it be replaced by other things? 

Dr. Rudzicz: Yeah, so for decades, machine learning was just one of many approaches to this artificial intelligence problem. And, in fact, the dominant theory was that in order to get machines to behave logically, the logical thing to do would be to try to encode logical processes in software, step by step, by human experts. And machine learning was sort of a fringe activity; it was how a relatively small group of people within the AI community were trying to approach the problem. It just turned out that that was the correct approach, really. So machine learning kind of unlocks a lot of capabilities that good old-fashioned AI kept locked.

And really, I feel that despite the accelerating pace and the vast variety of applications of machine learning, it’s not really a matter of whether machine learning will be replaced. It’s more a question of whether the underlying architecture we use to do machine learning will be replaced. So there’s a lot of upheaval within the methods we use. We’ve gone from relatively simple machine learning models — you might have heard of support vector machines or random forests — to simple neural networks, to more complicated neural networks, to neural networks that are now so complicated that they have whole new names. You might have heard of transformers, for example. So I think machine learning is here to stay. It’s just, like, how we perform machine learning that is going to keep changing.
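
As a small illustration of that point, the workflow stays the same while the underlying architecture changes: the hedged sketch below fits the same synthetic data with a support vector machine, a random forest, and a small neural network. None of these models or settings come from the episode; they are placeholders to show that only the model line changes.

```python
# Same data, same workflow, different underlying architectures.
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Synthetic toy dataset, purely for illustration.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

models = {
    "support vector machine": SVC(),
    "random forest": RandomForestClassifier(random_state=0),
    "small neural network": MLPClassifier(hidden_layer_sizes=(32,),
                                          max_iter=2000, random_state=0),
}

for name, model in models.items():
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean accuracy ~ {score:.2f}")
```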

Dr. Parikh: So, you know, we’ve alluded to both AI and the subset of machine learning in medical research, but increasingly it’s being used in clinical care. Where do you see AI really expanding in clinical care? 

Dr. Rudzicz: I am very optimistic about the applicability of machine learning across the entire healthcare spectrum. And from the first moment that someone walks into a hospital or is wheeled into a hospital to the moment they’re eventually discharged, I think there’s nothing but opportunities for AI. How we apply AI or machine learning in each of these opportunities is going to differ on a case-by-case basis. So, I mean, one thing that I’m working on now is kind of AI scribes. 

So an AI scribe is a kind of tool, a machine learning-based tool, in which you record conversations between, say, doctors and patients, or nurses and patients, or doctors and other doctors. We apply speech recognition to take that audio signal and transform it into textual transcripts, but then we use, like, language models, large language models, and others to pick apart that conversation and identify important information that could tell healthcare practitioners, “Okay, this person needs to be admitted,” or “This person needs to be seen by a specialist,” etcetera, etcetera, and, you know, identify elements in the conversation and add them to the medical record. I think scribes are sort of a good example of where AI can be applied because the outcomes, the benefits, are very clear. It’s not just a matter of doing this task well and measuring it in terms of things like accuracy and precision and the kinds of metrics that we tend to see in computer science; the metrics are really about hours saved by the practitioners. There’s one study that came out in Ontario, I think not long ago, which suggested that the typical family doctor might see about seven hours a week freed up by the use of these tools, because these tools, to a large extent, automate interacting with the medical record or other software. So, you know, we have more time for patients.
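
As a rough, self-contained sketch of the scribe pipeline described above: the transcript below is invented, and the simple keyword matching stands in for both the speech recognition and the large language model components a real system would use.

```python
# Toy sketch of the AI-scribe idea: transcript in, structured note out.
# In a real system, the transcript would come from speech recognition and
# the extraction from a large language model; naive keyword rules stand in
# here so the sketch stays self-contained and runnable.

TOY_TRANSCRIPT = (
    "Patient reports two weeks of low mood and poor sleep. "
    "No current medications. Plan: refer to psychiatry and follow up in one month."
)

def extract_findings(transcript: str) -> dict:
    """Stand-in for a language-model extraction step: naive keyword matching."""
    text = transcript.lower()
    return {
        "symptoms": [s for s in ("low mood", "poor sleep", "anxiety") if s in text],
        "referral_needed": "refer" in text,
        "follow_up": "follow up" in text,
    }

note = extract_findings(TOY_TRANSCRIPT)
print(note)
# In a deployed scribe, the extracted note would be reviewed by the clinician
# and then written into the electronic medical record, saving documentation time.
```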

Dr. Parikh: That would certainly go a long way towards dealing with the doctor shortage… if we can free up doctors from doing paperwork, so to speak. My understanding is that you have a special interest in speech and AI. So, broadly speaking, can you tell us what you do there? 

Dr. Rudzicz: So I think one of my original projects involving speech in clinical care involved using signals in speech to identify sort of cognitive changes. Initially, that had to do with neurodegeneration, dementia, Alzheimer’s disease. The idea was that if people are just speaking normally—maybe they’re in a conversation or they’re describing a picture or they’re recounting a memory—what they talk about and how they describe it both provide signals that can be indicative of memory loss, troubles with executive function, or other symptoms of neurodegeneration. Recording people’s voice for as little as 40 seconds, or up to about 2 minutes, captures a lot of that information.

So, I mean, years ago, we gathered data from a relatively large collection of older adults with and without dementia, and extracted signals from the audio and statistical measures from the speech-recognition transcripts. We had a thousand features or so, and we fed those into relatively traditional machine learning models, and then we were able to make various predictions about people’s cognitive health from that information. In the years that have passed, the methods have changed, but the results have only sort of improved. It’s a tool that can be applied to many different circumstances. So we’ve gone from looking specifically for Alzheimer’s disease, or signals of Alzheimer’s disease in the voice, to looking for mild cognitive impairment, anxiety, depression, and other neuropsychiatric issues. And the ability to identify these conditions, to find, like, the needle in the haystack of signals that are being emitted through speech, is constantly surprising in terms of how accurate it is.
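
The sketch below is a toy version of that kind of pipeline, assuming invented transcripts, a couple of hand-picked lexical features, and a basic classifier. It is not the actual CAN-BIND feature set or model (which ran to roughly a thousand features); it only illustrates the general approach of transcript-level features feeding a traditional classifier.

```python
# Toy pipeline: simple statistical features from transcripts -> classifier.
from sklearn.linear_model import LogisticRegression

FILLERS = {"uh", "um", "like", "so"}

def features(transcript: str) -> list:
    words = transcript.lower().split()
    n = max(len(words), 1)
    return [
        len(set(words)) / n,                   # lexical diversity (type-token ratio)
        sum(w in FILLERS for w in words) / n,  # filler-word rate
    ]

# Hypothetical labelled transcripts: 1 = cognitive concern, 0 = control.
transcripts = [
    ("um the uh the thing you know the um picture has a a lady", 1),
    ("the picture shows a woman washing dishes while the sink overflows", 0),
    ("uh there is um like a boy and he um he is uh falling maybe", 1),
    ("two children are taking cookies from a jar behind their mother", 0),
]

X = [features(t) for t, _ in transcripts]
y = [label for _, label in transcripts]

model = LogisticRegression().fit(X, y)
new_sample = "um uh the the lady is um doing something you know"
print(model.predict([features(new_sample)]))  # predicted label for a new toy transcript
```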

Dr. Parikh: I’m just curious—as you did this work, it’s fascinating that you can listen to a sample of speech and determine something about the cognitive health of an individual, potentially the presence or absence or likelihood of the presence of a disease like Alzheimer’s. Did you happen to compare it to experienced clinicians listening to the same clips? And what was that like?

Dr. Rudzicz: Yeah, so the challenge here is that sometimes, like, the gold standard we’re using to train these models in the first place is based on expert labeling. So for all this data we had collected, we had labels for each transcript, like, “This is an example of somebody with dementia,” “This is an example of somebody without dementia,” so we’re basing these models on the performance of the best human experts. The challenge is that a lot of evidence suggests that, I mean, humans are not always perfect either. With dementia in particular, there’s a tendency for human experts to sort of overdiagnose various kinds of dementias as Alzheimer’s disease because Alzheimer’s disease is so prevalent. That means you end up having a lot of false positives in terms of Alzheimer’s diagnoses and a lot of false negatives in terms of what’s actually affecting somebody. It happens to the effect that, in some studies, human experts are only about 85% accurate with regard to Alzheimer’s prediction, as determined by autopsy. So we’re kind of moving away from a direct comparison of how accurate machines are compared to humans, and analyzing things more in terms of inter-annotator agreement: how often do human experts and machines actually agree with each other? That’s, I think, a bit more of a reasonable way forward.
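
For listeners unfamiliar with inter-annotator agreement, the short sketch below computes raw agreement and Cohen's kappa, one standard chance-corrected agreement measure, on invented labels. It is purely illustrative and not tied to any study mentioned in the episode.

```python
# Agreement between an expert and a model, rather than accuracy alone.
from sklearn.metrics import cohen_kappa_score

# Invented labels for illustration: 1 = dementia, 0 = no dementia.
expert_labels = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
model_labels  = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]

raw_agreement = sum(e == m for e, m in zip(expert_labels, model_labels)) / len(expert_labels)
kappa = cohen_kappa_score(expert_labels, model_labels)

print(f"raw agreement: {raw_agreement:.2f}")  # fraction of cases where both give the same label
print(f"Cohen's kappa: {kappa:.2f}")          # agreement corrected for chance
```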

Dr. Parikh: So what are the applications of speech AI that you actually think will be more routine in clinical practice in, say, the next five years? 

Dr. Rudzicz: I think AI scribes in primary care are sort of the main area that we need to focus on, mostly in order to improve efficiencies and, kind of, the patient experience across the board in healthcare. So that isn’t so much about using AI to try to identify diagnoses from the voice itself. It’s more about analyzing, like, what patients are talking about as opposed to how they say it.

That being said, I think our work in detecting Alzheimer’s disease from the voice generalizes to issues of depression and neuropsychiatric disorders, and I think there’s going to be a lot of opportunity for that application of AI. So again, this is subtly different from the AI scribe case, since here we’re listening specifically to the voice as a source of signals or symptoms of disorders. So yeah, long-term monitoring of people who are at risk of worsening mental health outcomes is something we can easily do with speech-based technologies. So instead of having to wait for someone to physically come in to consult with a specialist, we might have a prescribed app or, like, a web-based speech-enabled tool in which these patients sort of periodically, maybe once a week, maybe even once a day, just check in and speak for a little bit, and then behind the scenes we use AI to analyze what they’re talking about and, on a much more fine-grained basis, monitor their well-being and sort of identify when risks to the patient might be more elevated.
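
A hedged sketch of that check-in idea appears below. The scoring function is a stand-in for a real trained speech-analysis model, and the flagging rule is an invented example of noticing when recent scores drift above a patient's own baseline; none of it reflects an actual deployed tool.

```python
# Toy illustration of periodic speech check-ins with a simple trend alert.
from statistics import mean

def score_checkin(transcript: str) -> float:
    """Placeholder: a real system would score the audio/transcript with a trained model."""
    negative_words = {"hopeless", "tired", "worthless", "alone"}
    words = transcript.lower().split()
    return sum(w in negative_words for w in words) / max(len(words), 1)

def flag_elevated_risk(scores: list, window: int = 3, factor: float = 1.5) -> bool:
    """Flag if the average of the most recent check-ins exceeds the earlier baseline."""
    if len(scores) <= window:
        return False
    baseline = mean(scores[:-window])
    recent = mean(scores[-window:])
    return baseline > 0 and recent > factor * baseline

weekly_scores = [0.02, 0.03, 0.02, 0.05, 0.08, 0.09]  # invented weekly trajectory
print(flag_elevated_risk(weekly_scores))               # True would prompt a closer look by the care team
```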

Dr. Parikh: Well, you know, you’re a professor at Dalhousie, and I’m wondering if any of your students have given you a taste of your own medicine. You’ve talked about AI scribes. Do any of them set up AI scribes in your classroom to make, uh, handy condensed notes of your lectures? 

Dr. Rudzicz: That’s a good point. I’ve never been sort of formally assessed using any of my tools, but I noticed whenever I talk about my tools, I start exhibiting the symptoms that I’m talking about. One of the symptoms of the voice that kind of relates to cognitive change involves these filler words, which I noticed I’ve been using a lot now also… so uhs and ums, ha. I, uh… you end up using a lot more of them than you might realize until you kind of pay attention to them. I don’t do research that involves recording lectures myself, but a lot of my other colleagues have done the work of recording lectures, transcribing them, and then producing browsable or searchable summaries for students. That’s a very interesting use of this kind of technology as well. 


Dr. Kuhathasan: That brings us to the end of part 1 of our special two-part episode on artificial intelligence. We hope it gave you a helpful introduction to what AI really is. And don’t worry, the conversation continues in part 2. We’ll pick up where we left off with a deeper dive into how AI is being used within the CAN-BIND program, its potential real-world impact, and some of the important ethical questions that come with it. So stay tuned! Until next time, stay curious!