Key Issues with AI in Healthcare - Dr. Alaa Youssef

Nov 20
Dr. Alaa Youssef is a postdoctoral fellow in AI Ethics and Governance at Stanford School of Medicine. Dr. Youssef holds a PhD in Health System and Population Health from the University of Toronto and is a leading expert in responsible AI implementation and evaluation.
Dr. Youssef leads several AI educational programs and policy initiatives. She co-directs the Stanford AIMI (Center for Artificial Intelligence in Medicine and Imaging) High School Programs, preparing the next generation for careers in AI and medicine. She also serves on several AI policy and education committees at Stanford School of Medicine, shaping the future of AI in healthcare.

Transcript


Don Cameron: [00:00:00] Welcome to the AI Frontiers podcast, a dialogue with tech pioneers, hosted by Stanford University Technology Training. I'm Don Cameron. We are thrilled to have Dr. Alaa Youssef, a postdoctoral fellow in AI ethics and governance at Stanford School of Medicine. Dr. Youssef holds a PhD in Health System and Population Health from the University of Toronto and is a leading expert in responsible AI implementation and evaluation.

Dr. Youssef leads several AI educational programs and policy initiatives. She co-directs the Stanford AIMI (Center for Artificial Intelligence in Medicine and Imaging) High School Programs, preparing the next generation for careers in AI and medicine. [00:01:00] She also serves on several AI policy and education committees at Stanford School of Medicine, shaping the future of AI in healthcare.

Join us as we explore the fascinating world of AI ethics and healthcare with Dr. Alaa Youssef. Dr. Youssef, thank you very much for joining us today. 

Dr. Alaa Youssef: Thank you. It's such a pleasure to be with you today. 

Don Cameron: And we want to lead things off with a question: can you share with us how you got interested in the area of AI and its applications in health care?

Dr. Alaa Youssef: Yeah, this really goes back to my PhD at the University of Toronto, where I was trying to use machine learning to build algorithms that would allow us to predict long-term patient outcomes, and this is to improve population health. Health care has always struggled with improving clinical outcomes, and really one of the core areas where AI can help us is predicting and preventing patients coming to hospital, rather than waiting for them to come to hospital and treating them. So working on that, [00:02:00] I had the vision that I was going to build the algorithm that would help identify bariatric patients and predict their outcomes, and really work with the electronic health record to aggregate the data and work with the data.

I spent six months of my PhD cleaning the data, and I was questioning whether this is the norm or not. What was really insightful is that, as I got to do quantitative and qualitative research, I started to see that the qualitative work and the patient stories were telling things that are not captured in the electronic health record.

And as I started testing different models, I saw that the models were biased. So that got me interested in thinking of the broader picture: how prepared are we as a health system to adopt AI? What do we need to improve about our core infrastructure in order to be able to utilize AI effectively?

[00:03:00] Simply put, the health system is like an upside-down funnel, where what happens at the macro level is going to translate to the clinical level, and that is going to translate to the patient outcome. So if you get the system right, pretty much things follow right. That was the motivation for me to seek postdoctoral opportunities focused on AI training and implementation, because I wanted to get that perspective and that expertise to combine with my understanding of health systems.

Don Cameron: Great, thank you. AI systems can perpetuate and even exacerbate existing biases in healthcare data, leading to unequal treatment across different populations, especially marginalized groups. You've worked extensively on evaluating fairness and bias in large multimodal models. What methodologies do you recommend to ensure fairness in these AI models?

Dr. Alaa Youssef: I think this is a really complex question that has not been answered, and there isn't one answer to it. The field is still grappling with it, whether for general AI algorithms, for large language models, or for multimodal models. [00:04:00] We're still, ourselves in medicine, trying to understand what it means.

What do bias and fairness mean with respect to clinical data? Now, the challenge comes in what we can see as bias and can measure, versus what we don't see and cannot measure. So what does that mean? I can have, for example, in my health setting a patient population that does not include specific groups, because those groups are not in this geographic area or are not captured within this health system.

If I develop a model, it makes sense that the model performance might not translate equivalently across the different groups. But there are also implicit biases, the things that are not captured in the electronic health record, that we as humans don't know about. The machines, as they learn from the corpus of knowledge and data they are given, start to reproduce the same bias patterns that exist.

And so what proxy measures do you use to measure bias? How do you determine that an algorithm is fair? That is not a simple question, and it's one this field is grappling with. [00:05:00] From one of our studies: let's say AI can improve access to health care, but you can have an algorithm that performs at lower accuracy, in terms of sensitivity and specificity, compared to another one used in an academic or otherwise well-resourced health system.

Is it fair to use the lower-performance algorithm because it is going to lead to improvements in access to care, even though it's not going to be equivalent in terms of quality of care to what people get in the other setting? So what is fairness here? There are conceptual questions that have not been addressed in the field yet.

And there are efforts across different groups to start addressing this, for example the Coalition for Health AI. The Coalition for Health AI is a community that has been developed; it's a nonprofit that brings people together from across organizations in health, in the business sector, and in government, [00:06:00] to think about these issues and how we can set standards and measures to help ensure that AI tools are being developed in a reliable way.

So I would say that we are still trying to understand this. But I think as we start to study AI implementation, we start to understand better where the models are going wrong and what the bias issues are. And we still need to answer many questions around what is fair.
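To make the subgroup-performance point above concrete, here is a minimal sketch, not from the interview, of one common way to surface this kind of bias: computing sensitivity and specificity separately for each demographic group and comparing them. The column names and the toy data are illustrative assumptions.

```python
# Minimal sketch: compare a model's sensitivity/specificity across groups.
# Columns "group", "label" (ground truth), and "pred" (model output) are
# illustrative assumptions, not a specific dataset from the conversation.
import pandas as pd

def subgroup_metrics(df: pd.DataFrame) -> pd.DataFrame:
    """Per-group sensitivity and specificity from binary labels and predictions."""
    rows = []
    for group, g in df.groupby("group"):
        tp = int(((g["label"] == 1) & (g["pred"] == 1)).sum())
        fn = int(((g["label"] == 1) & (g["pred"] == 0)).sum())
        tn = int(((g["label"] == 0) & (g["pred"] == 0)).sum())
        fp = int(((g["label"] == 0) & (g["pred"] == 1)).sum())
        rows.append({
            "group": group,
            "n": len(g),
            "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
            "specificity": tn / (tn + fp) if (tn + fp) else float("nan"),
        })
    return pd.DataFrame(rows)

# Toy usage with made-up data; a large gap between groups would be a flag to investigate.
df = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "label": [1, 0, 1, 0, 1, 0, 1, 0],
    "pred":  [1, 0, 1, 0, 1, 1, 0, 0],
})
print(subgroup_metrics(df))
```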

Don Cameron: And how can we make sure the models do not favor certain groups over others? 

Dr. Alaa Youssef: So I think we always have a tendency to think that if some group is not represented, the model is going to perform worse for that group. At the broader level, that's true. But in some cases, it might not translate that way.

For example, in diabetic retinopathy, the model was trained on a large set of data. [00:07:00] In our study, when we interviewed people, one of the interviewees said we got lucky: different races have different levels of pigmentation, and the different levels of pigmentation can affect what the model's computer vision learns about whether there's diabetic retinopathy, which is a disease that indicates the person has diabetes and can lead to blindness, so it needs to be treated.

So one way of deploying that is: can an AI camera, for example, take a picture, detect whether the patient is likely to have diabetic retinopathy, and help move them towards getting ophthalmology care? Now, it turned out that the model did reproduce well across different sites it was not tested on. So this is one example of how AI in some cases will do well.

In some cases, it won't do well. We don't have a map of the use cases where AI is going to be generalizable versus where it will have different or discriminatory performance. To come back to the question of how we think around that and how to evaluate it, [00:08:00] we need to ask ourselves questions around the data: what is the data representative of, what are some of the assumptions about the workflow, how are the data encoded and structured in the electronic health record? And we need to think about how the model is being trained.

There are different architectures, and there are different ways to measure that. How do we think about that? And there are forms of bias where the bias is not only in the development of the device; it's also in the use. We as humans are biased, so can an output that is biased lead the clinician to be biased towards believing something that is not true?

To put that into examples, if that's helpful: if you look at the data, some patients come to the clinic, and within different hospitals their visits can be encoded differently. So a patient visit can mean different things in different hospitals. If I aggregate data across different sites and I don't have a definition of how these variables were defined, [00:09:00] then my variable from one site might not replicate at the other site, and that can lead to model bias.

Another example, downstream bias, is if you have an algorithm that indicates this is a tumor and shows a heat map. We know from cognitive science that the highlighted image might distract you, or make you think there's something here and lose attention if there's another tumor or fracture in the image that you're not seeing.

So bias has multiple aspects, and it's important to think about that.

We need to use not only quantitative measures but also qualitative measures. We need to understand how it really translates in practice, and really understand what ethical issues will arise when it's deployed, because we still don't have knowledge of that; very few solutions have been deployed.

So that's a long answer. 

Don Cameron: And [00:10:00] following up on that, many of the AI systems operate as black boxes, making decisions without clear explanations, and this lack of transparency can hinder trust and accountability. What do you think can be done to help patients and clinicians understand how AI systems reach their conclusions, to maintain trust in this technology?

Dr. Alaa Youssef: So this is a question I think about deeply, because I do a lot of qualitative evaluative interviews with clinicians in radiology and ophthalmology and other domains of medicine. Initially, before large language models and ChatGPT and all of these tools, if you asked a clinician whether they trust an algorithm or not, they sure wanted to see its outputs.

If they use it and it's consistent, they're likely to trust it. But they will ask you, how did the model get to this? Because from the clinicians' perspective, we need to be sure that the algorithm is doing the right thing, and we also might need to explain to patients. [00:11:00] So we need to understand what the model is picking up on to say that this is a cancer or not a cancer.

Now, this problem of explainability has been a can of worms that has been haunting the AI field for a bit. Dr. Nigam Shah has a blog post at HAI where he talks about how interpretability and explainability can mean different things to different people.

The features that are interpretable and explainable to an engineer can be different from the ones the clinician is looking at, and different again for a data scientist who is investigating the model. Now, the most important concept is this: interpretability and explainability of AI does not mean that the AI is always correct.

And this is where people get trapped. Does it mean that because it's explainable, it's more reliable? It can be explainable and still be taking shortcuts. And the other aspect is: explainable in terms of what? [00:12:00] Clinicians would like to think, these are the symptoms that have led to the disease.

This is how we're trained to think in medicine: okay, these are the biological processes, and as a result of these biological processes we are seeing either something normal or something abnormal. So the physician or the clinician is really trying to work out what clinical features the model is picking up on, when it may be picking up on pixel information or other things to make its prediction. Now, the other aspect is that, when it came to the era of large language models, somehow society and the field shifted, because that intuitive ability to interact with the models, and the feeling that they're convincing and confident, has made people treat them as reliable; people don't ask as many questions about explainability and reliability.

It has made the people, more This is reliable, like people don't ask as much questions about explainability and reliability. And so there's a takeaway is that these models are incredible in their capabilities, but we're still also don't know where they are could lead to harm. And we need to While I [00:13:00] lose in them to be careful and conscious and have critical awareness because like autonomous using automation bias can happen.

And then we will be at the stage where, you know really not able to cognitively or be aware of the bias that we are making in our decision making. So that's why it's important to really pause and reflect and understand why I'm asking about interpret ability and explain ability and So this does not, even if it's interpretable, explainable, it should not replace your critical skills of appraising any model, whether large language or small model.

Don Cameron: When it comes to those models, what kinds of applications or tools are clinicians using? 

Dr. Alaa Youssef: So there is a suite of different tools in medicine that has been making strides in getting into clinical care. At the forefront is radiology. We have over 700 or 800 medical devices.

And the reason is that the innovation in deep learning and convolutional neural networks has shown that [00:14:00] AI can be reliable and see things in images that the radiologist might miss. And this brings up an ethical question: if there's something that can improve the standard of care, we should strive towards that.

Now, the problem is, can you validate that it's always going to do better than a radiologist? Another problem and challenge is that we usually test the model against the radiologist on a single task, but the radiologist is usually doing a hundred other tasks while they're doing that one.

So it has been making strides coming into the clinic. It's also being used in radiation therapy, where it has been shown to be really powerful in reducing the radiation dose, because it allows identifying exactly the area where the radiation therapy needs to be localized.

There are so many imaging models that can show where the cancer is. One of the main core use cases is in emergency medicine, to identify patients who are coming in with brain aneurysms or stroke and need immediate attention, [00:15:00] because minutes, seconds actually, can make a difference in their life. So it's powerful in that area. It's also making its way into rheumatology and many other medical specialties. I think the biggest impact has been in the fields of medicine that have been digitized.

So radiology was the beginning, and then it started leading into ophthalmology, dermatology, and pathology for sure. The other aspect of AI models is predictive models that can predict outcomes. These models have been shown to be effective, but they really have different performance.

We can optimize the performance based on the setting, and a model might be effective at one site but not useful at another site. But with large language models, we're also seeing a lot of new use cases, where large language models can be used to improve worker productivity, to summarize, to help with billing, health insurance documentation, and policy.

So there are a lot of things AI can do, and use cases that are going to come up as a result of the generative AI era. 

Don Cameron: And when it comes to policy, [00:16:00] I want to turn to talk a little bit about personal health data. When you're training an AI system, it requires vast amounts of personal health data, raising concerns about PHI, data breaches, and unauthorized access. Are patients fully informed about how their data will be used in practice?

Do they give consent for that usage? 

Dr. Alaa Youssef: Yeah, so definitely patients have to give consent for the use of their data. Part of this is that when patients come to hospital for care, they consent that their data can be responsibly used for research as part of their clinical care.

Now, it's important to note that in most cases you don't train the algorithms using identified information, whether images or text. You actually need to strip out a lot of the identifiable information to make it de-identified. So the 18 variables that are considered personal health information need to be stripped out before we can use the data. [00:17:00]

That's actually one of the components that creates barriers to using large data, because that process of de-identification, and of validating that the data does not include PHI, takes a lot of time. Now, the second aspect relates to whether patients are informed. It's an open question: as AI becomes part of clinical care and we use more of the data to develop algorithms, do we need to reconsider how we talk with patients about the use of their data in AI? And for most hospitals and health systems, actually, the reason there are barriers to building models in health care is that everyone is protective of the data, because no one wants to take on the liability of having a breach.

So that's actually a big consideration in medicine. And I think in our study on organizational readiness, we found that it's important for privacy and governance offices to ensure that, [00:18:00] yes, promoting the public good is good, but not at the risk of anything related to patient privacy; and depending on the models, there can be differences in how data leakage could occur or in the risk of patient data being re-identified.

So these are tough questions to grapple with. 
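As a rough, hedged illustration of the de-identification step described above, the sketch below shows pattern-based redaction of a few PHI-like fields in free text. Real pipelines use validated tooling, cover all 18 HIPAA Safe Harbor identifier categories, and are verified before release; the regexes, field types, and example note here are assumptions for illustration only.

```python
# Illustrative sketch only: redact a few PHI-like patterns from free text.
# A production de-identification pipeline covers all 18 HIPAA Safe Harbor
# identifier categories and is validated; these regexes are assumptions.
import re

PATTERNS = {
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "DATE":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "MRN":   re.compile(r"\bMRN[:\s]*\d+", re.IGNORECASE),
}

def redact(note: str) -> str:
    """Replace each matched identifier with a bracketed placeholder tag."""
    for tag, pattern in PATTERNS.items():
        note = pattern.sub(f"[{tag}]", note)
    return note

example = "Pt seen 03/14/2023, MRN: 998877, call 650-555-0199 or jane.doe@example.org."
print(redact(example))
# -> Pt seen [DATE], [MRN], call [PHONE] or [EMAIL].
```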

Don Cameron: And you discussed Stanford's use cases in your publication on organizational factors in clinical data sharing for artificial intelligence in healthcare. What were some of your key findings? 

Dr. Alaa Youssef: So this study is really interesting because, at Stanford, AIMI has been at the forefront of sharing medical datasets to allow machine learning scientists to use data to develop algorithms.

Now, one of the key barriers is that there isn't enough data available, and many health systems are not willing to share. The question has been why, and what makes health systems ready to share and to adopt AI as part of their infrastructure. So in this study, we looked across the health care ecosystem, dividing it [00:19:00] into the government, academic, nonprofit, and for-profit sectors.

And we wanted to understand, within these different sectors, how open organizations are about data sharing for AI development, and what their readiness is. Starting with that, Stanford was a key use case to get us started and to understand what the core competencies within Stanford Medicine are that promote that data sharing.

And the core finding from the study is that if you want to ensure your health system or your organization is really utilizing AI, it needs to be part of its core mission and vision. Stanford Medicine has a vision towards precision medicine.

It strives for data, it strives for digital, bringing in and using digital health to improve care and lead to precision medicine, and so data and the scientific research around it are super important as part of an academic institution, because it's [00:20:00] aligned. There is a lot of leadership and system support to dedicate resources, governance, infrastructure, and people to work on these types of problems. And that's what makes Stanford ready to be at the forefront: having the expertise for integration within health systems, having data scientists who can aggregate the data and advise on data for building the models, having robust privacy and governance, and having the funding and resources to do that.

Now, many health systems do not have that. And even among health systems that have resources, some organizations are very restrictive towards releasing data. Why could this be? One thing is that they want to release data, they want to promote the public good, but they want to first do their own scientific discovery and be recognized by name for what they have done on this data.

So [00:21:00] here is where you start to see differences. Resources are a key component, organizational capability is fundamental, but what really makes the switch between organizations that are willing to share and organizations that are not willing to share is motivation. That was the core component; alignment was key.

Now, a very interesting aspect that we saw made a difference across some other use cases has been incentives. If you're able to provide the right incentives, you're able to push organizations that are not willing to share, or not ready, to get there. One of the core examples comes from COVID-19.

Government and NIH funding towards data-sharing consortia and AI required that there be a large number of diverse centers, and that got people to feel: we need to get our infrastructure in place, we need to be able to collaborate with the different centers, and we need to find a way to collectively aggregate the data in a secure way.

And so a lot of minds came together to put that in place, [00:22:00] and that actually promoted building the first representative medical imaging datasets, such as MIDRC, for example, that are representative of the U.S. population, because so much specific effort was put into that. Now you come to the for-profit sector.

In the for-profit sector, some are so-called data brokers. For them to share data, it needs to be, again, part of their motivation, and they can be sharing this data for a financial reason; it's part of selling and being able to gather data.

But others can be receiving data while not investing in data sharing, because they're not the owners of the data, they are just the users. They have contracts with the health systems to use the data, to build cloud infrastructure, to build pipelines, but not the capability to do data sharing.

And so the other key finding for us was that who really [00:23:00] controls the release of data, and the development and advances of AI in medicine, is truly the healthcare centers, because they have the patient data. So how do you get systems to be collaborative and willing to share? That is the part about incentives that should be looked at further.

To recap, I think Stanford is a different, unique example, but it has been very helpful for contrasting many other use cases and understanding what some of the barriers to data sharing are. 

Don Cameron: And when it comes to barriers in using tools like ChatGPT, especially in healthcare, is there a workaround, or is there something that the School of Medicine is doing to help with the usage of PHI data with those kinds of tools?

Dr. Alaa Youssef: Yeah. So I think it's no surprise that the ChatGPT-style tools are very powerful and can really do a lot of important tasks that reduce time significantly. They can actually [00:24:00] improve the performance of people, whether workers or clinicians.

Now, one of the biggest concerns is that you don't want your data being leaked from these big models and then becoming part of training data. So what Stanford Medicine and many other organizations have recently released is a secure GPT platform within the organization itself, and it makes sure that whatever data you're using within Stanford stays within Stanford and is not being used for any other purposes.

And I have to say that this has had such a big impact. We don't really understand how large language models leak data, but we know that they leak data. So having a secure platform that allows us to leverage these tools for the tasks we need to do, of course not using PHI deliberately, but having that safe boundary, that safe space for using these generative tools, is very powerful and has been very useful. [00:25:00]
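The point about a secure, in-house GPT platform is essentially about where prompts and outputs travel. As a hedged illustration only, not a description of Stanford's actual system, the sketch below shows the common pattern of pointing an OpenAI-compatible client at an organization-internal gateway instead of the public API; the URL, model name, and credential variable are hypothetical.

```python
# Illustrative sketch only: send OpenAI-compatible requests to a hypothetical
# organization-internal gateway so prompts and outputs stay inside the network.
# The base_url, model name, and environment variable are assumptions.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://secure-gpt.example-hospital.org/v1",  # hypothetical internal endpoint
    api_key=os.environ["INTERNAL_GPT_API_KEY"],              # hypothetical credential
)

response = client.chat.completions.create(
    model="internal-gpt",  # whatever model the internal platform exposes
    messages=[
        {"role": "system", "content": "You summarize workflow documentation. Do not include PHI."},
        {"role": "user", "content": "Summarize the steps in our prior-authorization workflow."},
    ],
)
print(response.choices[0].message.content)
```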

Don Cameron: And on top of that, what are some steps that Stanford Medicine is taking to protect patient privacy? 

Dr. Alaa Youssef: In our study, when we interviewed leaders from governance and privacy at Stanford, they showed that they don't only think about data requests.

Let's say, for example, someone wants to release data for research, or they want to collaborate with a startup or another organization. As the privacy office reviews that, it considers a number of factors: not only the de-identification of the data, but it also evaluates any re-identification or privacy risks that could occur.

And based on that, it informs its decision. Something that was very interesting and surprising was [00:26:00] that when collaborating with vendors or people in the for-profit sector, they apply a bit stricter regulation, because they feel that as a nonprofit, or as a health system in general, their moral obligation is to provide care and protect patient privacy.

And so they really care about that and give deliberate attention to which vendors they collaborate with: what are the terms on how they can use the data, what are the obligations on what they cannot do with the data, and also how likely these vendors are to be compliant, based on their behavior in the market.

So it's not only the aspect of de-identifying the data, but many more complex aspects related to privacy and compliance that the privacy office needs to consider before providing authorization for patient data release. 

Don Cameron: Okay. Thank you. And determining who is accountable when an AI system makes an error is complex.

How are clinicians and healthcare [00:27:00] institutions addressing this issue? 

Dr. Alaa Youssef: So I have to say that this is another unsolved question. Many of these questions actually seem to be unsolved. This is a big concern for clinicians and patients, because patients want to ensure that the care they get is safe.

And if harm occurs, someone is accountable. But for clinicians, we're really pushing them into a middle zone where they are receiving a product, and we're expecting them to be accountable for mistakes that occur with devices when they have not seen how those devices were developed.

And they do not have the time to vet, in depth, what is coming out of these models. Their role is to provide patient care. But we hold them accountable, because we say these devices are supposed to assist you in decision making and not to replace decision making.

But here's the point: at what point do you draw the line between assistance and replacement in decision making? [00:28:00] If the algorithm shows me that there's a tumor and I keep deliberating, even if I don't make the wrong decision, if I spend more time on one image than I normally would have, then I have not added efficiency to my workflow.

If the AI says there's a tumor and the radiologist or the clinician says there's no tumor, but then a few years later the patient comes in and says, hey, the AI said there is a tumor, you said there's no tumor, and I now have cancer, who's liable? So these are questions people are grappling with, and it's complex, because are we holding physicians to be experts in vetting the system, experts in understanding what is going on in a workflow they don't see, and also knowledgeable of the clinical phenomena and decision making?

So I feel it's a little bit of an unfair, evolving conversation, and there's research by many faculty at Stanford looking into the governance and law around accountability for AI. But the [00:29:00] main thing to consider is: what are we holding physicians accountable for, and why are we not holding vendors and AI developers accountable for errors or defects in the medical devices?

Don Cameron: Do you envision the government stepping in with regulation and oversight to help address this issue? 

Dr. Alaa Youssef: I think the government has been putting out executive orders and initiatives towards responsible AI implementation, but it's going to vary how this framework gets translated across different states, different hospitals, health systems, and organizations.

We have seen from the Biden executive order that it puts in place principles of what responsible AI should be, and many of the health systems are trying to re-envision what their AI implementation strategy would be within that lens of responsible AI. But I have to say we don't really have insight yet. Having [00:30:00] regulation is great; showing that the regulation creates impact and does not restrain is another thing.

And in medicine we have not yet had the ability to see the impact of this regulation on AI, but there are definitely considerable efforts on how different types of AI should be governed, what should be things the FDA approves versus things that do not fall within the purview of the FDA, and so on.

Don Cameron: And have there been any studies on the impact of AI on the doctor-patient relationship? 

Dr. Alaa Youssef: Yeah, there have actually been studies. It's not really about the physician-patient relationship per se, but one of the recent studies in the field, in the era of generative AI, got some attention in the literature: it showed physicians' responses and AI-generated responses to patients, and [00:31:00] the patients preferred the AI-generated responses because they interpreted them to be more empathetic.

And so what this puts in front of us is the work ahead for medicine: to think about how we are actually going to make sure our interactions with patients stay patient-centered, so that the patient-physician interaction is always more valuable than any AI-patient interaction.

Don Cameron: Got it. Another important area of concern involves the equity of access to the new technology. The availability of AI technologies may be limited to well-funded health care institutions, creating disparities in access to advanced medical care, and there will undoubtedly be global inequality. How do you feel this issue is being addressed?

Dr. Alaa Youssef: So unfortunately, this issue is not being addressed at all. Everyone wants to work on the AI, because that's the shiny [00:32:00] object. But what is underneath the shiny object, what makes this object shine, is the infrastructure. And what is the benefit for companies to build that infrastructure?

They are interested in building products and selling those products. Health systems are going to invest in building infrastructure if they have the resources, but when many health systems, excluding the academic centers, are lagging behind in terms of their profit margins and their finances, they're not likely to dedicate a lot of money to those resources when they're struggling to meet their budgets.

So unfortunately, that puts us in a quandary where institutions that have the infrastructure are really able to adopt AI, and organizations that do not have that infrastructure are lagging behind. The most important component, I think, is one that we tend to miss as we [00:33:00] think of AI and the implementation of AI.

We really learned at Stanford Medicine how important it is to have the infrastructure, the enterprise and clinical data infrastructure, ready to allow the clinical data to be used. If you think about data flowing as a pipeline, you want the data, as it comes from patient care, to be de-identified, curated, available for research, and available for AI predictions. You want that pipeline running. But if there is a gap in your data and your infrastructure, or a barrier keeping this data from being ready, then you cannot really utilize AI to make predictions and improve your care.

And building that infrastructure requires resources, and who puts those resources in is not something that has been settled, so it's leading to some disparities for sure. 
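To make the pipeline framing above concrete, here is a minimal, hypothetical sketch of chaining the stages described (de-identify, curate, predict); every stage and field name is a placeholder assumption, not a real system.

```python
# Minimal sketch of the "data as a pipeline" framing: records flow through
# de-identification, curation, and prediction stages. All stages are placeholders.
from typing import Callable, Iterable, List

Record = dict
Stage = Callable[[Record], Record]

def deidentify(record: Record) -> Record:
    # Placeholder: drop direct identifiers (a real step uses validated tooling).
    return {k: v for k, v in record.items() if k not in {"name", "mrn", "dob"}}

def curate(record: Record) -> Record:
    # Placeholder: normalize types/units so variables mean the same thing across sites.
    record["a1c"] = float(record["a1c"])
    return record

def predict_risk(record: Record) -> Record:
    # Placeholder "model": flag elevated risk from a single toy feature.
    record["high_risk"] = record["a1c"] >= 8.0
    return record

def run_pipeline(records: Iterable[Record], stages: List[Stage]) -> List[Record]:
    out = []
    for rec in records:
        for stage in stages:
            rec = stage(rec)
        out.append(rec)
    return out

records = [{"name": "Jane Doe", "mrn": "998877", "dob": "1960-01-01", "a1c": "8.4"}]
print(run_pipeline(records, [deidentify, curate, predict_risk]))
# -> [{'a1c': 8.4, 'high_risk': True}]
```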

Don Cameron: Are there any quality standards or safety protocols associated with the application of [00:34:00] AI in health care? 

Dr. Alaa Youssef: I think there are increasingly starting to be quality standards and best practices.

The Coalition for Health AI, for example, has been working on building standards for AI evaluation and assurance of its quality, and NIST and other government entities have also released some things related to quality standards for AI. So there are some efforts. It's still ongoing.

Don Cameron: And what do you think will be the impact of AI on future employment in health care? Do you feel that there will be displacement of certain roles? 

Dr. Alaa Youssef: I don't think there will be displacement. I envision that, in fact, AI will make health care more efficient for the first time, because the biggest barrier, linking back to my PhD, is health care administration. If we can empower people to do the jobs they like and divert the tasks that are tedious or [00:35:00] not enjoyable towards AI, if it's reliable, then we have an efficient system that operates around productivity and workforce fulfillment and reduces burnout, and that is actually going to improve the economic cycle of health care.

More than anything, because I think at the end of the day it comes down to whether we feel the work we're doing is meaningful and enjoyable or not. If it's not meaningful and enjoyable, we do the work, but we're utilizing 60 to 70 percent of our intelligence rather than our full 90 to 100 percent.

So it's actually a way for us to move forward, but we need to be prepared to think about what the next steps are and what other things we will need to be skilled at, in order to ensure that AI works to our advantage. 

Don Cameron: And how is AI technology impacting education and retraining of healthcare professionals?

Dr. Alaa Youssef: So AI is impacting education across the board. Actually, before [00:36:00] medicine, it impacted education at a broader level, when ChatGPT was released and there were a lot of conversations around schools and how ChatGPT and the use of these generative models should be regulated in classrooms.

What to ban, what not to ban. But then we have seen from Khan Academy and similar platforms how AI can actually be a personalized mentor and tutor for people. We have seen how AI can be used to improve learning outcomes, with teachers and professors asking the students to try the assignment with the generative algorithm and come ready for class discussion.

And this is where we need to flip things, because we need to have that aspect where we are more critical, having that conversational, interactive, experiential part of learning, and we're really using the AI tools to help us think deeper and [00:37:00] get at the core issues.

So I feel it's definitely going to do that job in medicine as well. There are efforts to integrate it within the medical curriculum for sure. Many medical schools have developed curricula around AI, and medical students have been interested in research internships related to AI. Many people graduating from high school have their attention turned towards AI, so there is a lot of interest in AI across all domains and all fields. But in medicine specifically, we really still have a long way to go in bringing the knowledge from the health system of how AI is being used effectively into training the next generation of physicians, because the content is great.

But content does not lead to translatable skills. You need the knowledge of where AI fails, and you need to educate physicians on where they need to pay attention when they're using AI. 

Don Cameron: And when it comes to educating the next generation, you have your summer camp coming up, or [00:38:00] is it happening over the next few weeks? Can you talk a little bit about that?

Dr. Alaa Youssef: It's already happening. So the Stanford AIMI High School Program was founded by me and my co-director Johanna Kim, the executive director of the AIMI Center, and we have always felt a passion for expanding outreach with AI opportunities to students and learners. One of the ways we thought about doing that is by building this educational program.

And the reason we wanted to do that is that we recognized, by doing research in health and AI, that there's a different set of problems, and working with health data is really unique. So there are many opportunities to inspire people to think about their careers and where they would be interested, in a multitude of ways.

We wanted the boot camp to be a way to expose them to the challenges we face in implementing and developing AI, and to let them speak with speakers who had different trajectories in AI and [00:39:00] different careers, all of them successful, to show that there isn't one road to get to where you want to go. There's also a hands-on aspect of working on a project and really getting to experience the teamwork of working on an AI project. And really, that's with the goal of improving diversity in STEM fields. We have been running the program for three years, and I have to say it has been very successful.

We have moved from having 108 applications in the first year, with 25 acceptances, to 1,500 applications last year, and still we only increased it to 27. This year we have over 2,000 applications, and we have 70 students split across two programs, the boot camp and the internship.

So it has been really exciting and rewarding to work on that. It's very interesting to work with this population and see how much eagerness and interest they have in really learning about AI and working on problems in medicine. So I think the future is bright, and I [00:40:00] reflect on it from my own experience: when you're coming out of high school and moving to university, you always feel, am I taking the right path?

I really don't know where I want to go. But as you go along in your career and you start to see people and see different trajectories, you can start to see the possibilities. And so we wanted to provide this early on, to allow people to have a broader vision and explore really deeply what they feel passionate about.

Don Cameron: Okay, that's great to hear. And thank you so much, Dr. Alaa Youssef, for joining us today and sharing your insights into AI ethics and governance. Your work is truly inspiring and vital to improving the future of healthcare. That brings us to the end of this episode of AI Frontiers: A Dialogue with Tech Pioneers.

We hope you enjoyed our conversation with Dr. Youssef and gained a deeper understanding of the ethical challenges and innovations of AI in healthcare. Thank you to Dr. Youssef for [00:41:00] sharing your experience with us. To learn more about our research, check the links in the show notes. Thank you very much for listening, and until next time, stay curious and keep exploring the frontiers of AI.

Dr. Alaa Youssef: Thank you so much for having me.