Dr. Ezzaddin Al Wahsh is a distinguished figure in the realm of healthcare innovation, seamlessly merging his expertise in clinical informatics with a profound dedication to humanitarian causes.
With an impressive educational background, including an MD from the University of Jordan and an MBA in Global Innovation from California State University, East Bay, Al Wahsh’s journey is marked by a commitment to advancing healthcare through technology.
Dr. Ezzaddin Al Wahsh’s fellowship at the Mayo Clinic solidified his role as a leader, where he not only developed oversight tools for AI product lifecycles but also addressed alert fatigue in clinical settings. His work with Doctors Without Borders highlights his dedication to global health, with a focus on vulnerable populations and equitable AI solutions. This interview explores Al Wahsh’s insights into the future of AI in medicine, the ethical challenges it presents, and his approach to integrating humanitarian values in digital health.
Background and Early Career
How has your background in both clinical medicine and AI positioned you to lead innovations in healthcare delivery in 2025?
While I’m still in the early stages of this journey, my background in clinical medicine, combined with growing experience in AI and digital health, gives me a grounded yet forward-looking perspective on healthcare innovation. My work is guided by three core principles: patient safety, resource stewardship, and responsible use of technology. These aren’t just abstract goals—they are the foundation of how I approach healthcare informatics today. My time with Doctors Without Borders, as well as my ongoing humanitarian and anti-war advocacy, deeply informs this mission. Whether designing decision support or evaluating system impacts, I constantly reflect on how technology can serve, not replace, human care, especially in fragile or underserved contexts.
What recent breakthroughs or tools have you developed that are actively shaping the way physicians interact with AI in clinical settings?
While I haven’t built this tool from scratch, I collaborated during my fellowship at the Mayo Clinic on an innovative framework that evaluates the degree to which an AI system is truly responsible. This tool enables clinicians and developers to identify and address ethical deficiencies in AI models across various stages of development and deployment. It’s a practical resource for guiding safer, more equitable AI use in clinical workflows. Contributing to such work has shown me how evaluative transparency can be just as transformative as building new tools, because knowing what not to trust in AI is just as important as knowing what to adopt.
Balancing Humanitarian Missions and Digital Transformation
How do you balance your humanitarian mission with your role as a leader in digital health transformation?
For me, there’s no separation. Humanitarian values serve as my guiding compass in the digital health sector. Any new technology must be evaluated through the lens of empathy, access, and equity. I constantly ask myself: How will this impact those who are vulnerable? Are we amplifying care or introducing new harm? Whether I’m consulting on AI ethics or reviewing clinical workflows, I let the principles of human dignity and justice guide my decisions.
What’s one lesson from your work with underserved populations that directly influences your approach to developing AI solutions?
One hard truth is that AI can amplify inequality if we’re not vigilant. Populations like the elderly, the mentally ill, or those without digital access are often underrepresented in training data, and that creates risks of misdiagnosis, bias, or exclusion. My experience has taught me that ethical AI development must incorporate the voices and needs of underserved and vulnerable populations. We can’t build inclusive systems unless we first acknowledge the digital divide and then actively work to bridge it.
Addressing Ethical and Practical Challenges
How are you ensuring ethical oversight and transparency as AI becomes more deeply embedded in patient care workflows?
At my level, I keep it simple and grounded. I focus on education and communication, explaining to patients and peers why and how AI is being used. Physicians must serve as advocates not just for patients but for medicine itself, providing the checks and balances on emerging technologies. Whether it’s a new drug, a diagnostic tool, or an algorithm, we must question its purpose, safety, and fairness. I always start with the “why,” and I encourage open dialogue to ensure the technology’s mission aligns with core medical values.
How are you addressing physician burnout and alert fatigue through informatics design this year, and what impact has it had?
I’ve spent much of the past year researching alert interruptions, their causes and consequences, and how they can contribute to fatigue. This work has culminated in a manuscript that examines how over-alerting in clinical systems impairs physician decision-making and potentially compromises patient safety. Through this research, I aim to identify which alerts are most prone to false positives and create a foundation for smarter alert design. It’s a long-term commitment, rooted in my informatics training, and I’m proud to contribute to a growing conversation about cognitive load, human factors, and digital safety in medicine.
Global Collaboration and AI Innovation
Can you speak to the role of global collaboration in the future of clinical informatics, and how you’re contributing to that vision now?
Global collaboration is essential—but I’m still early in my journey. Right now, my focus is on knowledge sharing and grassroots education. Whether it’s giving lectures abroad or engaging with peers from other countries, I use every opportunity to advocate for responsible informatics. During a recent visit to Jordan, I spent time discussing digital health challenges and opportunities with local physicians and community members. To me, every conversation is a step toward building a more connected and informed global health ecosystem.
What do you see as the biggest misconception about AI in medicine today, and how are you working to shift that narrative?
The biggest misconception is that AI is a universal solution. In reality, it’s a sensitive and limited tool that must be used with caution. People often assume it works reliably across all settings, but if we don’t understand its design—who built it, who it was trained on, and for what purpose—we risk drift, inaccuracy, or unintended harm. I advocate for building strong “technology cards” for AI systems—essentially, transparency documents that clarify scope, context, and limitations. We need to treat AI with the same rigor and skepticism we apply to any other clinical intervention.
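To make the "technology card" idea concrete, here is a minimal sketch of what such a transparency record might capture in code. All field names and the example system are illustrative assumptions, not part of any standard or of Dr. Al Wahsh's actual framework.

```python
from dataclasses import dataclass, field

@dataclass
class TechnologyCard:
    """Illustrative transparency record for a clinical AI system."""
    system_name: str
    intended_scope: str           # what clinical task the model is meant for
    training_population: str      # who the model was trained on
    developer: str                # who built it, and for what purpose
    known_limitations: list[str] = field(default_factory=list)

    def summary(self) -> str:
        limits = "; ".join(self.known_limitations) or "none documented"
        return (f"{self.system_name}: scope={self.intended_scope}; "
                f"trained on {self.training_population}; limitations: {limits}")

# Hypothetical example: documenting a sepsis-alert model before deployment
card = TechnologyCard(
    system_name="SepsisAlert v2",
    intended_scope="adult inpatient sepsis risk flagging",
    training_population="adults at a single academic center, 2015-2020",
    developer="hypothetical vendor; built for commercial deployment",
    known_limitations=["not validated in pediatrics",
                       "drift after 2020 not assessed"],
)
print(card.summary())
```

A record like this makes scope, context, and limitations reviewable up front, so clinicians can ask the "who built it, who it was trained on, and for what purpose" questions before trusting an output.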
Innovation and Safety in Healthcare Technology
As someone guiding product lifecycle strategies, what principles are non-negotiable when designing for both innovation and safety?
Value is my non-negotiable. If a product or process doesn’t add real value to care, we must pause and reassess. Over the past six months, I’ve been involved in developing a discharge planning tool designed to enhance early discharge workflows. Throughout this work, we’ve emphasized co-ownership, continuous feedback, and empathy for the end user. I believe in designing with a mindset of longevity and resilience, making sure the product serves patients and clinicians long after deployment. Above all, we don’t give up easily. Innovation often meets resistance, but persistence and humility are part of the process.
Looking forward, what personal or professional legacy do you hope to leave in the intersection of human care and machine intelligence?
I dream of a world where AI is ethical, accessible, safe, and governed for the public good. Too much of today’s AI lies in the hands of industry, and we urgently need nonprofit and humanitarian-led frameworks that ensure fairness, transparency, and accountability. My long-term goal is to contribute to building a company that develops responsible AI—a space where patient safety, human rights, and ethics are foundational, not afterthoughts. I hope to be remembered as someone who helped humanize technology and brought compassion back into the innovation process.
Dr. Ezzaddin Al Wahsh remains a pivotal figure in the evolution of AI in healthcare. His unwavering commitment to ethical practices and global collaboration sets a standard for integrating technology into medicine with care and responsibility.