As artificial intelligence reshapes the software industry, its influence is increasingly felt in clinical applications and the design and development of medical software. In a recent guest interview hosted by Professor Xenophon Papademetris (Department of Biomedical Informatics and Data Science, Yale School of Medicine), recorded as supplementary material for Yale's Certificate Program in Medical Software and Medical Artificial Intelligence, Orthogonal's Chief Technology Officer, Larkin Lowrey, offered a pragmatic look at what AI, software engineering discipline, and regulatory rigor mean for MedTech companies today.
Lowrey brings decades of experience building, leading, and transforming engineering teams in IoT and healthcare environments. His insights are especially valuable for those building modern, compliant, scalable digital ecosystems around connected medical devices.
Lowrey began his career in IoT, developing platforms for telematics (the integrated use of telecommunications and informatics to transmit data from remote devices), industrial sensors, and cloud analytics. His entry into MedTech came “somewhat by accident,” but the technical overlap was significant. What wasn’t the same? The regulatory expectations.
“Outside of MedTech, the floor [on quality] is all the way to the bottom. You can have the most dreadful quality you could possibly imagine… Whereas within the MedTech space, the regulators require a much narrower band of quality, but certainly a quality floor.”
That regulatory constraint changes how teams think about engineering. While many best practices from consumer and industrial tech still apply, they must be adapted to more structured, auditable, and reproducible workflows.
When people think about AI in medical software, they tend to focus on diagnostic algorithms or clinical decision-support tools. However, the latest AI tools and techniques are proving to be just as disruptive behind the scenes in software development.
Development teams utilize tools like GitHub Copilot to generate code, assist with logic scaffolding, and alleviate the fatigue associated with repetitive programming tasks.
“You can think of it as your junior software engineer… They have a lot of extremely sharp and specific skills… but they lack the experience and only time brings the ability to understand more comprehensively what the software is doing.”
Lowrey emphasized treating AI as a teammate, not a replacement. Developers remain accountable for reviewing, validating, and refining outputs. When used effectively, these tools can reduce the time spent on routine tasks and enhance focus without compromising quality.
AI is also making inroads in product design and localization. Lowrey described how his teams utilize tools like ChatGPT to generate multilingual software strings during early development. Human review is still necessary, especially to ensure cultural sensitivity and accuracy, but the effort required to perform this work has decreased significantly.
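As a rough illustration of that workflow, the sketch below batch-drafts translations for UI strings with the OpenAI Python client. The model name, prompt, and strings are assumptions for illustration, not a description of the actual pipeline Lowrey's teams use, and the output is explicitly a draft that a human reviewer still signs off on.

```python
# Hypothetical sketch: batch-drafting UI string translations with an LLM.
# Assumes the official `openai` Python client; model name, prompt, and
# strings are illustrative, not Lowrey's actual pipeline.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

UI_STRINGS = {
    "save_button": "Save reading",
    "sync_error": "Device sync failed. Check your connection.",
}

def draft_translations(strings: dict[str, str], target_language: str) -> dict[str, str]:
    """Produce first-draft translations; a human reviewer still approves them."""
    drafts = {}
    for key, text in strings.items():
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": f"Translate medical-device UI text into {target_language}. "
                            "Keep it short, neutral, and suitable for patients."},
                {"role": "user", "content": text},
            ],
        )
        drafts[key] = response.choices[0].message.content.strip()
    return drafts

if __name__ == "__main__":
    print(draft_translations(UI_STRINGS, "German"))
```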
Similarly, generative image tools can produce UI assets or instructional visuals that would once have required external designers or searching through stock libraries. These shifts mainly help small teams deliver polished, professional-looking applications more efficiently.
With more and more medical devices collecting patient data, managing that data effectively and ethically is critical.
“You have to know for every bit of data that we’re using where it came from… the who, what, where, why, and when.”
That means:

- knowing who (or which device) generated each data point;
- knowing what the value represents and why it was collected;
- knowing where and when it was captured;
- preserving that provenance as the data travels through the system.
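One minimal way to make that concrete is to attach a provenance record to every stored value, so the answers travel with the data. The sketch below is purely illustrative: the field names map Lowrey's "who, what, where, why, and when" onto a structure, not a schema from the interview.

```python
# Illustrative provenance record: field names are assumptions, mapping
# "who, what, where, why, and when" onto a structure that travels
# with each data point.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class Provenance:
    who: str        # originating device or user, e.g. a device serial number
    what: str       # what the value represents, e.g. "heart_rate_bpm"
    where: str      # capture context, e.g. "home", "clinic", app version
    why: str        # purpose of collection, e.g. "therapy_monitoring"
    when: datetime  # capture timestamp, in UTC

@dataclass(frozen=True)
class DataPoint:
    value: float
    provenance: Provenance

reading = DataPoint(
    value=72.0,
    provenance=Provenance(
        who="device-SN-00142",
        what="heart_rate_bpm",
        where="home / app v3.2.1",
        why="therapy_monitoring",
        when=datetime.now(timezone.utc),
    ),
)
```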
Lowrey noted that in AI-driven products, data is no longer just a byproduct of the solution; it’s also the foundation. In this new world, poor data practices compromise privacy and undermine system reliability.
A theme he frequently returned to was how poor data governance isn’t just a technical issue, but a business risk. He urged companies to treat data as a strategic asset:
“If you’re not collecting the data… you might as well take a wheelbarrow full of cash out in the parking lot and set it on fire.”
He added that many organizations are “very paranoid about data” and limit collection to the bare minimum. While well-intentioned, this defensive posture can hinder innovation. Designing systems that support safe, traceable, and broad data use is not only key to unlocking personalization, clinical insight, and long-term value; it also addresses the very risks, concerns, and expectations that make data governance a focal point in the first place.
A persistent challenge in MedTech is integrating the experimental, iterative style of data science with the rigor required to release safe and scalable software.
Lowrey encourages teams to instill software engineering discipline into data science workflows, not to slow them down, but to ensure sustainability and scale.
That includes:

- keeping code, data, and models under version control;
- reviewing and testing analysis code, not just inspecting its results;
- pinning dependencies so environments can be rebuilt exactly;
- automating runs so experiments are repeatable end to end.
This discipline becomes especially important when transitioning code from notebooks (the iterative exploration of data and models) to production environments. Reproducibility, not just creativity, must be a goal.
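A minimal sketch of that transition, with illustrative names and config values: the same logic that might have lived in scattered notebook cells becomes a seeded, parameterized function, so an identical config reproduces an identical result.

```python
# Hypothetical sketch: notebook-style exploration restructured as a
# seeded, parameterized function. Config values and names are
# illustrative; the point is determinism and auditability.
import json
import random

CONFIG = {"seed": 42, "n_samples": 1000, "threshold": 0.8}

def run_experiment(config: dict) -> dict:
    """Deterministic entry point: same config in, same results out."""
    rng = random.Random(config["seed"])  # seeded local RNG, no global state
    samples = [rng.random() for _ in range(config["n_samples"])]
    hits = sum(s > config["threshold"] for s in samples)
    return {"config": config, "hit_rate": hits / config["n_samples"]}

if __name__ == "__main__":
    # The config travels with the result, so every run is reproducible.
    print(json.dumps(run_experiment(CONFIG), indent=2))
```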
Many organizations struggle with the transition from model development to deployment. One major obstacle Lowrey emphasized is the incompatibility between data science tools and production environments:
“The data scientists will train the model… it will be basically a PyTorch project… The startup time on PyTorch is horrendous, and so it’s not something that can be deployed effectively.”
These missteps aren’t always technical. Often, data science and engineering teams operate in silos. Avoiding what he called “own goals” requires shared accountability and deployment-aware thinking from the start.
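One common mitigation for the startup-cost problem described above (a general industry pattern, not necessarily what Lowrey's teams do) is to export the trained PyTorch model to a portable format such as ONNX, so production serves it with a lightweight runtime instead of the full PyTorch stack:

```python
# Sketch of one common hand-off pattern, assuming a trained PyTorch
# model: export to ONNX at training time, then serve with the much
# lighter onnxruntime. The model here is a toy stand-in.
import torch

class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 1)

    def forward(self, x):
        return torch.sigmoid(self.linear(x))

model = TinyModel().eval()
dummy_input = torch.randn(1, 4)
torch.onnx.export(model, dummy_input, "model.onnx",
                  input_names=["features"], output_names=["score"])

# --- In production: no PyTorch import required ---
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")
score = session.run(["score"], {"features": np.random.rand(1, 4).astype(np.float32)})
print(score)
```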
As AI modules become embedded in broader software systems, traditional engineering practices must remain foundational. Teams must define the module’s purpose, establish input/output contracts, and set clear performance expectations.
Even if the underlying machine learning (ML) is experimental, it must reside within a framework that supports verification and integration. Here, ‘verification’ is used in a general sense, referring to the assessment of system behavior and performance, not the formal verification and validation processes defined in a regulated MedTech context. Otherwise, teams risk building systems that can’t be assessed or trusted, especially in environments where reliability is critical.
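A lightweight sketch of such a contract follows, with assumed names and thresholds: callers depend on explicit input ranges, a guaranteed output range, and a declared latency budget, not on the model's internals.

```python
# Illustrative input/output contract for an ML module. Names and
# thresholds are assumptions; callers depend on this interface,
# not on whichever model currently sits behind it.
from dataclasses import dataclass

@dataclass(frozen=True)
class RiskRequest:
    heart_rate_bpm: float   # contract: expected range 20-250
    spo2_percent: float     # contract: expected range 50-100

@dataclass(frozen=True)
class RiskResponse:
    risk_score: float       # contract: guaranteed in [0.0, 1.0]
    model_version: str      # pinned for traceability

# Declared, testable performance expectation the module must meet
# regardless of which model backs it (e.g. checked in an integration test).
MAX_LATENCY_MS = 200

def assess_risk(request: RiskRequest) -> RiskResponse:
    if not (20 <= request.heart_rate_bpm <= 250):
        raise ValueError("heart_rate_bpm outside contract range")
    score = min(1.0, max(0.0, request.heart_rate_bpm / 250))  # stand-in model
    return RiskResponse(risk_score=score, model_version="0.1.0-demo")
```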
One of the most challenging aspects of AI in regulated environments is its inherent unpredictability. Unlike traditional software, AI systems may behave inconsistently or fail in non-reproducible ways. As Lowrey explained:
“You can only be certain of the results that the AI provides you in your validation training set, right?… You show it one additional data set and it could be 100% wrong.”
Risk management must assume failure and focus on designing systems that fail safely and visibly.
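As a minimal sketch of failing safely and visibly (names and thresholds assumed, not drawn from the interview), inference can be wrapped so that out-of-contract outputs never flow downstream silently: they are logged and replaced with an explicit fallback that downstream code must handle.

```python
# Hypothetical guardrail: the wrapper, not the model, decides what
# reaches downstream systems. Bad outputs fail visibly (logged) and
# safely (explicit fallback, never a silent wrong value).
import logging

logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger("inference_guard")

def guarded_predict(model_fn, features, fallback=None):
    """Run model_fn, but enforce the output contract around it."""
    try:
        score = model_fn(features)
    except Exception:
        logger.exception("Model raised; returning fallback")
        return fallback, False
    if not isinstance(score, float) or not (0.0 <= score <= 1.0):
        logger.warning("Out-of-contract output %r; returning fallback", score)
        return fallback, False
    return score, True

# Usage: downstream code must check the validity flag explicitly.
score, ok = guarded_predict(lambda x: 1.7, features=[0.2, 0.9])
if not ok:
    print("Flagged for human review instead of automated action.")
```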
The conversation closed with a provocative reflection: as AI matures, the center of gravity in software may shift.
“We’re talking about things that we currently understand as activities, right? And how AI will help us in performing these activities. Those… are very human-centric.”
Lowrey’s key insight is that future software processes may no longer mirror human logic but will still need to produce reliable, validated outcomes. That shift requires both technical adaptability and cultural openness.
This transformation carries business implications. As AI handles more of the software delivery work, Lowrey argued that competitive advantage will come not from headcount, but from domain fluency and regulatory expertise:
“I don’t know that our future can be one where we’re relying on having big teams of people… I think we’re being forced into a position where we have to be selling expertise, not manpower.”
This may even strengthen onshore teams whose value proposition lies in trust and knowledge, not just labor capacity.
Human factors play a crucial role in ensuring that AI tools serve diverse users safely and effectively. Lowrey highlighted the importance of designing for different types of clinical users, from data-driven clinicians to those seeking speed and simplicity:
“We have clinicians… they want to see raw data, they want to see the numbers… Then we have other physicians… looking for what is the least amount of effort, cognitive load.”
Designing for both and communicating uncertainty clearly will be crucial as AI becomes increasingly integrated into clinical workflows.
The insights from this session align closely with the core principles emphasized in Yale’s Medical Software and Medical Artificial Intelligence Certificate Program, namely the intersection of rigorous software engineering, real-world AI application, healthcare domain understanding, and regulatory fluency.
Lowrey’s perspective echoes many of the program’s foundational ideas, such as building AI-enabled medical software responsibly, validating thoroughly, and designing with long-term impact in mind.
Learn more about the Yale Online Certificate Program in Medical Software and Medical Artificial Intelligence.
The full set of supplementary interviews can be accessed on YouTube or, in audio format, via the Yale Podcast Network: Yale Certificate in Medical Software and Medical AI: Guest Experts
About Professor Xenophon Papademetris
Xenophon (Xenios) Papademetris is a Professor of Biomedical Informatics & Data Science and of Radiology & Biomedical Imaging at Yale School of Medicine. He directs the Yale Certificate Program in Medical Software and Medical AI. With over 30 years of experience in medical image analysis, machine learning, and software development, his research spans a wide range of imaging modalities and clinical applications. He is also the lead author of the textbook Introduction to Medical Software: Foundations for Digital Health, Devices and Diagnostics (Cambridge University Press, 2022) and the primary instructor of a companion Coursera Course on Medical Software (more info at www.medsoftbook.com). His work bridges academic research, software engineering, and regulatory standards to advance safe, effective medical technologies.
About Larkin Lowrey
Larkin Lowrey is a veteran software engineering leader with over 30 years of experience building and transforming product development organizations. He has led teams across IoT, medical devices, telecom, and e-commerce, with solutions deployed globally. Notably, he developed a telematics platform that was acquired by Verizon and is now operating as Verizon Networkfleet.
Now in MedTech, Larkin leverages his IoT expertise to lead the development of cloud-native, analytics-driven software for regulated medical technologies. He holds 26 U.S. patents and is known for applying Agile principles to create high-performing, efficient teams.
Learn more about Larkin Lowrey
Watch the Full Interview with Larkin Lowrey
Get the full conversation from the Yale Biomedical Informatics & Data Science guest speaker series: Software Development and AI with Larkin Lowrey