Move Faster & Break Nothing Transcript: Zimmer Biomet AI Case Study


Randy Horton:

We’re going to shift gears a bit and flip over to artificial intelligence as a different kind of software development. We’ll look at how you accelerate AI development when you’re putting your AI into a medical device. It’s going to be a conversation with Mark Brincat from Zimmer Biomet and Bernhard Kappe from Orthogonal. Mark, start by giving us a quick introduction to yourself.

 

Mark Brincat:

Thank you. I’m Mark Brincat, and I lead AI and advanced analytics at Zimmer Biomet. I’ve been at the company for 18 months or so, and in that time I’ve been responsible for building out an end-to-end AI platform on which we’re developing a pipeline of data products that support our whole ecosystem.

A bit on my background: The last 22 years of my software career have been spent in life sciences and MedTech, where I’ve been responsible for delivering a whole series of disruptive clinical and healthcare products to the market. Those products have ranged from mobile solutions to platforms and disease management services.

I spent the preceding five years at McLaren Applied Technologies, where I paired my background in software with in-depth work in analytics and AI technologies. There I was responsible for developing a healthcare analytics platform, which we used to support industry across healthcare, wellness and fitness.


 

How Medical Device & Pharma Firms Leverage Data & AI Algorithms

Bernhard Kappe:

Awesome, thanks Mark. I’m going to start with a couple of questions to get more background. One of the core themes of “Software is Eating the World” is that we’re generating and have access to lots more data, as well as AIs that can act on that data and then inject the results back into the continuous, accelerating process. What’s been your experience in terms of how medical device and pharma firms have tackled this so far?

 

Mark Brincat:

It’s a story that goes back some ways. As long as we’ve been developing software in healthcare, there have been algorithms. We’ve seen algorithms advance the fastest as we’ve started to see a real proliferation in data, coming off the back of an increased number of devices and a larger application marketplace. 

That proliferation has driven more advancement on the algorithm side and on the data product side, to the point where now, rather than working with a set of defined algorithms, we’re developing much more complex data products with new and sophisticated architectures that might be running at the edge and connected to the cloud. I think the development of algorithms into more complex analytics products has been part of a longer trajectory. 

On the software side, building on Michael’s earlier comments about Apple, the iPhone and smartphones, those devices really created a true proliferation of mobile healthcare apps. It was a bit of a Wild West early on. Probably many of us experienced that. But I think software development is quite mature as a field, relatively speaking. We’ve got pretty good software application products these days.

Then we came up against the problems of scalability and the complexity of interacting with patients. We needed to deal with things more proactively, as well as better personalize the interactions. That’s really where we’ve started to see the crossroads between software and data and started to develop supporting AI products, which can then better drive the applications. 

The era we’re in now is where we are starting to see how organizations better utilize data, better embrace these technologies and start to incorporate them into their platforms. I think it’s fair to say that organizations are at many different levels of maturity, but there are those that have really started to embrace the idea of a data-centric approach to everything. They are taking a more systematic approach to developing an environment that can really build data and AI products, from good data engineering all the way through to deployment and maintenance of those products in the market.

 

Building Data-Centric Products

Bernhard Kappe:

When you say data-centric, what do you mean by that? How does that manifest itself?

 

Mark Brincat:

Data-centric in the sense that, if I think about our own Zimmer Biomet ecosystem, we’ve invested heavily over the last five years in really building out device applications, robotics, and a whole series of products which support patients and clinicians across the whole continuum. That’s our conduit to those patients, clinicians and healthcare professionals. 

On one level it’s providing functional value. But we’ve also been collecting the data, and it’s that shift that then starts saying, “Now, how do we use that data systematically to develop the insights we can from it and drive that value back into those products and services?” That’s really what we mean when we talk about an organization being data-centric.

 

Comprehensive Approach to Collecting Data

Bernhard Kappe:

That makes sense. One of the things I think is key for companies that have software-enabled products, connected medical devices and Software as a Medical Device is not just thinking about the product, but about what data they can collect and what the quality of that data is. Ultimately, their product isn’t just the product; it’s how it ends up getting used, and what the ultimate outcomes and benefits are. That data can absolutely be key to that. If you’re not thinking about it, you’re leaving all that data on the table.

So, once you have this kind of comprehensive approach to collecting the data, how do you then harness that? What does that look like? 

 

Mark Brincat:

I think that, for most organizations, some of the product is still a specific kind of vertical solution. A company may have a broad portfolio of products and services, and they might have a need to develop a risk analytics product or a specific application or algorithm. Some of that work is still done at the bench, if you like, where there are the data science and engineering skill sets to build it, using some of the processes we’ve been hearing about here.

But organizations that have made that digital shift have an integrated suite of products and are starting to collate a joined-up data set. They’re starting to build and deliver a richer, continuous pipeline of products. All of that needs a different discipline altogether, one that recognizes the whole workflow, starting with data engineering.

It’s probably fair to say many organizations have an analytics workflow and have in place all the controls required for HIPAA and GDPR: consent, data management and anonymization. But if you are moving into a state where you’ve got a very rich set of data coming in, you need more investment at the data engineering end as well. I think the question is how you develop the processes, practices, tool sets and workflows for data management and engineering.
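To make the anonymization step concrete, here is a minimal, hypothetical sketch of the kind of de-identification transform such a data engineering workflow might include. The column names, the salt handling and the schema are illustrative assumptions, not Zimmer Biomet’s actual pipeline:

```python
import hashlib
import pandas as pd

SALT = "replace-with-secret"  # assumption: in practice, fetched from a secrets store

def pseudonymize(df: pd.DataFrame) -> pd.DataFrame:
    """Drop direct identifiers and replace the patient ID with a salted hash."""
    out = df.copy()
    out["patient_id"] = out["patient_id"].map(
        lambda pid: hashlib.sha256((SALT + str(pid)).encode()).hexdigest()
    )
    # Direct identifiers are dropped entirely; dates are coarsened to the year
    # so records stay useful for analytics but are harder to re-identify.
    out = out.drop(columns=["name", "address"], errors="ignore")
    out["surgery_year"] = pd.to_datetime(out["surgery_date"]).dt.year
    return out.drop(columns=["surgery_date"])
```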

As you move into the data science space, you have a whole workflow to support: the experimentation environment you need to create, the ability to track all of your experiments, the capability to hold the data sets against them and maintain the results and audit trails that go with each experiment, and the tool sets suited to the field you’re operating in.
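As one concrete illustration of that experiment-tracking discipline, here is a minimal sketch using MLflow, a widely used open-source tracker. The model, parameters and metric are stand-ins run on synthetic data, not an actual Zimmer Biomet product:

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a curated patient data set.
X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Every run records its parameters, metrics and model artifact, so the
# experiment is reproducible and leaves an audit trail.
with mlflow.start_run(run_name="risk-model-experiment-01"):
    params = {"n_estimators": 200, "max_depth": 5}
    mlflow.log_params(params)

    model = RandomForestClassifier(**params, random_state=0).fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

    mlflow.log_metric("test_auc", auc)
    mlflow.sklearn.log_model(model, "model")  # versioned artifact for later review
```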

Then you continue to develop and refine that environment to make it more productive. Our first products took X number of days; when the second products came along, we probably halved the time to develop them, and we continue to refine this as a productive suite. We standardized the way we’re building these products as well. It becomes a platform with standards that not just a single team can use, but that can be adopted across the organization.
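A hedged sketch of what that standardization can look like in code: packaging preprocessing and model together as a single pipeline object, so every team builds products to the same recipe. The specific steps are illustrative, not the platform’s actual standard:

```python
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

def build_product_pipeline(estimator=None) -> Pipeline:
    """One shared recipe: impute, scale, model. Teams swap the estimator,
    but every data product shares the same structure, tests and deployment path."""
    return Pipeline(
        steps=[
            ("impute", SimpleImputer(strategy="median")),
            ("scale", StandardScaler()),
            ("model", estimator or LogisticRegression(max_iter=1000)),
        ]
    )
```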

Then, I would say validation is a big part of this. To me, validation is not a single exercise at the end. It’s the whole suite of evidence that supports the thinking behind the product. It’s the type of strategies you’re employing and the transparent processes you’re carrying out, including all of the testing sufficient to meet requirements. And it of course includes any supporting publication, pilot or other evidence you need to deliver alongside these products so that you create market adoption and people actually buy into these products.

Apart from all of that, it’s about the deployment. Once upon a time, we built these products and then wondered how we could give our users access to them. If you are developing something with real scale, you need to be thinking about the kind of APIs and architectures you are employing to make these products available, and not just to your own applications and platforms, but ultimately to your customers and other partners in the marketplace as well. That’s how I think about it: a systematic approach to that whole end-to-end process.
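To illustrate the deployment point, here is a minimal sketch of putting a trained model behind a versioned HTTP API with FastAPI. The endpoint path, feature names and model file are hypothetical:

```python
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # assumption: a pipeline trained and saved earlier

class PatientFeatures(BaseModel):
    age: float
    bmi: float
    mobility_score: float  # illustrative inputs, not an actual schema

@app.post("/v1/risk-score")
def risk_score(features: PatientFeatures) -> dict:
    # A versioned endpoint gives internal apps and external partners
    # a stable contract while the model behind it evolves.
    proba = model.predict_proba(
        [[features.age, features.bmi, features.mobility_score]]
    )[0, 1]
    return {"risk": float(proba)}
```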

 

Dealing with Data Quality

Bernhard Kappe:

It sounds like one thing Zimmer Biomet has is really good end-to-end data capture. I know a lot of pharma firms have started with large population health data sets. To some extent, there’s a garbage in/garbage out problem that can happen with the data. How do you deal with data quality, and how important is that in the process?

 

Mark Brincat:

I’d add a caveat to this one: It depends on the kind of use case. There are applications out there with very large population data sets, and there are companies and services that do a good job of giving you the ability to onboard and integrate multivariate data sets, as well as helping you validate and test that data. So there are people working at the big data end and applying it to their market quite effectively, but I find that to be the minority of cases. I tend to think of big data as a sport of kings. We all reference Big Tech and their access to very large data sets.

In my experience, what’s worked much better is being able to articulate and understand the problem you’re trying to solve. If we think about it in terms of pathways, I might ask, “What specifically are you focused on within that pathway?” The more definitive you can be about the nature of the problem, the more you understand what you’re actually trying to solve for, and the more targeted you can be about the type of data set you need to support it.

If you employ the right sort of data science methods, approaches, tools and products, you can actually do more with less. To your point, it’s about quality. We know that we can get good statistical output from a few hundred good patient records. We can even start to produce some good predictions with a few thousand patients’ worth of data, and you can build some real complexity by the time you get to the tens of thousands. That’s not hundreds of thousands, not a million, not tens of millions of patients. It’s about being really focused on the problem you’re working on.
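One way to sanity-check that “more with less” claim on your own problem is a learning curve: train on growing subsets and watch where validation performance plateaus. A minimal sketch with scikit-learn on synthetic data (the model and sample sizes are illustrative):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

# Synthetic stand-in for a few thousand high-quality patient records.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

sizes, train_scores, val_scores = learning_curve(
    LogisticRegression(max_iter=1000), X, y,
    train_sizes=np.linspace(0.1, 1.0, 5), cv=5, scoring="roc_auc",
)

# If validation AUC flattens early, more records buy little; focus the
# problem and improve data quality instead of chasing volume.
for n, score in zip(sizes, val_scores.mean(axis=1)):
    print(f"{n:5d} records -> mean validation AUC {score:.3f}")
```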

Coming back to your point, if you understand the scope of the care pathway you are looking at, a good place to start is controlling your touchpoints within those parts of the pathway. You’ve got the applications, you’ve got the products, and so you’ve got quality control over that data. Where there are data gaps, you can also build in collection of the additional data you’re missing. Then suddenly, you’ve got a really high-quality data set.

As I say, it comes back to being able to build the product in a very targeted way, and being able to work off much smaller but higher quality data. Don’t get me wrong. We still have the same problems as everyone else. There are gaps in the data. You’re trying to find ways to integrate other data from the marketplace. 

You might have 10,000 patients, but once you’ve narrowed to the specific problem you are working on, that number could quickly reduce to 1,000 or 2,000 patients’ worth of data. The tradeoff is that you are looking at very high-quality data. Again, if you’re just trying to produce an earlier prediction, you can actually do that on quite concise data sets.


 

Challenges Bringing Products to Market

Bernhard Kappe:

So with great data and with machinery for exploring, testing and validating algorithms, you can create great products that could make a meaningful difference. Ultimately these things then have to get out in the real world, where they have to be trusted. Once you have the product and it’s ready to go, what are the biggest challenges you face getting it into the market?

 

Mark Brincat:

I’ve never been a believer in “build it and they will come.” It’s not just about delivering something with an FDA or other regulatory approval stamp on it. It’s about making sure that you’ve done this with the market.

We start our product development with our stakeholders. We live their problems, understand them and involve them in the process along the way. It’s fair to say that different products are also pitched at different levels. It might be that a product is being used by surgeons or care teams as supporting information. Typically, those physicians have performed their practice without predictive tools. They’re new to the idea that they no longer need to look across all of their cohorts to try to find the patients who fall through the cracks. The digital products and services running in the background are watching most of their patients, who are actually doing perfectly well.

What you want your algorithms to do is look for the patients who are going to fall through the cracks, anticipate it, and identify the interventions that head those problems off before they even happen. That’s a rewiring of our healthcare system. There’s a whole range of different situations in terms of what you need to do to get adoption of these products in the market. Some of it is to do with sufficient evidence, testing and proof of the product before the market will adopt it. That’s where most people’s heads go first: Is this product of clinical significance?
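A hedged sketch of that “watch most patients quietly, surface the few at risk” pattern: score a cohort with a trained model and flag only those above an agreed threshold. The threshold, field names and model are illustrative assumptions:

```python
import pandas as pd

RISK_THRESHOLD = 0.8  # assumption: agreed with clinical stakeholders, not set by data science alone

def flag_at_risk(cohort: pd.DataFrame, model, feature_cols: list[str]) -> pd.DataFrame:
    """Return only the patients whose predicted risk crosses the threshold."""
    scored = cohort.copy()
    scored["risk"] = model.predict_proba(scored[feature_cols])[:, 1]
    # Most patients score low and never appear on the clinician's worklist;
    # only the predicted fall-through-the-cracks cases are surfaced.
    return scored[scored["risk"] >= RISK_THRESHOLD].sort_values("risk", ascending=False)
```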

On one end, I need to make sure I’ve got enough evidence behind this. I’ve de-risked it, I’ve tested it, and I’ve got key opinion leaders supporting it. But on the other side, there’s a process that goes on in terms of putting these products in the hands of users for the first time and taking them on that journey. There’s a range of strategies we need to think about. This isn’t purely a regulatory approval process. 

 

Bernhard Kappe:

Makes total sense, and that’s actually a great segue to our next presentation – discussing a case study around the ideas you just talked about. Thanks, Mark.

 

Mark Brincat:

Thanks.

 

Below is a list of the other sections of our Move Faster & Break Nothing Webinar:

  1. Move Faster & Break Nothing Executive Summary
  2. Move Faster & Break Nothing Transcript: Introduction by Randy Horton
  3. Move Faster & Break Nothing Transcript: Lilly’s SaMD Best Practices with Don Jennings and Carl Washburn
  4. Move Faster & Break Nothing Transcript: Tandem Diabetes Case Study with Larkin Lowrey
  5. Move Faster & Break Nothing Transcript: egnite Health Case Study with Torrey Loper
  6. Move Faster & Break Nothing Transcript: Audience Q&A
