
Key Analytic Metrics that Leverage EHR Data to Optimize Asset Utilization Webinar transcript

 

DANA BURKE: Hello and good afternoon. My name is Dana Burke, education specialist at Chime, and it’s my pleasure to welcome all of you to today’s webinar, Key Analytic Metrics that Leverage EHR Data to Optimize Asset Utilization. Before we get started with the presentation, I’d like to cover a few technical details. A Q and A area located on your right side panel allows you to ask questions during today’s presentation. To ask a question, type your message inside the Q and A panel and press the send key. The speakers will try to answer as many questions as possible at the end of the presentation. 

 

If you have difficulty listening to the live audio stream through your computer’s speakers, teleconferencing is available. To display teleconference instructions, click on the Communicate menu to view audio options. By attending today’s session, you may earn up to one continuing education unit for the Chime Certified Healthcare CIO program. Please visit the Chime website for more details. Please take a few moments to complete the evaluation that will pop up automatically at the end of the session. Your feedback is valuable for future programming. As a reminder, all attendees will receive a copy of the slides and a link to the session recording following the webinar. 

 

With that said, I’m pleased to introduce today’s speakers. With us today are Sanjeev Agrawal, President and CMO of LeanTaaS, and Ashley Walsh, senior financial analyst at UCHealth. Sanjeev and Ashley, thank you for joining us today. I’ll turn things over to you. 

 

ASHLEY WALSH: Thank you. Good morning. My name’s Ashley Walsh. I was formerly the perioperative business manager for the UCHealth Metro Denver campus in Denver, Colorado for eight and a half years, and now I’m in a role as a senior financial analyst for perioperative services. I’m excited to share with you today some interesting findings that we had. So a little bit of housekeeping. For our agenda today, I’m going to give you a little bit of background on UCHealth, talk about some of our problems and our issues, and share with you a case study we did, along with the solutions and the approach to that as well. So we look forward to any comments or questions throughout the presentation. 

 

A little bit more about UCHealth. I resided in Denver, Colorado at the Metro Denver campus for the majority of my time with UCHealth. UCHealth became a system in 2012, when we joined with Poudre Valley Health System in Northern Colorado. The two main hospitals in Northern Colorado are Poudre Valley and Medical Center of the Rockies. Metro Denver’s campus is an academic campus, and then later that year we joined with Memorial Health System in Colorado Springs, where there are two hospitals as well, Memorial Hospital Central and Memorial Hospital North. So in total we’re five hospitals today, very soon to be seven. We have almost 17,000 employees now. We do about 66,000 surgical procedures a year; those do not include GI endoscopy or cardiovascular intervention procedures, those are just operating room procedures. Almost 50% of them are performed at the academic campus, so we’re approaching about 27,000 to 28,000 at the main campus this year. 

 

The main campus has 38 operating rooms out of the total 89 system wide today. So you can see we’re a growing health system; we’ve been growing rapidly over the last six to eight years, about 6% to 12% per year in volume. All of that volume being admissions, emergency room visits, surgeries, outpatient clinic visits, et cetera. Specifically, my background is centered around the perioperative environment, and as the operating room business manager, part of my role was to really put together all of our key performance indicators and metrics. Some popular metrics we looked at were operating room utilization, so our regular business hour utilization, and block utilization; we did and still do utilize block scheduling for physicians, practices, and groups. We also looked at our delays, so what was adding to our downtime in our day to day operations. 

 

First case starts and turnover were two of those metrics. This is a little bit of information on how we define our metrics. For room utilization, we’re really looking at business hour utilization at our campus. Networking across the country, I find that the majority of perioperative environments do want to focus on that business hour utilization: how well are you performing seven to five. So we’re looking at total in-room minutes, in-room to out-of-room. We do include our scheduled turnover, because that is a required part of downtime in an operating room environment, and we divide that by our total available minutes for business hours. Similar for block utilization, but really looking at block owners: what is their in-room time plus turnover, and how does that compare to the total minutes that we allocated to a provider or a group or a service line. 
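To make those two formulas concrete, here is a minimal sketch in Python. The function names and the sample minute counts are illustrative assumptions, not UCHealth’s actual figures.

```python
def room_utilization(in_room_minutes, scheduled_turnover_minutes, available_business_minutes):
    """Business hour room utilization: (in-room minutes + scheduled turnover) / available minutes."""
    return (in_room_minutes + scheduled_turnover_minutes) / available_business_minutes

def block_utilization(in_room_minutes, turnover_minutes, allocated_block_minutes):
    """Block utilization for a block owner: (in-room minutes + turnover) / allocated block minutes."""
    return (in_room_minutes + turnover_minutes) / allocated_block_minutes

# Illustrative numbers only: one OR open 7:30 to 5:00 has 570 available business minutes.
print(f"{room_utilization(380, 60, 570):.0%}")    # ~77%
print(f"{block_utilization(300, 30, 480):.0%}")   # ~69%
```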

 

So when we look at our business hour utilization, room utilization, we strive at our health system to achieve 80%. In networking with colleagues, we feel that that is the best practice measure for room utilization during business hours. And when we look at that utilization, we’re trying to look at, then, why are we underutilizing? What is causing us to not reach 80% or higher during our business hours, Monday through Friday? In my tenure there, we spent a lot of time looking at first case on time starts and turnover. We know that these are important metrics in the perioperative environment; if we don’t start the day well, we’re probably not going to end the day well either. So we put a lot of time and effort into looking at the delays and looking at those metrics to figure out what was adding to our downtime in the room. That led us to do a case study on what was really contributing to our underutilization. 

 

We specifically focused that case study on the Metro Denver inpatient facility. The inpatient facility has 25 operating rooms. They do open heart, transplants, trauma acute care surgery, and orthopedic trauma: 25 ORs, blocked 90% of the time, with a few rooms not blocked to accommodate the trauma acute care services. When we looked at an entire year’s worth of data, we found that on average we had about 500 minutes a day blocked out per room, and of those minutes we utilized about 357 minutes a day. That gave us an approximate average utilization of 71% for our business hours, Monday through Friday, seven thirty to five. So the first thing we looked at was the impact of first case delays, delays to that first case in the room of the day. We looked at how often we were late for first cases that start between seven and nine thirty, so we were really only looking at those cases that start in the morning. 
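As a quick back-of-the-envelope check on the figures just cited, 357 used minutes against roughly 500 blocked minutes per room per day works out to about 71%:

```python
blocked_minutes_per_room = 500   # average blocked per room per day (from the talk)
used_minutes_per_room = 357      # average utilized per room per day (from the talk)
print(f"{used_minutes_per_room / blocked_minutes_per_room:.1%}")  # 71.4%
```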

 

We’re excluding cases that would start at ten, if that was the first case in the room. Then we calculated the delay, and what we found when we calculated all the delays for first case starts in that location for a year was that only 2% of our total unused time, of that 29% of unused time in our operating rooms for the whole calendar year, was attributed to delays of first case starts. So a very small number. Now, we had already gone through rapid improvement events to work on optimizing first case starts. Even before that, though, we actually looked at what it was previously: that number would only have gone up by about 1.5% if we looked at first case delays prior to the intervention of a rapid improvement event. Second, we studied the impact of turnover delays. 

 

We define turnover as wheels out to wheels in, so we’re looking at the time between one case’s out-of-room and the next case’s in-room, not procedure stop to procedure start, and our goal for that location is 30 minutes. The delay is defined as the time we exceed that 30 minute goal. When we calculated all the delays from turnovers for that calendar year, we found that of that 29% total downtime, 12% was attributed to delays in our turnover. So again, these were surprising to us, because boots on the ground it felt like our downtime was really attributable to delays in first case starts and turnover. Doing this case study really opened our eyes that there was a bigger problem going on that we were unaware of. In total, of all the time that our operating rooms are not used, delays from first case starts and turnovers only accounted for 14% of that total unused time. That’s a very small slice of the pie. 
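A minimal sketch of the turnover-delay calculation described here, wheels out to wheels in against a 30-minute goal. The case times are made-up sample data, not real cases.

```python
# Each case is (in_room_minute, out_of_room_minute), measured in minutes from midnight.
cases = [(450, 560), (590, 700), (745, 840)]  # hypothetical sample day in one OR
TURNOVER_GOAL = 30  # minutes, the goal for this location

turnover_delay = 0
for (_, prev_out), (next_in, _) in zip(cases, cases[1:]):
    gap = next_in - prev_out                       # wheels out to wheels in
    turnover_delay += max(0, gap - TURNOVER_GOAL)  # only time beyond the goal counts as delay

print(turnover_delay)  # 15 minutes of turnover delay in this example
```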

 

We cared a lot about this because, as we know, operating rooms are expensive to build and very expensive to maintain, but the yield is tremendous. A great majority of the revenues for our health care systems are coming from our operating room environments. So now we wanted to figure out, OK, what’s the real reason? What is really driving the underutilization of our operating rooms, if delays from first case starts and turnover are only contributing 14% of that? Besides the first case start and turnover delays, we looked at three other items. We looked at scheduled downtime, we looked at last minute cancellations, and we looked at case length overestimation, to determine what was driving underutilization. Here’s how we defined those various metrics. For scheduled downtime, we’re really looking at, OK, our schedule was supposed to be A and our business hours are B. What’s the difference? 

 

So when did we have cases scheduled to be in room and out of room, when were operating rooms open and available, and what was the downtime there. For last minute cancellations, we were very generous: anything that cancels within a week, we were considering a last minute cancellation. And then, for overestimation, that’s when we’re scheduling cases to go longer than they actually need. The perception from many of our physicians was that we did not have very good scheduling practices and that we were making them schedule cases much longer than they really needed in the operating room. And so, to improve our surgeon satisfaction, this was an important metric to look at as well. Here were those results. 
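A sketch of how those three drivers might be measured from the same case log. The field names follow the definitions above, but the code and sample numbers are illustrative assumptions, not UCHealth’s actual logic.

```python
from datetime import timedelta

def is_last_minute_cancellation(scheduled_start, cancelled_at, window_days=7):
    """Generous definition from the talk: cancelled within a week of the scheduled start."""
    return (scheduled_start - cancelled_at) <= timedelta(days=window_days)

def overestimation_minutes(scheduled_minutes, actual_minutes):
    """Time booked beyond what the case actually needed (never negative)."""
    return max(0, scheduled_minutes - actual_minutes)

def scheduled_downtime_minutes(business_minutes, scheduled_case_minutes):
    """Business hour time with nothing on the grid at all."""
    return max(0, business_minutes - scheduled_case_minutes)

print(overestimation_minutes(scheduled_minutes=180, actual_minutes=150))              # 30
print(scheduled_downtime_minutes(business_minutes=570, scheduled_case_minutes=420))   # 150
```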

 

So when you look at the delays from first case starts and turnover, that was 14% of the total time that went underutilized in our operating rooms for a calendar year. When we looked at overestimation, when we’re scheduling cases longer than they truly needed, that only accounted for 11% of that unused time. Cancellations were a little bit bigger, and this was looking at a week out. If we look at cancellations in the last 48 hours or 72 hours, which a lot of hospitals really consider a last minute cancellation, this number goes down quite a bit, but for a week it was 21%. Even when you’re conservative there and you look at your delays and your overestimation, the big piece of the pie was truly just scheduled downtime. We had our operating rooms open for business hours and we did not have cases on our grid to be scheduled. This was very surprising for us because we spent so much of our time focusing on improving those first case on time starts. 

 

Decreasing our turnover delays. We spent a lot of time focused around scheduling, because being on the front line, that’s what we felt was not making our operating room as efficient as possible. And when it was all said and done, it was truly that we weren’t scheduling the cases and filling our grids as much as we could. So that unlocked a lot of information for us to put our eggs into another basket and focus on, OK, how can we expose this time on our operating room schedules to physicians and practices? Because we see our volume growing and we see the cases being done after business hours growing, yet we truly have opportunity in our business hour scheduling to accommodate more cases. How we figured out where we had a lot of scheduled downtime was to look at a new concept. That concept is really coined as collectible time. 

 

So when do we have a lot of large holes in our schedule? When you look at the small, non-collectible periods of time, the times that are driving delays in first case starts and delays in turnover, we found from our case study that was only 14% of our unused time. So we don’t even want to focus on that. We wanted to focus on these larger amounts of time on our schedule that were collectible. What’s on your screen right now is an example of an operating room schedule, and specifically a block owner, looking at when a block owner has available time to schedule cases, by week, by time of day, and then looking retrospectively to see when they actually do cases and whether there are patterns and trends we can extract as collectible time. 
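One way to make the collectible time idea concrete is to scan a block for contiguous unused gaps that are long enough to fit a case. A rough sketch under assumed numbers; the 60-minute threshold and the sample block are illustrative, not UCHealth’s actual settings.

```python
def collectible_gaps(block_start, block_end, booked, min_case_minutes=60):
    """Return unused gaps within a block that are large enough to fit a case.

    block_start/block_end: block boundaries in minutes; booked: list of (start, end) case times.
    Gaps shorter than min_case_minutes are the small, non-collectible slivers (first case and
    turnover delays); only the larger holes count as collectible time.
    """
    gaps, cursor = [], block_start
    for start, end in sorted(booked):
        if start - cursor >= min_case_minutes:
            gaps.append((cursor, start))
        cursor = max(cursor, end)
    if block_end - cursor >= min_case_minutes:
        gaps.append((cursor, block_end))
    return gaps

# Hypothetical block from 7:30 (450) to 17:00 (1020) with two booked cases.
print(collectible_gaps(450, 1020, [(450, 600), (630, 780)]))  # [(780, 1020)] -> a 4-hour hole
```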

 

This was an entirely new concept for us to look at, but what we found was that it really highlighted the areas where we had a lot of scheduled downtime, which we found through our case study accounted for 53% of our unused time for a whole calendar year. So then we really wanted to look at how to address that issue. It was wonderful that we did this case study and identified that we were putting a lot of time and effort into optimizing first case starts and turnover, but really our bigger problem was scheduled downtime. So we could pull back on putting a lot of time, effort, and cost into optimizing those small, non-collectible bits on our schedule and focus on the bigger issue, the scheduled downtime issue. And when you look at what problems we were facing that added to our scheduled downtime, these are four problems that came out of this entire case study. 

 

The process to expose our schedule to our physicians through block release and request was challenging; there was not a clean, transparent method to do that. Optimizing the reporting was challenging as well. I spent a lot of my time building reports that I used one time, or putting a lot of effort into building a dashboard that did a good job of telling me what my problem was, that I had some opportunities, that my utilization was 71%, but not how I could optimize that, take it to the next level, and improve it. So really increasing transparency and access on that reporting and being forward looking, not backward looking. 

 

Block allocation, going back to my previous slide, this is how we allocated blocks. We gave providers days of the week, weeks of the year, to have available schedule time. And what we found was there were a lot of these orange diamonds that were occurring time over time with block owners. So looking at a more advanced way to allocate block time and analyze block utilization was important in really decreasing that largest factor of unused time in our operating rooms. So I’d love to turn it over to Sanjeev to talk a little bit about how our partnership helped address these four issues and the tools that we used to provide a solution. 

 

SANJEEV AGRAWAL: Thank you. Thank you, Ashley. So as Dana and Ashley mentioned, my name is Sanjeev Agrawal. I am the president of a company called LeanTaaS, and we partnered with UCHealth to build some of the tools around the four problems that Ashley mentioned. For the problem around making it easy to release and request blocks, we call that tool Mobile Block Exchange. The reporting issue of making it much easier and more transparent to see forward looking metrics and drill down into what is going on, for a surgeon and an administrator, we call that Smart Performance Tracker. 

 

Using collectible block time instead of utilization as a way to allocate time, we call that Smart Block Allocation. And then, if you start using your ORs better and doing more cases in the same ORs, that has a significant impact on staffing, so how do you make sure that staffing matches the reality of the new world. The first problem that Ashley mentioned is that some of the scheduled downtime happens because, when a surgeon is not able to use their block and has some visibility into the fact that they’re not going to be able to use it, the process of releasing that block, and then making every other surgeon aware that the block even exists and is now open time, is often fairly manual and very cumbersome. 

 

Instead, we’ve transformed that with UCHealth into a process that’s far more like making restaurant reservations on OpenTable, and it’s driven off of three basic concepts. One is predictive analytics: who is it that might not use their block well. Two is mobile: everybody carries a mobile phone with a browser on it. And three is the cloud, where not having a paper based or phone based or text based process is possible now, because you can replace it through a server-side push. So every week, every surgeon, every administrator, and every scheduler at UCHealth now gets a text message. And when they launch the text message, and usually schedulers do this, they see a mobile browser screen that has a section called blocks, and if you go there, you can request or release blocks. 

 

When you go into release blocks, you’re able to see your own schedule. And if you know you’re teaching, or you’re going to be away at a conference, or for whatever reason, you are able to pick the block you own and release it with one click. We also send reminders, which I’ll talk about in a second. On the request blocks side, think about that as the inventory of available open time that any scheduler or surgeon can see through a calendar based view, by day of week, by time of day; they can request time in the OR simply by clicking a couple of buttons. So it’s incredibly powerful, because this transforms the old process, which was more like a garage sale. 

 

I tell my neighbor, I tell people that I have something I won’t be able to use. Instead, this is like eBay, where I can now throw these blocks into an exchange where supply and demand match. Now, the way we drive this is often by looking at historical booking data in the clinic, and saying, hey Dr. Smith, I see you have a block 10 days from now that you’re likely not to use, because I know that your historical pattern of booking has been that you book these two weeks in advance. By pushing these texts, it makes it a lot simpler for surgeons and schedulers to do the right thing, because now it reminds them, hey, maybe I should release this block to improve my utilization numbers and reduce my collectible block time. And they use that process to do it. 
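The nudge Sanjeev describes can be approximated with a simple rule on historical booking lead times. A sketch only: the two-week pattern, Dr. Smith, and the sample data are illustrative assumptions, not the actual prediction model.

```python
from statistics import median

def likely_to_release(days_until_block, cases_booked_in_block, historical_lead_times_days):
    """Flag a block as likely to go unused if it is still empty closer in than the
    surgeon's typical booking lead time (e.g., they normally book about two weeks ahead)."""
    typical_lead = median(historical_lead_times_days)
    return cases_booked_in_block == 0 and days_until_block < typical_lead

# Hypothetical Dr. Smith usually books about 14 days out; the block is 10 days away and still empty.
print(likely_to_release(10, 0, [13, 14, 15, 16, 12]))  # True -> send a release reminder text
```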

 

What UCHealth also asked us to do was create a really interesting feature that surgeons love and administrators love, which is, if you go into Mobile Block Exchange and you’re not able to see a block available on a day you’d like, you can put in a little request that if a block opens up on that day, please notify me. So this is sort of an Amazon wish list type feature that also lets us know that there’s a day that’s very popular: let’s go tell surgeons who may have block time on that day and ask them if they are really going to use it; if not, free that up for someone else that needs it. This Mobile Block Exchange has resulted in about 1,500 blocks being released and requested over the last eight months. Each block is worth $50,000 to $100,000, so the economic benefits are huge. 

 

The access benefits are huge too, because now many more patients can get surgeries done. The number of cases on the daily add on list has been reduced as well, anywhere between 5% and 15%, and that number keeps going up. So the pressure on the system has eased, and the predictability of the system has become far better. The second module is Smart Performance Tracker, which goes back to the problem Ashley described about having one source of truth, it being far more automated and simple, plus forward looking as opposed to admiring the past. And that’s also been done through a very simple cloud and mobile based capability, in that same weekly text message that the surgeons and schedulers get. 

 

This is what it looks like. On the left side of the page is what the text message contains. It contains the five key metrics that a surgeon cares about. I’ll show you two views, starting with the surgeon view. Underneath the five metrics is a little link; when you launch that link, it takes you to a mobile browser, so you don’t need to download an app or anything. And in that mobile browser, in 90 seconds or less, every week, every surgeon gets to see exactly how they’re doing. So, from a utilization perspective, how they stack rank against their peers; by block, how they used that time; when they’ve been in the OR, when they used the time well, when they’ve ended early; when their first case started late, when their turnover was late; how they spent their time in their own block, in a surgeon group block, in a service line block. 

 

These are all metrics they care about because they want to see how they’re contributing to OR time. They get to see delay metrics right at their fingertips, once a week, in 90 seconds: the causes for delays. Is it them? Is it their anesthesiologist? Is it nursing? Is it lack of availability of equipment? What’s the reason why cases are being delayed? So the transparency this creates is tremendous, because now we’re not guessing. And this same corpus of data is used to roll up and present the same set of metrics at a higher level to administrators, and even the CFO or the COO or the CIO can see this. 

 

But across the four locations within Metro Denver, Ashley or her peers and others can drill down right from their mobile devices, once a week or as often as they want, because this is bookmarked on their mobile phone, and see roll ups of individual metrics by surgeon: who’s doing great at utilization. They can go in and drill down right there on mobile and see, for a specific surgeon, what their performance has been by day of week, see trends, and switch views as often as they want. So the hallway conversations become more fact based from a cultural perspective. There are no surprises as far as performance results or decision making is concerned. 

 

In addition, UCHealth also had us create, as part of this module, a web based tool that can, by definition, go much deeper, because the UI allows you to create these tiles. Each tile is a metric that each individual cares about, and by clicking on the top of that tile, it’s like a pack of cards that unfolds, and you can slice and dice any metric by service line, see trends, create leader boards, compare weeks, compare time frames, compare facilities, whatever it is that they’re trying to do. And there’s a lot more that’s being developed here. But this kind of replaces a lot of the ad hoc reporting. It replaces a lot of the pain of having to rush last minute before every OR committee meeting, and you could in fact run every day as an OR committee, because this is all real time data being presented to you, or as near real time as we have access to, which is typically once a day. 

 

So this is at worst a day old. That’s the forward looking piece; there are many, many more pieces, like drill downs into reasons for poor performance, that you could get into as well. The third piece that Ashley mentioned is, if you move away from utilization as the metric you should be looking at for allocating block, a better metric is that scheduled downtime analysis, what we call collectible block time. If you remember the chart that Ashley showed, it was potentially this, where you look at the way surgeons and service lines have used block. Instead of focusing on the small blue triangle: you might be able to get first case on time starts to zero, or turnover to zero, but even if you did, it only matters if you can actually fit a full case in. So instead of doing that, you start looking at the red diamonds, which truly are segments of time that are big enough to fit a case in. There are three pieces. There are contiguous chunks of time that have historically gone unused. 

 

If you see a pattern, and this is where some of the machine learning comes in, if you see large amounts of time that have been abandoned in their entirety, and then there’s this idea that people might be over allocating time to me because I’m a big and important surgeon, and I keep releasing a lot of that time, that by definition means we should take a closer look at whether you’re giving me too much time. So the tool that we built for UCHealth calculates collectible block time by surgeon and by service line, and allows them to play with all the thresholds and configure this any which way that makes sense to them. So for example, the UCHealth culture is that, in fact, they want to be stricter about how much block time is released. 

 

But if they wanted to not penalize anyone for releasing time, you could set that threshold to 100%, meaning all released time is OK: it’s OK to release time and I’m not going to dock you for any of it. Similarly, you’re able to set the threshold for how much time is considered contiguous enough to fit a case in. Long story short, the tool now gives you, as a perioperative business manager, the ability to look at all the low hanging fruit. Where are the big pieces of collectible block time that I can truly take and give to other people, or attract surgeons with? This is a screenshot of the tool showing, by block owner, by day of week, by location, what percentage of the block time that’s been allocated to them is truly collectible under the definition that we just talked about. 
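A rough sketch of how collectible block time per owner might be tallied with the two configurable thresholds just described, a release forgiveness threshold and a minimum contiguous chunk size. All parameter values and the sample owner are assumptions for illustration, not the product’s actual logic.

```python
def collectible_minutes(abandoned_chunks, released_minutes, allocated_minutes,
                        release_threshold=0.15, min_chunk=60):
    """Tally collectible block time for one block owner.

    abandoned_chunks: contiguous unused chunks (minutes) inside their blocks.
    released_minutes: block time they gave back; only the portion above
    release_threshold * allocated_minutes counts against them (set the threshold
    to 1.0 to never penalize releases). Chunks below min_chunk cannot fit a case,
    so they are forgiven as first case / turnover noise.
    """
    big_holes = sum(chunk for chunk in abandoned_chunks if chunk >= min_chunk)
    excess_release = max(0, released_minutes - release_threshold * allocated_minutes)
    return big_holes + excess_release

# Hypothetical owner: four 480-minute blocks, some large holes, and 400 released minutes.
print(collectible_minutes([240, 45, 120], released_minutes=400, allocated_minutes=1920))  # 472
```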

 

So when Ashley and her colleagues are having a discussion with surgeons about their usage of time, they can drill down through this tool right there and show the surgeons the reason they think that some of their time is collectible and they can do with less: there is a very deep, bottom up analysis available right there that says, Dr. Kennedy, you have four allocated blocks. Every Monday you have a block. We think you can do all your cases in three, and one can be collected, because using an analysis of last year’s worth of your use of block time, you’ve used a certain portion well, and you’ve used a certain portion not very well, but that is the first case on time start and turnover delay issue. 

 

I can’t really collect that, so I will forgive that. But then there are pieces where you’ve not used your block entirely, or you’ve released too much block, like 15% or more. And there are large chunks of time that could have been used by someone else that you repeatedly haven’t used. By the way, if you want to go deeper, you can see day by day, block by block, why I’m saying that, and this is just a roll up of all of your performance. And that’s why we think you can do with one block less. So think about collectible block time as a very simple way for your OR exec committee to identify the low hanging fruit, by way of blocks that, if they were actually taken away from certain surgeons, those surgeons wouldn’t lose anything. And by the way, that would free up a lot of capacity to attract new surgeons, as well as others whose practice might be growing but who are not politically powerful enough, or are too new to the system, to have a big say. That’s one side of the equation. 

 

The other side of the equation is using the same data of how people have used block time to predict how much time they’re likely to need going forward. So you feed the same Swiss cheese diagram with a bunch of holes, by surgeon, by service line, into a machine learning algorithm. The algorithm looks for things like seasonality and the consistency of how surgeons perform surgery, what their case mix is, as well as all the time they’ve used, to come up with a prediction of what’s likely going to happen in the next 90 days. So if it’s summer, or the next three months are summer, I might find the orthopedic service line, by surgeon, needs a little bit less time, because summer tends to be low for them. Cancer, on the other hand, isn’t as seasonal, so my oncologists and the cancer surgeons might still have a fairly stable practice. 
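A very reduced illustration of the seasonality idea: estimate next quarter’s block need for a service line from the same quarter in prior years. The real model described here uses much more (case mix, consistency, and so on); the history below is made up.

```python
def predicted_blocks(history_by_quarter, next_quarter):
    """Average same-quarter demand from prior years as a naive seasonal forecast."""
    same_quarter = [blocks for (quarter, blocks) in history_by_quarter if quarter == next_quarter]
    return round(sum(same_quarter) / len(same_quarter))

ortho_history = [("Q3", 110), ("Q3", 104), ("Q1", 130), ("Q3", 108)]   # hypothetical
print(predicted_blocks(ortho_history, "Q3"))   # ~107 -> slightly fewer summer blocks for ortho
```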

 

The point is, at any given point in time, being able to predict going forward what the likely case mix is going to be, what the resulting block allocation ought to be, and then what the best master block schedule ought to be, as something the OR committee can look at, based on some fairly serious projections being done to back those up. To show you some data, when we did this for UCHealth, they had allocated 1,600 blocks across their 25 ORs. Our methodology would have suggested they should only have allocated 1,471, which right off the bat would have freed up 129 blocks over a quarter to reallocate amongst others. When we checked that against the actual block requirement for the same quarter, in fact, the number of blocks that were used was even fewer. 
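As a back-of-the-envelope check on those figures: 1,600 allocated blocks versus a suggested 1,471 frees 129 blocks in the quarter, and at the $50,000 to $100,000 per block mentioned earlier, that is a large amount of reallocatable capacity. The dollar math below is illustrative only.

```python
allocated_blocks = 1600      # blocks allocated across 25 ORs in the quarter (from the talk)
suggested_blocks = 1471      # blocks the model would have allocated (from the talk)
freed = allocated_blocks - suggested_blocks
print(freed)                             # 129 blocks freed per quarter
print(freed * 50_000, freed * 100_000)   # rough value range using the $50k-$100k per block figure
```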

 

If you think about what this is saying: using a far better, mathematically driven model for predicting how much block time should be allocated by service line, you can help your perioperative business teams free up a ton of capacity and delay when you need to expand your ORs. Your CFOs would really like this, because this basically says, I have a lot of capacity I’m not using very well and I can fit more cases in. And then eventually taking the service line blocks down to individual surgeon blocks can also be done through a perhaps obvious but powerful way of thinking about this, where you plot the usage of block time by surgeon within a service line along two dimensions: volume, meaning how much time they use, and volatility, meaning how predictable their practice is. 

 

It makes sense that the people who have highly stable and large volume practices, right up at the top here, should be given block. They should be given permanent block because we know they’re going to use it well. They have a stable, high volume practice. For everybody else, it should be a mix of service line blocks, open time, or Mobile Block Exchange. You don’t want to necessarily give them large chunks of allocated block. This, in itself, provides a fair, transparent way to allocate block time between surgeons in a service line. The last module that I’ll spend a couple of minutes on is this notion of staffing, and doing it, again, in a mathematically optimized way. The problem we’re trying to solve here is, does our nurse staffing and tech staffing plan mirror what our OR actually needs. 
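A minimal sketch of the volume versus volatility idea: surgeons with high, predictable demand get dedicated block, and everyone else draws from service line block, open time, or the exchange. The cutoffs and sample numbers below are assumptions, not the actual classification rules.

```python
from statistics import mean, pstdev

def block_recommendation(weekly_minutes, volume_cutoff=480, volatility_cutoff=0.30):
    """Classify a surgeon's practice by volume (mean weekly OR minutes) and volatility
    (coefficient of variation). High volume plus low volatility -> dedicated block."""
    avg = mean(weekly_minutes)
    volatility = pstdev(weekly_minutes) / avg if avg else float("inf")
    if avg >= volume_cutoff and volatility <= volatility_cutoff:
        return "dedicated block"
    return "service line block / open time / block exchange"

print(block_recommendation([520, 540, 500, 530]))   # stable, high volume -> dedicated block
print(block_recommendation([120, 480, 0, 300]))     # low or volatile -> shared or open time
```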

If you look at any set of ORs, chances are there is, obviously, the cost of nursing and techs, which is huge. The ability to attract and retain nursing in this environment is tough, and nursing satisfaction is driven by things like: how often am I called back, how often am I expected to stay overtime unexpectedly, how many times am I furloughed because the prediction of when I was needed was inaccurate. Those are huge drivers of nurse dissatisfaction and cost. So can we come up with a more mathematically optimized way of looking at nurse staffing and map it to when you actually will need nurses? The way we do it is also through a similar machine learning based technique that does constraint based optimization. And the logic is as follows. If you look at this graph on the left, it shows you historical data. This is a heat map of when a particular skill set has been needed in the ORs by day of week. 

 

So let’s imagine this is the number of CNAs that have been needed for the last year, mapped by time of day, and the number of ORs that I need them in. It’s a fairly typical shape, which says I don’t need the maximum number of staff right at seven in the morning, but it does peak sometime in the morning, kind of falls off in the afternoon, and then some amount of overtime is unavoidable, some amount of overtime will always happen. If you look at how staffing is typically done, it’s typically done reasonably independent of that heat map. It’s done as, I need 10 nurses between seven and five, or maybe there are some staggered shifts, but it’s one big blob and one small blob: let’s have three nurses between five and eight based on the historical usage, that kind of thinking. The right answer, and this is a real example, could actually be that I need 10 nurses between seven and three and maybe 5 nurses between three and eight. 

 

Which, even though it looks like it’s higher cost in overtime, the area under this curve, the total cost of the profile on the third graph, is actually 15% to 20% lower than the profile in the middle. The tool that UCHealth had us build was a simulator. At the bottom on the left side, you see the current staffing plan. The current staffing plan has five metrics associated with it: what is my total cost given the staffing plan, how much furlough do I have, how much callback do I have, how much overtime do I have, and how much idle time do I have. It’s kind of a paradox that I have idle time and overtime on the same day, or that I have furlough and callback on the same day. So instead, this simulator allows you to play with different contours for staffing. So instead of this one simple step down. 
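A toy version of the simulator’s scoring step: given a staffing plan (staff on shift per hour) and historical demand for the same hours, compute idle time and overtime, the kind of measures the five-metric comparison is built from. Staffing levels, demand, and cost weights are all made up for illustration.

```python
def score_staffing_plan(staff_per_hour, demand_per_hour, hourly_cost=60, overtime_multiplier=1.5):
    """Compare a staffing plan against historical demand, hour by hour.

    Idle time accrues when staffed above demand; overtime (a stand-in for callback
    and stay-late) accrues when demand exceeds staffing.
    """
    idle = overtime = 0
    for staffed, needed in zip(staff_per_hour, demand_per_hour):
        idle += max(0, staffed - needed)
        overtime += max(0, needed - staffed)
    cost = sum(staff_per_hour) * hourly_cost + overtime * hourly_cost * overtime_multiplier
    return {"cost": cost, "idle_hours": idle, "overtime_hours": overtime}

demand = [6, 9, 10, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1]   # 07:00-20:00 historical need (hypothetical)
flat_plan = [10] * 10 + [3] * 3                       # "10 nurses 7 to 5, 3 nurses 5 to 8"
stepped_plan = [10] * 8 + [5] * 5                     # "10 nurses 7 to 3, 5 nurses 3 to 8"
print(score_staffing_plan(flat_plan, demand))         # higher cost, more idle hours
print(score_staffing_plan(stepped_plan, demand))      # lower total cost in this toy example
```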

 

What if this was more of a sawtooth step down, and what would that have done for the last 15 Mondays? So if my staffing plan for Mondays is this today for that skill set, were it to be this new plan, what would happen to those five metrics? What would happen to cost, idle time, furlough, overtime, and callback? The idea being you can play with any of these metrics relevant to the objective function you are trying to optimize. You could say, you know, I just want to minimize cost, and I’ll deal with the possibility of occasionally running into overtime because I don’t have enough nurses, so the same ones have to stay. Those trade-offs you can make yourself. So those are the four modules, just to remind you of what we’re talking about. The first one, Mobile Block Exchange, allows you to swap blocks between surgeons, so you never lose blocks at the margin because the right surgeon can get them; otherwise it’s a bit of a land grab at the last minute. 

 

Smart Performance Tracker takes reporting and forward looking metrics to a whole new level. Smart Block Allocation uses collectible block time to give the right surgeons the right block on the right day of the week. Smart Staffing mathematically optimizes how you’re staffing nurses and techs. Let me stop here for a minute; I’m happy to talk about how this is done, but this is not about the product so much as the tools that we built to support UCHealth. Should we just turn it over to questions now, Dana? 

 

SUMMER O’NEILL: Yeah, hi there. This is Summer O’Neill with Chime. Yep, I’m going to be handling questions for this, so thanks so much, Ashley and Sanjeev, for speaking to our group today. Just want to remind folks that we do have plenty of time for questions. So if you’d like to ask something of our speakers, go ahead and get it in the Q and A box, and we will get to as many of those as possible before the end of the presentation today. So, Sanjeev and Ashley, just to kick us off, I wondered if you could talk about how long the integration and implementation takes, and maybe some of the IT resources that would be required here. 

 

SANJEEV AGRAWAL: Sure, let me start, and Ashley, please pick it up. One of the things we discovered in working with UCHealth, and now we’re starting to work with Cleveland Clinic, we’re starting to work with New York Pres, and a number of other institutions, one of the things we realized in spades is the easier we make it on IT, the better, because most IT organizations are really overworked. There are a lot of projects on the table. And so the way we’ve designed this is that it works off of periodic daily or weekly feeds that can be set up once as a cron job, as opposed to any kind of integration that’s required with Epic or Cerner. 

 

We’ve gotten this to the point where, if you’re on Epic, it probably takes no more than two FTE days from an IT resource, over multiple weeks, to actually set this up. And then it takes a couple of days with the right business people to understand the metrics. The reason we’ve been able to do that is that it’s a very tightly defined set of tables, log tables that are extracted from the EMR. One is a historical set of data, and the second is a forward looking set of data that’s sent periodically. These are small files, they aren’t huge: case log data, block log data, and block allocation data. And they’re sent to a secure FTP server on our end that we create for each customer, where the data is kept. And because it’s cloud based, there’s no server deployment on your side. There’s no software deployment on your side. Ashley, anything to add to that? 
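To give a feel for how light that integration is, here is a hedged sketch in Python of the kind of periodic extract described: three small flat files written to an outbound folder that a separately configured push (set up once, for example as a cron job) delivers to the vendor’s per-customer secure FTP drop. The paths, file layouts, and sample rows are placeholders, not the actual specification.

```python
import csv
from datetime import date
from pathlib import Path

# Hypothetical nightly extract, run once a day by a cron entry such as: 0 2 * * * python or_feed.py
# It writes the three small flat files the talk mentions (case log, block log, block allocation)
# to an outbound folder that an already-configured SFTP job pushes to the per-customer drop.
# No PHI: only times, rooms, and block owners.

OUTBOUND = Path("/data/outbound/or_feeds")  # placeholder path

def write_feed(name, header, rows):
    OUTBOUND.mkdir(parents=True, exist_ok=True)
    with open(OUTBOUND / f"{name}_{date.today():%Y%m%d}.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(header)
        writer.writerows(rows)

# Sample rows only; in practice these come from the EMR's reporting database.
write_feed("case_log", ["room", "date", "in_room", "out_of_room", "service_line"],
           [["OR-12", "2019-05-06", "07:32", "09:10", "ORTHO"]])
write_feed("block_log", ["room", "date", "block_start", "block_end", "block_owner"],
           [["OR-12", "2019-05-06", "07:30", "15:30", "Dr. Kennedy"]])
write_feed("block_allocation", ["block_owner", "day_of_week", "blocks_per_quarter"],
           [["Dr. Kennedy", "Monday", 4]])
```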

 

ASHLEY WALSH: Yeah, definitely, far less IT resource was needed for us to fully go live with all of these products. In comparison to what we were spending, as far as FTEs and resources, cost, time, et cetera, on rapid improvement events to address what we felt were the common reasons, much less time was spent implementing this suite of tools to address these issues than the time and resources of our people, whose time is better spent taking care of patients. 

 

ASHLEY WALSH: I’ll say, too, as far as the case study, this was one of the first steps, and it’s something I’ve been really networking with my peers across the country about and encouraging them to look at, through other avenues, to truly look at your overall utilization of your operating room environments, where time and money are so precious. You know, not only to build these resources out, but to maintain them and then to yield the most benefit from them. So the time it would take to do that type of a study is very minimal, to really look at all of these factors we’re talking about, which is, what is the largest culprit of unused time in your resource today. 

 

SUMMER O’NEILL: Thank you so much. I’m thinking a little bit more about resources on an ongoing basis. What kind of IT resources should one expect to assign to support this deployment after go live? 

 

ASHLEY WALSH: It would be no different than a resource assigned to monitor any other feed or extract. I think a lot of our health care organizations are going live with pushes or pulls from our EHRs to various entities, LeanTaaS and Kronos, you name it. So once this had been set up on our behalf, we have one person system wide monitoring the feed and being the point person for LeanTaaS, should there be questions on the data that’s coming across from the EHR. So the continued effort is very small. 

 

SANJEEV AGRAWAL: And in fact, the word monitor is important, because it’s an exception based intervention. It’s usually only if the cron job fails for whatever reason. And because most of these are planning tools and they’re not day-of tools, it’s not as if, if the tool is down for an hour or two, it’s causing havoc in the OR. Right, these aren’t decision making tools that are being used to decide what to do for a given patient. These are planning and swapping and staffing tools that happen to have a different cadence. Mobile Block Exchange is used daily by a lot of people. 

 

Staffing might be used once every 30 days. Block allocation may be used once every 30 days. And so the risk, unlike other predictive analytics tools that are telling you this patient is 92% likely to have a case of whatever disease you’re testing for, this is frankly far less important to get 100% right. So even from an IT perspective and a deployment perspective, there’s a lot more give in this kind of capability than in the clinical predictive analytics tools that you’re using. 

 

SUMMER O’NEILL: Thank you so much. I’m just moving on to another question here. I think everybody is concerned about security. So just wondering if both of you can touch on how this would ensure the security of the EHR data. 

 

SANJEEV AGRAWAL: So one of the things that we didn’t mention and should have is, first off, the beauty of some of these tools that UCHealth has pushed us to build is that there is no PHI involved. None of these tools require us to know anything about the patient or the case or the disease or the procedure. Second, we’re HIPAA certified as a company. And the quality of the institutions we work with, we work with 50 of the biggest cancer centers in the country, MD Anderson, New York Pres, Cleveland Clinic, Sanford, et cetera, et cetera. So we’ve taken all kinds of precautions in terms of how the data is stored. It’s not commingled with anybody else’s. We run lots of intrusion detection tests ourselves. We try and hack our own systems. There’s a ton of IT work we do on our end to make sure that when folks like UCHealth send us through the IT proctology exam, we come out without it hurting too much. 

 

And in addition, we’re now in the process of getting certified for the banking kind of capabilities, I forget the exact name, so we’re going through SOC 2 compliance as we speak, even though we don’t necessarily need it for health care. And certainly we live up to those standards even without the official certification. We’ve been HIPAA certified every year for the last few years. More importantly, from a data perspective, by not getting PHI data, that takes some of the risk off the table. And with the way we store it and receive it, and the number of security tools we’ve put in place, we have not encountered a single case of any data slipping, ever. 

 

SUMMER O’NEILL: Ashley, anything to add? 

 

ASHLEY WALSH: No, for us it was very easy, because once we determined that we were not going to be sharing disease related information or specific patient information with a partner company, it just made the process go a lot faster. We’re actually now looking at, do we want to step back and incorporate some of that, because what we’ve uncovered has been tremendous, really looking at optimizing our resources, and so, taking it to the next level. We might get there, but that’s not what the product is about today, or the way we’re utilizing the tool today. 

 

SANJEEV AGRAWAL: As a company, we also sign BAAs. I think at last count we have 29 BAAs, and we follow the letter of every BAA that we’ve signed with the multiple institutions we’re working with, if that’s any help. 

 

SUMMER O’NEILL: Great, thank you both so much. I just want to remind folks that we have just a couple minutes left. I’m not seeing any questions in the queue, so I’m going to give it a few more seconds to see if there are any burning questions out there. While we’re waiting, I just want to remind everyone that you will be receiving a copy of the slides and a link to the session recording after the session, or at the latest by tomorrow. 

 

And also, we’ve got plenty of online sessions coming up in the next month or two, so be sure to head on out to the Chime website and check out what we’ve got planned for the summer and beyond. So with that, I don’t think I see any more questions. I want to thank Ashley and Sanjeev for joining us today. I want to thank all of our attendees for spending some time with us this afternoon. And I think with that, we’re ready to end for today, so thanks again, everyone, and have a fantastic afternoon. 

 

SANJEEV AGRAWAL: One last thought. 

 

SUMMER O’NEILL: Sure, go ahead. 

 

SANJEEV AGRAWAL: In case there are follow up questions, I did want to throw up on a slide Ashley’s email as well as mine. So if any of the attendees want to, you’re more than welcome to follow up with ashley dot walsh at uchealth dot org, or my first name at leantaas dot com. I wanted to make sure everybody has those emails. 

 

SUMMER O’NEILL: Perfect, yes, thank you so much. And those will be in the slides that everyone gets as well. So, if you didn’t get a chance to write them down real quick, you’ll get those in email as well. So, thanks again, and have a great afternoon, everyone. 

 

SANJEEV AGRAWAL: Thank you. 

 

ASHLEY WALSH: Thank you.
