(SOUNDBITE OF MUSIC)

ANNOUNCER:

Welcome to ASA’s Central Line, the official podcast of the American Society of Anesthesiologists, edited by Dr. Adam Striker.

DR. ADAM STRIKER:

Welcome to another episode of Central Line. I'm Adam Striker, your editor, and I'm here today with Dr. Tim Clement, a physician anesthesiologist with Confluence Health. His group is in the process of implementing the quality improvement solution NACOR, so we wanted to talk to him about what he's learned and what he's hoping to get from his group’s improved benchmarking capabilities. Tim, welcome to the podcast.

DR. TIM CLEMENT:

Thanks for having me, Dr. Striker.

DR. STRIKER:

Why don’t you tell me a little bit about your practice?

DR. CLEMENT:

I am one of the newer members of a group of 22 anesthesiologists in central Washington. We’re part of a large multi-specialty group out here, uh, a physician-only group. Um, we are covering a health system that spans all of North-Central Washington, a huge geographic area, with a pretty large patient population for our two main hospitals, which is where most of the surgeries happen. We do, you know, a typical mix of a generalist practice. We have a good amount of ortho, with regional, general, urology, GYN, OB, um, we do some neuroanesthesia. And then we have a dedicated cardiac team, as well.

DR. STRIKER:

Great. What are the overall successes of your practice?

DR. CLEMENT:

The biggest thing that has struck me since coming here is how effective the teamwork and communication are. Our department has, you know, has regular meetings like most of us do, but we really, we really do a good job of bringing in our surgeon colleagues and our urology colleagues and collaborating with nursing and making sure that we're all on the same page. So many different changes in our practice are rolling out, and they affect everybody else, and that communication makes it a lot more fun and, and meaningful for the patients as well. So I, I think that's the, that's the thing I like the most about my group right now.

DR. STRIKER:

Well, let’s talk a little bit about the main subject of the podcast today. You're in the process of implementing NACOR at the benchmarking level. Can you tell me a little bit about what was happening with your practice that led you to want to implement a quality improvement initiative?

DR. CLEMENT:

Yeah, my, my interest in implementing a quality improvement initiative started before I joined here. I trained at an institution that's known for its, um, management system, which is based on continuous quality improvement. Despite that culture that we practiced in, and I, I trained in, the residents didn't have any personal or group quality measures to look at, and I thought this was a big gap. So when I came out here to central Washington, our department had a few things we were collecting through the electronic record, but we weren't, we weren't reviewing it systematically and it wasn't really effectively changing patient, uh, patient care.

So I tossed this idea around with the leadership in our group and suggested a more robust quality improvement program, something more formal and organized and it came at a good time with a lot of changes in our orthopedic service line and, um, our leadership was supportive both at the departmental level as well as the hospital level. You know, we knew we'd need some help calculating and tracking measures. We wanted to be able to choose from a broad range of measures. So we, we ended up at the benchmarking level because we knew benchmarking would be valuable in helping us prioritize. Uh, we could see the areas where we were doing well relative to others, we could see areas where we had room for improvement that would be the most effective to target first.

DR. STRIKER:

Why don’t we talk a little bit about the, um, specific triggers to actually implement this kind of a solution? Were there any? Or was it just an evolution over time once you got there?

DR. CLEMENT:

Yeah, there wasn't a single trigger. I think a lot of it came from sort of a generational shift where we are having a lot more data at our fingertips and the residents coming out of training are realizing how much data can come out of the electronic medical record and how we can use that data. The momentum built over about a year-and-a-half. Uh, we were making a lot of changes, as I mentioned. There were lots of orthopedic service line changes, there were lots of changes in our regional anesthesia. Uh, there were some changes in our infrastructure, like potentially building a, a block room for regional anesthesia. And, as these changes were compounding, we wanted to see if our protocols and pathways were an improvement over what we'd been doing before, and, um, if so, we wanted to understand how, how subsequent change affected our outcomes. So, as that momentum built and we had a little more new blood come into the group, there was some more support, uh, for this type of change.

DR. STRIKER:

Well, let's talk about change a little bit. Do you think the ever-changing landscape of medicine leads to a greater need for benchmarking?

DR. CLEMENT:

Definitely. You know, we’re, we're all working on keeping up with the changes but there is so much coming out in literature these days with multiple journals, and, and sources. Um, I, I like to quote Peter Drucker who said, “If you can't measure it, you can't improve it.”
So, um, some of the changes we've made in our total joint pathways include reducing opioid prescriptions and adding multimodal and regional anesthesia. You know, we really need to know how these things are affecting some of our measures like post-op nausea and length of stay, and benchmarking really allows us to see how we compare to other groups with similar patients, um, but perhaps different protocols. You know, what, what are we doing that's working for us? What are other folks doing that’s working for them? And what kind of results are they getting? So as all these changes happen, you know, we don't want to reinvent the wheel, so benchmarking allows us to, like I said before, target our priority areas, and areas where we have the most room for improvement.

DR. STRIKER:

This brings up a question that I know has come up in our group as we also implemented benchmarking and quality improvement measures. Not everybody is always on board with this kind of a, an endeavor. Um, some people feel that as you try to benchmark or, um, measure things concretely, we’re perhaps impinging on the art of medicine, and there is a little bit of resistance, um, to going down a route which may seem a little more cookbook-ish, if you will, as if anesthetics should be given by recipe. And I was just curious, A, has that even, um, come up in your group? And if so, how was that message countered or engaged with?

DR. CLEMENT:

You know, we’ve definitely dealt with that. I, I feel like quality improvement isn't something that any physician would argue with. Everybody is in favor of quality improvement. It's just that it's tough because there's no perfect, um, there's no perfect measure and there's no perfect process improvement. You know, like all physicians, we are incredibly dedicated to our work. We take a lot of pride in our work, and it has taken some work to transition from feeling like quality measurement is threatening or confining to feeling like it can really help patient care and job satisfaction. There's tons of evidence out there in the literature that, that supports that not only can this sort of program with, with NACOR and benchmarking be effective for improving patient outcomes, but also, uh, it improves physician satisfaction and, um, job satisfaction.

So, you know, despite the literature out there supporting this, it's still sort of an emotional response like you were talking about. And, you know, from what I've learned and from what others have taught me, part of it is you're taking a leap of faith. But I think a lot of it is, uh, knowing and accepting that no measure is perfect. And that something is better than nothing. Having some idea of how our patients are doing, for instance, uh, post-op pain scores. You know, pain scores are, are a really challenging thing that a lot of people think have low value in, in terms of quality. But it may be an indicator, it may, it may lead to something else, and, and we decide how we react to these measures. You know, we are not confined to react to a certain measure, and we're not confined to make any specific changes. Uh, so in terms of quality improvement programs, yeah, there was some resistance but nobody opposed the idea on a global level.

Uh, and then on the level of protocols and guidelines and cookbook medicine, you know, just hearing you ask that question made me think of what I do a lot, which is, I'm in the kitchen a lot, and I love to cook for my family and myself. I use recipes, you know, a lot of the time, and when I, when I do use a recipe, I make changes if I feel like it needs a change. You know, I've been, I've been cooking for, for a couple decades and feel like if my intuition tells me that I need to change something, I can. And to bring it back to the ORs, if I feel like a guideline isn't right or a part of a protocol isn't right for a certain patient, I'll talk to my partners or talk to my mentors and, uh, see what they say about that and see if a change or modification is indicated.

DR. STRIKER:

It’s a great analogy. And you mentioned literature before as well, at, at least in the context of supporting, uh, the idea of quality improvement, but I was wondering what your thoughts are on evidence-based medicine in the literature versus these quality projects? In other words, do you think it's fair to say that there's a place for both? I’m thinking some people may be resistant because, you know, there's not level one evidence out there, there's not a randomized double-blind trial scientifically proving that an initiative is okay, and therefore we shouldn’t be doing it. But I feel like there is still a place, and maybe this is obvious to a lot of people, but I at least wanted to get your thoughts on the, the, uh, the place for this kind of quality improvement work, benchmarking, versus evidence-based medicine that's, uh, been critically analyzed and studied.

DR. CLEMENT:

Yeah, that is a great question. I think, I think that that question reflects the changing times, with the development of huge data sets and the evolution of the electronic medical record and the pace of change in medicine. Those three things are coming together to create a time in which a lot of the questions we are asking about patient care and patient outcomes, uh, and interventions do not have a randomized controlled trial or high-level evidence to help us make decisions. So, what that means, and I think what anesthesiology and our specialty's publications are supporting, is the need for other types of interventions and other types of measurement.

I think for a lot of the things we’re looking at, uh, in terms of number of antiemetics, you know, and, and that sort of thing, there are some good randomized controlled trials and they can help guide our decisions, uh, to help changes and improvements with some of these measures. But for a lot of other stuff, like I said, there, there isn't any evidence-based medicine, or the, the evidence-based medicine doesn't specifically address the question in the context we need it to, um, like certain multimodal analgesics in orthopedic vs. spine surgery. And so what quality measures allow us to do is what I really have my background in, which is continuous quality improvement and iterative changes. So, we’re all familiar with PDSA – plan, do, study, act – and the cycles of change where, you know, you research what needs changing and how to change it. You make a change based on your best guess, you see how it affects outcomes, and then you go back and you do the cycle over again. And that sort of iterative change, along with the type of data we're able to collect these days, I think will allow more room for improvement than waiting for a randomized controlled trial to come out.

DR. STRIKER:

Yeah, I find it kind of a fascinating discussion cuz we're ingrained to think a certain way as we grow up, if you will, in, in the scientific and medical community, and changing a whole way of thinking, with new tools, as you pointed out, is certainly a challenge. And depending on, you know, where your outlook falls, uh, you know, your viewpoint is, is shaped. And so, um, thank you for the, for the thoughts on that cuz, um, I do think it's a pertinent question as we shift the landscape of how we, uh, how we improve the quality of anesthetic care going forward.

But I wanted to circle back, uh, to how your group went about it: the implementation, if you will, the decision-making, the details. Maybe there’s a group out there wanting to do the same thing. How did you start the process?

DR. CLEMENT:

The first thing we did was we reached out to some of our peers, in, in groups in the area and across the country. People we had trained with, uh, people we were friends with, some from medical school, people we had met at conferences, and, and asked what others were doing. And about half had some sort of organized quality improvement program, and the other half were doing something more ad hoc or something based around meeting CMS measures and that sort of thing. And we got their feedback on what, what they were doing and how it was working for them. Uh, we reached out to our state society, um, which has some resources and is aligned with ASA. There were a couple of startups, uh, that we contacted that were working on QI programs, and a couple of established third-party programs that we asked for more information from, and those were through personal contacts as well.

And then we also met with our electronic medical record folks to, to see what was possible and what sort of resources we had available, uh, to help implement this sort of thing.

DR. STRIKER:

What solutions did you consider? Did you jump straight into NACOR, or did you consider other options?

DR. CLEMENT:

We had three main options on the table. Uh, the third-party programs were really enticing, uh, and really attractive, and I think they will make a big impact on our specialty. But one was still sort of in the development phases and unproven, um, and another one required a lot of extra paperwork, filling out forms, checking boxes, involving other providers, and we wanted something a little more streamlined that wouldn't change our workflows as much, um, and something to kind of launch us off. And NACOR offered a good balance of, of measure options and a streamlined workflow. We were all already ASA members and so NACOR was an incredible value for us. We liked the idea of contributing to AQI and having our data contribute on a national level.

Some other advantages that tipped us towards NACOR were the large number of participating groups, uh, for the high-quality benchmarking. So it was, it was a lot of things coming together between the value, the integration, the workflow, um, streamlining.

DR. STRIKER:

We, we touched a little bit on, potentially, uh, you know, other thoughts or resistance within groups. But how did you guys actually implement the decision-making? Was it just, uh, one or two, a few individuals, or was it a collaborative effort to decide?

DR. CLEMENT:

My group, with 22 anesthesiologists, is sort of at a critical size where we could use an executive committee to help make these sorts of decisions, uh, but we're still making decisions as a, as a big group. And so it started out as a journal club. Um, we looked at an article in Anesthesiology from a couple years ago that discussed how a group in the Netherlands did a quality improvement project with interscalene blocks. And they had a really robust description of their implementation and their review process and how they went about implementing this sort of thing for a specific measure. Uh, and this was the introduction for my group to what was possible with quality improvement measures. Uh, it was a way to get the ball rolling and, and let people see the opportunity. So we had a few champions promoting the idea of implementing a more robust quality improvement program. Like I said earlier, everybody saw the value, but there were still some skeptics that, you know, kind of clung to the idea that no quality measure is perfect and questioned whether the data would actually be actionable. Um, so, uh, ultimately it was a group decision and it took a little time and a little coaxing, and it's taking more time to figure out how we're going to implement it, uh, how we're going to review the data. Will, will it be on a group level, will, will it be on an individual level with our manager? So it’s, it’s definitely an evolving process.

DR. STRIKER:

Within that process, have you felt that some of the, some of the people on the other side of the fence changed their views at all? Or at least shifted their thinking in any way?

DR. CLEMENT:

The change I have seen, and we’re still pretty early in the process, the change I have seen is excitement among about half of us and acceptance among the other half. So the resistance has sort of faded away. This is coming down the, the pike and we're going to see how it works. And I think as we get a little deeper in the process we’ll have some more hurdles to overcome in terms of getting buy-in, but the acceptance really is there and we'll see how it evolves as we get some more data back and start doing these improvements or, or making changes to our workflows.

DR. STRIKER:

How did you settle on the Benchmarking level?

DR. CLEMENT:

Uh, we looked at various levels of NACOR and, you know, Quality Concierge seemed like an awesome product. In an ideal world, I think we would have gone there and had, had a little more hand-holding, you know, the top level of service. Looking at our budget, we weren't at that point yet and really we needed to start somewhere. All of our quality reporting measures for Medicare and Medicaid are through our health system, so we didn't have to address quality reporting, uh, through this quality improvement program. So it was really the goal of developing an internal quality improvement framework. And being a relatively small group, we thought the Benchmarking level would offer the ability to see how our results compared to other groups of similar size, and really target what we wanted to address.

Um, another factor was cost. We have a pretty robust quality improvement program embedded in our hospital that's willing to support us. And so we felt like we had some support without diving fully into the Quality Concierge. And the NACOR Benchmarking level is an incredible value. For ASA members, it's sort of a no-brainer to at least go in at that level. So that's, that's how we ended up on the Benchmarking level.

DR. STRIKER:

Then once you decided on that, how did you handle education and training?

DR. CLEMENT:

So once we decided on NACOR Benchmarking, the next step was to choose which measures we wanted to implement through our electronic medical record and report. So through a series of, um, web-based surveys and some in-person discussions at our weekly meetings, we narrowed it down to the, the measures that we were most excited to look at first and target. And part of that was what was feasible through our electronic medical record without making too many changes to workflows. I think the real steps and changes will come once we start looking at the data, um, since we haven't, we haven't gotten our first round of data back yet. We have a little bit more to hash out in terms of how we're going to review and make changes.

DR. STRIKER:

Well, let's talk about the role of a champion, uh, for any kind of project. Oftentimes, it's very helpful when someone is, uh, carrying the, the, the banner for a change such as this. How important is that, and do you have a specific individual who acts in that role?

DR. CLEMENT:

At the end of the day, all these little changes in practice and protocols and workflows require an individual or group making sure these changes are happening and are, are worthwhile. It definitely, you know, it took an individual to get the ball rolling. And once the ball is rolling there is, there's more support, more energy behind it. Uh, but none of this can be done by a single person. It really takes the entire team to buy in and to, um, be enthusiastic and to support a good quality improvement program. Sometimes an individual or small group can start a change on their own by collecting their own data and measures, sort of like a pilot program with a few people.

Um, in our group, you know, I, I did a journal club and a presentation which helped initiate changes, and so the regular meetings, organizing discussions, organizing the web-based surveys, all, all those things take, take a champion. Uh, but, but really when it comes down to the work of reviewing data, reviewing protocols, making changes, communicating with the surgeons, those things happen in our group across the spectrum of anesthesiologists. So, the champion, I think, is doing some legwork and is sort of the, the key person for the communication and the planning. But really it is taking all of us to, to reach the ultimate goal, which is improving patient outcomes.

DR. STRIKER:

Well, you mentioned working with surgeons. What does the hospital think of your efforts? Obviously you want to do what's best for quality of care, but, uh, are there other benefits in terms of what the institution, uh, looks at?

DR. CLEMENT:

They are actually really excited to, um, have us share our data with them, as I think any health system would be. So they are very supportive both on a financial level and a logistical level, giving us resources to make changes to our electronic medical record, helping with our membership fees.

Our health system is quite unique in our integration into it, and, you know, it really isn't, isn't threatening to us if they have our quality data, and so the support from our institution has been key. I think they, you know, they’re, they are a little hesitant. They want to see some proof-of-concept locally before they give more support. But, uh, like I said, both at a surgical departmental level as well as, uh, a health system-wide level, they've been very supportive.

DR. STRIKER:

How much integration is there with the other departments at the institution? Are you guys just looking at anesthesia, or are there multiple levels, I should ask? Are there just anesthesia-specific variables, or are there other projects going on that involve, uh, surgery and/or ICU or, or medicine or anybody, really?

DR. CLEMENT:

We're part of, uh, several different groups here in Washington like the pre-collaborative. And so, you know, there are several different forums in which we look at our quality data outside of the specific anesthesiology department. The department of anesthesiology has chosen to look mostly at measures that we have a direct effect on, or that we can make changes to in order to have a direct effect on patient outcomes. While we are doing some of the broader-level quality data work in the organization, our NACOR membership is starting out mostly, mostly looking at things that are specific to the department of anesthesiology.

DR. STRIKER:

Well, let’s delve a little bit into the specifics. Are, are there specific variables you can share with us that you're looking at right now? I know you haven't had your first round of data back, but can you share with us what, uh, you're gonna be starting with?

DR. CLEMENT:

Yeah, so, our targets currently are some process measures and some outcomes measures. We already have some protocols in place for things like how blood glucose is managed in patients in the operating room and the postoperative period. And so we’re excited to use our NACOR measures to look at both: is a patient getting insulin when appropriate, and are the, the blood sugars being measured when appropriate? So, those two things I think are, are some process measures that will help us make sure that we're sticking to our protocols that are already in place.

Other measures like post-op nausea, that's, that's been, I'd say everybody has a different recipe for their post-op nausea. And several of us believe in a more, a more guideline-oriented approach. And so now we'll have some data with post-op nausea to help us determine, you know, who in the group is someone we can turn to for advice to share with the rest of the group on how to best manage post-op nausea, or, or prevent post-op nausea in our patient population. Post-op pain scores are something that we’re, you know, really interested in with all our changes in regional anesthesia. So those are the measures off the top of my head that I can, you know, give specific examples of.

DR. STRIKER:

Do you see it working both ways in terms of specific practitioners and also group-wide adjustments? In other words, do you see this helping perhaps improve practitioners that may have, let’s say, higher postoperative nausea and vomiting rates, and suggest to them, hey, so-and-so is doing it better, and this is what they're doing? But also looking at it with a bird's-eye view to make departmental changes on the mean, if you will? To maybe try to drop your post-op nausea and vomiting rates overall despite the fact of having outliers that may do a better job than others.

DR. CLEMENT:

Uh, definitely, definitely, and that's, I think, partly where Benchmarking comes in. Bringing all the individuals up to the same level, whether it's adherence to ERAS protocols or measuring blood sugars, is one part of the big picture in improving patient care, but the Benchmarking level really is what allows us to look at other groups and see where we, as, as a group, can do better. So it definitely goes both ways, as you were saying, in terms of improving our global care across, across our group as well as bringing all individuals up to the highest level we can practice at.

DR. STRIKER:

If there are anesthesiologists listening who are considering some kind of quality improvement solution, what advice would you give them? And, and along with that, is there something you wish you had known when you started the process?

DR. CLEMENT:

Well, I, I should have known this, but I think being patient is, is something, you know, I wish I'd realized how much time this would take, not in terms of getting our membership and getting, getting data back, but just in terms of getting buy-in, um, getting everybody on board, um, deciding what measures we were going to take a look at first, and all the logistics around it. Patience was key. But yeah, as everybody knows, it's all about the communication. You know, you’ve got to have everybody, if not on the same page, um, you have to have enough support that the program's going to fly. You have to, you have to have the nurses and the surgeons on board with you making changes, um, that, that are going to affect somebody who’s their patient as well.

So the communication is, is the biggest key. If you go into this sort of thing knowing your goals and your budget, it, it helps. You don't have to do this in advance, but having an example or a plan for how to implement and review your measures is helpful and gives people a more concrete feeling for what, what the quality improvement program will look like. And then someone has to do the legwork, you know, get contracts, uh, signed, get the membership paid for. So there's some practical stuff and then there's some more high-level stuff.

DR. STRIKER:

Groups obviously are of varying size. Would you advise people to start off small, regardless of group size, or do you think you just jump right in? Any specific insight on that? Perhaps if you’re going to look at different measures, or quality, uh, improvement measures, you know, let's key in on one or two, not try to tackle, uh, the whole practice all at once. It seems like that would be the right place to get the mechanisms in place, get everybody adept at the benchmarking, the measuring, and, and then go from there rather than bite off more than you can chew right at first.

DR. CLEMENT:

I think no matter, no matter the size of the group or the group's goals or the number of measures looked at, you know, this, this will take practice, and using quality improvement measures for, for change will take practice. I think it depends on how much energy the group has to put into the, the quality improvement program. It depends on the budget, it depends on the amount of time people are willing to put in, or an individual's going to put in, to make this happen. So my approach to this sort of thing is there's, there's no such thing as, as too big a bite.

Yeah, you don't want to change everything at once, but really, when it's integrated in your electronic medical, medical record, gathering the data for us was pretty low-hanging fruit. So we weren't restricted on the number of measures we could possibly have submitted. And, I, my feeling is that we have the luxury of choosing not only which measures we wanna make changes around but which measures we even want to review. If we don't have the energy to review all of the measures we’re submitting, then we look at what we want to target and we only make changes around one. That way we can pick and choose what we want to address and the scale of it. We don't have to, we don't have to make too many changes at once, or if we have a ton of energy and a ton of ideas, uh, then maybe we'll have some more energy to address multiple measures at the same time. But I really don't feel like there's such a thing in this as biting off more than you can chew, though there is such a thing as sort of going in over your head or making too many changes at once.

DR. STRIKER:

No, great point. To kind of follow on that, do you feel that this is something every group is, is going to have to do? There are obvious reimbursement ramifications and regulatory ramifications, but philosophically, do you think it's, it's something really every group should tackle, even if there are some groups out there that still haven't, uh, gone down this path yet?

DR. CLEMENT:

I think you make a great point that this is something we're all going to have to do inevitably. That was, that was part of the selling point for my group: the better we get at looking at our measures and making changes and, and showing improvement, the more prepared we'll be for the future, and, you know, we’ll just be on better footing to face any challenges that come and any changes that come, whether it's, uh, regulatory or logistical or financial.

So I, I do really feel like you can't start practicing this stuff early enough. Uh, cuz before we know it, we're not going to have a choice. And it, it's fun, I mean quality improvement is fun, and the better you get, it's kind of like, I say this about simulation, too. Simulation is really hard and frustrating when you first start doing it, but after you've done it for a little while, it's an incredible learning opportunity that can be really fun and rewarding, and quality improvement is the same way. The more practice we get at this, the better we’ll get at it, and it'll make us better anesthesiologists, better prepared for the future.

DR. STRIKER:

And along the lines of the future, do you have any thoughts on perhaps how quality analysis and implementation will look in the next 10 to 20 years as it pertains to, uh, anesthesiology practice?

DR. CLEMENT:

Well, I'm not sure if I have a prediction, but I have a hope, and my, my hope is to have much more integrated data collection. I think that we, we can see in some countries in Europe, uh, with national health systems, how collecting nationwide data can help them answer questions about specific patient populations. That, that makes a big difference in patient care. And so along the lines of AQI, I hope that we have more measures integrated into our workflows, uh, seamlessly, and that our expectations change such that we are all contributing to this big data set that allows us to, to have a little more information about the changes we’re making rather than just on the local level.

DR. STRIKER:

Well, let's, uh, let's leave it there, uh, Dr. Clement. Thank you very much for a, for a great conversation and, uh, for joining us today on the podcast.

DR. CLEMENT:

Well, thanks for having me. I really, uh, appreciate the work I've done with, uh, NACOR and the help they've, they've offered us.

DR. STRIKER:

Great. This is Adam Striker, thanking everybody for joining us on this episode of ASA’s Central Line. And we will see you on the next episode. Please join us. Thanks!

(MUSIC)

ANNOUNCER:

Advance patient care and see how your performance compares to your peers. Explore quality registry options at asahq.org/registries.

Subscribe to Central Line today, wherever you get your podcasts or visit asahq.org/podcasts for more.