
Decoding the '95% Accuracy' Standard: Ensuring Consistent Quality Metrics in Medical Coding

Ask a room full of seasoned coding directors, auditors, and medical coders about the quality standard they’re held to, and they’ll unanimously say 95%. However, ask them to explain how that 95% accuracy is defined and measured, and you might be met with silence, uncertainty, or varying explanations about case volumes and audit cadence; consistent answers will be hard to come by.

This raises numerous critical questions: What does 95% accuracy truly signify? Is it measured at the case level, during audits, by department, or by individual coders? Are CPT codes, modifiers, and ICD codes all evaluated in this metric? Does each code within a case need to be accurate to achieve a passing grade? Furthermore, the complexity of coding varies greatly depending on the type of service being coded and the purpose behind the coding; yet, there is no established way to compare quality based on the difficulty of the coding task.
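To make the ambiguity concrete, here is a minimal sketch (in Python, using entirely hypothetical audit cases and codes) showing how the same audit can yield very different "accuracy" numbers depending on whether it is scored at the case level or the code level:

```python
# Hypothetical audit: each tuple is (codes the coder assigned, codes the auditor expected).
audited_cases = [
    ({"99213", "M54.5"}, {"99213", "M54.5"}),    # fully correct case
    ({"99214", "E11.9"}, {"99213", "E11.9"}),    # one incorrect code
    ({"99212"},          {"99212", "Z79.84"}),   # one missing code
]

# Case-level scoring: a case passes only if every code matches exactly.
case_accuracy = sum(coded == expected for coded, expected in audited_cases) / len(audited_cases)

# Code-level scoring: each expected code is counted individually.
total_expected = sum(len(expected) for _, expected in audited_cases)
correct_codes = sum(len(coded & expected) for coded, expected in audited_cases)
code_accuracy = correct_codes / total_expected

print(f"case-level accuracy: {case_accuracy:.0%}")  # 1 of 3 cases fully correct -> 33%
print(f"code-level accuracy: {code_accuracy:.0%}")  # 4 of 6 expected codes present -> 67%
```

The same coder, on the same three cases, scores 33% one way and 67% the other, which is exactly why an unqualified "95%" tells you very little.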

The Gap in Coding Quality Measurement

This highlights a significant gap in what is otherwise a regimented and structured field, with abundant guidelines, policies, and resources designed to ensure accurate and compliant coding. Without a consistent standard of measurement applied across the industry, our ability to benchmark coding quality across health centers, departments, and even individual coders (or physicians) is severely limited. The coding landscape has evolved beyond the fee-for-service model and is now integral to value-based care, quality metrics, clinical registries, and more. With the advent of coding automation and optimization tools—ranging from in-house coders to contract coders, CAC systems, Epic's Simple Visit Coding, and AI automation—the way we process and code encounters is changing, and so too must the methods by which we measure and track quality. We need quality processes and metrics that can be applied consistently, irrespective of the coding type or coder involved. Consistent measurement of performance is essential, as we cannot afford to proceed with an ambiguous "95%" accuracy standard.


A robust quality program must assess both the accuracy and completeness of coding while considering the specific purpose of the codes being audited. This approach will provide valuable insights into coding and documentation improvement opportunities by implementing precise and systematic methods to measure code quality. Uniform processes and metrics must be applied consistently, regardless of the code generator (be it human, AI, etc.). The outcome should track the quality of the codes across all use cases. Did the coder capture all required codes for fee-for-service? If so, how accurate was the coding for that purpose? Did the coder include codes beyond the standard fee-for-service scope? If so, for what use cases, and at what quality level? With a comprehensive and well-defined coding quality program, answering these questions—and many more—becomes not just possible, but routine.
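The distinction between completeness ("did the coder capture all required codes?") and accuracy ("of the codes assigned, how many were right?") can be sketched the same way. This is a minimal illustration with hypothetical codes, not a prescribed scoring method:

```python
# Hypothetical single-case audit, separating completeness from accuracy.
coded    = {"99213", "M54.5"}            # codes the coder assigned
expected = {"99213", "M54.5", "E11.9"}   # codes the auditor expected for this purpose

# Completeness: share of required codes that were captured (recall).
completeness = len(coded & expected) / len(expected)

# Accuracy: share of assigned codes that were correct (precision).
accuracy = len(coded & expected) / len(coded)

print(f"completeness: {completeness:.0%}")  # 2 of 3 expected codes captured -> 67%
print(f"accuracy:     {accuracy:.0%}")      # 2 of 2 assigned codes correct -> 100%
```

A coder can be perfectly accurate yet incomplete, as here: everything coded was right, but a required diagnosis was never captured. Tracking both dimensions, per use case, is what turns a single opaque percentage into actionable insight.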

Adapting to Elevate Your Quality

In the rapidly evolving healthcare environment, it is essential that coding practices keep pace with industry changes. Establishing a clear, consistent method for measuring coding accuracy and quality across all scenarios is no longer optional—it’s a necessity. By adopting standardized metrics and processes, we can ensure that coding meets the diverse needs of the healthcare system, whether it's for fee-for-service, value-based care, or other emerging models. Such an approach not only clarifies what 95% accuracy truly means but also empowers organizations to make data-driven decisions, enhance documentation, and ultimately deliver better patient outcomes. As we move forward, refining our approach to coding quality will be crucial to supporting the integrity and efficiency of healthcare delivery.

When evaluating AI vendors, consider asking these key questions to better understand their quality assurance processes:

1. How does your coding quality assurance program function, and what routine processes are maintained after implementation?

2. What methodology do you use to score and evaluate quality metrics?

3. Are the results of your quality assurance shared with the customer?

4. Do you have certified coding SMEs on your team, and how do they contribute to the implementation process?

5. Do you have a defined process to determine which cases within a service line are suitable for automation?

6. How do you determine if a coding prediction should be automated?

7. Is the automation process integrated with CCI edits and payer policy reviews?

8. What actions are taken if a case fails to meet CCI edits or payer policy requirements?

9. How do you implement and measure quality improvements in your AI models?

10. What procedures are in place for annual/biannual updates to CPT and ICD codes?

About Jamie Robinson

Jamie Robinson is a seasoned coding auditor and educator with over two decades of experience in the healthcare industry, particularly within teaching hospitals and AI-driven coding solutions. She holds the CPMA, CPC, and CIRCC credentials through the AAPC and currently serves as the Director of Coding Quality at CodaMetrix. She has been instrumental in developing and implementing coding quality assurance programs, optimizing automation performance through data analytics, and ensuring compliance with regulatory standards. With a strong background in professional medical coding, Jamie is dedicated to advancing coding quality, particularly during the integration of AI technologies into coding processes.

Let’s Talk

Let’s see what Code for Better can do for you and your organization.

Request a meeting
