V28 is no longer a transition issue. It is the live operating environment for Medicare Advantage risk adjustment in 2026. CMS completed the three-year phase-in of the 2024 CMS-HCC model for CY 2026, meaning plans and providers are now operating under the full model rather than a blended version.¹
That makes mid-year the right moment for a hard operational check. Not a high-level “how are we doing?” review, but a specific look at whether documentation and coding workflows are actually performing under V28. The key question is not whether conditions are eventually being found somewhere in the process. It is whether they are being documented clearly, specifically, and defensibly early enough to matter.
Teams that wait until year-end to answer that question lose time they cannot get back. Mid-year is where you find out whether your program is built on first-pass accuracy or retrospective cleanup.
Risk adjustment teams should first measure whether diagnoses are being captured accurately and supported completely during the encounter. Under V28, first-pass capture and documentation support matter more than downstream recovery because the model now operates fully on the updated CMS-HCC logic, and CMS continues to publish 2026 initial and midyear/final ICD-10 mappings and model software that teams need to track as the year progresses.¹ ²
That means the first metrics to review should be the ones that show whether your workflow is working before query, before chart chase, and before retrospective coding rescue. At a minimum, teams should be looking at:
- First-pass HCC capture rate
- MEAT-supported diagnosis rate
- Documentation specificity, especially for V28-sensitive condition families
- Chart chase and coder query dependence
- Audit-ready diagnosis support
If those numbers are weak, your year-end results may still look acceptable on paper while your workflow is quietly underperforming.
If your organization is still centered on late-stage recovery, this is a good companion read: From Retrospective to Prospective: Modernizing Risk Adjustment Workflows.
First-pass accuracy is more important under V28 because retrospective recovery can identify missed value, but it cannot reliably restore clinical context or create the same level of documentation defensibility as accurate encounter-level capture. V28 raises the importance of specificity and current clinical support, which makes the original note more important than ever.¹ ²
A program can post strong retrospective recovery numbers and still have a weak operating model. That happens when teams are finding conditions only after the encounter through chart review, coder outreach, or provider addenda. The value may still be recovered in some cases, but the process is slower, more expensive, and less defensible.
That is why mid-year metrics need to go beyond “how much did we find later?” and start asking why those conditions were not documented clearly at the point of care in the first place.
If your recovery engine is doing all the heavy lifting, your point-of-care process is underperforming.
For more on this measurement shift, see Proving ROI in Risk Adjustment: How to Measure Results Beyond Chart Review.
The documentation quality metrics that matter most under V28 are MEAT support, specificity, current-year clinical relevance, and consistency across providers. These are the metrics most likely to reveal whether your program is becoming more defensible or simply more dependent on manual cleanup.¹ ²
Start with MEAT support. If diagnoses are not clearly monitored, evaluated, assessed, or treated in the note, they become vulnerable regardless of whether the condition is clinically real.
Then look at specificity. V28 places more pressure on clear condition detail, especially in categories where severity, staging, or complication status materially affects capture. Conditions like diabetes, CKD, heart failure, COPD, and behavioral health diagnoses often expose this problem quickly.
Also measure how often chronic conditions are being carried forward without active current-year support. A diagnosis on the problem list is not the same thing as a diagnosis supported for risk adjustment.
Two definitions are worth keeping in mind here:
First-pass HCC capture is the percentage of valid, clinically supported HCC diagnoses that are documented correctly during the initial encounter and do not require retrospective recovery or provider rework.
V28-sensitive performance is how well your workflow handles the specificity, hierarchy logic, and current clinical support required under CMS-HCC V28, especially for conditions that were easier to capture under prior models.¹
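As a rough sketch of the first definition, first-pass capture can be computed directly from encounter-level diagnosis records. The record layout and field names below (`valid`, `supported`, `captured_at_encounter`) are illustrative assumptions, not a standard schema:

```python
# Illustrative sketch of the first-pass HCC capture metric described above.
# Field names are hypothetical; adapt them to your own data model.

def first_pass_capture_rate(diagnoses):
    """Share of valid, clinically supported HCC diagnoses documented
    correctly at the initial encounter, with no retrospective recovery."""
    valid = [d for d in diagnoses if d["valid"] and d["supported"]]
    if not valid:
        return 0.0
    first_pass = [d for d in valid if d["captured_at_encounter"]]
    return len(first_pass) / len(valid)

diagnoses = [
    {"hcc": "HCC36",  "valid": True, "supported": True,  "captured_at_encounter": True},
    {"hcc": "HCC226", "valid": True, "supported": True,  "captured_at_encounter": False},
    {"hcc": "HCC155", "valid": True, "supported": False, "captured_at_encounter": True},
    {"hcc": "HCC37",  "valid": True, "supported": True,  "captured_at_encounter": True},
]
print(first_pass_capture_rate(diagnoses))  # 2 of 3 supported diagnoses -> ~0.667
```

Note that unsupported diagnoses are excluded from the denominator entirely: a condition without current-year clinical support should be resolved as a documentation problem, not counted as captured.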
If you want the provider-side version of this same issue, read Closing HCC Coding Gaps at the Point of Care.
Teams should measure how often chart chase, coder queries, and late-stage reviews are still required to close documentation gaps. If a large share of valid diagnoses appears only through retrospective effort, the workflow is still operating as a recovery model rather than a prospective one.
This is where a mid-year checkpoint becomes especially useful. It helps answer practical questions like:
- How much coder and provider time is still going to chart chase and queries?
- Which providers or condition families generate the most retrospective rework?
- Is late-stage review volume trending down as the year progresses, or holding steady?
These are not just efficiency questions. They are also accuracy questions. Heavy dependence on retrospective correction is usually a sign that documentation support is showing up too late.
That’s why chart chase and burden metrics matter alongside capture metrics. Related reads: Reducing Chart Chase Volume with AI-Powered Documentation Signals and Reducing Documentation Burden While Improving Coding Accuracy.
A useful V28 checkpoint should be practical, not theoretical.
Step 1: Benchmark first-pass capture
Segment HCC capture by provider, specialty, clinic, and condition family to identify where point-of-care capture is failing.
Step 2: Audit documentation support
Review whether diagnoses are supported with complete MEAT and sufficient specificity, not just present in the chart.
Step 3: Quantify retrospective dependence
Measure chart chase, provider query rates, and recovered diagnoses to see how much of the program still depends on late-stage intervention.
Step 4: Identify top V28 failure patterns
Look for condition groups where specificity or current-year support is consistently weak.
Step 5: Translate metrics into workflow change
Use the findings to change provider guidance, point-of-care signals, and pre-submission validation priorities.
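Step 1 of the checkpoint above can be sketched in a few lines: compute first-pass capture rates per segment (provider, specialty, clinic, or condition family) so weak points stand out. The field names and segment keys here are illustrative assumptions:

```python
# Illustrative sketch of Step 1: segmenting first-pass capture to find
# where point-of-care documentation is failing. Field names are hypothetical.
from collections import defaultdict

def segment_first_pass(diagnoses, key):
    """Per-segment first-pass capture rates for supported diagnoses."""
    totals = defaultdict(lambda: [0, 0])  # segment -> [first_pass, total]
    for d in diagnoses:
        if not d["supported"]:
            continue
        bucket = totals[d[key]]
        bucket[1] += 1
        if d["captured_at_encounter"]:
            bucket[0] += 1
    return {seg: fp / total for seg, (fp, total) in totals.items()}

encounters = [
    {"provider": "A", "family": "diabetes",      "supported": True, "captured_at_encounter": True},
    {"provider": "A", "family": "CKD",           "supported": True, "captured_at_encounter": False},
    {"provider": "B", "family": "diabetes",      "supported": True, "captured_at_encounter": True},
    {"provider": "B", "family": "heart failure", "supported": True, "captured_at_encounter": False},
]
print(segment_first_pass(encounters, "provider"))  # {'A': 0.5, 'B': 0.5}
print(segment_first_pass(encounters, "family"))    # diabetes strong, CKD and HF weak
```

Cutting the same data by `"provider"` and then by `"family"` is the point: a provider-level view that looks acceptable can hide a condition family (CKD or heart failure in this toy example) where capture is consistently failing.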
The point of the checkpoint is not just reporting. It is course correction while there is still time to improve year-end performance.
Inferscience helps risk adjustment teams improve V28 performance by turning the most important mid-year metrics into workflow signals that can be acted on before year-end.
AI Chart Assistant supports real-time documentation quality and encounter-level completeness. HCC Assistant helps surface high-value suspect conditions and reinforce specificity during the visit. Quality Assistant helps teams see where care gaps and chronic condition management intersect with risk adjustment performance. HCC Validator strengthens pre-submission support checks and audit readiness.³
That matters because a mid-year checkpoint only creates value if the team can actually act on what it finds.
Organizations that measure the right mid-year V28 metrics should expect stronger first-pass accuracy, fewer unsupported diagnoses, lower retrospective burden, and more consistent year-end performance. The real value of this checkpoint is not just visibility; it is the ability to improve outcomes while there is still time.
If the right metrics are in place, teams should see:
- Rising first-pass HCC capture rates
- Fewer unsupported or carried-forward diagnoses
- Declining chart chase and query volume
- Steadier, more predictable year-end performance
This is also why mid-year measurement is tied directly to defensibility. If you want to extend the conversation, see Preparing for RADV Audits: Building a Defensible Documentation Strategy and Building a Defensible Risk Adjustment Program in a Post-V28 Environment.
Risk adjustment teams should measure first-pass HCC capture, MEAT-supported diagnosis rates, documentation specificity, chart chase dependence, and audit-ready diagnosis support.
First-pass capture is important under V28 because retrospective recovery cannot reliably restore clinical context or ensure the same level of documentation defensibility as accurate encounter-level capture.
One of the biggest warning signs is heavy dependence on retrospective chart review or provider queries to identify diagnoses that should have been documented clearly during the encounter.
Teams can improve V28 performance mid-year by identifying weak documentation patterns, reinforcing point-of-care specificity, and using real-time validation to reduce unsupported diagnoses before submission.
V28 mid-year performance should be measured through first-pass accuracy, documentation quality, and workflow dependence on cleanup.
The goal is not just to find missed value. It is to identify where the workflow is still failing while there is still time to fix it. Teams that course-correct now will improve year-end RAF performance, reduce operational waste, and build stronger audit resilience.
Contact Inferscience to see how your team can turn a mid-year V28 checkpoint into workflow improvement.
Request a walkthrough to learn how real-time documentation support can improve first-pass capture, reduce rework, and strengthen audit readiness.