Reducing Prior Authorization Burden with Generative AI

As generative AI gained momentum in healthcare, Epic moved quickly—launching dozens of new features like summarization, message drafting, and automation across the EHR. I led the design of a new feature that streamlined medication prior authorizations—driving accuracy and reducing friction for care teams.

Epic “Health Grid” (80+ apps) | Source: showroom.epic.com

Company
Epic

Industry
Health IT

Role
Lead UX Designer

Year
2024 - 2025

Staged Rollout
Early adopter feedback

Disclaimer: Due to IP restrictions, this case study only includes conceptual visuals. Actual Epic designs and assets cannot be shared, unless they've been made public.

Challenge

Justifying medical necessity through prior authorizations is one of the more tedious parts of outpatient care. Nurses and support staff spend hours gathering clinical details—labs, diagnoses, past treatments that didn’t work—spread across years of chart history. The question sets from payers are long, sometimes repetitive, and small gaps or unclear answers lead to denials and delays in patient access.

We saw an opportunity to surface the most relevant information—right when it’s needed—without taking users out of context.

Results

We piloted the feature with five early adopter organizations, each with centralized ePA teams of 20–40 staff focused full-time on prior authorizations. These were high-volume teams, and the impact was immediate.

Users reported submitting up to 3x more authorizations in a single day than before.

Across pilots, 92% of AI-generated answers were rated as a good starting point by staff.

  • 122 early adopter users

  • 3x increase in efficiency

  • 92% of AI answers rated a good starting point

My Role

I led design on this feature in close partnership with engineering, clinicians, and IT leaders. We worked together to shape both the LLM and the experience around it—validating early output with test data, defining what “good” answers looked like in context, and figuring out how those answers would be generated, reviewed, and edited by users.

I designed the interaction model for AI-generated suggestions—making sure they felt trustworthy, editable, and non-disruptive. I also coordinated a cohesive user experience across Epic's system of 80+ products, many of which were building similar form automation tools.

Along the way, I ran design reviews and iterated based on feedback from internal teams and pilot users, ensuring the feature felt practical, safe, and easy to adopt.

Process

Defining What Makes a "Good Answer"

The core challenge was trust. If users didn’t feel confident in the AI-generated answers, they wouldn’t rely on the feature—no matter how well it integrated into their workflow.

We began by auditing prior auth question sets across payers to understand what information was needed and how it typically appeared in the chart. Starting with simpler, structured questions, we used rule-based logic to generate reliable answers. Once we had a foundation, we moved on to more complex questions—ones that required pulling data across long spans of time or interpreting patterns. We also drew from existing ML research and past predictive UX work to avoid reinventing the wheel.
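
As a rough illustration of that rule-based starting point (a conceptual sketch only, not Epic's implementation; the types, field names, and rule are hypothetical), a structured question like "Has the patient tried and failed a drug in the requested class?" can be answered deterministically from discrete medication history:

```typescript
// Hypothetical, simplified chart data. Real medication records are far richer;
// these names are illustrative only.
interface MedicationOrder {
  drugName: string;
  drugClass: string;
  startDate: string;                 // ISO date
  endDate?: string;                  // undefined if still active
  discontinueReason?: "ineffective" | "adverse-reaction" | "other";
}

interface StructuredAnswer {
  questionId: string;
  answer: "yes" | "no";
  evidence: MedicationOrder[];       // the rows the answer was derived from
}

// Rule: "tried and failed" = a prior order in the requested drug class that
// was stopped for lack of efficacy or an adverse reaction.
function answerTriedAndFailed(
  questionId: string,
  requestedDrugClass: string,
  history: MedicationOrder[]
): StructuredAnswer {
  const failures = history.filter(
    (order) =>
      order.drugClass === requestedDrugClass &&
      order.endDate !== undefined &&
      (order.discontinueReason === "ineffective" ||
        order.discontinueReason === "adverse-reaction")
  );
  return {
    questionId,
    answer: failures.length > 0 ? "yes" : "no",
    evidence: failures,
  };
}
```

Deterministic rules like this gave us a reliable floor and an explicit evidence trail before the more open-ended questions were handed to the LLM.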

User interviews helped us shape both the functionality and tone of the feature.

We learned that:

  • Users often pieced together answers from multiple notes across visits

  • AI suggestions needed to be editable and clearly sourced

  • Users liked plain-language summaries but were cautious of black-box automation

Designing for Confidence and Clarity

Our goal was to give users support—not limit control or replace clinical judgement. I explored ways to embed AI-generated answers directly into the existing prior auth workflow without breaking their mental model of chart review.

I created prototypes with these key design decisions in mind:

  • Balancing visibility, editability, and trust

  • Linking each answer to source data so users could verify where it came from (a data-shape sketch follows this list)

  • Crafting clear, neutral language with help from UX writing experts, including disclaimers to reinforce human verification

  • Allowing in-place editing, so users could easily refine answers before submission

  • Prioritizing short, scannable responses tailored to payer expectations
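
To make those decisions concrete, here is a conceptual sketch of what a suggestion payload could look like. The shape and field names are illustrative assumptions, not Epic's actual schema:

```typescript
// Conceptual shape for an AI-drafted prior auth answer. All names are
// illustrative assumptions, not Epic's internal data model.
interface SourceReference {
  documentId: string;                 // e.g. a note, lab result, or order
  documentType: "note" | "lab" | "medication" | "diagnosis";
  snippet: string;                    // the excerpt the answer was drawn from
  date: string;                       // ISO date, so users can judge recency
}

interface DraftAnswer {
  questionId: string;
  draftText: string;                  // short, scannable response for the form
  sources: SourceReference[];         // every claim links back to chart data
  status: "ai-draft" | "user-edited" | "user-verified";
  disclaimer: string;                 // neutral reminder that a human verifies
}

// In-place editing keeps the provenance but records that a person took over.
function applyUserEdit(answer: DraftAnswer, editedText: string): DraftAnswer {
  return { ...answer, draftText: editedText, status: "user-edited" };
}
```

Carrying the sources and the edit status with every answer is what makes the "trust but verify" interaction possible: nothing reaches the payer form without a visible trail and a clear record of human review.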

Creating a Shared Visual Language for AI

As part of a broader AI UX initiative at Epic, I collaborated with other designers to build a shared visual system for AI-generated content. This system needed to work across both clinical and non-clinical workflows and flex to different input field types.

We focused on visual cues that signaled tentativeness—reinforcing that the clinical content should be reviewed and verified by the user before submission.
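
A minimal sketch of how that kind of shared system could encode tentativeness as a reusable state rather than one-off styling (the state names, labels, and icon names are illustrative assumptions, not Epic's actual tokens):

```typescript
// Illustrative mapping from AI-content state to a visual treatment. The
// actual cues and tokens in Epic's design system are not shown here.
type AiContentState = "ai-draft" | "user-edited" | "user-verified";

interface VisualTreatment {
  label: string;                       // short badge text shown with the field
  icon: string;                        // placeholder icon name
  emphasis: "tentative" | "neutral" | "confirmed";
}

const AI_CONTENT_TREATMENTS: Record<AiContentState, VisualTreatment> = {
  "ai-draft": {
    label: "AI draft: review before submitting",
    icon: "sparkle-outline",
    emphasis: "tentative",             // e.g. muted color, dashed border
  },
  "user-edited": {
    label: "Edited by you",
    icon: "pencil",
    emphasis: "neutral",
  },
  "user-verified": {
    label: "Reviewed",
    icon: "check",
    emphasis: "confirmed",
  },
};
```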

Testing with Users

Before piloting, we ran usability tests with a diverse group of users to understand:

  • Whether the AI-generated content was clear and actionable

  • If users trusted the suggestions—or felt the need to do their own research

  • How often they revised, accepted, or rejected the AI content

Rolling It Out

To support broader adoption, I worked with dev leadership and the internal design system team to:

  • Turn our patterns into reusable code (a conceptual component sketch follows this list)

  • Update Epic’s internal UX guide with practical guidance for AI-generated content

  • Add this component to our AI Figma library, so other designers could move faster and stay consistent
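
As a hint of what the reusable piece could look like in spirit, here is a generic React/TypeScript sketch of a wrapper that applies the shared cues to any form field holding AI-drafted content. It is illustrative only; Epic's actual components, props, and framework are not shown:

```tsx
import React from "react";

// Generic wrapper for a form field that holds AI-drafted content.
// A sketch only: Epic's real components, styling, and APIs differ.
type AiContentState = "ai-draft" | "user-edited" | "user-verified";

const BADGE_TEXT: Record<AiContentState, string> = {
  "ai-draft": "AI draft: review before submitting",
  "user-edited": "Edited by you",
  "user-verified": "Reviewed",
};

interface AiDraftFieldProps {
  state: AiContentState;
  sourceCount: number;               // how many chart sources back this answer
  onShowSources: () => void;         // opens the linked evidence
  children: React.ReactNode;         // the underlying editable input
}

export function AiDraftField({
  state,
  sourceCount,
  onShowSources,
  children,
}: AiDraftFieldProps) {
  return (
    <div className={`ai-field ai-field--${state}`}>
      <span className="ai-field__badge">{BADGE_TEXT[state]}</span>
      {children}
      <button type="button" onClick={onShowSources}>
        View {sourceCount} source{sourceCount === 1 ? "" : "s"}
      </button>
    </div>
  );
}
```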

Scaling to Full Release and Beyond

With strong results from our pilot, we’re now focused on two parallel efforts:

Finalizing the full release for medication prior authorizations

  • Refining interaction patterns based on real-world usage

  • Iterating on AI output using feedback from early adopters

  • Preparing for general release to ~250 customer organizations

Expanding to other prior auth types

  • Extending the LLM to procedure-based prior authorizations

  • Validating content needs and workflows for new question types

  • Ensuring the design system scales across varied clinical contexts