How AI Actually Grades You in a 2026 Interview (and the Myths to Ignore)
Half the prep content on the internet is based on how HireVue worked before 2021. This is the 2026 reality.
The headline correction
In 2026, the leading AI interview platforms (HireVue, Sapia) do not score your facial expressions. They score your language content and, on video platforms, your speech structure. Most online prep advice is based on pre-2021 behaviour that no longer applies.
Sources: HireVue's 2021 announcement that facial-expression scoring had been removed; the 2021 ORCAA audit of HireVue; Sapia.ai candidate-explainer documentation, April 2026.
How HireVue Actually Scores in 2026
HireVue evaluates four primary dimensions, scored against a Behaviorally Anchored Rating Scale (BARS) for each competency the employer is assessing:
1. Language content
The words you use, the keywords that appear in your answer, the completeness of your response against the competency rubric. If the BARS rubric for Problem Solving requires you to describe a specific obstacle, your answer needs to contain a specific obstacle. Generic language ("we evaluated options") scores lower than specific decisions ("we had to choose between extending the sprint or shipping known defects, and I recommended we ship with a documented workaround and remediate in the next cycle").
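The "completeness against the rubric" idea can be approximated as a self-check when rehearsing. This is a toy sketch, not any vendor's model: the function name and the idea of matching rubric terms by substring are mine, and real platforms use far richer NLP than keyword lookup.

```python
def rubric_coverage(answer: str, rubric_terms: list[str]) -> float:
    """Toy self-check: fraction of rubric competency terms present in an answer.

    Real scoring engines use semantic models, not literal keyword matching;
    this only helps you spot obviously missing rubric elements while rehearsing.
    """
    text = answer.lower()
    hits = [term for term in rubric_terms if term.lower() in text]
    return len(hits) / len(rubric_terms)
```

For example, checking a draft answer against hypothetical Problem Solving terms like "obstacle", "trade-off", and "outcome" quickly shows which rubric elements the draft never mentions at all.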
2. Speech structure
Sentence shape, discourse markers, coherence, and specificity. STAR-structured answers produce naturally higher speech structure scores because they follow a cause-effect-outcome narrative shape. Answers that jump around, repeat, or ramble produce lower structure scores.
3. Rubric match (BARS alignment)
Your answer is scored against explicit level descriptors for each competency. Level 1 means the competency behaviour is absent. Level 5 (exemplary) requires you to have described a specific situation, named your specific decision-making process, addressed trade-offs, and quantified the outcome. The AI maps your answer content against these descriptors.
4. Delivery (pacing and speaking rate)
The target speaking rate on most video platforms is 120-150 words per minute. Speaking too fast reads as nervous or rehearsed; speaking too slowly can depress the speech structure score. Filled pauses ("um", "uh") are noted but are not catastrophic in moderation. Strategic pauses (2-3 seconds of thinking before answering) are normal and not penalised.
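The pacing target above is easy to measure when rehearsing: transcribe (or count) your words and divide by the recording length. A minimal sketch; the 120-150 wpm band comes from the text, while the function names and feedback strings are mine:

```python
def words_per_minute(transcript: str, duration_seconds: float) -> float:
    """Speaking rate computed from a rehearsal transcript and its duration."""
    word_count = len(transcript.split())
    return word_count / (duration_seconds / 60)

def pacing_feedback(wpm: float, low: float = 120, high: float = 150) -> str:
    """Compare a measured rate against the 120-150 wpm band cited above."""
    if wpm < low:
        return "too slow"
    if wpm > high:
        return "too fast"
    return "on target"
```

A 90-second answer in the target band works out to roughly 180-225 words, which is a useful length check for drafted STAR answers.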
Tier Banding and Human Review
On most HireVue employer configurations, candidates are placed into tier bands (Top, Middle, Bottom) before human review. Recruiters see the tier band and scorecard alongside the video. In many configurations, Bottom-tier candidates are filtered out before human review. You will typically not be told which tier you received.
Some employers use the AI score as a recommendation only and review all candidates manually. Others use auto-rejection below a threshold. You usually cannot determine which mode your employer uses from the invite email.
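The banding-plus-filtering flow described above can be sketched in a few lines. The cutoff values here are invented purely for illustration; real thresholds are set per employer configuration and are not published:

```python
def tier_band(score: float, top_cut: float = 75, bottom_cut: float = 40) -> str:
    """Illustrative Top/Middle/Bottom banding; cutoffs are hypothetical."""
    if score >= top_cut:
        return "Top"
    if score >= bottom_cut:
        return "Middle"
    return "Bottom"

def visible_to_reviewer(score: float, auto_filter_bottom: bool) -> bool:
    """Some configurations drop Bottom-tier candidates before human review."""
    return not (auto_filter_bottom and tier_band(score) == "Bottom")
```

The `auto_filter_bottom` flag mirrors the two employer modes in the text: recommendation-only review versus auto-rejection below a threshold.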
How Sapia.ai Scores (Text Only)
Sapia is categorically different from video platforms. There is no video, no audio processing, no facial analysis. Sapia evaluates your written text responses using Natural Language Processing.
What Sapia scores
- Sentence structure and coherence
- Word choice and linguistic markers
- Behavioural content specific to each competency
- Response completeness
- Specificity of examples provided
What Sapia does NOT score
- Video or visual signals (none captured)
- Audio or voice quality (none captured)
- Gender, age, race (explicitly excluded)
- Typing speed or corrections (not scored)
- AI-generated content (detected and flagged, not scored)
Sapia explicitly scans for AI-generated content using linguistic fingerprinting and anti-plagiarism techniques. Their candidate documentation states this directly. Submitting ChatGPT-written responses to Sapia is detectable.
BARS Rubrics Explained for Candidates
Behaviorally Anchored Rating Scales are the scoring framework most AI interview platforms use. Understanding them changes how you approach every answer.
| Level | What it means | Example anchor for “Problem Solving” |
|---|---|---|
| 1 - Absent | The competency is not demonstrated in the answer | No specific problem described. Generic statements only. |
| 2 - Below standard | The competency is implied but not shown behaviourally | Mentions a challenge but no specific actions or decisions described. |
| 3 - Meets standard | The competency is demonstrated with a specific example | Describes a specific problem and what they did to address it. No outcome quantified. |
| 4 - Exceeds standard | Specific example, decision-making shown, outcome described | Names the problem, explains why the chosen approach was selected over alternatives, quantifies result. |
| 5 - Exemplary | Full STAR structure, trade-offs addressed, impact quantified | Specific situation with context, explicit task and constraints, decision chain with trade-offs, quantified business outcome. |
Level 4 and 5 answers require: a named specific situation (company, role, or context), a clear task or problem with constraints, your specific actions and decision-making process (including what alternatives you considered and why you rejected them), and a quantified result. “We improved performance” is a level 2-3 anchor. “We reduced median API response time from 340ms to 190ms over four sprints by switching from synchronous database calls to an event queue” is a level 4-5 anchor.
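The level descriptors in the table above amount to a checklist that each anchor adds to. This toy mapping makes that cumulative structure explicit; the feature names are mine, and real BARS scoring judges these qualities from language, not from boolean flags:

```python
def bars_level(answer_features: set[str]) -> int:
    """Toy mapping of answer features onto the 1-5 anchors in the table above.

    Each level requires everything the level below it requires, plus one more
    element - which is why generic answers cap out early.
    """
    if "specific_situation" not in answer_features:
        return 1  # no specific problem described; generic statements only
    if "specific_actions" not in answer_features:
        return 2  # challenge mentioned, but no concrete actions or decisions
    if "quantified_outcome" not in answer_features:
        return 3  # specific problem and actions, but no measured result
    if "tradeoffs_addressed" not in answer_features:
        return 4  # quantified result, but alternatives/trade-offs not covered
    return 5      # full STAR with trade-offs and quantified impact
```

The cumulative structure is the practical takeaway: a vivid situation with no quantified outcome cannot score above 3, no matter how well told.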
What AI Does NOT Score on Leading 2026 Platforms
Facial expressions
HireVue removed facial expression scoring in 2021. The AI does not score your expressions. The human reviewer watching the video afterwards may notice them.
Eye contact
No leading 2026 AI platform reliably measures eye contact duration. The AI cannot distinguish between looking at the camera and looking at a script. The human reviewer can.
Attractiveness or appearance
Not scored. Not legally permissible as a scoring variable in jurisdictions with AEDT (automated employment decision tool) laws.
Clothing or background
Not scored by the AI. A messy background may distract the human reviewer who watches after.
Accent (direct)
Not directly scored as a variable. Indirect effects are possible through how AI interprets pacing, sentence structure patterns, and language complexity in candidates with non-standard accents - a known limitation that bias audits are supposed to detect.
Honest acknowledgment: where bias still exists
Training data bias persists in all AI scoring systems. Non-native English speakers, candidates with stutters or atypical speech patterns, autistic candidates whose response structure differs from neurotypical norms, and ADHD candidates whose response length varies significantly - all face measurable disadvantage in some platforms even where no intentional bias exists.
The 2025 Benson et al. study (Wiley International Journal of Selection and Assessment) found that autistic candidates’ response patterns on algorithmically scored AVIs differ measurably from neurotypical responses in ways that affect scores. This is addressed in full on the neurodiversity page, including accommodation requests.
How to Find Out How Your Interview Is Scored
Under New York City's Local Law 144, the employer must publish a bias audit summary on their website that discloses the job qualifications and characteristics their AEDT analyses. Ask for the URL. If they cannot provide it, they may be out of compliance.
Under the Illinois Artificial Intelligence Video Interview Act (AIVIA), the employer must disclose specifically what characteristics their AI evaluates (facial expressions, word choice, tone, etc.) in a standalone notice before you start. If they have not provided this, they are non-compliant.
Under the EU AI Act high-risk provisions, you have the right to meaningful information about automated decisions affecting you. Request an explanation.