Product Overview
The complete ATC candidate assessment pipeline.
From a candidate's first login to a hiring panel's final decision — Falcon handles every step with rigorous, reproducible data.
Platform Components
Study-level terminal area radar control, built in Unity and served to the browser via WebGL. No downloads, no plugins — candidates access their simulation session from any modern browser.
The environment models realistic aircraft flight dynamics, procedural clearances, and ATIS broadcasts. Traffic complexity scales with each successive run, allowing the platform to stress-test candidate performance under increasing workload.
Candidates interact using voice commands through a standard USB or Bluetooth headset — the same hardware they'd use in a real ATC environment.
Every session is scored automatically by the Separation Integrity Engine (SIE) in real time. The SIE detects >99% of separation violations using predicted intercept vectors — the same principle underlying real STCA systems — and classifies each event by severity:
- Separation Loss — actual radar or vertical separation infringement (−20 pts)
- Critical — clearance that would directly cause a separation loss (−10 pts)
- Significant — procedural error with safety implications (−5 pts)
- Minor — non-standard practice or communication error (−2 pts)
Run scores are weighted by session difficulty — later, more complex runs contribute more to the final simulator score.
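The scoring model above can be sketched in a few lines. The penalty values come from the severity list; the base score of 100 and the linear difficulty weights are illustrative assumptions, not the platform's actual formula.

```python
# Illustrative sketch of SIE run scoring. Penalties match the severity
# table above; the base score and linear run weights are assumptions.

PENALTIES = {
    "separation_loss": -20,
    "critical": -10,
    "significant": -5,
    "minor": -2,
}

def run_score(events, base=100):
    """Score one run: start from a base score and apply each event's penalty."""
    return base + sum(PENALTIES[e] for e in events)

def final_simulator_score(runs):
    """Weighted mean of run scores; later (more complex) runs weigh more."""
    weights = range(1, len(runs) + 1)  # assumed: weight grows with run index
    return sum(w * run_score(ev) for w, ev in zip(weights, runs)) / sum(weights)

runs = [
    ["minor"],                 # run 1: one communication error
    ["significant", "minor"],  # run 2: procedural error plus a minor event
    ["critical"],              # run 3: heaviest weight in the final score
]
final = final_simulator_score(runs)
```

Under this assumed weighting, the clean later runs dominate: the third run counts three times as much as the first.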
Real speech. Real scoring. The voice layer uses speech-to-text transcription followed by an LLM-based inference model to parse candidate transmissions, correct transcription noise, and evaluate:
- Adherence to ICAO standard phraseology
- Rate of speech (words per minute)
- Readback error detection — the simulator plants deliberate readback mistakes to test vigilance
- Transmission redundancy — issuing commands to aircraft already executing that instruction
The model handles non-standard phraseology without penalising candidates for accents or minor linguistic variation — only for procedurally incorrect transmissions.
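Two of the listed metrics, rate of speech and transmission redundancy, reduce to simple checks once a transmission has been transcribed and parsed. A minimal sketch, with assumed function names and data shapes (the real pipeline runs STT and LLM parsing first):

```python
# Hypothetical post-parsing checks; transcript format and the
# (callsign, instruction, value) command tuple are assumptions.

def words_per_minute(transcript: str, duration_s: float) -> float:
    """Rate of speech for a single transmission."""
    return len(transcript.split()) / (duration_s / 60.0)

def is_redundant(command: tuple, active_instructions: set) -> bool:
    """True if the aircraft is already executing this instruction."""
    return command in active_instructions

active = {("BAW123", "descend", 5000)}
wpm = words_per_minute("BAW123 descend altitude five thousand feet", 6.0)
repeat = is_redundant(("BAW123", "descend", 5000), active)  # flagged as redundant
```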
Knowledge assessment alongside practical performance. Each candidate session includes a structured learning module with knowledge-check questions covering terminal procedures, separation standards, emergency handling, and coordination protocols.
Results are presented as a grid of correct/incorrect answers with expandable detail views, giving evaluators insight into theoretical knowledge gaps that may explain observed simulator performance.
Candidate performance is decomposed into 15+ distinct skill categories across five domains:
- Planning — number of vectors, track miles flown, transmission efficiency
- Managing Multitasking — proper prioritization, flight strip currency
- Communication — phraseology adherence, readback monitoring, speech rate, repeated commands
- Situational Awareness — conflict anticipation, aircraft position awareness
- Separation — horizontal, vertical, and wake-turbulence standards
Each subcategory is graded A through F, aggregated from all simulation runs. Grades are weighted by run importance.
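A hedged sketch of how a per-skill grade might be aggregated across runs; the percentage-to-letter cutoffs and the run weights below are assumptions for illustration, not the platform's published rubric.

```python
# Assumed A-F cutoffs and importance weights, for illustration only.

def letter_grade(pct: float) -> str:
    """Map a percentage score to a letter grade (assumed cutoffs)."""
    for cutoff, letter in [(90, "A"), (80, "B"), (70, "C"), (60, "D"), (50, "E")]:
        if pct >= cutoff:
            return letter
    return "F"

def aggregate_grade(run_scores, run_weights):
    """Weighted mean of per-run percentages for one skill subcategory."""
    avg = sum(s * w for s, w in zip(run_scores, run_weights)) / sum(run_weights)
    return letter_grade(avg)

# Readback monitoring across three runs, later runs weighted more heavily:
grade = aggregate_grade([95, 82, 74], [1, 2, 3])
```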
AI-generated, performance-tailored interview questions. Based on a candidate's specific skill grades and error patterns, the platform generates targeted interview questions for each area of concern — so hiring panels can probe whether a gap reflects a knowledge problem, a stress response, or a deeper aptitude issue.
Each question comes with a suggested ideal answer aligned to the specific skill being assessed, allowing non-specialist panel members to evaluate responses effectively.
Who Uses Falcon
Built for ANSPs and training organisations.
Falcon is designed for Air Navigation Service Providers, ATC training academies, and defence organisations that need to screen large candidate pools efficiently and objectively.
The platform is configurable to local airspace rules, separation standards, and phraseology — so assessment criteria match the operational environment candidates will actually work in.
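As an illustration of that configurability, a facility-local profile might carry values like the following; every key and value here is hypothetical, not Falcon's actual configuration schema.

```python
# Hypothetical facility configuration; keys and values are illustrative.

FACILITY_CONFIG = {
    "separation": {
        "horizontal_nm": 3.0,       # terminal radar separation minimum
        "vertical_ft": 1000,
        "wake_turbulence": "ICAO",  # wake scheme in force locally
    },
    "phraseology": "ICAO",          # standard the SIE scores against
    "altimetry": "QNH_hPa",         # local altimeter-setting convention
}
```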
Typical use cases