BETA
In aviation, “almost right” isn’t enough.

We collect real-world ATC audio
We collect a large volume of real-flight ATC audio, both from pilots using PilotGPT and from public aviation sources. That means busy frequencies, regional accents and phrasing, stepped-on or partial calls, and real-world radio noise and distortion.
The more edge cases we capture, the better PilotGPT becomes at handling them.
Human Verification, Always
Before any audio is used for training, it is reviewed by human experts.
For each ATC segment, reviewers listen to the original audio, verify the full transcription, correct callsigns, numbers, clearances, and phraseology, and resolve ambiguous or overlapping transmissions.
Only human-validated transcriptions make it into training.
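To make the gate concrete, here is a minimal sketch of what "only human-validated transcriptions make it into training" could look like in code. All field names and the filtering logic are illustrative assumptions, not PilotGPT's actual schema or pipeline.

```python
# Hypothetical sketch of a verified ATC segment record; field names are
# assumptions for illustration only.
from dataclasses import dataclass


@dataclass
class ATCSegment:
    audio_path: str          # path to the raw audio clip
    transcription: str       # full text: callsigns, numbers, clearances
    reviewed_by_human: bool  # set only after a reviewer verifies the audio
    ambiguous: bool          # overlapping/stepped-on call not yet resolved


def training_ready(segments):
    """Keep only segments a human has verified and fully resolved."""
    return [s for s in segments if s.reviewed_by_human and not s.ambiguous]


segments = [
    ATCSegment("a1.wav", "N123AB cleared for takeoff runway 27", True, False),
    ATCSegment("a2.wav", "(stepped-on, partial call)", False, True),
]
ready = training_ready(segments)  # only the human-verified segment remains
```

The key design point is that verification is a hard gate, not a score: an unreviewed or still-ambiguous segment never reaches the training set.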
Train, Optimize, Repeat
We train and refine PilotGPT on these human-verified ATC segments. This teaches the model to handle even the toughest edge cases and moves us steadily toward experienced-pilot-level accuracy.
Each training cycle makes PilotGPT more accurate for everyone.
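The collect-verify-train cycle described above can be sketched as a simple loop. Everything here is a toy stand-in under stated assumptions: the `train`, `evaluate`, and `collect_hard_cases` functions and the accuracy curve are hypothetical, not PilotGPT internals.

```python
# Illustrative train-evaluate-expand loop; names and numbers are assumptions.
def training_cycle(dataset, train, evaluate, collect_hard_cases, target=0.95):
    """Repeat: train on verified data, measure accuracy, add new edge cases."""
    accuracy = 0.0
    history = []
    while accuracy < target:
        model = train(dataset)
        accuracy = evaluate(model)
        history.append(accuracy)
        # Expand the dataset with newly verified hard cases for the next cycle.
        dataset = dataset + collect_hard_cases(model)
    return history


# Toy stand-ins so the loop runs end to end.
def train(dataset):
    return {"size": len(dataset)}  # a "model" that only remembers data size


def evaluate(model):
    # Toy curve: accuracy grows with dataset size, capped below 1.0.
    return min(0.99, 0.5 + 0.05 * model["size"])


def collect_hard_cases(model):
    return ["newly verified edge case"]  # one new segment per cycle


history = training_cycle(["seed clip"] * 2, train, evaluate, collect_hard_cases)
```

The sketch captures the claim in the tagline: each pass through the loop both retrains the model and grows the verified dataset, so accuracy ratchets upward cycle over cycle.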
Custom Model
January 2026
~60% Accuracy
We introduced our own model, trained on an initial dataset of real ATC audio. This significantly improved recognition of aviation phraseology, callsigns, and controller cadence. Accuracy increased, but edge cases remained common in complex environments.
Operational Reliability
End of 2026
~95% Accuracy
Through continued dataset expansion and model optimization, PilotGPT achieves consistent performance across most ATC edge cases, reaching pilot-level transcription accuracy.