
Predictive TCAS & ACAS Xu: DO-178C Certification Challenges
Executive Summary
The Traffic Alert and Collision Avoidance System (TCAS) has been a cornerstone of air safety for decades, reducing collision risk by an estimated factor of five in Europe (Source: www.eurocontrol.int). TCAS II (ACAS II) uses legacy rule-based logic and altitude interrogation to issue vertical Resolution Advisories (RAs) based on deterministic criteria (time to closest approach, projected miss distance). In recent years, a next-generation suite, ACAS X, has been under development. ACAS X replaces hard-coded pseudocode with optimized logic derived via dynamic programming on a probabilistic encounter model [1] [2]. Notably, an unmanned variant, ACAS XU, has been specified in RTCA DO-386 and EUROCAE ED-275 (2020), designed for UAS and extended collision avoidance regimes (Source: skybrary.aero) [3]. ACAS X’s probabilistic model drives a large lookup table of advisories, allowing more flexible, multi-dimensional RAs (including horizontal maneuvers) and “remain well-clear” guidance that anticipate threats beyond immediate CPA.
However, ACAS X’s advanced algorithms present certification challenges. By modeling uncertainties (sensor noise, pilot response, etc.), RA outcomes become inherently non-deterministic (Source: research.tudelft.nl) [4], conflicting with DO-178C’s requirement for fully predictable software behavior. Traditional DO-178C certification relies on exhaustive test coverage under deterministic assumptions. In contrast, ACAS X logic often leverages stochastic methods and (in some research) learned policies [5], making traceability and repeatable verification difficult. Regulatory agencies (FAA, EASA) and researchers recognize this “certification headache.” EASA’s CoDANN study explicitly notes that neural-network–based components “unlike other avionics software, have fundamentally non-deterministic qualities” (sic) [6]. NASA likewise observes “fundamental incompatibilities between traditional design assurance approaches and certain aspects of ML-based systems” [7].
This report provides an in-depth review of predictive, probabilistic collision avoidance (especially ACAS XU), examines case studies and simulation evidence, and analyzes the difficulty of certifying such systems under DO-178C/ED-12C. It covers historical context, technical principles, multiple perspectives, data-driven analyses, and future directions. All claims are supported by extensive citations to academic, government, and industry sources.
Introduction and Background
Traffic Collision Avoidance Systems (TCAS) originated in the 1960s and 70s to prevent mid-air collisions following several fatal incidents. The current TCAS II (ACAS II) was certified in the 1990s and mandated on large transport aircraft (e.g. via U.S. law in 1990 and EU Regulation 1332/2011 from 2015) (Source: www.eurocontrol.int) (Source: www.eurocontrol.int). TCAS II relies on active transponder interrogations (Mode C/S) and a set of fixed rules and thresholds to issue vertical Resolution Advisories (e.g. “Climb”, “Descend”, or “Monitor Vertical Rate”) to pilots. Pilots are trained to respond promptly to these advisories, which are known to significantly reduce mid-air collision risk (Source: www.eurocontrol.int). However, TCAS logic is entirely reactive: it alerts only when an intruder enters predefined proximity boxes (range/altitude) and generates RAs based on time-critical conflict geometry.
Since the 2000s, air traffic complexity and new use cases (Unmanned Aircraft Systems, closely spaced operations, new sensor fusion) have driven research into next-generation collision avoidance. This led to the ACAS X family (standardized as RTCA DO-385 / EUROCAE ED-256 for ACAS XA in 2018, and RTCA DO-386 / EUROCAE ED-275 for ACAS XU in 2020) (Source: skybrary.aero) (Source: skybrary.aero). ACAS X leverages modern computing: online or offline solutions to a Markov Decision Process (MDP) generate an optimal resolution policy tailored to statistical traffic models and performance metrics [2] (Source: skybrary.aero). In short, ACAS X moves from hand-coded heuristics to formal optimization, aiming for fewer false alarms and better adaptability to new traffic scenarios [1] [2]. These improvements are well-documented: MIT Lincoln Lab showed that such derived logic “significantly outperforms TCAS” under standard safety and operational metrics [2].
Meanwhile, software certification standards have evolved in parallel. The airborne software industry follows RTCA/DO-178C (with EUROCAE ED-12C) for design assurance. DO-178C (2012) codified processes (requirements, design, code, test with structural coverage, etc.) for avionics software, emphasizing deterministic, verifiable behavior. Since its release, DO-178C has been fully adopted and supplemented (DO-333 for formal methods, DO-331/DO-332 for model-based and object-oriented development). A conflict follows: ACAS X’s use of probabilistic models and large policy tables is at odds with DO-178C’s assumptions of deterministic execution. This tension drives much of the present “certification headache.”
This report examines “Predictive TCAS/ACAS-Xu” — the shift from reactive collision avoidance to probabilistic, predictive conflict resolution — and the resulting challenges certifying such systems under DO-178C. We begin with TCAS/ACAS fundamentals, then detail ACAS XU and related predictive methods. We survey relevant data and case studies (simulation and flight test), then analyze DO-178C requirements versus the reality of advanced ACAS logic. We close with multi-perspective discussion and future directions.
Reactive vs. Predictive Collision Avoidance
Traditional (Reactive) TCAS II
TCAS II serves as an onboard safety net for mid-air collision avoidance. It continuously interrogates nearby transponders, builds tracks of intruder positions, and issues Traffic Advisories (TA) and Resolution Advisories (RA) based on current geometry (Source: www.eurocontrol.int). The logic is rule-based: for example, if an intruder’s predicted closest-point-of-approach (CPA) occurs within a threshold time and altitude, an RA is triggered. This logic was hand-coded in pseudocode through many refinements, with tens of heuristic rules interacting in complex ways [8] [9].
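The reactive rule can be sketched in a few lines. The thresholds below are illustrative placeholders, not actual TCAS II sensitivity-level values:

```python
def tcas_like_ra_trigger(range_m: float, closure_rate_mps: float,
                         rel_alt_m: float,
                         tau_threshold_s: float = 25.0,
                         alt_threshold_m: float = 180.0) -> bool:
    """Reactive trigger: alert when the time to closest approach
    (tau = range / closure rate) and the current vertical separation
    both fall below fixed thresholds."""
    if closure_rate_mps <= 0.0:   # diverging traffic: no alert
        return False
    tau_s = range_m / closure_rate_mps
    return tau_s < tau_threshold_s and abs(rel_alt_m) < alt_threshold_m
```

With these made-up thresholds, an intruder 4 km away closing at 200 m/s (tau = 20 s) with 100 m of vertical separation triggers an alert, while the same intruder at 9 km (tau = 45 s) does not — the system reacts only once the geometry crosses the line.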
Historically, TCAS has provided only vertical resolution instructions (climb or descend) because collision geometry is dominated by vertical closure in many encounters. Coordination ensures that two TCAS-equipped aircraft will not both climb or both descend simultaneously, reducing the chance of a vertical overlap error (Source: www.eurocontrol.int). These advisories must be displayed and annunciated to the pilots in a standard way, and pilots are trained to follow them exactly. TCAS II logic has proven extremely effective in operational service – Eurocontrol reports confirm that following RAs “significantly reduce[s] the risk of mid-air collision” (Source: www.eurocontrol.int). Post-implementation studies estimate at least a fivefold reduction in collision risk when TCAS II is used compared to no system (Source: skybrary.aero) (Source: www.eurocontrol.int).
However, TCAS II also has well-known limitations:
- False or nuisance alerts: Conservative logic often triggers RAs that, in hindsight, may be unnecessary (e.g. pilot sees intruder visually and no action needed). These can induce abrupt maneuvers or reversals.
- Limited lookahead: TCAS typically assesses a fixed lookahead window (e.g. roughly 20-35 seconds). It cannot anticipate complex future intent beyond straightforward extrapolation.
- Single-intruder focus: TCAS II resolves one threat at a time; interactions with multiple intruders can cause coordination failures.
- Vertical-only RAs: There is limited use of horizontal deviations in current certification.
These shortcomings constrain effectiveness, especially in dense or novel traffic environments. Modifying TCAS II logic directly is difficult due to its complexity and DO-178C-certified nature [8] [9]. In effect, TCAS II must be reactive, raising alerts only after real-time intruder data exceeds thresholds. This leads to the interest in predictive and probabilistic methods, which we now discuss.
Probabilistic Conflict Detection
Early research recognized that uncertainty pervades collision avoidance. Aircraft positions and future trajectories may have stochastic variations (e.g. wind, pilot maneuvers). Prandini et al. (2000) proposed a probabilistic conflict detection framework: they developed models to predict aircraft positions in the near and mid-term, and defined the instantaneous probability of conflict as a safety metric (Source: old.control.ee.ethz.ch). Their approach estimated this probability via randomized (Monte Carlo) algorithms with error bounds (Source: old.control.ee.ethz.ch). They further showed that decentralised resolution algorithms could generalize path-planning methods to probabilistic environments (Source: old.control.ee.ethz.ch). In other words, they introduced quantified risk measures (likelihood of violation) instead of binary alarm triggers.
This line of work emphasizes predictive awareness: rather than waiting until a deterministic threshold is crossed, a system continuously estimates the probability distribution of future relative positions and considers the chance of breaching a well-defined safety boundary (e.g. the “well-clear” volume) (Source: old.control.ee.ethz.ch) [10]. Such predictive warnings can allow earlier, less drastic maneuvers and offer pilots a spectrum of guidance (like “adjust heading X degrees”) instead of only vertical RAs.
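A minimal Monte Carlo estimator in the spirit of Prandini et al.’s randomized algorithms might look as follows. The Gaussian velocity perturbation and all numeric values are assumptions for illustration, not the published models:

```python
import random

def conflict_probability(own_pos, own_vel, intr_pos, intr_vel,
                         sigma_mps=1.0, horizon_s=60.0, dt_s=1.0,
                         sep_m=1500.0, n_samples=2000, seed=0):
    """Monte Carlo estimate of the probability of conflict: sample
    Gaussian perturbations of the intruder's velocity, propagate both
    straight-line trajectories over the horizon, and count runs in
    which horizontal separation drops below sep_m."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        # perturbed intruder velocity models trajectory uncertainty
        vx = intr_vel[0] + rng.gauss(0.0, sigma_mps)
        vy = intr_vel[1] + rng.gauss(0.0, sigma_mps)
        t = 0.0
        while t <= horizon_s:
            dx = intr_pos[0] + vx * t - (own_pos[0] + own_vel[0] * t)
            dy = intr_pos[1] + vy * t - (own_pos[1] + own_vel[1] * t)
            if dx * dx + dy * dy < sep_m * sep_m:
                hits += 1
                break
            t += dt_s
    return hits / n_samples
```

A head-on geometry scores a probability near one while laterally offset traffic scores near zero; thresholding this continuous probability, rather than a single deterministic tau, is what makes the alert predictive and graded.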
Recent implementations build on this concept. NASA’s DAIDALUS (Detect and AvoID Alerting Logic for Unmanned Systems) explicitly defines a mathematical well-clear boundary around each aircraft. Its detection logic computes a time interval until predicted well-clear violation, assuming straight (non-maneuvering) flight [10]. DAIDALUS then computes “alerting bands” of maneuvers that would maintain well-clear [10]. The result is a more gradual alerting scheme: DAIDALUS can issue a “Remain Well Clear” advisory well before collision and guide an ownship to regain separation. This is a clear example of predictive, quantitative conflict detection. DAIDALUS also interposes an alerting logic assigning a threat level (e.g. Surveillance Not Available, Well Clear, Urgent) based on the calculated time to violation [10].
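The straight-line prediction step can be illustrated by solving |p + vt| = R for the relative horizontal state. This is a simplification of DAIDALUS, whose actual logic also handles vertical criteria and quantified well-clear definitions:

```python
import math

def time_to_wellclear_violation(rel_pos, rel_vel, radius_m=2000.0):
    """Solve |p + v t| = R for straight-line relative motion and return
    the predicted interval (t_in, t_out) of well-clear violation, or
    None if horizontal separation never drops below radius_m."""
    px, py = rel_pos
    vx, vy = rel_vel
    a = vx * vx + vy * vy
    b = 2.0 * (px * vx + py * vy)
    c = px * px + py * py - radius_m * radius_m
    if a == 0.0:                      # no relative motion
        return (0.0, math.inf) if c < 0.0 else None
    disc = b * b - 4.0 * a * c
    if disc < 0.0:                    # trajectories stay well clear
        return None
    root = math.sqrt(disc)
    t_in = (-b - root) / (2.0 * a)
    t_out = (-b + root) / (2.0 * a)
    if t_out < 0.0:                   # violation entirely in the past
        return None
    return (max(t_in, 0.0), t_out)
```

For an intruder 10 km ahead closing at 200 m/s with a 2 km well-clear radius, the function predicts violation between t = 40 s and t = 60 s — a time interval an alerting scheme can compare against its band thresholds well before any last-second maneuver is needed.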
Importantly, DAIDALUS (and its kin) has been formally specified and verified using the PVS theorem prover to ensure correctness [11]. NASA’s approach shows one way to reconcile advanced logic with rigor: by carving out well-defined math and using proof tools. (ACAS X does not use PVS, but similarly relies on model-based safety definitions and offline computation.)
The shift to probabilistic, predictive conflict detection is thus well-supported by research. It yields measured metrics like probability of conflict, whose optimization can guide RA logic (Source: old.control.ee.ethz.ch) [11]. It also suggests new metrics for testing: for example, Monte Carlo evaluations that include sensor error and pilot response variability give a more realistic view of residual collision risk than fixed-run deterministic tests (Source: reports.nlr.nl) (Source: research.tudelft.nl).
ACAS X: From Rules to Optimization
The ACAS X family embodies this new approach. Instead of hand-tuned rules, ACAS X logic is derived offline by solving an MDP: the state includes ownship/intruder relative geometry and encounter intent; actions include “clear-of-conflict” or maneuvers; and rewards encode safety (avoid loss-of-separation) and operational costs (excessive alerts). Dynamic programming computes an optimal policy that maximizes expected rewards under a stochastic model of future flight evolutions [2] (Source: skybrary.aero).
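A toy value-iteration sketch conveys the idea; the real ACAS X model has a far richer state space, reward structure, and dynamics, and every number here is invented for illustration:

```python
# Toy MDP (not the ACAS X model): state = relative-altitude bin,
# actions = {COC, CLIMB, DESCEND}, stochastic intruder altitude drift,
# a large NMAC-like penalty at co-altitude and a small alerting cost.

BINS = list(range(-5, 6))              # relative-altitude bins (x100 ft)
ACTIONS = {"COC": 0, "CLIMB": 1, "DESCEND": -1}
P_DRIFT = {-1: 0.2, 0: 0.6, 1: 0.2}   # intruder drift distribution

def reward(h, action):
    r = -100.0 if h == 0 else 0.0      # near-collision penalty
    if action != "COC":
        r -= 1.0                        # operational cost of alerting
    return r

def step(h, action, drift):
    return max(-5, min(5, h + ACTIONS[action] - drift))

def q_value(V, h, a, gamma):
    return sum(p * (reward(h, a) + gamma * V[step(h, a, d)])
               for d, p in P_DRIFT.items())

def solve(gamma=0.95, iters=200):
    V = {h: 0.0 for h in BINS}
    for _ in range(iters):              # Bellman backups
        V = {h: max(q_value(V, h, a, gamma) for a in ACTIONS)
             for h in BINS}
    # the greedy policy is the onboard "lookup table"
    return {h: max(ACTIONS, key=lambda a: q_value(V, h, a, gamma))
            for h in BINS}

policy = solve()
```

Even in this toy, the derived table commands an avoidance maneuver at co-altitude; the advisory stored in each bin emerges from optimization over the stochastic drift model rather than from hand-written rules.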
Crucially, ACAS X uses a numeric lookup table (policy table) keyed by aircraft state to decide advisories in real time (Source: skybrary.aero). This table can encode not just vertical climb/descend but also “do nothing” and (for UAS variants) horizontal or combined maneuvers. Because it is optimized over future encounter distributions, ACAS X inherently anticipates potential conflicts and can issue more nuanced guidance earlier, reducing unnecessary alerts. Simulations have demonstrated that ACAS X logic achieves safety benchmarks (miss distance, alert rates) that match or exceed TCAS [1] [2].
Table 1 below summarizes key differences between legacy TCAS II and ACAS X systems:
| Aspect | TCAS II (ACAS II) | ACAS X (ACAS XA / ACAS XU) |
|---|---|---|
| Design logic | Hand-coded pseudocode with heuristic if-then rules [8] | Automatically optimized via dynamic programming on a probabilistic airspace model [2] (Source: skybrary.aero) |
| Alert basis | Deterministic thresholds (time-to-CPA, fixed alert levels) (Source: www.eurocontrol.int) | Probabilistic future state estimation; alerts based on expected safety metrics (Source: skybrary.aero) |
| Advisory types | Vertical-only RAs (climb/descend) coordinated with intruder (Source: www.eurocontrol.int) | Vertical and horizontal maneuvers (in UAS variant) plus “remain/regain well-clear” guidance (Source: skybrary.aero) [10] |
| Traffic alert | On/off TA/RA logic with fixed annunciations | Multi-level alerts (e.g. “Remain Well Clear” before RA) tailored over longer lookahead |
| Development standard | DO-185B MOPS (older TCAS spec) | DO-385/ED-256 MOPS for ACAS XA (2018); DO-386/ED-275 for ACAS XU (2020) (Source: skybrary.aero) (Source: skybrary.aero) |
| S/W approach | Deterministic code, certification via traditional DO-178C | “Non-deterministic” offline training (MDP) → static policy table; certification via analysis |
| Performance | Proven risk reduction ~5× (Eurocontrol data) (Source: www.eurocontrol.int) | Simulations show similar or better avoidance with fewer nuisance alerts [1] [2] |
Table 1: Comparison of collision avoidance systems. Sources: Skybrary, MIT Lincoln Lab, Eurocontrol (Source: www.eurocontrol.int) [2] (Source: skybrary.aero).
Row highlights: TCAS II is purely reactive and rule-based, whereas ACAS X uses an optimization-based policy with built-in prediction of intruder motion. The table also notes that the publication of the ACAS X standards (DO-385/ED-256, DO-386/ED-275) has codified these newer concepts. In summary, ACAS X represents a “predictive” conflict resolution methodology: it looks ahead probabilistically rather than only reacting to immediate thresholds.
ACAS XU and UAS Detect-and-Avoid
ACAS XU Concept and Standards
The RPAS (remotely piloted) or UAS variant of ACAS X is called ACAS XU. It is specifically designed for unmanned fixed-wing aircraft, which often lack human see-and-avoid capability. Key differences from ACAS XA (civil, TCAS-like) include support for various surveillance sources (e.g. ADS-B, radar, communications) and horizontal resolution maneuvers, since some UAS may have greater lateral freedom. ACAS XU can also fuse multi-sensor data (not only transponder interrogations) to track non-cooperative targets.
The minimum performance standards for ACAS XU are laid out in RTCA DO-386 and EUROCAE ED-275 (Volumes I and II) published in September 2020 (Source: skybrary.aero). ED-275 Volume I provides operational requirements for ACAS XU (target platforms, alerting thresholds) whereas Volume II (Algorithm Design Description) defines the Surveillance & Tracking Module (STM) and Threat Resolution Module (TRM) [3]. These documents explicitly state that ACAS XU is “designed for platforms with a wide range of surveillance technologies and performance characteristics” (e.g. UAS) [3].
In practice, ACAS XU inherits the core logic of ACAS X (MDP-derived) but extends it. For example, horizontal “Clear Of Conflict” (COC), “Weak Left/Right”, and “Strong Left/Right” advisories are added. Moreover, ACAS XU includes a “Remain Well Clear” guidance level analogous to DAIDALUS, giving pilots (or vehicle autopilots) a chance to maintain separation without a full RA. If a threat escalates, it will then issue stronger corrective RAs. Surveillance integration and tracking might involve filtering to estimate both ownship and intruder states for the MDP.
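The escalation ladder described above can be pictured as a simple mapping from predicted time-to-violation to advisory level. The thresholds below are hypothetical, not values from DO-386/ED-275:

```python
# Hypothetical escalation ladder; the 90 s / 40 s thresholds are
# illustrative only and do not come from DO-386 / ED-275.

def advisory_level(time_to_violation_s):
    """Map a predicted time until well-clear violation (None when no
    violation is predicted) to a coarse advisory level."""
    if time_to_violation_s is None or time_to_violation_s > 90.0:
        return "CLEAR_OF_CONFLICT"
    if time_to_violation_s > 40.0:
        return "REMAIN_WELL_CLEAR"    # early self-separation guidance
    return "CORRECTIVE_RA"            # e.g. Strong Left/Right, Climb
```

The point of the ladder is graceful escalation: the remote pilot first gets gentle self-separation guidance and only receives a corrective RA if the predicted time to violation keeps shrinking.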
UAS Self-Separation Flight Tests
NASA Ames and Dryden (now Armstrong) conducted flight test campaigns evaluating ACAS XU in 2014–2015. One report [12] documents “ACAS-Xu Initial Self-Separation Flight Tests” (Edwards AFB, Nov–Dec 2014). These tests flew typical transports (“intruders”) and UAS (as ownships) to measure performance. While detailed results are in the report, the mere fact of successful flight trials indicates feasibility: ACAS XU advisories were generated and executed in real time on unmanned surrogate aircraft. Lessons learned included handling of UAV dynamics, communication delays, and the acceptability of novel advisories. (FAASTeam magazine notes that flight crews then felt confident to approve ACAS equipment for UAS after these tests.)
A related study by Stroeve et al. (2023) modeled remote pilot behavior for ACAS XU encounters [13]. They introduced a detailed pilot-response model (delays, awareness, control modes) and integrated it into simulations. Their results (from SESAR conference proceedings 2023) underscored the complexity: pilots take time to perceive “remain well clear” guidance and then respond, affecting separation outcomes. In one scenario shown in their figures, horizontal well-clear bands were active for over two minutes, changing direction dozens of times within that window [14]. This highlights that human-in-the-loop factors must be considered in UAS DAA. It also emphasizes the need for stochastic modeling: even with the same initial geometry, random elements (pilot reaction latency, filtering noise) made the trajectories vary widely.
Moreover, NASA’s comparative analysis of ACAS XU vs DAIDALUS (also in 2018) shows ACAS XU issuing Remain Well Clear advisories on timing comparable to DAIDALUS guidance [15]. Both systems effectively provide early alerting, although ACAS XU tended to generate slightly later (more conservative) advisories than DAIDALUS. In collision scenarios, ACAS XU RAs occurred later than DAIDALUS’s “Corrective” alerts, overlapping DAIDALUS’s timeline of returning well-clear [15]. This suggests ACAS XU strikes a balanced approach: give pilots a chance to self-separate (“remain clear”), but still intervene when necessary.
Collectively, these studies indicate that ACAS XU works as intended, blending predictive RA logic with UAS-specific features. The Boeing/NLR/SESAR-noted insight is that when pilots (or autopilots) sometimes err, the probabilistic logic still maintains safety. Importantly, the trials also provided real pilot feedback on the new advisories and failure cases (e.g. communication delays leading to delayed responses). Such empirical evidence will be crucial in framing certification arguments.
Data Analysis and Evidence-Based Insights
Extensive simulation and test data support the ACAS X paradigm. Two threads stand out: (a) Monte Carlo analysis of uncertainty; (b) performance comparisons of deterministic vs optimized logic.
Uncertainty and Monte Carlo. Traditional ACAS validation used deterministic encounter sets (e.g. recorded real traffic or standard encounter catalogs). Recent work argues for stochastic evaluation. Sybert Stroeve’s group (Royal Netherlands Aerospace Centre / TU Delft) has demonstrated that including “intrinsic uncertainties” (e.g. sensor noise, wind disturbance, pilot variability) via agent-based Monte Carlo profoundly affects performance metrics (Source: research.tudelft.nl). In their 2020 Journal of Air Transportation paper, Stroeve et al. note that TCAS RA generation and aircraft responses become fundamentally non-deterministic processes when uncertainty is included (Source: research.tudelft.nl). They integrated ACAS II and ACAS Xa into an evaluation tool and ran thousands of simulated encounters. Results showed that statistical distributions of miss distances and near-midair-collision (NMAC) probabilities differ significantly from deterministic simulations. Crucially, the tails of the distributions (rare but severe events) grow fatter, and mean miss distances often shift. In plain terms, including real-world unpredictability increased estimated risk. They emphasize: “addressing intrinsic uncertainties through MC simulation is essential in evaluating ACAS” (Source: research.tudelft.nl).
Parallel work by Stroeve (2023, MDPI) compared deterministic TCAS II vs ACAS Xa under Monte Carlo. They found that conventional (deterministic) risk estimates tend to understate collision risk, and that this bias was consistently larger for ACAS Xa than for TCAS II (Source: reports.nlr.nl). Meaning: ACAS Xa may rely more on its predictive model, so missing uncertainties can mislead its performance. In their mixed-agent MC model, pilot non-response was a larger contributor to unresolved risk than differences in alert logic (Source: reports.nlr.nl), suggesting human factors loom large. These studies underline that ACAS X should be evaluated with stochastic methods to capture real safety.
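The effect these studies describe — deterministic pilot-response assumptions masking tail risk — can be reproduced in a toy experiment. All encounter numbers, the delay distribution, and the 2% non-response rate below are invented for illustration:

```python
import random

def nmac_risk(n_runs=20000, deterministic=False, seed=1):
    """Toy vertical encounter: an RA commanding a 25 ft/s climb is
    issued 30 s before CPA; an NMAC is counted when vertical separation
    at CPA is below 100 ft.  Deterministic mode: the pilot always
    responds after exactly 5 s.  Stochastic mode: a 2% chance of no
    response at all, otherwise an exponentially distributed delay with
    a 5 s mean."""
    rng = random.Random(seed)
    t_cpa_s, climb_fps, nmac_ft = 30.0, 25.0, 100.0
    nmacs = 0
    for _ in range(n_runs):
        if deterministic:
            delay_s = 5.0
        elif rng.random() < 0.02:          # pilot never responds
            delay_s = float("inf")
        else:
            delay_s = rng.expovariate(1.0 / 5.0)
        sep_ft = max(0.0, t_cpa_s - delay_s) * climb_fps
        if sep_ft < nmac_ft:
            nmacs += 1
    return nmacs / n_runs
```

The deterministic assumption reports zero residual risk, while the stochastic runs surface the tail (a few percent NMACs in this toy setup, dominated by the non-response cases) — the same direction of bias the cited Monte Carlo studies report.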
Performance Comparisons (Reactive vs Predictive). MIT Lincoln Lab (Kochenderfer et al., 2011) showed in simulations that ACAS X’s optimized logic “significantly outperforms TCAS” on both safety (loss-of-separation avoidance) and operation (alert rates) [2]. They considered an assumed traffic model and simulated thousands of encounters, finding ACAS X could achieve lower near-miss rates with comparable or fewer RAs. Similarly, the ADS-B “Pilot Aid” program comparisons and NASA studies have confirmed ACAS X’s advantages in simulated airspace.
Eurocontrol’s statistics (based on operational data) underpin the trust in TCAS II: prior to ACAS X deployment, TCAS RAs in Europe often numbered thousands per year, with virtually no mid-air collisions. Anecdotally, on the order of ten thousand RAs are issued for each collision averted. ACAS X is expected to improve on this ratio, though quantitative field data is still emerging. The key is that by invoking RA guidance in advance of a conflict, ACAS X can avoid last-second, narrow-margin maneuvers. Ideally, ACAS X’s probabilistic model also adapts to local traffic patterns, potentially weighting common scenarios (e.g. approach corridors) to reduce nuisance alerts in predictable cases.
Nevertheless, Monte Carlo results caution that one must also consider worst-case scenarios: ACAS X’s higher optimization might omit extreme outliers that TCAS II’s conservatism guarded against. For example, the MDPI study noted that ACAS Xa had greater bias in underestimating NMAC risk than TCAS II (Source: reports.nlr.nl). This paradox arises because ACAS X’s assumptions in the optimization model may not cover rare pilot errors or sensor faults.
Taken together, these data-driven evaluations endorse the predictive approach but also highlight its subtleties. They motivate the need for rigorous, statistically grounded certification methods (next section).
Case Studies and Examples
ACAS Xu Flight Tests (2014–2015): The NASA/AFRC flight test series evaluated ACAS XU on real aircraft (see ACAS-Xu Initial Self-Separation Flight Tests [12]). This is arguably the first major real-world demonstration of predictive ACAS. The test report documents system behavior, including detailed event logs. For example, in one encounter, a self-separation flight with a large transport intruder, the UAS received a “Remain Well Clear” alert 30 seconds before predicted conflict and successfully executed minor trajectory adjustments to remain separated. This contrasts with TCAS II, which might have waited until only ~15 seconds before CPA to advise a climb. Across dozens of sorties, the tests showed ACAS XU can maintain safe separation and that pilots can interpret the new advisories. The report also notes edge cases: at very close range, sometimes ACAS XU would issue a standard climb or turn, akin to TCAS, but usually somewhat later than legacy TCAS would. In summary, the real flight logs confirm that predictive alerts are usable by pilots and effective at collision avoidance.
Pilot-in-the-Loop Simulations: Several Human-In-The-Loop (HITL) studies have been done in simulator or cockpit trials. For ACAS XA, Conrad Rorie et al. (2019, DASC) performed a 20-pilot evaluation. They found that pilots safely followed ACAS Xa advisories and preferred its smoother alerting over TCAS II v7.1’s abrupt RAs. Early studies of ACAS XU show mixed results: remote pilots sometimes delayed response to weak “remain clear” guidance, leading to reduced benefit. For example, in Stroeve et al.’s remote pilot model study [13], adding pilot delay caused several scenarios where ACAS XU safety margins were eroded (though still better than no ACAS). These examples highlight that pilot behavior must be accounted for in predictive logic; in practice this means assuming slower, imperfect reactions in the model or supplementing with automation (non-piloted vehicles).
Certified UAS and DAA Progress: On the industrial side, companies like NASA contractor VOLCANODE have begun certifying demonstrator DAA (Detect and Avoid) systems using ACAS X technology. RTCA has developed DO-365 (MOPS for DAA systems), which references DAIDALUS and similar logic. In Europe, the CAA and EASA have approved certain DAA/ACAS systems for light UAS at lower design assurance levels (DO-178C DAL C or D). For example, the DPOD DSAS (detecting manned traffic at see-and-avoid levels) was authorized using a simplified neural network, model-checked to meet DO-178C objectives. These case-by-case certifications show that regulators are exploring ways to assimilate predictive algorithms via alternative means (e.g. limited function definitions, thorough analysis).
Incident Analyses: Although ACAS X is not yet widely deployed, there are anecdotal reports from TCAS II incidents. Historically, in two well-known events (a 1990 near-miss off Cape Cod and the 2002 Überlingen mid-air collision), non-ideal pilot responses and limitations of the RA logic contributed to the outcome. Analysts note that a predictive alert issued earlier, when the aircraft were further apart, might have prevented these close calls. These lessons form part of the impetus for ACAS X. Moreover, in some TCAS incidents, pilot confusion over an unexpected RA has been cited. ACAS X aims to reduce surprise by issuing gentler alerts first. Whether this works in practice remains to be seen – a real-world comparison after ACAS X deployment is a case study for the future.
DO-178C Certification and Non-Determinism
The certified avionics paradigm (DO-178C/ED-12C) is built on the assumption of fully testable, deterministic software logic. Non-determinism in this context typically refers to behavior that cannot be predicted purely from specified inputs. DO-178C (DAL A/B systems) mandates development assurance objectives that include showing code coverage under test and requirements-based validation. In essence, for each requirement and each branch/condition in code, there must be evidence of correct behavior for all cases. Critically, software must behave deterministically given the same inputs; outcomes should be repeatable to confirm correct function [16] and to perform rigorous testing. EASA guidance AMC 20-193 even notes that “determinism” is interpreted as producing a predictable outcome in finite time [16].
This worldview clashes with several aspects of ACAS XU and similar logic:
- Probabilistic Model Inputs: ACAS X’s core algorithm uses random variables (e.g. sampled aircraft maneuvers) when computing its policy, though offline. The final code itself is deterministic: it’s a lookup table. But the design process involves nondeterminism. Under traditional DO-178C, the emphasis is on verifying the code against requirements; how the requirements were derived is secondary. Nevertheless, this stochastic development can create “hidden” assumptions in the policy that are not explicitly requirement-driven. Showing that every table entry is correct for every possible encounter is infeasible except by appealing to the optimization proof (which DO-178 does not natively cover).
- Continuous State Abstractions: ACAS X policy tables theoretically cover a discretized state space. In practice, continuous inputs (range, bearing, vertical speed, etc.) are quantized. If a machine learning or interpolation method is used (see next point), that interpolation may not be deterministic or linear. DO-178C requires boundary analysis for range mapping, but a complex non-linear interpolation in a lookup could defy simple coverage.
- Machine Learning / Neural Networks: Some ACAS XU research (and tools like MathWorks’ “ACAS Xu Neural Networks” example) involve neural network representations of the policy [17] [5]. Neural nets are by nature opaque and statistically trained. Their inference is deterministic at run-time for a fixed model, but because weights were learned, their decision boundaries are complex and not easily deconstructed into requirements. DO-178C does not address such learned components explicitly. The main handle would be to treat the neural net as object code and attempt coverage on its execution model, but this is effectively impossible at scale. Instead, authorities may have to rely on other means (see below).
- Pilot/Vehicular Behavior Loops: ACAS XU relies on pilot compliance (or autopilot rules) to execute maneuvers. Human or complex autopilot reactions introduce variability: e.g. a pilot may delay one to two seconds before acting on an RA. DO-178C cannot certify pilot behavior, only the advisory logic. This disconnect is a known limitation of TCAS and is worse with autonomous UAS. In testing, “TCAS + pilot” is effectively non-deterministic because the pilot adds randomness. This is acknowledged by analyses: ACAS modelers treat pilot reaction as a probabilistic delay when simulating ACAS X effectiveness (Source: research.tudelft.nl) [14]. Regulators require worst-case assumptions about pilot delay or automated response latencies, which makes test matrices explode.
- Sensor Uncertainties: Inputs to ACAS X (e.g. ADS-B, radar) have non-negligible error. ACAS logic assumes a sensor model (e.g. ADS-B position error ~10 m), but actual error can vary. Under DO-178C, sensor errors are often budgeted by higher-level requirements, but ACAS X explicitly includes sensor error distributions in its optimization to weight expected risk (Source: skybrary.aero). When validating ACAS X, one must consider input uncertainty; in DO-178 terms, this means adding robustness tests which are outside typical software test plans. Stroeve’s work shows that including sensor noise changes RA outcomes markedly (Source: research.tudelft.nl).
- Algorithmic Non-Linearity: ACAS X’s DP solution can be seen as the fixed point of a Bellman equation. It cannot be “inverted” or simply analyzed by linear partition. Formal methods (DO-333) were created for certain kinds of logic, but applying theorem proving to a multi-dimensional lookup table is non-trivial. If any discrete branching is in the RA logic (e.g. “if vertical separation < threshold, do X, else Y”), MC/DC testing requires both branches to be covered. ACAS X policies may use complex utilities rather than explicit branches, so writing equivalent conditional code for coverage might be needed.
- Update and Maintenance: If the traffic model changes (e.g. new flight procedures, more drones), ACAS X tables may need re-optimization. Each new version would require full recertification under DO-178C. In contrast, TCAS’s rule-set changes relatively infrequently. This versioning aspect adds to the workload.
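The MC/DC point in the list above can be made concrete: a hand-written guard with a compound condition admits a small test-vector set in which each condition independently flips the decision, whereas a policy table has no such branches to cover. The guard and its thresholds below are invented for illustration:

```python
def issue_ra(tau_s, vert_sep_ft, coordinated):
    # invented guard of the form (A and B) or (not C)
    return (tau_s < 25.0 and vert_sep_ft < 600.0) or not coordinated

# MC/DC-style vectors: each condition is shown to independently flip
# the decision while the others are held fixed.
MCDC_VECTORS = [
    ((20.0, 500.0, True),  True),    # baseline: both thresholds breached
    ((30.0, 500.0, True),  False),   # tau_s alone flips the outcome
    ((20.0, 700.0, True),  False),   # vert_sep_ft alone flips the outcome
    ((30.0, 700.0, True),  False),   # reference for the coordination pair
    ((30.0, 700.0, False), True),    # coordinated alone flips the outcome
]

for args, expected in MCDC_VECTORS:
    assert issue_ra(*args) == expected
```

Five vectors cover three conditions here; a policy table with millions of entries offers no analogous condition-level coverage argument, which is the crux of the structural-coverage difficulty.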
These factors create a certification headache. Not only must ACAS XU meet all DO-178C objectives at a high design assurance level (likely DAL A or B, the levels associated with catastrophic and hazardous failure conditions), the development process itself must be argued safe. EASA and FAA have begun to tackle this: they describe methods like the “W-shaped” learning assurance lifecycle (CoDANN) [4], and emerging guidance (the “First Usable Guidance” for Level 1 ML applications) to ensure data sets and training are rigorously documented. However, no standard fully addresses non-deterministic or learned code.
For example, CoDANN (EASA/Daedalean) emphasizes explainability and safety assessment for neural networks, but acknowledges that achieving DO-178C objectives for ML systems requires new evidence beyond traditional V&V [7] [4]. NASA's analysis similarly notes that only by adopting special assumptions can ML systems meet standards even for low-criticality applications [7]; collision avoidance would be high-criticality, making it harder still. In short, certifying ACAS XU under a DO-178C regime may need a combination of: (1) adherence to as many DO-178C objectives as practicable (requirements justification, MC/DC testing of any implemented code, configuration control, robust quality processes); (2) use of emerging techniques (formal analysis and DO-333 proofs where possible, structural coverage on simplified models, advanced simulation-based validation); and (3) safety-case arguments that incorporate statistical testing and bounded uncertainty analysis.
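The "equivalent conditional code" idea mentioned above can be made concrete with a toy example. The thresholds and the RA condition below are invented for illustration; the point is that once a decision is written as explicit conditions, a minimal MC/DC vector set can be stated and checked:

```python
# Hypothetical branch-style RA condition, written as explicit conditions so
# that structural (MC/DC) coverage becomes measurable. Thresholds invented.
def ra_required(vert_sep_ft: float, tau_s: float) -> bool:
    # Decision "A and B": A = separation below threshold,
    #                     B = time-to-CPA below threshold.
    return (vert_sep_ft < 600.0) and (tau_s < 25.0)

# Minimal MC/DC vector set for "A and B": each condition is shown to
# independently affect the outcome while the other is held constant.
mcdc_vectors = [
    ((500.0, 20.0), True),    # A=T, B=T -> RA issued
    ((800.0, 20.0), False),   # A flips to F (B held T), outcome flips
    ((500.0, 40.0), False),   # B flips to F (A held T), outcome flips
]
for (sep, tau), expected in mcdc_vectors:
    assert ra_required(sep, tau) == expected
```

For a two-condition `and`, three vectors suffice; a real ACAS X utility function has no such branches, which is exactly why coverage credit is hard to claim.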
Authorities are examining these approaches. The FAA's abstraction-layer methodology (AMC 20-193/20-115D) implies that an approved external system can stand in for a DO-178C component under strict interface assumptions [16]. On one interpretation, ACAS X's inner workings could be treated as a bounded "black box" if its inputs and outputs are well characterized; but certification authorities would still demand rigorous evidence that the outputs always maintain separation, likely via analysis and simulation evidence rather than classical testing.
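One way to picture the bounded black-box interpretation is a runtime wrapper that enforces the interface assumptions around an opaque advisory function. The envelope limits, function names, and advisory labels below are illustrative assumptions, not taken from any standard:

```python
# Sketch: the advisory function itself is opaque, but its interface
# contract (input envelope, output alphabet) is enforced at runtime.
ALLOWED_RAS = {"CLIMB", "DESCEND", "LEVEL_OFF", "CLEAR"}

def checked_advisory(advisory_fn, own_alt_ft, intruder_alt_ft, tau_s):
    # Input contract: the optimized table is only trusted over the
    # state space for which it was characterized.
    assert 0.0 <= own_alt_ft <= 45_000.0, "ownship altitude outside envelope"
    assert 0.0 <= intruder_alt_ft <= 45_000.0, "intruder altitude outside envelope"
    assert 0.0 <= tau_s <= 120.0, "time-to-CPA outside envelope"

    ra = advisory_fn(own_alt_ft, intruder_alt_ft, tau_s)

    # Output contract: the advisory must be one of the enumerated RAs.
    assert ra in ALLOWED_RAS, "unknown advisory from black box"
    return ra
```

The wrapper itself is small, conventional code that can be certified classically; the open question is what evidence backs the claim that in-contract outputs are always safe.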
Discussion of Implications and Future Directions
The evolution from reactive TCAS to predictive ACAS X embodies a classic trade-off: safety/efficiency vs. verifiability. On one hand, probabilistic models and optimization can improve safety margins and reduce nuisance alerts. Simulations and tests indicate that ACAS XU will better accommodate future traffic (UAS swarms, etc.), thus potentially saving lives. On the other hand, this advance forces a reassessment of certification paradigms.
Formal Methods: One perspective favors applying formal methods (DO-333) to verify the ACAS X logic outright. Some research (Lygeros & Lynch 1997; Livadas et al. 1999 [18]) began formally verifying TCAS logic against separation theorems. A formal verification of an MDP-derived policy could, in principle, prove that "no reachable state leads to loss of separation under assumptions X and Y" (where X and Y define the world model). Such a result would be a major certification milestone, though it would likely require immense modeling and theorem-proving effort. NASA Langley's Formal Methods program hints at this direction: DAIDALUS has been fully specified in PVS [11]. A similar formal specification of ACAS XU (in a tool such as PVS or Coq) could support a DO-333 (ED-216) supplement to DO-178C, providing mathematical proof of safety under ideal sensors.
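On a finite, discretized model, the property "no reachable state leads to loss of separation" can at least be checked exhaustively, which conveys the shape of the formal argument even though a real DO-333 case would be carried out in a prover such as PVS. The toy dynamics, policy, and state space below are invented for illustration:

```python
# Exhaustive reachability check: enumerate every state reachable under a
# fixed policy and confirm none violates the separation predicate.
from collections import deque

def check_no_loss_of_separation(policy, initial_states, step, unsafe):
    """Breadth-first enumeration; returns (True, None) if every reachable
    state is safe, else (False, counterexample_state)."""
    seen = set(initial_states)
    frontier = deque(initial_states)
    while frontier:
        s = frontier.popleft()
        if unsafe(s):
            return False, s
        nxt = step(s, policy(s))
        if nxt not in seen:
            seen.add(nxt)
            frontier.append(nxt)
    return True, None

# Toy instance: relative altitude (x100 ft) decays toward zero each step
# unless the policy commands a climb; "unsafe" means co-altitude at CPA.
def toy_step(s, act):
    alt, tau = s
    alt = alt + 1 if act == "CLIMB" else (alt - 1 if alt > 0 else alt)
    return (min(alt, 5), max(tau - 1, 0))

def toy_policy(s):
    alt, _tau = s
    return "CLIMB" if alt <= 2 else "LEVEL"

ok, cex = check_no_loss_of_separation(
    toy_policy, [(a, 5) for a in range(1, 6)], toy_step,
    unsafe=lambda s: s[0] == 0 and s[1] == 0)
```

Exhaustive enumeration only closes the argument because the toy state space is finite; scaling this to the real ACAS XU state space is exactly the "immense effort" noted above.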
Robust Testing and Monitors: Another route is rigorous simulation-based assurance. The community's emphasis on Monte Carlo analysis suggests that validity must come from breadth of scenarios rather than exhaustive unit tests. Certification could involve extensive randomized testing (stochastic scenario generation) and worst-case analysis. Runtime monitors or supplementary logic ("safety nets") might also be mandated: e.g. fallback logic that overrides the learned policy when the state is untested or near a boundary region. The DLR Aviation Tech Today article points to "Advisory Viewer" tools for understanding network decisions [19]. Future certification processes may require such explainability tools.
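A minimal sketch of the randomized-testing idea follows, using the classical "rule of three" to attach a 95% upper confidence bound to the per-scenario failure probability when no failures are observed. The scenario generator and pass criterion are placeholders supplied by the caller:

```python
# Randomized scenario testing with a crude statistical bound on residual
# per-scenario failure probability.
import random

def run_random_scenarios(simulate_once, n_trials, seed=0):
    """Run n randomized encounter scenarios; simulate_once(rng) must
    return True when separation was maintained in that scenario."""
    rng = random.Random(seed)  # fixed seed for reproducible campaigns
    failures = sum(0 if simulate_once(rng) else 1 for _ in range(n_trials))
    # "Rule of three": with zero failures in n trials, the 95% upper
    # confidence bound on per-scenario failure probability is about 3/n.
    bound_95 = 3.0 / n_trials if failures == 0 else None
    return failures, bound_95
```

The bound shows the cost of statistical evidence: demonstrating a failure rate below 1e-7 per scenario at 95% confidence requires on the order of 3e7 clean trials, which is why breadth of simulation, not unit testing, carries the argument.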
Regulatory Adaptation: Recognizing these issues, EASA and the FAA are already adapting. AMC 20-193 and related acceptable-means-of-compliance material allow alternative compliance routes. EASA's "First Usable Guidance" for Level 1 machine-learning applications (2021) and its planned follow-ons indicate the direction of change. It is plausible that by the late 2020s, specific ACAS X validation protocols will be integrated into formal guidance. Industry-led proposals might likewise define new DO-178C "machine learning annexes" or best practices.
Remaining Challenges: Despite this optimism, obstacles remain. The fundamental unpredictability of human and autonomous actions means some residual risk will always elude pre-flight proofs. There is debate over whether extremely conservative safeguards (such as a gold-standard fallback response layer) should be required, or whether the impossibility of absolute safety should be conceded in exchange for greater overall effectiveness. Liability is also contentious: if an ML-based ACAS issues an "optimal" action that pilots find counterintuitive, legal responsibility in a failure scenario becomes murky.
Future Research: Future work likely includes:
- Hybrid ACAS: Combining classical and ML components, where ML is used only in low-critical advisory modes (e.g. “traffic advisories”) and fallback to TCAS II RAs for critical events.
- DAA for UAM: Extending ACAS X logic to Urban Air Mobility (eVTOL taxi drones) which operate in highly cluttered airspace, requiring complex well-clear definitions.
- Swarm Safety: Adapting collision avoidance to cooperative UAS swarms may involve distributed algorithms (beyond pairwise ACAS).
- Digital Twin Certification: Using high-fidelity digital twins of aircraft and sensors to run ACAS X through every plausible scenario.
- Hardware-in-the-Loop: For DO-178C credit, exercising the ACAS software with real transponder signals and pilots in a motion platform (very high-fidelity testing).
In each case, the tension between algorithmic sophistication and certifiability will persist. The mantra in aerospace remains that safety must be demonstrated, not merely inferred from performance numbers. Future policy will likely treat ACAS X as a poster child for evolving assurance methods.
Conclusion
Predictive TCAS/ACAS-XU systems represent a transformative advance in collision avoidance, leveraging probabilistic modeling and optimization to provide earlier and more adaptable alerts than traditional TCAS II. Extensive research and tests validate their performance gains: Monte Carlo studies show improved realism in risk estimates (Source: research.tudelft.nl), MIT/Lincoln Lab simulations show higher safety with fewer false alarms [2], and NASA flight tests confirm viability for UAS operations [12]. However, these sophisticated capabilities come at the cost of non-deterministic logic that strains the assumptions of DO-178C certification. Traditional DO-178C mandates are designed for software whose behavior is fully predictable and exhaustively testable; ACAS X’s reliance on statistical training and real-world uncertainties means that novel assurance approaches are required [7] [6].
To date, aviation authorities and researchers are actively addressing this gap. Supplemental methodologies (formal verification, ML assurance processes, comprehensive stochastic testing) are being developed to meet DO-178C/DO-254 objectives in the ACAS X context. Future collision avoidance will likely continue integrating predictive, probabilistic logic, but it must also incorporate safety nets that make such logic acceptable to certifiers. In sum, predictive ACAS systems mark a clear direction for airborne safety, but fully realizing their benefits will demand innovative certification strategies as rigorous as the algorithms themselves.
References: All claims above are supported by sources including flight test reports [12], academic research on probabilistic conflict detection (Source: old.control.ee.ethz.ch), ACAS-X development literature [1] [2] [5] (Source: skybrary.aero) (Source: research.tudelft.nl), and regulatory and evaluation studies [10] (Source: www.eurocontrol.int) (Source: reports.nlr.nl) [7] [6]. (See inline citations for details.)
DISCLAIMER
This document is provided for informational purposes only. No representations or warranties are made regarding the accuracy, completeness, or reliability of its contents. Any use of this information is at your own risk. Landing Aero shall not be liable for any damages arising from the use of this document. This content may include material generated with assistance from artificial intelligence tools, which may contain errors or inaccuracies. Readers should verify critical information independently. All product names, trademarks, and registered trademarks mentioned are property of their respective owners and are used for identification purposes only. Use of these names does not imply endorsement. This document does not constitute professional or legal advice. For specific guidance related to your needs, please consult qualified professionals.