Why mmWave Radar is Increasingly the Right Choice for Safety

For two decades, industrial machine safety has been built around light curtains and laser scanners. They work, but they impose a price on the surrounding mechanical design: rigid mounting, sensitivity to contamination, and frequent realignment after mechanical knocks. In medical ambient-assisted-living (AAL) and elderly-care applications, the camera-based equivalents add a different price: a privacy compromise that regulators and end users are no longer willing to pay.

60 GHz mmWave radar offers a different trade-off. It sees three-dimensional zones rather than two-dimensional planes. It does not care about dust, oil mist, fog, or darkness. It cannot identify a person. For safety functions defined as detection of a person inside a hazardous zone, that combination of properties is often the right answer. The remaining question is how to engineer the radar product so that a notified body or accredited assessor will sign it off at the required integrity level.

The Standards Landscape

Radar safety programs intersect several standards families. The exact set depends on whether the product is industrial, automotive, or medical, and whether the market is the European Union, the United States, or both.

  • IEC 61508:2010 is the parent standard for electrical and electronic safety-related systems. It defines SIL 1 through SIL 4, the failure-rate targets behind them, and the lifecycle and work products required for each. It is referenced by almost every other functional-safety standard.
  • ISO 13849-1:2023 covers machinery. It defines Performance Levels a through e, plus the Category of the safety architecture. It is the most common reference for CE marking under the Machinery Directive.
  • IEC 62061:2021 also covers machinery but in the IEC tradition. It maps to SIL 1 through 3 and is increasingly used alongside ISO 13849 for complex safety-related electronic systems.
  • IEC 61496 covers electro-sensitive protective equipment (ESPE), the device-level standard for safety sensors. Part 1 is generic, part 3 covers Active Opto-electronic Protective Devices responsive to Diffuse Reflection (AOPDDRs). Radar is not explicitly named but the concepts transfer cleanly.
  • ISO 26262 covers automotive electrical and electronic systems, with ASIL A through D. We mention it because 77 GHz automotive radar shares many of the 60 GHz engineering challenges.
  • IEC 60601-1 and IEC 62304 cover medical electrical equipment and medical device software. Critical for AAL fall-detection products under EU MDR.

The right strategy is to pick the primary standard for the target market, then deliberately map work products so that adjacent markets can reuse the same evidence. Done well, the same FMEDA serves both an IEC 62061 SIL 2 claim under the EU Machinery Directive and an IEC 61496-1 type 3 claim for the device-level ESPE certificate.

From System Hazards to Safety Requirements

Every functional-safety program starts with hazard analysis. Two methods are well-established for radar perception products.

HAZOP (Hazard and Operability) is the legacy method, particularly suited to process industries and discrete machinery. The team walks through nodes of the system and applies guidewords (more, less, none, reverse) to each operational parameter. For a radar safety sensor monitoring a robot cell, nodes might be the radar transmit chain, the receive chain, the signal processing, the safety output, and the host comms. Guidewords identify what could go wrong and what the consequence would be.

STPA (System-Theoretic Process Analysis) is more modern, more software-aware, and increasingly required for autonomous systems. It models the system as a control structure of controllers and controlled processes, then asks for each control action: what could make this action unsafe? STPA tends to find loss scenarios that HAZOP misses, particularly around emergent failures between software components.

The output of either method is a list of hazardous events, each with an assigned target failure rate. That list becomes the input to the safety requirements specification, which is the document the rest of the program traces back to.
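The mapping from an assigned target failure rate to an integrity level follows the standard IEC 61508 high-demand PFH bands. A minimal sketch (the hazard list and rates below are illustrative, not from any real program):

```python
# Sketch: map a target dangerous-failure rate (per hour, high-demand mode)
# to the IEC 61508 SIL band it falls into. The thresholds are the standard
# high-demand PFH bands; the hazard names and rates are illustrative only.

def sil_for_target_pfh(pfh: float) -> int:
    """Return the SIL band for a target PFH (high-demand mode)."""
    if 1e-9 <= pfh < 1e-8:
        return 4
    if 1e-8 <= pfh < 1e-7:
        return 3
    if 1e-7 <= pfh < 1e-6:
        return 2
    if 1e-6 <= pfh < 1e-5:
        return 1
    raise ValueError("target outside SIL 1-4 bands")

# Illustrative output of a hazard analysis: (event, assigned target rate)
hazards = [
    ("person undetected inside robot cell", 5e-8),   # falls in the SIL 3 band
    ("late assertion of safety output",     5e-7),   # falls in the SIL 2 band
]
for name, target in hazards:
    print(f"{name}: SIL {sil_for_target_pfh(target)}")
```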

Safety Architecture Patterns for Radar

Three architectural patterns cover most radar safety functions. The right pattern depends on the SIL target, the cost envelope, and the available compute.

1oo1D (one out of one with diagnostics) uses a single radar sensor with continuous self-tests. It can reach SIL 2 with diagnostic coverage above 90 percent. It is the cheapest pattern and is suitable for safety functions where the cost of an unsafe failure is bounded.

1oo2D (one out of two with diagnostics) uses two independent radar sensors, with the safety output asserted when either sensor declares an unsafe condition. Diagnostic coverage requirements are similar to 1oo1D, but the architectural redundancy supplies the hardware fault tolerance required for SIL 3. This is the most common SIL 3 pattern in industrial radar.

2oo3 (two out of three) uses three independent radar sensors with a majority vote. It tolerates one failed sensor without false alarms, which matters in availability-critical applications like high-throughput logistics. It costs more than 1oo2D and is rarely the right answer for cost-sensitive products.
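The 1oo2D voting rule described above can be sketched in a few lines. This is a simplified illustration, not a real product API; the channel and field names are invented for the example:

```python
# Minimal sketch of the 1oo2D output rule: the safety output de-energises
# (trips) when EITHER sensor reports an intrusion, and also when EITHER
# channel's self-diagnostics report a fault, so a failed channel always
# drives the system toward the safe state. Names are illustrative.

from dataclasses import dataclass

@dataclass
class Channel:
    intrusion: bool        # person detected in the hazardous zone
    diagnostics_ok: bool   # channel self-tests currently passing

def safety_output_energised(a: Channel, b: Channel) -> bool:
    """True = output energised (machine may run); False = trip."""
    if not (a.diagnostics_ok and b.diagnostics_ok):
        return False   # any diagnosed fault forces the safe state
    if a.intrusion or b.intrusion:
        return False   # 1oo2: either channel alone can trip the output
    return True

# Either channel alone trips the output:
print(safety_output_energised(Channel(True, True), Channel(False, True)))   # False
print(safety_output_energised(Channel(False, True), Channel(False, True)))  # True
```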

For all three patterns, the dependent-failure analysis is the most subtle work in the program. The beta factor (the fraction of failures that are common to both channels) typically dominates the achievable SIL. Reducing beta requires careful attention to common power supplies, common clocks, common mechanical mounts, and common software components.
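A back-of-envelope comparison shows why beta dominates. This is a deliberate simplification, not the full IEC 61508-6 equations: it assumes identical channels with a dangerous-undetected rate lambda_du, that independent failures are only dangerous if both occur within a test interval, and that a common-cause event defeats both channels at once. All numeric values are illustrative.

```python
# Why the beta factor dominates a 1oo2 architecture (simplified model,
# NOT the full IEC 61508-6 treatment). Both channels share the dangerous-
# undetected rate lambda_du; independent failures must coincide within
# t_test hours to be dangerous; a common-cause event kills both at once.

lambda_du = 1e-6   # dangerous-undetected rate per channel, per hour (illustrative)
t_test = 1.0       # window within which a second failure is dangerous, hours

for beta in (0.02, 0.05, 0.10):
    independent = ((1 - beta) * lambda_du) ** 2 * t_test  # both fail together
    common_cause = beta * lambda_du                       # one event, both channels
    print(f"beta={beta:.2f}: independent~{independent:.1e}/h, "
          f"common-cause~{common_cause:.1e}/h")
```

Even at beta of 2 percent, the common-cause term is several orders of magnitude larger than the independent dual-failure term, which is why the dependent-failure analysis, not the per-channel failure rate, usually sets the achievable SIL.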

FMEDA in Practice on the IWR6843 and Its Successors

The FMEDA workbook is where the safety claim is actually built. For a radar product the rows typically come from three sources: the radar SoC vendor's safety manual, our analysis of the surrounding circuitry, and our analysis of the software architecture.

Texas Instruments publishes safety-manual content for the IWR6843 that maps the internal blocks (ADC, RF synthesiser, analogue front end, digital baseband, Cortex-R4F, DSP, hardware accelerator) to assumed failure rates and recommended diagnostic measures. That content is the starting point; it is not the finished FMEDA. The surrounding power management, oscillator, antenna feeds, and host communications all need to be added by the integrator. Software failures need to be accounted for separately under IEC 61508-3.

Typical metrics that come out of a finished radar FMEDA:

  • SFF (Safe Failure Fraction): above 90 percent for SIL 2, above 99 percent for SIL 3.
  • DC (Diagnostic Coverage): above 90 percent for SIL 2, above 99 percent for SIL 3.
  • PFH (Probability of Failure per Hour, high-demand mode): below 10⁻⁶ for SIL 2, below 10⁻⁷ for SIL 3.
  • HFT (Hardware Fault Tolerance): zero for SIL 2 with sufficient diagnostics; one for SIL 3 in most cases.
  • T1 test interval: strictly, IEC 61508 distinguishes the diagnostic test interval (the period within which the diagnostics must complete their full coverage, often an hour or less for high-demand functions) from the proof-test interval T1 (the longer period over which undetected faults accumulate before an offline proof test).
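The first three metrics fall directly out of the failure-rate split in the FMEDA workbook. A minimal sketch, using illustrative FIT values rather than figures from any real safety manual:

```python
# How the headline FMEDA metrics fall out of the failure-rate split.
# Rates are illustrative FIT values, not taken from any vendor document.

FIT = 1e-9  # 1 FIT = one failure per 10^9 hours

lambda_safe = 400 * FIT   # safe failures
lambda_dd   = 550 * FIT   # dangerous failures, detected by diagnostics
lambda_du   = 50 * FIT    # dangerous failures, undetected

lambda_dangerous = lambda_dd + lambda_du
lambda_total = lambda_safe + lambda_dangerous

sff = (lambda_safe + lambda_dd) / lambda_total   # safe failure fraction
dc  = lambda_dd / lambda_dangerous               # diagnostic coverage
pfh = lambda_du                                  # 1oo1D high-demand approximation

print(f"SFF = {sff:.1%}, DC = {dc:.1%}, PFH ~ {pfh:.1e}/h")
# -> SFF = 95.0%, DC = 91.7%, PFH ~ 5.0e-08/h (compare against the bands above)
```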

Diagnostic Coverage: What Counts Inside a Radar SoC

Diagnostic coverage in a radar SoC is the sum of many small checks. The hard part is not listing the checks; it is justifying that they actually cover the failure modes they claim to cover.

Examples of in-product diagnostics that contribute to coverage:

  • RF transmit power monitor cross-checked against the antenna feedback path.
  • ADC dynamic-range monitoring against a known noise floor.
  • Periodic transmission of a calibration signal across the receive chain.
  • Antenna match monitor (VSWR estimate from the on-chip RF detector).
  • Watchdog supervision of the signal-processing schedule.
  • Cross-check between two independent compute paths on the same point cloud.
  • Plausibility checks on detected target motion (no instantaneous teleport).
  • Communications integrity over the safety bus (CRC plus sequence number).

Each of these contributes a percentage to the diagnostic-coverage total. The total claim should be defensible against fault-injection testing, which is required at SIL 3 and recommended at SIL 2.
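The motion-plausibility check in the list above is simple to state precisely: a tracked target that appears to jump farther between frames than a physical speed bound allows indicates a processing fault or ghost target. A minimal sketch, with an assumed (illustrative) speed bound:

```python
# Sketch of a motion-plausibility diagnostic: a target whose implied
# frame-to-frame speed exceeds a physical upper bound fails the check.
# The 10 m/s bound and the coordinates are illustrative assumptions.

import math

MAX_SPEED_M_S = 10.0   # assumed upper bound on real target speed

def motion_plausible(prev_xyz, curr_xyz, dt_s: float) -> bool:
    """False if the implied speed between frames exceeds the bound."""
    dist = math.dist(prev_xyz, curr_xyz)
    return dist / dt_s <= MAX_SPEED_M_S

print(motion_plausible((0.0, 1.0, 0.0), (0.2, 1.0, 0.0), 0.05))  # True: 4 m/s
print(motion_plausible((0.0, 1.0, 0.0), (3.0, 1.0, 0.0), 0.05))  # False: 60 m/s
```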

The Safety Case: What Auditors Expect

The safety case is the narrative document that ties the engineering evidence together for the assessor. A well-constructed safety case for a radar product runs to forty or fifty pages and includes:

  • Product overview, intended use, foreseeable misuse, operating environment.
  • Safety concept: hazard list, safety functions, target SIL or PL.
  • Architecture description with safety-relevant boundaries.
  • Reference to the FMEDA workbook and its summary metrics.
  • Description of diagnostics and the evidence that each is implemented.
  • Software lifecycle compliance argument under IEC 61508-3.
  • Tool-confidence-level assessment for compilers, debuggers, and test infrastructure.
  • Verification and validation report, including HIL test campaign results.
  • Environmental and EMC test reports demonstrating the product operates within its assumed envelope.
  • Safety manual for the integrator and the operator.

The work-product trace must allow the assessor to read from any high-level claim back to the unit-test or analysis that supports it. The engineering team that builds the product should write the safety case as it goes, not retrofit it at the end.

Common Pitfalls We See in Radar Safety Submissions

Five problems recur in radar safety programs that come to us mid-development.

  1. Late hazard analysis. Teams design the product, then run HAZOP, then discover safety requirements that change the architecture. The result is a wasted PCB revision and a six-month slip.
  2. Optimistic dependent-failure assumptions. The beta factor between two radar sensors sharing a single power supply is much higher than teams assume. Independent power islands are non-negotiable for SIL 3.
  3. Tool-confidence-level gap. Using a non-qualified compiler or debugger for safety-relevant code at SIL 3 requires extensive justification. Plan tool selection at safety-concept time.
  4. Software architecture mismatched to FMEDA. The FMEDA is built around safe and non-safe partitions, but the software is a monolith. Partitioning has to be designed in from the first commit.
  5. Documentation drift. The safety case captures the design at a single point in time, then the engineering team keeps designing. Configuration management of safety artefacts is as critical as configuration management of code.

How HALready Supports Enterprise Safety Programs

Our involvement spans the full safety lifecycle. For programs above one million euros, we typically run the full safety engineering as an integrated part of the V-Model rather than as a separate workstream.

Specific deliverables we own:

  • Hazard analysis (HAZOP or STPA), safety concept, safety requirements specification.
  • Safety architecture design, dependent-failure analysis, partitioning.
  • FMEDA workbook, including software FMEA.
  • Diagnostic design and verification, fault-injection test plan and execution.
  • Safety case authoring, assessor coordination, response to assessor questions.
  • Bridge to the machine-safety radar engineering and contactless fall-detection development teams, who use the safety artefacts at product level.
  • Sensor SoC selection in coordination with our TI mmWave EVM comparison guidance.

What we do not do: act as the notified body. Our role is to deliver the evidence package and represent your engineering case to the assessor. The certification body itself remains independent.

Frequently Asked Questions

What is the difference between SIL and PL?

SIL (Safety Integrity Level) comes from IEC 61508 and is used in process industries and machinery via IEC 62061. PL (Performance Level) comes from ISO 13849 and is used predominantly in machinery. They quantify the same idea (acceptable failure rate) on different scales. SIL 3 and PL e are broadly equivalent and cover the most demanding safety functions.

Can a single-chip radar reach SIL 3?

Yes, with architectural redundancy. A single IWR6843 reaches at most SIL 2 with full diagnostics. SIL 3 typically requires two radar sensor heads in a 1oo2D arrangement with independent compute, cross-checking, and a watchdog supervisor. Some specific applications can claim SIL 3 with a single sensor where the system-level redundancy comes from a non-radar diverse channel.

What is FMEDA in functional safety?

Failure Modes, Effects, and Diagnostic Analysis. It is the structured workbook that lists every component, its failure modes, the consequence of each failure, whether the failure is detected by the safety function, and the resulting contribution to the safe failure fraction. The outputs of the FMEDA are the headline metrics (SFF, DC, PFH) that feed your SIL claim.

How long does SIL 3 certification take?

From kickoff to a notified-body sign-off, twelve to eighteen months for a well-scoped radar product. The safety case takes three to six months of focused work on top of the normal engineering schedule, plus four to eight weeks of assessor review and iteration. Early scoping is critical: late safety changes are the most expensive way to lose a quarter.

What is diagnostic coverage in IEC 61508?

Diagnostic coverage is the fraction of dangerous failures that are detected by built-in tests. SIL 2 typically requires diagnostic coverage above 90 percent, SIL 3 above 99 percent. In a radar system, this means continuous self-tests on the analogue-to-digital converters, RF transmit power, antenna integrity, signal-processing path, and external comms.

Can radar safety functions reuse safety-rated software like FreeRTOS Safety?

Yes, and we recommend it where licensing allows. Pre-certified OS kernels, communication stacks, and floating-point libraries shorten the certification path because their compliance evidence can be reused. The integration work and the application code still have to be developed to the target SIL, but the foundation does not need to be re-certified.

Discuss your radar safety program

Thirty minutes with our principal engineer to scope the safety pathway, target integrity level, and likely assessor timeline.