AI in Safety Systems: What Manufacturing Standards Actually Allow
A client asked me recently whether they could use AI for a safety interlock on a piece of equipment. “The AI is better at detecting the dangerous condition than our current sensors,” he said. “But we’re not sure if standards allow it.”
It’s a question I’m hearing more often as AI capabilities expand. The answer is nuanced—neither a blanket “no” nor an unconditional “yes.”
The standards landscape
Several standards govern safety systems in manufacturing:
IEC 61508: The foundational standard for functional safety of electrical/electronic/programmable electronic systems. Other standards are often based on or aligned with this.
IEC 62443: Industrial cybersecurity standards, relevant because AI systems may have security vulnerabilities that affect safety.
ISO 13849 and IEC 62061: Machinery-specific safety standards, covering safety-related parts of control systems (ISO 13849) and the functional safety of machinery control systems (IEC 62061).
Sector-specific standards: Process industries have IEC 61511, automotive has ISO 26262, and so on.
None of these were written with AI specifically in mind. They were developed for traditional programmable systems with deterministic behaviour. AI—especially machine learning—introduces probabilistic behaviour that doesn’t fit neatly into existing frameworks.
What the standards say (and don’t say)
The relevant standards don’t explicitly prohibit AI. But they do establish requirements that AI systems struggle to meet.
Determinism requirements
Traditional safety systems are deterministic: given the same inputs, they always produce the same outputs. This makes them testable and predictable.
A trained machine learning model may be deterministic in the narrow sense that the same input produces the same output, but its behaviour is far harder to characterise: small input variations can produce different outputs, and retraining or updating the model shifts behaviour in ways that are hard to bound. This creates challenges for validation and verification.
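To make the contrast concrete, here is a minimal, purely illustrative sketch in Python. The trip threshold, model weights and sensor values are all invented for illustration; the point is only that a fixed-threshold interlock can be verified by inspection, while a learned classifier can flip its decision on a tiny input change near its decision boundary.

```python
import math

# Traditional interlock: a fixed, documented rule. Its behaviour over the whole
# input range can be enumerated and checked against the safety requirement.
TRIP_TEMPERATURE_C = 120.0  # hypothetical limit taken from the safety requirement

def interlock_trips(temperature_c: float) -> bool:
    return temperature_c >= TRIP_TEMPERATURE_C

# "ML-style" hazard classifier: deterministic once trained (same input, same
# output), but its decision surface is learned rather than specified. The
# weights below are invented purely for illustration.
WEIGHTS = (0.042, -1.3, 0.7)   # temperature, vibration, motor current
BIAS = -2.1

def ml_hazard_score(temp_c: float, vibration: float, current_a: float) -> float:
    z = WEIGHTS[0] * temp_c + WEIGHTS[1] * vibration + WEIGHTS[2] * current_a + BIAS
    return 1.0 / (1.0 + math.exp(-z))  # logistic score in [0, 1]

def ml_flags_hazard(temp_c: float, vibration: float, current_a: float) -> bool:
    return ml_hazard_score(temp_c, vibration, current_a) >= 0.5

if __name__ == "__main__":
    print(interlock_trips(119.9), interlock_trips(120.0))  # False True
    # Near the learned boundary, a 0.02 change in one input flips the call:
    print(ml_flags_hazard(95.0, 2.00, 1.0))                # False (score ~0.498)
    print(ml_flags_hazard(95.0, 1.98, 1.0))                # True  (score ~0.504)
```

Verifying the fixed threshold is a matter of inspection; arguing about where a learned boundary sits, and how it moves when the model is retrained, is what existing verification methods were never designed for.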
Systematic capability requirements
IEC 61508 requires demonstration of “systematic capability”—essentially, confidence that the system was developed and validated properly. Traditional software has established methods for this.
For AI/ML systems, these methods don’t directly apply. How do you demonstrate systematic capability for a neural network? The standards don’t provide clear guidance.
Validation and verification
Safety standards require thorough testing to demonstrate that systems behave correctly. For traditional software, this involves testing against requirements.
For machine learning models, exhaustive testing is often impossible—the input space is too large. Models may perform well on test data but fail on edge cases encountered in operation.
Transparency and interpretability
Some standards require that system behaviour be understandable—that you can explain why the system made a particular decision. Many AI systems, particularly deep learning, are “black boxes” where decisions are difficult to explain.
Where AI in safety is being explored
Despite these challenges, work is progressing on AI for safety applications:
Advisory and monitoring roles
The lowest-risk approach: AI that advises or monitors but doesn’t directly control safety functions. A traditional safety system remains in place; AI provides additional information.
Example: AI vision monitoring operator behaviour, alerting to unsafe actions, but not directly controlling equipment. If the AI fails, the traditional safety system still works.
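Architecturally, the point is that the AI sits outside the safety loop. A minimal sketch of that separation, with hypothetical names standing in for both the certified interlock logic and the vision model:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Alert:
    message: str
    confidence: float  # advisory information only; never used to trip equipment

def certified_interlock_ok(guard_closed: bool, light_curtain_clear: bool) -> bool:
    # Stand-in for the existing, certified safety function. It permits or stops
    # the machine regardless of what the AI layer is doing.
    return guard_closed and light_curtain_clear

def ai_vision_advisory(frame: dict) -> Optional[Alert]:
    # Stand-in for a hypothetical vision model watching operator behaviour.
    # Here it just reads a pre-computed label; a real model would score the frame.
    if frame.get("operator_in_zone"):
        return Alert("Operator inside restricted zone", confidence=0.87)
    return None

def control_cycle(guard_closed: bool, light_curtain_clear: bool,
                  frame: dict, notify: Callable[[Alert], None]) -> bool:
    # Safety decision: traditional channel only.
    run_permitted = certified_interlock_ok(guard_closed, light_curtain_clear)

    # Advisory decision: any AI failure is contained here and surfaced as an
    # alert, not as a change to the safety decision above.
    try:
        alert = ai_vision_advisory(frame)
        if alert is not None:
            notify(alert)
    except Exception:
        notify(Alert("AI monitor unavailable", confidence=0.0))

    return run_permitted

if __name__ == "__main__":
    permitted = control_cycle(True, True, {"operator_in_zone": True},
                              notify=lambda a: print("ALERT:", a.message))
    print("run permitted:", permitted)
```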
Redundancy and voting systems
AI as one input to a voting system where multiple independent methods must agree before action is taken.
Example: AI-based hazard detection alongside traditional sensors, with safety action only when both agree. This limits the impact of AI errors.
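The example above is a two-out-of-two arrangement (both channels must agree); the same idea generalises to k-out-of-n voting. A minimal sketch follows, with hypothetical channel names, and with the voting threshold being exactly the kind of decision the safety assessment has to justify:

```python
from typing import Sequence

def k_out_of_n(votes: Sequence[bool], k: int) -> bool:
    """True when at least k of the independent channels demand a safety action."""
    return sum(votes) >= k

def demand_safety_action(traditional_sensor_trip: bool,
                         second_sensor_trip: bool,
                         ai_detector_trip: bool) -> bool:
    # Example 2-out-of-3 arrangement: the AI detector is one voter among three
    # independent channels, so a single AI error (a false alarm or a missed
    # detection) cannot by itself cause or block the safety action.
    return k_out_of_n(
        [traditional_sensor_trip, second_sensor_trip, ai_detector_trip], k=2
    )

if __name__ == "__main__":
    print(demand_safety_action(False, False, True))  # AI false alarm alone: no action
    print(demand_safety_action(True, False, True))   # AI agrees with one sensor: action
```

Whether the traditional channels can still trip on their own, without the AI's vote, determines whether the AI could ever mask a genuine demand; that question belongs in the safety assessment.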
Non-safety-critical predictions
AI predicting conditions that inform maintenance or operational decisions, where the safety function itself doesn’t depend on AI.
Example: AI predicting that a safety valve may fail in the future, triggering preventive maintenance. The safety function (valve relieving pressure) doesn’t depend on the AI working correctly.
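The separation here is architectural: the AI output feeds the maintenance workflow, never the protective function. A minimal sketch, with a made-up probability threshold and a hypothetical work-order callback:

```python
from datetime import date, timedelta
from typing import Callable

# Hypothetical threshold set by the maintenance strategy, not by the safety case.
FAILURE_PROBABILITY_THRESHOLD = 0.3

def schedule_preventive_maintenance(tag: str,
                                    predicted_failure_probability: float,
                                    raise_work_order: Callable[..., None]) -> None:
    """Turn an AI prediction into a maintenance work order where warranted.

    Nothing here touches the safety function itself: the relief valve still
    protects the process whether or not this code runs or the prediction is right.
    """
    if predicted_failure_probability >= FAILURE_PROBABILITY_THRESHOLD:
        raise_work_order(
            tag=tag,
            due=date.today() + timedelta(days=14),
            reason=(f"AI-predicted failure probability "
                    f"{predicted_failure_probability:.0%} exceeds threshold"),
        )

if __name__ == "__main__":
    schedule_preventive_maintenance(
        tag="PSV-101",  # hypothetical valve tag
        predicted_failure_probability=0.42,
        raise_work_order=lambda **wo: print("WORK ORDER:", wo),
    )
```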
Emerging standards work
Several bodies are working on guidance for AI in safety applications:
- ISO and IEC (through joint committee JTC 1/SC 42) have developed ISO/IEC TR 5469, a technical report on functional safety and AI systems, aimed at the IEC 61508 context
- ISO/IEC committees are working on AI robustness and risk-management standards, such as the ISO/IEC 24029 series on the robustness of neural networks
- NIST has published its AI Risk Management Framework, and other bodies are publishing frameworks for trustworthy AI
These are still emerging. Until clear standards exist, conservative approaches are prudent.
Practical guidance for manufacturers
If you’re considering AI for applications related to safety:
Distinguish safety-critical from non-critical
Not everything called “safety” is safety-critical in the standards sense. A system that improves safety but isn’t the primary protection can be treated differently from the actual safety function.
Be clear about what role the AI plays and what happens if it fails.
Keep humans in the loop
For applications where AI judgment affects safety, maintain human oversight. AI flags concerns; humans verify and act.
This limits AI’s decision authority while still gaining its capabilities.
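In practice this often takes the shape of a review queue: the AI's only authority is to add items for a person to look at, and nothing proceeds until a named person confirms. A minimal sketch with hypothetical field names:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Concern:
    description: str
    ai_confidence: float
    confirmed_by: Optional[str] = None  # set only by a human reviewer

@dataclass
class ReviewQueue:
    items: List[Concern] = field(default_factory=list)

    def flag(self, description: str, ai_confidence: float) -> None:
        # The AI's only authority: adding an item for human review.
        self.items.append(Concern(description, ai_confidence))

    def confirm(self, index: int, reviewer: str) -> None:
        self.items[index].confirmed_by = reviewer

    def confirmed(self) -> List[Concern]:
        # Only concerns a named person has confirmed feed any downstream action.
        return [c for c in self.items if c.confirmed_by is not None]

if __name__ == "__main__":
    q = ReviewQueue()
    q.flag("Possible guard bypass on line 3", ai_confidence=0.91)
    q.confirm(0, reviewer="shift supervisor")
    print(q.confirmed())
```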
Use AI alongside traditional systems, not instead
Add AI to existing safety approaches rather than replacing proven methods. The traditional system provides a baseline; AI provides enhancement.
If the safety function can't be justified without the AI (that is, the protection is only adequate when the AI works), you probably shouldn't be relying on AI there yet.
Implement extensive validation
Even without clear standards, apply rigorous validation:
- Test on diverse data including edge cases
- Monitor performance continuously in operation (see the sketch after this list)
- Have processes for updating and revalidating models
- Document everything
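None of this needs exotic tooling. A minimal sketch of the in-operation monitoring piece, assuming you log each model decision alongside the eventual ground truth, and with an entirely arbitrary window size and accuracy floor:

```python
from collections import deque

class PerformanceMonitor:
    """Track recent model decisions against ground truth and flag degradation.

    The window size and accuracy floor are illustrative placeholders; real values
    belong in the validation plan, and a breach should trigger the documented
    revalidation process, not just a log entry.
    """

    def __init__(self, window: int = 500, accuracy_floor: float = 0.95):
        self.window = window
        self.accuracy_floor = accuracy_floor
        self.results = deque(maxlen=window)

    def record(self, predicted: bool, actual: bool) -> None:
        self.results.append(predicted == actual)

    def needs_revalidation(self) -> bool:
        # Only judge once the window is full, to avoid noisy alarms early on.
        if len(self.results) < self.window:
            return False
        accuracy = sum(self.results) / len(self.results)
        return accuracy < self.accuracy_floor

if __name__ == "__main__":
    monitor = PerformanceMonitor(window=10, accuracy_floor=0.9)
    for predicted, actual in [(True, True)] * 7 + [(True, False)] * 3:
        monitor.record(predicted, actual)
    print(monitor.needs_revalidation())  # True: 70% accuracy over a full window
```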
Engage safety experts early
Don’t surprise your safety engineering function with AI. Involve them from the start. They understand the regulatory landscape and can guide appropriate use.
Consider regulatory implications
Some safety systems require assessment and certification by approved bodies. Understand what approvals your application requires and engage those bodies early. Novel approaches may require extended review.
Document your reasoning
Whatever approach you take, document why you believe it’s appropriate. What standards did you consider? What risks were assessed? What mitigations are in place?
If regulators or auditors ask questions, you need clear answers.
Looking ahead
AI capabilities are advancing faster than safety standards. This creates a gap that makes organisations cautious.
I expect this gap to narrow over several years as:
- Standards bodies develop specific guidance for AI
- Industry accumulates experience with AI in safety-adjacent applications
- Techniques for validating and explaining AI improve
- Regulatory frameworks adapt
In the meantime, caution is appropriate. The potential consequences of safety system failures are severe. Being conservative—using AI to enhance rather than replace traditional safety methods—is the sensible approach.
The question isn’t whether AI will eventually be acceptable for safety applications. It’s how we get there safely.
For Australian manufacturers, this means staying informed about evolving standards, engaging with safety professionals, and taking measured approaches that gain AI’s benefits while maintaining robust safety protection.