Safety and Artificial Intelligence
AI integration into safety-related systems (SIL, ASIL, PL)
The integration of AI technologies into control systems promises new opportunities to increase the availability and performance of devices and to improve their safety. With regard to functional safety (FUSA), however, special requirements apply to the development and integration of AI. The reason: while it can be stated unambiguously whether a classical system meets its requirements, the performance of AI technologies can often only be verified statistically, with a corresponding variance.
AI integration into safety-related systems: What does this mean for development?
This variance must be taken into account in the development of AI technology and reconciled with the risk of functional insufficiency.
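To make the consequence concrete: if a target failure rate is to be demonstrated purely by testing, the required test scope follows directly from the desired confidence level. The following minimal sketch (Python; names and values are illustrative, assuming independent, representative test samples and a simple binomial model) shows how many failure-free test cases are needed to support a given failure-rate claim:

```python
import math

def required_tests(max_failure_rate: float, confidence: float = 0.95) -> int:
    """Minimum number of independent test cases needed to demonstrate
    that the true failure rate does not exceed `max_failure_rate` at the
    given confidence level, assuming no failures are observed.

    Based on the binomial argument:
    (1 - p)^n <= 1 - confidence  =>  n >= ln(1 - confidence) / ln(1 - p)
    """
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - max_failure_rate))

if __name__ == "__main__":
    # Illustrative example: a perception component may misclassify at most
    # 1 in 1,000 inputs; we want 95 % confidence in that claim.
    n = required_tests(1e-3, confidence=0.95)
    print(f"Failure-free test cases required: {n}")
    # -> 2995, close to the "rule of three" estimate 3/p = 3000
```

Every observed failure increases the required test scope further, which is why target values for accuracy and performance must be derived and validated systematically rather than assumed.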
NewTec is your partner for a competent consideration of the safety aspects of AI integration and for compliance with regulatory requirements (keyword: EU AI Act). We support you in system, fault and risk analysis: identifying potential critical AI failures, determining the accuracy and performance required to prevent them, and analyzing the risks associated with a possible failure. Using process analysis, we clarify whether your development processes are already suitable for AI technology and where action may be needed to avoid systematic faults. We determine for you whether the planned hardware is reliable enough for the operation of AI technology. And our experts advise and support you with regard to approval, for example by identifying the relevant normative and regulatory requirements.
AI services from NewTec
- Identification of the specific failure modes of the AI technology used
- Analysis of possible fault propagation from the AI technology along the information flow in the system context
- Analysis of possible risks due to failures that can be caused by the AI technology
- Derivation of target values for the functional suitability of the AI technology (e.g. required accuracy)
- Validation of the target values with regard to their achievability in the system context, with the necessary confidence
- Hardware analysis to determine the probability of random faults on the complex hardware that executes the AI technology (see the sketch after this list)
- Analysis of the specific development processes of the AI technology used with regard to ensuring systematic capability
- Preparation of the developed content and of its presentation to test bodies
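For the hardware analysis mentioned above, the residual rate of dangerous undetected failures is typically compared against the target failure measures of the relevant safety standard. The sketch below (Python; a deliberately simplified single-channel model with illustrative FIT and coverage values, ignoring common-cause failures and architectural constraints) shows the basic arithmetic against the IEC 61508 PFH targets:

```python
# SIL targets for high-demand / continuous mode per IEC 61508
# (average frequency of dangerous failures, PFH, in 1/h)
SIL_PFH_LIMITS = {1: 1e-5, 2: 1e-6, 3: 1e-7, 4: 1e-8}

def dangerous_undetected_rate(fit_total: float,
                              dangerous_fraction: float,
                              diagnostic_coverage: float) -> float:
    """Residual rate of dangerous undetected failures in 1/h for a
    single-channel element. 1 FIT = 1e-9 failures per hour."""
    return fit_total * 1e-9 * dangerous_fraction * (1.0 - diagnostic_coverage)

if __name__ == "__main__":
    # Illustrative values for an AI accelerator (not real component data):
    # 5,000 FIT overall, 50 % of failures dangerous, 90 % diagnostic coverage.
    rate = dangerous_undetected_rate(5_000, 0.5, 0.90)
    print(f"Dangerous undetected failure rate: {rate:.1e} /h")  # 2.5e-07 /h
    met = [sil for sil, limit in SIL_PFH_LIMITS.items() if rate < limit]
    print(f"PFH target met up to SIL {max(met)}" if met else "No SIL target met")
```

A real analysis additionally has to account for the hardware architecture (e.g. redundant channels), safe failure fraction and common-cause factors; the sketch only illustrates why complex AI hardware with high raw failure rates needs correspondingly high diagnostic coverage.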
The EU regulation on artificial intelligence (EU AI Act)
Regulation (EU) 2024/1689 on artificial intelligence (AI Act) is an attempt to comprehensively regulate the development, placing on the market and operation of AI systems. For AI developers and operators, the Act, which came into force in August 2024, provides a regulatory framework with extensive obligations regarding the integration and use of AI technology.
In principle, the AI Act takes a risk-based approach, dividing AI products into four risk classes: “minimal”, “limited”, “high” and “unacceptable” risk. As the risk class increases, so do the legal obligations.
Significance for the development of machines and devices
When developing machines and devices, particular attention must be paid to the class of high-risk systems. These include, in particular, AI products, or products with safety-relevant AI components, from the fields of medical technology, machinery, elevators, vehicles, aircraft and critical infrastructure.
For these high-risk systems, the AI Act requires, among other things, the establishment and implementation of a risk management system and a quality management system, data management for training, validation and test data, as well as ensuring accuracy, robustness and cybersecurity. The requirements for high-risk AI systems become mandatory on August 2, 2027.