Life-critical system
A life-critical system or safety-critical system is a system whose failure or malfunction may result in:
- death or serious injury to people, or
- loss or severe damage to equipment, or
- environmental harm.
Risks of this sort are usually managed with the methods and tools of safety engineering. A life-critical system is designed to lose less than one life per billion (10⁹) hours of operation. Typical design methods include probabilistic risk assessment, a method that combines failure mode and effects analysis (FMEA) with fault tree analysis. Safety-critical systems are increasingly computer-based.
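The fault-tree side of probabilistic risk assessment combines the probabilities of basic failure events through AND gates (all inputs must fail) and OR gates (any input may fail). The sketch below illustrates this roll-up against the one-in-a-billion-hours target; the subsystem names and failure probabilities are invented for illustration, not taken from any real assessment.

```python
# Toy fault-tree probability roll-up. The per-hour failure
# probabilities below are made-up illustrative values.

def p_or(*ps):
    """Probability that AT LEAST ONE independent input event occurs."""
    q = 1.0
    for p in ps:
        q *= (1.0 - p)
    return 1.0 - q

def p_and(*ps):
    """Probability that ALL independent input events occur."""
    q = 1.0
    for p in ps:
        q *= p
    return q

sensor = 1e-5       # assumed per-hour failure probability of one sensor
controller = 1e-10  # assumed per-hour failure probability of the controller

# Top event: both redundant sensors fail (AND), or the controller fails (OR).
top = p_or(p_and(sensor, sensor), controller)
print(top)  # about 2e-10 per hour, under the 1e-9 target in this toy model
```

Note how redundancy drives the sensor branch down to 10⁻¹⁰ even though a single sensor is far above the target; this is the arithmetic that motivates the duplicated subsystems described later in this article.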
Several reliability regimes for life-critical systems exist:
- Fail-operational systems continue to operate when their control systems fail. Examples of these include elevators, the gas thermostats in most home furnaces, and passively safe nuclear reactors. Fail-operational mode is sometimes unsafe. A launch-on-loss-of-communications posture was rejected as a control scheme for the U.S. nuclear forces because it is fail-operational: a loss of communications would trigger a launch, so this mode of operation was considered too risky. This is contrasted with the fail-deadly behavior of the Perimetr system built during the Soviet era.
- Fail-safe systems become safe when they cannot operate. Many medical systems fall into this category. For example, an infusion pump can fail, and as long as it alerts the nurse and ceases pumping, it will not threaten the loss of life, because its safety interval is long enough to permit a human response. In a similar vein, an industrial or domestic burner controller can fail, but must fail in a safe mode (i.e., shut off combustion when a fault is detected). Famously, nuclear weapon systems that launch on command are fail-safe, because if the communications systems fail, launch cannot be commanded. Railway signaling is designed to be fail-safe.
- Fail-secure systems maintain maximum security when they cannot operate. For example, while fail-safe electronic doors unlock during power failures, fail-secure ones lock, possibly trapping people in a burning building.
- Fail-passive systems continue to operate in the event of a system failure. An example is an aircraft autopilot: in the event of a failure, the aircraft remains in a controllable state, allowing the pilot to take over, complete the journey and perform a safe landing.
- Fault-tolerant systems avoid service failure when faults are introduced to the system. An example is the control system for an ordinary nuclear reactor. The normal method to tolerate faults is to have several computers continually test the parts of a system and switch on hot spares for failing subsystems. As long as faulty subsystems are replaced or repaired at normal maintenance intervals, these systems are considered safe. Note that the computers, power supplies and control terminals used by human beings must all be duplicated in these systems in some fashion.
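The hot-spare pattern in the fault-tolerant regime above can be sketched in a few lines: a supervisor polls the active unit's self-test and fails over to a spare on a fault. This is a minimal illustrative model; all class and method names are invented, and a real implementation would involve redundant supervisors and voting rather than a single monitor.

```python
# Toy sketch of hot-spare switchover: a supervisor monitors the active
# subsystem and swaps in a spare when a self-test fails.

class Subsystem:
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def self_test(self):
        """Report whether this unit is currently functioning."""
        return self.healthy

class Supervisor:
    def __init__(self, active, spares):
        self.active = active
        self.spares = list(spares)

    def tick(self):
        """One monitoring cycle: fail over to a spare if the active unit is faulty."""
        if not self.active.self_test():
            if not self.spares:
                raise RuntimeError("no spare left: enter fail-safe state")
            self.active = self.spares.pop(0)
        return self.active.name

primary, spare = Subsystem("primary"), Subsystem("spare")
sup = Supervisor(primary, [spare])
print(sup.tick())        # "primary"
primary.healthy = False
print(sup.tick())        # failover: "spare"
```

The final branch, raising when no spare remains, is where such a design would drop into one of the other regimes (fail-safe or fail-secure) rather than continue operating on faulty hardware.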
Software engineering for life-critical systems
Software engineering for life-critical systems is particularly difficult. Three aspects can be applied to aid the engineering of software for life-critical systems. The first is process engineering and management. The second is selecting the appropriate tools and environment for the system, which allows the developer to test the system effectively by emulation and observe its effectiveness. The third is addressing any legal and regulatory requirements, such as FAA requirements for aviation: setting a standard under which a system is required to be developed forces the designers to stick to the requirements. The avionics industry has succeeded in producing standard methods for producing life-critical avionics software. The standard approach is to carefully code, inspect, document, test, verify and analyze the system. Another approach is to certify a production system and a compiler, and then generate the system's code from specifications. A further approach uses formal methods to generate proofs that the code meets requirements. All of these approaches improve software quality in safety-critical systems by testing or eliminating manual steps in the development process, because people make mistakes, and these mistakes are the most common cause of potentially life-threatening errors.
Examples of life-critical systems
- Circuit breaker
- Emergency services dispatch systems
- Electricity generation, transmission and distribution
- Fire alarm
- Fire sprinkler
- Fuse (electrical)
- Fuse (hydraulic)
- Burner control systems
The technology requirements can go beyond the avoidance of failure and can even facilitate medical intensive care (which deals with healing patients) and life support (which is for stabilizing patients).
- Heart-lung machines
- Mechanical ventilation systems
- Infusion pumps and Insulin pumps
- Radiation therapy machines
- Robotic surgery machines
- Defibrillator machines
- Nuclear reactor control systems
- Railway signalling and control systems
- Air traffic control systems
- Avionics, particularly fly-by-wire systems
- Radio navigation RAIM (receiver autonomous integrity monitoring)
- Engine control systems
- Aircrew life support systems
- Flight planning to determine fuel requirements for a flight
See also
- Mission critical
- International Journal of Critical Computer-Based Systems
- Reliability theory
- Reliable system design
- Redundancy (engineering)
- Factor of safety
- Nuclear reactor
- Biomedical engineering
- SAPHIRE (risk analysis software)
- Formal methods
- Zonal Safety Analysis
Wikimedia Foundation. 2010.