Safety Fundamentals


The concept of Safety

From CAA NZ Booklet 2

“Safety is the state in which the possibility of harm to persons or of property damage is reduced to, and maintained at or below, an acceptable level through a continuing process of hazard identification and safety risk management.”

ICAO SMM 3rd Edition (Doc 9859) 2.1.1

"Safety is protection from harm"

NZ SMS summit May 2018

Is it possible to achieve the following:

  • Zero accidents or serious incidents?
  • Freedom from danger or risks?
  • Avoidance of all errors?
  • Safety through regulatory compliance?

If not, why not?  Are controlled risks and errors acceptable in an inherently safe system?

It is the controlled acceptance and correct management of risk that allows an organisation to generate profit.

What does it mean when we say, '...be safe...'?

Safety is relative

The Heinrich Model

The Heinrich Pyramid (Skybrary)
Adaptation of the Heinrich pyramid (Safety Culture Blog)


The evolution of aviation safety (ICAO SMM)

The Evolution of Safety Thinking

Following WWII, there were significant technological advances in aviation, including advances in safety.  However, progress in the human-machine interface was not as rapid, nor was progress in the human-to-human interface. This evolved through early CRM and has become today's Human Factors / Non-Technical Skills knowledge.  The integration of both technological and human factors into organisational factors provides the greatest safety outcomes.

SMS, and ICAO's adaptation of it, have been evolving since the early 1990s. It arose from the recognition that both human and organisational factors contribute to an accident, incident or significant event.  The Piper Alpha disaster in 1988, and Lord Cullen's subsequent inquiry into it, had a significant impact on the evolution of integrated safety and SMS.

High Reliability Industries

High reliability industries repeatedly deliver successful, predictable results in dynamic, technologically complex, time-constrained, high-hazard environments.  Examples of HRIs include:

  • aviation
  • offshore oil and gas
  • the nuclear industry
  • space exploration
  • heavy mining
  • medicine

Hallmarks of high reliability industries are that they:

  • look for low-frequency/high-consequence events
  • carry out deliberate actions to achieve predictable results
  • maintain a sense of 'chronic unease' (sometimes called 'respectful distrust')
  • learn how to 'fail in a safe way', and then ask 'how did we contribute to this failure?'

Accident causation

ICAO utilised the work of Professors James Reason and Patrick Hudson (among others), which brought organisational failures and safety culture to the forefront of accident causation.

The Reason model of accident causation describes how the breach of multiple system defences can result in an accident. Professor James Reason also argued that single-point failures in complex systems such as aviation should not, on their own, be consequential.  Defence failures (breaches) can be either active or latent.

An active failure can be described as one in which a conscious decision or action, regardless of the motivation (mistake, error, lapse or violation), results in a defence layer being breached.

Example: maintenance crews using work-arounds to achieve operational efficiency when they know the procedure might be contrary to SOPs.

A latent failure is more insidious: it lies in wait, unknown until discovered.

Example: an organisational manual details a company procedure that contradicts the OEM manual, which may even prohibit the action.  No one thought to check the OEM manual, on the assumption that the SOP writer had already done so.

Simplified Reason model

The concept of an 'organisational accident' considers how the processes an organisation has reasonable control over (such as policy, planning, communication, supervision and resource allocation) might act as defences against an accident.  Unfortunately, when mismanaged, they can act in reverse, as pathways to the opposite outcome.

Organisational accident (from the ICAO SMM Doc 9859)

Practical drift and normalisation of deviation

PD and NoD

Practical drift is where a system's baseline performance 'drifts away' from its design parameters.  This could be due to a system being utilised more effectively than expected, where performance exceeds the intention as the operator uses initiative and innovation to maximise efficiency.  Unfortunately, it could also result in lower-than-expected performance due to misuse, misunderstanding, or a lack of appropriate training or supervision.

It is termed 'drift' because it is the inevitable result of daily use: the movement away from baseline performance is barely detectable, and is driven by external circumstances outside the system's design criteria.

Normalisation of deviation is the intentional violation of procedure that occurs so regularly that it becomes the norm.  The absence of a negative outcome produces the illusion that deviation from normal procedure is acceptable. Many accidents have occurred as a consequence.  The saying '...but we've been doing it this way for years...' is often cited as a defence of the indefensible.

An absence of evidence is not evidence of absence...

Take special note of what is said at 1:15 to 1:30 for class discussion.