There is a fundamental change in the way industry standards are being
written. Standards are moving away from prescriptive requirements and
toward more performance-oriented ones. In fact, this was one of
the recommendations made in a government report after the Piper
Alpha offshore platform explosion in the North Sea. Prescriptive
standards generally do not account for new developments or technology
and can easily become dated. This means each organization will have to
decide for itself just what is ‘safe’, and how it will ‘determine’ and
‘document’ that its systems are, in fact, ‘safe’.
Unfortunately, these are difficult decisions that few want to make, and
fewer still want to put in writing. “What is safe” transcends pure science
and deals with philosophical, moral, and legal issues.
Things Are Not As Obvious As They May Seem
Intuition and gut feel do not always lead to correct conclusions. For
example, which system is safer, a dual one-out-of-two system (where
only one of the two redundant channels is required in order to generate
a shutdown) or a triplicated two-out-of-three system (where two of the
three redundant channels are required in order to generate a
shutdown)? Intuition might lead you to believe that if one system is
“good,” two must be better, and three must be the best. You might
therefore conclude that the triplicated system is safest. Unfortunately,
it’s not. It’s very easy to show that the dual system is actually safer.
However, for every advantage there is a disadvantage. The one-out-of-
two system may be safer, but it will suffer more nuisance trips. Not
only does this result in lost production and downtime, it is
generally recognized that there is nothing “safe” about nuisance trips,
even though they are called “safe failures.”
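To see why the dual system is both safer and more prone to nuisance trips, consider a simplified comparison. The sketch below is only a rough illustration, not a full safety-standard calculation: it assumes independent channels, uses invented per-channel failure probabilities (p_d for a dangerous failure, p_s for a safe failure, i.e., a spurious trip), and ignores common-cause failures, diagnostics, and repair.

    # Simplified comparison of 1oo2 vs. 2oo3 voting architectures.
    # The probabilities below are hypothetical, chosen only for illustration.

    p_d = 0.01   # probability a channel fails dangerously during the interval
    p_s = 0.05   # probability a channel fails safely (spurious trip) in that interval

    # 1oo2: one channel is enough to shut down. The system fails dangerously
    # only if BOTH channels fail dangerously; it trips spuriously if EITHER does.
    pfd_1oo2      = p_d ** 2
    spurious_1oo2 = 1 - (1 - p_s) ** 2

    # 2oo3: two channels must agree to shut down. The system fails dangerously
    # if at least two of the three channels fail dangerously; it trips
    # spuriously only if at least two channels trip.
    pfd_2oo3      = 3 * p_d**2 * (1 - p_d) + p_d**3
    spurious_2oo3 = 3 * p_s**2 * (1 - p_s) + p_s**3

    print(f"1oo2: P(dangerous failure) = {pfd_1oo2:.2e}, P(spurious trip) = {spurious_1oo2:.2e}")
    print(f"2oo3: P(dangerous failure) = {pfd_2oo3:.2e}, P(spurious trip) = {spurious_2oo3:.2e}")

With these illustrative numbers the 1oo2 system is roughly three times less likely to fail dangerously than the 2oo3 system, but more than ten times as likely to trip spuriously, which is exactly the trade-off described above.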
At least two recent studies, one by a worldwide oil company, another by
a major association, found that a significant portion of existing safety
instrumented functions were over-designed (37-49%), while others were
under-designed (4-6%). Apparently things are not as obvious as
people may have thought in the past. The use of performance-based
standards should allow industry to better identify risks and implement
more appropriate and cost effective solutions.
If there hasn’t been an accident in your plant for the last 15 years, does
that mean that you have a safe plant? It might be tempting to think so,
but nothing could be further from the truth. You may not have had a car
accident in 15 years, but if you’ve been driving home every night from a
bar after consuming 6 drinks, I’m not about to consider you a “safe”
driver!
No doubt people made similar statements the day before Seveso
(Italy), Flixborough (England), Bhopal (India), Chernobyl (Soviet
Union), Pasadena (USA), etc. Just because it hasn’t happened yet,
doesn’t mean it won’t, or can’t.
If design decisions regarding safety instrumented systems were simple,
obvious, and intuitive, there would be no need for industry standards,
guidelines, recommended practices, or this book. Airplanes and nuclear
power plants are not designed by intuition or gut feel. How secure and
safe would you feel if you asked the chief engineer of the Boeing 777,
“Why did you choose that size engine, and only two at that?”, and his
response was, “That’s a good question. We really weren’t sure, but that’s
what our vendor recommended.” You’d like to think that Boeing would
know how to engineer the entire system. Indeed they do! Why should
safety instrumented systems be any different? Do you design all of your
systems based on your vendor’s recommendations? How would you
handle conflicting suggestions? Do you really want the fox counting
your chickens or building your henhouse?
Many of the terms used to describe system performance seem simple
and intuitive, yet they’ve been the cause of much confusion. For
example, can a system that’s 10 times more “reliable” be less “safe”? If
we were to replace a relay-based shutdown system with a newer PLC
that the vendor said was 10 times more “reliable” than the relay system,
would it automatically follow that the system was safer as well? Safety
and reliability are not the same thing. It’s actually very easy to show
that one system may be more “reliable” than another, yet still be less safe.
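One way to make the distinction concrete is to separate a device’s total failure rate from the fraction of those failures that are dangerous. The sketch below uses purely hypothetical numbers, invented for illustration rather than taken from any vendor’s data, to show how a device with one-tenth the total failure rate can still fail dangerously more often.

    # "Reliability" counts ALL failures; safety depends only on the dangerous ones.
    # All numbers are hypothetical, for illustration only.

    # Relay-based system: fails fairly often, but almost always to the safe
    # (de-energized, tripped) state.
    relay_failure_rate   = 1e-4   # total failures per hour
    relay_dangerous_frac = 0.05   # 5% of its failures are dangerous

    # Newer PLC: ten times "more reliable" (fewer total failures), but a much
    # larger fraction of those failures are dangerous (e.g., outputs frozen on).
    plc_failure_rate   = 1e-5     # total failures per hour
    plc_dangerous_frac = 0.60     # 60% of its failures are dangerous

    relay_dangerous_rate = relay_failure_rate * relay_dangerous_frac  # 5.0e-6 per hour
    plc_dangerous_rate   = plc_failure_rate * plc_dangerous_frac      # 6.0e-6 per hour

    print(f"Relay dangerous-failure rate: {relay_dangerous_rate:.1e} per hour")
    print(f"PLC dangerous-failure rate:   {plc_dangerous_rate:.1e} per hour")

In this example the PLC fails only one-tenth as often overall, yet its dangerous-failure rate is actually higher than the relay system’s: more “reliable” does not automatically mean more “safe.”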
The Danger of Complacency
It’s easy to become overconfident and complacent about safety. It’s easy
to believe that we as engineers using modern technology can overcome
almost any problem. History has proven, however, that we cause our
own problems and we always have more to learn. Bridges will
occasionally fall, planes will occasionally crash, and petrochemical
plants will occasionally explode. That does not mean, however, that
technology is bad or that we should live in the Stone Age. It’s true that
cavemen didn’t have to worry about The Bomb, but then we don’t have
to worry about the plague. We simply need to learn from our mistakes
and move on.
After Three Mile Island (the worst U.S. nuclear incident), but before
Chernobyl (the worst nuclear incident ever), the head of the Soviet
Academy of Sciences said, “Soviet reactors will soon be so safe that they
could be installed in Red Square.” Do you think he’d say that now?
The plant manager at Bhopal, India was not in the plant when that
accident happened. When he was finally located, he could not accept
that his plant was actually responsible. He was quoted as saying “The
gas leak just can’t be from my plant. The plant is shut down. Our
technology just can’t go wrong. We just can’t have leaks.” One wonders
what he does for a living now.
After the tanker accident in Valdez, Alaska, the head of the Coast Guard
was quoted as saying, “But that’s impossible! We have the perfect
navigation system!”
Systems can always fail; it’s just a matter of when. People can usually
override any system. Procedures will, on occasion, be violated. It’s easy
to become complacent because we’ve been brought up to believe that
technology is good and will solve our problems. We want to have faith
that those making decisions know what they’re doing and are qualified.
We want to believe that our ‘team’ is a ‘leader’, if for no other reason
than the fact that we’re on it.
Technology may be a good thing, but it is not infallible. We as engineers
and designers must never be complacent about safety.
There’s Always More to Learn
There are some who are content to continue doing things the way
they’ve always done them. “That’s the way we’ve done it here for 15 years
and we haven’t had any problems! If it ain’t broke, don’t fix it.”
Thirty years ago, did we know all there was to know about computers
and software? If you brought your computer to a repair shop with a
problem and found that their solution was to reformat the hard drive
and install DOS as an operating system (which is what the technician
learned 15 years ago), how happy would you be?
Thirty years ago, did we know all there was to know about medicine?
Imagine being on your deathbed and being visited by a 65-year-old
doctor.
How comfortable would you feel if you found out that that particular
doctor hadn’t had a single day of continuing education since graduating
from medical school 40 years ago?
Thirty years ago, did we know all there was to know about aircraft
design? The Boeing 747 was the technical marvel of its day. The
largest engines we could make back then produced 45,000 pounds of
thrust. We’ve learned a lot since then about metallurgy and engine
design. The latest-generation engines can now develop over 100,000
pounds of thrust. It no longer takes four engines to fly a jumbo jet. In
fact, the Boeing 777, which has replaced many 747s at some airlines,
has only two engines.
Would you rather learn from the mistakes of others, or make them all
yourself? There’s a wealth of knowledge and information packed into
recent safety system standards as well as this textbook. Most of it was
learned the hard way. Hopefully others will use this information and
help make the world a safer place.