A Harm Reduction Approach to Systems

Typically, when we hear the phrase “harm reduction,” we may think of services such as needle exchange programs and Narcan training: community programs designed to keep people who use IV drugs safer. Harm reduction can be described as a set of public health policies designed to minimize the harm people experience when they engage in risky behaviors.

Since I first began to consider and discuss the ideas below a couple of years ago, I have returned to them from different angles, integrating concepts from other sectors as I learned and wrote about systems and psychological safety. Since then I’ve also had the great fortune of having the book “Trauma Stewardship” by Laura van Dernoot Lipsky recommended to me. It was a fantastic read that I recommend for every security person, risk management person, activist, nurse, EMT, therapist, or anyone else whose work is invested in caring for others or keeping them safe from harm. If you’re interested in personally practicing resilience, both inside and outside of yourself, I can’t say enough good things about it. I truly believe that understanding trauma exposure response and learning how to process trauma individually can help us build radically transparent and psychologically safe teams that are capable of tackling the most difficult challenges sustainably.

If you’re a US public library patron, you may be able to read or listen to it for free on the Libby app. I can’t recommend it more highly.

Now, back to systems.

“The process leading up to an accident can be described in terms of an adaptive feedback function that fails to maintain safety as performance changes over time to meet a complex set of goals and values.”

Truths about systems

When we consider taking a harm reduction approach to designing, building, upgrading, and maintaining systems, we have to accept some truths:

1. People interacting with the system won’t always follow the ideal path we leave for them, no matter how hard we try to make it obvious or to codify it via policy or technical controls.

This isn’t a reflection of some deficiency in design, architecture, or technology. It’s just life: hackers gonna hack, and people are unpredictable! As technologists, security practitioners, and risk management people, we are not here to bring order to chaos. We are here to embrace the chaos and learn how to live alongside it. We are here to continuously improve the ways we build and maintain resilient and safe systems.

2. People interacting with the system may act in bad faith.

It’s not pleasant, but we must plan for abuse. We must plan for humans acting to the detriment of themselves, others, and the greater good. We must plan for humans using systems in ways we never intended, and act to reduce the harm that can befall them when they do.

3. The people invested in interacting with the system are experts and we should seek them out, listen to them, and learn from their experiences as part of our efforts to improve the system.

Consider intent vs. impact. The intent of a system is its vision, purpose, and functionality as described from the designer’s or engineer’s perspective, whereas the impact of a system encompasses the system’s actual user experience. What happens when the two don’t match up, or when users are alienated by designer bias? See Tatiana Mac’s fantastic talk “System of Systems” for a critical analysis of inclusive system design, bias, and user experience.

4. People interacting with the system have diverse needs and backgrounds and their individual experiences with the system may differ vastly.

One person’s experience with the system is no more or less valid than any other person’s. We have a responsibility to provide feedback opportunities for all the users of our systems, across all segments. We have a responsibility for the safety and user experience of the 10% of users interacting with the system in uncommon ways, just as we have a responsibility for the safety and user experience of the 90% of users who are following our ideal path.

Beyond the customers

The people who build, repair, and maintain complex systems are still impacted by those same systems. Just because these people may have privileged insider knowledge of the system does not mean they are somehow immune to the harm it may cause. Their proximity to the system, especially if the system is not performant, or is harmful and not trending towards improvement, may expose them to a completely different variety of harm, one that isn’t even on the radar of the people making design decisions about the system.

This interaction is a critical blind spot for many teams, especially in technology-centric spaces. It’s all too easy to avoid the squishy, amorphous complexity of “people problems” when there’s no shortage of technical problems with solutions that can be coded or debugged.

What is a harm reduction approach to systems?

Harm reduction is a guiding principle. This approach requires us to accept that others may knowingly or unknowingly take actions that could be detrimental to themselves as individuals and to the system as a whole. We acknowledge that these things happen, and we affirm a commitment to maintaining safety above all else.

Harm reduction is empathy in action. A harm reduction approach cannot exist without empathy. What’s a surefire way to start building empathy into teams and organizations? Acting on a commitment to nourishing diversity, increasing transparency, and putting in the work.

Harm reduction is actively accepting that harm and suffering are inevitable. It’s a commitment to seeking out and reducing instances of harm and to easing the suffering that falls within our spheres of influence. In fact, failing to take into account the harms our systems may cause might be considered a critical failure of due diligence.

Harm reduction is safety. A harm reduction approach might arise organically out of an organizational commitment to safety. Safety has to permeate the organizational culture, beyond any single effort or any one team or division. With a harm reduction approach, we look inward truthfully, consistently, and transparently, and we keep asking where our systems can cause harm.

Harm reduction requires diverse input and divergent thinking. To provide a safer system, we focus not only on the ideal paths a human could take but also on the possible alternative paths and pitfalls that might cause them (and others) harm when interacting with the system. When I think about this practically, I think about solutions like Kelly Shortridge and Ryan Petrich’s Deciduous web app (which I just discovered recently!) for creating decision/attack trees to map possible system interactions. Modeling possible interactions like this is an activity we must do in order to understand how our systems can cause harm.
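To make the idea of mapping interactions concrete, here is a minimal sketch of the general technique. It is not Deciduous’s actual input format; the flow, node names, and harms are invented for illustration. It models a few paths a person might take through a hypothetical password-reset flow as a tree, then enumerates the paths that end in harm.

```python
from dataclasses import dataclass, field


@dataclass
class Node:
    """One state a person can reach while interacting with the system."""
    name: str
    harm: str | None = None                 # description of the harm if this outcome is harmful
    children: list["Node"] = field(default_factory=list)


def harmful_paths(node: Node, trail: tuple[str, ...] = ()) -> list[tuple[str, ...]]:
    """Walk the tree and return every path that ends in a harmful outcome."""
    trail = trail + (node.name,)
    if not node.children:
        return [trail] if node.harm else []
    paths: list[tuple[str, ...]] = []
    for child in node.children:
        paths.extend(harmful_paths(child, trail))
    return paths


# Hypothetical interaction tree for a password-reset flow.
root = Node("user requests password reset", children=[
    Node("reset link sent to email on file", children=[
        Node("user completes reset"),                                   # the ideal path
        Node("link forwarded to an attacker", harm="account takeover"),
    ]),
    Node("user no longer controls the email on file",
         harm="permanent lockout; user abandons account"),
])

for path in harmful_paths(root):
    print(" -> ".join(path))
```

Even a toy model like this surfaces the question a harm reduction approach keeps asking: who gets hurt on the paths we didn’t design for?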

Modeling potential harm is something like approaching an iceberg in the ocean. From a distance, you can clearly see what’s above the water, but you can determine nothing about what’s underneath. As you move closer to the iceberg, you can collect more information to guess at what’s below the surface, but you still have no way to know for sure from a moving boat. You’re left with some predictable material you can see above the water and a much larger mass below that’s full of likely dangerous unknowns.

Make decisions based on what’s known, and be open to feedback from those who have experienced the unknowns. You’re one piece of feedback away from an unknown harm becoming a predictable harm that can be eliminated from the system or mitigated. A user of our system who experiences a previously unknown harm (an “outlier”) may be giving us the warning signs of a trend that’s right around the corner.
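One lightweight way to act on that feedback (a sketch only; the report labels and the promotion threshold are invented for illustration) is to keep a simple register of reported harms and promote any outlier that recurs into a known, prioritized harm:

```python
from collections import Counter

# Hypothetical harm register: each entry is a short label for a harm a user reported.
reports = [
    "locked out after password reset",
    "screen reader cannot read the error message",
    "locked out after password reset",
    "charged twice on retry",
]

PROMOTE_AT = 2  # assumption: two independent reports turn an outlier into a known harm

counts = Counter(reports)
known_harms = sorted(h for h, n in counts.items() if n >= PROMOTE_AT)
outliers = sorted(h for h, n in counts.items() if n < PROMOTE_AT)

print("known harms to design out:", known_harms)
print("outliers to keep watching:", outliers)
```

The exact mechanism matters far less than the habit: record what people tell you, and notice when an outlier stops being one.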


We don’t put on our seatbelt upon entering a car only to remove it once we’re careening down the highway. Similarly, we should strive to maintain safety for the user’s entire journey.

Safety must be continuous. We can’t really afford for it not to be.