Most people who write about ICS/OT security have never touched the equipment they're defending.

I have.

For years, I've worked as a telecom technician programming mission-critical radio systems — the kind first responders depend on when a building is on fire, when someone is having a cardiac arrest, when everything else has already failed. P25 systems. Trunked networks. Repeaters. Dispatch consoles. The infrastructure that keeps emergency communications alive when lives are on the line.

That work changed how I see cybersecurity. Specifically, it changed how I see operational technology security — the discipline of protecting industrial control systems, SCADA networks, and the physical processes they manage.

Here's what I learned.


Availability isn't a feature. It's the mission.

In IT security, we talk about the CIA triad: Confidentiality, Integrity, and Availability. Most IT security programs treat availability as the third priority. You encrypt first, you verify integrity second, and you keep things running third.

In OT, that order is reversed.

When I'm programming a radio system for a fire department, the one thing that cannot happen — under any circumstances — is that the radio goes silent when a firefighter is inside a burning building. I don't care if someone can intercept the transmission. I care that the transmission happens.

This is the fundamental tension at the heart of ICS/OT security. The systems we're trying to protect were built with one goal: to keep the process running. A natural gas pipeline. A water treatment plant. A power substation. An emergency communications network. These systems were never designed with security in mind because, historically, security meant adding friction — and friction kills availability.

The attacker who understands this has an enormous advantage. They don't need to steal data. They just need to make the system stop.


Legacy doesn't mean broken. It means frozen in time.

The radios I work with run firmware that hasn't been updated in years. Sometimes decades. Not because the agencies using them are negligent — because updating firmware on a mission-critical radio system requires taking it offline, testing it exhaustively, and accepting the risk that something goes wrong during the window when your communications infrastructure is vulnerable.

Sound familiar?

This is exactly the situation facing every ICS operator running a Siemens S7 PLC, a Modbus RTU device, or a DNP3-based SCADA system from 2004. The patch exists. The vulnerability is known. The patch hasn't been applied because applying it means shutting down a process that must not be shut down, or because the restart carries so much operational risk that living with the known vulnerability is the safer bet.

Security researchers call this "technical debt." Operators call it Tuesday.

Understanding this isn't an excuse for leaving systems unpatched. It's a prerequisite for having a realistic conversation about how to actually improve security in these environments. You cannot defend a system you don't understand. And you cannot understand these systems if you've never had to make the call between security and uptime.


The perimeter was always an illusion. In OT, it was a necessity.

For decades, the standard answer to OT security was air-gapping — physically separating industrial networks from corporate IT networks and the internet. No connection, no attack surface. Simple, effective, and increasingly fictional.

Remote access vendors. Cloud-connected historians. Engineering workstations that double as email machines. USB drives carried by contractors. The air gap has been eroding for twenty years, and most of the organizations responsible for critical infrastructure either don't know it or don't want to know it.

I think about this every time I connect a laptop to a radio programming cable. That cable is a direct interface to the firmware of a mission-critical device. There is no authentication. There is no audit log. There is no way to know, after the fact, whether the person who plugged that cable in programmed the radio correctly or introduced something that shouldn't be there.

That's not a hypothetical attack surface. That's Tuesday, again.
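The same absence of authentication is baked into the classic OT protocols themselves. Modbus TCP, for example, carries no credential, session token, or signature anywhere in the frame. Here's a minimal sketch (illustrative, not production code) that builds a raw "Read Holding Registers" request, so you can see that every byte is addressing or function — nothing identifies the sender:

```python
import struct

def modbus_read_holding_registers(transaction_id, unit_id, start_addr, count):
    """Build a raw Modbus TCP 'Read Holding Registers' (function 0x03) request.

    Note what is absent: no credential, no signature, no session token.
    Anything that can reach TCP/502 on the device can send this frame.
    """
    # PDU: function code, starting register address, register count
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    # MBAP header: transaction id, protocol id (always 0), length, unit id
    mbap = struct.pack(">HHHB",
                       transaction_id,   # matches a response to its request
                       0x0000,           # protocol id: 0 means Modbus
                       len(pdu) + 1,     # bytes remaining (unit id + PDU)
                       unit_id)          # target device address
    return mbap + pdu

frame = modbus_read_holding_registers(1, 1, 0, 10)
print(frame.hex())  # 12 bytes of addressing and function -- nothing to authenticate
```

Twelve bytes, and the device will answer anyone who sends them. Securing these environments means compensating around the protocol — segmentation, monitoring, physical control — because the protocol itself will never ask who you are.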


Why I'm here

I'm not going to pretend I have twenty years of SCADA experience. I don't.

What I have is three years of serious study, a background in mission-critical communications infrastructure, and a recent stint at The Washington Center's Cybersecurity Accelerator program, where I narrowed my focus to ICS/OT and critical infrastructure defense. I have a technician's instinct for how these systems behave in the real world—not in a lab or a vendor demo, but in the field, where the stakes are real and the margins are thin.

Dead Reckoning exists because I couldn't find the blog I wanted to read. Technical enough to be useful. Honest enough to say what the industry gets wrong. Written by someone who has actually held the equipment.

I'm building this in public. That means some posts will be wrong. I'll update them when they are. It means my understanding will evolve — and I'll document that evolution openly, because watching someone figure something out in real time is more valuable than a polished retrospective that pretends the confusion never happened.

If you work in ICS/OT security — welcome. If you're transitioning into this field like I am, you're not alone. If you think I've got something wrong, tell me. That's the whole point.

We're navigating without a map. Let's do it together.


Brenda Suarez is a telecom technician and ICS/OT security researcher based in the United States. Dead Reckoning publishes threat intelligence, protocol analysis, and field notes on critical infrastructure defense.

#ICS/OT #Opinion #GettingStarted