When Sprinklers Took Down the Network
I was standing outside a clinic with my coffee when I figured out what had been dropping our T1s for weeks. The sprinklers kicked on. The ducks scattered. And I watched water arc directly onto a telco junction box on the corner of the building.
Situation: A growing multi-site healthcare practice connected by point-to-point T1 circuits was experiencing intermittent drops between sites. Short outages, no pattern the ISP could find, no hardware fault on our end.
The Ghost in the Circuit
We were in growth mode. Adding clinics, onboarding providers, running point-to-point T1s between sites. Metro Ethernet was on the roadmap but not in the budget yet. The T1s were our lifeline.
Every few days, the circuit between two sites would drop. Five minutes, ten minutes, then back. The ISP ran line tests and found nothing. We replaced the CSU/DSU. We checked the demarc. We watched the monitoring dashboard like it owed us money.
Nothing. Clean tests, dirty results.
The Pattern Nobody Looks For
Then fall overseed hit.
Arizona landscapers water aggressively during overseed. Three times a day: 6 AM, 11 AM, 3 PM. The outages went from weekly to daily. Same times. Every day.
I was standing outside one morning, coffee in hand, watching the sprinklers chase a group of ducks across the parking lot. The water arced past the landscaping and hit the side of the building. Specifically, it hit the telco junction box mounted on the exterior wall. The box where our T1 circuit entered the building.
Wet copper. Intermittent short. Every time the sprinklers ran.
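In hindsight, the pattern was sitting in the outage logs the whole time. Here's a minimal sketch of the check that would have surfaced it, assuming you can export outage start times from your monitoring system; the file name and format are hypothetical:

```python
from collections import Counter
from datetime import datetime

# Hypothetical export: one ISO-8601 outage start timestamp per line.
with open("outage_starts.txt") as f:
    starts = [datetime.fromisoformat(line.strip()) for line in f if line.strip()]

# Bucket drops by hour of day. Environmental causes (sprinklers, HVAC
# cycles, shift changes) show up as sharp spikes at fixed hours.
by_hour = Counter(ts.hour for ts in starts)

for hour in sorted(by_hour):
    print(f"{hour:02d}:00  {'#' * by_hour[hour]}  ({by_hour[hour]} drops)")
```

Three spikes, at 06:00, 11:00, and 15:00, would have pointed at the irrigation timer weeks earlier.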
Infrastructure Doesn't Stop at the Rack
The fix was straightforward: weatherproof the junction box, adjust the sprinkler zone. The lesson was not.
Nobody puts "landscaping schedule" in a root cause analysis template. But infrastructure runs through places that have nothing to do with IT. Junction boxes on exterior walls. Cable paths that cross irrigation zones. Facilities decisions made by property managers who have never heard of a T1 and have no reason to care.
The same principle applies at every scale. On March 1, 2026, a drone strike hit an AWS data center in the UAE. The local fire department responded by cutting all power to the facility. Primary feeds and backup generators. Eighty-four services went down. Financial institutions lost access. Recovery took more than a day.
The fire department didn't care about the SLA.
The DR Question Nobody Asks
Disaster recovery plans typically cover hardware failure, ransomware, and natural disasters. They rarely cover:
- Authority-based shutdowns. Fire marshals, law enforcement, and utility companies can cut power to your facility. They don't need your permission.
- Environmental hazards outside your control. Sprinkler systems, construction crews, landscaping, HVAC water damage. The building is part of your infrastructure whether you manage it or not.
- Shared blast radius. If your production and your DR target share a building, a circuit path, a power feed, or a geopolitical region, a single event can take both offline.
The question isn't whether your backups work. It's whether they work when the building they live in goes dark and stays dark.
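One way to make the shared-blast-radius question concrete is to inventory each site's physical dependencies and intersect them. A minimal sketch; the site names and dependency labels below are invented for illustration:

```python
# Dependency labels are invented for illustration, not a real schema.
production = {"building: 100 Main St", "power: feed A",
              "isp: carrier 1", "region: metro-east"}
dr_site = {"building: 200 Oak Ave", "power: feed A",
           "isp: carrier 1", "region: metro-east"}

# Anything both sites depend on is a shared single point of failure.
shared = production & dr_site
if shared:
    print("Production and DR share a blast radius:")
    for dep in sorted(shared):
        print(f"  - {dep}")
else:
    print("No shared physical dependencies found.")
```

Anything in the intersection is one event away from taking both sites offline at once.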
What This Looks Like in Practice
If you're responsible for infrastructure that can't go down, audit the physical path. Not just the logical path.
- Where does your circuit enter the building? Is that junction point exposed to weather, construction, or landscaping?
- Does your DR site share a power feed, an ISP, or a geographic risk with production?
- If someone with authority walked into your data center and turned everything off, what happens? How long until you're running somewhere else?
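It helps to record the answers as data instead of prose, so an exposed entry point can't hide in a paragraph. A sketch of one possible shape; the fields and example entries are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class EntryPoint:
    circuit: str          # which circuit enters the building here
    location: str         # where the demarc physically sits
    weatherproofed: bool  # sealed against water, sun, and dust?
    exposures: list[str] = field(default_factory=list)  # sprinklers, construction, ...

demarcs = [
    EntryPoint("site-A T1", "exterior wall, NE corner", False, ["sprinkler zone 3"]),
    EntryPoint("site-B T1", "interior telco closet", True),
]

# Flag every entry point that is unsealed or sits in someone else's blast radius.
for ep in demarcs:
    if not ep.weatherproofed or ep.exposures:
        print(f"AUDIT: {ep.circuit} at {ep.location}; exposures: {ep.exposures or 'none'}")
```

An entry like the first one, an unsealed demarc inside a sprinkler zone, is exactly what this story was about.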
A Health Check covers exactly this. Not just the VMs and the backups, but the physical dependencies that don't show up on a network diagram. The sprinkler timer that nobody thought to check.
Next Step
Most engagements start with the Health Check. Fixed fee, clear picture, under two weeks.