
Hurricane Season 2026 IT Disaster Recovery Checklist (Florida)

A South Florida IT disaster recovery checklist for hurricane season 2026 — pre-season, 48-hour, 24-hour, day-of, and post-storm steps your business can actually run.

Douglyn · 14 min read
[Cover image: A South Florida office at dusk with hurricane shutters going up, server racks visible through a glass wall, and a tablet showing a business continuity dashboard]

Forty percent of small businesses never reopen after a major disaster. Another 25 percent fail within a year. Those are FEMA’s numbers — and they were collected before South Florida added a decade of population, IT dependency, and cloud-but-not-really hybrid stacks to the equation.

Hurricane season 2026 officially opens June 1. NOAA releases its seasonal outlook in late May. Colorado State’s April forecast leans below-normal as a likely El Niño develops, but “below-normal” is exactly how the 1992 season that produced Andrew looked on paper. The question is not whether something will form. The question is whether your IT stack — your data, your phones, your client deliverables, your payroll — will still be there when you reopen.

This is the checklist we hand to South Florida operations leaders, the one designed to be printed, taped to the server room door, and run year after year. If you only have time for the TL;DR, here it is: get RTO/RPO on paper, prove your backups by restoring something, fail over to the cloud before the storm, and run a blameless debrief after.

Why hurricane IT recovery is different in South Florida

Most “disaster recovery” templates were written for tornadoes, fires, or generic cyber incidents. Hurricanes are a different animal, and Miami-Dade, Broward, and Palm Beach businesses face a specific stack of risks:

  • Multi-day power outages. Not hours. Days. Sometimes more than a week for commercial corridors.
  • Internet provider blackouts. Your fiber may be fine. Your provider’s regional POP may not.
  • Phone system collapse. Cell towers go offline. VoIP without a failover path becomes a paperweight.
  • Physical flooding of ground-floor server rooms. Storm surge does not respect your raised-floor design.
  • Compliance and insurance deadlines. Breach-notification timelines under the Florida Information Protection Act (FIPA), industry-specific rules (HIPAA, PCI DSS, FINRA), and the documentation your insurer will demand.
  • Workforce dispersal. Your team is dealing with shutters, evacuations, and family. Your runbook needs to assume the building is empty.

A hurricane plan that ignores any of those is not a plan. It is a hope. Hope is not a recovery time objective.

The numbers you need on paper before anything else

Two acronyms decide whether you survive or not:

  • RTO (Recovery Time Objective) — how quickly a given system must be back; put differently, the longest outage the business can tolerate for that system.
  • RPO (Recovery Point Objective) — how much data the business can afford to lose, measured in time (e.g., “15 minutes of orders” or “one business day of email”).

Every critical system needs both, signed off by the business owner — not IT. RTO and RPO are business decisions disguised as technical ones. A 15-minute RTO can easily cost ten times what a four-hour RTO costs. The person paying for the system decides.

This is the foundation of every other step. If you cannot fill in this table, stop reading and fill it in now:

System | Owner | RTO | RPO | Recovery method
ERP / accounting | | | |
Email / Microsoft 365 | | | |
File shares / OneDrive / SharePoint | | | |
Customer database / CRM | | | |
Phone system | | | |
Industry-specific apps | | | |

We help clients build this matrix as part of our managed IT services onboarding, because nine times out of ten the in-house team has it scattered across three spreadsheets and a Slack channel.
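
If you want the matrix to live somewhere more durable than a spreadsheet, here is a minimal sketch of the same table as structured data with an automatic completeness check. This is Python, the systems, owners, and numbers are illustrative placeholders, and none of it replaces the owner sign-off; it just makes gaps impossible to ignore:

```python
# Sketch: the RTO/RPO matrix as data, plus a check that every critical system
# has an owner, RTO, RPO, and recovery method on record.
# Systems, owners, and numbers below are illustrative placeholders.

CRITICAL_SYSTEMS = [
    {"system": "ERP / accounting", "owner": "CFO", "rto_hours": 4,
     "rpo_hours": 1, "recovery": "cloud failover"},
    {"system": "Email / Microsoft 365", "owner": "COO", "rto_hours": 2,
     "rpo_hours": 24, "recovery": "third-party M365 backup"},
    {"system": "Phone system", "owner": "Ops manager", "rto_hours": 0.25,
     "rpo_hours": 0, "recovery": "forward main lines to mobile"},
]

REQUIRED_FIELDS = ("system", "owner", "rto_hours", "rpo_hours", "recovery")

def gaps(matrix):
    """Yield (system, missing_field) for every incomplete row."""
    for row in matrix:
        for field in REQUIRED_FIELDS:
            if row.get(field) in (None, ""):
                yield row.get("system", "<unnamed>"), field

if __name__ == "__main__":
    problems = list(gaps(CRITICAL_SYSTEMS))
    for system, field in problems:
        print(f"MISSING: {system} has no {field}")
    print("Matrix complete." if not problems else f"{len(problems)} gap(s) to close before June 1.")
```

Keep the file in version control next to the runbook so the matrix has a history, not just a latest version.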

The 60-day pre-season checklist

Run this in May. Every May. It does not get easier the second time, but it gets faster.

60 days out

  • Risk assessment for every facility. Note flood zone, generator status, server-room elevation, and shutter coverage.
  • Inventory every critical asset — servers, firewalls, switches, UPS units, key workstations. Capture model and serial (see the CSV sketch after this list).
  • Confirm cyber insurance and BCDR clauses. Many policies require documented testing within the last 12 months. Find out before you file a claim.
  • Verify cloud licenses and seats. A failover environment with expired licenses is not a failover environment.
  • Identify your top three vendors for power, internet, and phones. Get the after-hours escalation numbers. Print them.
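
For the inventory step, even a small script that writes the asset list to a dated CSV beats the spreadsheet nobody updates. A minimal Python sketch; the asset rows and file naming are hypothetical:

```python
# Sketch: write the critical-asset inventory to a dated CSV for the runbook
# and the insurance file. The asset rows below are hypothetical examples.
import csv
from datetime import date

ASSETS = [
    {"asset": "Primary file server", "model": "Dell PowerEdge R650",
     "serial": "ABC1234", "location": "Server room, rack 1"},
    {"asset": "Edge firewall", "model": "FortiGate 100F",
     "serial": "FGT-0001", "location": "Server room, rack 1"},
]

def write_inventory(assets, prefix="asset-inventory"):
    path = f"{prefix}-{date.today().isoformat()}.csv"
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["asset", "model", "serial", "location"])
        writer.writeheader()
        writer.writerows(assets)
    return path

if __name__ == "__main__":
    print("Wrote", write_inventory(ASSETS))
```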

30 days out

  • Validate the 3-2-1 backup rule for every critical data set. Three copies, two media types, one off-site (or cloud).
  • Restore a real file from backup. Not “we ran a test job.” Open the document. Read it. Save it somewhere. This is the single most-skipped step in disaster recovery, and the most common source of catastrophic recovery failure (see the checksum sketch after this list).
  • Patch the stack. A hurricane is the worst possible time to discover you are running an unpatched VPN appliance. Our cybersecurity services team treats pre-hurricane patching as a hard deadline.
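
The restore step is where the checklist earns its keep, so make the verification concrete: compare a checksum of the restored file against the production copy instead of eyeballing it. A minimal Python sketch; the paths are placeholders for a real document and its restore:

```python
# Sketch: verify that a restored file actually matches the production original.
# Paths are placeholders -- point them at a real document and its restored copy.
import hashlib

def sha256(path, chunk_size=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    original = r"\\fileserver\finance\2026-budget.xlsx"   # production copy
    restored = r"C:\restore-test\2026-budget.xlsx"        # pulled from backup
    ok = sha256(original) == sha256(restored)
    print("RESTORE VERIFIED" if ok else "MISMATCH -- investigate the backup chain")
```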

14 days out

  • Live failover test to the cloud. Move a non-critical workload to your failover environment, time the cutover, and document who did what (a drill log like the sketch after this list helps). If you do not have a cloud failover path, this is the year to build one — talk to our cloud services team.
  • Generator runtime test under load. Run for at least 30 minutes with the server room on generator power. Watch the temperature. Watch the fuel curve.
  • UPS load and battery audit. Any battery older than three years gets replaced. Period.
  • Network failover test. If you have a secondary ISP or LTE/5G failover, force the cutover and confirm the firewall behaves.
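
For the failover test at the top of this list, the timing and the who-did-what record are the parts that usually evaporate. A minimal Python sketch of a drill log; the steps, names, and output filename are illustrative:

```python
# Sketch: record a failover drill -- who ran each step, and how long it took.
# Steps, names, and the output filename are illustrative.
import json
import time
from datetime import datetime, timezone

def run_drill(steps, log_path="failover-drill-log.json"):
    records = []
    for description, operator in steps:
        started = time.monotonic()
        input(f"RUNNING: {description} ({operator}) -- press Enter when complete")
        records.append({
            "step": description,
            "operator": operator,
            "finished_at": datetime.now(timezone.utc).isoformat(),
            "duration_seconds": round(time.monotonic() - started, 1),
        })
    with open(log_path, "w") as f:
        json.dump(records, f, indent=2)
    print(f"Drill documented in {log_path}")

if __name__ == "__main__":
    run_drill([
        ("Power off the test VM on-prem", "Alice"),
        ("Start the replica in the cloud failover environment", "Bob"),
        ("Confirm the application responds at the failover URL", "Alice"),
    ])
```

The JSON file becomes the evidence your cyber insurance carrier asks for when they want proof of documented testing.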

7 days out

  • Distribute the printed runbook. One copy per facility, one per key leader. Cell service will fail. The PDF in OneDrive is useless if OneDrive is unreachable.
  • Confirm the comms tree. Who calls whom, in what order, on what channels. Include personal cell numbers (with consent) and a backup channel like SMS or a basic group text.
  • Brief the team. A 20-minute walkthrough beats a 60-page document no one reads.

Data backup: where most South Florida plans quietly fail

The 3-2-1 rule is not new. What is new is how many businesses think they have it and do not.

Common failure modes we see every June:

  • All “off-site” backups in the same AWS region as production. A single regional outage takes both down. Use a different region or a different provider.
  • Backups encrypted with a key stored on the backed-up server. When that server floods, the key floods with it. Store keys in a managed KMS or a separate vault.
  • Microsoft 365 treated as “already backed up.” Microsoft replicates data; they do not back it up against user error, ransomware, or retention policy mistakes. You need a third-party M365 backup. Full stop.
  • Backup jobs that have not completed successfully in weeks because no one is monitoring the alerts. This is the most common one.

The fix is not buying more software. The fix is a monthly recovery rehearsal where someone actually restores something. If that is not happening, our co-managed IT practice exists to run it for you alongside your internal team.
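
One slice of that rehearsal is easy to automate: checking that the last successful backup for each data set is newer than its agreed RPO, so a silently failing job gets caught in May instead of discovered in September. A minimal Python sketch; the timestamps would come from your backup tool's reporting, and the values here are hypothetical:

```python
# Sketch: flag any data set whose last successful backup is older than its RPO.
# Timestamps would come from your backup tool's API or report export;
# the values below are hypothetical.
from datetime import datetime, timedelta, timezone

LAST_SUCCESS = {                       # data set -> last successful backup (UTC)
    "ERP database": datetime(2026, 5, 28, 2, 15, tzinfo=timezone.utc),
    "File shares": datetime(2026, 5, 10, 1, 0, tzinfo=timezone.utc),
}

RPO = {                                # data set -> agreed RPO
    "ERP database": timedelta(hours=1),
    "File shares": timedelta(hours=24),
}

def stale_backups(now=None):
    now = now or datetime.now(timezone.utc)
    for name, last in LAST_SUCCESS.items():
        age = now - last
        if age > RPO.get(name, timedelta(hours=24)):
            yield name, age

if __name__ == "__main__":
    for name, age in stale_backups():
        print(f"ALERT: {name} last backed up {age} ago -- outside its RPO")
```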

Phone system continuity (the one most plans skip)

When the storm hits, your customers will call. Your phone system is your first impression after a disaster. Plan for these scenarios:

  • Primary VoIP provider goes down. Configure automatic failover to a secondary provider or to a cell forwarding rule that routes main-line calls to a designated mobile number (a reachability check like the sketch after this list gives early warning).
  • All internet at the office is gone. Your VoIP needs to be hosted in the cloud, not on a PBX in the server closet. If you still have an on-prem PBX, this is the year to migrate.
  • Auto-attendant changes. Pre-record a “storm response” greeting. Activate it the day before landfall. Sounds simple. Almost nobody does it.
  • Voicemail-to-email. Confirm it works and that emails route to a mailbox someone is actually checking on their phone.
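
A crude but useful early warning for the first scenario is to poll the provider's signaling endpoint and page whoever owns the forwarding rule when it stops answering. A minimal Python sketch; the hostname is a placeholder, a TCP check is only a rough proxy (many providers signal over UDP or over TLS on 5061), and the actual failover should follow your provider's documented mechanism:

```python
# Sketch: warn when the primary VoIP provider's SIP endpoint stops answering,
# so someone can activate the forwarding rule. Hostname is a placeholder, and
# a TCP connect is only a rough proxy for SIP health (many providers use UDP/TLS).
import socket

SIP_HOST = "sip.example-voip-provider.com"   # placeholder
SIP_PORT = 5060                              # common SIP signaling port
TIMEOUT_SECONDS = 5

def sip_reachable(host=SIP_HOST, port=SIP_PORT):
    try:
        with socket.create_connection((host, port), timeout=TIMEOUT_SECONDS):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    if sip_reachable():
        print("Primary VoIP provider reachable")
    else:
        print("ALERT: primary VoIP unreachable -- activate forwarding to mobile")
```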

The 48-hour, 24-hour, and day-of sequence

This is the part of the runbook that has to be executable by a tired person on a Sunday night.

48 hours before projected landfall

  • Force a final cloud sync of every critical data set.
  • Snapshot every on-prem VM and store the snapshot off-site or in the cloud.
  • Export any system that is not cloud-native (legacy ERP, NVR footage, on-site accounting). Encrypt the export (see the sketch after this list).
  • Charge every laptop, tablet, and mobile device the team will use during recovery.
  • Confirm fuel for the generator and water for any liquid-cooled equipment.
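
For the export step, the key-storage caveat from the backup section applies: encrypt the export, and keep the key anywhere except the server it came from. A minimal sketch using Python's third-party cryptography package; the paths are placeholders, and a managed KMS or password vault is the right long-term home for the key:

```python
# Sketch: encrypt a legacy-system export before copying it off-site.
# Requires the third-party "cryptography" package (pip install cryptography).
# Paths are placeholders; store the key anywhere EXCEPT the server you exported.
from cryptography.fernet import Fernet

EXPORT_PATH = "erp-export-2026-06-01.bak"   # placeholder export file
ENCRYPTED_PATH = EXPORT_PATH + ".enc"
KEY_PATH = "export-key.txt"                 # move this to a vault or KMS immediately

def encrypt_export():
    key = Fernet.generate_key()
    with open(EXPORT_PATH, "rb") as f:
        # Reads the whole file into memory -- fine for a sketch and modest
        # exports; chunk or use a different tool for very large files.
        ciphertext = Fernet(key).encrypt(f.read())
    with open(ENCRYPTED_PATH, "wb") as f:
        f.write(ciphertext)
    with open(KEY_PATH, "wb") as f:
        f.write(key)
    print(f"Encrypted to {ENCRYPTED_PATH}; key in {KEY_PATH} -- move it off this box")

if __name__ == "__main__":
    encrypt_export()
```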

24 hours before landfall

  • Activate VoIP failover or forward main lines to designated mobile numbers.
  • Switch the auto-attendant to the storm-response greeting.
  • Freeze all non-emergency IT changes. No deployments. No patches. No “while we’re at it” configs.
  • Send a customer communication: what to expect, how to reach you, when you anticipate normal operations.
  • Verify everyone on the call tree has the printed runbook and a charged device.

Day-of shutdown sequence

If the storm is making landfall in your county and the office must close:

  1. Workstations off — shut down cleanly, do not yank power.
  2. Servers off — graceful shutdown, in the order documented in your runbook (see the sketch after this list).
  3. Network gear off — switches, firewalls, access points last.
  4. Unplug from the wall. Surge protectors are not lightning protection. A direct hit or a wet ground line will fry everything plugged in.
  5. Elevate. Move anything ground-floor to higher ground. Even a few inches matters.
  6. Photograph the rack state, serial numbers, and any visible damage as a baseline for the insurance claim.
  7. Lock up and leave. People over hardware. Always.
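
Step 2 is worth scripting before the season, because a documented order only helps if a tired person can execute it on a Sunday night. A minimal Python sketch, assuming Linux hosts reachable over SSH with key-based access; the hostnames are placeholders, and Windows servers would need their own equivalent:

```python
# Sketch: shut servers down in the order documented in the runbook.
# Assumes SSH key-based access to Linux hosts; hostnames are placeholders.
# Windows servers would need a different command via their own remoting tools.
import subprocess
import time

SHUTDOWN_ORDER = [
    "app-server-01.example.local",    # application tier first
    "db-server-01.example.local",     # database after its dependents
    "file-server-01.example.local",   # file services last before network gear
]

def shutdown(host):
    print(f"Shutting down {host} ...")
    result = subprocess.run(
        ["ssh", "-o", "ConnectTimeout=10", host, "sudo shutdown -h now"],
        capture_output=True, text=True,
    )
    print(result.stdout or result.stderr or "command sent")

if __name__ == "__main__":
    for host in SHUTDOWN_ORDER:
        shutdown(host)
        time.sleep(60)   # give each host time to stop cleanly before the next
```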

Post-storm recovery: priority order, not panic order

The single biggest mistake we see after a storm is well-meaning teams powering things on before the building is actually safe.

The sequence that works

  1. Facility safety check first. Structural, electrical, water intrusion. If in doubt, do not power on.
  2. Verify clean utility power with a meter before reconnecting anything sensitive. Dirty power kills hardware faster than the storm did.
  3. Bring up the network first — ISP handoff, firewall, core switches, access points.
  4. Then servers. In the order documented in the runbook.
  5. Then workstations. Verify domain authentication and shared drive access for a sample of users before declaring “back up.”
  6. Validate backups. Confirm last successful backup timestamps and run a test restore before resuming production.
  7. Document everything for the insurance claim — photos, timestamps, vendor invoices, replacement costs.
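
Steps 3 through 5 lend themselves to a scripted spot check, so “back up” is a measurement rather than a feeling. A minimal Python sketch covering DNS, a key host, and a share path; every name and path here is a placeholder for your environment:

```python
# Sketch: post-recovery spot check -- DNS, a key host, and a file share path.
# Hostnames and the share path are placeholders for your environment.
import os
import socket

def tcp_ok(host, port, timeout=5):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def dns_ok(name):
    try:
        socket.gethostbyname(name)
        return True
    except OSError:
        return False

CHECKS = [
    ("Firewall reachable",    lambda: tcp_ok("fw01.example.local", 443)),
    ("Domain controller DNS", lambda: dns_ok("dc01.example.local")),
    ("File share mounted",    lambda: os.path.isdir(r"\\fileserver\shared")),
]

if __name__ == "__main__":
    for label, check in CHECKS:
        print(f"{'PASS' if check() else 'FAIL'}: {label}")
```

Run it against a sample of real user workstations, not just the IT bench machine, before announcing the all-clear.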

Industry-specific compliance kicks in here. If your business handles PHI, you have HIPAA notification requirements. If you handle Florida residents’ personal information, FIPA mandates notification within 30 days. Healthcare clients lean on our industry compliance team to make sure the recovery does not become a separate breach.

After the first storm: the debrief that prevents the next disaster

Most recovery failures are repeat failures. The debrief is how you stop the loop.

Within 14 days of recovery, run a blameless post-incident review. No finger-pointing. Three questions:

  1. What worked? Document it so next year’s team does not reinvent it.
  2. What broke? And specifically — was it a tool failure, a process failure, or a knowledge failure?
  3. What was missing from the runbook? Add it now, while the memory is fresh.

Update RTO/RPO numbers based on real data. Update vendor escalation contacts. Update the comms tree. File the debrief somewhere your successor can find it — not in someone’s personal OneDrive.

What good looks like — a benchmark

A South Florida business with a mature hurricane IT plan can typically:

  • Fail over critical systems to the cloud in under 60 minutes.
  • Restore email and file shares within 4 hours of a total facility loss.
  • Continue answering customer calls within 15 minutes of the primary phone system going down.
  • Produce complete insurance documentation within 5 business days of the storm.
  • Pass a tabletop audit from their cyber insurance carrier without scrambling.

If your business is not there yet, you are not alone. Most are not. The point is to be measurably better next June than you are this June.

When to bring in outside help

If your in-house IT team is one person — or zero people — running this checklist on top of normal operations is not realistic. That is what managed IT and co-managed IT exist for.

We run this exact checklist for South Florida clients across construction, healthcare, professional services, and retail every May. We test the failover. We rehearse the restore. We sign the insurance documentation. We sit on the bridge call at 2 a.m. when the lights go out.

If your team is dealing with this on their own and you are not 100 percent sure the plan would hold, we can help. Get in touch for a free hurricane-readiness assessment — we will walk your runbook with you and tell you, honestly, what we would change before June 1.

Hurricane season will arrive on schedule. Whether it arrives at your business is the only variable you control.

Tags: hurricane disaster recovery, business continuity, Florida BCDR, South Florida IT, hurricane prep, data backup
