It is 7:30 on a Monday morning. Your office manager calls you in a panic because nobody can log in to the company file server. Your email is down. Customers are calling to say they received strange messages from your accounts. You have no idea what happened, who to call first, or what to tell your clients. The clock is ticking, and every decision you make in the next sixty minutes will shape how much damage your business suffers.
This scenario plays out at small businesses every week, and the companies that recover quickly all have one thing in common: they had a plan before the crisis hit. This guide will walk you through creating an incident response plan that is practical, affordable, and built for businesses without a dedicated IT security team.
Why Every Small Business Needs an Incident Response Plan
The majority of small businesses do not have a documented incident response plan. Many business owners assume that cyberattacks only happen to large corporations, or that their IT provider will handle everything if something goes wrong. Both assumptions are dangerous.
Without a plan, a cyber incident turns into chaos. Employees do not know who to report problems to. Nobody is sure whether to shut systems down or leave them running. Critical evidence gets destroyed. Customers and partners are left in the dark. Legal and regulatory deadlines get missed. What could have been a contained, manageable event spirals into a crisis that threatens the survival of the business.
There is also a practical business reason to have a plan: many cyber insurance providers now require one. If you are applying for or renewing a cyber insurance policy, your insurer will likely ask whether you have a documented incident response plan. Not having one can result in higher premiums, reduced coverage, or outright denial. Understanding what insurers expect from your security posture can help you prepare for this requirement.
The difference between a business that recovers from a cyber incident and one that does not often comes down to preparation, not technical sophistication.
What Is an Incident Response Plan?
An incident response plan is a documented set of procedures that tells your team exactly how to detect, respond to, and recover from a cybersecurity incident. Think of it as a fire evacuation plan, but for your digital assets. It answers the fundamental questions that arise during a crisis: Who is in charge? What do we do first? Who do we call? How do we communicate with customers? When do we involve law enforcement?
Your plan does not need to be a hundred-page technical manual. For most small businesses, a clear, concise document of five to ten pages is more than enough. The goal is to have something that any member of your team can pick up, read, and follow under pressure.
The Six Phases of Incident Response
The most widely used framework for incident response, popularized by the SANS Institute, divides the work into six phases. Each phase builds on the one before it, and together they form a cycle that improves your resilience over time.
1. Preparation
This is everything you do before an incident occurs. It includes creating the plan itself, training your team, setting up the tools and contacts you will need, and making sure your backups and security controls are in place. By reading this guide, you are already doing preparation. The more thorough your preparation, the smoother every other phase will go.
2. Identification
Before you can respond to an incident, you have to recognize that one is happening. This phase is about detecting unusual activity and determining whether it is a genuine security incident or a false alarm. Signs of an incident include unexpected system slowdowns, employees locked out of their accounts, unfamiliar files appearing on servers, unusual outbound network traffic, or customers reporting suspicious emails from your domain. Train your employees to report anything that looks unusual, even if they are not sure it is a real threat. It is far better to investigate ten false alarms than to miss one real attack.
3. Containment
Once you have confirmed an incident, the immediate priority is to stop it from spreading. Containment means isolating affected systems to prevent further damage. In practical terms, this might mean disconnecting an infected computer from the network, disabling a compromised user account, blocking a malicious IP address, or temporarily shutting down a particular service. The key is to act quickly while preserving evidence. Do not wipe or rebuild systems yet. You need the forensic evidence to understand what happened and how far the attacker got.
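For a technically inclined reader, the containment actions described above can be captured in a small runbook script ahead of time. The sketch below is illustrative only: `eth0`, `jsmith`, and `203.0.113.50` are placeholders for the real network interface, compromised account, and attacker address, and by default the script runs in dry-run mode, printing the commands an administrator would execute instead of running them.

```shell
#!/bin/sh
# Illustrative containment runbook (Linux). All names are placeholders.
# Dry-run by default; set CONFIRM=yes to execute for real (needs root).
set -eu
CONFIRM="${CONFIRM:-no}"
ACTIONS=""

run() {
    if [ "$CONFIRM" = "yes" ]; then
        "$@"
    else
        echo "DRY-RUN: $*"
        ACTIONS="${ACTIONS}$*; "
    fi
}

# 1. Isolate the affected machine from the network.
run ip link set dev eth0 down

# 2. Lock the compromised user account.
run usermod --lock jsmith

# 3. Block the malicious IP address at the host firewall.
run iptables -I INPUT -s 203.0.113.50 -j DROP

# Reminder from the plan: preserve evidence, do not wipe or rebuild yet.
echo "Containment steps recorded; preserve the machine for forensics."
```

Rehearsing the script in dry-run mode during a tabletop exercise is a cheap way to confirm the commands, names, and order are still current.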
4. Eradication
After the threat has been contained, you need to remove it completely. This means identifying the root cause of the incident and eliminating it from your environment. If ransomware was involved, this might mean wiping and rebuilding affected machines from clean images. If a compromised password was the entry point, it means resetting that password and every other account that may have been exposed. Eradication must be thorough. If any trace of the attacker or malware remains, the incident will recur.
5. Recovery
Recovery is the process of restoring your systems and operations to normal. This includes restoring data from backups, bringing systems back online in a controlled manner, and monitoring closely for any signs that the threat has returned. Do not rush this phase. Bring systems back gradually, starting with the most critical ones, and verify that each system is clean and functioning correctly before moving to the next. Recovery can take days or weeks depending on the severity of the incident.
6. Lessons Learned
After the dust has settled, gather your team for an honest review of what happened. What went well? What went wrong? Where did the plan fall short? What would you do differently? Document everything and use it to update your incident response plan. This phase is often skipped because teams are exhausted and eager to move on, but it is arguably the most valuable step. Every incident is an opportunity to strengthen your defenses and improve your response for next time.
Building Your Plan: Step by Step
Now that you understand the framework, here is how to build your actual plan document:
1. Assemble your response team. Identify who will be involved in incident response and what each person is responsible for. Even in a company of ten people, you need to assign clear roles. More on this in the next section.
2. Define what counts as an incident. Not every IT problem is a security incident. Your plan should define categories of incidents so your team knows when to activate the full response versus when to handle something through normal IT support. Examples include unauthorized access to systems, confirmed malware infection, data breach involving customer information, ransomware attack, denial of service, and compromised email accounts.
3. Create your contact lists. Build a list of everyone who needs to be contacted during an incident, and keep it somewhere accessible even if your systems are down. This should include your internal response team members and their personal phone numbers, your IT provider or managed service provider, your cyber insurance carrier and policy number, your legal counsel, local law enforcement and FBI field office contacts, and any regulatory bodies you are required to notify.
4. Establish communication protocols. Decide in advance how you will communicate during an incident, especially if your email and internal systems are compromised. Identify a backup communication channel such as a group text thread, a personal email chain, or a messaging app. Also prepare templates for external communications, including what to tell customers, vendors, and the media if necessary.
5. Document recovery procedures. Write down the specific steps needed to restore each critical system. Include where backups are stored, how to access them, the order in which systems should be restored, and who is responsible for each step. This documentation is invaluable when you are operating under stress at two in the morning.
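One concrete piece of that recovery documentation is a script that verifies backups against a checksum manifest before anyone attempts a restore. The sketch below assumes GNU coreutils (`sha256sum`) and uses a temporary directory and archive name as stand-ins for your real backup location:

```shell
#!/bin/sh
# Sketch: verify backup archives against a checksum manifest recorded
# at backup time, before restoring anything. The temporary directory
# and archive name below are stand-ins for a real backup location.
set -eu

BACKUP_DIR="$(mktemp -d)"
echo "simulated backup contents" > "$BACKUP_DIR/fileserver.tar.gz"

# At backup time, record checksums alongside the archives:
( cd "$BACKUP_DIR" && sha256sum ./*.tar.gz > manifest.sha256 )

# At restore time, verify before touching production systems:
if ( cd "$BACKUP_DIR" && sha256sum --check --quiet manifest.sha256 ); then
    RESULT="verified"
else
    RESULT="mismatch"
fi
echo "Backup check: $RESULT"
```

On macOS, `shasum -a 256` plays the same role. The point is not the specific tool but that the written restore procedure includes a verification step before any backup data is trusted.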
Key Roles and Responsibilities
Even a very small business needs clearly defined roles during an incident. One person can fill multiple roles, but every role must be assigned to someone specific:
- Incident Lead: The person who coordinates the overall response, makes key decisions, and ensures the plan is being followed. This is typically the business owner or a senior manager.
- IT Contact: The person or external provider responsible for the technical response, including containment, eradication, and system recovery. If you use a managed IT provider, make sure you have their emergency after-hours number.
- Legal Counsel: An attorney who can advise on regulatory notification requirements, liability issues, and communications. Many small businesses do not have in-house counsel, so identify an external attorney in advance and confirm they have experience with data breach matters.
- Communications Lead: The person responsible for all internal and external communications during the incident, including employee updates, customer notifications, and media inquiries. Controlling the message is critical for protecting your reputation.
- Management Decision-Maker: Someone with the authority to approve major decisions such as shutting down operations, engaging outside forensic investigators, or authorizing significant expenditures for recovery. In a small business this is often the owner, but you need a backup if the owner is unavailable.
Write down each role, the primary person assigned to it, and a backup person. Include their work phone, personal phone, and personal email. Print this list and keep a physical copy somewhere secure outside of your digital systems.
Testing Your Plan
A plan that has never been tested is barely better than no plan at all. You need to practice before a real incident forces you to perform under pressure.
Tabletop exercises are the simplest and most effective way to test your plan. Gather your response team around a table and walk through a realistic scenario step by step. For example: "It is Tuesday morning. An employee reports that all files on the shared drive are encrypted and there is a ransom note on their screen. What do you do first? Who do you call? What do you tell your customers?" Discuss each decision as a group. You will quickly discover gaps in your plan, unclear responsibilities, and missing contact information. Run a tabletop exercise at least once a year, and ideally twice.
Review and update your plan annually at a minimum. People leave the company, phone numbers change, new systems are added, and the threat landscape evolves. Set a calendar reminder to review the plan every year and after any significant change in your business or technology.
Update after real incidents. If you experience an actual security event, no matter how minor, use the lessons learned phase to update your plan immediately while the experience is fresh.
Common Mistakes to Avoid
In our work with dozens of small businesses on their incident response readiness, these are the mistakes we see most often:
- Having no plan at all. This is by far the most common situation. Many business owners believe they will figure it out when the time comes. They will not. The stress, confusion, and time pressure of a real incident make clear thinking nearly impossible without a predefined process to follow.
- The plan exists but nobody knows about it. A plan buried in a shared drive that no one has read is almost as useless as no plan. Every member of your response team should know the plan exists, where to find it, and what their role is. Review it together at least once a year.
- No communication plan. Many incident response plans focus entirely on the technical steps and forget about communication. How will you notify customers? What will you tell the media? Who is authorized to speak publicly? Poor communication during an incident can cause more reputational damage than the incident itself.
- Forgetting legal and regulatory requirements. All fifty U.S. states have data breach notification laws that require you to notify affected individuals within a specific timeframe, often 30 to 60 days. Industry regulations like HIPAA and PCI DSS have their own requirements. Your plan must account for these obligations, because missing a deadline can result in significant fines on top of the incident costs.
- Storing the plan only on the systems it is meant to protect. If your plan is only saved on your company server and that server is encrypted by ransomware, you cannot access your plan when you need it most. Keep printed copies, store a copy in a personal cloud account, and make sure key team members have it on their personal devices.
The Bottom Line
An incident response plan does not prevent cyberattacks. What it does is turn chaos into a process. When every second counts and your team is under pressure, a clear, rehearsed plan transforms panic into purposeful action. It reduces downtime, limits financial damage, protects your reputation, and can be the difference between a business that recovers and one that closes its doors.
You do not need a dedicated security team or a massive budget to build an effective plan. You need a few hours of focused work, a commitment to assigning clear roles, and the discipline to test and update your plan regularly.
Start today. Gather your key people, walk through the steps outlined in this guide, and write down your plan. It does not have to be perfect on the first draft. A simple plan that everyone knows about is infinitely more valuable than a comprehensive plan that nobody has read.
If you want to take the next step in preparing your team, CyberLearningHub offers practical cybersecurity training modules that include incident response awareness, helping every employee understand their role when something goes wrong. Because when an incident hits, it is not just an IT problem. It is everyone's problem.