Small teams move fast, but that speed often depends on informal agreements. The problem is that “informal” turns into “inconsistent” as soon as you add a second developer, a second environment, or a second stakeholder. One person’s “done” is another person’s “ready for QA”, and someone else’s “merged but not deployed”.
A Definition of Done (DoD) is a simple tool to remove that ambiguity. It is not a manifesto and it is not paperwork for its own sake. It is a short, shared checklist that answers one question: what must be true before work can be considered complete?
This playbook focuses on small teams in real-world conditions: limited time, mixed experience levels, and a constant flow of “quick” requests. The goal is a DoD that protects quality and predictability without slowing you down.
Why “Done” Is Harder Than It Sounds
When teams struggle with predictability, the root cause is often not estimation. It is hidden work. Tasks look small until you count the missing pieces: edge cases, tests, review, documentation, release steps, and post-release verification.
Without an explicit DoD, each ticket becomes a negotiation. Developers optimize for what they think matters, QA optimizes for what they can verify, and product optimizes for what can be demoed. The gaps show up later as regressions, “hotfix Fridays”, or a backlog filled with cleanup tasks that never get prioritized.
A DoD makes the hidden work visible and repeatable. It improves planning because “done” includes the same categories of work every time, so tickets become comparable.
What a Definition of Done Actually Covers
A strong DoD is less about controlling how people work and more about protecting outcomes. It generally covers five areas:
- Correctness: acceptance criteria met, edge cases handled, and behavior matches expectations.
- Quality: tests, linting, security basics, and code health standards appropriate to the change.
- Reviewability: the change is understandable, scoped, and documented enough for a reviewer to approve with confidence.
- Operability: logging, monitoring signals, and runbook notes where failure modes matter.
- Release readiness: the change can be safely deployed, validated, and rolled back if needed.
Importantly, a DoD is not the same as a Definition of Ready (DoR). DoR is about whether work can start (clear requirements, dependencies resolved). DoD is about whether work is finished (merged, tested, and usable).
A Small-Team DoD Template You Can Start With
The best DoD is short enough to remember and strict enough to matter. Start with a baseline that fits most tickets, then add a small “if applicable” section for higher-risk work.
Baseline DoD (use for most tickets)
- Acceptance criteria are met (or explicitly updated with approval).
- Code is reviewed by at least one teammate; feedback is addressed.
- Automated checks pass (build, lint, unit tests as applicable).
- New behavior has at least one test: unit, integration, or a documented manual check if automation is not feasible.
- Any user-facing change includes a short note for release notes or support.
- Risk is assessed (low / medium / high) and recorded in the ticket.
Risk-based add-ons (apply when relevant)
- Data changes: migration is reversible or has a rollback plan; migration tested on a copy or staging.
- Payments/auth/security: threat considerations noted; logging avoids sensitive data; extra reviewer included.
- Performance-sensitive: basic before/after check recorded; guardrails exist for worst-case behavior.
- Customer-impacting workflow: customer support note written; known limitations documented.
- Operational impact: monitoring signal updated or verified; failure mode is visible in logs.
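The add-on selection above is simple enough to encode. As a hypothetical sketch (the flag names and a `required_add_ons` helper are illustrative, not a prescribed tool), a team could keep the add-ons as data and derive each ticket's extra checks from its risk flags:

```python
# Hypothetical sketch: map a ticket's risk flags to the DoD add-ons it needs.
# Flag names and check wording mirror this playbook's list; adapt to your team.

ADD_ONS = {
    "data_change": "Reversible migration or rollback plan; tested on a copy or staging",
    "payments_auth_security": "Threat notes; no sensitive data in logs; extra reviewer",
    "performance_sensitive": "Before/after check recorded; worst-case guardrails",
    "customer_workflow": "Support note written; known limitations documented",
    "operational_impact": "Monitoring signal verified; failures visible in logs",
}

def required_add_ons(flags):
    """Return the add-on checks a ticket needs, given its set of risk flags."""
    return [check for flag, check in ADD_ONS.items() if flag in flags]

# Example: a checkout change touching payments and support-facing copy.
for check in required_add_ons({"payments_auth_security", "customer_workflow"}):
    print("[ ]", check)
```

The point is not automation for its own sake: keeping the add-ons as a single table makes them easy to paste into a ticket template and easy to amend when a retrospective adds a new risk category.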
If you want the DoD to be truly reusable, write it in a format you can paste into tickets. Keep it conceptual, but specific:
Definition of Done (baseline)
[ ] Acceptance criteria met
[ ] Peer review complete
[ ] Automated checks green
[ ] Test added OR manual verification steps documented
[ ] Release/support note added (if user-facing)
[ ] Risk level recorded (low/med/high) + any add-ons applied
Key Takeaways
- A DoD reduces hidden work and makes planning more accurate by standardizing what “complete” means.
- Keep a short baseline DoD for most work, then add risk-based add-ons for sensitive areas.
- Use the DoD as a coaching and alignment tool, not as a compliance weapon.
- Make it easy to apply by pasting it into tickets and reviewing it at the end of each story.
How to Roll It Out Without a Process Overhaul
The fastest way to introduce a DoD is to treat it like a product: iterate, measure friction, and refine. Here is a rollout pattern that works well for teams of 2 to 10.
- Start with one meeting: 30 minutes to draft the baseline DoD and agree on where it will live (ticket template, repo docs, or both).
- Pick a “default” scope: define which work items use the baseline DoD (feature tickets, bug fixes, chores). Avoid exceptions at the start.
- Make it visible at the moment of truth: add the DoD checklist to your PR template or your ticket “completion” section, not a separate document no one opens.
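For teams on GitHub, one low-friction way to do this is a pull request template (the path below is GitHub's convention; other forges have equivalents). A minimal sketch embedding the baseline checklist:

```markdown
<!-- .github/pull_request_template.md -->
## Definition of Done (baseline)
- [ ] Acceptance criteria met
- [ ] Peer review complete
- [ ] Automated checks green
- [ ] Test added OR manual verification steps documented
- [ ] Release/support note added (if user-facing)
- [ ] Risk level recorded (low/med/high) + any add-ons applied
```

Because the checklist renders on every PR, it is seen exactly when "done" is being claimed, with no separate document to remember.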
- Review at the end, not the beginning: during sprint review or weekly wrap-up, sample 3 to 5 completed items and ask: did “done” actually match reality?
- Tighten one thing at a time: if you add too many requirements at once, the team will route around it. Add one improvement per week or sprint.
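The weekly sampling step can also be made routine. A minimal sketch (the `sample_for_review` helper and ticket shape are illustrative assumptions, not a real tracker API):

```python
# Hypothetical sketch: pick a few completed tickets for the weekly
# "done quality" review, so the sample is not cherry-picked.
import random

def sample_for_review(completed_tickets, k=4, seed=None):
    """Randomly sample up to k completed tickets to audit against the DoD."""
    rng = random.Random(seed)  # fixed seed only for reproducible demos
    k = min(k, len(completed_tickets))
    return rng.sample(completed_tickets, k)

done = [{"id": 101}, {"id": 102}, {"id": 103}, {"id": 104}, {"id": 105}]
for ticket in sample_for_review(done, k=3, seed=1):
    print(ticket["id"], "- did 'done' match reality?")
```

Random selection keeps the review honest: it surfaces ordinary tickets, not just the ones someone is proud of.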
If you already use Scrum, a practical cadence is: draft DoD in retrospective, apply in the next sprint, then update in the following retrospective. If you run Kanban, do a weekly “done quality” review instead.
Real-World Example: Shipping a Checkout Improvement
Imagine a small SaaS team of four: one product-minded founder, two engineers, and one support person. The team wants to improve checkout by adding a “company name” field and a clearer error message when a card fails.
Without a DoD, “done” might mean “field added and it works on my machine”. The missing work shows up later: validation edge cases, localization, support docs, and a deployment that accidentally breaks analytics.
With the baseline DoD plus risk-based add-ons, the work item changes shape in a healthy way:
- Acceptance criteria include validation rules (optional vs. required) and where the field appears later in the product.
- A test is added for the error message behavior, or manual steps are documented if UI testing is not in place.
- A short note is written for support: “If a customer asks why checkout fails, here is the new wording and the next step.”
- Risk is marked “medium” because checkout is revenue-impacting, so an extra reviewer is pulled in.
- Logging is checked to ensure no sensitive payment data is captured during failures.
The result is not slower shipping. It is fewer follow-up tickets, fewer “surprise” tasks after release, and fewer support escalations that interrupt the next sprint.
Common Mistakes (and Fixes)
Mistake: the DoD becomes a giant policy document
Fix: cap the baseline at 8 to 10 bullets. Move deep detail to "add-ons" tied to risk, and link the add-ons to existing standards (test strategy, logging conventions) instead of duplicating them.
Mistake: one-size-fits-all rules for every change
Fix: keep the baseline consistent, but let add-ons scale with risk. A typo fix should not require the same checks as an auth refactor.
Mistake: “done” is interpreted differently by engineering and product
Fix: explicitly define whether “done” means “merged”, “deployed to production”, or “verified in production”. Many small teams use two stages: Done (merged) and Released (verified).
Mistake: the DoD is created once and never revisited
Fix: treat incidents and regressions as signals. Each time a preventable issue slips through, ask: should the DoD change, or should a supporting tool/process improve?
When Not to Use a Single DoD
A DoD is helpful, but there are cases where a single DoD across all work can create friction or false confidence.
- Research spikes and prototypes: the outcome is learning, not a shippable artifact. Use a “Definition of Done for spikes” that focuses on documented findings and next steps.
- One-off operational tasks: for example, rotating secrets or responding to an incident. Use an ops checklist and a short runbook update rather than forcing a feature-style DoD.
- Multiple products with different risk profiles: an internal admin tool and a public API may require different release and testing expectations.
In these cases, keep a shared baseline (review, clarity, and safety) but allow a dedicated DoD variant per work type.
Conclusion
A Definition of Done is a small investment that pays back through fewer surprises, smoother handoffs, and a calmer delivery rhythm. Start with a minimal baseline, add risk-based checks where they matter, and review it regularly so it stays useful instead of ceremonial.
If you are building out your team’s operating system, pairing a DoD with a simple ticket template and a lightweight release checklist can create a noticeable jump in consistency without adding much overhead.
FAQ
How strict should our Definition of Done be?
Strict enough that “done” consistently means “safe to build on”. If your DoD is frequently bypassed, it is too strict or too vague. If regressions frequently slip through, it is too loose or missing a key risk-based add-on.
Who owns the DoD: product, engineering, or QA?
Ownership should be shared. Engineering typically maintains the technical parts (tests, review, operability), while product ensures acceptance criteria and user-facing notes are included. If you have QA, they should help define what “verifiable” means.
Should “done” mean merged or deployed?
Either can work, but be explicit. Many teams find it clearer to track both: “Done” for merged work that meets the checklist, and “Released” for production-verified work. That avoids conflating development completion with scheduling and release coordination.
How do we enforce the DoD without creating a blame culture?
Use it as a team agreement and a learning tool. When a DoD item is missed, treat it like a signal: was the checklist unclear, unrealistic, or easy to forget? Improve the system (templates, automation, peer support) before assuming a people problem.