Estimating Without the Crystal Ball
How to make solid estimates even when the scope is fuzzy
How many of us have thrown out a wild guess during an estimation meeting?
It's easy to get cynical when estimating new projects. You don't have every requirement. You haven't seen the code. You're being asked to pin a number to something that doesn't exist yet. And worse, that number becomes locked into roadmaps, sprint plans, and status updates as if it were a divine revelation.
So why do we even bother estimating?
Because every business decision, from budgets to launch dates, depends on it. We estimate so teams can staff appropriately, spot risks early, and avoid being surprised three months in.
But you don't need a crystal ball to give a good estimate.
You need a consistent way to break down work, size each piece, and communicate where the unknowns are.
By the end of this newsletter, you'll walk away with:
- A lightweight framework to build and share estimates you can stand behind.
- A spreadsheet to size and buffer work using real-world assumptions.
- A one-page checklist to make sure you're not missing anything obvious.
This newsletter won't solve every estimation problem. But it will give you a repeatable method for making better estimates more quickly and moving forward with confidence.
Estimation Creates Clarity
The first step to making better estimates is remembering what they're for.
Estimates aren't about getting the number exactly right. They're about helping your team understand the work before it begins. That's where the value comes from.
A good estimate forces three things:
- You break the work into smaller, more manageable pieces
- You acknowledge what's unknown
- You align expectations early so people aren't surprised later
This doesn't require a perfect forecast. Most experienced teams only hit about 60–80% accuracy when estimating story points across sprints.
The goal isn't precision. It's shared understanding. And that starts before you assign a number to anything.
Frame the Work Before You Size It
You can't estimate what you don't understand. Before you start slicing or sizing the project, take a few minutes to clarify what's actually being asked of you.
Here's a quick checklist I use to frame the work:
What's the outcome?
What needs to be true when this is done? Think user behavior, system behavior, or business result—not just "build X."
Who's involved?
Identify the stakeholders, dependencies, and anyone else with a stake in the outcome. Surprises here are what usually blow up estimates.
What constraints are in play?
Tight deadlines? Regulatory rules? Non-negotiable tech stacks? These all change how you approach the work.
What could block progress?
Is there anything unclear, brittle, or risky? Call it out now so you can buffer for it later.
You don't need a full spec. You need enough clarity to avoid anchoring your estimate to the wrong thing.
Tip: For feature work, sketch a rough "Definition of Ready" with the team. If you can't answer basic questions like "What does done look like?" or "What breaks if we get this wrong?", you are not ready to estimate.
Break It Down, Don't Blow It Up
Big estimates are bad estimates.
The larger the chunk of work, the vaguer the estimate becomes. That's why most teams stick to a simple rule: if a task takes more than a couple of days, break it down further.
A few reliable ways to split work:
- By behavior: Break the feature into user-facing slices (e.g., search input, results view, filters).
- By system layer: Frontend, backend, infra. This is useful when teams are cross-functional but need to split ownership.
- By risk: If part of the work involves something new or unpredictable, split it out as its own research or spike.
Atlassian recommends keeping stories under 16 hours or about 8–13 story points. Anything bigger becomes a black box.
It's also easy to forget the supporting work:
- Testing and validation
- Infrastructure or DevOps tasks
- Code review time
- Post-deploy verification
All of that still takes effort. If it's not on the board, it's not in the estimate.
For a quick gut check, review your last few projects. What work popped up late or got missed in the planning phase? Make those pieces visible early next time.
Size the Pieces
Once the work is broken down, it's time to size it. The key is to focus on relative effort, not absolute time.
Most teams use one of three approaches:
- T-shirt sizes (S, M, L, XL) — useful for quick gut-checks in early planning.
- Fibonacci story points (1, 2, 3, 5, 8…) — common in Agile teams for estimating effort and complexity.
- Hours or level of effort (LoE) — best reserved for operational work or known tasks.
The goal isn't to predict the exact time each task will take. The goal is to position it in relation to other tasks you've completed.
If your team already has historical data—like average velocity or completed stories—that's your anchor. If not, pick a known task (one you've finished recently) and size everything else in comparison.
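To make "size it in comparison" concrete, here's a minimal Python sketch. The reference task, its point value, and the Fibonacci scale are illustrative assumptions, not a prescription:

```python
# Minimal sketch of relative sizing: anchor on a recently finished task
# and snap each new task's relative effort to the nearest Fibonacci size.
FIBONACCI = [1, 2, 3, 5, 8, 13, 21]

def suggest_points(reference_points: int, relative_effort: float) -> int:
    """Return the Fibonacci size closest to reference_points * relative_effort."""
    raw = reference_points * relative_effort
    return min(FIBONACCI, key=lambda p: abs(p - raw))

# "Search input" recently shipped as a 3; the new task feels twice as hard.
print(suggest_points(reference_points=3, relative_effort=2.0))  # -> 5
```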
Here are a few tips that help:
- Avoid half-points or in-between sizes. If something feels like a 2.5, round up and talk about why.
- Flag anything that feels "weird." Unfamiliar tech, vague requirements, or external dependencies often mean the task needs to be clarified or split.
- Use sizing conversations to surface disagreement. If one person thinks a task is a 3 and another says it's a 13, stop and figure out what assumptions you're making.
For a more in-depth guide to this process, Atlassian provides a solid overview of how to run estimation sessions using story points and Planning Poker.
🔗 Planning Poker and Story Point Estimation – Atlassian
Account for the Unknowns
Every estimate inherently carries risk. The mistake most teams make is pretending that risk doesn't exist.
Instead of ignoring it, make space for it.
Here are three categories of unknowns worth calling out:
- Learning curve
New tech, new domains, or unfamiliar codebases always slow things down. Add a buffer if someone on the team is unfamiliar with this task.
- Cross-team dependencies
If you're waiting on another team (for an API, design, or legal review), assume a delay. Don't estimate as if everyone's available and aligned.
- Surprise scope
The thing that looks simple probably isn't. Assume that edge cases, error states, or integrations you don't see yet will appear later.
You can turn these risks into simple buffer percentages. For example:
- +20% for learning & research
- +15% for managing external dependencies
- +10% for "this feels too easy"
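As a rough illustration, here's that buffer math in Python. The base estimate and the additive stacking of buffers are assumptions for the example; tune the rates to your own history:

```python
# Toy illustration of stacking buffer percentages onto a base estimate.
# Buffers are applied additively here for simplicity.
BUFFERS = {
    "learning & research": 0.20,
    "external dependencies": 0.15,
    "this feels too easy": 0.10,
}

base_days = 10.0
buffered_days = base_days * (1 + sum(BUFFERS.values()))
print(f"{base_days:.0f} days -> {buffered_days:.1f} days with buffers")  # 10 -> 14.5
```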
Or go one step further and build a three-point estimate:
- Best case (clean path)
- Most likely (with buffer)
- Worst case (multiple surprises)
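If you need a single planning number out of those three points, one common option (not the only one) is the classic PERT weighted average, which counts the likely case four times. A quick sketch with invented inputs:

```python
def pert_estimate(best: float, likely: float, worst: float) -> float:
    """PERT weighted average: the most likely case is weighted 4x."""
    return (best + 4 * likely + worst) / 6

# Hypothetical inputs: clean path 8 days, buffered 14.5, multiple surprises 25.
print(f"{pert_estimate(8, 14.5, 25):.1f} days")  # -> 15.2 days
```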
If you want to go deeper, you can use Monte Carlo simulations to model delivery confidence based on historical velocity. Teams often use this to create charts showing, for example, "You have a 70% chance of finishing by April 18."
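Here's a bare-bones version of that idea. The velocity history and backlog size below are made up, and real forecasting tools use richer models, but the resampling logic is the same:

```python
import random

# Monte Carlo sketch: resample historical sprint velocities to estimate
# how many sprints a backlog will take, then read off a percentile.
historical_velocity = [21, 18, 25, 19, 23, 17]  # points per sprint (example data)
backlog_points = 120

def sprints_needed(rng: random.Random) -> int:
    done, sprints = 0, 0
    while done < backlog_points:
        done += rng.choice(historical_velocity)
        sprints += 1
    return sprints

rng = random.Random(42)
runs = sorted(sprints_needed(rng) for _ in range(10_000))
p70 = runs[int(0.70 * len(runs))]
print(f"70% chance of finishing within {p70} sprints")
```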
You don't need to get fancy. Just show that you've thought about what might go wrong—and built enough margin to handle it.
Build the Estimate
Once you've broken down the work and sized each task, it's time to pull it together.
The most straightforward approach is bottom-up:
- List out each task
- Add a size (points, hours, or both)
- Apply any necessary buffers
- Sum it all up
If you're using story points, multiply by your team's average velocity to estimate the timeline. If you're using hours, you can break it down by contributor and week.
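Here's a minimal sketch of that bottom-up roll-up. The tasks, sizes, buffers, and velocity are all placeholder numbers:

```python
# Bottom-up estimate: size each task, apply its buffer, sum, then convert
# to sprints using the team's average velocity.
tasks = [
    # (task, points, buffer)
    ("Search input",       3, 0.10),
    ("Results view",       5, 0.20),  # unfamiliar codebase
    ("Filters",            5, 0.15),  # waiting on design
    ("Post-deploy checks", 2, 0.00),
]

buffered_total = sum(points * (1 + buf) for _, points, buf in tasks)
velocity = 20  # points per sprint, from past sprints (assumed)
print(f"Total: {buffered_total:.1f} points, ~{buffered_total / velocity:.1f} sprints")
```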
It helps to include a buffer column directly in the estimate; the roll-up sketch above shows one way to lay it out.
Now you've got a total, but more importantly, you've captured why it's not just a raw number.
You can also build in three-point estimates for riskier efforts, combining the best, likely, and worst cases from the previous section.
If your team has past data, this is a great time to cross-check:
- What did similar features take?
- What did we miss last time?
- Did our actual time match the estimate, or are we consistently off?
Don't rely on gut feelings. Let the numbers guide the conversation. And if something looks way off—too high or too low—go back and challenge the breakdown.
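One simple way to let the numbers talk is a calibration ratio from past projects. The figures here are placeholders:

```python
# Compare estimated vs. actual effort on past work to spot consistent bias.
past = [(10, 14), (5, 6), (8, 13)]  # (estimated days, actual days)

bias = sum(actual for _, actual in past) / sum(est for est, _ in past)
print(f"We typically take {bias:.2f}x our estimate")  # -> 1.43x
```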
We'll include a spreadsheet (MTC - Estimation Template.xlsx) that builds this structure for you. It auto-sums totals, adds buffer logic, and provides space for tracking confidence.
Present the Estimate Like a Pro
Once the estimate is built, your job isn't done. You still need to communicate it.
That means showing more than just a number. A good estimate includes:
- The total effort (with and without buffers)
- The range (best, likely, worst case)
- The assumptions and risks baked into it
- Your confidence level (e.g., "We're about 70% confident in this range based on past work")
A single number feels arbitrary. A range backed by clear thinking shows you've done the work.
Keep it simple. One page or one slide is enough. Use plain language. Focus on what matters:
- "This estimate assumes the API is stable and available by the end of the sprint."
- "We've added 20% buffer for unknowns because the codebase is unfamiliar."
- "If that buffer isn't needed, this could ship 3 days earlier."
Don't wait for someone to ask why your estimate looks high or what might go wrong. Say it first.
It builds trust and makes it easier to adjust the plan when things change.
Download the Estimation Toolkit
You don't need to build this from scratch. Here are two resources to help you estimate work with more confidence:
1. Estimation Spreadsheet
A ready-to-use spreadsheet that lets you:
- Break work into tasks
- Add estimates and buffers
- Calculate best case, likely case, and worst case
- Total it up in one place
You can use it in Excel or Google Sheets.
🔗 MTC - Estimation Template.xlsx
2. One-Page Checklist
Here is a quick reference to help you:
- Frame the work before you size it
- Avoid common blind spots
- Decide how much buffer to add
- Share your estimate clearly
Use it before you give your next estimate.
🔗 MTC - Software Estimation Checklist.pdf
Use the sheet, walk through the checklist, and send me a note. Tell me where it worked and where it didn't. I'll use your feedback to improve the next version.