How We Solved Our $500K Estimation Problem
A story about missing deadlines by 200%, losing client trust, and finally building a solution that actually works.
I'm a tech lead at a small software consultancy. We build custom web applications for clients. Nothing fancy, mostly CRUD apps with some business logic. For years, we were hemorrhaging money on fixed-price contracts because we couldn't estimate worth a damn.
This is the story of how bad estimates nearly killed our business, and what we did to fix it.
The Wake-Up Call: A $200K Project That Took $620K
It was March 2024. We quoted a client $200K for a logistics management system. "Should take about 4 months," we told them confidently. We'd built similar systems before. How hard could it be?
Narrator: It was very hard.
Four months later, we weren't even halfway done. The integrations were more complex than expected. The client kept "clarifying requirements" (read: adding features). One of our senior devs quit mid-project. We missed the deadline by 8 months.
The Damage:
- Original estimate: $200K (4 months, 2 devs)
- Actual cost: $620K in labor (12 months, 3 devs + contractor)
- Loss: $420K
- Client satisfaction: 2/10 (they paid, but barely)
We weren't alone. Research shows this is the norm:
- 71% of software projects fail or are challenged due to poor estimation (Standish Group CHAOS Report)
- Projects typically overrun by 50-100% (McConnell's "Software Estimation")
- Average estimation error: 30-40% (Simula Research)
But knowing everyone else sucks at estimation doesn't pay the bills.
Phase 1: "Let's Just Try Harder"
After that disaster, we had a company-wide meeting. The consensus: "We need to get serious about estimation."
We tried everything the agile books recommend:
- Story points with planning poker
- Breaking tasks into smaller chunks
- Adding buffer time (which clients hated)
- Daily standups to track progress
Results? Marginally better, but still terrible.
The core problem remained: we had no historical data. When a developer said "this auth system will take 3 days," we had no way to verify that. Was it 3 days of focused work? 3 calendar days (which might contain only 8 hours of actual coding)? 3 days including all the meetings, code reviews, and debugging?
We were flying blind.
Phase 2: Manual Time Tracking (AKA Developer Rebellion)
In mid-2024, we decided to get serious. If we wanted data, we needed tracking.
First, we tried ClickUp. It's a great project management tool with a time tracker built in. Developers just had to click "Start Timer" when they began a task and "Stop Timer" when they finished.
Adoption rate after 2 weeks: ~40%
Developers forgot to start the timer. They forgot to stop it. They switched between tasks without updating the tracker. The data was garbage.
We tried Clockify next. Same story. Then Toggl Track. Then Harvest. Every tool had the same problem: manual entry.
"Look, I'm in the zone writing code. I don't want to break flow to click a damn timer button. Just let me code."
- One of our senior devs, who was 100% right
We couldn't argue with that. Developers are in high demand. If we made their lives miserable with administrative overhead, they'd just leave. And honestly, we agreed with them. Manual time tracking sucks.
But we still needed the data.
Phase 3: The Surveillance Dystopia We Almost Built
Out of desperation, we briefly considered tools like Time Doctor and Hubstaff. These tools take screenshots every few minutes, track keystrokes, and monitor "activity levels."
We lasted exactly one demo call before killing the idea.
The thought of telling our team "Hey, we're going to take screenshots of your screen every 5 minutes and track your keyboard activity" made us physically ill. That's not a workplace. That's a panopticon.
Plus, it wouldn't even solve the core problem: correlating time to specific features. A screenshot of VSCode doesn't tell you which git branch someone is working on.
The Breakthrough: What If We Tracked Git Branches?
One evening, I was debugging a production issue. I ran `git log` to see recent commits, and the decorated output made something click:

```
commit a3f2b1c (HEAD -> feature/user-auth)
Author: [email protected]
Date:   Mon Nov 11 14:23:15 2024
```
Git already knows what you're working on.
Git always knows which branch you have checked out. Every file save in a repo's working tree can be detected. If we could track file changes and associate them with the current branch, we'd have automatic task-level time tracking without manual timers.
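The branch lookup itself is a single command, and it never touches file contents. A minimal sketch, demoed in a throwaway repo (a real tracker would point at configured project directories instead):

```shell
#!/bin/sh
# Minimal sketch: read the current branch of a repo without reading any code.
# Demoed in a throwaway repo created just for this example.
repo=$(mktemp -d)
git -C "$repo" init -q
# Simulate checking out a feature branch. This works even before the first
# commit, because HEAD is just a symbolic ref to the branch name.
git -C "$repo" symbolic-ref HEAD refs/heads/feature/user-auth
# -q suppresses errors; fall back to "detached" when HEAD is not on a branch.
branch=$(git -C "$repo" symbolic-ref --short -q HEAD || echo detached)
echo "$branch"   # feature/user-auth
```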
The Requirements:
- Automatic: Zero manual intervention. No start/stop buttons.
- Privacy-first: No code content, no file names (by default), no screenshots, no keystrokes.
- Branch-aware: Automatically detect git branch and link time to it.
- Lightweight: Just a shell script. No IDE plugins, no dependencies.
- Developer-friendly: Open source, transparent about what data is collected.
We couldn't find a tool that did all of this. So we built it.
Building dev-time: The Technical Approach
The core idea is dead simple:
- A shell script runs in the background (via cron every 5 minutes)
- It checks if any files in configured project directories were modified
- If yes, it records: project name, git branch, timestamp, and a client ID
- Data is sent to our server (batch uploads for efficiency)
- The dashboard shows a timeline of activity per branch
That's it. No file names, no code content, no invasive tracking. Just: "This branch was actively worked on at this time."
What It Looks Like:
Real data from our team: time spent vs. original estimate, with a visual timeline of when work actually happened.
The tracker script is fully open source. Developers can read exactly what's being tracked. Trust matters.
The Results: 3 Months After Deployment
We rolled this out internally in December 2024. By March 2025, we had enough data to spot patterns.
What We Learned:
1. Our estimates weren't just off. They were systematically off.
- Authentication features: Estimated 3 days on average, actually took 7-8 days (133-167% over)
- CRUD operations: Estimated 2 days, actually took 1.5 days (25% under; we were too conservative here)
- Third-party API integrations: Estimated 5 days, actually took 12-15 days (140-200% over)
2. Different developers had different estimation patterns.
- Developer A consistently overestimated by 20% (cautious, good for client trust)
- Developer B consistently underestimated by 40% (optimistic, needed coaching)
- Developer C was spot-on for backend work but wildly off for frontend tasks
This isn't about blaming developers. Software is complex. Bugs happen. APIs change. Requirements shift. We're well aware of the realities of software engineering.
But patterns are actionable. If we know Developer B tends to underestimate API work by 40%, we can adjust future estimates accordingly. If authentication always takes 2.5x longer than expected, we build that into our quotes.
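Applying such a pattern is just multiplication by an observed ratio. A toy calculation using the auth numbers above (the hard-coded values are illustrative; the ratios would come from your own tracked history):

```shell
#!/bin/sh
# Toy estimate adjustment: scale a gut estimate by the historically
# observed actual/estimated ratio for that kind of task.
raw_estimate_days=3      # developer's estimate for an auth feature
historical_ratio=2.5     # auth work has averaged ~2.5x its estimate
adjusted=$(awk "BEGIN { printf \"%.1f\", $raw_estimate_days * $historical_ratio }")
echo "Quote ${adjusted} days, not ${raw_estimate_days}."   # Quote 7.5 days, not 3.
```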
3. Estimation accuracy improved dramatically.
Our Progress:
- Before dev-time: Estimates off by 80-150% on average
- After 3 months: Estimates off by 20-35% on average
- After 6 months: Estimates off by 15-25% on average
We're not perfect. We never will be. But we went from "completely guessing" to "educated estimates backed by historical data."
That's a game-changer.
The Cultural Shift: Devs Actually Like It
Here's what surprised us most: developers didn't hate it.
In fact, several of them started using it for their own freelance work. Why?
- No manual work. It just runs silently. No timers to start/stop.
- Privacy-respecting. No screenshots, no code content, no surveillance vibes.
- Helpful for retrospectives. "How long did that feature actually take?" Now we know.
- Fair estimates. If a task is estimated at 3 days but takes 8, the data backs up the developer. It's not them slacking, it's the estimate that was wrong.
"I used to feel guilty when tasks took longer than estimated. Now I can show the PM: 'Look, this feature had 4 unexpected blockers. Here's the timeline.' The data has my back."
- Developer B, who now estimates much more accurately
This is critical: We don't use this data to punish developers. If a task takes longer, we ask "why?" Maybe the estimate was bad. Maybe the requirements changed. Maybe there was a production incident that week. Software is unpredictable.
The goal isn't to micromanage. It's to learn.
Who This Is For (And Who It's Not For)
This works great for:
- Engineering managers who need better estimates for roadmaps
- Tech leads who want to identify bottlenecks ("Why does API work always take 3x longer?")
- CTOs who need data to justify hiring decisions or push back on unrealistic deadlines
- Freelancers who bill clients by the hour and need accurate time logs
- Agencies doing fixed-price work who are tired of losing money on bad estimates
This is NOT for:
- Micromanagers who want to surveil every minute of a developer's day (please don't)
- Companies that measure productivity by "lines of code written" (this doesn't track that, and you shouldn't either)
- Solo developers working on personal projects with no time pressure (though some still find it useful for self-awareness)
Try It Yourself
We built dev-time.com because we needed this tool for ourselves. Now we're making it available to others.
Getting started is simple:
- Download the open-source tracker script
- Run it (takes 30 seconds to set up)
- View your dashboard at dev-time.com/app
There's a live demo with real data if you want to see what it looks like first. No sign-up required.
It's free for individual use (1-day retention), and paid plans start at $5.42/mo for unlimited history. We're not trying to get rich here. We're just trying to cover server costs and keep building useful tools for developers.
Final Thoughts
Software estimation is hard. It always will be.
But it doesn't have to be guesswork.
We spent years throwing darts in the dark, losing money on bad estimates, and frustrating clients with missed deadlines. The manual time tracking tools didn't work because they required too much effort. The surveillance tools were dystopian and didn't solve the right problem.
Automatic, branch-aware time tracking was the missing piece. It gives us the historical data we need to improve estimates, without burdening developers or invading their privacy.
If your team struggles with estimation (and honestly, who doesn't?), this approach might help. It's not a silver bullet, but it's better than flying blind.
And in 2026, asking developers to manually click "Start" and "Stop" buttons feels like asking them to file TPS reports. There's a better way.
Ready to Fix Your Estimation Problem?
No screenshots. No surveillance. Just automatic time tracking by git branch.
Open source • No account required • Start in 30 seconds
About the author: Tech lead at a software consultancy. Been building web apps for 10+ years. Still bad at estimation, but getting better. You can find the dev-time tracker on GitHub.