Testing Disastrous Code: Guaranteed Confidence In New Changes
What you need to know to test existing code and build confidence in your changes (even when the codebase is a mess).
Hey there,
Remember that terrifying class I mentioned last week? The one that seemed designed to confuse anyone who looked at it? Well, here's the scariest part: we had no tests.
Zero tests to be seen.
Every change felt like walking to get a glass of water in the pitch black of night. You walk, but you can’t really see where you’re going. One wrong move and... BOOM! You’re tumbling down the stairs.

"But it works on my machine!" you say, fingers crossed, hoping nothing explodes.
We've all been there.
That moment when you’ve done your best and finished updating some code, but you can’t shake the feeling that something’s off. What if you broke something?
Do you really know all the edge cases? What if that one weird scenario you didn't think could happen comes up in production?
And what about the scenarios you didn’t know could happen?
So how do you refactor safely without tests?
You don’t.
It's like trying to renovate a house, but you’re as clueless about construction as I am, and you don’t want to ask for help. Suddenly you knock down a load-bearing wall without even knowing it, and your troubles are just beginning.
Testing isn't just for the big tech companies or those perfectly organized codebases you see on GitHub.
It's your safety helmet when things get messy. Or even when they already are messy.
It's your confidence booster when imposter syndrome hits hard.

The questions every new dev asks about testing
I hear these all the time:
"Do I really need tests? The code works!"
"Where do I even start? There's so many types of tests!"
"How do I know I'm testing the right things?"
"What if my tests are wrong?"
"Isn't writing tests just doubling my work?"
"I don't have time to test."
Sound familiar? You're not alone. Most developers I've talked to have asked these questions.
They form a barrier to testing.
Breaking through the testing barrier
I remember my first attempt at writing tests. I stared at the screen, overwhelmed by all the testing frameworks, patterns, and best practices I'd read about.
You might recognize some of the questions I had:
"Should I use TDD?"
"What about BDD?"
"Do I need 100% coverage?"
"How do I even do TDD?"
"What's a unit?"
Stop. Breathe.
Here's what nobody tells you about testing: The hardest part isn't writing the tests. It's writing the right tests.
The ones that are valuable.
If you’re early in your career, you shouldn’t worry about coverage percentages or mocking or any of that fancy stuff yet. Just write some tests.

You might question if what you’re testing is important. That’s great!
As you get more experience you’ll learn to recognize when the tests you write actually provide value, and when they’re nothing more than a statistic to say you’ve got great coverage.
Just like we discussed with naming conventions last month, you'll develop a feel for what needs testing. You'll start seeing the patterns. The edge cases will jump out at you.
The true purpose of tests
Tests aren't about catching every possible bug.
They're about confidence.
Confidence to refactor that messy code, to add new features, to say "yes, this works" without crossing your fingers.
Confidence to push to production without fear.
Remember how, like Jason Voorhees, past bad code will catch up to you? Well, tests are like being able to slow him down. He might still get you, but at least you buy yourself some extra days.

So, you’ve written some tests. How might you know if they’re any good?
Three signs your tests need work
They test multiple things at once: You've got this massive test that checks user registration, email validation, and password rules all at once. When it fails, you spend more time debugging the test than fixing the actual issue. Sound familiar?
They're hard to read (what is this test even checking?): A good test reads like documentation. Anyone should be able to look at it and understand what behavior is being verified.
They're written by gut feeling alone: Some features have thorough tests, others have none. Critical paths go untested while trivial utilities have extensive coverage. There's no team agreement on what's worth testing, so everyone just goes with their gut. Just like we discussed with consistent formatting: without clear guidelines, you end up with an inconsistent codebase that's harder to maintain.
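Here's what the first sign looks like in practice. This is a minimal sketch in Python, using a hypothetical `register_user` and two made-up helpers (none of these names come from a real codebase): one big test checking everything at once, then the same checks split so each test has one outcome.

```python
import re

# Hypothetical helpers, defined inline so the sketch is self-contained.
def is_valid_email(email: str) -> bool:
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email) is not None

def is_strong_password(password: str) -> bool:
    return len(password) >= 8 and any(c.isdigit() for c in password)

def register_user(email: str, password: str) -> dict:
    if not is_valid_email(email):
        raise ValueError("invalid email")
    if not is_strong_password(password):
        raise ValueError("weak password")
    return {"email": email, "active": True}

# Before: one massive test checking registration, email validation,
# and password rules at once. When it fails, which rule broke?
def test_registration():
    assert is_valid_email("ana@example.com")
    assert not is_strong_password("short")
    assert register_user("ana@example.com", "s3cretpass")["active"]

# After: one test = one outcome. A failure points straight at the broken rule.
def test_should_accept_well_formed_email():
    assert is_valid_email("ana@example.com")

def test_should_reject_password_shorter_than_8_chars():
    assert not is_strong_password("short")

def test_should_activate_user_on_successful_registration():
    assert register_user("ana@example.com", "s3cretpass")["active"] is True
```

The split tests are more code, but each one now reads like a one-line statement about behavior, which is exactly what the next sign asks for.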
Your testing toolkit for writing valuable, maintainable tests
Keep it simple:
One test = one outcome (makes it clear what failed)
Test behavior, not implementation (focus on what, not how)
Make your test names tell a story ("should_calculate_total_with_tax")
Keep your test data simple and obvious
Comment your tests when the intention isn't crystal clear
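Putting the whole checklist together might look like this. It's a sketch with a hypothetical `calculate_total_with_tax` function and made-up numbers, not code from the newsletter: story-telling names, obvious test data, and assertions on what the caller sees rather than how the function works inside.

```python
# A hypothetical price calculator, used only to illustrate the checklist.
TAX_RATE = 0.20  # simple, obvious test data: a flat 20% tax

def calculate_total_with_tax(subtotal: float) -> float:
    return round(subtotal * (1 + TAX_RATE), 2)

# Behavior, not implementation: we assert the total a caller gets back,
# not that TAX_RATE was read or how the rounding happens internally.
def test_should_calculate_total_with_tax():
    assert calculate_total_with_tax(100.00) == 120.00

def test_should_round_total_to_cents():
    # 10.99 * 1.2 = 13.188, which a customer sees as 13.19
    assert calculate_total_with_tax(10.99) == 13.19
```

If the function later switches to integer cents internally, these tests keep passing as long as the visible behavior holds, which is the whole point of testing "what, not how".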
That's it. Start there.
Don't worry about mocking, stubbing, or test coverage yet. For now, just write that first test. Then another. Then another.
Before you know it, you'll have a safety net that catches you when you fall.
And trust me, we all fall sometimes.
See you next Tuesday!
PS... Enjoying this newsletter? Consider sharing it with a friend who's also navigating the start of their career! Each new subscriber helps us create more in-depth newsletters.