Coding assistants generate code faster than ever. A feature that used to take a day can be roughed out in hours. The typing is nearly free.
You might wonder why you should still bother with tests. If the assistant can produce code that fast, surely it gets it right. That misses the point entirely. Tests were never about catching typos. They are about defining what the software should do and making sure it keeps doing it.
Tests codify intent
A coding assistant does not understand your business rules. It does not know your edge cases. It has no idea how your modules depend on each other. It generates code that looks plausible. That is not the same as code that is correct.
Tests are a living specification. They describe what the system should do in concrete, executable terms. When a test passes, it means the system behaves according to your intent. When it fails, something has changed unexpectedly. No amount of generated code replaces that.
I have seen coding assistants produce elegant solutions that pass a manual review without any obvious issues. Then a test catches a boundary condition the assistant never considered. This happens regularly. The assistant does not know what it does not know. Your tests do.
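A boundary condition like that can be pinned down in a few lines. This is a hypothetical sketch in Python: the function name, the shipping rule, and the 50.00 threshold are all invented for illustration, not taken from any real system.

```python
def free_shipping(order_total):
    """Orders of 50.00 or more ship free (hypothetical business rule)."""
    return order_total >= 50.00


def test_exactly_at_threshold():
    # The boundary itself is where generated code tends to go wrong:
    # is the rule > or >= ? Only the test records the actual intent.
    assert free_shipping(50.00) is True


def test_just_below_threshold():
    assert free_shipping(49.99) is False


test_exactly_at_threshold()
test_just_below_threshold()
```

An assistant asked to "add free shipping over 50" could plausibly generate either `>` or `>=`. The tests make the choice explicit and keep it that way.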
Test-first matters even more
Writing the test before the implementation forces you to think about behaviour. What should this function accept? What should it return? What happens when the input is missing? You answer these questions before a single line of production code exists.
When you then hand the failing test to a coding assistant and ask it to make it pass, something powerful happens. The assistant generates the implementation. The test verifies it immediately. You get feedback in seconds. The red-green-refactor cycle becomes faster, not obsolete.
This is designing, not typing. There is a reason some people argued, more than 15 years ago, that TDD really stands for Test-Driven Design rather than Test-Driven Development. The test defines the contract. The assistant fills in the implementation. You stay in control of the what while the assistant handles the how. That is a good division of labour.
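The workflow above can be sketched in a few lines of Python. Everything here is hypothetical: `parse_quantity` and its rules are invented to show the shape of the cycle, with the tests written first and the implementation filled in afterwards.

```python
# Step 1 (red): the tests exist before the implementation does.
# They answer the design questions up front: what does the function
# accept, what does it return, what happens when the input is missing?
def test_parses_valid_input():
    assert parse_quantity("3") == 3


def test_missing_input_defaults_to_zero():
    assert parse_quantity(None) == 0


# Step 2 (green): hand the failing tests to the assistant and ask it
# to make them pass. This is the kind of implementation it might produce.
def parse_quantity(value):
    if value is None:
        return 0
    return int(value)


test_parses_valid_input()
test_missing_input_defaults_to_zero()
```

The tests are the part you wrote deliberately; the implementation is the part you can afford to delegate, because the tests verify it the moment it exists.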
You are responsible
You can never blame a coding assistant for bad code in production. It is a tool. You are the professional. Your name is on the commit.
When a user reports a bug and you trace it back to code an AI assistant generated, the conversation with your team is not "the assistant got it wrong." The conversation is "why did we ship this without proper tests?" The answer to that question is always uncomfortable.
Tests are how you uphold your responsibility. They are how you verify that what you ship actually works. Skipping them because the code was generated rather than typed by hand is just negligence.
Continuous deployment demands tests
Every commit is a potential release. That is the reality of continuous deployment. There is no manual QA gate. There is no lengthy release process where someone clicks through the application. There is a pipeline and a production environment.
In this world, tests are the safety net. They are the only thing standing between your commit and your users. If the tests pass, you ship. If they don't, you fix. It is that simple.
I deploy on Friday afternoons. I have done it for years. The reason I can do it without worry is that the test suite tells me whether the system works. Not a gut feeling. Not a quick manual check. The tests tell me. And they cannot be skipped.
Coding assistants make it easier to produce more code, faster. That means more commits. More commits mean more potential releases. More potential releases mean you need more confidence in each one. Tests provide that confidence. The math is straightforward.
The discipline remains
The tools have changed. The typing is faster. An assistant can scaffold a feature in minutes that used to take hours. That is useful.
But the discipline of verifying intent before shipping remains the foundation of professional software development. Understanding the problem still matters more than typing the solution. Tests are how you prove you understood the problem.
Testing in 2026 is more relevant than ever, precisely because of coding assistants.
Resources
- TDD with an AI assistant - a companion post about practising TDD with a coding assistant
- My other TDD blog posts
- Thomas Sundberg - author