This is the fourth post in my series about working with an AI coding assistant. The rules from the previous post are in place. Now I want to talk about the results.
Bug fixing in 20 minutes
I get a bug report in Slack. Something broke in our frontend. Or backend. There is a stack trace. The frontend code is partly obfuscated, so reading its stack traces manually takes time.
I paste the bug report into my assistant and ask what happened. A minute later there is a hypothesis. I sometimes have a hunch and can point it in the right direction. But more often than not, the fix is right there. I tell the assistant to create an issue, make a plan, and then I monitor it while it solves the bug.
Some of these bugs I wouldn't have found on my own. One of them I had seen for years. We have a tree-like UI where users can expand and collapse sections. Collapsing sometimes broke the frontend. The cause turned out to be a React fragment that should have been a div. A fragment renders no DOM node of its own, so when the DOM was manipulated outside of React's control, React got confused. A well-known issue in React, apparently, but not something I would have found as a backend developer. The assistant spotted it in minutes. The fix was one line, and it seems to hold in production.
Others I would have found eventually, but it would have taken me 30 minutes just to locate the interesting areas. With the assistant I can go from bug report to a fix in production in about 20 minutes.
Tackling scary code
The other day I had to implement something in the frontend. An area that is well known to be thorny. The complexity comes from things that should be visible under some conditions and hidden under others. The data structure doesn't help either. It should be a tree, but we have a map, and that makes everything more complicated than it needs to be. We have good test coverage there, but it is still the kind of code where everyone hesitates before making changes.
The assistant understood the data structure without much explanation. The structure isn't hard to understand, just too complicated to work with comfortably. The assistant made a plan, worked through it, and an hour later I could confirm the result with a manual test. A stakeholder had asked for this feature some time ago, and I had blocked out a full week for it in case it turned hairy. It took half a day.
Refactoring and technical debt
I have paid down a lot of technical debt over the last couple of months. The pattern that works best is to keep one side fixed while changing the other. Say I want to refactor both the production code and the tests. I keep the production code as it is and refactor the tests. Commit, build, push to production. Then I refactor the production code while leaning on the refactored tests.
We had a submodule with a few god classes. One of them was about 2000 lines. I used this pattern to break it into seven smaller service classes, each focused on a business feature. Same approach for the controllers in one package and the database support in another god class. Keep the tests steady, move methods around in production code. Commit. Then update the tests. Commit again.
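The split itself is ordinary Extract Class work. Here is a minimal sketch in Python of the end state (the class and method names are hypothetical, not taken from our codebase): the former god class becomes a thin facade delegating to focused service classes, so the existing tests can keep passing unchanged while the internals move.

```python
# Hypothetical sketch of the "keep one side fixed" refactoring.
# The test at the bottom stays untouched; only the production side changes.

class PricingService:
    """One extracted service, focused on a single business feature."""
    def total(self, items):
        # items: list of (unit_price, quantity) pairs
        return sum(price * qty for price, qty in items)

class ShippingService:
    """Another extracted service with its own responsibility."""
    def cost(self, weight_kg):
        return 5.0 if weight_kg < 1 else 5.0 + weight_kg

class OrderService:
    """The former god class, now a thin facade over the new services.

    Its public interface is unchanged, so callers and tests need no edits.
    """
    def __init__(self):
        self._pricing = PricingService()
        self._shipping = ShippingService()

    def total(self, items):
        return self._pricing.total(items)

    def shipping_cost(self, weight_kg):
        return self._shipping.cost(weight_kg)

# The pre-existing test, kept steady, guards the whole move:
def test_order_service():
    svc = OrderService()
    assert svc.total([(10.0, 2), (5.0, 1)]) == 25.0
    assert svc.shipping_cost(0.5) == 5.0
```

Once the facade and the tests are green in production, a later commit can point callers directly at the new services and retire the facade.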
The whole thing took half a day and a lot of tokens. That kind of refactoring would normally sit on the backlog for months because no one wants to spend a week on it. At half a day, I just did it.
I also try to run /simplify before each commit. It catches things I miss.
Smaller prompts, better issues
I spend less time writing prompts now and more time refining issues in GitHub. When an issue is clear enough, I tell the assistant to make a plan for it. It reads the issue, prepares a plan, and I review it. Sometimes I question it or ask for clarifications. Then we implement it together, step by step.
I use the assistant to write issues too. I describe roughly what I want, it writes a first draft on GitHub, and then I read it and push back. I might disagree with one part, ask for more detail in another, or correct an acceptance criterion. After a few rounds the issue describes what I actually mean, more precisely than I would have written it myself. Not very different from iterating on an email or any other short text. As long as the scope is reasonably small, this works really well.
Working with images and agents
UI changes are always done with a prompt and an image or two now. I take a screenshot of what it looks like, describe what I want changed, and it works out after a few iterations.
For exploration tasks I tell the assistant to use agents. It parallelizes the work and comes back with findings faster than if it had to search through things one at a time.
Resources
- TDD with an AI assistant - first post in this series
- AI Coding Assistant: More Observations from a Practitioner - second post
- Spell It Out: Rules for an AI Coding Assistant - third post
- Thomas Sundberg - the author