
AI coding best practices at first glance


I just wasted 2 hours watching a coding agent implement the wrong API endpoint in my frontend. It either ignored my API spec entirely, or decided to rewrite the API to match its code.

Here are the four principles I wish I’d known before I started - learned through painful trial-and-error so you don’t have to.

1. Context matters

You wouldn’t ask a random stranger to fix your car without telling them what’s wrong with it. Yet that’s exactly what we do with AI agents - we say “add auth” without mentioning our tech stack, our existing user table, or that it needs to work with our custom middleware.

Here’s what happens without context:

Prompt: Add an admin override for testing

Result: The admin override behaved like the normal user endpoint, with none of the special privileges I would expect from an admin override

Here’s what happens with context:

Prompt: Add an admin override for testing to my astro/react frontend to interact with my python-flask based backend, where the functionality of the current API endpoint update-match needs to be updated to allow for an admin override which skips the current checks that only the users in a match can update their own matches

Result: A working admin override - once a match was complete, the admin could set the score without needing to be a match participant
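
To make that concrete, here is a minimal sketch of what such an endpoint change could look like in a Flask backend like mine. The route is the one named in the prompt; the data model, payload fields, and helper names are illustrative assumptions, not my actual code:

```python
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# In-memory stand-ins for the real database and auth layer (illustrative only).
MATCHES = {1: {"participants": [10, 11], "score": None}}

def current_user():
    # Placeholder: the real app would resolve this via session/middleware.
    return {"id": 99, "is_admin": True}

@app.post("/update-match")
def update_match():
    data = request.get_json()
    match = MATCHES.get(data["match_id"]) or abort(404)
    user = current_user()

    # Old rule: only participants may update their own match.
    # New rule: an admin override skips the participant check.
    if not user["is_admin"] and user["id"] not in match["participants"]:
        abort(403, description="Only match participants can update this match")

    match["score"] = data["score"]
    return jsonify({"status": "ok", "score": match["score"]})
```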

AI isn’t psychic (yet) - it can only work with what you give it. The more context you frontload, the less time you spend chasing weird bugs later.

But gathering all that context upfront can feel overwhelming. Here’s a shortcut I discovered: work with AI to build your context. Go into a planning mode at the start and ask the AI to ask you questions about your context - the system, its surroundings, and so on. Maybe plug in the list below until the AI has all of its questions answered. Then save that context as an artifact/document so you can reuse it in the future. Rewriting it again and again is too much of a hassle for me.

Prompt:
I want to build a new match override feature for my tournament bracket generator site. Let's plan what information you would need to build this feature

Result: 
A back-and-forth conversation between you and your AI assistant to find the most important information the AI needs to build your feature the way you want it

Best practices for context

  • Tech stack details: Framework, database, styling approach
  • Existing code patterns: Error handling, naming conventions
  • Constraints: files that can and can’t be modified, performance requirements
  • Integrations: what the surrounding system landscape looks like
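
To make this less abstract, here is a cut-down example of what such a saved context document could look like. The stack is the one from this post; everything else is illustrative:

Project context (example)
Tech stack: Astro/React frontend, Python Flask backend
Code patterns: REST endpoints, errors returned as JSON, components named *-dashboard
Constraints: API endpoints are frozen - no changes without explicit approval
Integrations: the frontend talks to the backend only through the endpoints listed in api.md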

Pro tip: Before starting any coding task, spend five minutes just writing context. Those five minutes beat a few hours of debugging.

2. Documentation is king

At various points in my life I have been a new employee at a company, a new member of a team, or a new developer on a project. When you are new to something, you first need to build your own world-view of the surroundings you are in. You bring your own backstory, as described in the previous section, but you still need to learn what you just walked into.

This is where documentation helps. When joining a new company, you want to know the mission, the product, the customers - as much as possible about the environment you are entering. The same is true for new programming projects:

  • How can I set up a development environment?
  • Where can I see the architecture and the problems we are currently working on?
  • Who can I talk to when I run into a problem?

Here’s exactly what this looked like in practice with the API example from earlier. To help with my problems, I let AI create an api.md file (a sketch follows the list below). This file outlined the most important API connections with

  • endpoints
  • interface specification
  • when to use each endpoint
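
For illustration, here is a heavily cut-down sketch of such an api.md. The endpoint names are the ones from this post; the payload fields are assumptions:

api.md (excerpt)
Endpoint: POST /update-match
Interface: {"match_id": int, "score": str}
Use when: a match participant reports or updates their own result

Endpoint: GET /get-match-result
Interface: returns {"match_id": int, "score": str}
Use when: any component needs to display the current result of a match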

This already helped with my problem of the AI hallucinating endpoints. It got even better with additions to my claude.md like the following:

Do not modify API endpoints without explicit approval and a comprehensive analysis of why the endpoint needs to be adapted to allow for the new behaviour.

Best practices regarding documentation for AI coding

While reading up on best practices I came across a great analogy in a blog post - I will link it when I remember where I got it from: “Imagine that for every task you want the agentic AI to work on, you have to onboard a new junior developer every time.”

If you have good documentation, maybe even AI-specific documentation like claude.md files, onboarding is easy - for example, highlighting that APIs must never be changed by any developer without prior explicit authorisation. Otherwise, what that junior developer does is largely down to luck and their previous skill set. With AI you are mostly rolling the dice on what you get, but you can stack the odds in your favour: provide comprehensive documentation in easy-to-read formats, maybe even linked for specific tasks.

Before:

Prompt: Please adjust the frontend admin override logic, to be able to work with the newly developed API endpoint in the backend.

Result: Bad outcome - the endpoint was not found, and the AI then hallucinated its own

After: I had an api.md file and had added the no-API-changes rule to the claude.md file.

Prompt: Please implement a new button to add a new match override point to each match component, so that an admin can override the match result as they see fit. Use the newly created admin-override endpoint linked in the @api.md file to integrate with the frontend button. Follow the rules of the @claude.md file as usual.

Result: Working admin override logic in the frontend, with the correct API contract used.

Here is everything that I have come across that helped me so far:

  • central and accessible API documentation
  • a specific claude.md (for Claude-based agents) that highlights the do’s and don’ts of working in the project, like freezing API endpoints (see the sketch after this list)
  • how to run the test suite
  • how to run the project
  • explicit mention on coding practices, sometimes even with good and bad examples
  • documented architectural decisions
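
As announced above, here is a cut-down, illustrative sketch of what such a claude.md can contain:

claude.md (excerpt)
  • Never modify API endpoints without explicit approval - the contract lives in api.md.
  • Run the test suite after every change and report the results.
  • Follow the existing error-handling and naming conventions.
  • Do not contradict documented architectural decisions without asking first.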

TLDR: Make implicit knowledge explicit, along with strict guidelines on what may be worked on and what may not.

3. Planning to be precise is time well spent.

When working with agentic AI, you will get the best outputs by being incredibly precise about the tools it should use, the changes it should make, and how those changes should be tested. This is eerily similar to development tickets in the normal development workflow: for a feature you usually get a set of desired outcomes, tests to validate that the outcome has been achieved, and a list of sometimes vague, sometimes precise steps to perform - hopefully linked to good documentation.

This is also something I ran into while working on another part of my tournament-bracket generator. I was working on the new API endpoint feature, which was used in multiple different places. Every time I prompted the AI it would only change one of those places, even though I had previously mentioned that it should change both. After a lot of back and forth, and just when I thought I had a working solution, I realized that it had forgotten to change the second part of the code again. Getting this aligned took more time than I care to admit.

Before:

Prompt: Please use the @api.md specification to include a new button when reporting match results, so that an admin override can be incorporated where an admin is able to report match results, even though he is not actively taking part in the match.

Result: After a while, a working button - but only in one of the places in the code.

After: Using the beginnings of my own ticket framework based on past working experience.

Prompt:  
 - Background: Please use the @api.md specification to include a new button when reporting match results, so that an admin is able to report match results, even though he is not actively taking part in the match.
 - Points-of-contact: Adjust the tournament-dashboard component and the user-match-dashboard component to use the same component for match confirmations where the new button is included.
 - Testing: Test that both the old way of confirming a match result through the /confirm-match-result endpoint and the new /admin-override-match-result endpoint work after your changes are made. Before and after hitting these endpoints verify that the /get-match-result has changed to what it should be.

Result: Code that actually worked as I expected - both places in the site were changed to incorporate the new behaviour.
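
To show what the testing instruction can translate into, here is a minimal pytest sketch against a Flask test client. The endpoint names come from the ticket above; the module name myapp and the payload shapes are assumptions:

```python
import pytest
from myapp import app  # assumed Flask application module

@pytest.fixture
def client():
    app.config["TESTING"] = True
    with app.test_client() as test_client:
        yield test_client

def test_confirm_match_result_still_works(client):
    # The old participant flow must keep working after the change.
    resp = client.post("/confirm-match-result", json={"match_id": 1, "score": "2:1"})
    assert resp.status_code == 200
    match = client.get("/get-match-result", query_string={"match_id": 1}).get_json()
    assert match["score"] == "2:1"

def test_admin_override_sets_result(client):
    # The new admin path must update the result without participation.
    resp = client.post("/admin-override-match-result", json={"match_id": 1, "score": "0:2"})
    assert resp.status_code == 200
    match = client.get("/get-match-result", query_string={"match_id": 1}).get_json()
    assert match["score"] == "0:2"
```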

Here are the questions I answer for every AI coding task going forward:

  • What outcome should be achieved?
  • Why should that outcome be achieved?
  • Can the outcome be measured already or does that need to be implemented?
  • How can we test for the outcome?
  • What steps need to be taken, and in which order, to properly achieve the outcome?
  • How can other test-systems be used to verify correctness during development?

TLDR: Write a very precise development ticket, grounded in context and documentation to get the best outcome.

4. AI can help with all of that.

When you are not getting the results that you want, you can solve this problem recursively. You enter into a conversation with your AI about what problems you have faced while working with it and iteratively work through those problems while generating new guidelines along the way.

Earlier, when talking about the API example, I referenced an api.md file. This file was generated entirely by AI from looking through my backend code alone. Since the backend endpoints are what truly matter, I let it focus on those. After generating the file, I looked through it and checked that everything still made sense as far as I was concerned. Once that was done, I had a new document I could reference every time I wanted the AI to work with an API endpoint.

Additionally, whenever I updated the API specification in the backend, I made it a point to update that document immediately, so I always have an up-to-date API specification that I can easily feed into any AI when it’s working on a task.

The same process can be followed when working on your claude.md file. Other people have even created a specific process for the AI to follow that further refines their claude.md by incorporating the current context and chat.

A similar setup can be used when creating AI tasks. I personally like working with a combination of Gemini 2.5 Pro and Claude: Claude is easier to add context to, but I frequently hit rate limits; Gemini seems more concise in its responses and is faster, which is why I use the combination of them. In both you can tell the AI to act as an expert product owner to generate your own core ticket structure. Mine currently looks like this, cut down for convenience:

Core Ticket Structure
Title: Concise, action-oriented summary (50-80 characters)

Format: [Component] Action - Brief description
Example: [Auth] Implement - OAuth2 integration for Google login

Description

Problem Statement: What needs to be solved and why
Acceptance Criteria: Specific, testable outcomes using Given/When/Then format
Technical Context: Relevant architecture, constraints, or dependencies
Definition of Done: Clear completion checklist

It was also created in combination with AI, but it looks very similar to what I have worked with previously and to the software tickets used throughout the industries I know.
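
Filled in for the admin override from earlier, a ticket following this structure might read like this (condensed):

Title: [Match] Implement - Admin override for reporting match results
Problem Statement: Admins cannot report results for matches they are not part of.
Acceptance Criteria: Given a completed match, when an admin submits a score via /admin-override-match-result, then /get-match-result returns the new score.
Technical Context: Astro/React frontend, Flask backend; endpoints are documented in api.md and must not be changed per claude.md.
Definition of Done: Both dashboard components use the shared confirmation component; the test suite passes.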

Helping the AI with planning can also come in a completely different form. When working on a larger task that might get split between different team members, a list of tasks that have already been accomplished, documentation of what has been changed and how, and a note of what still needs to be done are incredibly valuable things to have. If you work with agentic AIs, have them create their own tracked to-do lists so they keep their focus on the problem at hand.
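
Such a tracked list does not need to be fancy - a checklist the agent updates as it works is enough. An illustrative example:

  • [x] Add the admin-override-match-result endpoint to the backend
  • [x] Document the new endpoint in api.md
  • [ ] Add the override button to the shared match confirmation component
  • [ ] Run the test suite and verify /get-match-result after both endpoints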

Wrapping up and outlook

I hope these insights help you when working with your own agentic AIs and save you the time I wish I could get back. It was a learning experience, so I can’t complain too much :D

If you have encountered issues when working with AI, just try out these insights and they might help you avoid similar frustrations.

Start by chatting with your preferred AI about the following:

  • adding some documentation
  • improving your context
  • providing ticket style instructions

The world of agentic AI coding is still in its infancy for most people. Best practices also largely depend on the environment: which coding tool is being used and the size of the codebase it is supposed to engage with.

My beliefs on these will probably change. Once they have changed enough, I will add a disclaimer at the start of this blog post and reference a newer post with my future self’s more up-to-date view of how best to interact with these tools.

I’m still in the process of setting up my newsletter and RSS feed, but feel free to reach out if you want to be notified about future posts.

Ingo