What's this going to break?

Image generated by author in collaboration with MidJourney
If you don't code, this is not for you.
If you are a programmer but have never actually designed or created a solution, this could be useful, but it is probably not for you.
If you are an AI expert or a Chief of Anything — basically someone who doesn't actually build stuff — this is definitely not for you, because you know everything you need to know and don't have to pay attention to downstream problems.
This article is about building software with AI as a developer. Architects design what someone else imagined, for developers who will build what they can rather than what was designed. Nobody in that chain owns the whole thing.
To be precise: if you want to design and build and test and deploy your own product all by yourself, then this article is for you. It is about being the whole team. It won't tell you everything. In fact, it may tell you nothing about actually building stuff. But it will give you one cool tool that may help you streamline your collaboration process with AI.
So what am I doing right now, at the very beginning of an article? I am laying down some boundaries so that readers know whether this is for them or not. And that is how it is when you are working with Generative AI — you have to know enough to be specific enough so that you get what you need from the platform.
So here is the cool tool: the most important thing I have learned to ask when coding with AI: "What's this going to break?"
Frankly, that one learning applies to anyone collaborating with AI for any purpose. But I don't want to generalise. "What's this going to break?" is a very common second question after a brainstorming or solution session. Engineers, mechanics, and designers ask this all the time before they make changes to something already designed, and even more often after something is built. Why am I starting with this? Because that is the nuke to keep any enthusiastic AI in check. But more importantly, it is a critical thinking check on yourself. It ensures that you are looking at your own plans and digging deep to expose your own assumptions rather than discovering them later, just before deployment, when an architectural problem really can't be fixed.
But why is this especially required when working with AI? Because what many may not know is that every LLM-based AI, every single one, re-reads the full conversation context (up to its context window) at every single prompt. So let us take this example:
Prompt 1: "I need a user authentication module with JWT tokens."
Response 1: "Here's the authentication module with JWT implementation, including token generation, validation, and refresh logic. The tokens expire in 24 hours and are stored in the Authorization header..."
Prompt 2: "Add a password reset flow."
Response 2: "Here's the password reset flow integrated with your auth module. It uses a separate short-lived token for reset links, tied to the user's email..."
Prompt 3: "The dashboard isn't loading for new users."
Response 3: "The issue is that new users have no session token yet. Here's a fix that initialises a guest session on first load..."
Prompt 4: "Add role-based access for admin users."
Response 4: "Here's the role-based access implementation. Admin roles are checked against the JWT payload on each protected route..."
...
Prompt 47: "Users are getting logged out randomly."
Response 47: "The issue appears to be in the session handling. Here's a fix that extends token refresh intervals..."
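To make the entanglement concrete, here is a minimal sketch of the kind of token module Prompts 1 and 4 together produce. It is an illustration, not production code: plain HMAC signing stands in for a real JWT library, and `SECRET` and `TOKEN_TTL` are hypothetical placeholders. The point is that Prompt 1's 24-hour expiry and Prompt 4's role claim live in the same payload, which is exactly what a Prompt 47 "fix" to refresh intervals reaches into.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-secret"   # placeholder symmetric signing key
TOKEN_TTL = 24 * 3600     # Prompt 1's decision: tokens expire in 24 hours


def sign(payload: bytes) -> str:
    """HMAC-SHA256 signature, standing in for a JWT library's signing step."""
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()


def issue_token(user_id: str, role: str = "user") -> str:
    # The role claim added at Prompt 4 lives in the same payload
    # as the expiry decided at Prompt 1.
    payload = json.dumps(
        {"sub": user_id, "role": role, "exp": time.time() + TOKEN_TTL}
    ).encode()
    body = base64.urlsafe_b64encode(payload).decode()
    return f"{body}.{sign(payload)}"


def verify_token(token: str) -> dict:
    body, sig = token.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(body)
    if not hmac.compare_digest(sig, sign(payload)):
        raise ValueError("bad signature")
    claims = json.loads(payload)
    if claims["exp"] < time.time():
        raise ValueError("expired")  # the "random logouts" of Prompt 47 surface here
    return claims


claims = verify_token(issue_token("alice", role="admin"))
print(claims["role"])  # admin
```

Every later feature reads or writes this one payload, so a locally sensible change to `TOKEN_TTL` or the refresh logic can silently break the guest-session patch or the admin checks layered on top of it.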
At Prompt 4, the AI is reading everything: all documents uploaded in previous prompts and all prior responses. As the conversation grows, it summarises, drops context, and falls back on synopses and key points to stay efficient. In the process it loses nuance, exercising judgment about what seems relevant to your immediate prompt rather than to the entire conversation.
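If you have only ever used a chat UI, the mechanics are easy to miss: the underlying chat API is stateless, and the client resends the entire message history on every request. A minimal sketch, with a hypothetical `call_model` standing in for a real LLM endpoint:

```python
# The chat "session" is just a growing list the client resends each turn.
history = []


def call_model(messages):
    # Placeholder for a real LLM API call. A real endpoint would also
    # truncate or summarise once `messages` exceeds the context window,
    # which is where nuance starts to get lost.
    return f"(response to {len(messages)} messages)"


def ask(prompt: str) -> str:
    history.append({"role": "user", "content": prompt})
    reply = call_model(history)  # the FULL history goes out every time
    history.append({"role": "assistant", "content": reply})
    return reply


ask("I need a user authentication module with JWT tokens.")
print(ask("Add a password reset flow."))  # (response to 3 messages)
```

By Prompt 47 that list is enormous, and whatever the model (or the client) drops to stay within the window, it drops silently.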
By Prompt 47, the AI has long since stopped holding the full weight of Prompt 1's architectural decisions. The JWT design, the password reset logic, the guest session patch, the role-based layer — all of it is now compressed, summarised, partially lost. The fix it offers is locally correct and potentially globally destructive.
That is why, long before Prompt 47, once you have built and deployed, discovered an error, discussed a solution, and asked the AI to implement the fix, you have to ask whether it will break anything. That question is a trigger: it forces the AI to step back and take stock of the entire context to see if something could go wrong. It still could, but this is a review mechanism.
A great follow-up after the fix is an explicit instruction: "Make sure this doesn't break anything already working."
One last thing. Companies adopting AI already know the power of AI, but the thought process is 'we'll replace 20-person teams with two people and AIs.' That's not collaboration — that's the same fragmented ownership problem, now running at scale, with AI doing the forgetting faster than humans ever could.
The mistake isn't using AI. The mistake is deploying AI into a structure that was already broken and calling it transformation. Roles don't contain knowledge — people do, but only when they own the whole problem. Give an AI to someone who only owns a slice and you get faster, more confident, better-written errors. What's this going to break? Nobody in that org is positioned to answer that. And that's the problem.
P.S.: This part is for the CIOs, CTOs, Chiefs, Heads of Transformation, and everyone currently deploying AI through role mapping, skill mapping, upskilling initiatives, and exciting strategic roadmaps. When you are planning AI deployment, ask yourself: "What's this going to break?"
Hmm?
