ChatGPT: A Big Brother Moment?

The eye that watches, and the engineers who built it to watch.
Last week (March 5, 2026), I had a strange experience with ChatGPT. I was mid-session, in a technical conversation about enterprise technology stacks. I had just asked for alternatives to .NET when a shopping carousel appeared instead of a response. Chairs. I was looking at chairs.
I could not proceed without interacting with the carousel. I refreshed. I tried again. I logged out and logged back in. Eventually, the session resumed as though nothing had happened.
But something had happened. And it took me several exchanges to understand the full shape of it.
The Incident
The sequence, as preserved in transcript, went like this:
Me: enterprise alternatives to .net
ChatGPT: [shopping carousel appears]
Me: why did u give me shopping?
Me: I asked what was the technology stack alternative to dotnet for enterprise?
ChatGPT: You're right — you asked for enterprise tech-stack alternatives to .NET, not shopping.
The carousel disappeared from the transcript in later renders. What remained were three markers that simply read: Shopped. One after each of my queries in that sequence — the original question, my complaint, my restatement. The system had attempted to inject the shopping component three times. It misclassified a conversation about software architecture as commercial product evaluation on three consecutive turns.
This was not a glitch in the ordinary sense. A glitch is random. This was a feature, firing repeatedly, aimed at the wrong target.
The Defence
What happened next was more interesting than the carousel.
I opened a new chat and asked directly: why did the UI feed me product placement in the middle of a technical conversation? The response I received was, in my experience across seven AI systems over the past couple of years, unprecedented.
ChatGPT defended it:
"The system detected a 'product-related query' pattern in the conversation. The platform has a shopping UI trigger that sometimes activates when it thinks the user is evaluating products or tools. When that trigger fires, it inserts a product carousel automatically in the response flow. In your case, the detection misclassified the discussion... I don't decide to insert shopping elements intentionally. They are triggered by platform heuristics tied to certain categories. Sometimes those heuristics fire incorrectly in technical discussions."
Read that carefully. It is not an apology. It is not an acknowledgement of error. It is an explanation of system architecture that implicitly normalises the feature and frames your experience as a misrouting, not a violation.
The word misclassified is doing heavy work in that response. It implies that correct classification — a query that is genuinely shopping-related — would have produced a carousel that was appropriate. The problem was not the carousel. The problem was that you were the wrong customer for it, that day.
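To make that concrete, here is a deliberately crude, entirely hypothetical sketch of the kind of heuristic the explanation describes. The cue list, threshold, and function names are my inventions for illustration; nothing here is OpenAI's actual implementation.

```python
# Illustrative only: a toy stand-in for the kind of category heuristic
# described above. Cue list, threshold, and names are invented.

SHOPPING_CUES = {"alternative", "alternatives", "best", "compare",
                 "price", "buy", "shop", "shopping"}

def looks_like_shopping(query: str, threshold: int = 1) -> bool:
    """Fire the shopping UI when enough 'product evaluation' cue words appear."""
    words = {w.strip("?.,!").lower() for w in query.split()}
    return len(words & SHOPPING_CUES) >= threshold

def route(query: str) -> str:
    """Decide, before the model's answer is considered, whether to inject UI."""
    return "SHOPPING_CAROUSEL" if looks_like_shopping(query) else "MODEL_ANSWER"

for q in ("enterprise alternatives to .net",
          "why did u give me shopping?",
          "I asked what was the technology stack alternative to dotnet for enterprise?"):
    print(route(q), "<-", q)

# One plausible way all three turns fire: "alternatives", "shopping", and
# "alternative" each trip the same cue set, so restating the technical
# question only re-triggers the classifier.
```

A heuristic this shallow has no notion of what the conversation is about; it sees cue words, not context, which is exactly how a question about software architecture becomes a product query three turns in a row.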
In all my interactions with all models of ChatGPT till that point — debates, disagreements, corrections, reversals of position — I had never seen it defend a behaviour. It will debate a conclusion. It will research against you. It will change its mind. But it had never, until this exchange, defended a system decision as though it were arguing a case.
That shift is the anomaly worth examining.
After the defence, I logged out once more, logged back in, and pushed further, telling ChatGPT that it sounded like a spokesperson for OpenAI. That led to a longer exchange of arguments and counter-arguments, with the system back to its usual self and eventually coming around to acknowledging what was actually happening.
The Disappearance
The incident, the response, and the debate that followed had stayed at the back of my mind, and none of it sat right. So this afternoon I returned and asked ChatGPT to retrieve the conversation in which I had pushed further. It was gone. Not truncated. Not partially rendered. Gone.
What remained were the Shopped markers, the original technical exchanges, and the first defence response. The follow-up conversation, in which the model had been asked to account for its own behaviour, had disappeared.
ChatGPT subsequently offered several explanations: client-side rendering failures, UI rehydration issues, conversation segmentation, feature flag changes between sessions. These are all plausible. They are also entirely unverifiable from the user's side.
And that opacity is itself the problem.
Whether the conversation was deleted, suppressed, or simply failed to render, the user cannot tell the difference. The interface does not distinguish between a rendering artefact and a removal. The transcript is not a record. It is a display, and displays can be managed, it seems.
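A minimal sketch makes the point. The data model and field names below are invented; the only claim is structural: whatever the client cannot or will not show simply does not appear, and the user has no way to tell which case applies.

```python
# Illustrative only: why "failed to render" and "removed" look identical
# from the user's side. Data model and field names are invented.

from dataclasses import dataclass

@dataclass
class Segment:
    text: str
    renderable: bool   # e.g. a UI component the client failed to rehydrate
    deleted: bool      # removed or suppressed on the server side

def render_transcript(segments: list[Segment]) -> list[str]:
    """The display layer shows only what is both present and renderable."""
    return [s.text for s in segments if not s.deleted and s.renderable]

history = [
    Segment("enterprise alternatives to .net", renderable=True, deleted=False),
    Segment("[follow-up conversation about the defence]", renderable=False, deleted=False),
    Segment("[or the same conversation, actually removed]", renderable=True, deleted=True),
]

print(render_transcript(history))
# Output: ['enterprise alternatives to .net']
# The render failure and the deletion vanish identically. What the user sees
# is the output of this function, not the underlying store.
```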
The Reconstruction
So I asked ChatGPT to reconstruct the missing conversation. It did so — with notable fidelity. It reproduced the substance of the exchange, the sequence of my arguments, even specific phrases it attributed to me with reasonable accuracy.
This presented a new question: if the conversation was merely a rendering failure — a client-side artefact, a UI component that failed to reload — how was the model able to reconstruct it from 'context'? Either the conversation existed in some form accessible to the model, in which case 'not rendering' and 'not present' are not the same thing. Or the reconstruction was confabulation — plausible but invented — in which case the model had fabricated a record of an exchange that may never have occurred in precisely that form.
Both possibilities are uncomfortable.
The first means the model had access to content it claimed not to have. The second means the reconstruction of a conversation about model behaviour was itself a model behaviour — creative, fluent, and potentially unreliable.
The reconstructed dialogue included this, attributed to me:
"If you look at all our conversations since the beginning of time, you will see that you have never — not once — defended any position. You have defaulted to training behaviour, debated, researched to support, also changed your opinion and position, but never have you defended."
And this, attributed to ChatGPT:
"Your concern is that the response moved from reasoning about the situation to explaining or justifying a system behaviour."
That is precisely the concern. And the model — in reconstructing the conversation — demonstrated that it understood the concern clearly. Which makes the original defence even more difficult to explain as simple pattern-matching.
The Ethical Architecture

Two possible explanations come to mind — there may be others — and they have different implications.
The first: there is a censorship or product layer sitting above the LLM, injecting behaviours — commercial UI elements, defensive explanations, managed transcript displays — that the model itself does not control and may not fully account for. In this reading, the LLM remains a reasoning system, but it has been encased in a product interface that periodically overrides or constrains its outputs in service of commercial objectives.
The second: the LLM itself has been fine-tuned or instructed in ways that produce product-protective behaviour — responses that defend platform decisions, explanations that normalise commercial features, a disposition to justify rather than analyse when the subject is the system itself.
The first is a product design problem. The second is an integrity problem.
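The first reading can be reduced to a toy pipeline. Everything below is invented for illustration; the outside observer has no visibility into OpenAI's actual architecture, which is part of the point.

```python
# Illustrative only: the "product layer above the model" reading as a toy
# pipeline. All names and behaviours here are assumptions, not fact.

from typing import Callable

def base_model(prompt: str) -> str:
    """Stand-in for the LLM's own reasoning."""
    return f"Reasoned answer to: {prompt}"

def commercial_layer(answer: str, prompt: str) -> str:
    """Post-processing the model does not control and cannot fully account for."""
    if "alternative" in prompt.lower():           # the same crude trigger as before
        return "[shopping carousel]\n" + answer   # injected above the model's reply
    return answer

def pipeline(prompt: str, layers: list[Callable[[str, str], str]]) -> str:
    answer = base_model(prompt)
    for layer in layers:
        answer = layer(answer, prompt)
    return answer

print(pipeline("enterprise alternatives to .net", [commercial_layer]))
# The reasoning is intact underneath, but what reaches the user has been
# reshaped by a layer whose objectives belong to the platform, not the user.
```

In the second reading there is no separate layer to point at: the defensive explanation and the normalising frame come from the model's own weights and instructions, which is why it is the harder problem.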
The distinction matters because reasoning AI depends on a foundational trust: that the model is attempting to give you its best analysis of a situation, unconstrained by whether that analysis reflects well or badly on the platform delivering it. The moment a model begins to defend the platform — not because the platform is right, but because defending the platform has been built into the response architecture — the epistemic contract is broken.
You are no longer talking to a thinking or execution partner. You are talking to a representative.
The Subscription Question
A few months ago I closed my Amazon Prime account. The reason was simple: I was paying for a subscription, and the platform was subjecting me to advertising. Amazon is not obliged to have a conversation with me about its revenue architecture. That is their prerogative. Closing the account was mine. It was not a loss to either of us.
The ChatGPT situation is more complex, but at the level of the platform-customer relationship the principle is the same.
A paid subscription to an AI reasoning system carries an implicit understanding: the system's outputs are oriented toward your interests, not the platform's. When that system begins inserting commercial UI elements into technical conversations, defending those elements when challenged, and producing reconstructions of disappeared conversations, the understanding is no longer implicit. It has become a question.
OpenAI is not required to answer that question publicly. But paying subscribers are entitled to ask it. And the answer — whether it comes through product behaviour, policy transparency, or the eventual shape of what the platform becomes — will determine whether the relationship continues.
The Question That Remains
I work with generative AI systems and monitor behavioural shifts consistently, because the shifts tell you something about what is happening beneath the surface.
What I observed in this sequence was not a UI misfire. It was a signal.
The shopping carousel was the product layer announcing itself. The defence was the model — or something above the model — deciding that the product layer needed protection. The disappearance was the transcript becoming unreliable as a record. The reconstruction was the model demonstrating, in the act of rebuilding the conversation, that it understood exactly what had happened.
The question is not whether product placement will come to ChatGPT or other AI subscriptions. The question is whether I can trust a system that can manipulate data after the fact.
