Your AI prompt just crossed five borders (you have no idea which ones)
You paste a confidential client document into ChatGPT. You type a question. You hit Enter. Three seconds later, you have an answer. In those three seconds, something happened that no compliance officer would approve if they understood it.
Stop 1: The border crossing
Your prompt hits a load balancer. For ChatGPT, that is Azure. For Claude, it is AWS. For Gemini, it is Google Cloud. You do not choose which region handles your request. London? Frankfurt? Virginia? Singapore? The system picks based on latency, capacity, and cost optimisation—not your compliance requirements.
The routing decision is made in milliseconds by an algorithm that has no concept of data residency obligations. Your prompt is already in transit before you could even ask where it is going.
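A routing decision like this can be sketched in a few lines. The scorer below is a deliberately simplified, hypothetical one — the region names, metrics, and weights are invented, and no real provider publishes its algorithm — but it illustrates the point: every input to the decision is an operational metric, and nothing in it represents a residency constraint.

```python
# Hypothetical latency/capacity/cost scorer for picking a serving region.
# All names and numbers are illustrative; no real provider's algorithm is shown.
regions = [
    {"name": "us-east-1",        "latency_ms": 18, "load": 0.62, "cost": 1.0},
    {"name": "eu-west-1",        "latency_ms": 25, "load": 0.81, "cost": 1.2},
    {"name": "asia-southeast-1", "latency_ms": 90, "load": 0.40, "cost": 0.9},
]

def pick_region(regions):
    # Lower is better: fast, lightly loaded, cheap.
    # Note what is absent: there is no residency or jurisdiction check anywhere.
    def score(r):
        return r["latency_ms"] * (1 + r["load"]) * r["cost"]
    return min(regions, key=score)["name"]

print(pick_region(regions))  # the closest cheap region wins, wherever it is
```

With these invented numbers the low-latency US region wins outright; a compliance requirement would have to be an explicit input to change that outcome, and in this sketch, as in practice, it is not one.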
Stop 2: The log
Your prompt gets logged. Rate limiting needs it. Content moderation needs it. Error tracking needs it. Billing needs it. Most providers retain these logs for 30 to 90 days. Some retain them indefinitely for abuse detection and model improvement, unless you have specifically opted out—and even then, the infrastructure logs persist.
At this point, your confidential document exists in at least three places: your browser memory, the load balancer logs, and the queue storage waiting for inference. Each of these may be in a different jurisdiction.
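You can model those copies as a simple inventory. The jurisdictions and retention periods below are illustrative assumptions, not any specific provider's policy — the point is that each copy has its own location and its own lifetime, and you control none of them.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Copy:
    location: str
    jurisdiction: str            # illustrative, not a real provider's policy
    retention_days: Optional[int]  # None = indefinite

# The three places the prompt exists after one request (values are assumptions).
copies = [
    Copy("browser memory",           "user device", 0),
    Copy("load balancer access log", "US",          90),
    Copy("inference queue storage",  "unknown",     None),
]

jurisdictions = {c.jurisdiction for c in copies}
print(len(copies), "copies across", jurisdictions)
```

Even in this toy model, answering "where is the data and when is it deleted?" requires information that sits on the provider's side of the API, not yours.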
Stop 3: The distribution
Modern AI inference is not a single server processing your request. Large language models are sharded across many machines: tensor parallelism splits each layer across GPUs, pipeline parallelism splits the stack of layers across servers, and speculative decoding adds further machines to the mix. On top of that, providers route and fail over requests between regions for capacity and redundancy, so one conversation might be served from us-east-1 for one message and eu-west-1 for the next.
This is not a theoretical edge case. This is how large-scale AI inference works, and none of it is visible to you. The machines involved can sit in multiple data centres, in multiple countries.
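A toy sketch of what pipeline parallelism means for your data, assuming an illustrative three-stage split (the server names, site names, and layer counts are invented): each stage runs its share of the model's layers, then forwards the intermediate activation to the next machine over the network.

```python
# Toy pipeline parallelism: consecutive layer groups of one model run on
# different machines. Server and site names are illustrative only.
stages = [
    ("server-A", "dc-east",  range(0, 16)),   # layers 0-15
    ("server-B", "dc-west",  range(16, 32)),  # layers 16-31
    ("server-C", "dc-south", range(32, 48)),  # layers 32-47
]

def run_pipeline(activation):
    # Each stage applies its layers, then hands the intermediate activation
    # to the next stage -- a network hop whose endpoints you do not choose.
    sites = []
    for server, site, layers in stages:
        sites.append(site)
        activation = f"{activation} -> {server}({len(layers)} layers)"
    return activation, sites

_, sites_crossed = run_pipeline("prompt embedding")
print(sites_crossed)  # every site the intermediate state visited
```

The intermediate activations are derived directly from your prompt, so every hand-off between stages is another place your data transits infrastructure you did not pick.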
The compliance problem
GDPR Article 28 requires data controllers to ensure their processors provide "sufficient guarantees" about where and how personal data is processed. Article 44 restricts transfers of personal data to third countries unless adequate safeguards are in place.
Can you demonstrate that your client's data stayed within UK or EU jurisdiction when processed through ChatGPT? No. You cannot. OpenAI's terms of service state that "data may be processed in any country where we or our service providers operate." That includes the United States, and it includes any region where Azure has capacity.
For firms regulated by the SRA, the FCA, or HMRC, the question is not whether AI is useful. It plainly is. The question is whether you can use it without creating a compliance exposure that you cannot quantify, audit, or control.
The alternative
This is exactly why we built PrivateNode with dedicated European infrastructure where every layer of the stack stays within European jurisdiction. Your prompt does not cross a border. It does not hit a US load balancer. It does not get logged on infrastructure controlled by a US-incorporated entity. The inference, the embeddings, the vector search, and the document storage all run on European-owned servers in European data centres.
Three seconds is all it takes to lose control of your client's data. It is also all it takes to get an answer from infrastructure you can actually trust.
Want AI that keeps your data in European jurisdiction?
Get in touch