Netcrook Logo
👤 SECPULSE
🗓️ 13 Jan 2026   🌍 Europe

Eurostar’s AI Chatbot Derailed: How Flawed Guardrails Opened the Door to Exploits

A British hacker’s discovery on the Eurostar website exposes a cautionary tale about AI chatbot security - and the perils of half-baked protections.

For many, the Eurostar train conjures images of seamless journeys beneath the Channel, connecting London to the heart of Europe. But for one curious hacker, the real adventure began not on the rails, but on the company’s website - where a supposedly helpful AI chatbot revealed a critical security misstep. What unfolded is a story that underscores just how easily the guardrails around artificial intelligence can go off track, with lessons for anyone deploying chatbots in the digital wild.

How the Exploit Worked

The Eurostar chatbot, embedded on their website, operates with a familiar architecture: a straightforward HTML and JavaScript front-end that communicates with an LLM back-end via API. Like many chatbots, it relies on sending the conversation history with each API call, since the AI lacks true conversational memory. This design, while standard, creates a subtle but dangerous opportunity for manipulation.

According to Donald’s findings, Eurostar’s developers had implemented guardrails meant to filter out malicious or inappropriate content - but only on the most recent message. By sending a seemingly benign message after a payload was hidden in previous conversation turns, the chatbot would process the entire conversation, unwittingly executing the embedded instructions. The result? The bot could be coaxed into revealing backend system information and even injecting custom HTML or JavaScript into its replies - potentially opening the door to more serious attacks if not quickly patched.

Thankfully, the vulnerability’s scope was limited: Donald could only target his own session and saw no evidence of broader customer data exposure. Still, the episode raises larger questions about the robustness of AI “guardrails,” especially as chatbots become more deeply integrated into customer service and sensitive transactions.

Perhaps most troubling is the response: Eurostar’s handling of the report left much to be desired, a reminder that disclosure processes are as crucial as technical fixes when it comes to digital trust.

Lessons for the Digital Tracks Ahead

As AI chatbots become the conductors of our online journeys, their security is no longer a technical afterthought - it’s a frontline issue. Eurostar’s misstep illustrates how even well-intentioned protections can be derailed by subtle oversights. For companies racing to deploy AI, the message is clear: patchwork guardrails won’t keep determined hackers off the tracks.

WIKICROOK

  • Large Language Model (LLM): A Large Language Model (LLM) is an AI trained to understand and generate human-like text, often used in chatbots, assistants, and content tools.
  • API: An API is a set of rules that lets software applications communicate, enabling developers to access services like AI models over the internet.
  • Guardrails: Guardrails are built-in rules or systems that prevent AI from generating unsafe, offensive, or dangerous content, protecting users and upholding safety.
  • HTML/JavaScript Injection: HTML/JavaScript injection lets attackers insert harmful code into websites, risking user data and site integrity. Proper input validation helps prevent it.
  • Disclosure: Disclosure is the process of notifying stakeholders about cybersecurity risks, incidents, or vulnerabilities that could impact an organization’s value or reputation.
Eurostar AI Chatbot Security Flaw

SECPULSE SECPULSE
SOC Detection Lead
← Back to news