Elon Musk’s AI chatbot Grok is courting controversy as users are urged to upload sensitive health data for medical advice—a move that bypasses strict European regulations and sparks concerns about privacy, safety, and regulatory oversight.
A new wave of cybercrime uses AI-powered chatbots to impersonate trusted assistants like Google Gemini, luring victims into buying a fake cryptocurrency called 'Google Coin.' This investigative feature breaks down how the scam works and what red flags to watch for.
The UK’s data watchdog has opened a formal probe into Grok, the AI chatbot from Elon Musk’s xAI, amid allegations of personal data misuse and child safety failures. The investigation could set new standards for AI accountability.
Apple is reportedly preparing to overhaul Siri with a new chatbot-style interface, enabling persistent conversations and deeper integration across its ecosystem. As the tech giant leans into conversational AI, experts debate whether the move will set a new standard for privacy-conscious assistants or introduce new risks.
A flaw in Eurostar’s AI chatbot allowed a hacker to bypass security filters, revealing system info and injecting code—underscoring the urgent need for stronger AI safeguards.
When ethical hackers flagged severe vulnerabilities in Eurostar’s AI chatbot, the company responded with accusations of blackmail instead of thanks. The incident exposes the risks of rapid AI adoption without proper security, and the rocky road that responsible security disclosure can face.
Two ex-contractors deleted sensitive US government data, then used AI chatbots to try to cover their tracks. The digital paper trail they left behind proved their undoing in this bungled cyber caper.