Microsoft paid $2.3 million to hackers at Zero Day Quest 2026 after nearly 700 cloud and AI security flaws were uncovered, highlighting the company's renewed focus on transparency and collaboration following harsh criticism of its security culture.
This week, Netcrook uncovers the chilling reality of fiber optic espionage, the resurgence of Windows rootkits, and the race between criminals and defenders in the AI era.
#Fiber Optic Tapping | #Windows Rootkit | #AI Vulnerabilities
Security researchers at RSAC have uncovered a method to bypass Apple Intelligence's AI safety protocols, combining prompt injection and Unicode manipulation to potentially manipulate user data. Apple has since patched the flaws, but the incident raises urgent questions about the future of AI security.
#Apple Intelligence | #AI vulnerabilities | #RSAC researchers
Anthropic’s Claude Mythos Preview is redefining cybersecurity by autonomously uncovering and exploiting zero-day vulnerabilities in widely used software. This new AI model exposes decades-old bugs and signals a pivotal shift in how defenders and attackers will operate.
Prompt engineering isn’t just about improving AI—it’s about uncovering vulnerabilities and privacy risks. Dive into the investigative world of prompt testing and learn why it matters for cybersecurity.
Google’s Vulnerability Reward Program hit a historic $17 million payout in 2025, with a sharp focus on AI security and live collaborative hacking events. Explore how Google and ethical hackers are tackling the next wave of cyber threats.
Prompt injection is turning enterprise AI agents into unsuspecting security liabilities. With exploits like EchoLeak and rising multi-agent attacks, most SOCs remain dangerously unprepared for this new breed of semantic threats.
#AI vulnerabilities | #Prompt injection | #Enterprise security
From stealthy telecom backdoors to AI jailbreaks and rapid-fire exploits, cyber attackers are playing the long game. Netcrook unpacks this week’s most consequential threats, policy moves, and persistent risks.
Critical flaws in LangChain and LangGraph, core frameworks of the AI world, allowed attackers to access sensitive files, secrets, and conversation histories. With millions of downloads and countless dependencies, the impact of these vulnerabilities could be vast.
Security researchers have identified eight critical attack vectors inside AWS Bedrock, Amazon’s AI platform—ranging from log manipulation to agent hijacking and prompt poisoning. Learn how these threats could compromise your enterprise data and what steps security teams must take to defend against them.