The EU Code of Practice: Practical Governance for the AI-Governed Enterprise

The European Union has published a voluntary Code of Practice for general-purpose AI, complementing the AI Act obligations that take effect on August 2, 2025. Crafted by independent experts and stakeholders, the code emphasizes safety, transparency, security, and the use of copyrighted content. Signatories, potentially including major LLM providers, will benefit from reduced administrative burdens and greater legal clarity. The move signals the EU's embrace of 'secure-by-design' AI while controlling risk, and is especially relevant for sectors managing sensitive data and vendor-produced AI systems.

Supply Chain Invisibility: The Silent Breach Catalyst

A new LevelBlue report reveals that only 23% of enterprises have high visibility into their software supply chains, even though 40% of CEOs consider it their top security risk. Poor visibility correlates with breaches: 80% of organizations with limited visibility suffered incidents in the past year, versus 6% of those with strong visibility. Emerging regulations (EU Cyber Resilience Act, U.S. SBOM mandates) are accelerating action, but risk remains. Firms must embrace software bills of materials and integrate AI-driven monitoring to reduce third-party exposure.
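As a concrete starting point for the SBOM recommendation above, here is a minimal sketch of auditing a CycloneDX-style software bill of materials for visibility gaps. The sample SBOM and the component names in it are hypothetical illustration data, not from the report:

```python
import json

# Hypothetical CycloneDX-style SBOM, inlined for illustration.
SBOM_JSON = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {"type": "library", "name": "requests", "version": "2.31.0"},
    {"type": "library", "name": "leftpad-clone", "version": ""},
    {"type": "library", "name": "urllib3", "version": "1.26.5"}
  ]
}
"""

def unpinned_components(sbom_text: str) -> list[str]:
    """Return names of components whose version is missing or empty --
    a basic visibility gap worth flagging in vendor-supplied software."""
    sbom = json.loads(sbom_text)
    return [c["name"] for c in sbom.get("components", []) if not c.get("version")]

print(unpinned_components(SBOM_JSON))  # -> ['leftpad-clone']
```

A real pipeline would run a check like this on every vendor-delivered SBOM and feed the flagged components into monitoring, rather than inspecting one inline document.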

Deepfake Governance Attacks: A New Frontier of Trust Exploitation


Deepfake governance attacks are a fast-evolving social-engineering weapon. Cloned voices bypass human trust, exploiting familiarity and honor-based approval workflows. This is no longer just a government threat; CEOs and boards face it too. Organizations need new authentication mechanisms for high-stakes communications and decision-making to combat this rising risk. Trust must be earned, every time.
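One simple form such authentication could take is an out-of-band challenge: a one-time code is delivered over a trusted channel and must be read back by the caller before any sensitive request is honored. This is a minimal sketch of that idea, not a prescribed protocol; the function names are illustrative:

```python
import hmac
import secrets

def issue_challenge() -> str:
    """Generate a short one-time code, delivered over a separate trusted
    channel (e.g. an internal app), never over the suspect call itself."""
    return f"{secrets.randbelow(10**6):06d}"

def verify_response(expected: str, spoken: str) -> bool:
    """Constant-time comparison of the code the caller reads back,
    avoiding timing side channels."""
    return hmac.compare_digest(expected, spoken)

code = issue_challenge()
# The caller must read the code back; only then is the request honored.
print(verify_response(code, code))  # -> True
```

The point is that a cloned voice cannot reproduce a secret it never received; the verification rides on a second channel, not on how convincing the voice sounds.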

Zero‑Trust for AI-First Workplaces: Lessons from Zscaler


Traditional network architectures still rely on IP allowlists, VPN access, and perimeter security, implicitly trusting users once they are inside. With distributed workforces and AI-integrated SaaS platforms, attackers who gain a foothold can move laterally. Every user or AI agent becomes a potential threat vector, so removing implicit trust means authenticating and protecting each identity, session, and API call.
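The shift described above can be sketched in a few lines: an authorization check that ignores network location entirely and grants access only on a verified per-session credential. This is an illustrative toy using an HMAC token; the key, service names, and function signatures are assumptions, and a real deployment would use asymmetric keys and a proper token format:

```python
import hashlib
import hmac

SIGNING_KEY = b"rotate-me"  # hypothetical shared secret for the sketch

def sign(identity: str, session: str) -> str:
    """Mint a per-session credential bound to an identity."""
    msg = f"{identity}:{session}".encode()
    return hmac.new(SIGNING_KEY, msg, hashlib.sha256).hexdigest()

def authorize(identity: str, session: str, token: str, source_ip: str) -> bool:
    """Zero-trust check: source_ip is deliberately ignored; only the
    verified identity/session credential grants access, on every request."""
    expected = sign(identity, session)
    return hmac.compare_digest(expected, token)

tok = sign("svc-ai-agent", "sess-42")
print(authorize("svc-ai-agent", "sess-42", tok, source_ip="10.0.0.5"))      # -> True
print(authorize("svc-ai-agent", "sess-42", "forged", source_ip="10.0.0.5"))  # -> False
```

Note that an AI agent is authorized exactly like a human user here: being "inside" the network buys it nothing, which is the core of the zero-trust posture.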

The UBS Data Leak: A Wake-Up Call for Rethinking Third-Party Risk Management

In June 2024, global financial powerhouse UBS became the latest victim of a cyberattack, not through a breach of its own defenses but through a vulnerability at a third-party provider. The leak exposed sensitive employee data after the ransomware group LockBit 3.0 targeted a third-party vendor that provided HR and payroll services. This incident is just […]

AI Governance: The Cornerstone of Cyber Resilience—Insights from Axios Boston

Photo: David Fox Photography, on behalf of Axios

At the Axios Boston Security roundtable in June 2025, cybersecurity leaders converged to dissect AI's impact on digital defense. Their verdict was unanimous: without formal governance frameworks, AI adoption amplifies risk rather than mitigating it.

The Dual-Edged Sword of AI in Security

AI models can rapidly process threat intelligence, identify novel malware signatures, and automate […]

Turning Human Vulnerabilities into Strategic Strengths in AI-Driven Cybersecurity

The paradox at the heart of modern cybersecurity is this: even as we deploy the most advanced AI models to detect anomalies in real time, a single voice-cloned phone call can render those defenses moot. The recent Scattered Spider attack on Qantas, which exposed data for up to 6 million customers through a compromised third-party […]