Latest News + Updates

Proactive Defense: The UK’s Vulnerability Research Initiative as AI Security Blueprint

The UK’s National Cyber Security Centre (NCSC) established the Vulnerability Research Initiative (VRI) to expand vulnerability discovery, including plans to target weaknesses in AI-powered systems. Recognizing the limits of its internal team, the NCSC will collaborate with third-party researchers to investigate both common and specialized technology vulnerabilities. The program will coordinate expert efforts, share findings with industry, and aim to identify and mitigate risks before they’re exploited, particularly those tied to AI modules and supply-chain components.

+ Read More

The EU Code of Practice: Practical Governance for the AI-Governed Enterprise

The European Union published a voluntary Code of Practice for general-purpose AI, complementing its upcoming AI Act, whose obligations for general-purpose AI take effect August 2, 2025. Crafted by independent experts and stakeholders, the code emphasizes safety, transparency, security, and the use of copyrighted content. Signatories, potentially including major LLM providers, will benefit from reduced administrative burdens and clearer legal frameworks. The move signals the EU’s embrace of ‘secure-by-design’ AI while controlling risk, and is especially relevant for sectors managing sensitive data and vendor-produced AI systems.

+ Read More

Supply Chain Invisibility: The Silent Breach Catalyst

A new LevelBlue report reveals that only 23% of enterprises have high visibility into their software supply chains, even though 40% of CEOs consider it their top security risk. Poor visibility correlates with breaches: 80% of organizations with weak visibility suffered incidents in the past year, versus 6% of those with strong visibility. Emerging regulations (the EU Cyber Resilience Act, U.S. SBOM mandates) are accelerating action, but risk remains. Firms must embrace software bills of materials and integrate AI-driven monitoring to reduce third-party exposure.
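As a rough illustration of what SBOM-driven visibility looks like in practice, the sketch below scans a minimal CycloneDX-style component list and flags entries missing supplier or version metadata. The document format and field names are simplified assumptions, not a full implementation of the CycloneDX specification.

```python
import json

# Hypothetical minimal SBOM check: flag components in a CycloneDX-style
# JSON document that lack supplier or version info, i.e. the visibility
# gaps the LevelBlue report describes.
def find_visibility_gaps(sbom_json: str) -> list[str]:
    sbom = json.loads(sbom_json)
    gaps = []
    for comp in sbom.get("components", []):
        missing = [f for f in ("supplier", "version") if not comp.get(f)]
        if missing:
            gaps.append(f"{comp.get('name', '<unnamed>')}: missing {', '.join(missing)}")
    return gaps

example = json.dumps({
    "components": [
        {"name": "libfoo", "version": "1.2.3", "supplier": "FooCorp"},
        {"name": "libbar", "version": "0.9"},  # no supplier on record
        {"name": "libbaz"},                    # no supplier, no version
    ]
})
print(find_visibility_gaps(example))
```

A real pipeline would go further, matching each identified component against vulnerability feeds, but even this basic completeness check surfaces where third-party visibility breaks down.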

+ Read More

Deepfake Governance Attacks: A New Frontier of Trust Exploitation

Deepfake governance attacks are an evolving social-engineering weapon. Cloned voices bypass human trust, exploiting familiarity and honor-based workflows. This isn’t just a government threat; CEOs face it too. We need new authentication for communications and decision-making to combat this rising risk. Trust must be earned, every time.
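One hedged sketch of what "new authentication for communications" could mean: before acting on a high-stakes voice request, the recipient issues a fresh challenge over a separate, pre-agreed channel, and only a caller holding a pre-provisioned shared key can answer it. The key names and flow here are illustrative assumptions, not a specific product's protocol.

```python
import hmac
import hashlib
import secrets

# Assumption: a shared key exchanged securely in advance, out of band.
SHARED_KEY = b"pre-provisioned-key"

def issue_challenge() -> str:
    # A fresh random challenge per request prevents replay of old answers.
    return secrets.token_hex(16)

def answer(challenge: str, key: bytes = SHARED_KEY) -> str:
    # The caller proves key possession by returning an HMAC of the challenge.
    return hmac.new(key, challenge.encode(), hashlib.sha256).hexdigest()

def verify(challenge: str, response: str, key: bytes = SHARED_KEY) -> bool:
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(answer(challenge, key), response)

c = issue_challenge()
print(verify(c, answer(c)))                       # legitimate caller with the key
print(verify(c, answer(c, b"attacker-guess")))    # cloned voice without the key
```

The point is that the voice itself is never the credential; a cloned voice that cannot answer the challenge fails verification regardless of how convincing it sounds.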

+ Read More

Zero‑Trust for AI-First Workplaces: Lessons from Zscaler

Traditional network security still relies on IP allowlists, VPN access, and perimeter defenses, implicitly trusting users once they are inside. With distributed workforces and AI-integrated SaaS platforms, attackers who get inside can move laterally. Every user or AI agent becomes a potential threat vector. Removing implicit trust means protecting each identity, session, and API.
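The zero-trust idea above can be sketched in a few lines: every request, whether from a human or an AI agent, is verified per call against an explicit policy, with no trust granted by network location. The identity names, token format, and policy shape below are assumptions for illustration, not any vendor's API.

```python
import hmac
import hashlib

# Assumption: per-identity keys would come from an identity provider;
# a single demo secret is used here for brevity.
SECRET = b"demo-shared-secret"

def sign_request(identity: str, resource: str, ts: int) -> str:
    # The caller signs who it is, what it wants, and when it asked.
    msg = f"{identity}|{resource}|{ts}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def authorize(identity: str, resource: str, ts: int, sig: str,
              allowed: dict, now: int, max_age: int = 300) -> bool:
    # 1. Verify the signature (proof of identity and request integrity).
    if not hmac.compare_digest(sign_request(identity, resource, ts), sig):
        return False
    # 2. Reject stale requests to limit replay.
    if now - ts > max_age:
        return False
    # 3. Check explicit per-identity authorization; nothing is implicit.
    return resource in allowed.get(identity, set())

policy = {"analyst@corp": {"/reports"}, "ai-agent-7": {"/embeddings"}}
ts = 1_700_000_000
sig = sign_request("ai-agent-7", "/embeddings", ts)
print(authorize("ai-agent-7", "/embeddings", ts, sig, policy, now=ts + 10))  # granted
print(authorize("ai-agent-7", "/reports", ts, sig, policy, now=ts + 10))     # denied
```

Note that the AI agent is treated exactly like a human identity: it gets its own credentials and its own narrowly scoped policy entry, rather than inheriting access from the network it runs on.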

+ Read More

AI Can Help Fight Cyber Attacks

The ability to proactively identify anomalous activity and potential attacks before they can do major damage is one of the biggest benefits of AI-driven cybersecurity.

+ Read More

Managing Supply Chain Risk in a Post-COVID World

Although the full effects of the pandemic are still unknown, it is never too early to start mapping out what the future might hold for our supply chains and how to manage risk within them. Early research and analysis are encouraging, pointing to companies across industries making major changes in how they adapt their supply chains to account for both the known and unknown risks they face.

+ Read More