At a May 2025 executive panel hosted by Cyber Risk Alliance, security influencers Bruce Schneier and Renee Guttman delivered a clear message: cybersecurity must evolve from static checks to continuous, intelligent validation. Their perspectives hit especially close to home for us — both Renee and Bruce are Strategic Advisors at FireCompass, helping guide our mission to bring ethical, AI-powered red teaming to enterprise security.
Here’s what they shared — and how it maps to the future of offensive security.
AI as a Defensive Advantage
AI offers new ways to strengthen cybersecurity—not just in detection and response, but also in offensive testing.
- Useful for source code analysis, phishing defense, vulnerability triage, and more
- Augments human analysts instead of replacing them
- Defense will likely benefit from AI faster than offense, according to Schneier
FireCompass in action: Our AI agents emulate attacker behavior using static and dynamic playbooks, safely executing attack paths to validate real exposures.
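To make the pattern concrete, here is a minimal Python sketch of playbook-driven attack-path validation. It is a generic illustration under stated assumptions, not FireCompass internals: the step names, the check functions, and the rule that destructive steps require human approval are all hypothetical.

```python
# Minimal sketch of playbook-driven attack-path validation.
# Generic illustration only; step names and checks are hypothetical,
# not FireCompass internals.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Step:
    name: str
    check: Callable[[str], bool]  # True if the exposure is confirmed
    destructive: bool = False     # destructive steps are never auto-run

@dataclass
class Playbook:
    name: str
    steps: list[Step] = field(default_factory=list)

def run_playbook(pb: Playbook, target: str) -> list[str]:
    """Walk the attack path in order; stop when a step fails,
    since later steps are then unreachable in practice."""
    findings = []
    for step in pb.steps:
        if step.destructive:
            findings.append(f"{step.name}: needs human approval, skipped")
            continue
        if not step.check(target):
            break
        findings.append(f"{step.name}: exposure validated on {target}")
    return findings

# Hypothetical checks standing in for real recon and validation logic.
pb = Playbook("external exposure", [
    Step("open admin panel probe", lambda t: True),
    Step("default credential check", lambda t: True),
    Step("exploit execution", lambda t: True, destructive=True),
])
print(run_playbook(pb, "app.example.com"))
```

Chaining validated steps into a path, rather than reporting isolated scanner hits, is what separates exposure validation from plain scanning.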
The Need for AI Integrity
Trust is a core requirement for AI adoption in cybersecurity.
- AI bias, opaque logic, and misaligned incentives pose real risks
- Systems must be designed with integrity, accountability, and transparency
- Without trust, AI tools won’t be used effectively
CISOs should demand justification—not just explainability—from AI-powered tools. Counterfactual reasoning, clear logic, and human oversight are essential.
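One way to picture the difference between explainability and justification: every automated decision should carry its evidence, the explicit rule that fired, and a counterfactual stating what would have changed the outcome. The record below is a hedged sketch; the field names and the example scenario are illustrative assumptions, not a standard schema.

```python
# Sketch of a decision record that carries justification, not just a score.
# Field names and the example are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str            # what the AI tool did or recommended
    evidence: list[str]    # observable facts behind the decision
    rule: str              # the explicit logic that fired
    counterfactual: str    # what would have flipped the outcome
    needs_review: bool     # route to a human when stakes are high

d = Decision(
    action="flag login as high risk",
    evidence=["unrecognized device", "impossible travel: NYC to Lagos in 40 min"],
    rule="high risk when impossible-travel AND unrecognized device",
    counterfactual="would be low risk if the device were previously enrolled",
    needs_review=True,
)
print(d)
```

A counterfactual like this gives a human reviewer something concrete to verify, which a bare confidence score never does.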
Stop Blaming Users — Fix the Design
Schneier pushed back against the overuse of security awareness training as a cure-all.
- Most breaches are rooted in poor system design, not user mistakes
- Security should be built around human error tolerance, not perfection
- AI should enforce safe defaults, so the “easy yes” is only available when it is secure (see the sketch below)
Red teaming should uncover not just technical gaps, but systemic flaws in design and architecture.
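A small sketch of the safe-defaults idea, under assumptions: the secure configuration costs zero effort, while weakening it demands an explicit, audited justification. The settings and helper below are hypothetical, not any product’s API.

```python
# Safe defaults sketch: the easy path is the secure one, and opting out
# requires a justified, auditable override. All names are hypothetical.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class ShareSettings:
    public_link: bool = False   # private unless explicitly opened
    link_expiry_days: int = 7   # links expire unless renewed
    require_mfa: bool = True    # strong auth on by default

def weaken(settings: ShareSettings, reason: str, approver: str) -> ShareSettings:
    """Make 'yes' easy only while it is secure: insecure overrides
    must name a reason and an approver, and they leave an audit trail."""
    if not reason or not approver:
        raise ValueError("insecure override requires a reason and an approver")
    print(f"AUDIT: {approver} enabled public_link because: {reason}")
    return replace(settings, public_link=True)

default = ShareSettings()  # secure with zero user effort
risky = weaken(default, "external audit file drop", "ciso@example.com")
print(risky)
```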
The Case for Regulation
The panel discussed the growing need for strong AI regulation—especially when systems make high-stakes decisions.
- Schneier supports current EU and California AI legislation
- Warns against federal overreach that weakens state laws
- Advocates for enforceable justification in AI decision-making
Security vendors must prepare for greater scrutiny and provide full transparency in how their AI behaves.
Impact of AI on Talent and Teams
Generative AI is changing the dynamics of cybersecurity teams.
- Entry-level roles are most at risk of displacement
- Long-term talent pipelines may be disrupted
- Teams will need to rethink how skills are built and retained
The future workforce needs roles deliberately designed to balance AI automation with human skill-building.
FireCompass: A Practical Example
During the session, FireCompass was highlighted as an example of how to apply AI in a responsible, high-impact way.
- Shifts pen testing from periodic to continuous and autonomous
- Validates risks through safe, live attack emulation—not just scanning
- Requires no agents or deployments—outside-in, like real attackers
This automation-first model helps CISOs reduce risk exposure without expanding pen testing and red teaming resources.
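As a rough illustration of the outside-in, agentless model, the sketch below checks whether commonly abused services accept TCP connections from the public internet. The target host and port list are placeholders; real platforms layer asset discovery, fingerprinting, and safe exploit emulation on top. Probe only assets you own or are explicitly authorized to test.

```python
# Outside-in sketch: validate exposure from the attacker's vantage point,
# with no agent on the target. Host and ports are illustrative placeholders.
import socket

RISKY_PORTS = {21: "ftp", 23: "telnet", 3389: "rdp", 5900: "vnc"}

def reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if the port accepts a TCP connection from outside."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def external_surface(host: str) -> list[str]:
    return [f"{host}:{port} ({svc}) reachable from the internet"
            for port, svc in RISKY_PORTS.items() if reachable(host, port)]

# Only run against assets you are authorized to test.
for finding in external_surface("assets.example.com"):
    print(finding)
```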
Recommended Actions for CISOs: Shift from Point-in-Time Pen Testing to AI-Based Automated Pen Testing
- Make AI integrity a non-negotiable requirement in vendor selection
- Evaluate security tools by how they scale, justify actions, and reduce manual effort
- Move away from point-in-time testing toward continuous, AI-led validation
- Redesign workflows and roles to adapt to AI-driven transformation
Ready to Make the Shift?
If you’re exploring how to scale offensive security, validate risks continuously, and reduce noise without adding headcount, start with the FireCompass free trial.
Try it here: FireCompass Trial