“Not an existential threat”

Marion Stewart, CEO, Red Helix
Are new AI tools like Claude Code Security a threat to cybersecurity vendors? Should CrowdStrike, Palo Alto, Okta and Zscaler be worried?
AI-native security tools such as Claude Code Security are not an existential threat to established cybersecurity vendors – but they are a signal of how rapidly the threat landscape and defensive capabilities are evolving.
AI is accelerating both attack sophistication and defensive automation. Tools that can analyse code, identify misconfigurations or surface anomalies at speed are valuable, but they do not replace the need for continuous, integrated security operations across endpoint, identity, cloud and network. Cybersecurity is an ecosystem — not a single feature.
The major platforms recognise this. Vendors like CrowdStrike, Palo Alto Networks, Okta and Zscaler are already embedding AI deeply into detection, response and identity protection. CrowdStrike’s launch of AI Detection and Response (AIDR), for example, reflects a broader shift towards securing the emerging AI attack surface — including risks associated with prompt injection, model manipulation and autonomous agents.
The competitive differentiator will not be who “has AI”, but who integrates it most effectively into real-world protection, monitoring and response. The winners will be those who operationalise AI within a broader security architecture – not those who treat it as a standalone capability.
I believe that AI is becoming foundational infrastructure in cybersecurity – no longer a parallel industry.
As an MSSP leader, do you foresee a future where you use, or help clients use, these new AI tools as part of a wider cybersecurity strategy?
Absolutely — and that future is already taking shape.
Clients are actively exploring AI adoption, but they want to do so responsibly. The priority is not just innovation but governance: control of data, clarity of accountability and visibility of risk – and I believe full transparency with prospects and customers about this is key. As an MSSP, our role is to help organisations harness AI’s benefits without compromising security posture or regulatory obligations, as so often happened in the early days of public cloud.
The SOC of the future will be increasingly AI-powered, but that does not mean analyst-free in my opinion. Quite the opposite. AI is exceptionally effective at triaging noise, correlating signals across vast datasets and surfacing patterns at machine speed. That reduces alert fatigue and accelerates time to insight. However, context, judgement and decisive action during incidents remain human responsibilities.
AI augments expertise; it does not replace it. And whilst I think it will improve at a rate unprecedented in any industry, people will ultimately still be required – albeit with enhanced skills. Even with the development of agentic AI, during a live security event I believe clients still need experienced engineers who understand business impact, regulatory implications and operational priorities.
I see managed services evolving towards intelligent automation layered with expert oversight. AI will enhance detection and investigation, and in due course aid decision making and the actions taken, while our specialists focus on validation, strategic guidance, and incident response leadership, coordination and communications.
I think the future of managed security is not human versus machine – it is human plus machine. Better automation, stronger insight, and more time for skilled professionals to exercise judgement where it matters most.