10 Comments
Rainbow Roxy

This article comes at such a perfect time! I especially loved the point about 'AI in security' becoming a mainstream strategic concern. It's so true how fast things are evolving. This data is incredibly insightful and really validates what I see happening in the tech world. Great read!

Benjamin Lussert

It also comes at a perfect time, as the number of M&As in this sector is likely to increase in 2026; ServiceNow, for example, is acquiring Armis. Companies are likely to eye security firms that combine AI and cybersecurity expertise, both to acquire that talent and to protect themselves and their clients.

Erich Winkler

I am curious how the situation will develop in 2026!

Erich Winkler

Thank you! I am happy to hear that!

I found the article and the data very interesting.

Mohib Ur Rehman

Interesting read.

Jashmine P

Very insightful analysis — especially the distinction between companies that employ AI security architects versus those that actually train them.

The finding that smaller, security-focused firms act as the real feeder schools is particularly interesting and has important implications beyond hiring. From a governance perspective, it reinforces why boards must understand not just where security leaders work today, but how their risk mindset and architectural thinking are formed.

As AI risk becomes a board-level concern, insights like this help directors better evaluate security leadership depth, succession planning, and third-party risk exposure.

Erich Winkler

Thank you, that’s exactly the point.

Where leaders are formed matters more than where they currently sit. Smaller, security-focused firms tend to shape architectural judgment, risk intuition, and trade-off thinking in ways large enterprises often don’t.

For boards, that context is critical when assessing AI risk ownership, leadership depth, and whether security decisions are driven by real engineering judgment or inherited frameworks.

Jashmine P

Very true, Erich.

The environment where leaders are shaped often determines how they think about risk, trade-offs, and accountability far more than their current title or company size. That context is invaluable for boards assessing AI risk ownership.

Erich Winkler

When leaders grow up close to real constraints, failures, and trade-offs, their risk decisions tend to be owned, not abstracted.

That difference becomes very visible once AI risk stops being theoretical and starts being operational.