For the First Time in 15 Years, One Thing Is Dominating Cybersecurity
Matthew Rosenquist has made cybersecurity predictions for 15 years. In 2026, for the first time, AI is dominating all of them.
Matthew Rosenquist has been publishing annual cybersecurity predictions for 15 years. He grades himself on them publicly, every year, without exception. That combination of long-term commitment and public accountability gives his forecasts a credibility that most analyst commentary doesn't earn.
So when he says 2026 is different from every year he's covered before, it's worth understanding exactly why.
In Episode 6 of Full Metal Packet, Matthew joins hosts Yegor Sak and Alex Paguis to walk through his 2026 predictions and what he's already watching come true.
Full Metal Packet Podcast: Episode 6 with Matthew Rosenquist
The First Time a Single Factor Has Taken Over Everything
Matthew has structured his predictions around four independent areas: what cyberattackers are doing, what disruptive technology is reshaping the battlefield, how targets of cyberattacks are behaving, and the state of the cybersecurity industry itself.
In fifteen years, no single factor has ever dominated all four at once.
This year, one does.
"It's the first year ever, ever in all the years that I've done these predictions, that a single element has dominated all four of those areas," he says. "And in this case, it's AI."
He's careful to separate this from the AI hype that's circulated since at least 2022. The argument isn't theoretical. AI is already reshaping all four areas at once, and the momentum is only growing.
The $6 Zero-Day and the Shrinking Window
For most of the history of software security, defenders had time. Not unlimited time, but enough. Discovering a vulnerability required dedicated researchers with deep expertise. Writing a working exploit required more. Orchestrating an attack required more still. From discovery to real-world harm, the timeline was measured in months, sometimes years.
But time is shrinking.
Matthew points to a post from a fellow researcher who found a zero-day in a major commercial product for $6. Not a theoretical vulnerability, but a serious, previously unknown flaw in software that people actually use. Six dollars.
The compression runs through the entire chain. AI tools now assist with discovery, exploit generation, and attack orchestration. "What used to take highly skilled researchers and a lot of time is now available for a few dollars and an afternoon," Matthew explains. Many organizations still assume they have weeks to respond to a disclosed vulnerability. That assumption is no longer safe, and it should be making executives uncomfortable.
The Attack That Encrypted 91 Days of Backups
The episode's most striking war story has nothing to do with AI. It's a case from years past that illustrates exactly why the shrinking response window is so dangerous, and why defenders can't afford to assume their safety nets will hold.
A company had followed best practices. They ran regular backups. They used Iron Mountain for cold tape storage. They had 90 days of coverage.
The attacker had been inside longer.
The breach involved placing a shim in front of the company's core database, a transparent layer that silently encrypted data going in, then decrypted it coming back out. In normal use, the system was slightly slower but fully functional. Nobody noticed. The shim ran for 91 days before the attacker deleted the decryption key and sent a ransom demand.
The company attempted to restore from the previous day's backup. Encrypted. Last week's tapes from Iron Mountain. Encrypted. Ninety days of backups, all written through the shim, all useless.
"By that time you can't just create a patch and test it," Matthew says elsewhere in the conversation, "you have to wait until they're all done." The same logic applies to backup strategies built around assumptions that the attacker has spent months quietly invalidating.
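The mechanics of the shim are worth pausing on, because the trap is subtle: the application keeps working normally while every byte that reaches storage, and therefore every backup, is ciphertext. A minimal sketch, with a toy XOR "cipher" standing in for real encryption and a plain dict standing in for the database (all names here are illustrative, not from the actual incident):

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher for illustration only (NOT real cryptography):
    # XOR with a repeating key; applying it twice recovers the plaintext.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class Shim:
    """Transparent layer in front of a store: encrypts writes, decrypts reads."""
    def __init__(self, store: dict):
        self.store = store
        self.key = secrets.token_bytes(32)  # held only by the attacker

    def put(self, k: str, value: bytes) -> None:
        self.store[k] = xor(value, self.key)  # ciphertext lands on disk and in backups

    def get(self, k: str) -> bytes:
        return xor(self.store[k], self.key)   # the app sees plaintext and notices nothing

db = {}
shim = Shim(db)
shim.put("acct:42", b"balance=1000")
assert shim.get("acct:42") == b"balance=1000"  # day-to-day reads look perfectly normal
assert db["acct:42"] != b"balance=1000"        # but what actually gets backed up is ciphertext
shim.key = None  # attacker deletes the key: the store and all 91 days of backups are unreadable
```

The point of the sketch is the asymmetry: as long as the shim holds the key, reads and writes round-trip cleanly, so no health check or spot restore catches it. The only defense is verifying restores on a system that does not route through the compromised path.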
The Phishing Email He Was Rooting For
Matthew is not an easy target. He works across both security research and advisory roles, thinks in threat models, and approaches every unsolicited communication with professional skepticism.
Which is why the phishing email he received recently caught his attention, not because it fooled him, but because of how close it came to being convincing.
The email opened with a detailed summary of his career: conferences he'd spoken at, keynotes he'd given, specific messages from his public work. There was flattery, urgency, even a request. It was signed by a named individual at a company, with a link to verify credentials.
Matthew opened the link in a sandbox. A full company website loaded: headquarters photo, privacy policy, services pages, and customer reviews. He navigated to the About page. Headshots of the executive team and LinkedIn profiles linked from each one. The whole thing looked completely legitimate.
All of it was fabricated. The website had been generated by an AI tool that produces a complete web presence from a single prompt. The LinkedIn profiles were synthetic. The headshots were AI-generated.
"I was rooting for it," Matthew says. "I really was."
He caught it because the AI had misread the tone of his public writing, pulling quotes that were sarcastic and presenting them straight. Most people don't write with enough tonal consistency for that gap to be detectable. And most people, confronted with a polished company website and a verifiable-looking executive team, won't think to check the HTML source.
The same week, he received a call from someone impersonating his daughter, using a cloned voice, claiming she had been kidnapped. He knew what to do. He had established safety keywords with his children specifically for this scenario, and the caller couldn't provide them. Most people haven't done this. His practical advice: safety words, secondary contact methods, and challenging questions whose answers wouldn't appear in any training data. The solution sits with people, not products.
The Attack Surface Nobody Is Watching
There's a threat Matthew raises in the episode that gets less attention than ransomware or social engineering, but that he argues is the more serious near-term risk.
Every major enterprise SaaS platform is currently rushing to expose APIs and MCP (Model Context Protocol) connections to stay competitive in an AI-enabled market. These are the interfaces that let agentic AI systems connect to Salesforce, Slack, email, and internal databases, and act on them. The pressure to adopt them is real, and it's moving faster than the security controls around them.
"The MCP framework, when it was originally designed, was 100% about functionality," Matthew says. "There was zero concern about security, privacy, safety, or governance. Wasn't even thought about."
An employee connecting an AI assistant to the company Slack isn't doing anything malicious. They may not realize that the AI now has read access to channels they themselves cannot see, that it can act on a well-constructed external message, or that their organization has no visibility into what it's doing. This is the risk Matthew argues is an order of magnitude larger than the shadow AI conversation that tends to dominate the discussion.
"These connections will bypass all of our traditional security tools," he says. "There is more here than anybody will know."
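The core of that scope-mismatch risk can be shown in a few lines. This is a hypothetical sketch, not the real MCP SDK or any vendor's API: a workspace service token is granted broader read access than the employee who wires it up, so the assistant acting on that token can reach data its owner never could, over a path no traditional security tool monitors.

```python
# Hypothetical sketch of the scope mismatch; channel names, principals,
# and the ACL structure are all illustrative, not a real platform's API.
CHANNEL_ACL = {
    "#general":      {"alice", "bob", "svc-assistant"},
    "#exec-private": {"bob", "svc-assistant"},  # bot token was granted workspace-wide read
}

def readable_by(principal: str) -> set[str]:
    """Channels a given principal can read under the ACL above."""
    return {ch for ch, members in CHANNEL_ACL.items() if principal in members}

# Alice connects an AI assistant using the workspace's service token.
alice_channels = readable_by("alice")
assistant_channels = readable_by("svc-assistant")

# The assistant can now read channels Alice herself cannot,
# and nothing on this path logs what the agent actually touches.
leaked = assistant_channels - alice_channels
print(leaked)  # {'#exec-private'}
```

Nothing in this flow requires malice or a vulnerability; it is simply a token whose scope was negotiated for functionality, which is exactly the design gap Matthew describes.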
What Separates the CISOs Who Survive
Matthew's final prediction concerns the security leadership role itself, and it's the most direct.
The CISOs who get replaced in the next two years will be the ones who treat AI primarily as a risk to slow-walk, positioning themselves as the obstacle between the business and AI adoption until every security question has been resolved. No business will accept that attitude when competitors are moving faster.
The ones who survive will be the ones who change the nature of the conversation. Not "here's the risk we need to manage before you can proceed," but "here's how we move fast with the right guardrails embedded from the start." That means treating security as a partner to the business rather than a gatekeeper.
"I'm not the office of no," Matthew says, describing the approach the surviving CISO needs to project. "I'm your partner to help you go faster."
It's a shift that requires a different skill set than most CISOs have traditionally developed, one that leans harder on business fluency and value framing than on technical depth. And it requires getting ahead of AI adoption rather than waiting to be brought in at the end.
"Those," Matthew says, "are the CISOs that are going to win."
Matthew Rosenquist is a cybersecurity strategist, former Intel security executive, and independent advisor to boards, CISOs, and security-focused startups. His full 2026 predictions are available on his LinkedIn. This post is based on his appearance on Episode 6 of Full Metal Packet.
Listen on Apple Podcasts, Spotify, YouTube, or wherever you get your podcasts.