Seth Whitworth, who is both acting Associate Deputy Chief of Space Operations for Cyber and Data and acting chief information security officer, said he believes AI tools are shifting the way defenders review cyber risk, both for individual systems and more holistically throughout an enterprise.
In particular, large language models can be used to systematically implement fixes for the smaller but critical weaknesses that have allowed state-sponsored hackers and cybercriminals to get inside victim networks and live off the land.
“Our adversaries are not looking for the massive cybersecurity vulnerabilities – we’re actually pretty good at [defending] that,” said Whitworth Tuesday at AI Talks, presented by Scoop News Group. “They’re looking for a misconfiguration, a failed update, a tiny little thing that allows them an entry point into a very connected network.”
Many of these basic cyber hygiene problems tend to fall under existing compliance programs, but it can take more than legal mandates to fix them. Many enterprise IT networks – particularly older ones – build up technical debt over time, leading to forgotten systems, hidden routers and other forms of shadow IT that grow increasingly insecure.
Cybersecurity experts say agents and the large language models that power them – which operate around the clock – are particularly well-suited to finding these smaller flaws and quickly exploiting them.
But Whitworth argued that the same technology can be used to reshape how organizations measure and track cyber compliance, turning a sluggish box-checking exercise into something more nimble and substantive. He claimed that the Space Force’s internal process for obtaining Authorities to Operate and other formal security certifications used to take three to 18 months. That process “can now be done in weeks and days.”
That in turn can empower program managers to “pull in all of that massive amount of data, allow the AI – who doesn’t get tired, who doesn’t miss patterns, who doesn’t miss these components – to churn on those items and then deliver something” that can inform real-time changes to cybersecurity, he said.
Whitworth also acknowledged the “fear” that many organizations still have around the use of AI, as well as lingering concerns about some of the technology’s enduring limitations like hallucinations and data poisoning. He said he still gives AI-generated outputs “extra scrutiny, because I haven’t seen the trusted validation” yet.
But he also said he gets more valuable insight on the Space Force’s holistic cyber risk from using large language models than he does from other security control assessments, which tend to focus narrowly on the risk of single systems or assets in isolation.
“We are operating in a highly connected, highly orchestrated world, and so moderate risk that’s accepted in one program immediately becomes moderate risk that is accepted in another program,” said Whitworth. “AI can take that whole picture and understand that when this system change impacts this system, it also impacts this [other] system.”
The post Space Force official touts AI’s impact on cyber compliance appeared first on CyberScoop.
