Concerns about AI-powered cybersecurity threats are mounting among UK security leaders, with Chinese AI company DeepSeek drawing particular scrutiny. Security operations centres have begun reassessing their use of new AI tools after recent incidents exposed significant vulnerabilities that could put sensitive business data at risk. Many in the industry worry not only about technical risks but also that the pace of AI evolution is outstripping current defensive capabilities, prompting strong calls for urgent government intervention. These apprehensions stem from a growing pattern of real-world incidents in which AI systems have inadvertently expanded attack surfaces instead of safeguarding digital assets.
Surveys from earlier years showed optimism among CISOs about AI's impact, especially its promise of efficiency and innovation in cybersecurity. Polling over the past year, however, reflects a dramatic shift: data stewardship and regulatory oversight now dominate the conversation. The share of companies restricting or halting AI deployments has risen sharply compared with similar polls conducted 12 to 18 months ago. Where earlier assessments highlighted AI's potential as a protective measure, there is now a more pronounced sense of risk and a growing demand for state regulation rather than voluntary guidance. The change reflects a realisation that business productivity gains must be weighed against new exposure to sophisticated cyber threats.
What Are Security Leaders Saying About DeepSeek?
A recent poll commissioned by Absolute Security found that 81% of UK Chief Information Security Officers (CISOs) want immediate government regulation of DeepSeek, citing its data practices and the ease with which it could be misused for cyberattacks. More than a third of large UK organisations have banned or suspended AI tool use over security fears. Absolute Security's Andy Ward stated:
“Our research highlights the significant risks posed by emerging AI tools like DeepSeek, which are rapidly reshaping the cyber threat landscape.”
The findings suggest this anxiety is rooted in the belief that AI systems could exacerbate existing privacy and governance challenges within enterprises.
Is AI a Catalyst for New Cyber Risks?
The proliferation of DeepSeek's capabilities has led 60% of CISOs to anticipate a direct rise in cyberattacks. Nearly half of surveyed leaders admit their teams are unprepared for the distinctive threats posed by AI-driven software. Respondents increasingly view large language models as tools that threat actors can hijack, forcing many organisations to reassess ongoing and future AI deployments. Andy Ward underscored the consequences, adding:
“Without a national regulatory framework – one that sets clear guidelines for how these tools are deployed, governed, and monitored – we risk widespread disruption across every sector of the UK economy.”
The response indicates a shift from enthusiasm about AI toward a sober focus on the practical challenges it poses to security operations.
How Are Businesses Responding to Regulatory Uncertainty?
Despite heightened caution, most organisations have not abandoned AI investment entirely. Instead, companies are pursuing controlled adoption, prioritising the recruitment of AI specialists and offering C-suite training to improve in-house AI literacy. Some 84% plan to prioritise expert hiring in 2025, while 80% are implementing AI-focused education for executives. Companies seek to mitigate risk by sharpening internal oversight and anticipating regulatory changes rather than halting progress altogether. The direction for many remains clear: collaborate with regulators and prepare teams for continuous adaptation as new technologies arrive.
A coordinated national approach stands out as the preferred solution among industry leaders. CISOs point to the need for clearer government intervention and guidance on AI governance, particularly for platforms such as DeepSeek. Their argument centres on building robust, transparent frameworks that support responsible innovation without leaving businesses exposed to new types of threats. This approach aims to balance the operational benefits of AI with sufficient guardrails, ensuring safe advancement for both the public and private sectors.
As both private companies and policymakers consider their next steps, it will be crucial to establish clear standards for AI deployment and monitoring. Overreliance on unregulated AI tools could expose businesses to unprecedented risks, particularly as technological sophistication accelerates. A multi-pronged strategy—combining government oversight, specialist recruitment, ongoing executive education, and targeted bans as needed—may provide the resilience organisations seek. Stakeholders should continue tracking developments not just in capability but also in legal and ethical governance, as these elements will shape AI’s trajectory in cybersecurity for years to come.
- UK CISOs express rising concern over DeepSeek’s security implications.
- Surveys show increased demand for urgent governmental regulation of AI.
- Firms adopt AI cautiously while boosting internal expertise and oversight.