The race to secure corporate digital assets has entered a new phase with artificial intelligence pushing both defense and attack strategies into uncharted territories. As organizations navigate these uncertain waters, balancing opportunity and risk has become vital. The rapid deployment of AI tools may make cybercriminal tactics more sophisticated, but it also offers defenders the chance to detect and neutralize threats more swiftly and effectively than before. Industry experts argue that understanding the motivations and behaviors of both perpetrators and protectors is increasingly essential in shaping effective responses to emerging cyber risks.
When AI’s role in cybersecurity first appeared in public discussions, vendors highlighted detection speed and threat identification. At that time, many companies hesitated to integrate AI beyond basic operations, citing trust and transparency issues. By 2023, a growth in large language models prompted more organizations to experiment with AI for automating security tasks, but challenges around transparency and explainability persisted. Reports repeatedly cited the use of platforms like OpenCTI and structured data formats such as STIX, yet comprehensive strategies linking intelligence with real-world application remained limited. Today, firms like AbbVie are advancing these integrations more deliberately while staying attentive to vulnerabilities unique to the technology itself.
How Does AbbVie Use AI in Cybersecurity?
At AbbVie, Principal AI ML Threat Intelligence Engineer Rachel James leads a team leveraging large language models (LLMs) to process and analyze extensive security data. Their workflow uses vendor AI components and custom LLM analysis to identify similarities, track duplicate alerts, and reveal gaps in security coverage. The team plans to expand by incorporating even more external threat data, including intelligence from platforms like OpenCTI, to create a consistent picture of emerging risks across their digital infrastructure.
“In addition to the built in AI augmentation that has been vendor-provided in our current tools, we also use LLM analysis on our detections, observations, correlations and associated rules,”
James said, describing the layered approach the company is taking.
What Are the Main Risks and Trade-offs?
James highlights that as organizations adopt more sophisticated AI, they must address new vulnerabilities and business risks. Generative AI brings creativity but also unpredictability, making it crucial to anticipate and mitigate unexpected behaviors. Companies face less transparency in AI-driven decision-making, which becomes more pronounced as model complexity increases. Additionally, accurately estimating the true value and demands of AI-based security investments remains a significant business challenge.
“I would be remiss if I didn’t mention the work of a wonderful group of folks I am a part of – the ‘OWASP Top 10 for GenAI’ as a foundational way of understanding vulnerabilities that GenAI can introduce,”
James noted, emphasizing the ongoing need to stay alert regarding AI’s possible flaws.
How Does Threat Intelligence Shape AI Security Practices?
James draws on her background in cyber threat intelligence to refine AbbVie’s defenses further, monitoring threat actors’ discussions and tools on both open and dark web forums. Her approach includes direct engagement in adversarial testing and red teaming, contributing publicly through projects like OWASP’s Prompt Injection entry and guiding others through published best practices. She asserts that a solid understanding of the threat landscape, in combination with robust AI and security integration, allows defenders to anticipate and counter novel attacks more effectively.
The ongoing convergence of the cyber threat intelligence lifecycle and the data science process underlying AI tools presents organizations with a unique opportunity. Effective sharing of intelligence data across departments could enhance both monitoring and incident response. As AI becomes further engrained in daily security operations, the skill sets required for cybersecurity professionals are shifting, prioritizing interdisciplinary knowledge and proactive learning to keep pace with evolving threats. Stakeholders are advised to not only invest in technical solutions but also foster expertise that balances technical, strategic, and ethical considerations.
Synchronizing cybersecurity with advanced AI tools requires companies to routinely assess not just the strengths but also the limits of their technological defenses. Alongside technology investments, participating in initiatives like the OWASP Top 10 for GenAI and building strong collaborative networks help organizations proactively manage threats. For business leaders and professionals, familiarization with both technical frameworks and the behavioral tendencies of adversaries can significantly increase resilience. Ongoing education and a willingness to adapt remain cornerstones for maintaining a secure and robust digital operation as AI further blends with business processes.