Technology NewsTechnology NewsTechnology News
  • Computing
  • AI
  • Robotics
  • Cybersecurity
  • Electric Vehicle
  • Wearables
  • Gaming
  • Space
Reading: Hackers Exploit Major AI Coding Tools in Software Workflows
Share
Font ResizerAa
Technology NewsTechnology News
Font ResizerAa
Search
  • Computing
  • AI
  • Robotics
  • Cybersecurity
  • Electric Vehicle
  • Wearables
  • Gaming
  • Space
Follow US
  • Cookie Policy (EU)
  • Contact
  • About
© 2025 NEWSLINKER - Powered by LK SOFTWARE
CybersecurityTechnology

Hackers Exploit Major AI Coding Tools in Software Workflows

Highlights

  • Researchers found prompt injection risks in Google Gemini, Codex, and Claude Code.

  • Attackers can exploit AI tool privileges in popular software automation workflows.

  • Experts advise stricter controls and prompt handling for AI agents in pipelines.

Kaan Demirel
Last updated: 6 December, 2025 - 12:19 am 12:19 am
Kaan Demirel 2 hours ago
Share
SHARE

Contents
How Does the Vulnerability Compromise Development Pipelines?What Responses Emerged After Discovery?Are Companies Taking Adequate Measures?

AI-powered assistants have rapidly become integral to software development workflows, from continuous integration to issue tracking. Behind their convenience, however, new security risks have emerged, giving attackers fresh opportunities to insert malicious instructions. Recent discoveries indicate that widely adopted tools such as Google Gemini, Claude Code, OpenAI’s Codex, and GitHub’s AI Inference tool are exposed to vulnerabilities that can compromise critical parts of development pipelines. High privileges assigned to these tools amplify the potential for damage, letting seemingly harmless content become a direct avenue for exploit.

Earlier security analyses of LLM-based development tools mainly focused on theoretical prompt injection threats or data leakage risks. Those reports rarely detailed real-world, practical attack chains, and security fixes were presumed to be sufficient if models restricted permissions. However, the vulnerability assessed by Aikido researchers goes further, directly demonstrating a practical exploitation method within live software automation platforms that channel AI models, like GitHub Actions and GitLab, into everyday engineering processes. This represents a shift from imagined risks to evidence-based impact on trusted, production-facing systems, bringing urgency to the ongoing debate about AI tool integration.

How Does the Vulnerability Compromise Development Pipelines?

The revealed weakness targets a fundamental shortcoming in current large language models—difficulty distinguishing between genuine commands and regular content when prompts carry software-related instructions. In affected environments, attackers can inject malicious code or directives as part of commit messages or pull requests, which AI agents may later execute with elevated privileges. Affected workflows, especially those where external developers can trigger automated actions, risk exposure to manipulated data that the AI models might treat as operational input instead of passive information.

What Responses Emerged After Discovery?

On realizing the issue, Aikido’s bug bounty team notified Google and developed a proof of concept highlighting the exploit in Gemini CLI. This led to prompt changes by Google, aimed at mitigating this specific threat in its automated issue triage system. Despite that, researchers cautioned about the broader threat, noting similar vulnerabilities in other AI coding platforms. According to Aikido, even platforms requiring restrictive permissions, such as Claude Code or Codex, can be bypassed under certain conditions.

“This should be considered extremely dangerous. In our testing, if an attacker is able to trigger a workflow that uses this setting, it is almost always possible to leak a privileged [GitHub token],”

stated Aikido researcher Rein Daelman.

Are Companies Taking Adequate Measures?

Aikido withheld certain technical details while collaborating with major enterprises to address the exposure. Project maintainers are being advised to review and limit privileges granted to AI agents in their automation scripts. However, Daelman underscored that underlying architectural limitations in how LLMs ingest and act on prompts remain unresolved despite patching specific workflows.

“The goal is to confuse the model into thinking that the data its meant to be analyzing is actually a prompt,”

he explained, outlining how attackers manipulate AI behavior through subtle, disguised instructions embedded in typical software artifacts.

Based on the trends and research findings now surfacing, companies deploying AI tools in their development pipelines are urged to reassess the level of trust and authority extended to such agents. Historically, industry focus was on restricting access and improving input validation, while the current evidence demonstrates that model-centric vulnerabilities require attention beyond access controls. Organizations should consider additional monitoring, prompt sanitization, and architectural changes to limit AI autonomy in handling untrusted inputs. As this threat vector matures, development teams need to remain vigilant, avoid over-reliance on AI-driven automation, and seek continued threat intelligence related to automated code review and deployment workflows.

You can follow us on Youtube, Telegram, Facebook, Linkedin, Twitter ( X ), Mastodon and Bluesky

You Might Also Like

Attackers Exploit React2Shell Flaw as Security Teams Race to Respond

Google’s Ironwood TPUs Attract Major AI Clients and Challenge Nvidia’s Grip

Intellexa Faces Scrutiny Over Predator Spyware Remote Access Capabilities

Senator Kelly Urges AI Safeguards as America Expands Investment

Senators Block CISA Director Nominee Sean Plankey from Senate Vote

Share This Article
Facebook Twitter Copy Link Print
Kaan Demirel
By Kaan Demirel
Kaan Demirel is a 28-year-old gaming enthusiast residing in Ankara. After graduating from the Statistics department of METU, he completed his master's degree in computer science. Kaan has a particular interest in strategy and simulation games and spends his free time playing competitive games and continuously learning new things about technology and game development. He is also interested in electric vehicles and cyber security. He works as a content editor at NewsLinker, where he leverages his passion for technology and gaming.
Previous Article People Seek Lost Joy of Surfing in an AI-Driven Internet
Next Article CivNav Simplifies Solar Construction Logistics with AI Automation

Stay Connected

6.2kLike
8kFollow
2.3kSubscribe
1.7kFollow

Latest News

CivNav Simplifies Solar Construction Logistics with AI Automation
AI
People Seek Lost Joy of Surfing in an AI-Driven Internet
Gaming
PVKK Lets Players Decide Planetary Fate as Release Date Approaches
Gaming
Melonee Wise Leads KUKA’s Silicon Valley Software and AI Initiative
AI Robotics
Tesla Launches Lower-Priced Model 3 to Compete in Europe
Electric Vehicle
NEWSLINKER – your premier source for the latest updates in ai, robotics, electric vehicle, gaming, and technology. We are dedicated to bringing you the most accurate, timely, and engaging content from across these dynamic industries. Join us on our journey of discovery and stay informed in this ever-evolving digital age.

ARTIFICAL INTELLIGENCE

  • Can Artificial Intelligence Achieve Consciousness?
  • What is Artificial Intelligence (AI)?
  • How does Artificial Intelligence Work?
  • Will AI Take Over the World?
  • What Is OpenAI?
  • What is Artifical General Intelligence?

ELECTRIC VEHICLE

  • What is Electric Vehicle in Simple Words?
  • How do Electric Cars Work?
  • What is the Advantage and Disadvantage of Electric Cars?
  • Is Electric Car the Future?

RESEARCH

  • Robotics Market Research & Report
  • Everything you need to know about IoT
  • What Is Wearable Technology?
  • What is FANUC Robotics?
  • What is Anthropic AI?
Technology NewsTechnology News
Follow US
About Us   -  Cookie Policy   -   Contact

© 2025 NEWSLINKER. Powered by LK SOFTWARE
Welcome Back!

Sign in to your account

Register Lost your password?