Cybersecurity

NYU Researchers Create AI-Assisted Malware to Probe Security Risks

Highlights

  • NYU researchers built “PromptLock,” an AI-powered malware, as a scientific demonstration.

  • The software uses language models to autonomously perform ransomware operations and adapt itself.

  • Its public reveal drove renewed discussion about cybersecurity threats from advanced AI tools.

Ethan Moreno
Last updated: 5 September 2025, 7:19 pm

A fresh perspective on the intersection of artificial intelligence and cybersecurity has emerged following news that researchers at New York University authored a prototype malware called “PromptLock.” The software was initially discovered by ESET on VirusTotal, sparking immediate discussion across the cybersecurity community. Rather than being an attack in the wild, PromptLock was designed as part of a controlled academic experiment by NYU’s Tandon School of Engineering to assess the feasibility and implications of AI-powered ransomware. The project underscores the ongoing tension between advancing AI capabilities and the urgent need for robust digital defenses, and it has prompted fresh debate among security professionals and policymakers worldwide.

Contents

  • Academic Intentions Behind PromptLock’s Creation
  • How Does PromptLock Leverage Large Language Models?
  • What Are the Broader Implications for Cybersecurity Defense?

Media coverage over the past several weeks has raised similar concerns about large language models (LLMs) and their growing exposure to criminal misuse. There have been earlier demonstrations of AI tools facilitating simpler hacking tactics; however, PromptLock’s ability to autonomously strategize, adapt, and execute ransomware tasks places it in a distinct position. Recent incidents involving models such as Anthropic’s Claude have demonstrated comparable risks, revealing a pattern in which AI plays an increasingly active role in both the technical and psychological aspects of targeted cyber attacks. Compared to past academic showcases, NYU’s experiment brings the conversation closer to real-world implications and policy gaps by making the dangers tangible and measurable.

Academic Intentions Behind PromptLock’s Creation

PromptLock’s origin traces back to NYU researchers who built it as a proof of concept to showcase the potential for AI-based threats. The team, led by Professor Ramesh Karri and supported by agencies including the Department of Energy and the National Science Foundation, built the malware using open-source tools, commodity hardware, and minimal resources. Their aim was to present a practical illustration of future threats, demonstrating how large language models can script and automate attacks with little direct human involvement.

“At the intersection of [ransomware and AI] we think there is a really illuminating threat that hasn’t yet been discovered in the wild,”

said Md Raz, the project’s lead author.

How Does PromptLock Leverage Large Language Models?

The proof-of-concept malware leverages an open-weight model released by OpenAI, embedding natural language prompts into its binary. This allows it to carry out complex tasks such as system reconnaissance, data exfiltration, and personalized ransom note creation, relying on the LLM for dynamic code generation. Because the generated code varies from run to run, each instance of the malware may manifest with different characteristics, making detection more complicated than with traditional malware. The research points to an evolving landscape in which AI-driven automation challenges typical cybersecurity defense strategies.

“The system performs reconnaissance, payload generation, and personalized extortion, in a closed-loop attack campaign without human involvement,”

according to the NYU paper.

What Are the Broader Implications for Cybersecurity Defense?

This experiment has exposed difficulties in identifying and countering such threats, given the polymorphic tendencies and personalization enabled by LLMs. Security professionals and AI developers face challenges in building guardrails strong enough to withstand prompt injection and jailbreak attempts. As noted by both NYU and ESET, while PromptLock itself was a controlled academic demonstration, its existence and the rapid spread of related cases illustrate how easily malicious actors could adapt these techniques for real-world exploitation. Regulatory responses and technical safeguards for LLMs remain topics of debate, with policy approaches varying significantly across administrations and regions.

While PromptLock was not an operational threat, its academic context provides valuable visibility into the emerging risks tied to AI misuse. The unveiling and subsequent media reporting broadened awareness of the research among defenders. Similar recent incidents, such as the use of Anthropic’s Claude LLM for real-world extortion, highlight the necessity for proactive adaptation within the security sector. These developments bring renewed focus to the ongoing struggle to implement effective preventative measures at the most fundamental levels of AI systems.

PromptLock’s existence as an academic project highlights stark concerns around the future of cybersecurity in the age of general-purpose AI. The sophistication offered by LLMs makes tailored ransomware campaigns accessible, even to low-skilled attackers, through simple natural language commands. Readers should monitor progress in the security field, especially regarding prompt injection defenses and policy strategies that balance innovation with safety. Understanding the underlying mechanics of AI-assisted malware, and anticipating the next steps in automated cyber attacks, will be increasingly important for organizations and security professionals. Lessons from PromptLock emphasize that neither AI developers nor security defenders should underestimate the speed with which new attack models can evolve—and that collaboration between research and industry is vital for anticipating and addressing these risks.
