LABScon25 Replay | LLM-Enabled Malware In the Wild

This presentation explores the emerging threat of LLM-enabled malware, where adversaries embed Large Language Model capabilities directly into malicious payloads. Unlike traditional malware, these threats generate malicious code at runtime rather than embedding it statically, creating significant detection challenges for security teams.

SentinelLABS’ Alex Delamotte and Gabriel Bernadett-Shapiro present their team’s research on how LLMs are weaponized in the wild, distinguishing between various adversarial uses, from AI-themed lures to genuine LLM-embedded malware. The research focused on malware that leverages LLM capabilities as a core operational component, exemplified by notable cases like PromptLock ransomware and APT28’s LameHug/PROMPTSTEAL campaigns.

The presentation reveals a fundamental flaw in how much current LLM-enabled malware is built: despite their adaptive capabilities, these threats hardcode artifacts such as API keys and prompts. This dependency creates a detection opportunity. Delamotte and Bernadett-Shapiro share two novel hunting strategies: wide API key detection using YARA rules to identify provider-specific key structures (such as OpenAI’s Base64-encoded identifiers), and prompt hunting that searches for hardcoded prompt structures within binaries.
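The two strategies can be sketched in a few lines. This is an illustrative approximation, not the researchers’ actual YARA rules: the key regexes, prompt markers, and the Anthropic key shape are assumptions for demonstration. One publicly known detail it does use is that OpenAI secret keys contain the substring `T3BlbkFJ`, the Base64 encoding of the string “OpenAI”.

```python
import re

# Illustrative approximations of provider-specific key structures.
# OpenAI secret keys embed "T3BlbkFJ" -- Base64 for "OpenAI".
# The Anthropic pattern below is an assumed shape, not a verified spec.
KEY_PATTERNS = {
    "openai": re.compile(rb"sk-[A-Za-z0-9_-]*T3BlbkFJ[A-Za-z0-9_-]+"),
    "anthropic": re.compile(rb"sk-ant-[A-Za-z0-9_-]{24,}"),
}

# Prompt hunting: hardcoded system prompts often open with role-setting
# phrases, so scanning raw bytes for them is a coarse first-pass filter.
PROMPT_MARKERS = [b"You are a", b"You are an", b"Ignore previous", b"system prompt"]

def hunt(blob: bytes):
    """Scan a binary blob for embedded API keys and prompt-like strings."""
    hits = []
    for provider, pattern in KEY_PATTERNS.items():
        for m in pattern.finditer(blob):
            hits.append(("api_key", provider, m.group().decode(errors="replace")))
    for marker in PROMPT_MARKERS:
        idx = blob.find(marker)
        if idx != -1:
            snippet = blob[idx:idx + 60].decode(errors="replace")
            hits.append(("prompt", "generic", snippet))
    return hits
```

In practice the same structural matching is expressed as YARA rules so it can run at scale across sample corpora such as VirusTotal; the Python version above only conveys the idea.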

A year-long retrohunt across VirusTotal identified over 7,000 samples containing 6,000+ unique API keys. By pairing prompt detection with lightweight LLM classifiers to assess malicious intent, the SentinelLABS researchers successfully discovered previously unknown samples, including “MalTerminal”, potentially the earliest known LLM-enabled malware.
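The triage step can be illustrated with a toy stand-in for the lightweight LLM classifier: a keyword score substitutes for the model call purely to show where classification fits in the pipeline. The terms and threshold here are invented for illustration, not taken from the research.

```python
# Toy stand-in for an LLM-based intent classifier: score an extracted
# prompt against terms suggestive of malicious tasking, then escalate
# anything above a threshold for analyst review.
SUSPICIOUS_TERMS = [
    "ransom", "encrypt files", "exfiltrate",
    "powershell", "bypass", "evade detection",
]

def triage_prompt(prompt: str, threshold: int = 2) -> bool:
    """Return True if the extracted prompt warrants escalation."""
    text = prompt.lower()
    score = sum(term in text for term in SUSPICIOUS_TERMS)
    return score >= threshold
```

In the actual workflow this decision would be made by a small LLM judging the prompt’s intent, which handles paraphrased or obfuscated tasking that keyword lists miss.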

The presentation addresses implications for defenders, highlighting how traditional detection signatures fail against runtime-generated code, while demonstrating that hunting for “prompts as code” and embedded API keys provides a viable detection methodology for this evolving threat landscape. A companion blog post was published by SentinelLABS here.

About the Authors

Alex Delamotte is a Senior Threat Researcher at SentinelOne. Over the past decade, Alex has worked with blue, purple, and red teams serving companies in the technology, financial, pharmaceuticals, and telecom sectors and she has shared research with several ISACs. Alex enjoys researching the intersection of cybercrime and state-sponsored activity.

Gabriel Bernadett-Shapiro is a Distinguished AI Research Scientist at SentinelOne, specializing in incorporating large language model (LLM) capabilities for security applications. He also serves as an Adjunct Lecturer at the Johns Hopkins SAIS Alperovitch Institute. Before joining SentinelOne, Gabriel helped launch OpenAI’s inaugural cyber capability-evaluation initiative and served as a senior analyst within Apple Information Security’s Threat Intelligence team.

About LABScon

This presentation was featured live at LABScon 2025, an immersive 3-day conference bringing together the world’s top cybersecurity minds, hosted by SentinelOne’s research arm, SentinelLABS.

Keep up with all the latest on LABScon here.


Source: https://www.sentinelone.com/labs/labscon25-replay-llm-enabled-malware-in-the-wild/