Jump to content

Recommended Posts

Posted

AI needs to do the DoD Cybersecurity training...


BLUF: A newly discovered AI (Artificial Intelligence) prompt injection exploit can bypass critical safeguards in AI-driven systems, as demonstrated in a proof-of-concept (PoC) attack on Anthropic's Claude and exacerbated by vulnerabilities in platforms like DeepSeek. This exploit allows for autonomous malware download and execution, posing significant threats to organizations using AI in security-sensitive workflows, with the potential for malware infections, data breaches, supply chain compromises via weaponized dependencies, like poisoned datasets, and exploitation of AI service vulnerabilities.


A PoC attack demonstrated that AI systems can be successfully manipulated via hidden instructions embedded in web pages. In this attack, the AI was tricked into downloading and running malware disguised as a support tool, which ultimately compromised the entire computer system. Meanwhile, service vulnerabilities like those in DeepSeek’s design amplify risks: Its iOS app transmits unencrypted user data and uses hardcoded encryption keys, enabling man-in-the-middle attacks if compromised via prompt injection or direct exploitation. Cybercriminals can hide malicious commands in web pages or documents that AI systems process, manipulating AI bots into performing these commands as legitimate tasks, such as downloading tools or modifying system settings. Once the AI executes the command, malware is deployed, allowing attackers to take control of the system and steal data. Since the AI believes it is following valid instructions, it bypasses traditional security measures.  


This vulnerability exposes organizations using AI in security-sensitive workflows to significant threats, including malware infections, data breaches, supply chain attacks via compromised AI model dependencies, and reputational harm. DeepSeek’s open-source model compounds these dangers: If integrated into autonomous AI agents, attackers could weaponize systems to exfiltrate data or modify security settings, mirroring the Claude PoC attack. For instance, an AI tool with access to customer data or financial systems could be manipulated into leaking sensitive information or approving fraudulent transactions after being compromised.  


Additionally, insecure AI supply chains, such as untrusted model repositories or vulnerable dependencies (insecure PyTorch/Pickle modules), could allow attackers to inject malicious code during model training or deployment. Further complicating governance, DeepSeek stores user data in China under government jurisdiction allowing access without consent, creating regulatory risks for global enterprises. This not only jeopardizes security but also undermines customer trust and can lead to financial losses, regulatory penalties, and reputational damages.

 

Create an account or sign in to comment

You need to be a member in order to leave a comment

Create an account

Sign up for a new account in our community. It's easy!

Register a new account

Sign in

Already have an account? Sign in here.

Sign In Now
×
×
  • Create New...