AI Supply Chain Security: Hugging Face Malicious ML Models

A recent report by JFrog researchers found that some machine learning models hosted on Hugging Face can be used to attack the environments of the users who download them. These malicious models execute code as soon as they are loaded, giving an attacker the ability to take full control of the infected machine and plant backdoors under the cover of seemingly legitimate open-source models. The core threat is direct code execution: an attacker can run arbitrary code on any machine that loads or uses the model, potentially resulting in data disclosure, system compromise, or other malicious behavior. With the rise of open-source model hubs such as Hugging Face and TensorFlow Hub, attackers are already exploring these platforms as malware distribution channels, ushering in an era in which models from untrusted sources must be treated with caution, subjected to thorough security review, and handled by hardened MLOps pipelines.
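
The reported models abuse Python's pickle serialization, which PyTorch checkpoints use by default: a pickle stream can embed a callable that the interpreter invokes automatically during deserialization. The sketch below is a minimal, self-contained illustration of that mechanism, not the actual payload; the class name and the echoed command are harmless placeholders.

    import os
    import pickle


    class IllustrativePayload:
        # pickle consults __reduce__ when serializing an object; whatever
        # callable it returns is invoked automatically at load time.
        def __reduce__(self):
            # Harmless stand-in for an attacker's command (e.g., a
            # reverse shell or backdoor installer).
            return (os.system, ("echo code ran during deserialization",))


    # An attacker embeds such an object inside a model checkpoint file.
    blob = pickle.dumps(IllustrativePayload())

    # Merely loading the blob runs the embedded command -- no inference or
    # other use of the "model" is required. This is why loading an
    # untrusted pickle-based checkpoint can hand control to its author.
    pickle.loads(blob)

As mitigations, the safetensors format stores only raw tensor data and cannot carry executable objects, and recent PyTorch releases support torch.load(..., weights_only=True), which restricts what the unpickler is allowed to construct.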

Disclaimer: This article is part of X-Force OSINT Advisories’ automated collection to enable faster integration of open-source articles to client environments. All credit and copyright go to the original authors.

Reference: https://securityboulevard.com/2024/03/ai-supply-chain-security-hugging-face-malicious-ml-models/

Sample Indicators of Compromise:

136.243.156.120
136.243.156.104
192.248.1.167
210.117.212.93
