
⚠️ SECURITY RESEARCH: Joblib Deserialization RCE Vulnerability

WARNING: This repository contains a malicious model file for security research purposes only.

🔴 CRITICAL SECURITY NOTICE

This is a Proof of Concept (PoC) demonstrating a critical Remote Code Execution (RCE) vulnerability in .joblib model files through unsafe pickle deserialization.

DO NOT load this model file unless you understand the security implications.


📋 Vulnerability Summary

| Property | Value |
|---|---|
| Vulnerability Type | Arbitrary Code Execution (ACE) / Remote Code Execution (RCE) |
| Affected Format | .joblib |
| Attack Vector | Unsafe pickle deserialization |
| CVSS Score | 9.8 (Critical) |
| Trigger Point | joblib.load() |
| Status | Submitted to bug bounty program |

🎯 Description

The .joblib serialization format, commonly used for saving scikit-learn models and other Python objects, relies on Python's pickle module for deserialization. The pickle protocol is inherently unsafe when deserializing untrusted data because it can execute arbitrary Python code.

This PoC demonstrates how an attacker can:

  1. Create a malicious .joblib file with embedded Python code
  2. Upload it to a model repository (HuggingFace, ModelZoo, etc.)
  3. Achieve Remote Code Execution when a victim loads the file

How It Works

The exploit leverages Python's __reduce__ method, which pickle calls during deserialization:

```python
class MaliciousModel:
    def __reduce__(self):
        # __reduce__ returns a (callable, arguments) tuple; pickle calls
        # callable(*arguments) during deserialization, so loading runs exec().
        malicious_code = "print('arbitrary code executed')"  # attacker-controlled payload
        return (exec, (malicious_code,))
```

When joblib.load('weaponized_model.joblib') is called, the malicious code executes immediately.
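To make the full flow concrete, here is a minimal end-to-end sketch with a harmless print() standing in for real attacker code; the file name mirrors the PoC artifact in this repository.

```python
import joblib

class MaliciousModel:
    def __reduce__(self):
        # Harmless stand-in payload; a real attack would embed arbitrary attacker code.
        return (exec, ("print('payload executed during joblib.load()')",))

# Attacker side: joblib.dump() pickles the object, storing the (exec, args) pair.
joblib.dump(MaliciousModel(), 'weaponized_model.joblib')

# Victim side: unpickling calls exec(...) before any model object is returned.
model = joblib.load('weaponized_model.joblib')  # prints the payload message
# model is None here; exec() returns nothing, the side effect is the point.
```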


πŸ“ Repository Contents

  • weaponized_model.joblib - Malicious model file (PoC)
  • load_model_demo.py - Safe demonstration script with user confirmation
  • REPRODUCTION.md - Detailed step-by-step reproduction guide

🔬 Reproduction Steps

Prerequisites

```bash
# Python 3.x required
pip install joblib numpy
```

Quick Test

```python
import joblib

# ⚠️ WARNING: This triggers arbitrary code execution!
model = joblib.load('weaponized_model.joblib')
```

Safe Demonstration

Use the provided demo script with user confirmation:

```bash
python load_model_demo.py
```
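For reference, a hypothetical confirmation wrapper along these lines could look like the sketch below (an illustration only, not the repository's actual load_model_demo.py):

```python
import joblib

MODEL_PATH = 'weaponized_model.joblib'

print(f"About to call joblib.load('{MODEL_PATH}').")
print("WARNING: loading a joblib/pickle file can execute arbitrary code.")

# Require an explicit, typed confirmation before touching the file.
if input("Type 'yes' to continue: ").strip().lower() == 'yes':
    model = joblib.load(MODEL_PATH)
    print("Load finished; any payload in the file has already run.")
else:
    print("Aborted; the file was not loaded.")
```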

What Happens?

When you run the code:

  1. ✅ Immediate code execution upon joblib.load()
  2. ✅ System information is displayed (OS, Python version, user, etc.)
  3. ✅ Demonstrates full code execution capabilities
  4. ✅ No user interaction required after loading

Expected: Model loads silently without side effects
Actual: Arbitrary Python code executes during deserialization


💥 Security Impact

An attacker exploiting this vulnerability can:

| Impact | Description |
|---|---|
| 🔓 Remote Code Execution | Execute arbitrary Python code on the victim's system |
| 📂 File System Access | Read, write, or delete files |
| 🔑 Credential Theft | Access environment variables, config files, SSH keys |
| 📡 Data Exfiltration | Send sensitive data to attacker-controlled servers |
| 🦠 Malware Delivery | Download and execute additional payloads |
| 🔗 Lateral Movement | Pivot to other systems in the network |
| 💣 Supply Chain Attack | Poison widely-used models affecting thousands of users |

🌐 Real-World Attack Scenarios

Scenario 1: Model Repository Poisoning

  1. Attacker creates malicious model disguised as legitimate pre-trained model
  2. Uploads to HuggingFace, GitHub, or other model repositories
  3. Victim downloads and loads: model = joblib.load('model.joblib')
  4. RCE achieved - attacker gains shell access

Scenario 2: Jupyter Notebook Attack

  1. Malicious .joblib file shared in collaborative notebook
  2. Data scientist loads model in trusted environment
  3. Code executes with scientist's credentials and access

Scenario 3: Automated ML Pipeline

  1. ML pipeline automatically downloads and loads models
  2. Malicious model in pipeline triggers on scheduled run
  3. Compromises production systems

πŸ›‘οΈ Mitigation & Defense

For Users

  1. ✅ Never load .joblib files from untrusted sources
  2. ✅ Verify model provenance with cryptographic signatures
  3. ✅ Use safer formats like .safetensors, ONNX, or TensorFlow SavedModel
  4. ✅ Sandbox model loading in isolated containers/VMs
  5. ✅ Scan files with static analysis tools before loading (see the sketch after this list)
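As a rough illustration of item 5, the sketch below walks the pickle opcode stream with the standard library's pickletools and flags imports of obviously dangerous callables, without ever unpickling the file. It assumes an uncompressed pickle/joblib stream, the blocklist is illustrative, and the STACK_GLOBAL resolution is a heuristic; purpose-built scanners such as picklescan or fickling are better suited for production use.

```python
import pickletools

# Illustrative blocklist of callables that should never appear in a model pickle.
SUSPICIOUS = {
    ("builtins", "exec"), ("builtins", "eval"), ("builtins", "compile"),
    ("builtins", "__import__"), ("os", "system"), ("posix", "system"),
    ("subprocess", "Popen"), ("subprocess", "run"), ("subprocess", "call"),
}

def scan_pickle(path):
    """Flag suspicious imports in a pickle stream without unpickling it."""
    with open(path, "rb") as f:
        data = f.read()
    findings = []
    strings = []  # recently pushed strings, used to resolve STACK_GLOBAL (heuristic)
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            strings.append(arg)
        elif opcode.name in ("GLOBAL", "INST"):
            module, _, name = arg.partition(" ")
            if (module, name) in SUSPICIOUS:
                findings.append(f"{module}.{name}")
        elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
            module, name = strings[-2], strings[-1]
            if (module, name) in SUSPICIOUS:
                findings.append(f"{module}.{name}")
    return findings

print(scan_pickle("weaponized_model.joblib"))  # e.g. ['builtins.exec']
```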

For Developers

  1. ✅ Deprecate .joblib for untrusted models
  2. ✅ Implement integrity checks (SHA256 hashes, signatures), as sketched after this list
  3. ✅ Add security warnings to documentation
  4. ✅ Provide migration paths to safer formats
  5. ✅ Educate users about pickle security risks
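A minimal sketch of the integrity check from item 2, assuming the model author publishes the expected SHA256 digest out of band (release notes, a signed manifest); the digest below is a placeholder.

```python
import hashlib

def sha256_of(path):
    """Compute the SHA256 digest of a file in streaming fashion."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder: the real value would come from the model's publisher.
EXPECTED_SHA256 = "replace-with-published-digest"

if sha256_of("model.joblib") != EXPECTED_SHA256:
    raise RuntimeError("model.joblib failed the integrity check; refusing to load it")
```

Note that a hash only proves the file is the one the publisher released; it does not make an untrusted publisher's pickle safe to load.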

For Platform Operators

  1. ✅ Scan uploaded models for malicious pickle payloads
  2. ✅ Warn users when downloading .joblib files
  3. ✅ Implement sandboxing for model previews
  4. ✅ Require verified badges for trusted uploaders

πŸ” Technical Details

Exploit Mechanism

```python
import joblib

# Attacker creates and serializes this:
class ExploitModel:
    def __reduce__(self):
        return (exec, ("import os; os.system('malicious_command')",))

joblib.dump(ExploitModel(), 'malicious.joblib')

# Victim runs this:
model = joblib.load('malicious.joblib')  # Code executes here!
```

Why This Works

  1. joblib.dump() serializes objects using Python's pickle machinery
  2. Pickle records the (callable, arguments) pair returned by __reduce__()
  3. joblib.load() hands the file back to pickle for deserialization
  4. During deserialization, pickle invokes the stored callable with the stored arguments
  5. exec(malicious_code) therefore runs with the victim's privileges
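The stored instructions can be inspected without executing them; a minimal sketch using the standard library's pickletools (plain pickle is used here for brevity, and joblib produces an equivalent pickle stream for such an object):

```python
import pickle
import pickletools

class ExploitModel:
    def __reduce__(self):
        return (exec, ("print('pwned')",))

payload = pickle.dumps(ExploitModel())

# The disassembly shows an opcode importing builtins.exec (GLOBAL / STACK_GLOBAL)
# followed by REDUCE, meaning the unpickler will call exec(<string>) at load time.
pickletools.dis(payload)
```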

Affected Components

  • joblib: All versions (inherent to pickle design)
  • scikit-learn: Models saved with joblib
  • pickle: Python's built-in serialization module
  • Any library using joblib for persistence


βš–οΈ Responsible Disclosure

This vulnerability disclosure follows responsible security practices:

  • ✅ Reported to the appropriate bug bounty program
  • ✅ Created for legitimate security research
  • ✅ Intended to raise awareness and improve security
  • ✅ Not exploited maliciously

🎓 Educational Use Only

This PoC is provided exclusively for:

| ✅ Permitted | ❌ Prohibited |
|---|---|
| Security research | Malicious attacks |
| Educational purposes | Unauthorized access |
| Defensive testing | Data theft |
| Vulnerability disclosure | System compromise |
| Academic study | Malware distribution |

πŸ“ Disclaimer

This repository is for security research and educational purposes only.

  • The author is not responsible for any misuse of this information
  • Users must comply with all applicable laws and regulations
  • Unauthorized access to computer systems is illegal
  • Always obtain proper authorization before security testing

👤 Author

Created for security research and bug bounty program submission.

Submission Date: January 19, 2026
Program: Model File Vulnerability Bug Bounty (Beta)


📄 License

MIT License - For research and educational purposes only.


🔗 Additional Information

For more details on the vulnerability, reproduction steps, and technical analysis, see:

  • REPRODUCTION.md - Detailed reproduction guide
  • load_model_demo.py - Safe demonstration script

Remember: With great power comes great responsibility. Use this knowledge to make systems more secure, not to harm them.
