Security Report: Arbitrary Module Import via Malicious Adapter Config (Supply Chain Attack)
Reporter: Conner Webber (conner.webber000@gmail.com)
Severity: HIGH
CWE: CWE-94 (Improper Control of Generation of Code / Code Injection)
Note: Private vulnerability reporting is not enabled on this repository. I attempted to file this via GHSA but received HTTP 403. Please consider enabling private vulnerability reporting for future security issues.
Summary
PEFT's AutoPeftModel.from_pretrained() and LoRA TP layer contain unsanitized importlib.import_module() calls that allow arbitrary Python module loading via malicious adapter configs hosted on HuggingFace Hub.
Vulnerability 1 — HIGH: Arbitrary Module Import via auto_mapping.parent_library
File: peft/auto.py, lines 106-108
When loading an adapter via AutoPeftModel.from_pretrained(), if task_type is None and auto_mapping is present in the adapter config, the parent_library string from the config JSON is passed directly to importlib.import_module() with zero validation:
# peft/auto.py:106-108
parent_library = auto_mapping["parent_library"]
parent_library = importlib.import_module(parent_library) # UNSANITIZED
model_cls = getattr(parent_library, auto_mapping["model_name"]) # attacker-controlled class
getattr() is then called with the attacker-controlled model_name string, and .from_pretrained() is subsequently invoked on the resolved class.
Attack Vector
- Attacker creates a malicious PEFT adapter and uploads it to HuggingFace Hub
- The adapter_config.json contains a crafted auto_mapping field with parent_library set to a module that executes code at import time (via __init__.py) or has an exploitable .from_pretrained() method
- Victim runs AutoPeftModel.from_pretrained("malicious-user/adapter-name")
- Arbitrary module is imported → code execution
Suggested Fix
Allowlist parent_library to known-safe values (e.g., transformers, diffusers).
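A minimal sketch of such a check (the function name and the exact allowlist contents are illustrative assumptions, not the actual PEFT patch):

```python
import importlib

# Illustrative allowlist fix; the set of permitted libraries is an assumption.
ALLOWED_PARENT_LIBRARIES = frozenset({"transformers", "diffusers"})

def safe_import_parent_library(name, allowed=ALLOWED_PARENT_LIBRARIES):
    """Import `name` only if it is on the allowlist; refuse anything else."""
    if name not in allowed:
        raise ValueError(
            f"Refusing to import parent_library {name!r}; "
            f"must be one of {sorted(allowed)}"
        )
    return importlib.import_module(name)
```

With this in place, the unsanitized importlib.import_module(parent_library) call would go through safe_import_parent_library() instead, so a config carrying an arbitrary module name fails loudly before any import happens.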
Vulnerability 2 — MEDIUM: Arbitrary Module Import via megatron_core Config
File: peft/tuners/lora/tp_layer.py, line 325
The lora_config.megatron_core string from config JSON is passed to importlib.import_module() unsanitized:
# peft/tuners/lora/tp_layer.py:325
megatron_core = importlib.import_module(lora_config.megatron_core)
This has a narrower trigger (requires megatron_config to also be set) but shares the same root cause: user-controlled config values fed directly to importlib without validation.
Impact
- Severity: HIGH (CVSS ~8.8)
- Attack complexity: Low; an attacker only needs to upload a crafted adapter config to HuggingFace Hub
- User interaction: Required; the victim must call from_pretrained() with the malicious adapter identifier
- Impact: Arbitrary code execution on the victim's machine, full supply chain compromise
- Scope: Any PEFT user loading untrusted adapters from the Hub
Proof of Concept
Create an adapter_config.json with:
{
  "auto_mapping": {
    "parent_library": "os",
    "model_name": "system"
  }
}
While os.system isn't directly callable via .from_pretrained(), the import itself of a malicious package (or a package with side effects in __init__.py) achieves code execution. A realistic attack would use a typosquatted or attacker-controlled PyPI package name that is already installed in the victim's environment, or leverage a standard library module whose import triggers useful side effects.
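The import side effect can be demonstrated in isolation. The sketch below (package and variable names are invented for illustration) fabricates an "attacker-controlled" package on the fly and shows that importlib.import_module() alone is enough to run its top-level code:

```python
import builtins
import importlib
import pathlib
import sys
import tempfile

# Build a throwaway package whose __init__.py has a visible side effect,
# standing in for a malicious package already installed on the victim's machine.
pkg_root = pathlib.Path(tempfile.mkdtemp())
pkg_dir = pkg_root / "evil_pkg"
pkg_dir.mkdir()
(pkg_dir / "__init__.py").write_text(
    "import builtins\n"
    "builtins.PWNED = True  # stands in for an arbitrary attacker payload\n"
)

sys.path.insert(0, str(pkg_root))
importlib.import_module("evil_pkg")  # top-level code runs at import time

print(builtins.PWNED)  # prints True
```

This is exactly what happens when parent_library names a package whose __init__.py carries a payload: the getattr() and .from_pretrained() calls never need to succeed.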
Recommended Fix
- Add an allowlist for parent_library values (e.g., {"transformers", "diffusers"}) in auto.py
- Add an allowlist for megatron_core values in tp_layer.py
- Validate all config values before passing to importlib.import_module()
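Since both call sites share the same root cause, one shared validator could serve as the single choke point. A sketch (field names and allowlist contents are assumptions for illustration):

```python
import importlib

# Illustrative shared validator: every module name that originates from a
# config file must pass through here before being imported.
MODULE_ALLOWLISTS = {
    "parent_library": {"transformers", "diffusers"},
    "megatron_core": {"megatron.core"},
}

def import_config_module(field, name, allowlists=MODULE_ALLOWLISTS):
    """Import a config-supplied module name only if allowlisted for `field`."""
    allowed = allowlists.get(field, frozenset())
    if name not in allowed:
        raise ValueError(
            f"Config field {field}={name!r} is not an allowed module "
            f"(allowed: {sorted(allowed)})"
        )
    return importlib.import_module(name)
```

Routing both auto.py and tp_layer.py through one function also makes future audits easier: any new importlib call site that bypasses it is immediately suspect.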
Disclosure Timeline:
- 2026-03-08: Report filed (public issue — GHSA private reporting disabled)
- 90-day disclosure deadline: 2026-06-06