AMSP
AI Model Scanning Platform

Stop malicious AI models before they reach production

Deep static analysis for AI model files. Detect malicious code, backdoors, embedded secrets, and supply-chain tampering across every major model format — no execution required, no GPU needed.

Request Demo
THE CHALLENGE

AI model files are an unscanned attack surface

Models from HuggingFace, Kaggle, and GitHub are deployed into production without security review. Traditional AppSec tools—SAST, DAST, SCA—don’t understand model formats or their attack surfaces. A single .pkl file can execute arbitrary commands when loaded.

// 01

Arbitrary code execution

Pickle-based models can run system commands on load. One malicious model file means full system compromise.
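The mechanism is simple to reproduce: pickle's `__reduce__` hook lets a file specify an arbitrary callable that runs the moment the bytes are deserialized. A harmless sketch (real attacks substitute `os.system` or `subprocess`; we use `len` so nothing dangerous executes):

```python
import pickle

# A class whose __reduce__ makes unpickling call an arbitrary function.
# Real payloads put os.system here; len() keeps this demo harmless.
class Exploit:
    def __reduce__(self):
        return (len, ("executed during load",))

payload = pickle.dumps(Exploit())

# Merely loading the bytes runs the embedded call -- no method is ever invoked.
result = pickle.loads(payload)
print(result)  # the string's length, proof that code ran inside loads()
```

No `import` of the attacker's module appears in your code; the pickle stream itself names the callable.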

// 02

Embedded secrets

Cloud keys, API tokens, and credentials stored in model metadata leak into production environments undetected.

// 03

Model poisoning & backdoors

Manipulated models produce attacker-chosen outputs when triggered by specific inputs. LoRA adapters can carry spectral anomalies that suggest targeted poisoning.

// 04

Supply-chain tampering

Modified models served from trusted-looking sources. Hash mismatches between hosted metadata and downloaded artifacts.

HOW IT WORKS

Format-level static analysis, no execution required

AMSP parses raw bytes and internal structures of model files to detect threats at the format level. No model execution, no GPU, no dependency on ML frameworks. Think SonarQube for AI models.

CORE CAPABILITIES

Everything you need to secure your model supply chain

Deep format analysis

Dedicated scanners for every major model format:


Pickle-based

(.pkl, .pt, .pth, .ckpt, .bin, .joblib): Opcode-level analysis. Detects dangerous imports and code execution patterns like os.system, subprocess, eval.
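A minimal sketch of what opcode-level analysis means (an illustrative deny-list, not AMSP's engine): Python's standard pickletools can walk the opcode stream without ever executing it, flagging `GLOBAL` and `STACK_GLOBAL` references to dangerous callables.

```python
import io
import pickle
import pickletools

# Illustrative deny-list; a real scanner's rule set would be far broader.
DANGEROUS = {("os", "system"), ("posix", "system"),
             ("subprocess", "Popen"), ("builtins", "eval"),
             ("builtins", "exec")}

def scan_pickle(data: bytes):
    """Walk the opcode stream with pickletools -- nothing is ever executed."""
    findings, strings = [], []
    for op, arg, pos in pickletools.genops(io.BytesIO(data)):
        if op.name == "GLOBAL":                      # protocols <= 3: "module name"
            mod, _, name = arg.partition(" ")
            if (mod, name) in DANGEROUS:
                findings.append((pos, f"{mod}.{name}"))
        elif op.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            strings.append(arg)
        elif op.name == "STACK_GLOBAL" and len(strings) >= 2:
            mod, name = strings[-2], strings[-1]     # heuristic: last two strings pushed
            if (mod, name) in DANGEROUS:
                findings.append((pos, f"{mod}.{name}"))
    return findings

class _EvalOnLoad:                                   # hypothetical malicious payload shape
    def __reduce__(self):
        return (eval, ("1 + 1",))

findings = scan_pickle(pickle.dumps(_EvalOnLoad(), protocol=4))
print(findings)  # flags builtins.eval at its byte offset
```

Because the scan reads opcodes rather than calling `pickle.loads`, a malicious payload can never fire during analysis.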


SafeTensors

(.safetensors): Header and tensor validation. Detects oversized headers, invalid JSON, unknown dtypes, corruption.
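The .safetensors layout makes such checks cheap: an 8-byte little-endian length prefix followed by a JSON header describing each tensor. A hedged sketch (the dtype set is a subset and the header cap is an assumed limit for this sketch, not the spec):

```python
import json
import struct

KNOWN_DTYPES = {"F64", "F32", "F16", "BF16",
                "I64", "I32", "I16", "I8", "U8", "BOOL"}  # subset for illustration
MAX_HEADER_BYTES = 100 * 1024 * 1024                      # cap assumed for this sketch

def check_safetensors(data: bytes):
    """Validate the length-prefixed JSON header without touching tensor bytes."""
    if len(data) < 8:
        return ["file too small for the 8-byte length prefix"]
    (hlen,) = struct.unpack("<Q", data[:8])               # u64 LE header size
    if hlen > MAX_HEADER_BYTES or 8 + hlen > len(data):
        return ["oversized or truncated header"]
    try:
        header = json.loads(data[8:8 + hlen])
    except (json.JSONDecodeError, UnicodeDecodeError):
        return ["header is not valid JSON"]
    issues = []
    for name, info in header.items():
        if name == "__metadata__":                        # free-form metadata block
            continue
        if info.get("dtype") not in KNOWN_DTYPES:
            issues.append(f"unknown dtype in tensor {name!r}")
    return issues
```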


GGUF

(.gguf): Magic byte and metadata inspection. Detects malformed metadata and suspicious embedded strings.
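As a sketch of what that inspection looks like: a GGUF v2/v3 file opens with a 4-byte magic, a u32 version, and u64 tensor and metadata-KV counts (v1 used 32-bit counts and is skipped here; the count bounds are illustrative, not from the spec):

```python
import struct

GGUF_MAGIC = b"GGUF"

def check_gguf_header(data: bytes):
    """Sanity-check a GGUF v2/v3 header without reading any tensor data."""
    if len(data) < 24 or data[:4] != GGUF_MAGIC:
        return ["missing GGUF magic bytes"]
    version, n_tensors, n_kv = struct.unpack_from("<IQQ", data, 4)
    issues = []
    if version not in (2, 3):                 # v1 used 32-bit counts; not parsed here
        issues.append(f"unexpected GGUF version {version}")
    elif n_tensors > 100_000 or n_kv > 100_000:   # illustrative sanity bounds
        issues.append("implausible tensor/metadata counts")
    return issues
```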


ONNX

(.onnx): Graph traversal and operator inspection. Detects external data references and suspicious operators.


Keras/H5

(.h5): Static structure inspection. Detects Lambda-layer deserialization risk and custom object injection.


NumPy

(.npy, .npz): Header and dtype validation. Detects object dtype injection risk and pickled payloads.
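To illustrate: a .npy file starts with a fixed magic and a Python-literal header dict, and an object dtype (an `'O'` in the descr) means `np.load` must unpickle, reintroducing code-execution risk. A minimal sketch of that header check:

```python
import ast
import struct

NPY_MAGIC = b"\x93NUMPY"

def check_npy(data: bytes):
    """Flag .npy files whose dtype forces pickle deserialization at load time."""
    if len(data) < 10 or not data.startswith(NPY_MAGIC):
        return ["not an .npy file"]
    major = data[6]
    if major == 1:                                   # v1.0: u16 header length
        (hlen,) = struct.unpack_from("<H", data, 8)
        raw = data[10:10 + hlen]
    else:                                            # v2.0/3.0: u32 header length
        (hlen,) = struct.unpack_from("<I", data, 8)
        raw = data[12:12 + hlen]
    meta = ast.literal_eval(raw.decode("latin-1"))   # header is a Python dict literal
    descr = meta.get("descr")
    if isinstance(descr, str) and "O" in descr:
        return ["object dtype: np.load needs allow_pickle=True (code execution risk)"]
    return []
```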


LoRA adapters

Spectral and SVD anomaly analysis. Detects spectral dominance, entropy anomalies, rank collapse, norm outliers.
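A sketch of the underlying math, with illustrative thresholds (not AMSP's tuned values): take the singular values of an adapter's weight delta and measure how concentrated they are. Energy piled into one direction, or near-zero spectral entropy, is the kind of signal a poisoning heuristic reacts to.

```python
import numpy as np

def spectral_report(delta: np.ndarray, dom_thresh: float = 0.9, ent_thresh: float = 0.1):
    """Heuristic spectral statistics for an adapter weight delta (thresholds illustrative)."""
    s = np.linalg.svd(delta, compute_uv=False)   # singular values, descending
    s = s[s > 0]
    p = s / s.sum()
    dominance = float(p[0])                      # energy share of the top direction
    entropy = float(-(p * np.log(p)).sum())      # near 0 => rank collapse
    return {"dominance": dominance, "entropy": entropy,
            "suspicious": dominance > dom_thresh or entropy < ent_thresh}
```

A benign dense update spreads energy across many directions; a rank-1 delta concentrates nearly all of it in one.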

Embedded secrets detection

Detects common secret patterns across all supported formats:

Cloud credentials (AWS, GCP, Azure)

LLM provider keys (OpenAI, Anthropic, and others)

GitHub and Slack tokens, JWTs

Private endpoints and internal URLs

Includes evidence location and remediation guidance: remove, rotate, re-export.
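The pattern side can be sketched with ordinary regular expressions over the raw bytes (three illustrative rules; a production rule set would be far larger), returning the byte offset of each hit as the evidence location:

```python
import re

# Illustrative patterns only -- not AMSP's rule set.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(rb"AKIA[0-9A-Z]{16}"),
    "github_token":   re.compile(rb"ghp_[A-Za-z0-9]{36}"),
    "private_key":    re.compile(rb"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def find_secrets(blob: bytes):
    """Return (kind, byte_offset) pairs -- evidence locations for remediation."""
    hits = []
    for kind, pat in SECRET_PATTERNS.items():
        for m in pat.finditer(blob):
            hits.append((kind, m.start()))
    return hits
```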

Supply-chain integrity & provenance validation

Hash and integrity validation when metadata is available

Version-to-version tampering signals

Provenance context with source and production signals

Optional HuggingFace verification workflows via hf:// reference

Risk scoring (0–100) with explainability

Each scan produces a risk score for CI gating and policy enforcement. Scoring combines finding severity and confidence with category weighting, so RCE findings weigh more than structural anomalies. Every finding includes a “why flagged” explanation with evidence references.
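As a sketch of how such a score might combine severity, confidence, and category weight (the weights and formula here are hypothetical, not AMSP's published model):

```python
# Hypothetical weights and formula -- illustrating the idea, not AMSP's exact model.
CATEGORY_WEIGHT = {"rce": 1.0, "secret": 0.8, "tampering": 0.7, "structural": 0.3}
SEVERITY_VALUE = {"critical": 100, "high": 70, "medium": 40, "low": 15}

def risk_score(findings):
    """findings: dicts with 'category', 'severity', and 'confidence' in [0, 1]."""
    if not findings:
        return 0
    worst = max(SEVERITY_VALUE[f["severity"]]
                * CATEGORY_WEIGHT[f["category"]]
                * f["confidence"]
                for f in findings)
    return min(100, round(worst))
```

Taking the maximum (rather than a sum) keeps a single confident RCE finding from being diluted by a pile of low-severity structural notes.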

CI/CD & MLOps integration

Designed for shift-left and continuous validation

CLI for local scanning

CI gating: fail builds on severity thresholds

SARIF output for code scanning platforms

Pre-commit hook support

Containerized deployment for internal environments
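In a pipeline, gating reduces to mapping the scan score to a process exit code. A hypothetical gate step (function name and default threshold are illustrative):

```python
import sys

# Hypothetical CI gate: turn a scan score into a build-blocking exit code.
def gate(score: int, threshold: int = 70) -> int:
    """0 = pass the build, 1 = block it."""
    if score >= threshold:
        print(f"BLOCKED: risk score {score} >= threshold {threshold}", file=sys.stderr)
        return 1
    print(f"pass: risk score {score} < threshold {threshold}")
    return 0
```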

Compliance-ready reporting

SARIF 2.1.0 output

CycloneDX-style exports (ML-BOM style reporting)

JSON manifests for dashboards and integrations

WHAT MAKES IT DIFFERENT

Why AMSP

No execution required

Pure static analysis at the byte and structure level. No GPU, no ML framework dependencies, no risk of triggering malicious payloads during scanning.

Format-native depth

Not a generic file scanner. Dedicated parsers for Pickle, SafeTensors, GGUF, ONNX, Keras, NumPy, LoRA, and TorchScript—each with format-specific threat detection.

Pipeline-native

CLI, CI gating, SARIF, pre-commit hooks, and Docker deployment. Fits into existing AppSec and MLOps workflows without a separate tool chain.

Who is it for?

1

ML Platform / MLOps teams

Gate model promotion and deployment with automated security scanning.

2

AppSec teams

Enforce policy on model artifacts and generate consistent evidence.

3

Security leadership

Visibility, risk scoring, and audit-ready reporting across model supply chains.

4

DevOps teams

Automate scanning across pipelines and registries.

OUTCOMES

What teams achieve

  1. Prevent RCE from unsafe model formats before models are loaded
  2. Detect and remove embedded secrets before publish or deploy
  3. Reduce likelihood of backdoored or poisoned artifacts entering production
  4. Enforce policy controls on model artifacts comparable to code security controls
  5. Produce audit-ready evidence for AI supply-chain security
  6. Standardize model security checks across teams and pipelines

See AMSP in Action

Your specific AI tools and workflows

Real detection on your type of data

ROI calculation for your organization

Deployment plan for your environment

Schedule Demo
// FAQ

Frequently Asked Questions

// 01

Can we deploy this on-premise?

Yes. AMSP supports local CLI, Docker deployment for internal scanning services, and an optional dashboard for centralized governance with inventory, policies, and scan history.
// 02

How is the risk score calculated?

Scores range from 0–100, based on severity and confidence of findings with category weighting. RCE findings are weighted higher than structural anomalies. Each score includes a full explanation.
// 03

Can we integrate this into our CI/CD pipeline?

Yes. AMSP provides a CLI for local scanning, CI gating with severity thresholds, SARIF output for code scanning platforms, pre-commit hooks, and containerized deployment.
// 04

What model formats are supported?

Pickle-based formats (.pkl, .pt, .pth, .ckpt, .bin, .joblib), SafeTensors, GGUF, ONNX, Keras/H5, NumPy (.npy, .npz), TorchScript, LoRA adapters, and metadata/text artifacts.
// 05

Does AMSP execute the models it scans?

No. AMSP performs pure static analysis—parsing raw bytes and internal structures. No model execution, no GPU required, and no dependency on ML frameworks.