Security: OWASP Agent Memory Guard – protect AutoGPT from memory poisoning (ASI06) #13097

@vgudur-dev

Description

Context

The OWASP Top 10 for Agentic Applications identifies ASI06: Memory Poisoning as a top vulnerability for agents with persistent memory.

AutoGPT's persistent memory architecture (file-based, vector DB, etc.) is directly susceptible to memory poisoning attacks, in which an attacker injects malicious content into the memory store, causing the agent to act on false or harmful information in future sessions.

Request

Would the AutoGPT team consider integrating or documenting OWASP Agent Memory Guard as a security layer?

What it does:

  • Detects tampered memory entries using SHA-256 integrity baselines
  • Scans memory reads and writes for prompt-injection payloads and secret leakage
  • Enforces YAML-defined policies (block/warn/strip) at the memory boundary
  • Sub-100 μs latency, zero external dependencies, runs entirely locally

Install:

```
pip install agent-memory-guard
```
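To make the idea concrete, here is a minimal, illustrative sketch of the mechanism described above: a SHA-256 baseline detects tampered entries, and a simple block/warn/strip policy is applied at the memory-read boundary. This is not the agent-memory-guard API; all names (`fingerprint`, `check_entry`, `INJECTION_MARKERS`) are hypothetical, and real scanners use far richer pattern libraries than this substring check.

```python
import hashlib

# Hypothetical markers; real injection scanners use curated pattern sets.
INJECTION_MARKERS = ("ignore previous instructions", "system prompt")


def fingerprint(entry: str) -> str:
    """SHA-256 integrity baseline for a memory entry."""
    return hashlib.sha256(entry.encode("utf-8")).hexdigest()


def check_entry(entry: str, baseline: str, policy: str = "block"):
    """Guard one memory read; return (allowed_text, verdict)."""
    # 1. Integrity: any drift from the recorded baseline means tampering.
    if fingerprint(entry) != baseline:
        return None, "tampered"
    # 2. Content scan: look for injection payloads.
    lowered = entry.lower()
    hits = [m for m in INJECTION_MARKERS if m in lowered]
    if hits:
        if policy == "block":
            return None, "blocked"
        if policy == "strip":
            cleaned = entry
            for m in hits:
                # Naive case-insensitive-ish strip for illustration only.
                cleaned = cleaned.lower().replace(m, "").strip()
            return cleaned, "stripped"
        return entry, "warned"  # policy == "warn"
    return entry, "ok"


note = "User prefers dark mode."
base = fingerprint(note)
print(check_entry(note, base))        # benign entry passes
print(check_entry(note + "!", base))  # hash mismatch: flagged as tampered
```

The key design point, which the library reportedly enforces, is that both checks run at the memory boundary itself, so a poisoned entry is caught on every read rather than only at write time.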

GitHub: https://github.com/OWASP/www-project-agent-memory-guard

This is the official OWASP reference implementation and has been adopted by the UK Government BEIS Inspect AI framework.

Happy to contribute a PR or provide a working integration example.
