"Not a Pentest" Notice: This guide covers defensive hardening of your own AI agent tools. It contains no attack tooling.
Moltbot AI Security · Batch 5

AI Tool Use Security: Securing LLM Function Calling

When an LLM can call tools — shell commands, HTTP requests, database queries, file writes — the attack surface explodes. A single prompt injection can pivot through an unsecured tool to the host, internal network, or sensitive data. This guide covers risk classification and concrete defenses for 7 common AI agent tool types.

- 7 tool risk categories
- 2 CRITICAL-risk tool types
- HITL required for all write tools
- 0 tool outputs trusted by default

Tool Risk Matrix

| Tool | Risk | Attack Vector | Defense |
| --- | --- | --- | --- |
| Shell / Code Execution | CRITICAL | Prompt injection → arbitrary command execution on host | Run in a `--read-only` container with `--cap-drop=ALL`. Allowlist permitted commands. 30 s hard timeout. Never run as root. |
| HTTP / Web Requests | HIGH | SSRF → internal network access, metadata endpoints, cloud credentials | Allowlist permitted domains/IPs. Block RFC 1918 ranges and link-local (169.254.x.x). Validate URLs before fetch. Log all requests. |
| File System Read | HIGH | Path traversal → read /etc/passwd, ~/.ssh/id_rsa, .env files | Restrict to the declared workspace directory. Validate the resolved path against the workspace root. Block symlink traversal. |
| File System Write | CRITICAL | Overwrite config files, inject malicious code, modify agent behavior | Require human confirmation for all writes. Scope to a temp directory only. Audit all write operations. |
| Database Queries | HIGH | SQL injection via LLM-generated queries, data exfiltration | Use parameterized queries only, never string-interpolated SQL. Read-only credentials for read operations. Scope to the minimal required tables. |
| Email / Notifications | HIGH | Data exfiltration via email, spam/phishing via LLM-drafted content | Require human approval for all external sends. Allowlist recipients. Review content before send. Rate limit: max 10 emails/hour. |
| Calendar / Scheduling | MEDIUM | Unwanted calendar events, social engineering via agent-created meetings | Human-in-the-loop for all external calendar invites. Scope to the agent's own calendar by default. |
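The shell-execution defenses in the table (command allowlist plus a 30-second hard timeout) can be sketched as a thin wrapper. This is a minimal sketch: the allowlist contents and the `run_sandboxed` name are assumptions, and the container-level isolation (`--read-only`, `--cap-drop=ALL`, non-root user) still has to be applied around the process running it.

```python
import shlex
import subprocess

# Hypothetical allowlist; adjust per deployment.
ALLOWED_COMMANDS = {"ls", "cat", "grep", "wc"}
TIMEOUT_SECONDS = 30  # hard timeout from the table above


def run_sandboxed(command_line: str) -> str:
    """Run an allowlisted command with a hard timeout.

    Enforces only the allowlist and timeout; container isolation
    must be provided by the environment this wrapper runs in.
    """
    argv = shlex.split(command_line)
    if not argv or argv[0] not in ALLOWED_COMMANDS:
        raise PermissionError(f"command not allowlisted: {argv[:1]}")
    result = subprocess.run(
        argv,
        capture_output=True,
        text=True,
        timeout=TIMEOUT_SECONDS,  # raises TimeoutExpired past 30 s
        shell=False,              # never pass through a shell
    )
    return result.stdout
```

Keeping `shell=False` and splitting with `shlex` means metacharacters like `;` or `&&` are passed as literal arguments rather than interpreted, which closes the most common command-injection path.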

Principle of Least Tool

Start with zero tools. Add back only what the specific task requires. A summarization agent needs no tools at all. A research agent needs HTTP read only. A coding agent needs file read + write in a scoped temp directory only.

# BAD: register all tools "just in case"
agent = Agent(tools=[ShellTool(), FileTool(), HTTPTool(),
                     EmailTool(), DBTool(), CalendarTool()])

# GOOD: minimum required for the specific task
summarizer = Agent(tools=[])  # No tools needed
researcher = Agent(tools=[HTTPTool(allowlist=["arxiv.org", "pubmed.ncbi.nlm.nih.gov"])])
coder = Agent(tools=[
  FileTool(workspace="/tmp/agent-sandbox", mode="rw"),
  # Shell removed — use isolated subprocess instead
])

Frequently Asked Questions

What is the biggest security risk of LLM function calling?

Unscoped tool access combined with prompt injection. An LLM with access to a shell tool and no sandboxing can be prompted to execute arbitrary commands. The fix: every tool must have a declared scope, run in an isolated container, and dangerous tools (shell, file write, HTTP) require human confirmation or are restricted to an allowlist.

How do I implement human-in-the-loop for AI tool use?

For high-risk tools: before execution, present the proposed tool call (tool name + parameters) to a human operator via a review interface. Only execute after explicit approval. Log: approver identity, approval timestamp, original LLM reasoning. Implement a timeout — if no approval within X minutes, cancel the action.
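One way to sketch that approval gate, assuming a blocking `ask_human` callback supplied by your review interface (the tool names, callback signature, and audit sink here are all hypothetical):

```python
import time
from dataclasses import dataclass


@dataclass
class ApprovalRecord:
    approver: str       # identity of the human who approved
    approved_at: float  # approval timestamp (epoch seconds)
    reasoning: str      # original LLM reasoning for the call


# Hypothetical set of tools that require a human gate.
HIGH_RISK_TOOLS = {"shell", "file_write", "email_send"}


def gated_call(tool_name, params, execute, ask_human, reasoning):
    """Execute a high-risk tool call only after explicit human approval.

    ask_human(tool_name, params) should block until the operator decides
    (or an operator-side timeout fires) and return the approver's
    identity, or None on denial/timeout.
    """
    if tool_name not in HIGH_RISK_TOOLS:
        return execute(tool_name, params)  # low-risk: no gate
    approver = ask_human(tool_name, params)
    if approver is None:
        raise TimeoutError(f"{tool_name} call denied or timed out")
    record = ApprovalRecord(approver, time.time(), reasoning)
    print(f"AUDIT: {record}")  # replace with a real audit-log sink
    return execute(tool_name, params)
```

The key design choice is that denial and timeout both raise rather than silently skipping: the agent loop must see an explicit failure, not an empty success.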

Can I trust tool outputs fed back to the LLM?

Never unconditionally. Tool outputs can contain adversarial content (e.g., a web page with injected instructions). Sanitize all tool outputs before feeding back to the LLM: strip HTML, extract structured data only, apply the same injection detection as user inputs. Treat tool output as untrusted data, not as trusted system context.
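A minimal sketch of that sanitization step, using only the standard library; the injection patterns here are deliberately incomplete placeholders, and a real system should reuse whatever injection detector it already applies to user inputs:

```python
import re
from html.parser import HTMLParser


class _TextExtractor(HTMLParser):
    """Collect only text nodes, dropping tags plus script/style bodies."""

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth:
            self.parts.append(data)


# Hypothetical, incomplete patterns for illustration only.
INJECTION_PATTERNS = re.compile(
    r"(ignore (all |previous )?instructions|you are now|system prompt)",
    re.IGNORECASE,
)


def sanitize_tool_output(raw: str) -> str:
    """Strip HTML, collapse whitespace, redact suspected injections."""
    parser = _TextExtractor()
    parser.feed(raw)
    text = " ".join(" ".join(parser.parts).split())
    if INJECTION_PATTERNS.search(text):
        return "[REDACTED: possible injected instructions in tool output]"
    return text
```

Redacting rather than passing through flagged content keeps the untrusted-data boundary intact: the LLM only ever sees either cleaned text or an explicit redaction marker.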

How do I prevent SSRF via AI HTTP tools?

1) Allowlist permitted domains — reject everything else. 2) Resolve the URL and check the IP is not in an RFC 1918 range (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16) or link-local (169.254.0.0/16). 3) Follow redirects but re-validate each redirect target. 4) Block metadata endpoints: 169.254.169.254 (AWS), metadata.google.internal. 5) Log all HTTP tool calls with URL, response code, and response size.
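Steps 1 and 2 can be sketched with the standard library's `ipaddress` module; the allowlist contents and function names are hypothetical, and `validate_url` must be re-run on every redirect target (step 3), not just the first URL:

```python
import ipaddress
import socket
from urllib.parse import urlparse

# Hypothetical allowlist; adapt per deployment.
ALLOWED_HOSTS = {"arxiv.org", "pubmed.ncbi.nlm.nih.gov"}


def ip_is_blocked(ip_str: str) -> bool:
    """True for private (RFC 1918), link-local, loopback, reserved IPs."""
    ip = ipaddress.ip_address(ip_str)
    return (
        ip.is_private        # covers 10/8, 172.16/12, 192.168/16
        or ip.is_link_local  # covers 169.254.0.0/16 incl. 169.254.169.254
        or ip.is_loopback
        or ip.is_reserved
        or ip.is_multicast
    )


def validate_url(url: str) -> None:
    """Raise if the URL host is not allowlisted or resolves to a blocked IP."""
    host = urlparse(url).hostname
    if host is None or host not in ALLOWED_HOSTS:
        raise PermissionError(f"host not allowlisted: {host!r}")
    # Check every resolved address, not just the first (DNS rebinding).
    for info in socket.getaddrinfo(host, None):
        if ip_is_blocked(info[4][0]):
            raise PermissionError(f"{host} resolves to a blocked IP")
```

Checking the resolved IP rather than the hostname string is what actually blocks the metadata endpoint: an attacker-controlled domain can resolve to 169.254.169.254 even though the name looks harmless.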
