LetsPing
LetsPing is the seatbelt for high-stakes autonomous agents. You wrap only the .tool() implementations that execute destructive actions (POST, PUT, DELETE). LetsPing Detects anomalies via Markov behavioral profiling, Intercepts execution by parking the agent's state in Cryo-Sleep, and lets you Resolve the request with a human in the loop. The system learns from every approval.
The SDK authenticates with an API key read from the environment (LETSPING_API_KEY). Keys are generated in the dashboard and scoped to a project. When an operator edits a payload before approving, LetsPing computes a diff_summary and returns APPROVED_WITH_MODIFICATIONS. Your agent learns from the correction and stops repeating the mistake.

The Autonomous Shield
1. Markov Behavioral Profiling
Every time an agent executes a tool, LetsPing hashes the parameter schema and structural context (the skeleton_hash). Over time, this builds a probability graph of your agent's reasoning pathways. When an agent jumps from a read_logs node straight to an apply_terraform node with 0% historical probability, the Shield detects a high-entropy reasoning anomaly and intervenes.
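The profiling idea can be sketched as a simple transition-count graph. This is an illustrative sketch, not the SDK's actual implementation; the names TransitionGraph and isAnomalous are hypothetical:

```typescript
// Hypothetical sketch of the transition-probability check described above.
class TransitionGraph {
  private edges = new Map<string, number>();
  private outTotals = new Map<string, number>();

  // Record one observed transition between two tool nodes.
  record(from: string, to: string): void {
    const key = `${from}->${to}`;
    this.edges.set(key, (this.edges.get(key) ?? 0) + 1);
    this.outTotals.set(from, (this.outTotals.get(from) ?? 0) + 1);
  }

  // Historical probability of observing `from -> to`.
  probability(from: string, to: string): number {
    const total = this.outTotals.get(from) ?? 0;
    if (total === 0) return 0;
    return (this.edges.get(`${from}->${to}`) ?? 0) / total;
  }

  // Flag transitions below a probability floor as high-entropy anomalies.
  isAnomalous(from: string, to: string, floor = 0.05): boolean {
    return this.probability(from, to) < floor;
  }
}

const g = new TransitionGraph();
for (let i = 0; i < 20; i++) g.record("read_logs", "summarize_logs");
g.record("read_logs", "apply_terraform"); // one-off jump

console.log(g.isAnomalous("read_logs", "summarize_logs")); // false
console.log(g.isAnomalous("read_logs", "apply_terraform")); // true (1/21 < 0.05)
```

The real Shield hashes parameter schemas (the skeleton_hash) rather than raw tool names, but the probability logic is the same shape.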
2. Shadow Mode & Baseline Locking
LetsPing runs in Shadow Mode by default. It silently observes your agent without pausing execution, graphing the entropy of the system. Once the entropy curve flattens (meaning the agent has established a predictable routine), the graph status shifts to Baseline Locked. You can then toggle the integration into Enforcement Mode to actively block deviations.
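The "entropy curve flattens" criterion can be made concrete with Shannon entropy over a node's outgoing transitions. This is a hedged sketch of the idea only; the function names and the flattening threshold are assumptions, not SDK API:

```typescript
// Shannon entropy (in bits) of an outgoing-transition count distribution.
function entropy(counts: number[]): number {
  const total = counts.reduce((a, b) => a + b, 0);
  if (total === 0) return 0;
  return counts.reduce((h, c) => {
    if (c === 0) return h;
    const p = c / total;
    return h - p * Math.log2(p);
  }, 0);
}

// A simple "has the curve flattened?" check over the most recent samples.
function isBaselineLocked(samples: number[], windowSize = 5, eps = 0.01): boolean {
  if (samples.length < windowSize) return false;
  const recent = samples.slice(-windowSize);
  return Math.max(...recent) - Math.min(...recent) < eps;
}

console.log(entropy([10, 10])); // 1 bit: two equally likely paths
console.log(entropy([20, 0]));  // 0 bits: fully predictable
console.log(isBaselineLocked([1.9, 1.2, 0.8, 0.705, 0.704, 0.703, 0.702, 0.701])); // true
```

A falling, then flat, entropy series is exactly the "predictable routine" signal that moves the graph to Baseline Locked.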
3. Live Execution Graph
The LetsPing dashboard includes a real-time node map. You can visually trace the blast radius of hallucinations, identifying exactly where the agent's logic drifted from the baseline before it hit your protected tool wrapper.
4. Approval-as-Learning
Human approvals aren't just temporary overrides. When you approve a novel payload, LetsPing injects weight into that new Markov path. To handle concept drift, unapproved legacy pathways experience Time Decay, slowly fading from the baseline graph until they are flagged as anomalies again.
Quickstart
npm install @letsping/sdk

Create a project at letsping.co → Settings → Developers. Copy the generated key.
# .env
LETSPING_API_KEY=lp_live_...

Wrapping critical tools also activates automatic behavioral profiling across your agent's execution graph.
import { LetsPing } from "@letsping/sdk";

const lp = new LetsPing();

// Execution halts here until a human approves or rejects.
const decision = await lp.ask({
  service: "billing-agent",
  action: "refund_user",
  priority: "high", // "low" | "medium" | "high" | "critical"
  payload: { user_id: "u_123", amount: 550, currency: "usd" }
});

if (decision.status === "APPROVED") {
  // Action authorized with no modifications.
  await refund(decision.payload);
} else if (decision.status === "APPROVED_WITH_MODIFICATIONS") {
  // Operator edited your payload. Learn from the diff_summary!
  console.log("Corrections:", decision.diff_summary);
  await refund(decision.patched_payload);
} else {
  // Status is "REJECTED" — stop here.
  console.log("Rejected:", decision.metadata);
}

API Reference
ask(options) blocks until a human resolves the request (or until timeoutMs, default 24 h). It returns a Decision object with status ("APPROVED", "APPROVED_WITH_MODIFICATIONS", or "REJECTED"), the original payload, and patched_payload if the operator edited values. The Python SDK raises ApprovalRejectedError on rejection.

defer(options) returns an id immediately. Use this in serverless functions or queues where you can't hold a connection open. Poll GET /api/status/:id or configure a webhook to receive the result.

A tool-call helper wraps ask() in the shape expected by OpenAI function calling or similar tool conventions. It takes a string or object as context and returns an "APPROVED" / "STOP" / "ERROR" string the LLM can interpret.

RequestOptions fields
| Field | Type | Description |
|---|---|---|
| service | string | Name of the agent or service (e.g. "billing-agent") |
| action | string | Specific action being requested (e.g. "refund_user") |
| payload | object | The data the human will see and optionally edit |
| priority | "low" \| "medium" \| "high" \| "critical" | Urgency. Affects visual treatment in the dashboard. Default: "medium" |
| schema | Zod schema or JSON Schema | Optional. Generates a typed edit form in the dashboard for payload patching |
| timeoutMs | number | Optional. Max wait in ms. Default: 86,400,000 (24 h) |
| role | string | Optional. Routes to a specific team role (e.g. "finance") |
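The deferred flow (an id now, a result later) reduces to a poll loop. A minimal framework-agnostic sketch, with the status fetcher injected so the loop is testable; the Decision shape and the PENDING status are assumptions based on the API description above, so check the SDK types for the exact contract:

```typescript
// Hedged sketch of polling a deferred request until a human resolves it.
type Decision = {
  status: "PENDING" | "APPROVED" | "APPROVED_WITH_MODIFICATIONS" | "REJECTED";
  payload: Record<string, unknown>;
  patched_payload?: Record<string, unknown>;
};

async function pollUntilResolved(
  id: string,
  fetchStatus: (id: string) => Promise<Decision>,
  intervalMs = 5000,
  maxAttempts = 60,
): Promise<Decision> {
  for (let i = 0; i < maxAttempts; i++) {
    const decision = await fetchStatus(id);
    if (decision.status !== "PENDING") return decision;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Request ${id} still pending after ${maxAttempts} polls`);
}
```

In production, fetchStatus would call GET /api/status/:id with your API key; in a queue worker you would typically persist the id and poll on a schedule instead of holding the loop open.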
Payload Patching
When you pass a schema to ask(), the dashboard renders a type-safe form pre-filled with the submitted payload values. The operator can change any field before approving. The SDK explicitly returns an APPROVED_WITH_MODIFICATIONS status alongside a structural diff_summary, enabling your agent to learn from its mistakes and avoid triggering the same alert again (a lightweight RLHF-style loop).
import { LetsPing } from "@letsping/sdk";
import { z } from "zod";

const lp = new LetsPing();

// Pass a Zod schema and the dashboard renders a type-safe edit form.
// The operator can change values before approving.
const decision = await lp.ask({
  service: "outreach-bot",
  action: "send_email",
  payload: { to: "ceo@acme.com", subject: "Q1 recap", body: "..." },
  schema: z.object({
    to: z.string().email(),
    subject: z.string(),
    body: z.string().describe("Plain text or Markdown"),
  })
});

// Check if the operator edited the subject.
const final = decision.status === "APPROVED_WITH_MODIFICATIONS"
  ? decision.patched_payload
  : decision.payload;

If you omit schema, the dashboard shows the raw JSON payload as read-only. The operator can still approve or reject, but cannot edit values.

Payload Encryption
LetsPing employs Zero-Plaintext Storage by default. All payloads are automatically encrypted at the application layer using an AES-256 master key before being persisted to the backend. For organizations requiring strict zero-trust guarantees, you can also enable Client-Side E2E Encryption. This ensures LetsPing servers never even see the plaintext values in transit.
Open Settings → Encryption in the dashboard. Click Generate key. A 256-bit AES key is created in your browser using the Web Crypto API and saved to localStorage. Copy the key — it is shown only once.
# .env — same file as your API key
LETSPING_API_KEY=lp_live_...
LETSPING_ENCRYPTION_KEY=<paste key here>

The SDK reads LETSPING_ENCRYPTION_KEY from the environment automatically. No constructor changes needed — your existing lp.ask() calls start encrypting immediately.

Submit a request. In the triage queue, open it — the payload panel shows a DECRYPTED badge and the plaintext values, decrypted locally using the key in your browser's localStorage. If you open the dashboard on a different browser or device, you will see a "key not loaded" notice instead. Import the key from Settings → Encryption on that device.
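For intuition, here is a sketch of what client-side encryption producing the { _lp_enc, iv, ct } record shape could look like with AES-256-GCM via node:crypto. This is not the SDK's actual code, and the tag handling and field encodings are assumptions:

```typescript
// Illustrative AES-256-GCM payload encryption (not the SDK's implementation).
import { randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

type EncryptedRecord = { _lp_enc: true; iv: string; ct: string };

function encryptPayload(payload: object, key: Buffer): EncryptedRecord {
  const iv = randomBytes(12); // 96-bit nonce, standard for GCM
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ct = Buffer.concat([cipher.update(JSON.stringify(payload), "utf8"), cipher.final()]);
  // Append the GCM auth tag so decryption can verify integrity.
  const withTag = Buffer.concat([ct, cipher.getAuthTag()]);
  return { _lp_enc: true, iv: iv.toString("base64"), ct: withTag.toString("base64") };
}

function decryptPayload(rec: EncryptedRecord, key: Buffer): object {
  const iv = Buffer.from(rec.iv, "base64");
  const data = Buffer.from(rec.ct, "base64");
  const ct = data.subarray(0, data.length - 16);  // ciphertext
  const tag = data.subarray(data.length - 16);    // 16-byte GCM tag
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag);
  const pt = Buffer.concat([decipher.update(ct), decipher.final()]);
  return JSON.parse(pt.toString("utf8"));
}

const key = randomBytes(32); // in practice, decoded from LETSPING_ENCRYPTION_KEY
const rec = encryptPayload({ user_id: "u_123", amount: 550 }, key);
console.log(decryptPayload(rec, key)); // { user_id: 'u_123', amount: 550 }
```

The important property is visible in the record itself: without the key, the server holds only an IV and ciphertext.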
In the database, the payload column contains { _lp_enc: true, iv: "...", ct: "..." }. There is no server-side key — LetsPing cannot decrypt it, even with full database access.

When the operator approves with edits, the patched values are re-encrypted in the browser before being sent to /api/resolve. The SDK decrypts patched_payload on the way out, so your agent code sees plain objects — not ciphertext.

Requests sent without LETSPING_ENCRYPTION_KEY set are secured by server-side envelope encryption automatically and remain completely backward compatible. The API generates a unique Data Encryption Key (DEK) for every request, so compromising one request's key never exposes the others.

When the Shield intercepts an execution, the agent's frozen Cryo-Sleep context is captured as a state_snapshot. The SDK encrypts this state payload symmetrically (via your E2E client key or the auto-generated server fallback envelope keys) and PUTs it directly to Supabase storage. Only pointers are stored in the database, so it is never bloated with huge JSON context windows.

Framework Adapters
The @letsping/adapters package wraps ask() in the tool shape expected by each framework so you don't have to write glue code.
LangChain / LangGraph
createLetsPingTool returns a DynamicStructuredTool. Drop it into any AgentExecutor or LangGraph node.
import { createLetsPingTool } from "@letsping/adapters/langchain";
import { z } from "zod";

// Drop into any LangGraph agent or standard LangChain AgentExecutor.
// The LLM calls this tool; execution pauses until a human resolves it.
const tools = [
  createLetsPingTool({
    name: "deploy_to_prod",
    description: "Deploys the current build to production. Requires human sign-off.",
    priority: "critical",
    schema: z.object({
      version: z.string().describe("Semver tag to deploy, e.g. v1.4.2"),
      environment: z.enum(["staging", "production"]),
    })
  })
];

Vercel AI SDK
letsPing() wraps the approval flow in the tool shape expected by the ai package. Use it inside a Next.js Route Handler with streamText.
import { letsPing } from "@letsping/adapters/vercel";
import { z } from "zod";
// In a Next.js Route Handler using Vercel AI SDK streamText:
const tools = {
  refund_user: letsPing({
    name: "refund_user",
    description: "Issues a refund. Requires human approval before execution.",
    priority: "high",
    schema: z.object({
      user_id: z.string(),
      amount: z.number().positive(),
    })
  })
};

MCP Server
@letsping/mcp is an MCP server that exposes a single tool: ask_human. Install it in Claude Desktop or Cursor and your agent can call it the same way it calls any other tool. No code required.
# Start the MCP server (reads LETSPING_API_KEY from env)
npx @letsping/mcp
# Claude Desktop — add to claude_desktop_config.json:
{
  "mcpServers": {
    "letsping": {
      "command": "npx",
      "args": ["@letsping/mcp"],
      "env": { "LETSPING_API_KEY": "lp_live_..." }
    }
  }
}

# Cursor — add to .cursor/mcp.json with the same structure.
# The agent now has access to the "ask_human" tool.

The ask_human tool accepts the same fields as ask(): service, action, payload, priority, role, and timeout. On rejection, it returns a text string starting with ACTION_REJECTED: so the LLM can gracefully stop the task.

Mobile Companion
The mobile app is a PWA — add letsping.co/mobile to your home screen (iOS or Android). Pair your phone once by scanning a QR code from the dashboard. After that, incoming pending requests appear as cards you can swipe to approve or tap to reject. No separate app store install.
Cloning the starter repository (git clone) wires up the MCP server and mobile companion into your agent workspace without manual configuration.

Webhooks
Configure a webhook URL in Settings → Developers → Webhooks. LetsPing will POST to your endpoint when a request is created, approved, or rejected. Use this to trigger downstream logic when you're using defer() or any other async flow.
// LetsPing sends a POST to your endpoint on every status change.
// Payload shape:
{
  "event": "request.approved", // or "request.rejected" | "request.created"
  "request_id": "req_abc123",
  "status": "APPROVED",
  "service": "billing-agent",
  "action": "refund_user",
  "payload": { "user_id": "u_123", "amount": 550 },
  "patched_payload": { "user_id": "u_123", "amount": 220 }, // if operator edited
  "resolved_at": "2026-02-20T19:30:00Z",
  "state_download_url": "https://[project].supabase.co/storage/v1/object/sign/agent_states/states/req_abc123.enc?token=..."
}

Check patched_payload first — if the operator edited values, it will be present. Fall back to payload if not.

Handling State Rehydration
LetsPing does not inject state back into your framework automatically, because LangGraph and the Vercel AI SDK manage memory differently. Use the state_download_url provided in the webhook to fetch the frozen state, decrypt it, and manually trigger your agent's resume logic.
// app/api/webhook/letsping/route.ts
import { NextResponse } from "next/server";
import { LetsPing } from "@letsping/sdk";
const lp = new LetsPing(); // Automatically loads LETSPING_ENCRYPTION_KEY if set
export async function POST(req: Request) {
  const body = await req.json();

  if (body.status === "APPROVED" || body.status === "APPROVED_WITH_MODIFICATIONS") {
    const finalPayload = body.patched_payload || body.payload;

    let hydratedState = null;
    if (body.state_download_url) {
      // 1. Fetch the frozen Cryo-Sleep state
      const res = await fetch(body.state_download_url);
      const encryptedState = await res.json();
      // 2. Decrypt locally
      hydratedState = (lp as any)._decrypt(encryptedState);
    }

    // 3. Manually resume your specific framework (e.g. LangGraph)
    // await resumeAgent(hydratedState, finalPayload);
  }

  return NextResponse.json({ success: true });
}