Why Vercel AI SDK + LetsPing is a good fit
Vercel AI SDK makes it easy to build streaming, tool-using agents as route handlers. LetsPing adds a behavioral firewall and human-in-the-loop (HITL) approvals on top: you keep the simple streamText interface and gain precise control over risky tools. If an agent hallucinates a destructive payload, Vercel Queues will deliver it faithfully; LetsPing intercepts it before it touches your infrastructure. Do not put an autonomous LLM payload into a queue without wrapping it in a deterministic firewall first.
Step 1 — Install adapters and SDK
npm install @letsping/sdk @letsping/adapters zod
The @letsping/adapters/vercel package exposes a letsPing helper that wraps tools with HITL and firewall logic.
Step 2 — Wrap risky tools
Inside your route handler, wrap any tool that should require human approval:
// app/api/chat/route.ts
import { NextRequest } from "next/server";
import { streamText } from "ai";
import { letsPing } from "@letsping/adapters/vercel";
import { z } from "zod";

const refundUser = letsPing({
  name: "refund_user",
  description: "Issues a refund. Requires human approval before execution.",
  apiKey: process.env.LETSPING_API_KEY!,
  priority: "high",
  schema: z.object({
    user_id: z.string(),
    amount: z.number().positive(),
  }),
});

export async function POST(req: NextRequest) {
  const { messages } = await req.json();

  const result = await streamText({
    model: /* your model */,
    messages,
    tools: { refund_user: refundUser },
  });

  return result.toAIStreamResponse();
}

When the model calls refund_user, LetsPing pauses execution, parks state, and sends the request to the HITL console governed by the behavioral firewall. Before the call is allowed to proceed, the payload is parsed into a structured form and evaluated against grammar and policy constraints, with stricter rules when the decision is based on untrusted context.
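As a rough illustration of that evaluation step, here is a minimal sketch in TypeScript. The evaluateRefund helper, the decision values, and the approval ceilings are all hypothetical, not LetsPing's actual API; the sketch only shows the shape of a deterministic grammar-plus-policy check that tightens when the call was influenced by untrusted context.

```typescript
// Hypothetical decision values; LetsPing's real vocabulary may differ.
type Decision = "allow" | "pause" | "reject";

interface RefundPayload {
  user_id: string;
  amount: number;
}

// Sketch of a deterministic policy check. The ceilings are invented
// illustration values, not LetsPing defaults.
function evaluateRefund(
  payload: RefundPayload,
  fromUntrustedContext: boolean
): Decision {
  // Grammar constraint: the payload must be structurally valid.
  if (!payload.user_id || !(payload.amount > 0)) return "reject";

  // Stricter rule when the model acted on untrusted context:
  // nothing auto-approves, everything goes to a human.
  const autoApproveCeiling = fromUntrustedContext ? 0 : 100;
  if (payload.amount <= autoApproveCeiling) return "allow";

  // Otherwise park the call and route it to the HITL console.
  return "pause";
}
```

The important property is that the decision is deterministic and inspectable: the same payload and context always produce the same outcome, regardless of what the model claimed it was doing.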
Step 3 — Show triage status in the client
When used with streaming tool progress, the adapter can emit a progress event so your UI can show “Waiting for approval” instead of hanging.
In React, listen to the tool progress stream and, when the status is something like "intercepted_by_firewall", render a badge with the triage_url linking to the LetsPing dashboard.
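A small sketch of the client side, with the event shape assumed rather than taken from the adapter (the status strings and triage_url field are hypothetical): a pure function that maps a progress event to badge text, which a React component can then render.

```typescript
// Hypothetical event shape; the real adapter's payload may differ.
type ToolStatus = "running" | "intercepted_by_firewall" | "approved" | "rejected";

interface ToolProgressEvent {
  status: ToolStatus;
  triage_url?: string;
}

// Map a progress event to the badge text the UI should show,
// so the interface never just hangs while a call is parked.
function badgeFor(event: ToolProgressEvent): string {
  switch (event.status) {
    case "intercepted_by_firewall":
      // Surface the triage link so the reviewer can jump to the console.
      return event.triage_url
        ? `Waiting for approval (review at ${event.triage_url})`
        : "Waiting for approval";
    case "approved":
      return "Approved";
    case "rejected":
      return "Rejected by firewall";
    default:
      return "Running";
  }
}
```

Keeping the mapping in a plain function makes it trivial to unit-test separately from the streaming plumbing.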
Agents in background queues: push is your safety net
When your agent runs inside a Vercel Queue or Workflow, there is no frontend for the developer to watch. The safety net is LetsPing intercepting the background job and sending a push notification to your phone. The same firewall and HITL console apply whether the agent runs in an interactive session or in the background.
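To make the background path concrete, here is a hedged sketch of the notification step. The notifyOnPause helper and the PushNotification shape are hypothetical, not LetsPing's real API; they only show the idea that a paused decision in a headless job becomes a push payload instead of a UI badge.

```typescript
// Hypothetical shapes for illustration only.
type Decision = "allow" | "pause" | "reject";

interface PushNotification {
  title: string;
  body: string;
  url: string;
}

// In a queue there is no UI to surface a pause, so a paused decision
// is turned into a push notification pointing at the triage console.
function notifyOnPause(
  decision: Decision,
  jobId: string,
  triageUrl: string
): PushNotification | null {
  // Allowed calls proceed and rejected calls stop; neither needs a human.
  if (decision !== "pause") return null;
  return {
    title: "Agent call intercepted",
    body: `Background job ${jobId} is waiting for approval.`,
    url: triageUrl,
  };
}
```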
How the behavioral firewall sees your Vercel agents
Each wrapped tool call becomes a LetsPing request with a service and an action. The Markov-based behavioral engine watches the sequence of these calls over time and computes anomaly scores over a reduced feature set. Deterministic guardrails decide whether a call is allowed, paused, or rejected; the Markov model only highlights surprising sequences and feeds dashboards rather than acting as an opaque gate.
Next steps
• Explore the behavioral firewall and HITL console overviews.
• See the examples page for real Vercel AI SDK recipes.
• If multiple agents or vendors touch the same Vercel route, consider agent-to-agent escrow to keep a cryptographic chain of custody.