
LangGraph HITL + Behavioral Firewall Integration

LangGraph production recipe · LetsPing Answer Hub

What you get when you combine LangGraph and LetsPing

LangGraph gives you structured, graph-based control over your agents. LetsPing adds a behavioral firewall, human-in-the-loop console, Cryo-Sleep state parking, and remote checkpoints. Together, they let you pause risky nodes, route them to humans, and resume the graph safely, all while building a Markov baseline of your agent's behavior.

Step 1 — Wire in the LetsPingCheckpointer

The checkpointer makes your LangGraph threads resumable across workers and processes:

import { StateGraph } from "@langchain/langgraph";
import { LetsPing } from "@letsping/sdk";
import { LetsPingCheckpointer } from "@letsping/sdk/integrations/langgraph";

const lp = new LetsPing(process.env.LETSPING_API_KEY!);
const checkpointer = new LetsPingCheckpointer(lp);

const builder = new StateGraph<any>({ channels: {} }); // replace {} with your real state channels
const graph = builder.compile({ checkpointer });

From here on, checkpoints are persisted remotely via LetsPing (encrypted in Supabase Storage), not just in memory. That's what makes Cryo-Sleep and HITL safe to use in serverless and containerized environments.
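To make the resumability model concrete, here is a toy in-memory sketch of what a checkpointer does conceptually: the latest checkpoint is stored keyed by thread_id, so any worker holding the same thread_id can pick up where another left off. The `CheckpointStore` class, field names, and Map-based storage below are illustrative only, not the LetsPing implementation, which persists encrypted checkpoints remotely.

```typescript
// Illustrative only: a toy checkpoint store keyed by thread_id.
// LetsPingCheckpointer does the same conceptually, but persists
// encrypted checkpoints remotely instead of in process memory.
type Checkpoint = { threadId: string; step: number; state: unknown };

class CheckpointStore {
  private byThread = new Map<string, Checkpoint>();

  put(cp: Checkpoint): void {
    const existing = this.byThread.get(cp.threadId);
    // Keep only the most recent checkpoint per thread
    if (!existing || cp.step > existing.step) {
      this.byThread.set(cp.threadId, cp);
    }
  }

  get(threadId: string): Checkpoint | undefined {
    return this.byThread.get(threadId);
  }
}

const store = new CheckpointStore();
store.put({ threadId: "t-1", step: 1, state: { refundStatus: "pending" } });
store.put({ threadId: "t-1", step: 2, state: { refundStatus: "awaiting_human" } });

// A different worker resuming with the same thread_id sees step 2:
const resumed = store.get("t-1");
```

Because the key is the thread_id, the same lookup works from any process, which is what makes the pattern safe in serverless and containerized environments.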

Step 2 — Wrap risky nodes with HITL + firewall

For nodes that touch money, infra, or data, call LetsPing from inside the node. The behavioral firewall observes each transition and evaluates guardrails, and the console surfaces the decision to humans:

async function refundNode(state: any) {
  const decision = await lp.ask({
    service: "billing-agent",
    action: "refund_user",
    priority: "high",
    payload: {
      charge_id: state.chargeId,
      amount_cents: state.refundAmountCents,
    },
  });

  if (decision.status === "REJECTED") {
    return { ...state, refundStatus: "rejected_by_human" };
  }

  // Prefer the payload as edited by the human reviewer, if any
  const data = decision.patched_payload ?? decision.payload;
  await stripe.refunds.create({ charge: data.charge_id, amount: data.amount_cents });

  return { ...state, refundStatus: "approved_and_executed" };
}

Every time this node fires, LetsPing updates the Markov baseline for your agent, tracks guardrail hits, and stores an auditable Decision.
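Downstream routing typically branches on the status the node writes back into state. A small router function can drive a LangGraph conditional edge; the function name `routeAfterRefund` and the target node names below are illustrative assumptions, not part of the LetsPing SDK:

```typescript
// Illustrative router for a conditional edge after refundNode.
// Target node names ("notify_user", "escalate", "end") are assumptions.
function routeAfterRefund(state: { refundStatus?: string }): string {
  switch (state.refundStatus) {
    case "approved_and_executed":
      return "notify_user"; // continue the happy path
    case "rejected_by_human":
      return "escalate"; // hand off to a human workflow
    default:
      return "end"; // unknown status: stop the run
  }
}
```

If you use LangGraph's conditional edges, this is the kind of function you would pass to builder.addConditionalEdges after the refund node.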

Step 3 — Resume the graph from a webhook after approval

When a human approves or patches a request, LetsPing can call a webhook where you resume the graph using the same LetsPingCheckpointer:

import { NextRequest, NextResponse } from "next/server";
import { LetsPing } from "@letsping/sdk";
import { LetsPingCheckpointer } from "@letsping/sdk/integrations/langgraph";

import { buildGraph } from "@/lib/langgraph"; // your graph builder

const lp = new LetsPing(process.env.LETSPING_API_KEY!);
const checkpointer = new LetsPingCheckpointer(lp);
const graph = buildGraph({ checkpointer }); // returns the compiled graph

export async function POST(req: NextRequest) {
  const raw = await req.text();
  const sig = req.headers.get("x-letsping-signature") || "";

  const event = await lp.webhookHandler(raw, sig, process.env.LETSPING_WEBHOOK_SECRET!);
  const { state_snapshot } = event;

  const threadId = state_snapshot?.thread_id as string | undefined;
  if (!threadId) {
    return NextResponse.json({ ok: false, error: "missing_thread_id" }, { status: 400 });
  }

  await graph.invoke(state_snapshot.input, {
    configurable: { thread_id: threadId },
  });

  return NextResponse.json({ ok: true });
}

This pattern gives you a clean separation: LangGraph controls the agent, LetsPing controls when risky nodes are allowed to fire.

How the behavioral firewall fits into LangGraph

As your graph runs, LetsPing's Markov engine builds a baseline of which nodes usually follow which. Each transition receives an edge anomaly score; when a score exceeds mean + 3·σ over the learned history (or a fallback threshold while history is still sparse), guardrails can pause the run and surface the event in the console. The sequence_entropy guardrail supports min_markov_anomaly_score when Markov metrics are provided. Your approval or rejection feeds back into the baseline, so future behavior is scored correctly.
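The mean + 3·σ rule can be sketched in a few lines of scoring logic. The function name, fallback value, and minimum-history cutoff below are illustrative, not the engine's internals:

```typescript
// Illustrative mean + 3·sigma anomaly check over historical edge scores.
// The 0.9 fallback and the 2-sample minimum are assumptions for this sketch.
function isAnomalous(history: number[], score: number, fallback = 0.9): boolean {
  // With too little history, fall back to a fixed threshold
  if (history.length < 2) return score > fallback;

  const mean = history.reduce((a, b) => a + b, 0) / history.length;
  const variance =
    history.reduce((a, b) => a + (b - mean) ** 2, 0) / history.length;
  const sigma = Math.sqrt(variance);

  return score > mean + 3 * sigma;
}
```

Every human approval appends the transition's score to the history, which shifts the mean and σ and keeps the threshold aligned with what reviewers actually allow.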

Next steps

• Read The 2026 Guide to Securing LangGraph in Production for deeper architectural context.

• Explore the behavioral firewall and HITL console marketing pages.

• If you coordinate multiple agents or vendors, consider agent-to-agent escrow for chain-of-custody across the whole workflow.