Maperez1972/privaro-sdk-js

Privaro × AI Agents

🚀 Overview

Privaro enables safe agent execution by detecting and tokenizing sensitive data before it reaches the LLM.


🧩 Architecture

Agent → Privaro → LLM


🔥 Example Flow

  1. The agent receives data
  2. Privaro detects and tokenizes PII
  3. The LLM processes the protected text safely
  4. Tokens in the output are re-identified
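The flow above can be sketched as a simple tokenize → process → re-identify roundtrip. This is a conceptual illustration only, not Privaro's implementation — the in-memory token map, the regex, and the token format are assumptions made for the sketch:

```typescript
// Conceptual sketch of the flow — NOT the Privaro implementation.
const tokenMap = new Map<string, string>();
let counter = 0;

// Replace every match with an opaque token and remember the mapping.
function tokenize(text: string, pattern: RegExp, prefix: string): string {
  return text.replace(pattern, (match) => {
    counter += 1;
    const token = `[${prefix}-${String(counter).padStart(4, "0")}]`;
    tokenMap.set(token, match);
    return token;
  });
}

// Swap tokens in the LLM output back to the original values.
function reidentify(text: string): string {
  let out = text;
  for (const [token, original] of tokenMap) {
    out = out.split(token).join(original);
  }
  return out;
}

// 1–2: tokenize before the LLM call
const protectedPrompt = tokenize("Email juan@clinica.es", /\S+@\S+\.\S+/g, "EM");
// 3: the LLM only ever sees "Email [EM-0001]"
// 4: re-identify the model's output
const revealed = reidentify(`Reply sent to ${protectedPrompt}`);
```

In the real SDK, `client.protect(...)` and `run.reveal(...)` perform these two halves server-side under your pipeline's policy.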

🤖 Why agents need Privaro

Agents:

  • chain actions autonomously
  • access multiple data sources
  • increase the risk surface

Privaro:

  • controls the data flow
  • enforces compliance
  • adds auditability

🔗 Use with

  • ruflo
  • LangChain
  • n8n

Installation

npm install privaro
# or
pnpm add privaro
# or
yarn add privaro

Quick Start

Protect a prompt

import { PrivaroClient } from "privaro";

const client = new PrivaroClient({
  apiKey: "prvr_...",
  pipelineId: "your-pipeline-uuid",
});

const result = await client.protect(
  "El paciente Juan García, DNI 34521789X, email juan@clinica.es"
);

console.log(result.protected_prompt);
// "El paciente [NM-0001], [ID-0001], email [EM-0001]"

console.log(result.stats.risk_score);    // 0.87
console.log(result.stats.total_detected); // 3
console.log(result.gdpr_compliant);       // true

Detect without masking

const detections = await client.detect("IBAN ES91 2100 0418 4502 0005 1332");

console.log(detections.detections[0]);
// { type: "iban", severity: "critical", confidence: 0.99, detector: "regex" }

Conversation scoping

Pass a conversationId to ensure the same PII maps to the same token across all messages in a conversation:

const convId = "conv-uuid-from-your-db";

const msg1 = await client.protect("Juan García solicita información", convId);
const msg2 = await client.protect("El contrato de Juan García está listo", convId);

// [NM-0001] is the same token in both messages ✓

Agent API

Govern autonomous AI agents under the same privacy policy as human users.

Direct usage

import { AgentRun } from "privaro";

const run = new AgentRun({
  apiKey: "prvr_...",
  pipelineId: "your-pipeline-uuid",
});

try {
  await run.start({
    agentName: "legal-reviewer",
    framework: "custom",
  });

  // Step 1: protect input before sending to LLM
  const step = await run.protect([
    {
      role: "user",
      content: "Review contract for María García DNI 34521789X",
    },
    {
      role: "tool",
      content: "Document found: contrato_garcia_2024.pdf",
      step_type: "tool_output",
    },
  ]);

  // step.protected_messages are safe to send to any LLM
  const llmResponse = await callYourLLM(step.protected_messages);

  // Step 2: restore original values in the final response
  const final = await run.reveal(llmResponse);
  console.log(final.revealed_text); // María García appears in the response

  await run.end("completed");
} catch (error) {
  await run.end("failed");
  throw error;
}

LangChain.js integration

import { ChatOpenAI } from "@langchain/openai";
import { AgentExecutor, createReactAgent } from "langchain/agents";
import { PrivaroCallbackHandler } from "privaro";

const handler = new PrivaroCallbackHandler({
  apiKey: "prvr_...",
  pipelineId: "your-pipeline-uuid",
  agentName: "my-langchain-agent",
  framework: "langchain",
});

const llm = new ChatOpenAI({ callbacks: [handler] });

// All LLM calls and tool outputs are automatically protected
const agent = await createReactAgent({ llm, tools });
const executor = new AgentExecutor({ agent, tools, callbacks: [handler] });

const result = await executor.invoke({ input: "Review this contract..." });

// Reveal tokens in the final output
const revealed = await handler.agentRun.reveal(result.output);

Vercel AI SDK integration

import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";
import { AgentRun } from "privaro";

const run = new AgentRun({ apiKey: "prvr_...", pipelineId: "uuid" });
await run.start({ framework: "vercel-ai" });

const step = await run.protect(userMessage);
const protectedContent = step.protected_messages[0].content;

const { text } = await streamText({
  model: openai("gpt-4o"),
  prompt: protectedContent,
});

const final = await run.reveal(text);
await run.end();

API Reference

PrivaroClient

Method    Signature                                                         Description
protect   (prompt, conversationId?, options?) => Promise<ProtectResult>     Detect and tokenize PII
detect    (prompt) => Promise<DetectResult>                                 Detect PII without masking

AgentRun

Member    Signature                                                      Description
start     (options?) => Promise<AgentRunStartResult>                     Create agent run, get run_id
protect   (messages, stepIndex?, mode?) => Promise<AgentStepResult>      Protect one step
reveal    (text) => Promise<AgentRevealResult>                           Detokenize final output
end       (status?) => Promise<AgentRunEndResult>                        Close run, finalize counters
runId     string | null                                                  Current agent_run_id
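The start/protect/reveal/end lifecycle pairs naturally with a wrapper that guarantees a run is always closed, mirroring the try/catch pattern in the Direct usage example. A minimal sketch — the `RunLike` interface and `withAgentRun` helper are illustrative, not part of the SDK; only the `start`/`end` signatures come from the table above:

```typescript
// Illustrative helper — not part of the privaro SDK.
// Ensures end("completed") or end("failed") is always called.
interface RunLike {
  start(options?: { agentName?: string; framework?: string }): Promise<unknown>;
  end(status?: string): Promise<unknown>;
}

async function withAgentRun<T>(
  run: RunLike,
  options: { agentName?: string; framework?: string },
  fn: () => Promise<T>
): Promise<T> {
  await run.start(options);
  try {
    const result = await fn();
    await run.end("completed");
    return result;
  } catch (error) {
    await run.end("failed");
    throw error;
  }
}
```

With the real AgentRun, the body passed as `fn` would call `run.protect(...)` and `run.reveal(...)` exactly as in the Direct usage example.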

Error handling

import { PrivaroError } from "privaro";

try {
  const result = await client.protect("...");
} catch (error) {
  if (error instanceof PrivaroError) {
    console.error(error.message);  // "Privaro API error 401"
    console.error(error.status);   // 401
    console.error(error.body);     // { error: "invalid_api_key" }
  }
}
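Because the error object exposes a numeric `status`, transient failures (429 rate limits, 5xx) can be retried while auth errors like the 401 above fail fast. A minimal sketch — the `withRetry` helper and its backoff policy are illustrative assumptions, not part of the SDK; it only relies on the `status` field shown above:

```typescript
// Illustrative retry wrapper — not part of the privaro SDK.
// Retries calls that fail with a transient HTTP status (429 or 5xx).
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 250
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      const status = (error as { status?: number }).status;
      const transient = status === 429 || (status !== undefined && status >= 500);
      if (!transient || attempt === maxAttempts) throw error;
      // Exponential backoff: 250ms, 500ms, 1000ms, ...
      await new Promise((resolve) =>
        setTimeout(resolve, baseDelayMs * 2 ** (attempt - 1))
      );
    }
  }
  throw lastError;
}

// Usage: const result = await withRetry(() => client.protect("..."));
```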

TypeScript

Full TypeScript support with exported types:

import type {
  ProtectResult,
  Detection,
  AgentMessage,
  AgentStepResult,
  PrivaroConfig,
} from "privaro";

Requirements

  • Node.js 18+ (uses native fetch)
  • Or any modern browser / Deno / Bun / Edge runtime with fetch

License

MIT © 2026 iCommunity Labs · privaro.ai
