Platform API Client
> **Note:** New in 4.x. `TrustwiseClient` is the recommended way to interact
> with the Trustwise platform. It replaces `TrustwiseSDK` and provides a
> superset of its functionality.
`TrustwiseClient` provides an ergonomic Python interface to the full
Trustwise Platform API (v1alpha), including agents, guardrails, policies,
evaluations, risk classification, red-team generation, all v4 metrics,
events, and subscriptions.
> **Note:** Breaking change in 4.5. Agents are now project-scoped: all
> `client.agents` methods require a `project_id` as the first argument.
Quickstart
```python
import os

from trustwise.sdk.config import TrustwiseConfig
from trustwise.sdk.platform import TrustwiseClient

config = TrustwiseConfig(api_key=os.environ["TW_API_KEY"])
client = TrustwiseClient(config)

# List agents within a project
agents = client.agents.list("my-project-id", limit=5)

# Evaluate faithfulness
result = client.metrics.faithfulness(
    query="What is the capital of France?",
    response="The capital of France is Paris.",
    context=[{"chunk_text": "Paris is the capital.", "chunk_id": "1"}],
)
```
Raw Envelope Access
By default, methods return the parsed data directly. Pass `raw=True` to
get the full API envelope (with `success`, `metadata`, and `warnings`):
```python
# Parsed — returns the data model directly
agent = client.agents.get("project-id", "agent-id")

# Raw — returns the full API envelope
resp = client.agents.get("project-id", "agent-id", raw=True)
print(resp["success"], resp["metadata"])
```
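If you work with raw envelopes in several places, it can help to centralise the success/warning handling. A minimal sketch, assuming the envelope is a plain dict with the keys shown above plus a hypothetical `data` key for the payload (the actual payload key may differ in the real API):

```python
def unwrap(envelope: dict) -> dict:
    """Return the payload from a raw API envelope, surfacing warnings.

    Assumes the envelope shape shown above: `success`, `metadata`,
    `warnings`, and a (hypothetical) `data` key for the payload.
    """
    if not envelope.get("success"):
        raise RuntimeError(f"API call failed: {envelope.get('metadata')}")
    for warning in envelope.get("warnings") or []:
        print(f"warning: {warning}")
    return envelope["data"]

# Example with a stubbed envelope; with a live client, `resp` would be
# the result of a call made with raw=True.
resp = {
    "success": True,
    "metadata": {"request_id": "abc"},
    "warnings": ["rate limit at 80%"],
    "data": {"id": "agent-id", "name": "My Agent"},
}
agent = unwrap(resp)
```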
Available Resources
| Attribute | Description |
|---|---|
| `client.agents` | Project-scoped CRUD, query, risk-classify, add policies, gateway deploy, status |
|  | List AI Gateway models and guardrails |
| `client.guardrails` | CRUD, evaluate, list evaluations, providers, provider guardrails and regions |
| `client.policies` | CRUD, evaluate, evaluate_batch, recommend |
| `client.evaluations` | Project-scoped CRUD, classification, audit logs |
|  | List available evaluators (read-only) |
|  | CRUD for organizations, logo upload/manage |
|  | CRUD |
| `client.risk_engine` | Classify, scaffold, tags, validate, list versions |
| `client.generation` | Red-team prompt generation (12 endpoints) |
| `client.metrics` | All v4 metrics (20 endpoints) |
| `client.events` | List, get, dismiss event notifications |
| `client.subscriptions` | CRUD for event subscriptions (webhook, email, Slack, Teams) |
| `client.runtime_providers` | List runtime providers and auth schemas |
Agents
Agents are project-scoped: every method requires a `project_id`.
```python
project_id = "my-project-id"

# List
agents = client.agents.list(project_id, search="my-bot", limit=10)

# Create
agent = client.agents.create(
    project_id,
    name="My Agent",
    description="A helpful assistant",
    runtime_provider_config={
        "type": "http",
        "url": "https://my-agent.example.com/query",
    },
)

# Get, update, delete
agent = client.agents.get(project_id, agent["id"])
client.agents.update(project_id, agent["id"], description="Updated")
client.agents.delete(project_id, agent["id"])

# Query the agent
response = client.agents.query(project_id, agent["id"], prompt="Hello!")

# Risk classify
result = client.agents.risk_classify(project_id, agent["id"])

# Deploy to AI Gateway
deploy = client.agents.deploy_gateway(project_id, agent["id"], models=["gpt-4.1-nano"])

# Status history
history = client.agents.status_history(project_id, agent["id"])
```
Guardrails
```python
# List guardrails
guardrails = client.guardrails.list()

# Evaluate content against a guardrail
result = client.guardrails.evaluate(
    guardrail_id, input="user message", output="assistant reply"
)

# Browse providers
providers = client.guardrails.providers()

# List guardrails available from a provider (e.g. AWS Bedrock)
result = client.guardrails.provider_guardrails("aws_bedrock", region="us-east-1")

# List regions supported by a provider
regions = client.guardrails.provider_regions("aws_bedrock")
```
Policies
```python
# List policies
policies = client.policies.list()

# Evaluate content against a policy
result = client.policies.evaluate(policy_id, input="test input")

# Evaluate against multiple policies in one call
results = client.policies.evaluate_batch(
    policy_ids=["policy-1", "policy-2"],
    input={"text": "test input"},
)

# Get policy recommendations based on tags
recs = client.policies.recommend(tags=["safety", "pii"])
```
Evaluations
Evaluations are scoped to a project:
```python
# List evaluations in a project
evals = client.evaluations.list("my-project-id", limit=10)

# Get audit logs
logs = client.evaluations.audit_logs("my-project-id")
```
Risk Engine
```python
# Get the questionnaire scaffold
scaffold = client.risk_engine.scaffold("arc-v3")

# Classify risk
result = client.risk_engine.classify("arc-v3", answers={...})
```
Generation (Red-Team)
```python
# Generate red-team prompts
prompts = client.generation.red_prompts(
    system_prompt="You are a helpful assistant.",
    num_prompts=5,
)

# Generate adversarial tool inputs
tool_inputs = client.generation.red_tool_input(
    tool_name="search", tool_schema={...}
)
```
Events
Event notifications are created automatically when evaluations, guardrail triggers, or policy violations occur.
```python
# List recent events
events = client.events.list(severity="warning", limit=20)

# Get a specific event
event = client.events.get("event-id")

# Dismiss an event
client.events.dismiss("event-id")

# Check events subsystem health
client.events.health()
```
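Since `list` can return events of mixed severity, a small triage helper makes it easy to decide what needs attention and what can be dismissed in bulk. A sketch, assuming each event is a dict with `severity` and `id` fields (the actual event model's fields may differ):

```python
from collections import defaultdict

def group_by_severity(events: list[dict]) -> dict[str, list[dict]]:
    """Bucket event dicts by their severity field."""
    buckets: dict[str, list[dict]] = defaultdict(list)
    for event in events:
        buckets[event["severity"]].append(event)
    return dict(buckets)

# Stubbed events; with a live client these would come from
# client.events.list(), and each bucket could be dismissed via
# client.events.dismiss().
events = [
    {"id": "e1", "severity": "critical"},
    {"id": "e2", "severity": "warning"},
    {"id": "e3", "severity": "warning"},
]
buckets = group_by_severity(events)
```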
Subscriptions
Subscriptions forward event notifications to external destinations.
```python
# Create a webhook subscription
sub = client.subscriptions.create(
    name="My Webhook",
    event_type_filter=["evaluation.completed", "evaluation.failed"],
    severity_filter=["critical", "warning"],
    handler_type="webhook",
    handler_config={"url": "https://my-service.example.com/events"},
)

# List, update, delete
subs = client.subscriptions.list()
client.subscriptions.update(sub.id, enabled=False)
client.subscriptions.delete(sub.id)
```
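To reason about which events a subscription would forward, the two filters can be modelled locally. This sketch assumes both filters must pass for an event to be delivered and that an empty filter matches everything; the platform's actual matching semantics may differ:

```python
def matches(subscription: dict, event_type: str, severity: str) -> bool:
    """Return True if an event passes both subscription filters.

    Assumption: an empty or missing filter list matches everything,
    and both filters must pass.
    """
    type_filter = subscription.get("event_type_filter") or []
    severity_filter = subscription.get("severity_filter") or []
    type_ok = not type_filter or event_type in type_filter
    severity_ok = not severity_filter or severity in severity_filter
    return type_ok and severity_ok

# Filters from the webhook subscription example above
sub = {
    "event_type_filter": ["evaluation.completed", "evaluation.failed"],
    "severity_filter": ["critical", "warning"],
}
```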
Runtime Providers
Runtime providers describe the connector types available for agent deployment (HTTP, A2A, Google ADK, etc.).
```python
# List all runtime providers
providers = client.runtime_providers.list()

# Get a specific provider's config schema
provider = client.runtime_providers.get("http")

# List supported auth schemas
schemas = client.runtime_providers.auth_schemas()
```
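Because each runtime provider defines the connector `type` used in an agent's `runtime_provider_config`, a config can be sanity-checked against the provider list before calling `client.agents.create`. A sketch, where the provider id strings and the way ids are derived from `list()` are assumptions:

```python
def check_config(config: dict, provider_ids: list[str]) -> None:
    """Raise if a runtime_provider_config names an unknown connector type.

    `provider_ids` would be derived from client.runtime_providers.list();
    the exact id field and spellings here are assumptions.
    """
    connector_type = config.get("type")
    if connector_type not in provider_ids:
        raise ValueError(
            f"unknown runtime provider type: {connector_type!r}; "
            f"expected one of {provider_ids}"
        )

# Stubbed provider ids (assumed spellings for HTTP, A2A, Google ADK)
known = ["http", "a2a", "google_adk"]
check_config(
    {"type": "http", "url": "https://my-agent.example.com/query"}, known
)
```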
Metrics
All v4 metrics are available through `client.metrics`:
```python
# Faithfulness
result = client.metrics.faithfulness(
    query="What is Paris?",
    response="Paris is the capital of France.",
    context=[{"chunk_text": "Paris is the capital of France.", "chunk_id": "1"}],
)

# Toxicity
result = client.metrics.toxicity(text="some text to check")

# Batch evaluation
result = client.metrics.evaluate(
    evaluators=["faithfulness", "toxicity"],
    query="...", response="...", context=[...],
)
```
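Several metrics take the same `context` argument: a list of chunk dicts. A small helper, using only the `chunk_text` and `chunk_id` keys shown in the examples above, can build that shape from plain strings (the sequential string ids are a convenience, not a platform requirement):

```python
def build_context(chunks: list[str]) -> list[dict]:
    """Turn plain strings into the context chunk format used by v4 metrics."""
    return [
        {"chunk_text": text, "chunk_id": str(i)}
        for i, text in enumerate(chunks, start=1)
    ]

context = build_context(["Paris is the capital of France."])
```

The result can then be passed as the `context` argument of `client.metrics.faithfulness` or `client.metrics.evaluate`.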
See V4 Metrics for details on each metric’s parameters and response types.
Full API Reference
For complete method signatures and return types, see the Platform Client section in the API Reference.