`TraceContext` wraps a trace and its spans, providing typed methods for accessing messages, tool calls, grader results, and more. It's the main way to work with trace data in Manta.
## Creating a TraceContext
A `TraceContext` can be built from a `MantaService` directly, which is useful if your analysis needs to make additional queries.
## Trace properties
## Messages
`ctx.messages()` returns `MessageData` objects, which you can filter by role.

`MessageData` has:
| Property | Type | Description |
|---|---|---|
| role | str | "user", "assistant", "system", or "tool" |
| content | str | Text content |
| tool_calls | list[dict] | Function calls in this message |
| is_empty | bool | No content and no tool calls |
| has_tool_calls | bool | Contains tool calls |
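The table above can be mirrored with a small dataclass. This is a sketch, not Manta's implementation; in particular, deriving `is_empty` and `has_tool_calls` from the other fields is an assumption:

```python
from dataclasses import dataclass, field

@dataclass
class MessageData:
    # Fields documented in the table above. The derived properties below
    # are assumptions about how is_empty and has_tool_calls are computed.
    role: str                      # "user", "assistant", "system", or "tool"
    content: str = ""
    tool_calls: list[dict] = field(default_factory=list)

    @property
    def has_tool_calls(self) -> bool:
        return bool(self.tool_calls)

    @property
    def is_empty(self) -> bool:
        return not self.content and not self.tool_calls

msgs = [
    MessageData(role="user", content="What's the weather?"),
    MessageData(role="assistant", tool_calls=[{"name": "get_weather"}]),
]
# Filtering by role, as ctx.messages() supports:
assistant_msgs = [m for m in msgs if m.role == "assistant"]
```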
## Tool calls
`ctx.tools()` returns `ToolData` objects, which you can filter by name, requestor, or error status.

`ToolData` has:
| Property | Type | Description |
|---|---|---|
| name | str | Tool name |
| call_id | str | Unique call ID |
| arguments | str | Raw JSON arguments |
| arguments_dict | dict | Parsed arguments |
| result | str | Tool output |
| error | bool | Whether the call errored |
| requestor | str | "agent", "user", or "system" |
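A sketch of the same shape, using the documented relationship that `arguments_dict` is the parsed form of the raw JSON in `arguments` (the dataclass itself is hypothetical):

```python
import json
from dataclasses import dataclass

@dataclass
class ToolData:
    # Fields from the table above; defaults are assumptions.
    name: str
    call_id: str
    arguments: str = "{}"
    result: str = ""
    error: bool = False
    requestor: str = "agent"       # "agent", "user", or "system"

    @property
    def arguments_dict(self) -> dict:
        # arguments_dict is the parsed form of the raw JSON arguments.
        return json.loads(self.arguments)

calls = [
    ToolData(name="search", call_id="c1", arguments='{"query": "trace"}'),
    ToolData(name="search", call_id="c2", error=True),
]
# Filtering by name and error status, as ctx.tools() supports:
failed_searches = [t for t in calls if t.name == "search" and t.error]
```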
## Grader results
`ctx.grader_result()` returns the aggregate grading result with individual criteria populated. Each criterion has:
| Property | Type | Description |
|---|---|---|
| criterion_name | str | e.g. "correctness", "db_state" |
| passed | bool | Whether this criterion passed |
| score | float \| None | Criterion score |
| reasoning | str | Grader reasoning |
## LLM calls
`ctx.llm_calls()` returns raw LLM invocations.
## Conversation
`ctx.conversation()` returns the full conversation as a flat list of dicts, convenient for sending to an LLM. Tool results appear as entries of the form `{"role": "tool", "content": "input: {...}\noutput: ..."}`.
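A sketch of how a tool call might be flattened into that entry format; the helper function and sample data are hypothetical, only the `{"role": "tool", "content": "input: ...\noutput: ..."}` shape comes from the docs:

```python
# Hypothetical helper: render one tool call as the documented conversation
# entry, combining the call's input and output into a single content string.
def tool_entry(arguments: str, result: str) -> dict:
    return {"role": "tool", "content": f"input: {arguments}\noutput: {result}"}

conversation = [
    {"role": "user", "content": "Look up the trace count."},
    {"role": "assistant", "content": "Checking."},
    tool_entry('{"table": "traces"}', "42 rows"),
]
```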