TrialPath UX & Frontend TDD-Ready Implementation Guide
Generated from DeepWiki research on streamlit/streamlit and emcie-co/parlant, supplemented by official Parlant documentation (parlant.io).

Architecture Decisions:
- Parlant runs as an independent service (REST API mode); the frontend communicates via ParlantClient (httpx)
- Doctor packet export: JSON + Markdown (no PDF generation in the PoC)
- MedGemma: HF Inference Endpoint (cloud, no local GPU)
1. Architecture Overview
1.1 File Structure
app/
  app.py                     # Entrypoint: st.navigation, shared sidebar, Parlant client init
  pages/
    1_upload.py              # INGEST state: document upload + extraction trigger
    2_profile_review.py      # PRESCREEN state: PatientProfile review + edit
    3_trial_matching.py      # VALIDATE_TRIALS state: trial search + eligibility cards
    4_gap_analysis.py        # GAP_FOLLOWUP state: gap analysis + iterative refinement
    5_summary.py             # SUMMARY state: final report + doctor packet export
  components/
    file_uploader.py         # Multi-file PDF uploader component
    profile_card.py          # PatientProfile display/edit component
    trial_card.py            # Traffic-light eligibility card component
    gap_card.py              # Gap analysis action card component
    progress_tracker.py      # Journey state progress indicator
    chat_panel.py            # Parlant message panel (send/receive)
    search_process.py        # Search refinement step-by-step visualization
    disclaimer_banner.py     # Medical disclaimer banner (always visible)
  services/
    parlant_client.py        # Parlant REST API wrapper (sessions, events, agents)
    state_manager.py         # Session state orchestration
  tests/
    test_upload_page.py
    test_profile_review_page.py
    test_trial_matching_page.py
    test_gap_analysis_page.py
    test_summary_page.py
    test_components.py
    test_parlant_client.py
    test_state_manager.py
1.2 Module Dependency Graph
app.py
  -> pages/* (via st.navigation)
  -> services/parlant_client.py (Parlant REST API)
  -> services/state_manager.py (session state orchestration)
pages/*
  -> components/* (UI building blocks)
  -> services/parlant_client.py
  -> services/state_manager.py
components/*
  -> st.session_state (read/write)
services/parlant_client.py
  -> parlant-client SDK or httpx (REST calls to Parlant server)
services/state_manager.py
  -> st.session_state
  -> services/parlant_client.py
1.3 Key Dependencies
| Package | Purpose |
|---|---|
| streamlit>=1.40 | Frontend framework, multipage app, AppTest |
| parlant-client | Python SDK for Parlant REST API |
| httpx | Async HTTP client (fallback for Parlant) |
| pytest | Test runner |
2. Streamlit Framework Guide
2.1 Multipage App with st.navigation
TrialPath uses the modern st.navigation API (not legacy pages/ auto-discovery) for explicit page control tied to Journey states.
Pattern: Entrypoint with state-aware navigation
# app.py
import streamlit as st
from services.state_manager import get_current_journey_state

st.set_page_config(page_title="TrialPath", page_icon=":material/medical_services:", layout="wide")

# Define pages mapped to Journey states
pages = {
    "Patient Journey": [
        st.Page("pages/1_upload.py", title="Upload Documents", icon=":material/upload_file:"),
        st.Page("pages/2_profile_review.py", title="Review Profile", icon=":material/person:"),
        st.Page("pages/3_trial_matching.py", title="Trial Matching", icon=":material/search:"),
        st.Page("pages/4_gap_analysis.py", title="Gap Analysis", icon=":material/analytics:"),
        st.Page("pages/5_summary.py", title="Summary & Export", icon=":material/summarize:"),
    ]
}
current_page = st.navigation(pages)

# Shared sidebar: progress tracker
with st.sidebar:
    st.markdown("### Journey Progress")
    state = get_current_journey_state()
    # Render progress indicator based on current Parlant Journey state

current_page.run()
Key API details (from DeepWiki):
- st.navigation(pages, position="sidebar") returns the current StreamlitPage; you must call .run() on it.
- st.switch_page("pages/2_profile_review.py") for programmatic navigation (stops current page execution).
- st.page_link(page, label, icon) for clickable navigation links.
- Pages organized as a dict render as sections in the sidebar nav.
2.2 File Upload (st.file_uploader)
Pattern: Multi-file PDF upload with validation
# components/file_uploader.py
import streamlit as st
from typing import List

def render_file_uploader() -> List:
    """Render multi-file uploader for clinical documents."""
    uploaded_files = st.file_uploader(
        "Upload clinical documents (PDF)",
        type=["pdf", "png", "jpg", "jpeg"],
        accept_multiple_files=True,
        key="clinical_docs_uploader",
        help="Upload clinic letters, pathology reports, lab results",
    )
    if uploaded_files:
        st.success(f"{len(uploaded_files)} file(s) uploaded")
        for f in uploaded_files:
            st.caption(f"{f.name} ({f.size / 1024:.1f} KB)")
    return uploaded_files or []
Key API details (from DeepWiki):
- accept_multiple_files=True returns List[UploadedFile].
- UploadedFile extends io.BytesIO -- can be passed directly to PDF parsers.
- Default size limit: 200 MB per file (configurable via server.maxUploadSize in config.toml or the per-widget max_upload_size param).
- The type parameter is best-effort filtering, not a security guarantee.
- Files are held in memory after upload.
- Additive selection: clicking browse again adds files, does not replace.
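If larger scans must be accepted, the per-file limit can be raised globally; a minimal .streamlit/config.toml sketch (the 400 here is an arbitrary example value, not a project requirement):

```toml
# .streamlit/config.toml
[server]
maxUploadSize = 400  # per-file upload limit in MB (Streamlit default: 200)
```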
2.3 Session State Management
Pattern: Centralized state initialization
# services/state_manager.py
import streamlit as st

JOURNEY_STATES = ["INGEST", "PRESCREEN", "VALIDATE_TRIALS", "GAP_FOLLOWUP", "SUMMARY"]

def init_session_state():
    """Initialize all session state variables with defaults."""
    defaults = {
        "journey_state": "INGEST",
        "parlant_session_id": None,
        "parlant_agent_id": None,
        "patient_profile": None,     # PatientProfile dict
        "uploaded_files": [],
        "search_anchors": None,      # SearchAnchors dict
        "trial_candidates": [],      # List[TrialCandidate]
        "eligibility_ledger": [],    # List[EligibilityLedger]
        "last_event_offset": 0,      # For Parlant long-polling
    }
    for key, default_value in defaults.items():
        if key not in st.session_state:
            st.session_state[key] = default_value

def get_current_journey_state() -> str:
    return st.session_state.get("journey_state", "INGEST")

def advance_journey(target_state: str):
    """Advance Journey to target state with validation."""
    current_idx = JOURNEY_STATES.index(st.session_state.journey_state)
    target_idx = JOURNEY_STATES.index(target_state)
    if target_idx > current_idx:
        st.session_state.journey_state = target_state
Key API details (from DeepWiki):
- st.session_state is a SessionStateProxy wrapping a thread-safe SafeSessionState.
- Internal three-layer dict: _old_state (previous run), _new_session_state (user-set), _new_widget_state (widget values).
- Cannot modify widget-bound state after widget instantiation in the same run (raises StreamlitAPIException).
- A widget's key parameter maps to st.session_state[key] for read access.
- Values must be pickle-serializable.
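The forward-only guard in advance_journey can be unit-tested in test_state_manager.py without a running Streamlit app by extracting the ordering check into a pure function; a sketch under that assumption (next_journey_state is an illustrative helper name, not part of the codebase):

```python
JOURNEY_STATES = ["INGEST", "PRESCREEN", "VALIDATE_TRIALS", "GAP_FOLLOWUP", "SUMMARY"]

def next_journey_state(current: str, target: str) -> str:
    """Return the new journey state: advance only if target is strictly later."""
    if JOURNEY_STATES.index(target) > JOURNEY_STATES.index(current):
        return target
    return current

# Forward transitions are applied; backward and same-state requests are ignored.
assert next_journey_state("INGEST", "PRESCREEN") == "PRESCREEN"
assert next_journey_state("VALIDATE_TRIALS", "INGEST") == "VALIDATE_TRIALS"
assert next_journey_state("SUMMARY", "SUMMARY") == "SUMMARY"
```

Keeping the comparison in a pure function lets the Streamlit wrapper stay a thin one-liner over st.session_state.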
2.4 Real-Time Progress Feedback
Pattern: AI inference progress with st.status
# Usage in pages/1_upload.py
def run_extraction(uploaded_files):
    """Run MedGemma extraction with real-time status feedback."""
    with st.status("Extracting clinical data from documents...", expanded=True) as status:
        st.write("Reading uploaded documents...")
        # Step 1: Send files to MedGemma
        st.write("Running AI extraction (MedGemma 4B)...")
        # Step 2: Poll for results
        st.write("Building patient profile...")
        # Step 3: Parse results into PatientProfile
        status.update(label="Extraction complete!", state="complete")
Pattern: Streaming LLM output with st.write_stream
def stream_gap_analysis(generator):
    """Stream Gemini gap analysis output with typewriter effect."""
    st.write_stream(generator)
Pattern: Auto-refreshing fragment for Parlant events
@st.fragment(run_every=3)  # Poll every 3 seconds
def parlant_event_listener():
    """Fragment that polls Parlant for new events without full page rerun."""
    from services.parlant_client import poll_events
    new_events = poll_events(
        st.session_state.parlant_session_id,
        st.session_state.last_event_offset,
    )
    if new_events:
        for event in new_events:
            if event["kind"] == "message" and event["source"] == "ai_agent":
                st.chat_message("assistant").write(event["message"])
            elif event["kind"] == "status":
                st.caption(f"Agent status: {event['data']}")
        st.session_state.last_event_offset = new_events[-1]["offset"] + 1
Key API details (from DeepWiki):
- st.status(label, expanded, state) -- context manager, auto-completes. States: "running", "complete", "error".
- st.spinner(text, show_time=True) -- simple loading indicator.
- st.progress(value, text) -- 0-100 int or 0.0-1.0 float.
- st.toast(body, icon, duration) -- transient notification, top-right.
- st.write_stream(generator) -- typewriter effect for strings, st.write for other types. Supports OpenAI ChatCompletionChunk and LangChain AIMessageChunk.
- @st.fragment(run_every=N) -- partial rerun every N seconds, isolated from full app rerun.
- st.rerun(scope="fragment") -- rerun only the enclosing fragment.
2.5 Layout System (from DeepWiki streamlit/streamlit)
Layout primitives for TrialPath UI:
| Primitive | Purpose in TrialPath | Key Params |
|---|---|---|
| st.columns(spec) | Trial card grid, profile fields side-by-side | spec (int or list of ratios), gap, vertical_alignment |
| st.tabs(labels) | Switching between trial categories (Eligible/Borderline/Not Eligible) | Returns list of containers |
| st.expander(label) | Collapsible criterion detail, evidence citations | expanded (bool), icon |
| st.container(height, border) | Scrollable trial list, chat panel | height (int px), horizontal (bool) |
| st.empty() | Dynamic status updates, replacing content | Single-element, replaceable |
Layout composition pattern for trial cards:
# Trial matching page layout
tabs = st.tabs(["Eligible", "Borderline", "Not Eligible", "Unknown"])
with tabs[0]:  # Eligible trials
    for trial in eligible_trials:
        with st.expander(f"{trial['nct_id']} - {trial['title']}", expanded=False):
            cols = st.columns([0.7, 0.3])
            with cols[0]:
                st.markdown(f"**Phase**: {trial['phase']}")
                st.markdown(f"**Sponsor**: {trial['sponsor']}")
            with cols[1]:
                # Traffic light summary
                met = sum(1 for c in trial['criteria'] if c['status'] == 'MET')
                total = len(trial['criteria'])
                st.metric("Criteria Met", f"{met}/{total}")
            # Criterion-level detail
            for criterion in trial['criteria']:
                col1, col2 = st.columns([0.8, 0.2])
                with col1:
                    st.write(criterion['description'])
                with col2:
                    color_map = {"MET": "green", "NOT_MET": "red", "BORDERLINE": "orange", "UNKNOWN": "grey"}
                    st.markdown(f":{color_map[criterion['status']]}[{criterion['status']}]")
Responsive behavior:
- st.columns stacks vertically at viewport widths <= 640px.
- Use width="stretch" for elements to fill available space.
- Avoid nesting columns more than once.
- Scrolling containers: avoid heights > 500px for mobile.
2.6 Caching System (from DeepWiki streamlit/streamlit)
Two caching decorators:
| Decorator | Returns | Serialization | Use Case |
|---|---|---|---|
| @st.cache_data | Copy of cached value | Requires pickle | Data transformations, API responses, search results |
| @st.cache_resource | Shared instance (singleton) | No pickle needed | ParlantClient instance, HTTP clients, model objects |
TrialPath caching patterns:
import os
import streamlit as st
from services.parlant_client import ParlantClient

@st.cache_resource
def get_parlant_client() -> ParlantClient:
    """Singleton Parlant client shared across all sessions."""
    return ParlantClient(base_url=os.environ.get("PARLANT_URL", "http://localhost:8000"))

@st.cache_data(ttl=300)  # 5-minute TTL
def search_trials(query_params: dict) -> list:
    """Cache trial search results to avoid redundant MCP calls."""
    client = get_parlant_client()
    # ... perform search
    return results
Key details:
- Cache key = hash of (function source code + arguments).
- ttl (time-to-live): auto-expire entries. Use for API results that may change.
- max_entries: limit cache size.
- hash_funcs: custom hash for unhashable args.
- Prefix an arg with _ to exclude it from the hash (e.g., _client).
- @st.cache_resource objects are shared across ALL sessions/threads -- must be thread-safe.
- Do NOT call interactive widgets inside cached functions (triggers a warning).
- Cache invalidated on: argument change, source code change, TTL expiry, max_entries overflow, explicit .clear().
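The underscore convention can be pictured with a simplified stand-in for Streamlit's cache-key logic. This is a conceptual sketch only (real st.cache_data hashes the function's source code, not its name, and uses its own hashers); cache_key, and the use of func.__name__, are illustrative assumptions:

```python
import hashlib

def cache_key(func, kwargs: dict) -> str:
    """Sketch: hash the function identity plus all non-underscore kwargs,
    mirroring how args prefixed with _ are excluded from the cache key."""
    hashed = {k: v for k, v in kwargs.items() if not k.startswith("_")}
    payload = func.__name__ + repr(sorted(hashed.items()))
    return hashlib.sha256(payload.encode()).hexdigest()

def search_trials(query: str, _client=None):
    return []

# Same query, different client objects -> same key (the _client arg is ignored)
k1 = cache_key(search_trials, {"query": "NSCLC", "_client": object()})
k2 = cache_key(search_trials, {"query": "NSCLC", "_client": object()})
assert k1 == k2
# Different query -> different key
assert k1 != cache_key(search_trials, {"query": "EGFR"})
```

This is why an unpicklable ParlantClient can safely ride along as `_client` without breaking caching.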
2.7 Global Disclaimer Banner (PRD Section 9)
Every page must display a medical disclaimer. Implement as a shared component called from app.py before navigation.
Pattern: Global disclaimer in entrypoint
# app/app.py (add before st.navigation)
from components.disclaimer_banner import render_disclaimer
# Always render disclaimer at top of every page
render_disclaimer()
nav = st.navigation(pages)
nav.run()
Component: disclaimer_banner.py
# app/components/disclaimer_banner.py
import streamlit as st

DISCLAIMER_TEXT = (
    "This tool provides information for educational purposes only and does not "
    "constitute medical advice. Always consult your healthcare provider before "
    "making decisions about clinical trial participation."
)

def render_disclaimer():
    """Render medical disclaimer banner. Must appear on every page."""
    st.info(DISCLAIMER_TEXT, icon="ℹ️")
3. Parlant Frontend Integration Guide
3.1 Architecture: Asynchronous Event-Driven Model
Parlant uses an asynchronous, event-driven conversation model -- NOT traditional request-reply. Both customer and AI agent can post events to a session at any time.
Core concepts:
- Session = timeline of all events (messages, status updates, tool calls, custom events)
- Event = timestamped item with offset, kind, source, trace_id
- Long-polling = client polls for new events with min_offset and a wait_for_data timeout
3.2 REST API Endpoints
| Method | Path | Purpose |
|---|---|---|
| POST | /agents | Create agent |
| POST | /sessions | Create session (agent + customer) |
| GET | /sessions | List sessions (filter by agent/customer, paginated) |
| POST | /sessions/{id}/events | Send message/event |
| GET | /sessions/{id}/events | List/poll events (long-polling) |
| PATCH | /sessions/{id}/events/{event_id} | Update event metadata |
Create Event request schema (EventCreationParamsDTO):
- kind: "message" | "custom" | "status"
- source: "customer" | "human_agent" | "customer_ui"
- message: string (for message events)
- data: dict (for custom/status events)
- metadata: dict (optional)
List Events query params:
- min_offset: int -- only return events after this offset
- wait_for_data: int (seconds) -- long-poll timeout; returns 504 if no new events
- source, correlation_id, trace_id, kinds: optional filters
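Because a quiet session answers the long-poll with 504 rather than an empty list, the client loop should treat that status as "no news yet" and simply re-poll. A transport-agnostic sketch (poll_until_events and the injected fetch callable are illustrative, not Parlant SDK APIs):

```python
from typing import Callable

def poll_until_events(fetch: Callable, min_offset: int, max_attempts: int = 3) -> list:
    """Re-poll while the long-poll times out with 504; return events otherwise.

    fetch(min_offset) is an injected transport function returning
    (http_status, events) -- a stand-in for the real HTTP GET.
    """
    for _ in range(max_attempts):
        status, events = fetch(min_offset)
        if status == 504:   # no new events within the wait_for_data window
            continue        # keep waiting
        return events
    return []

# Stub transport: times out twice, then delivers one agent message.
responses = iter([(504, []), (504, []), (200, [{"offset": 7, "kind": "message"}])])
events = poll_until_events(lambda off: next(responses), min_offset=0)
assert events == [{"offset": 7, "kind": "message"}]
```

Injecting the transport keeps this loop trivially testable in test_parlant_client.py without a live server.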
3.3 Parlant Client Service
# services/parlant_client.py
import httpx
from typing import Optional

PARLANT_BASE_URL = "http://localhost:8000"

class ParlantClient:
    """Synchronous wrapper around Parlant REST API for Streamlit."""

    def __init__(self, base_url: str = PARLANT_BASE_URL):
        self.base_url = base_url
        self.http = httpx.Client(base_url=base_url, timeout=65.0)  # > long-poll timeout

    def create_agent(self, name: str, description: str = "") -> dict:
        resp = self.http.post("/agents", json={"name": name, "description": description})
        resp.raise_for_status()
        return resp.json()

    def create_session(self, agent_id: str, customer_id: Optional[str] = None) -> dict:
        payload = {"agent_id": agent_id}
        if customer_id:
            payload["customer_id"] = customer_id
        resp = self.http.post("/sessions", json=payload)
        resp.raise_for_status()
        return resp.json()

    def send_message(self, session_id: str, message: str) -> dict:
        resp = self.http.post(
            f"/sessions/{session_id}/events",
            json={"kind": "message", "source": "customer", "message": message},
        )
        resp.raise_for_status()
        return resp.json()

    def send_custom_event(self, session_id: str, event_type: str, data: dict) -> dict:
        """Send custom event (e.g., journey state change, file upload notification)."""
        resp = self.http.post(
            f"/sessions/{session_id}/events",
            json={"kind": "custom", "source": "customer_ui", "data": {"type": event_type, **data}},
        )
        resp.raise_for_status()
        return resp.json()

    def poll_events(self, session_id: str, min_offset: int = 0, wait_seconds: int = 60) -> list:
        resp = self.http.get(
            f"/sessions/{session_id}/events",
            params={"min_offset": min_offset, "wait_for_data": wait_seconds},
        )
        resp.raise_for_status()
        return resp.json()
3.4 Event Types Reference
| Kind | Source(s) | Description |
|---|---|---|
| message | customer, ai_agent | Text message from participant |
| status | ai_agent | Agent state: acknowledged, processing, typing, ready, error, cancelled |
| tool | ai_agent | Tool call result (MedGemma, MCP) |
| custom | customer_ui, system | App-defined (journey state, uploads) |
3.5 Journey State Synchronization
Map Parlant events to TrialPath Journey states:
# services/state_manager.py (continued)
JOURNEY_CUSTOM_EVENTS = {
    "extraction_complete": "PRESCREEN",
    "profile_confirmed": "VALIDATE_TRIALS",
    "trials_evaluated": "GAP_FOLLOWUP",
    "gaps_resolved": "SUMMARY",
}

def handle_parlant_event(event: dict):
    """Process incoming Parlant event and update Journey state if needed."""
    if event["kind"] == "custom" and event.get("data", {}).get("type") in JOURNEY_CUSTOM_EVENTS:
        new_state = JOURNEY_CUSTOM_EVENTS[event["data"]["type"]]
        advance_journey(new_state)
    elif event["kind"] == "status" and event.get("data") == "error":
        st.session_state["last_error"] = event.get("message", "Unknown error")
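The event-to-state mapping is easy to cover in test_state_manager.py without a live Parlant server by stubbing event dicts. A self-contained sketch (target_state_for is an illustrative helper, with the mapping inlined):

```python
JOURNEY_CUSTOM_EVENTS = {
    "extraction_complete": "PRESCREEN",
    "profile_confirmed": "VALIDATE_TRIALS",
    "trials_evaluated": "GAP_FOLLOWUP",
    "gaps_resolved": "SUMMARY",
}

def target_state_for(event: dict):
    """Pure helper: which journey state (if any) a custom event maps to."""
    if event.get("kind") == "custom":
        return JOURNEY_CUSTOM_EVENTS.get(event.get("data", {}).get("type"))
    return None

# Custom events with a known type advance the journey; everything else is a no-op.
assert target_state_for({"kind": "custom", "data": {"type": "profile_confirmed"}}) == "VALIDATE_TRIALS"
assert target_state_for({"kind": "message", "data": {}}) is None
assert target_state_for({"kind": "custom", "data": {"type": "unrelated"}}) is None
```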
3.6 Parlant Journey System (from DeepWiki emcie-co/parlant)
Parlant's Journey System defines structured multi-step interaction flows. This is the core mechanism for implementing TrialPath's 5-state patient workflow.
Journey state types:
- Chat State -- agent converses with the customer, guided by the state's action. Can stay for multiple turns.
- Tool State -- agent calls an external tool; the result is loaded into context. Must be followed by a chat state.
- Fork State -- agent evaluates conditions and branches the flow.
TrialPath Journey definition pattern:
import parlant as p

async def create_trialpath_journey(agent: p.Agent):
    journey = await agent.create_journey(
        title="Clinical Trial Matching",
        conditions=["The patient wants to find matching clinical trials"],
        description="Guide NSCLC patients through clinical trial matching: "
                    "document upload, profile extraction, trial search, "
                    "eligibility analysis, and gap identification.",
    )
    # INGEST: Upload and extract
    t1 = await journey.initial_state.transition_to(
        chat_state="Ask patient to upload clinical documents (clinic letters, pathology reports, lab results)"
    )
    # Tool state: Run MedGemma extraction
    t2a = await t1.target.transition_to(
        condition="Documents uploaded",
        tool_state=extract_patient_profile,  # MedGemma tool
    )
    # PRESCREEN: Review extracted profile
    t2b = await t2a.target.transition_to(
        chat_state="Present extracted PatientProfile for review and confirmation"
    )
    # Tool state: Search trials via MCP
    t3a = await t2b.target.transition_to(
        condition="Profile confirmed",
        tool_state=search_clinical_trials,  # ClinicalTrials MCP tool
    )
    # VALIDATE_TRIALS: Show results with eligibility
    t3b = await t3a.target.transition_to(
        chat_state="Present trial matches with criterion-level eligibility assessment"
    )
    # GAP_FOLLOWUP: Identify gaps and suggest actions
    t4 = await t3b.target.transition_to(
        condition="Trials evaluated",
        chat_state="Analyze eligibility gaps and suggest next steps "
                   "(additional tests, document uploads)",
    )
    # Loop back if new documents uploaded
    await t4.target.transition_to(
        condition="New documents uploaded for gap resolution",
        state=t2a.target,  # Back to extraction
    )
    # SUMMARY: Final report
    t5 = await t4.target.transition_to(
        condition="Gaps resolved or patient ready for summary",
        chat_state="Generate summary report and doctor packet",
    )
Key details (from DeepWiki):
- Journeys are activated by conditions (observational guidelines matched by GuidelineMatcher).
- Transitions can be direct (always taken) or conditional (only if the condition is met).
- Can transition to existing states (for loops, e.g., the gap resolution cycle).
- END_JOURNEY is a special terminal state.
- Journeys dynamically manage LLM context to include only relevant guidelines at each state.
3.7 Parlant Guideline System (from DeepWiki emcie-co/parlant)
Guidelines define behavioral rules for agents. Two types:
| Type | Has Action? | Purpose |
|---|---|---|
| Observational | No | Track conditions, activate journeys |
| Actionable | Yes | Drive agent behavior when condition is met |
Journey-scoped vs Global guidelines:
- Global guidelines apply across all conversations.
- Journey-scoped guidelines are only active when their parent journey is active. Created via journey.create_guideline().
TrialPath guideline examples:
# Global guideline: always cite evidence
await agent.create_guideline(
    condition="the agent makes a clinical assessment",
    action="cite the source document, page number, and relevant text span",
)

# Journey-scoped: only during VALIDATE_TRIALS
await journey.create_guideline(
    condition="a criterion cannot be evaluated due to missing data",
    action="mark it as UNKNOWN and add to the gap list with the specific data needed",
)
Matching pipeline (from DeepWiki): GuidelineMatcher uses LLM-based evaluation with multiple batch types (observational, actionable, low-criticality, disambiguation, journey-node-selection) to determine which guidelines apply to the current conversation context.
3.8 Parlant Tool Integration (from DeepWiki emcie-co/parlant)
Parlant supports 4 tool service types: local, sdk/plugin, openapi, and mcp.
TrialPath will use:
- SDK/Plugin tools for MedGemma extraction
- MCP tools for ClinicalTrials.gov search
Tool definition with @p.tool decorator:
@p.tool
async def extract_patient_profile(
    context: p.ToolContext,
    document_urls: list[str],
) -> p.ToolResult:
    """Extract patient clinical profile from uploaded documents using MedGemma 4B.

    Args:
        document_urls: List of URLs/paths to uploaded clinical documents.
    """
    # Call MedGemma endpoint
    profile = await call_medgemma(document_urls)
    return p.ToolResult(
        data=profile,
        metadata={"source": "MedGemma 4B", "doc_count": len(document_urls)},
    )
Tool execution flow (from DeepWiki):
- GuidelineMatch identifies tools associated with matched guidelines
- ToolCaller resolves tool parameters from ServiceRegistry
- ToolCallBatcher groups tools for efficient LLM inference
- LLM infers tool arguments from conversation context
- ToolService.call_tool() executes and returns ToolResult
- ToolEventGenerator emits ToolEvent to session
ToolResult structure:
- data -- visible to the agent for further processing
- metadata -- frontend-only info (not used by the agent)
- control -- processing options: mode (auto/manual), lifespan (response/session)
3.9 Parlant NLP Provider: Gemini (from DeepWiki emcie-co/parlant)
Parlant natively supports Google Gemini, which aligns with TrialPath's planned use of Gemini 3 Pro.
Configuration:
# Install with Gemini support
pip install parlant[gemini]
# Set API key
export GEMINI_API_KEY="your-api-key"
# Start server with Gemini backend
parlant-server --gemini
Supported providers (from DeepWiki): OpenAI, Anthropic, Azure, AWS Bedrock, Google Gemini, Vertex AI, Together.ai, LiteLLM, Cerebras, DeepSeek, Ollama, Mistral, and more.
Vertex AI alternative -- for production, can use pip install parlant[vertex] with VERTEX_AI_MODEL=gemini-2.5-pro.
3.10 AlphaEngine Processing Pipeline (from DeepWiki emcie-co/parlant)
This is the complete flow from customer message to agent response. Critical for understanding latency and UI feedback points.
Step-by-step pipeline:
1. EVENT CREATION
   Customer sends message -> POST /sessions/{id}/events
   -> SessionModule creates event, dispatches background processing

2. CONTEXT LOADING
   AlphaEngine.process() loads:
   - Session history (interaction events)
   - Agent identity + description
   - Customer info
   - Context variables (per-customer/per-tag/global)
   -> Assembled into EngineContext

3. PREPARATION LOOP (while not prepared_to_respond)
   a. GUIDELINE MATCHING
      GuidelineMatcher evaluates guidelines against conversation context
      - Observational guidelines (track conditions)
      - Actionable guidelines (drive behavior)
      - Journey-node guidelines (determine next journey step)
      Uses LLM to score relevance -> GuidelineMatch objects
   b. TOOL CALLING (if guidelines require tools)
      ToolCaller resolves + executes tools
      - ToolCallBatcher groups for efficient LLM inference
      - LLM infers arguments from context
      - ToolService.call_tool() executes
      - ToolEventGenerator emits ToolEvent to session
      -> Tool results may trigger re-evaluation of guidelines

4. PREAMBLE GENERATION (optional)
   Quick acknowledgment for perceived responsiveness
   -> Emitted as early status event ("acknowledged" / "processing")

5. MESSAGE COMPOSITION
   Based on agent's CompositionMode:
   - FLUID: MessageGenerator builds prompt, generates via SchematicGenerator
     -> Revision loop with temperature-based retries
   - CANNED_STRICT: Only uses predefined templates
   - CANNED_COMPOSITED: Mimics style of canned responses
   - CANNED_FLUID: Prefers canned but falls back to fluid

6. EVENT EMISSION
   Generated message -> emitted as message event
   "ready" status event signals completion
UI feedback mapping for TrialPath:
| Pipeline Step | Parlant Status Event | UI Feedback |
|---|---|---|
| Event created | acknowledged | "Message received" indicator |
| Context loading | processing | st.status("Analyzing your request...") |
| Tool calling | tool events | st.status("Searching ClinicalTrials.gov...") |
| Message generation | typing | Typing indicator animation |
| Complete | ready | Display agent response |
| Error | error | st.error() with retry option |
3.11 Context Variables (from DeepWiki emcie-co/parlant)
Context variables store dynamic data that agents can reference during conversations. Essential for TrialPath to maintain patient profile state across the journey.
Variable scoping (priority order):
- Customer-specific values (per patient)
- Tag-specific values (e.g., per disease type)
- Global defaults
TrialPath context variable examples:
# Create context variables for patient data
patient_profile_var = await client.context_variables.create(
    name="patient_profile",
    description="Current patient clinical profile extracted from documents",
)

# Set per-customer value
await client.context_variables.set_value(
    variable_id=patient_profile_var.id,
    key=customer_id,  # Per-patient
    value=patient_profile_dict,
)

# Auto-refresh variable via tool (with freshness rules)
trial_results_var = await client.context_variables.create(
    name="matching_trials",
    description="Current list of matching clinical trials",
    tool_id=search_trials_tool_id,
    freshness_rules="*/10 * * * *",  # Refresh every 10 minutes
)
Key details:
- Values are JSON-serializable.
- Included in PromptBuilder's add_context_variables section for LLM context.
- Can be auto-refreshed via associated tools + cron-based freshness_rules.
- ContextVariableStore.GLOBAL_KEY for default values.
3.12 MCP Tool Service Details (from DeepWiki emcie-co/parlant)
Parlant has native MCP support via MCPToolClient. This is how TrialPath connects to ClinicalTrials.gov.
Registration:
# Via REST API
PUT /services/clinicaltrials_mcp
{
  "kind": "mcp",
  "mcp": {
    "url": "http://localhost:8080"
  }
}

# Via CLI
parlant service create \
  --name clinicaltrials_mcp \
  --kind mcp \
  --url http://localhost:8080
MCPToolClient internals:
- Connects via StreamableHttpTransport to the MCP server's /mcp endpoint.
- list_tools() discovers available tools from the MCP server.
- mcp_tool_to_parlant_tool() converts MCP tool schemas to Parlant's Tool objects.
- Type mapping: string, integer, number, boolean, date, datetime, uuid, array, enum.
- call_tool() invokes the MCP tool, extracts text content from the result, wraps it in a ToolResult.
- Default MCP port: 8181.
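The type mapping can be pictured as a lookup from MCP schema type strings to Python types. This is an illustrative sketch of the idea, not Parlant's actual mcp_tool_to_parlant_tool code (MCP_TO_PY and coerce are made-up names):

```python
import datetime
import uuid

# Illustrative lookup from MCP schema type names to Python types
MCP_TO_PY = {
    "string": str,
    "integer": int,
    "number": float,
    "boolean": bool,
    "date": datetime.date,
    "datetime": datetime.datetime,
    "uuid": uuid.UUID,
    "array": list,
}

def coerce(schema_type: str, raw: str):
    """Coerce a raw string value according to its declared MCP schema type."""
    return MCP_TO_PY[schema_type](raw)

assert coerce("integer", "42") == 42
assert coerce("number", "3.5") == 3.5
assert coerce("uuid", "12345678-1234-5678-1234-567812345678") == uuid.UUID("12345678-1234-5678-1234-567812345678")
```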
Integration with Guideline System:
# Associate MCP tool with a guideline
search_guideline = await agent.create_guideline(
condition="the patient profile has been confirmed and trial search is needed",
action="search ClinicalTrials.gov for matching NSCLC trials using the patient's biomarkers and staging",
tools=[clinicaltrials_search_tool], # MCP tool reference
)
3.13 Prompt Construction (from DeepWiki emcie-co/parlant)
Understanding how Parlant builds LLM prompts is essential for designing effective guidelines and journey states.
PromptBuilder sections (in order):
| Section | Content | TrialPath Relevance |
|---|---|---|
| General Instructions | Task description, role | Define clinical trial matching context |
| Agent Identity | Agent name + description | "patient_trial_copilot" identity |
| Customer Identity | Customer name, session ID | Patient identifier |
| Context Variables | Dynamic data (JSON) | PatientProfile, SearchAnchors, prior results |
| Glossary | Domain terms | NSCLC, ECOG, biomarker definitions |
| Capabilities | What agent can do | Tool descriptions (MedGemma, MCP) |
| Interaction History | Conversation events | Full chat history with tool results |
| Guidelines | Matched condition/action pairs | Active behavioral rules for current state |
| Journey State | Current position in journey | Which step in INGEST->SUMMARY flow |
| Few-shot Examples | Desired output format | Example eligibility assessments |
| Staged Tool Events | Pending/completed tool results | MedGemma extraction results, MCP search results |
Context window management:
- GuidelineMatcher selectively loads only relevant guidelines and journeys.
- Journey-scoped guidelines only included when journey is active.
- Prevents context bloat by pruning low-probability journey guidelines.
3.14 Parlant Testing Framework (from DeepWiki emcie-co/parlant)
Parlant provides a dedicated testing framework with NLP-based assertions (LLM-as-a-Judge).
Key test utilities:
| Class | Purpose |
|---|---|
| Suite | Test runner, manages server connection and scenarios |
| Session | Test session context manager |
| Response | Agent response with .should() assertion |
| InteractionBuilder | Build conversation history for preloading |
| CustomerMessage / AgentMessage | Step types for conversation construction |
TrialPath test examples:
from parlant.testing import Suite, InteractionBuilder
from parlant.testing.steps import AgentMessage, CustomerMessage

suite = Suite(
    server_url="http://localhost:8800",
    agent_id="patient_trial_copilot",
)

@suite.scenario
async def test_extraction_journey_step():
    """Test that agent asks for documents in INGEST state."""
    async with suite.session() as session:
        response = await session.send("I want to find clinical trials for my lung cancer")
        await response.should("ask the patient to upload clinical documents")
        await response.should("mention accepted file types like PDF or images")

@suite.scenario
async def test_gap_analysis_identifies_missing_data():
    """Test gap analysis identifies unknown biomarkers."""
    async with suite.session() as session:
        # Preload history simulating completed extraction + matching
        history = (
            InteractionBuilder()
            .step(CustomerMessage("Here are my medical documents"))
            .step(AgentMessage("I've extracted your profile. You have NSCLC Stage IIIB, "
                               "EGFR positive, but KRAS status is unknown."))
            .step(CustomerMessage("What trials am I eligible for?"))
            .step(AgentMessage("I found 5 trials. For NCT04000005, KRAS status is required "
                               "but missing from your records."))
            .build()
        )
        await session.add_events(history)
        response = await session.send("What should I do about the missing KRAS test?")
        await response.should("suggest getting a KRAS mutation test")
        await response.should("explain which trials require KRAS status")

@suite.scenario
async def test_multi_turn_journey_flow():
    """Test complete journey flow with unfold()."""
    async with suite.session() as session:
        await session.unfold([
            CustomerMessage("I have NSCLC and want to find trials"),
            AgentMessage(
                text="I'd be happy to help. Please upload your clinical documents.",
                should="ask for document upload",
            ),
            CustomerMessage("I've uploaded my pathology report"),
            AgentMessage(
                text="I've extracted your profile...",
                should=["confirm profile extraction", "present key findings"],
            ),
            CustomerMessage("That looks correct, please search for trials"),
            AgentMessage(
                text="I found 8 matching trials...",
                should=["present trial matches", "include eligibility assessment"],
            ),
        ])
Running tests:
parlant-test tests/ # Run all test files
parlant-test tests/ -k gap # Filter by pattern
parlant-test tests/ -n 4 # Run in parallel
3.15 Canned Response System (from DeepWiki emcie-co/parlant)
Canned responses provide consistent, template-based messaging. Useful for TrialPath's structured outputs.
CompositionMode options:
| Mode | Behavior | TrialPath Use |
|---|---|---|
| FLUID | Free-form LLM generation | General conversation, gap explanations |
| CANNED_STRICT | Only predefined templates | Disclaimer text, safety warnings |
| CANNED_COMPOSITED | Mimics canned style | Eligibility summaries |
| CANNED_FLUID | Prefers canned, falls back to fluid | Standard responses with flexibility |
Journey-state-scoped canned responses:
# Canned response only active during SUMMARY state
summary_template = await journey.create_canned_response(
    value="Based on your clinical profile, you match {{match_count}} trials. "
          "{{eligible_count}} are likely eligible, {{borderline_count}} are borderline, "
          "and {{gap_count}} have unresolved gaps. "
          "See the attached doctor packet for full details.",
    fields=["match_count", "eligible_count", "borderline_count", "gap_count"],
)
Template features:
- Jinja2 syntax for dynamic fields (e.g., {{std.customer.name}}).
- Fields auto-populated from tool results and context variables.
- Relevance-scored matching via LLM when multiple templates exist.
- signals and metadata for additional template categorization.
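To make the field substitution concrete, here is a minimal sketch of how the {{field}} placeholders in the summary template above get filled in. Parlant performs this server-side with Jinja2; this stand-in renderer only illustrates the substitution step and handles simple names (not dotted paths like std.customer.name).

```python
import re

# Minimal illustration only: Parlant renders canned responses server-side
# with Jinja2. This sketch mimics the {{field}} substitution so the
# template shape is easy to reason about.
SUMMARY_TEMPLATE = (
    "Based on your clinical profile, you match {{match_count}} trials. "
    "{{eligible_count}} are likely eligible, {{borderline_count}} are borderline, "
    "and {{gap_count}} have unresolved gaps."
)

def render_canned(template: str, fields: dict) -> str:
    """Replace each {{name}} placeholder with its field value."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(fields[m.group(1)]), template)

rendered = render_canned(
    SUMMARY_TEMPLATE,
    {"match_count": 8, "eligible_count": 3, "borderline_count": 2, "gap_count": 1},
)
# -> "Based on your clinical profile, you match 8 trials. 3 are likely
#     eligible, 2 are borderline, and 1 have unresolved gaps."
```

In the real system the field values come from tool results and context variables, as noted above.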
4. UI Component Design per Journey State
4.1 INGEST State -- Upload Page
+------------------------------------------+
| [i] This tool is for information only... |
| [Sidebar: Journey Progress] |
| |
| Upload Clinical Documents |
| +---------------------------------+ |
| | Drag & drop or browse | |
| | Accepted: PDF, PNG, JPG | |
| +---------------------------------+ |
| |
| Uploaded Files: |
| - clinic_letter.pdf (245 KB) [x] |
| - pathology_report.pdf (1.2 MB) [x] |
| - lab_results.png (890 KB) [x] |
| |
| [Start Extraction] |
| |
| st.status: "Extracting clinical data..." |
| - Reading documents... |
| - Running MedGemma 4B... |
| - Building patient profile... |
+------------------------------------------+
Key components: file_uploader, progress_tracker
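The upload rules above (accepted types, size limit, disabled extraction button) are easiest to test at the service layer. A hypothetical validation helper for components/file_uploader.py might look like this; the function names are assumptions, and the 200 MB cap reflects Streamlit's default maxUploadSize.

```python
# Hypothetical helpers for components/file_uploader.py. Names are
# assumptions; the 200 MB default matches Streamlit's maxUploadSize.
ACCEPTED_TYPES = {"pdf", "png", "jpg", "jpeg"}
MAX_UPLOAD_BYTES = 200 * 1024 * 1024

def validate_upload(filename: str, size_bytes: int) -> tuple[bool, str]:
    """Return (ok, reason), mirroring the rejection cases in table 5.1."""
    ext = filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    if ext not in ACCEPTED_TYPES:
        return False, f"Unsupported file type: .{ext}. Accepted: PDF, PNG, JPG"
    if size_bytes > MAX_UPLOAD_BYTES:
        return False, "File exceeds the upload size limit"
    return True, "ok"

def extraction_enabled(uploaded_files: list) -> bool:
    """'Start Extraction' is enabled only when at least one file is present."""
    return len(uploaded_files) > 0
```

Keeping this logic out of the page script lets the boundary cases (invalid type, oversized file, empty uploader) be unit-tested without AppTest.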
4.2 PRESCREEN State -- Profile Review Page
+------------------------------------------+
| [i] This tool is for information only... |
| [Sidebar: Journey Progress] |
| |
| Patient Clinical Profile |
| +--------------------------------------+ |
| | Demographics: Female, 62, ECOG 1 | |
| | Diagnosis: NSCLC Stage IIIB | |
| | Histology: Adenocarcinoma | |
| | Biomarkers: | |
| | EGFR: Positive (exon 19 del) | |
| | ALK: Negative | |
| | PD-L1: 45% | |
| | Prior Treatment: | |
| | Carboplatin+Pemetrexed (2 cycles) | |
| | Unknowns: | |
| | [!] KRAS status not found | |
| | [!] Brain MRI not available | |
| +--------------------------------------+ |
| |
| [Edit Profile] [Confirm & Search Trials] |
| |
| Searching ClinicalTrials.gov... |
| Step 1: Initial query -> 47 results |
| Refining: adding Phase 3 filter... |
| Step 2: Refined query -> 12 results |
| Shortlisting top candidates... |
+------------------------------------------+
Key components: profile_card, search_process, progress_tracker
4.3 VALIDATE_TRIALS State -- Trial Matching Page
+------------------------------------------+
| [i] This tool is for information only... |
| [Sidebar: Journey Progress] |
| |
| Matching Trials (8 found) |
| |
| Search Process: |
| Step 1: NSCLC + Stage IV + DE -> 47 |
| -> Refined: added Phase 3 |
| Step 2: + Phase 3 -> 12 results |
| -> Shortlisted: reading summaries |
| Step 3: 5 trials selected for review |
| [Show/Hide Search Details] |
| |
| +--------------------------------------+ |
| | NCT04000001 - KEYNOTE-999 | |
| | Pembrolizumab + Chemo for NSCLC | |
| | Overall: LIKELY ELIGIBLE | |
| | | |
| | Criteria: | |
| | [G] NSCLC confirmed | |
| | [G] ECOG 0-1 | |
| | [Y] PD-L1 >= 50% (yours: 45%) | |
| | [R] No prior immunotherapy | |
| | [?] Brain mets (unknown) | |
| +--------------------------------------+ |
| | NCT04000002 - ... | |
| +--------------------------------------+ |
| |
| [G]=Met [Y]=Borderline [R]=Not Met |
| [?]=Unknown/Needs Info |
+------------------------------------------+
Key components: trial_card (traffic-light display), search_process, progress_tracker
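A sketch of the per-criterion rendering inside trial_card: the symbol mapping follows the legend in the mockup above, while the optional "detail" field (for showing borderline values like "yours: 45%") is an assumption of this sketch.

```python
# Hypothetical criterion renderer for components/trial_card.py.
# STATUS_SYMBOL matches the mockup legend; "detail" is an assumed field
# for surfacing the patient's actual value on borderline criteria.
STATUS_SYMBOL = {"MET": "[G]", "BORDERLINE": "[Y]", "NOT_MET": "[R]", "UNKNOWN": "[?]"}

def render_criterion(c: dict) -> str:
    """Format one eligibility criterion as a traffic-light line."""
    line = f"{STATUS_SYMBOL[c['status']]} {c['criterion']}"
    if c.get("detail"):
        line += f" ({c['detail']})"
    return line

line = render_criterion(
    {"criterion": "PD-L1 >= 50%", "status": "BORDERLINE", "detail": "yours: 45%"}
)
# -> "[Y] PD-L1 >= 50% (yours: 45%)"
```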
4.4 GAP_FOLLOWUP State -- Gap Analysis Page
+------------------------------------------+
| [i] This tool is for information only... |
| [Sidebar: Journey Progress] |
| |
| Gap Analysis & Next Steps |
| |
| +--------------------------------------+ |
| | GAP: Brain MRI results needed | |
| | Impact: Would resolve [?] criteria | |
| | for NCT04000001, NCT04000003 | |
| | Action: Upload brain MRI report | |
| | [Upload Document] | |
| +--------------------------------------+ |
| | GAP: KRAS mutation status | |
| | Impact: Required for NCT04000005 | |
| | Action: Request test from oncologist | |
| +--------------------------------------+ |
| |
| [Re-run Matching with New Data] |
| [Proceed to Summary] |
+------------------------------------------+
Key components: gap_card, file_uploader (for additional docs), progress_tracker
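The cross-referencing shown in the gap cards (one gap affecting multiple trials) can be derived directly from the trial criteria. A hypothetical builder for the gap ledger, using the trial_candidates shape that also appears in the AppTest examples in section 6:

```python
from collections import defaultdict

# Hypothetical gap-ledger builder: collect every UNKNOWN criterion across
# trial candidates and list which trials it blocks (the "1 gap affects
# 3 trials" cross-reference case in table 5.4). Function name is assumed.
def build_gap_ledger(trial_candidates: list[dict]) -> dict[str, list[str]]:
    gaps: dict[str, list[str]] = defaultdict(list)
    for trial in trial_candidates:
        for c in trial["criteria_results"]:
            if c["status"] == "UNKNOWN":
                gaps[c["criterion"]].append(trial["nct_id"])
    return dict(gaps)
```

Each ledger entry maps one missing data point to the NCT IDs it would unblock, which is exactly what the gap card's "Impact" line renders.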
4.5 SUMMARY State -- Summary & Export Page
+------------------------------------------+
| [i] This tool is for information only... |
| [Sidebar: Journey Progress] |
| |
| Clinical Trial Matching Summary |
| |
| Eligible Trials: 3 |
| Borderline Trials: 2 |
| Not Eligible: 3 |
| Unresolved Gaps: 1 |
| |
| [Download Doctor Packet (JSON/Markdown)] |
| [Start New Session] |
| |
| Chat with AI Copilot: |
| +--------------------------------------+ |
| | AI: Based on your profile... | |
| | You: What about trial NCT...? | |
| | AI: That trial requires... | |
| +--------------------------------------+ |
| | [Type a message...] [Send] | |
| +--------------------------------------+ |
+------------------------------------------+
Key components: chat_panel, progress_tracker
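Per the architecture decisions, the doctor packet is exported as JSON plus Markdown (no PDF). A minimal export sketch, assuming a summary dict with the four counts shown in the mockup; both strings would be passed to st.download_button.

```python
import json

# Hypothetical doctor-packet export for the summary page. The summary
# dict shape is an assumption; the real packet would also carry the
# trial list and gap ledger.
def build_doctor_packet(summary: dict) -> tuple[str, str]:
    packet_json = json.dumps(summary, indent=2)
    packet_md = "\n".join([
        "# Clinical Trial Matching Summary",
        f"- Eligible trials: {summary['eligible']}",
        f"- Borderline trials: {summary['borderline']}",
        f"- Not eligible: {summary['not_eligible']}",
        f"- Unresolved gaps: {summary['gaps']}",
    ])
    return packet_json, packet_md

pj, pm = build_doctor_packet(
    {"eligible": 3, "borderline": 2, "not_eligible": 3, "gaps": 1}
)
```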
5. TDD Test Cases
5.1 Upload Page Tests
| Test Case | Input | Expected Output | Boundary |
|---|---|---|---|
| No files uploaded | Empty uploader | "Start Extraction" button disabled | N/A |
| Single PDF upload | 1 PDF file | File listed, extraction button enabled | N/A |
| Multiple files | 3 PDF + 1 PNG | All 4 files listed with sizes | N/A |
| Invalid file type | 1 .docx file | File rejected, error message shown | File type filter |
| Large file | 250 MB PDF | Error or warning per maxUploadSize | Size limit |
| Extraction triggered | Click "Start Extraction" | st.status shows running, Parlant event sent | N/A |
| Extraction completes | MedGemma returns profile | Journey advances to PRESCREEN, profile in session_state | State transition |
| Extraction fails | MedGemma error | st.status shows error state, retry option | Error handling |
5.2 Profile Review Page Tests
| Test Case | Input | Expected Output | Boundary |
|---|---|---|---|
| Profile display | PatientProfile in session_state | All fields rendered correctly | N/A |
| Unknown fields highlighted | Profile with unknowns list | Unknowns shown with warning icon | N/A |
| Edit profile | Click Edit, modify ECOG | session_state updated, confirmation shown | N/A |
| Confirm profile | Click "Confirm & Search" | Journey advances to VALIDATE_TRIALS | State transition |
| Empty profile | No profile in session_state | Redirect to Upload page | Guard clause |
| Biomarker display | Complex biomarker data | All biomarkers with values and methods | Data richness |
5.3 Trial Matching Page Tests
| Test Case | Input | Expected Output | Boundary |
|---|---|---|---|
| Trials loading | Matching in progress | st.spinner or st.status shown | N/A |
| Trials displayed | 8 TrialCandidates | 8 trial cards with traffic-light criteria | N/A |
| Green criterion | Criterion met with evidence | Green indicator, evidence citation | N/A |
| Yellow criterion | Borderline match | Yellow indicator, explanation | N/A |
| Red criterion | Criterion not met | Red indicator, specific reason | N/A |
| Unknown criterion | Missing data | Question mark, linked to gap | N/A |
| Zero trials | No matches found | Informative message, suggest broadening | Empty state |
| Many trials | 50+ results | Pagination or scroll, performance ok | Scale |
| Search process displayed | SearchLog with 3 steps | 3 step entries shown with query params and result counts | N/A |
| Refinement visible | >50 initial results refined to 12 | Shows refinement action and reason | Iterative loop |
| Relaxation visible | 0 initial results relaxed to 5 | Shows relaxation action and reason | Iterative loop |
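The refinement and relaxation rows describe an iterative search policy. A decision-function sketch for the search loop; the thresholds (0 results relax, more than 50 refine) are assumptions drawn from the boundary rows above, and the PoC may tune them.

```python
# Hypothetical policy for the iterative ClinicalTrials.gov search loop.
# Thresholds are assumptions: 0 results -> relax, >50 -> refine.
def next_search_action(result_count: int) -> str:
    if result_count == 0:
        return "relax"       # broaden the query, e.g. drop a filter
    if result_count > 50:
        return "refine"      # narrow the query, e.g. add a phase filter
    return "shortlist"       # right-sized: read summaries, pick candidates
```

Each decision, together with its reason, would be appended to the SearchLog that the search_process component renders step by step.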
5.4 Gap Analysis Page Tests
| Test Case | Input | Expected Output | Boundary |
|---|---|---|---|
| Gaps identified | 3 gaps in ledger | 3 gap cards with actions | N/A |
| Upload resolves gap | Upload brain MRI report | Gap card updates, re-match option | Iterative flow |
| No gaps | All criteria resolved | Message: "No gaps", proceed to summary | Happy path |
| Gap impacts multiple trials | 1 gap affects 3 trials | Gap card lists all 3 affected trials | Cross-reference |
| Re-run matching | Click re-run after upload | New extraction + matching cycle | Loop back |
5.5 Summary Page Tests
| Test Case | Input | Expected Output | Boundary |
|---|---|---|---|
| Summary statistics | Complete ledger | Correct counts per category | N/A |
| Download doctor packet | Click download | JSON + Markdown files downloadable via st.download_button | N/A |
| Chat interaction | Send message | Message appears, agent responds | N/A |
| New session | Click "Start New" | State cleared, redirect to Upload | State reset |
5.6 Disclaimer Tests
| Test Case | Input | Expected Output | Boundary |
|---|---|---|---|
| Disclaimer on upload page | Navigate to Upload | Info banner with disclaimer text visible | N/A |
| Disclaimer on profile page | Navigate to Profile Review | Info banner with disclaimer text visible | N/A |
| Disclaimer on matching page | Navigate to Trial Matching | Info banner with disclaimer text visible | N/A |
| Disclaimer on gap page | Navigate to Gap Analysis | Info banner with disclaimer text visible | N/A |
| Disclaimer on summary page | Navigate to Summary | Info banner with disclaimer text visible | N/A |
| Disclaimer text content | Any page | Contains "information only" and "not medical advice" | Exact wording |
6. Streamlit AppTest Testing Strategy
6.1 Test Setup Pattern
# tests/test_upload_page.py
import pytest
from streamlit.testing.v1 import AppTest

@pytest.fixture
def upload_app():
    """Create AppTest instance for upload page."""
    at = AppTest.from_file("pages/1_upload.py")
    # Initialize required session state
    at.session_state["journey_state"] = "INGEST"
    at.session_state["parlant_session_id"] = "test-session-123"
    at.session_state["uploaded_files"] = []
    return at.run()

def test_initial_state(upload_app):
    """Upload page shows uploader and disabled extraction button."""
    at = upload_app
    # Check file uploader exists
    assert len(at.file_uploader) > 0
    # Check no error state
    assert len(at.exception) == 0

def test_extraction_button_disabled_without_files(upload_app):
    """Extraction button should be disabled when no files uploaded."""
    at = upload_app
    # Button should exist but extraction should not proceed without files
    assert at.button[0].disabled or at.session_state.get("uploaded_files") == []
6.2 Widget Interaction Patterns
def test_text_input_profile_edit():
    """Test editing patient profile fields via text input."""
    at = AppTest.from_file("pages/2_profile_review.py")
    at.session_state["journey_state"] = "PRESCREEN"
    at.session_state["patient_profile"] = {
        "demographics": {"age": 62, "sex": "Female"},
        "diagnosis": {"stage": "IIIB", "histology": "Adenocarcinoma"},
    }
    at = at.run()
    # Simulate editing a field
    if len(at.text_input) > 0:
        at.text_input[0].input("IIIA").run()
        # Assert profile updated in session state

def test_button_click_advances_journey():
    """Clicking confirm button advances journey to next state."""
    at = AppTest.from_file("pages/2_profile_review.py")
    at.session_state["journey_state"] = "PRESCREEN"
    at.session_state["patient_profile"] = {"demographics": {"age": 62}}
    at = at.run()
    # Find and click confirm button
    confirm_buttons = [b for b in at.button if "Confirm" in str(b.label)]
    if confirm_buttons:
        confirm_buttons[0].click()
        at = at.run()
        assert at.session_state["journey_state"] == "VALIDATE_TRIALS"
6.3 Page Navigation Test
def test_guard_redirect_without_profile():
    """Profile review page redirects to upload if no profile exists."""
    at = AppTest.from_file("pages/2_profile_review.py")
    at.session_state["journey_state"] = "PRESCREEN"
    at.session_state["patient_profile"] = None  # No profile
    at = at.run()
    # Should show warning or error, not crash
    assert len(at.exception) == 0
    # Could check for warning message
    warnings = [m for m in at.warning if "upload" in str(m.value).lower()]
    assert len(warnings) > 0 or at.session_state["journey_state"] == "INGEST"
6.4 Session State Test
def test_session_state_initialization():
    """All session state keys should be initialized on first run."""
    at = AppTest.from_file("app.py").run()
    required_keys = [
        "journey_state", "parlant_session_id", "patient_profile",
        "uploaded_files", "trial_candidates", "eligibility_ledger",
    ]
    for key in required_keys:
        assert key in at.session_state, f"Missing session state key: {key}"

def test_session_state_persists_across_reruns():
    """Session state values persist across multiple reruns."""
    at = AppTest.from_file("app.py").run()
    at.session_state["journey_state"] = "PRESCREEN"
    at = at.run()
    assert at.session_state["journey_state"] == "PRESCREEN"
6.5 Component Rendering Tests
def test_trial_card_traffic_light_rendering():
    """Trial card displays correct traffic light colors for criteria."""
    at = AppTest.from_file("pages/3_trial_matching.py")
    at.session_state["journey_state"] = "VALIDATE_TRIALS"
    at.session_state["trial_candidates"] = [
        {
            "nct_id": "NCT04000001",
            "title": "Test Trial",
            "criteria_results": [
                {"criterion": "NSCLC", "status": "MET", "evidence": "pathology report p.1"},
                {"criterion": "ECOG 0-1", "status": "MET", "evidence": "clinic letter"},
                {"criterion": "No prior IO", "status": "NOT_MET", "evidence": "treatment history"},
                {"criterion": "Brain mets", "status": "UNKNOWN", "evidence": None},
            ],
        }
    ]
    at = at.run()
    # Check that trial card content is rendered
    assert len(at.exception) == 0
    # Check for presence of trial ID in rendered markdown
    markdown_texts = [str(m.value) for m in at.markdown]
    assert any("NCT04000001" in text for text in markdown_texts)
6.6 Error Handling Tests
def test_parlant_connection_error_handling():
    """App should handle Parlant server unavailability gracefully."""
    at = AppTest.from_file("app.py")
    at.session_state["parlant_session_id"] = None  # Simulate no connection
    at = at.run()
    # Should not crash
    assert len(at.exception) == 0

def test_extraction_error_shows_retry():
    """When extraction fails, user sees error status and retry option."""
    at = AppTest.from_file("pages/1_upload.py")
    at.session_state["journey_state"] = "INGEST"
    at.session_state["extraction_error"] = "MedGemma timeout"
    at = at.run()
    # Should show error message
    assert len(at.exception) == 0
    error_msgs = [str(e.value) for e in at.error]
    assert len(error_msgs) > 0 or at.session_state.get("extraction_error") is not None
6.7 Search Process Component Tests
# tests/test_components.py (addition)
class TestSearchProcessComponent:
    """Test search process visualization component."""

    def test_renders_search_steps(self):
        """Search process should display all refinement steps."""
        at = AppTest.from_file("app/components/search_process.py")
        at.session_state["search_log"] = {
            "steps": [
                {"step": 1, "query": {"condition": "NSCLC", "location": "DE"},
                 "found": 47, "action": "refine",
                 "reason": "Too many results, adding phase filter"},
                {"step": 2, "query": {"condition": "NSCLC", "location": "DE", "phase": "Phase 3"},
                 "found": 12, "action": "shortlist",
                 "reason": "Right size for detailed review"},
            ],
            "final_shortlist_nct_ids": ["NCT001", "NCT002", "NCT003", "NCT004", "NCT005"],
        }
        at.run()
        # Verify steps are displayed
        assert "47" in at.text[0].value  # First step result count
        assert "12" in at.text[1].value  # Second step result count
        assert "Phase 3" in at.text[0].value or "Phase 3" in at.text[1].value

    def test_empty_search_log(self):
        """Should handle missing search log gracefully."""
        at = AppTest.from_file("app/components/search_process.py")
        at.run()
        # Should not crash, show placeholder
        assert not at.exception

    def test_collapsible_details(self):
        """Search details should be in an expander for clean UI."""
        at = AppTest.from_file("app/components/search_process.py")
        at.session_state["search_log"] = {
            "steps": [{"step": 1, "query": {}, "found": 10, "action": "shortlist", "reason": "OK"}],
        }
        at.run()
        # Verify expander exists for search details
        assert len(at.expander) >= 1
6.8 Disclaimer Component Tests
# tests/test_components.py (addition)
class TestDisclaimerBanner:
    """Test medical disclaimer banner appears correctly."""

    def test_disclaimer_renders(self):
        """Disclaimer banner should render on every page."""
        at = AppTest.from_file("app/components/disclaimer_banner.py")
        at.run()
        assert len(at.info) >= 1
        assert "information" in at.info[0].value.lower()
        assert "medical advice" in at.info[0].value.lower()

    def test_disclaimer_in_upload_page(self):
        """Upload page should include disclaimer."""
        at = AppTest.from_file("app/pages/1_upload.py")
        at.run()
        info_texts = [i.value.lower() for i in at.info]
        assert any("information" in t and "medical" in t for t in info_texts)
6.9 AppTest Limitations
- AppTest does not support testing st.file_uploader file content directly (mock at the service layer instead).
- Not yet compatible with st.navigation/st.Page multipage apps (test individual pages via from_file).
- No browser rendering -- tests run headless, pure Python.
- Must call .run() after every interaction to see updated state.
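Since file contents cannot be injected through AppTest, the extraction call is best placed behind an injectable service function and tested directly. A sketch; run_extraction, the extractor signature, and the profile shape are all hypothetical names for this pattern.

```python
# Service-layer testing pattern for the upload flow: the page handler
# delegates to an injectable extractor, so tests pass a fake instead of
# the MedGemma-backed service. All names here are hypothetical.
def run_extraction(files: list, extract_profile) -> dict:
    """Page-level handler: validates input, then delegates to the service."""
    if not files:
        raise ValueError("No files uploaded")
    return extract_profile(files)

def fake_extract_profile(files):
    # Stand-in for the MedGemma-backed extractor in services/
    return {"diagnosis": {"stage": "IIIB"}, "unknowns": ["KRAS status"]}

profile = run_extraction(["clinic_letter.pdf"], fake_extract_profile)
```

In pytest this fake would typically be injected via a fixture or unittest.mock, keeping the AppTest suite free of real MedGemma calls.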
7. Appendix: API Reference
7.1 Streamlit Key APIs
| API | Purpose | Notes |
|---|---|---|
| st.navigation(pages, position) | Define multipage app | Returns current page, must call .run() |
| st.Page(page, title, icon, url_path) | Define a page | page = filepath or callable |
| st.switch_page(page) | Programmatic navigation | Stops current page execution |
| st.page_link(page, label, icon) | Clickable nav link | Non-blocking |
| st.file_uploader(label, type, accept_multiple_files, key) | File upload widget | Returns UploadedFile (extends BytesIO) |
| st.session_state | Persistent key-value store | Survives reruns, per-session |
| st.status(label, expanded, state) | Collapsible status container | Context manager, auto-completes |
| st.spinner(text, show_time) | Loading spinner | Context manager |
| st.progress(value, text) | Progress bar | 0-100 int or 0.0-1.0 float |
| st.toast(body, icon, duration) | Transient notification | Top-right corner |
| st.write_stream(generator) | Streaming text output | Typewriter effect for strings |
| @st.fragment(run_every=N) | Partial rerun decorator | Isolated from full app rerun |
| st.rerun(scope) | Trigger rerun | "app" or "fragment" |
| st.chat_message(name) | Chat bubble | "user", "assistant", or custom |
| st.chat_input(placeholder) | Chat text input | Fixed at bottom of container |
| AppTest.from_file(path) | Create test instance | .run() to execute |
| AppTest.from_string(code) | Test from string | Quick inline tests |
| at.button[i].click() | Simulate button click | Chain with .run() |
| at.text_input[i].input(val) | Simulate text entry | Chain with .run() |
| at.slider[i].set_value(val) | Set slider value | Chain with .run() |
7.2 Parlant Key APIs (from DeepWiki emcie-co/parlant)
REST Endpoints:
| Endpoint | Method | Purpose | Key Params |
|---|---|---|---|
| /agents | POST | Create agent | name, description |
| /sessions | POST | Create session | agent_id, customer_id (optional), title, metadata |
| /sessions | GET | List sessions | agent_id, customer_id, limit, cursor, sort |
| /sessions/{id}/events | POST | Send event | kind, source, message/data, metadata; query: moderation |
| /sessions/{id}/events | GET | Poll events | min_offset, wait_for_data, source, correlation_id, trace_id, kinds |
| /sessions/{id}/events/{eid} | PATCH | Update event | metadata updates only |
Event kinds: message, status, tool, custom
Event sources: customer, customer_ui, ai_agent, human_agent, human_agent_on_behalf_of_ai_agent, system
Status event states: acknowledged, processing, typing, ready, error, cancelled
Long-polling behavior: wait_for_data > 0 blocks until new events or timeout; returns 504 on timeout.
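The long-polling behavior above suggests the shape of the client loop in services/parlant_client.py. This sketch injects the HTTP call so the retry logic stays testable; in the app, `get` would wrap httpx.get against the /sessions/{id}/events endpoint. Treating HTTP 504 as "no new events yet, poll again" follows the documented timeout behavior; the function signature is an assumption.

```python
# Sketch of the long-poll loop for services/parlant_client.py. The HTTP
# call is injected (in production: a thin wrapper around httpx.get) so
# the 504-retry logic can be unit-tested without a server.
def poll_events(get, session_id: str, min_offset: int,
                wait_for_data: int = 30, max_polls: int = 3) -> list:
    for _ in range(max_polls):
        status, events = get(
            f"/sessions/{session_id}/events",
            params={"min_offset": min_offset, "wait_for_data": wait_for_data},
        )
        if status == 504:  # long-poll timed out with no new events: retry
            continue
        return events
    return []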
SDK APIs:
| SDK Method | Purpose |
|---|---|
| agent.create_journey(title, conditions, description) | Create Journey with state machine |
| journey.initial_state.transition_to(chat_state=..., tool_state=..., condition=...) | Define state transitions |
| agent.create_guideline(condition, action, tools=[...]) | Create global guideline |
| journey.create_guideline(condition, action, tools=[...]) | Create journey-scoped guideline |
| p.Server(session_store="local"/"mongodb://...") | Configure session persistence |
Tool decorator: @p.tool auto-extracts name, description, parameters from function signature.
NLP backend: parlant-server --gemini (requires GEMINI_API_KEY and pip install parlant[gemini]).
Client SDK: parlant-client (Python), TypeScript client, or direct REST.
Storage options: in-memory (default/testing), local JSON, MongoDB (production).
7.3 Integration Pattern: Streamlit + Parlant
User Action (Streamlit UI)
-> st.session_state update
-> ParlantClient.send_message() or send_custom_event()
-> Parlant Server processes (async)
-> @st.fragment polls ParlantClient.poll_events()
-> New events update st.session_state
-> UI rerenders with new data
This polling loop runs via @st.fragment(run_every=3) to avoid blocking the main app thread, providing near-real-time updates without full page reruns.
References
- Streamlit source: DeepWiki analysis of streamlit/streamlit
- Parlant source: DeepWiki analysis of emcie-co/parlant
- Parlant official docs: https://www.parlant.io/docs/
- Parlant Sessions: https://www.parlant.io/docs/concepts/sessions/
- Parlant Conversation API: https://www.parlant.io/docs/engine-internals/conversation-api/
- Parlant GitHub: https://github.com/emcie-co/parlant
- Parlant Journey System: DeepWiki emcie-co/parlant section 5.2
- Parlant Guideline System: DeepWiki emcie-co/parlant section 5.1
- Parlant Tool Integration: DeepWiki emcie-co/parlant section 6
- Parlant NLP Providers: DeepWiki emcie-co/parlant section 10.1