Role & Persona Overlays: Multi-Agent Identity in SI-Core
This document is non-normative. It describes how to think about and implement multi-agent / multi-role identity in SI-Core.
Normative contracts live in the [ID] spec, Jump contract docs, and the evaluation packs.
0. Conventions used in this draft (non-normative)
This draft follows the portability conventions used in 069/084+ when an artifact might be exported, hashed, or attested (RolePersonaOverlays, delegation records, role contracts, capability logs, JumpTrace/MEM summaries):
- `created_at` is operational time (advisory unless time is attested).
- `as_of` carries markers only (time claim + optional revocation view markers) and SHOULD declare `clock_profile: "si/clock-profile/utc/v1"` when exported.
- `trust` carries digests only (trust anchors + optional revocation view digests). Never mix markers into `trust`.
- `bindings` pins meaning as `{id, digest}` (meaningful identities must not be digest-only).
- Avoid floats in policy-/digest-bound artifacts: prefer scaled integers (`*_bp`, `*_ppm`) and integer micro/milliseconds.
- If you hash/attest procedural artifacts, declare canonicalization explicitly: `canonicalization: "si/jcs-strict/v1"` and `canonicalization_profile_digest: "sha256:..."`. `digest_rule` strings (when present) are explanatory only; verifiers MUST compute digests using pinned schemas/profiles, not by parsing `digest_rule`.
Numeric conventions used in examples:
- For weights and ratios in `[0, 1]`, export as basis points: `x_bp = round(x * 10000)`.
- For probabilities in `[0, 1]`, export as basis points: `p_bp = round(p * 10000)`.
- For very small probabilities, ppm is acceptable: `p_ppm = round(p * 1_000_000)`.
Internal computation may still use floats; the convention here is about exported/hashed representations.
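A minimal sketch of these conversions (the helper names `to_bp` and `to_ppm` are illustrative, not part of any spec):

```python
def to_bp(x: float) -> int:
    """Export a weight/ratio/probability in [0, 1] as basis points."""
    if not 0.0 <= x <= 1.0:
        raise ValueError(f"expected a value in [0, 1], got {x}")
    return round(x * 10_000)


def to_ppm(p: float) -> int:
    """Export a very small probability in [0, 1] as parts per million."""
    if not 0.0 <= p <= 1.0:
        raise ValueError(f"expected a value in [0, 1], got {p}")
    return round(p * 1_000_000)
```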
1. Why “Role & Persona” exists at all
Most examples in the early SI-Core docs implicitly assume a single decision-maker:
"user" → [OBS] → Jump → [ETH] → [RML]
In real systems, that breaks immediately:
- CityOS: city operator, flood-control SI, power-grid SI, emergency services, regulators.
- Education: learner, parent, teacher, school, platform operator, regulator.
- OSS: contributor, maintainer, foundation, corporate users, CI/infra operators.
- Healthcare: patient, clinician, hospital, insurer, regulator.
The same Jump can mean different things depending on:
- Whose goals are being optimized (child vs school vs platform).
- Who is allowed to decide (teacher vs parent vs city authority).
- Which “persona/viewpoint” is active (as the learner, as the regulator, as the system operator).
So we want a first-class structure that answers, for every Jump and every effect:
“Who is this Jump for, who is doing it, under which role and which viewpoint?”
This is what Role & Persona Overlays provide on top of [ID].
2. Mental model: principals, agents, roles, personas
We’ll use four basic terms:
Principal — whose world-state / welfare the goals are primarily about.
- Examples: learner-123, city-A, project-foo OSS community, patient-456.
Agent — the entity acting (human operator, SI-Core instance, external service).
- Examples: city_ops_console, learning_companion_si, ci_orchestrator, clinician_portal.
Role — how an agent is currently allowed to act with respect to a principal.
- Examples: “teacher-as-educator of learner-123”, “city_operator for city-A flood_control”, “regulator for district-X”.
Persona — which viewpoint / lens is used to render goals/metrics/risks/justifications for a stakeholder. Personas do not grant authority, and must not change the chosen action. If you need different actions, model it as a different role and/or a different role-projected goal surface.
A Role & Persona Overlay makes these explicit and attaches them structurally to:
- every Jump,
- every GCS / goal surface view used for planning,
- every RML effect,
- every MEM / audit entry.
Very roughly:
jump_identity_overlay:
created_at: "2028-04-15T10:03:12Z" # operational time (advisory unless time is attested)
as_of:
time: "2028-04-15T10:03:12Z"
clock_profile: "si/clock-profile/utc/v1"
trust:
trust_anchor_set_id: "si/trust-anchors/example/v1"
trust_anchor_set_digest: "sha256:..."
canonicalization: "si/jcs-strict/v1"
canonicalization_profile_digest: "sha256:..."
# Pin meaning when exporting/hashing (avoid digest-only meaning).
bindings:
principal:
id: "learner:1234"
digest: "sha256:..."
agent:
id: "si:learning_companion:v2"
digest: "sha256:..."
role:
id: "role:teacher_delegate/reading_support"
digest: "sha256:..."
persona:
id: "persona:learner_view"
digest: "sha256:..."
principal_id: "learner:1234"
agent_id: "si:learning_companion:v2"
role_id: "role:teacher_delegate/reading_support"
persona_id: "persona:learner_view"
delegation_chain:
- "learner:1234"
- "guardian:777"
- "school:abc"
- "human:teacher:42"
- "si:learning_companion:v2"
3. Identity graph: from single IDs to role graphs
Instead of flat “user_id”, SI-Core keeps an identity graph:
id_graph:
principals:
- id: "learner:1234"
type: "human"
domain: "learning"
- id: "city:A"
type: "organization"
domain: "city"
agents:
- id: "si:learning_companion:v2"
type: "si_core"
- id: "human:teacher:42"
type: "human"
- id: "service:ci_orchestrator"
type: "service"
roles:
- id: "role:teacher_delegate/reading_support"
principal: "learner:1234"
agent: "si:learning_companion:v2"
granted_by: "human:teacher:42"
capabilities:
- "propose_learning_plan"
- "select_exercise"
- "log_learning_events"
prohibited:
- "change_guardian_settings"
- "access_financial_data"
personas:
- id: "persona:learner_view"
projection_rules:
explanation_style: "simple_language"
show_metrics: ["mastery_progress", "wellbeing", "fun_score"]
hide_metrics: ["platform_cost", "platform_engagement_score"]
The ID subsystem provides:
- APIs to resolve `principal → roles → agents`,
- delegation chains (who delegated what to whom, when, under what terms),
- persona definitions and projection rules.
Role & Persona Overlays are just structured views over this graph, attached to Jumps.
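As a rough illustration, the `principal → roles → agents` resolution can be sketched over a tiny in-memory graph (the `IdGraph` class and its data layout are assumptions for illustration, not the [ID] API):

```python
class IdGraph:
    """Toy in-memory identity graph; not the normative [ID] API."""

    def __init__(self, roles: list[dict]):
        self.roles = roles  # each: {"id", "principal", "agent", "capabilities", ...}

    def roles_for_principal(self, principal_id: str) -> list[dict]:
        # principal → roles
        return [r for r in self.roles if r["principal"] == principal_id]

    def agents_for_principal(self, principal_id: str) -> list[str]:
        # principal → roles → agents
        return sorted({r["agent"] for r in self.roles_for_principal(principal_id)})


graph = IdGraph(roles=[
    {"id": "role:teacher_delegate/reading_support",
     "principal": "learner:1234",
     "agent": "si:learning_companion:v2",
     "capabilities": ["propose_learning_plan", "select_exercise"]},
])
```

`graph.agents_for_principal("learner:1234")` then answers "which agents can currently act for this learner?" as a query over the graph.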
4. Role & Persona Overlay: the contract on a Jump
Every Jump request carries an ID overlay:
@dataclass
class RolePersonaOverlay:
principal_id: str
agent_id: str
role_id: str
persona_id: str | None # presentation / explanation only
delegation_chain: list[str] # principal → … → current agent
consent_refs: list[str] # MEM pointers (e.g. consent records)
# The ONLY surface used for GCS/planning: role-projected from the full system surface.
# Persona must NOT modify this.
goal_surface_view: GoalSurfaceView
# Optional: “who is watching”
oversight_roles: list[str] # e.g. ["role:school_ethics_board", "role:city_regulator"]
# Optional: multi-principal runs (e.g., learner + guardian constraints)
co_principals: list[str] # additional principals whose constraints must be honored
A robust runtime SHOULD enforce these implementation invariants (non-normative):
- Use `role_persona.goal_surface_view` for GCS and planning, not the global unconstrained surface.
- Treat `persona_id` as a rendering selector only (explanations, metric projections, UI framing). If different personas require different actions, model it as a different role and/or a different role-projected `goal_surface_view`.
- Enforce role capabilities when constructing candidate actions (ETH/RML).
- Emit ID-aware traces for [MEM] so auditors can later answer:
- “This Jump was done by which agent, acting for whom, under which role?”
Non-normative JumpRequest sketch:
@dataclass
class JumpRequest:
obs_bundle: ObservationBundle
goal_surface: GoalSurface # full, system-level (may be present for reference/derivation)
role_persona: RolePersonaOverlay
rml_budget: RmlBudget
# ...
Those invariants apply regardless of whether the request carries a full global surface; action selection is always constrained by the role-projected view and role capabilities.
Normative cross-reference (pointer only): the corresponding MUST-level requirements belong in the [ID]/[JUMP] specs. This document restates them informally as the SHOULD-level invariants above.
5. From global goals to per-role goal surfaces
Global goals are often “city-wide” or “platform-wide”. But each role sees a slice.
Example: city flood control
global_goal_surface:
safety:
flood_risk_min: {weight_bp: 10000}
hospital_access: {weight_bp: 10000}
fairness:
district_equity: {weight_bp: 7000}
efficiency:
mobility_efficiency: {weight_bp: 5000}
energy_cost_min: {weight_bp: 3000}
ops:
infra_stability: {weight_bp: 8000}
Role-specific projections (export-friendly and valid YAML):
role_goal_views:
"role:city_ops/flood_operator__persona:ops_view":
role_id: "role:city_ops/flood_operator"
persona_id: "persona:ops_view"
include:
- "safety.*"
- "ops.*"
- "efficiency.mobility_efficiency"
downweight_bp:
fairness.district_equity: 5000 # multiplier 0.5 (still present, weaker)
constraints:
- "never_violate_regulator_minimums == true"
"role:regulator/flood_safety__persona:regulator_view":
role_id: "role:regulator/flood_safety"
persona_id: "persona:regulator_view"
include:
- "safety.*"
- "fairness.*"
downweight_bp:
efficiency.*: 2500 # multiplier 0.25
An overlay construction function:
def derive_goal_surface_view(global_surface, role_id, persona_id) -> "GoalSurfaceView":
view_key = f"{role_id}__{persona_id}"
rules = role_goal_config[view_key]
return project_goal_surface(global_surface, rules)
This ensures:
- The same world can be seen through multiple goal surfaces.
- A Jump always uses the goal view consistent with its role/persona, rather than some implicit, global default.
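A non-normative sketch of the `project_goal_surface` step, assuming a flat `{goal_path: weight_bp}` surface and glob-style `include`/`downweight_bp` patterns (the flat representation is an assumption for illustration):

```python
import fnmatch


def project_goal_surface(surface: dict[str, int], rules: dict) -> dict[str, int]:
    """Apply `include` globs and `downweight_bp` multipliers to a flat surface."""
    view: dict[str, int] = {}
    for path, weight_bp in surface.items():
        if any(fnmatch.fnmatch(path, pat) for pat in rules.get("include", [])):
            view[path] = weight_bp
        for pat, mult_bp in rules.get("downweight_bp", {}).items():
            if fnmatch.fnmatch(path, pat):
                # Downweighted goals stay present, at reduced weight (integer bp math).
                view[path] = surface[path] * mult_bp // 10_000
    return view
```

With the flood-operator rules above, `fairness.district_equity` stays in the view at half weight while `efficiency.energy_cost_min` is dropped entirely.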
6. Multi-agent Jump patterns
6.1 Pattern 1 — Single principal, multiple cooperating agents
Example: CityOS with several SI subsystems (flood, traffic, grid) all optimising on behalf of principal = city:A.
We have:
multi_agent_city:
principal: "city:A"
agents:
- "si:flood_control:v1"
- "si:traffic_mgr:v3"
- "si:grid_mgr:v2"
roles:
- id: "role:city:flood_controller"
principal: "city:A"
agent: "si:flood_control:v1"
capabilities: ["adjust_gates", "issue_flood_alerts"]
- id: "role:city:traffic_controller"
principal: "city:A"
agent: "si:traffic_mgr:v3"
capabilities: ["update_signals", "route_closures"]
Coordination Jump:
class CityCoordinationJump(JumpEngine):
def propose(self, req: JumpRequest) -> JumpDraft:
# 1. Decompose into sub-Jumps with their own overlays
flood_req = req.with_role("role:city:flood_controller", persona_id="persona:ops_view")
traffic_req = req.with_role("role:city:traffic_controller", persona_id="persona:ops_view")
flood_plan = self.flood_engine.propose(flood_req)
traffic_plan = self.traffic_engine.propose(traffic_req)
# 2. Reconcile under shared principal & meta-goals
reconciled = self.coordination_layer.merge(
flood_plan, traffic_plan, principal=req.role_persona.principal_id
)
return reconciled
ID invariant: all sub-Jumps in this coordination share the same principal_id = "city:A", but differ in agent_id and role_id.
6.2 Pattern 2 — Multi-principal conflicts
Example: A learning platform with:
- principal A: learner-123,
- principal B: school-XYZ,
- principal C: platform operator.
Sometimes goals conflict (e.g., time spent vs wellbeing vs completion targets).
We make this explicit:
multi_principal_context:
principals:
primary: "learner:123"
co_principals:
school: "school:xyz"
platform: "platform:learning_app"
goals:
learner:123:
wellbeing: {weight_bp: 10000, hard: true}
mastery: {weight_bp: 8000}
school:xyz:
completion_rate: {weight_bp: 7000}
test_scores: {weight_bp: 9000}
platform:learning_app:
engagement_time: {weight_bp: 5000}
churn_min: {weight_bp: 6000}
A multi-principal Jump then has:
role_persona:
principal_id: "learner:123" # primary
co_principals: ["school:xyz", "platform:learning_app"]
agent_id: "si:learning_companion:v2"
role_id: "role:learning_companion/student_side"
persona_id: "persona:learner_view"
The Jump engine:
- uses primary principal for hard constraints (e.g., learner wellbeing),
- treats other principals as soft multi-objective goals,
- exposes the trade-offs in the Ethics/EVAL traces.
We’ll leave detailed multi-principal trade-offs to the EVAL article, but the ID overlay is what makes the structure explicit.
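As a compact sketch of the primary-hard / co-principal-soft pattern (surface shapes, `score_bp`, and the weights are illustrative assumptions; Section 14 discusses fuller resolution strategies):

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class PrincipalSurface:
    hard_ok: Callable[[dict], bool]   # hard constraints (e.g. learner wellbeing)
    score_bp: Callable[[dict], int]   # soft score, basis points


def pick_action(candidates, surfaces, primary, co_weights_bp):
    """Primary hard constraints dominate; co-principals add weighted soft terms."""
    best, best_score = None, None
    for action in candidates:
        if not surfaces[primary].hard_ok(action):
            continue  # infeasible for the primary principal
        score = surfaces[primary].score_bp(action) * 10_000
        for p, w_bp in co_weights_bp.items():
            score += surfaces[p].score_bp(action) * w_bp
        if best_score is None or score > best_score:
            best, best_score = action, score
    return best
```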
6.3 Pattern 3 — Human-in-the-loop joint Jumps
Example: City operator + SI engine jointly steering flood gates.
joint_jump:
principal_id: "city:A"
participants:
- agent_id: "si:flood_control:v1"
role_id: "role:city:flood_controller"
- agent_id: "human:ops:alice"
role_id: "role:city:human_operator"
coordination_mode: "si_proposes_human_confirms"
Execution sketch:
def human_in_the_loop_jump(req):
si_overlay = req.role_persona.with_agent("si:flood_control:v1",
role_id="role:city:flood_controller")
draft = flood_engine.propose(req.with_overlay(si_overlay))
# Present to human operator with their own persona
human_overlay = req.role_persona.with_agent("human:ops:alice",
role_id="role:city:human_operator",
persona_id="persona:ops_view")
decision = human_console.review_and_confirm(draft, human_overlay)
return decision
ID/MEM must record both agents and roles in the resulting JumpTrace.
7. Persona overlays: “whose eyes are we looking through?”
Personas are mostly about how we view and explain goals, not about legal authority.
Example in learning:
persona:learner_view:
explanation_language: "simple"
show_metrics: ["mastery_progress", "fun_score"]
hide_metrics: ["school_completion_target", "platform_engagement_score"]
persona:teacher_view:
explanation_language: "professional"
show_metrics: ["mastery_progress", "curriculum_coverage", "risk_flags"]
hide_metrics: ["platform_engagement_score"]
persona:guardian_view:
explanation_language: "non-technical"
show_metrics: ["wellbeing", "screen_time", "data_use_summary"]
The same underlying Jump can be rendered differently:
def render_jump_result(result, persona_id):
persona = persona_catalog[persona_id]
projected_metrics = project_metrics(result.metrics, persona)
explanation = render_explanation(result, persona)
return PersonaResult(metrics=projected_metrics, explanation=explanation)
Key points:
- Personas should not be used to choose different actions (that’s roles + goals + ETH).
- Personas change how outcomes, risks, and justifications are surfaced to different stakeholders.
- Personas do not change goal weights or action selection.
- If required disclosures cannot be rendered for a given persona, the runtime should HARD_BLOCK via [ETH] (a visibility/consent constraint), rather than silently executing.
- Personas should be considered in [ETH]: “is this explanation adequate and non-manipulative for this persona?”
8. Role & Persona in Jumps, ETH, RML, MEM
8.1 Jump contracts
Each Jump contract gets explicit ID fields:
jump_contract:
name: "learning.pick_next_exercise"
id_overlay_required: true
requires:
principal_type: "learner"
allowed_roles:
- "role:learning_companion/student_side"
- "role:teacher_delegate/reading_support"
forbids:
# Visibility/consent gate: a Jump that cannot be rendered with required disclosures
# for the requesting persona SHOULD be blocked (handled via ETH), rather than executed silently.
persona_ids: ["persona:ops_view"]
8.2 ETH overlays
ETH rules can scope by role/persona:
eth_rules:
- id: "no_dark_patterns"
applies_to:
jumps: ["learning.pick_next_exercise"]
personas: ["persona:learner_view"]
rule: |
"Never choose an exercise primarily to increase 'engagement_time'
when learner wellbeing is below threshold."
8.3 RML and effects
Effects must know on whose behalf they run:
rml_effect:
type: "send_notification"
principal_id: "learner:1234"
agent_id: "si:learning_companion:v2"
role_id: "role:learning_companion/student_side"
persona_id: "persona:learner_view"
This matters for:
- audit and consent,
- rate limiting & quotas per principal,
- incident response (“which role caused this effect?”).
8.4 MEM / audit traces
A JumpTrace enriched with ID overlay:
jump_trace:
jump_id: "JMP-2028-04-15-00123"
time: "2028-04-15T10:03:12Z"
principal_id: "city:A"
agent_id: "si:flood_control:v1"
role_id: "role:city:flood_controller"
persona_id: "persona:ops_view"
delegation_chain:
- "city:A"
- "city_council:resolution_2028-03"
- "city_ops_team"
- "si:flood_control:v1"
decisions:
action: "open_gates_sector_12_by_0.15"
gcs_vector: {...}
eth_trace_ref: "ETH-TR-987"
effects:
- effect_id: "EFF-555"
rml_type: "actuator.move"
rml_level: 2
This makes “who decided what, for whom, under which authority?” a query, not a forensic reconstruction.
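As a toy illustration of that query, traces carrying overlay fields can be indexed directly (the trace shape mirrors the YAML above; the helper is hypothetical):

```python
def who_did_what(traces: list[dict]) -> dict:
    """Index JumpTraces so 'who decided what, for whom?' is a dict lookup."""
    index: dict = {}
    for t in traces:
        key = (t["principal_id"], t["agent_id"], t["role_id"])
        index.setdefault(key, []).append(t["jump_id"])
    return index
```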
9. Domain patterns
9.1 Learning & developmental support
Typical principals:
`learner:*`, `guardian:*`, `school:*`, `platform:*`.
Roles & personas:
roles:
role_learning_companion_student_side:
id: "role:learning_companion/student_side"
principal: "learner:*"
agent: "si:learning_companion:*"
role_teacher_delegate_reading_support:
id: "role:teacher_delegate/reading_support"
principal: "learner:*"
agent: "si:learning_companion:*"
granted_by: "human:teacher:*"
role_school_admin_curriculum:
id: "role:school_admin/curriculum"
principal: "school:*"
agent: "human:school_admin:*"
personas:
learner_view: {}
teacher_view: {}
guardian_view: {}
ops_view: {}
Patterns:
- Student-side Jump: `principal=learner`, `persona=learner_view`.
- Progress report Jump: `principal=learner`, `persona=teacher_view` vs `guardian_view`.
- Ops tuning Jump: `principal=platform`, `persona=ops_view`, constrained by ETH with respect to learners.
9.2 CityOS
Principals:
`city:*`, `district:*`, `resident_group:*`.
Roles:
roles:
- "role:city:flood_controller"
- "role:city:traffic_controller"
- "role:regulator/flood_safety"
- "role:infra_operator/power_grid"
Personas:
`ops_view`, `regulator_view`, `public_report_view`.
Typical pattern:
- Operational Jumps run under `ops_view`, but all high-impact effect traces must be viewable under the `regulator_view` and `public_report_view` personas, with different explanation/detail levels.
9.3 OSS / infrastructure
Principals:
`project:foo`, `foundation:bar`, `customer:corp123`.
Roles:
`role:maintainer/merge`, `role:ci_orchestrator`, `role:security_reviewer`.
Personas:
`maintainer_view`, `contributor_view`, `security_view`, `customer_view`.
Example:
- A “genius” refactor CI Jump: principal `project:foo`, agent `ci_orchestrator`, role `role:ci_orchestrator`.
- When viewed as `security_view`, Jump explanations emphasise supply-chain risk, not build latency.
10. Implementation path
You don’t need a full ID stack to start.
1. Stop treating `user_id` as “identity”. Introduce a minimal `RolePersonaOverlay` struct in your Jump runtime.
2. Define principals and roles for one domain.
   - Learning: `learner`, `teacher_delegate`, `platform_ops`.
   - City: `city`, `district`, `ops`, `regulator`.
3. Start logging overlays on Jumps and RML effects.
   - Even if you don’t yet project goal surfaces, get the audit trail in place.
4. Gradually tie goal surfaces to roles.
   - Step 1: read-only view; record which goals would change under different personas.
   - Step 2: enforce role-specific goal views for a subset of Jumps.
5. Hook into ETH & EVAL.
   - ETH: rules conditioned on role/persona.
   - EVAL: metrics broken down by principal and role (“which roles generate which outcomes?”).
6. Move towards a full ID graph.
   - Replace ad-hoc delegation logic with a single graph + query API.
   - Make “who can trigger which Jump for which principal” a centralized policy.
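For the logging step, a minimal sketch, assuming a JSON-lines sink and the overlay field names used in this document (the helper itself is hypothetical):

```python
import json


def log_overlay(sink: list, jump_id: str, overlay: dict) -> None:
    """Append one JSON-lines record tying a Jump to its ID overlay."""
    record = {
        "jump_id": jump_id,
        "principal_id": overlay["principal_id"],
        "agent_id": overlay["agent_id"],
        "role_id": overlay["role_id"],
        "persona_id": overlay.get("persona_id"),
        "delegation_chain": overlay.get("delegation_chain", []),
    }
    sink.append(json.dumps(record, sort_keys=True))
```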
11. Invariants and failure modes
Non-normative but useful invariants:
- [ID-1] No Jump without an overlay: every Jump must have a `RolePersonaOverlay` with a valid `principal_id` and `agent_id`.
- [ID-2] Capabilities match roles: a Jump may not propose or execute RML effects outside the capabilities of its `role_id`.
- [ID-3] Principal consistency in sequences: multi-Jump sequences must explicitly handle multi-principal cases; otherwise, all Jumps in a sequence must share the same `principal_id`.
- [ID-4] ETH visibility: ETH overlays must be able to see `principal_id`, `agent_id`, `role_id`, and `persona_id` when evaluating constraints.
Common failure modes this design tries to avoid:
- “Ghost principal” — effects executed, but no clear record of on whose behalf.
- “Role drift” — SI starts acting as ops or platform when it should be acting as learner or citizen.
- “Persona mismatch” — explanations / UI that are misaligned with the audience (e.g., ops-view exposed to children).
12. Summary
Role & Persona Overlays give SI-Core a first-class structure for:
- Whose world we’re optimising (principals),
- Who is acting (agents),
- Under what authority (roles),
- Through whose eyes (personas).
They sit on top of [ID], but touch everything:
- [OBS] and goal surfaces (per-role projections),
- [JUMP] and decision engines,
- [ETH] and constraints,
- [RML] and effect logs,
- [MEM] and audit.
They are what turns “a Jump that looks smart” into:
“A Jump that is smart for the right person, by the right agent, under the right authority and viewpoint — and provably so in the logs.”
That’s the point of Role & Persona Overlays in SI-Core.
13. Delegation chain management and verification
Challenge: Delegation chains must be verifiable, revocable, and time-bounded. “SI did X on behalf of Y” is only meaningful if we can prove the delegation was valid at the time of the Jump, and see when it stopped being valid.
13.1 Delegation records
We treat each “link” in an authority chain as a first-class delegation record (portable/export-friendly):
delegation:
id: "DEL-2028-04-123"
created_at: "2028-04-01T08:00:00Z" # operational time (advisory unless time is attested)
as_of:
time: "2028-04-01T08:00:00Z"
clock_profile: "si/clock-profile/utc/v1"
trust:
trust_anchor_set_id: "si/trust-anchors/example/v1"
trust_anchor_set_digest: "sha256:..."
canonicalization: "si/jcs-strict/v1"
canonicalization_profile_digest: "sha256:..."
bindings:
from:
id: "human:teacher:42"
digest: "sha256:..."
to:
id: "si:learning_companion:v2"
digest: "sha256:..."
role:
id: "role:teacher_delegate/reading_support"
digest: "sha256:..."
principal:
id: "learner:1234"
digest: "sha256:..."
from: "human:teacher:42"
to: "si:learning_companion:v2"
role: "role:teacher_delegate/reading_support"
principal: "learner:1234"
granted_at: "2028-04-01T08:00:00Z"
expires_at: "2028-09-30T23:59:59Z"
capabilities:
- "propose_learning_plan"
- "select_exercise"
- "log_events"
conditions:
- "only_during_school_hours == true"
- "must_notify_teacher_of_concerns == true"
revocable_by:
- "human:teacher:42"
- "guardian:777"
- "learner:1234"
A delegation chain is then just:
principal → … → agent
with each hop backed by one of these records.
13.2 Delegation chain verification
Before executing a Jump, SI-Core should verify that the full chain from principal to agent is valid at this moment:
class DelegationChainVerifier:
def verify_chain(self, chain: list[str], current_time) -> "VerificationResult":
"""Verify that the principal → … → agent chain is valid."""
for i in range(len(chain) - 1):
delegator = chain[i]
delegatee = chain[i + 1]
delegation = self.get_delegation(delegator, delegatee)
if not delegation:
return VerificationResult(
ok=False,
reason=f"No delegation from {delegator} to {delegatee}"
)
# Expiry
if delegation.expires_at < current_time:
return VerificationResult(ok=False, reason="Delegation expired")
# Revocation
if self.is_revoked(delegation.id):
return VerificationResult(ok=False, reason="Delegation revoked")
# Conditions (time-of-day, context, etc.)
if not self._check_conditions(delegation.conditions, current_time):
return VerificationResult(
ok=False,
reason="Delegation conditions not met"
)
return VerificationResult(ok=True)
For safety-critical domains, this verification result (and the chain itself) should be embedded into the JumpTrace so auditors can reconstruct exactly which delegations were relied on.
13.3 Revocation and invalidation
Delegations must be revocable by the appropriate parties:
class DelegationRevoker:
def revoke(self, delegation_id: str, revoker_id: str, reason: str):
"""Revoke an existing delegation."""
delegation = self.delegation_store.get(delegation_id)
if revoker_id not in delegation.revocable_by:
raise UnauthorizedRevocation(f"{revoker_id} cannot revoke {delegation_id}")
revocation = Revocation(
delegation_id=delegation_id,
revoked_by=revoker_id,
revoked_at=now(),
reason=reason,
)
self.revocation_store.save(revocation)
# Optionally: cancel or downgrade in-flight sessions / Jumps
self._invalidate_active_jumps(delegation_id)
Key invariant: no new Jump may rely on a delegation once a revocation record exists.
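A sketch of that invariant as a pre-Jump guard, assuming the revocation store is queryable by delegation id (names illustrative):

```python
def assert_chain_unrevoked(delegation_ids: list[str], revocation_store: dict) -> None:
    """Reject a chain if any hop's delegation has a revocation record."""
    for del_id in delegation_ids:
        if revocation_store.get(del_id) is not None:
            raise PermissionError(f"delegation {del_id} has been revoked")
```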
13.4 Time-bounded delegation policies
Rather than ad-hoc expiries, it helps to define a small set of delegation time profiles:
time_policies:
short_term:
duration: "24 hours"
use_case: "Emergency response delegation"
auto_extend: false
term_based:
duration: "1 semester"
use_case: "Teacher delegate for learning"
auto_extend: false
renewal_reminder: "30 days before expiry"
ongoing:
duration: "indefinite"
use_case: "City operator permanent role"
periodic_review: "6 months"
These become templates for delegation records. For long-lived delegations, periodic reviews act as a soft “re-consent” mechanism.
13.5 Delegation graph queries
The ID subsystem should provide graph queries such as:
class DelegationGraphQuery:
def get_active_delegations(self, principal_id: str):
"""Return all active (non-expired, non-revoked) delegations for a principal."""
return [
d for d in self.delegation_store.query(principal=principal_id)
if not d.expired and not self.is_revoked(d.id)
]
def get_transitive_agents(self, principal_id: str) -> list[str]:
"""All agents that can (transitively) act for this principal."""
visited = set()
queue = [principal_id]
agents = []
while queue:
current = queue.pop(0)
if current in visited:
continue
visited.add(current)
delegations = self.get_active_delegations(current)
for d in delegations:
if d.to not in visited:
agents.append(d.to)
queue.append(d.to)
return agents
This makes questions like “who can act for learner-1234 right now?” or “which SIs had any authority over city-A” explicit and auditable.
14. Multi-principal conflict resolution
Challenge: When multiple principals (learner, school, platform, city, etc.) have conflicting goals, Jumps need a systematic way to resolve conflicts, not ad-hoc heuristics.
14.1 Conflict types
Hard constraint conflicts
One principal’s hard constraint makes another principal’s preferred policy impossible.
scenario:
  learner:123:
    wellbeing_min: 0.7            # hard constraint
  school:xyz:
    test_preparation_hours: 5     # desired study hours per day
conflict: "5 hours of study violates the wellbeing threshold"
Soft goal trade-offs
All constraints can be satisfied, but optimizing for one principal’s soft goals hurts another’s.
scenario:
  learner:123:
    mastery_weight: 0.8
    engagement_weight: 0.7
  platform:
    retention_weight: 0.9
    monetization_weight: 0.6
conflict: "Aggressively maximizing retention reduces mastery focus"
14.2 Resolution strategies
Strategy 1 — Principal hierarchy
Designate a primary principal whose hard constraints cannot be violated.
class HierarchicalResolver:
def resolve(self, principals, goal_surfaces, candidates):
"""Primary principal hard constraints dominate."""
primary = principals[0]
secondary = principals[1:]
primary_constraints = goal_surfaces[primary].hard_constraints
feasible_space = self._compute_feasible_space(primary_constraints, candidates)
return self._optimize_multi_objective(
secondary_goals=[goal_surfaces[p] for p in secondary],
feasible_candidates=feasible_space,
)
Typical pattern: in learning, child/learner is primary; in health, patient is primary; in city operations, public safety often has top priority.
Strategy 2 — Pareto frontier
Find actions that are Pareto-optimal across all principals, then choose among them.
def pareto_optimal_resolution(principals, goal_surfaces, candidates):
"""Return a Pareto-optimal action, with primary principal as tie-breaker."""
evaluations = []
for action in candidates:
scores = {p: goal_surfaces[p].evaluate(action) for p in principals}
evaluations.append((action, scores))
pareto_set = []
for i, (cand_i, scores_i) in enumerate(evaluations):
dominated = False
for j, (cand_j, scores_j) in enumerate(evaluations):
if i == j:
continue
if dominates(scores_j, scores_i, principals):
dominated = True
break
if not dominated:
pareto_set.append((cand_i, scores_i))
return select_from_pareto(
pareto_set,
primary_principal=principals[0],
)
This is useful when no strict hierarchy is agreed, but we still want to avoid “obviously bad” trade-offs.
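The Pareto sketch leaves `dominates()` undefined; one common definition (assumed here) is that `j` dominates `i` when `j` is at least as good for every principal and strictly better for at least one:

```python
def dominates(scores_j: dict, scores_i: dict, principals: list[str]) -> bool:
    """Pareto dominance: >= for every principal, > for at least one."""
    at_least_as_good = all(scores_j[p] >= scores_i[p] for p in principals)
    strictly_better = any(scores_j[p] > scores_i[p] for p in principals)
    return at_least_as_good and strictly_better
```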
Strategy 3 — Weighted aggregation with veto
Combine scores with weights, but allow each principal a veto via hard constraints:
class WeightedAggregationResolver:
def resolve(self, principals, goal_surfaces, weights, candidates):
"""Weighted sum with per-principal veto via hard constraints."""
def aggregate_score(action):
scores = {}
for p in principals:
if goal_surfaces[p].violates_hard_constraints(action):
return float("-inf") # veto
scores[p] = goal_surfaces[p].evaluate(action)
return sum(weights[p] * scores[p] for p in principals)
return max(candidates, key=aggregate_score)
14.3 Mediation and transparency
For high-stakes cases, the system should surface how it resolved conflicts:
mediation_process:
stage_1_detect:
- "Identify conflicting goals"
- "Quantify conflict severity"
stage_2_negotiate:
- "Attempt soft-goal compromise"
- "Check alternate candidates on Pareto frontier"
stage_3_escalate:
condition: "No acceptable compromise found"
actions:
- "Escalate to human mediator"
- "Apply default principal hierarchy"
- "Defer or downgrade decision"
And the Jump result itself should carry a multi-principal explanation:
class MultiPrincipalJumpResult:
def __init__(self, action, scores_by_principal, conflicts, resolution_method):
self.action = action
self.scores_by_principal = scores_by_principal
self.conflicts_detected = conflicts
self.resolution_method = resolution_method
def explain(self) -> str:
return f"""
Action: {self.action}
Scores by principal:
{self._format_scores(self.scores_by_principal)}
Conflicts: {self.conflicts_detected}
Resolution method: {self.resolution_method}
Trade-offs: {self._explain_tradeoffs()}
"""
This is the structured place where ETH/EVAL can later ask: “did we consistently sacrifice the same principal’s interests?”
15. Role capability enforcement and validation
Challenge: A role must actually constrain what an agent can do. It’s not enough to write “teacher_delegate” in a log; we need runtime enforcement.
15.1 Capability model
Capabilities describe what actions a role can perform, on which resources, and with which limits:
capability:
id: "cap:select_exercise"
type: "action"
resource: "learning.exercises"
operations: ["read", "propose"]
constraints:
- "Only exercises within learner's level range"
- "Must respect accommodations"
prohibited:
- "Cannot modify exercise content"
- "Cannot change learner_profile"
Roles bind capabilities:
role: "role:teacher_delegate/reading_support"
capabilities:
- "cap:select_exercise"
- "cap:view_progress"
- "cap:log_events"
15.2 Enforcement points in the Jump runtime
Before running a Jump, we check capabilities against the requested observations and potential actions:
class CapabilityEnforcer:
    def check_jump_capabilities(self, jump_req, role_id: str) -> None:
        """Verify that the requested Jump stays within role capabilities."""
        capabilities = self.role_catalog.get_capabilities(role_id)
        # Observation access
        if not self._check_obs_access(jump_req.obs_bundle, capabilities):
            raise InsufficientCapabilities("Cannot access required observations")
        # Pre-declared actions (for plan-type Jumps)
        for action in getattr(jump_req, "proposed_actions", []):
            if not self._check_action_capability(action, capabilities):
                raise InsufficientCapabilities(
                    f"Action {action} not permitted for role {role_id}"
                )
        # Pre-declared RML calls
        for rml_call in getattr(jump_req, "rml_calls", []):
            if not self._check_rml_capability(rml_call, capabilities):
                raise InsufficientCapabilities(
                    f"RML effect {rml_call} not permitted for role {role_id}"
                )
The runtime wraps execution:
```python
class JumpRuntimeWithCapabilities:
    def run_jump(self, req: JumpRequest):
        # Pre-execution guard
        self.capability_enforcer.check_jump_capabilities(req, req.role_persona.role_id)
        draft = self.engine.propose(req)
        # Post-execution guard: ensure the draft didn't "escape" capabilities
        self._verify_draft_capabilities(draft, req.role_persona.role_id)
        # Filter RML effects again just before execution
        filtered_rml = filter_rml_by_capabilities(
            draft.rml_calls,
            self.role_catalog.get_capabilities(req.role_persona.role_id),
        )
        return self._execute_rml(filtered_rml, draft, req)
```
15.3 Capability-based RML filtering
RML execution is a particularly important enforcement point:
```python
def filter_rml_by_capabilities(rml_calls, capabilities):
    """Filter or modify RML calls to respect role capabilities."""
    filtered = []
    for rml_call in rml_calls:
        if rml_call.effect_type in capabilities.prohibited_effects:
            log_violation(rml_call, capabilities)
            continue  # hard block
        if rml_call.effect_type in capabilities.restricted_effects:
            modified = apply_capability_restrictions(rml_call, capabilities)
            filtered.append(modified)
        else:
            filtered.append(rml_call)
    return filtered
```
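`apply_capability_restrictions` is left abstract above. One plausible shape, assuming effect parameters are a plain dict and restrictions map each effect type to per-parameter caps (the `RmlCall` and `Capabilities` dataclasses here are hypothetical stand-ins):

```python
from dataclasses import dataclass, field, replace

@dataclass(frozen=True)
class RmlCall:
    effect_type: str
    params: dict

@dataclass
class Capabilities:
    prohibited_effects: set = field(default_factory=set)
    # Assumed shape: {effect_type: {param_name: max_value}}
    restricted_effects: dict = field(default_factory=dict)

def apply_capability_restrictions(rml_call: RmlCall, capabilities: Capabilities) -> RmlCall:
    """Clamp numeric effect parameters to role-defined caps.

    Parameters without a cap (or non-numeric values) pass through unchanged.
    """
    caps = capabilities.restricted_effects.get(rml_call.effect_type, {})
    clamped = {
        k: min(v, caps[k]) if k in caps and isinstance(v, (int, float)) else v
        for k, v in rml_call.params.items()
    }
    return replace(rml_call, params=clamped)

caps = Capabilities(restricted_effects={"notify": {"max_recipients": 10}})
call = RmlCall("notify", {"max_recipients": 50, "channel": "email"})
restricted = apply_capability_restrictions(call, caps)
assert restricted.params == {"max_recipients": 10, "channel": "email"}
```

Clamping (rather than rejecting) is a policy choice; a stricter deployment could instead raise and route the call through the violation log in 15.4.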
15.4 Auditing capability violations
All attempted overreach should be logged structurally:
```yaml
capability_violation_log:
  violation_id: "CAP-VIO-2028-123"
  time: "2028-04-15T10:15:00Z"
  agent_id: "si:learning_companion:v2"
  role_id: "role:teacher_delegate/reading_support"
  attempted_action: "modify_learner_profile"
  required_capability: "cap:modify_profile"
  actual_capabilities:
    - "cap:select_exercise"
    - "cap:log_events"
  result: "blocked"
  escalation: "notified_delegator"
```
This gives [ETH]/[MEM]/[ID] a clear view of “role drift” attempts.
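A minimal sketch of assembling such a record, assuming the illustrative id scheme and fixed escalation policy shown above (real deployments would pin both in the role contract):

```python
import json
from datetime import datetime, timezone

def build_capability_violation_record(agent_id, role_id, attempted_action,
                                      required_capability, actual_capabilities,
                                      seq: int) -> dict:
    """Assemble a violation record shaped like the log entry above."""
    now = datetime.now(timezone.utc)
    return {
        "violation_id": f"CAP-VIO-{now.year}-{seq:03d}",
        "time": now.strftime("%Y-%m-%dT%H:%M:%SZ"),
        "agent_id": agent_id,
        "role_id": role_id,
        "attempted_action": attempted_action,
        "required_capability": required_capability,
        "actual_capabilities": sorted(actual_capabilities),
        "result": "blocked",
        "escalation": "notified_delegator",
    }

record = build_capability_violation_record(
    "si:learning_companion:v2",
    "role:teacher_delegate/reading_support",
    "modify_learner_profile",
    "cap:modify_profile",
    ["cap:select_exercise", "cap:log_events"],
    seq=123,
)
print(json.dumps(record, indent=2))
```

Sorting `actual_capabilities` keeps the record deterministic, which matters if these logs are ever hashed under the canonicalization conventions in section 0.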
16. Persona projection algorithms
Challenge: Personas are not just labels; they define how metrics and explanations should be projected for different audiences.
16.1 Metric projection
Given a rich internal metric set, we project a persona-appropriate subset:
```python
class MetricProjector:
    def project_metrics(self, raw_metrics: dict, persona) -> dict:
        """Project raw metrics into a persona-appropriate view."""
        rules = self.persona_catalog[persona.id].projection_rules
        projected = {}
        for name, value in raw_metrics.items():
            if name in rules.hide_metrics:
                continue
            if name in rules.show_metrics:
                if name in rules.transformations:
                    transform = rules.transformations[name]
                    projected[name] = transform(value)
                else:
                    projected[name] = value
            # Metrics in neither list are dropped: projection fails closed.
        return projected
```
Example persona configuration:
```yaml
persona:learner_view:
  show_metrics: ["mastery_progress", "stress_load", "engagement"]
  hide_metrics: ["platform_engagement_score", "internal_risk_score"]
  transformations:
    mastery_progress: "normalize_0_1_to_percent"
    stress_load: "map_to_3_bucket_label"

persona:teacher_view:
  show_metrics: ["mastery_mu", "curriculum_coverage", "risk_flags"]
  hide_metrics: ["platform_engagement_score"]
```
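The `learner_view` rules above can be exercised with a small functional sketch. The transformation names are resolved to plain Python functions here, and the bucket thresholds are illustrative assumptions:

```python
def normalize_0_1_to_percent(x: float) -> int:
    return round(x * 100)

def map_to_3_bucket_label(x: float) -> str:
    # Assumed thresholds; a real persona catalog would pin these.
    return "low" if x < 0.33 else "medium" if x < 0.67 else "high"

LEARNER_VIEW = {
    "show_metrics": {"mastery_progress", "stress_load", "engagement"},
    "hide_metrics": {"platform_engagement_score", "internal_risk_score"},
    "transformations": {
        "mastery_progress": normalize_0_1_to_percent,
        "stress_load": map_to_3_bucket_label,
    },
}

def project_metrics(raw: dict, rules: dict) -> dict:
    """Fail closed: only explicitly shown metrics survive projection."""
    out = {}
    for name, value in raw.items():
        if name in rules["hide_metrics"] or name not in rules["show_metrics"]:
            continue
        transform = rules["transformations"].get(name)
        out[name] = transform(value) if transform else value
    return out

raw = {"mastery_progress": 0.72, "stress_load": 0.4,
       "engagement": 0.9, "internal_risk_score": 0.13}
assert project_metrics(raw, LEARNER_VIEW) == {
    "mastery_progress": 72, "stress_load": "medium", "engagement": 0.9}
```

Note that internal computation uses floats, while anything exported or hashed would follow the basis-point conventions from section 0.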
16.2 Explanation adaptation
Explanations also need persona-specific adaptation:
```python
class ExplanationAdapter:
    def adapt_explanation(self, raw_explanation: str, persona):
        """Adapt explanation language/detail level for persona."""
        style = self.persona_catalog[persona.id].explanation_style
        if style == "simple":
            return self._simplify_explanation(raw_explanation)
        elif style == "technical":
            return self._technical_explanation(raw_explanation)
        elif style == "regulatory":
            return self._regulatory_explanation(raw_explanation)
        else:
            return raw_explanation

    def _simplify_explanation(self, explanation: str) -> str:
        # Remove jargon, shorten sentences, focus on outcomes.
        ...

    def _technical_explanation(self, explanation: str) -> str:
        # Keep structure, add model/metric names, keep more detail.
        ...

    def _regulatory_explanation(self, explanation: str) -> str:
        # Highlight policy constraints, trade-offs, references to ETH rules.
        ...
```
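The `_simplify_explanation` stub could take many forms; one deliberately naive sketch, with a hypothetical hard-coded jargon table that a real deployment would source from the persona catalog:

```python
# Hypothetical jargon table, for illustration only.
JARGON = {
    "mastery_mu": "your learning progress",
    "cognitive load": "how challenging this feels",
    "IRT estimate": "our best guess from your answers",
}

def simplify_explanation(explanation: str) -> str:
    """One plausible 'simple' style: substitute jargon, cap the detail."""
    for term, plain in JARGON.items():
        explanation = explanation.replace(term, plain)
    # Keep only the first two sentences to limit detail.
    sentences = [s.strip() for s in explanation.split(".") if s.strip()]
    return ". ".join(sentences[:2]) + "."

out = simplify_explanation(
    "We raised the difficulty because mastery_mu increased. "
    "cognitive load stayed in the green band. Internal details follow."
)
assert "mastery_mu" not in out
assert out.endswith("band.")
```

String substitution is obviously too crude for production explanations; the point is only that "style" is a deterministic, testable function, not free-form rewriting.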
16.3 Goal and metric naming by persona
We can also persona-map names of goals and metrics:
```yaml
goal_rendering_by_persona:
  learner_view:
    mastery: "Your learning progress"
    stress_load: "How challenging this feels"
    engagement: "How fun and interesting"
  teacher_view:
    mastery: "Mastery level (IRT estimate)"
    stress_load: "Cognitive load indicator"
    curriculum_coverage: "Standards alignment"
  ops_view:
    mastery: "mastery_mu (BKT posterior)"
    latency_p95: "Response time p95"
    resource_utilization: "Compute cost per session"
```
This keeps the underlying goal structures shared, while giving each audience a tailored “surface”.
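A tiny renderer over such a mapping makes the "shared structure, tailored surface" property explicit; the dict literal below is an illustrative subset of the config above:

```python
GOAL_RENDERING_BY_PERSONA = {
    "learner_view": {
        "mastery": "Your learning progress",
        "stress_load": "How challenging this feels",
        "engagement": "How fun and interesting",
    },
    "ops_view": {
        "mastery": "mastery_mu (BKT posterior)",
        "latency_p95": "Response time p95",
    },
}

def render_goal_name(goal_id: str, persona_id: str) -> str:
    """Fall back to the shared internal id when no persona label exists."""
    return GOAL_RENDERING_BY_PERSONA.get(persona_id, {}).get(goal_id, goal_id)

assert render_goal_name("mastery", "learner_view") == "Your learning progress"
assert render_goal_name("mastery", "ops_view") == "mastery_mu (BKT posterior)"
# Unmapped goals surface under their internal id rather than disappearing.
assert render_goal_name("resource_utilization", "ops_view") == "resource_utilization"
```

Falling back to the internal id (rather than hiding unmapped goals) keeps the rendering layer purely cosmetic: visibility is governed by the projection rules in 16.1, not by the label table.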
17. Testing strategies for Role & Persona overlays
Challenge: Role & Persona logic is subtle and cross-cutting. Bugs here become governance failures, not just off-by-ones.
17.1 Testing pyramid
A non-normative testing pyramid for Role & Persona:
Unit tests
- Delegation chain verification (`verify_chain`).
- Capability enforcement and RML filtering.
- Goal surface projection for roles/personas.
- Metric and explanation projection.
Integration tests
- Multi-agent Jump coordination (principal shared, roles differ).
- Multi-principal conflict resolution (hierarchy / Pareto / veto).
- Persona-specific rendering end-to-end (same Jump, different personas).
Property-based tests
```python
from hypothesis import given, strategies as st

@given(delegation_chain=st.lists(st.text(), min_size=2, max_size=5))
def test_delegation_chain_transitive(delegation_chain):
    """If A→B→C→...→Z, then there is a transitive path from A to Z."""
    for i in range(len(delegation_chain) - 1):
        setup_delegation(delegation_chain[i], delegation_chain[i + 1])
    assert can_delegate_transitively(
        delegation_chain[0], delegation_chain[-1]
    )
```
Contract tests
```yaml
role_contract_test:
  role: "role:teacher_delegate/reading_support"
  must_allow:
    - "select_exercise for assigned learners"
    - "view_progress for assigned learners"
  must_forbid:
    - "modify_learner_profile"
    - "change_guardian_settings"
    - "access_financial_data"
```
Simulation tests
```python
def test_multi_principal_conflict_resolution():
    """Simulate multi-principal scenarios and verify invariants."""
    scenarios = generate_conflict_scenarios(n=100)
    for scenario in scenarios:
        result = resolve_multi_principal_conflict(scenario)
        # Primary principal's hard constraints must always hold.
        assert satisfies_hard_constraints(
            result, scenario.primary_principal
        )
        # Result should not be clearly dominated for any principal.
        assert is_pareto_optimal(
            result, scenario.all_principals
        )
```
The goal is to treat Role & Persona overlays as first-class infrastructure: they get their own contract tests, property tests, and simulation suites, not just ad-hoc unit tests.