The Explainability Gap
In the preceding paper on negative correlation detection, we demonstrated that eigendecomposition of the inter-Universe correlation matrix reveals conflict clusters with mathematical precision. Negative eigenvalues identify conflict dimensions. Eigenvector loadings identify participating Universes. Conflict scores quantify severity.
This mathematical apparatus is necessary but not sufficient for governance. A CTO reviewing quarterly conflict reports does not think in eigenvalues. A department head resolving a Sales-Compliance tension does not reason about eigenvector loadings. The mathematical model must be translated into artifacts that match how humans actually make governance decisions: by reading structured documents, comparing options, and selecting resolution paths.
We call these artifacts Conflict Cards. Each card represents a single detected conflict, provides context for why the conflict exists, quantifies its impact on the enterprise, and proposes ranked resolution options. The generation algorithm transforms raw matrix outputs into complete, actionable cards without human authoring.
Conflict Card Structure
A Conflict Card is a structured document with seven mandatory sections. The structure is invariant across all conflict types.
ConflictCard {
  id: string                    // Unique card identifier
  detectedAt: ISO8601           // Timestamp of detection
  conflictType: 'bilateral' | 'cluster' | 'systemic'
  severity: number              // [0, 1] composite severity score
  status: 'active' | 'acknowledged' | 'resolving' | 'resolved' | 'escalated'
  parties: {
    universe: MARIACoordinate   // Universe identifier
    direction: 'positive' | 'negative'  // Eigenvector sign
    loading: number             // |v'_i[u]| - participation strength
    impactedKPIs: string[]      // KPIs most affected by conflict
  }[]
  evidence: {
    eigenvalue: number          // lambda' for this conflict dimension
    correlationPairs: {         // Pairwise correlations between parties
      pair: [string, string]
      correlation: number
      trend: 'worsening' | 'stable' | 'improving'
    }[]
    windowStart: ISO8601
    windowEnd: ISO8601
    sampleSize: number          // Number of KPI observations
  }
  impactAssessment: {
    estimatedCost: number       // $ per quarter
    affectedDecisions: number   // Decisions per month in conflict zone
    riskExposure: 'low' | 'medium' | 'high' | 'critical'
    cascadeRisk: string         // Description of downstream effects
  }
  resolutionPaths: {
    id: string
    strategy: string            // Short name
    description: string         // Detailed explanation
    estimatedEffort: string     // Time/cost to implement
    expectedOutcome: string     // Predicted impact
    tradeoffs: string           // What is sacrificed
    score: number               // [0, 1] recommendation score
  }[]
  assignedTo: string | null     // MARIA coordinate of responsible human
  resolvedAt: ISO8601 | null
  resolutionNote: string | null
}

The Conflict Scoring Function
The severity score on each card is a composite of four factors: eigenvalue magnitude, correlation strength, temporal trend, and financial exposure.
Definition 1 (Conflict Severity Score):
severity(c) = w_e * S_eigen(c) + w_c * S_corr(c)
+ w_t * S_trend(c) + w_f * S_financial(c)
where:
S_eigen(c) = min(1, |lambda'_c| / lambda_max)
Normalized eigenvalue magnitude.
S_corr(c) = max over pairs (i,j) in c of |C[i,j]|
Strongest pairwise correlation in the cluster.
S_trend(c) = (|lambda'_c(t)| - |lambda'_c(t - W)|) / |lambda'_c(t)|
Rate of change over window W. Positive = worsening.
Clamped to [0, 1].
S_financial(c) = min(1, estimated_cost(c) / cost_threshold)
Normalized financial impact.
Default weights: w_e = 0.30, w_c = 0.25, w_t = 0.25, w_f = 0.20

The weights reflect the principle that mathematical signal (eigenvalue + correlation, combined weight 0.55) and practical impact (trend + financial, combined weight 0.45) deserve roughly equal influence. Pure mathematical severity without practical impact caps the score at 0.55; high financial impact without mathematical backing caps it at 0.45. Both signals must align for a high severity rating.
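Definition 1 translates directly into code. The sketch below assumes the four factor scores have already been computed and normalized to [0, 1] as specified above; the names (SeverityFactors, computeSeverity) are illustrative, while the weights are the defaults from the definition.

```typescript
// Illustrative shape: the four factors of Definition 1, pre-normalized to [0, 1].
interface SeverityFactors {
  eigen: number;     // S_eigen: normalized eigenvalue magnitude
  corr: number;      // S_corr: strongest pairwise |correlation| in the cluster
  trend: number;     // S_trend: rate of change, clamped to [0, 1]
  financial: number; // S_financial: normalized financial impact
}

// Default weights from Definition 1.
const WEIGHTS = { eigen: 0.30, corr: 0.25, trend: 0.25, financial: 0.20 };

function computeSeverity(f: SeverityFactors): number {
  const s =
    WEIGHTS.eigen * f.eigen +
    WEIGHTS.corr * f.corr +
    WEIGHTS.trend * f.trend +
    WEIGHTS.financial * f.financial;
  return Math.min(1, Math.max(0, s)); // defensive clamp to [0, 1]
}

// A purely mathematical conflict (no trend, no cost) caps at 0.55:
const mathOnly = computeSeverity({ eigen: 1, corr: 1, trend: 0, financial: 0 });
```

The cap illustrates the alignment principle: maxing out the mathematical factors alone cannot push a card into the highest severity band.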
The Generation Algorithm
The complete algorithm takes eigendecomposition output and enterprise metadata as input and produces a ranked list of Conflict Cards.
Algorithm: GenerateConflictCards
Input:
C_conflict: R^{U x U} -- conflict-amplified correlation matrix
eigenvalues: lambda'_1..U -- eigenvalues of C_conflict
eigenvectors: V' = [v'_1..U] -- eigenvectors
metadata: UniverseMetadata[] -- KPI names, costs, coordinates
history: EigenvalueHistory -- past eigenvalue trajectories
tau: number -- conflict threshold (default 0.1)
Output: ConflictCard[]
1. IDENTIFY conflict dimensions:
D_neg = { i : lambda'_i < -tau }
If D_neg is empty, return [] // No conflicts
2. For each dimension i in D_neg:
a. CLASSIFY conflict type:
- Count Universes with |v'_i[u]| > 0.15 (participation threshold)
- If 2 Universes: type = 'bilateral'
- If 3-4 Universes: type = 'cluster'
- If 5+ Universes: type = 'systemic'
b. EXTRACT parties:
For each Universe u with |v'_i[u]| > 0.15:
direction = sign(v'_i[u]) > 0 ? 'positive' : 'negative'
loading = |v'_i[u]|
impactedKPIs = top-3 KPIs by correlation with conflict dimension
c. BUILD evidence bundle:
eigenvalue = lambda'_i
correlationPairs = all (p, n) pairs from positive/negative groups
trend = compare |lambda'_i(t)| vs |lambda'_i(t - W)|
d. ASSESS impact:
estimatedCost = sum over parties of (loading * universe_budget * |lambda'_i|)
affectedDecisions = count decisions in conflict Universes (last 30 days)
riskExposure = map severity to {low, medium, high, critical}
e. GENERATE resolution paths (see next section)
f. COMPUTE severity score using Definition 1
g. ASSEMBLE ConflictCard
3. SORT cards by severity descending
4. MERGE overlapping cards (same Universe pair in multiple dimensions)
5. Return top-K cards (default K = 10)
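Steps 2a–2b can be sketched as follows. The participation threshold (0.15) and the type boundaries come from the algorithm above; the type and function names are illustrative.

```typescript
type ConflictType = 'bilateral' | 'cluster' | 'systemic';

interface Party {
  universe: number;                   // index of the Universe in the eigenvector
  direction: 'positive' | 'negative'; // sign of v'_i[u]
  loading: number;                    // |v'_i[u]|
}

const PARTICIPATION_THRESHOLD = 0.15; // from step 2a

// Step 2b: extract the Universes participating in conflict dimension i.
function extractParties(eigenvector: number[]): Party[] {
  const parties: Party[] = [];
  eigenvector.forEach((v, u) => {
    if (Math.abs(v) > PARTICIPATION_THRESHOLD) {
      parties.push({
        universe: u,
        direction: v > 0 ? 'positive' : 'negative',
        loading: Math.abs(v),
      });
    }
  });
  return parties;
}

// Step 2a: classify by participant count (a conflict dimension is assumed
// to have at least two participants above the threshold).
function classifyConflict(participantCount: number): ConflictType {
  if (participantCount <= 2) return 'bilateral';
  if (participantCount <= 4) return 'cluster';
  return 'systemic';
}
```

For an eigenvector like [0.62, -0.77, 0.10] (loadings approximating the worked example later in this paper, with a hypothetical third Universe below threshold), extraction yields two parties and the conflict classifies as bilateral.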
Complexity: O(U^3) eigendecomposition + O(U^2 * |D_neg|) extraction

Resolution Path Generation
Each Conflict Card includes ranked resolution paths. These are generated from a resolution strategy library combined with conflict-specific parameters.
Resolution Strategy Library:
Strategy 1: CONSTRAINT ALIGNMENT
Applicable when: bilateral conflict, moderate severity
Action: Introduce shared constraints that force both Universes
to optimize within compatible boundaries.
Example: Sales and Compliance share a 'risk-adjusted revenue' KPI
that credits revenue only when compliance score > 0.8.
Effort: 2-4 weeks to define shared KPI + update agent objectives.
Strategy 2: HIERARCHICAL ARBITRATION
Applicable when: cluster conflict, high severity
Action: Escalate to Galaxy-level governance for priority ruling.
One Universe's objectives take precedence in defined scenarios.
Example: During audit periods, Compliance objectives override Sales.
Effort: 1-2 weeks governance review + policy update.
Strategy 3: STRUCTURAL SEPARATION
Applicable when: systemic conflict, critical severity
Action: Reorganize Universe boundaries to separate conflicting
functions into distinct operational units with explicit interfaces.
Example: Split 'Revenue Operations' into 'New Business' and
'Account Management' with separate risk profiles.
Effort: 4-8 weeks organizational restructuring.
Strategy 4: TEMPORAL PARTITIONING
Applicable when: cyclical conflicts tied to business rhythms
Action: Define time windows where different objectives take priority.
Example: Q4 Sales push reduces compliance gate strength (within bounds).
Q1 Compliance hardening restores gates post-audit.
Effort: 1-2 weeks to define temporal policy rules.
Strategy 5: OBJECTIVE REWEIGHTING
Applicable when: mild conflict resolvable by adjusting KPI weights
Action: Modify Universe composite score weights to reduce the
conflicting component without eliminating it.
Example: Reduce Sales weight on 'deals closed' from 0.4 to 0.25,
increase weight on 'deal quality score' from 0.1 to 0.25.
Effort: 1 week to recalibrate + 2 weeks monitoring.

Each strategy receives a recommendation score based on conflict type match, severity appropriateness, and estimated ROI.
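The strategy library can be represented as data keyed on applicability, so that scoring becomes a lookup rather than hand-written rules. In the sketch below, the applicable conflict types and the match() behavior follow the text, but the numeric severity ranges are assumptions: the document specifies applicability only qualitatively ("moderate", "high", "critical").

```typescript
type ConflictType = 'bilateral' | 'cluster' | 'systemic';

interface ResolutionStrategy {
  name: string;
  appliesTo: ConflictType[];       // conflict types this strategy targets
  severityRange: [number, number]; // effective band (illustrative values)
}

// Severity bands below are illustrative, not specified by the library.
const STRATEGY_LIBRARY: ResolutionStrategy[] = [
  { name: 'CONSTRAINT ALIGNMENT',     appliesTo: ['bilateral'],            severityRange: [0.4, 0.7] },
  { name: 'HIERARCHICAL ARBITRATION', appliesTo: ['cluster'],              severityRange: [0.6, 0.9] },
  { name: 'STRUCTURAL SEPARATION',    appliesTo: ['systemic'],             severityRange: [0.8, 1.0] },
  { name: 'TEMPORAL PARTITIONING',    appliesTo: ['bilateral', 'cluster'], severityRange: [0.3, 0.7] },
  { name: 'OBJECTIVE REWEIGHTING',    appliesTo: ['bilateral'],            severityRange: [0.1, 0.5] },
];

// match() term of the scoring function: 1.0 on a designed-for type, 0.3 otherwise.
function matchScore(s: ResolutionStrategy, t: ConflictType): number {
  return s.appliesTo.includes(t) ? 1.0 : 0.3;
}
```

The 0.3 floor means an off-type strategy is penalized but not excluded, which is how STRUCTURAL SEPARATION can still surface on a severe bilateral conflict.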
Resolution Scoring Function:
score(strategy, card) = w_m * match(strategy.type, card.conflictType)
+ w_s * severity_fit(strategy.range, card.severity)
+ w_r * roi(card.impactAssessment.estimatedCost,
strategy.estimatedEffort)
where:
match(): 1.0 if strategy is designed for this conflict type, 0.3 otherwise
severity_fit(): 1.0 if card severity falls within strategy's effective range,
decays linearly outside range
roi(): estimatedCost / estimatedEffort, normalized to [0, 1]
Weights: w_m = 0.40, w_s = 0.30, w_r = 0.30

Worked Example: Sales vs. Compliance Conflict Card
We generate a complete Conflict Card from the three-Universe example presented in the preceding paper.
Generated Conflict Card:
---------------------------------------------------------
ID: CC-G1-2026-0118-001
Detected: 2026-01-18T09:14:22Z
Type: bilateral
Severity: 0.74 (HIGH)
Status: active
PARTIES:
[+] U1: Sales Operations (G1.U1)
Loading: 0.617 | Direction: positive
Impacted KPIs: revenue_growth, deal_count, risk_acceptance_rate
[-] U2: Risk & Compliance (G1.U2)
Loading: 0.768 | Direction: negative
Impacted KPIs: audit_pass_rate, violation_count, exposure_score
EVIDENCE:
Eigenvalue: -0.540
Correlation: C[U1,U2] = -0.720
Trend: WORSENING (+12% over 30 days)
Window: 2025-12-19 to 2026-01-18 (30 days, 30 observations)
IMPACT ASSESSMENT:
Estimated Cost: $285,000 / quarter
Affected Decisions: 142 / month
Risk Exposure: HIGH
Cascade Risk: Sales agents accepting clients that trigger
compliance reviews, creating backlog that delays
all Universe U2 decisions by avg 3.2 days.
RESOLUTION PATHS (ranked):
1. CONSTRAINT ALIGNMENT (score: 0.82)
Introduce risk-adjusted revenue KPI shared between U1 and U2.
Effort: 3 weeks | Expected: reduce correlation from -0.72 to -0.30
Tradeoff: Sales velocity may decrease 8-12% short-term.
2. TEMPORAL PARTITIONING (score: 0.67)
Define quarterly rhythm: audit-prep weeks use stricter gates.
Effort: 1 week | Expected: reduce conflict during peak periods by 40%
Tradeoff: Sales must plan around compliance calendar.
3. OBJECTIVE REWEIGHTING (score: 0.61)
Reduce Sales weight on deal_count, increase deal_quality weight.
Effort: 1 week | Expected: reduce correlation from -0.72 to -0.45
Tradeoff: Deal volume targets must be revised downward.
---------------------------------------------------------

Card Lifecycle Management
Conflict Cards are not static reports. They are governance artifacts with their own lifecycle, tracked in the MARIA decision pipeline.
Conflict Card Lifecycle:
active -> acknowledged -> resolving -> resolved
-> escalated -> resolved
active: Card generated, no human has reviewed.
Auto-escalates if unacknowledged for 7 days.
acknowledged: Assigned human has reviewed and accepted ownership.
Must select a resolution path within 14 days.
resolving: Resolution path is being implemented.
Progress tracked via linked decisions in pipeline.
Auto-escalates if no progress for 21 days.
resolved: Post-resolution eigenvalue confirms conflict reduction.
Requires |lambda'| improvement of at least 30%.
Card archived with resolution effectiveness score.
escalated: Timeout triggered. Card escalated one level in
MARIA hierarchy (Zone -> Planet -> Universe -> Galaxy).

The lifecycle integrates with the MARIA decision pipeline through linked decisions. When a resolution path is selected, it generates one or more decisions in the pipeline (e.g., a KPI redefinition decision, a policy change decision). The Conflict Card tracks these linked decisions and considers itself resolved only when all linked decisions reach the completed state and the eigenvalue analysis confirms improvement.
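The lifecycle can be encoded as an explicit transition table with the timeout thresholds from the state descriptions above. One assumption to flag: the text states auto-escalation explicitly for active and resolving; treating the 14-day path-selection deadline on acknowledged as an escalation trigger is an inference.

```typescript
type CardStatus = 'active' | 'acknowledged' | 'resolving' | 'resolved' | 'escalated';

// Legal transitions per the lifecycle diagram.
const TRANSITIONS: Record<CardStatus, CardStatus[]> = {
  active:       ['acknowledged', 'escalated'],
  acknowledged: ['resolving', 'escalated'],
  resolving:    ['resolved', 'escalated'],
  escalated:    ['resolved'],
  resolved:     [],
};

// Timeout thresholds (days) from the state descriptions. The 'acknowledged'
// entry assumes the 14-day deadline escalates on expiry.
const TIMEOUT_DAYS: Partial<Record<CardStatus, number>> = {
  active: 7,        // unacknowledged for 7 days
  acknowledged: 14, // no resolution path selected within 14 days
  resolving: 21,    // no progress for 21 days
};

function canTransition(from: CardStatus, to: CardStatus): boolean {
  return TRANSITIONS[from].includes(to);
}
```

Encoding the table this way lets the pipeline reject illegal moves (e.g., reactivating a resolved card) with a single check.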
Batch Processing and Deduplication
In enterprises with many Universes, the eigendecomposition may produce overlapping conflict signals. The same Universe pair may appear in multiple conflict dimensions. The generation algorithm includes a deduplication step.
Algorithm: DeduplicateConflictCards
Input: cards: ConflictCard[] (sorted by severity)
Output: deduplicated: ConflictCard[]
1. Initialize: seen_pairs = Set()
2. For each card c in cards:
a. pair_key = sort(c.parties.map(p => p.universe)).join('-')
b. If pair_key in seen_pairs:
- Merge: add this card's evidence to existing card
- Update severity: max(existing.severity, c.severity)
- Skip adding new card
c. Else:
- Add pair_key to seen_pairs
- Add c to deduplicated
3. Re-sort deduplicated by severity
4. Return deduplicated
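The steps above translate directly; in this sketch the card is reduced to the fields deduplication touches, and the merge keeps the higher severity and concatenates evidence as the algorithm specifies.

```typescript
// Reduced card shape: only the fields deduplication touches.
interface CardLite {
  parties: string[];   // Universe identifiers
  severity: number;
  evidence: unknown[]; // evidence bundles, concatenated on merge
}

function deduplicate(cards: CardLite[]): CardLite[] {
  // Assumes input is pre-sorted by severity descending.
  const seen = new Map<string, CardLite>();
  for (const c of cards) {
    const key = [...c.parties].sort().join('-'); // order-independent pair key
    const existing = seen.get(key);
    if (existing) {
      existing.evidence = existing.evidence.concat(c.evidence);
      existing.severity = Math.max(existing.severity, c.severity);
    } else {
      seen.set(key, c);
    }
  }
  // Re-sort: merges can only raise severity, so order may have changed.
  return [...seen.values()].sort((a, b) => b.severity - a.severity);
}
```

Sorting the party list before joining makes the key order-independent, so (U1, U2) and (U2, U1) collapse into one card.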
Merge semantics: Evidence bundles are concatenated. The card retains
the conflict type of the higher-severity instance. Resolution paths
are unified and re-scored against the merged evidence.

Integration with the MARIA Dashboard
Conflict Cards surface in the MARIA OS dashboard through the Conflict Detection panel. The panel displays active cards sorted by severity, with visual indicators for trend direction and lifecycle status.
Each card is expandable, revealing the full evidence bundle, impact assessment, and resolution paths. The assigned human can acknowledge the card, select a resolution path, and track implementation progress directly from the dashboard. Resolved cards are archived but remain queryable for historical analysis and pattern detection.
The panel also displays the Conflict Severity Index (CSI) as a time-series chart, showing the enterprise's overall conflict trajectory. A rising CSI triggers a dashboard alert, drawing attention to the conflict landscape before individual cards are reviewed.
Effectiveness Metrics
We measure the effectiveness of Conflict Card generation across three dimensions: detection quality, resolution speed, and outcome impact.
Conflict Card Effectiveness (3 enterprises, Q4 2025):
Detection:
Cards generated: 47
True conflicts confirmed: 41 (87% precision)
Conflicts missed: 3 (93% recall)
F1 score: 0.90
Resolution:
Avg time to acknowledge: 2.1 days
Avg time to select path: 6.3 days
Avg time to resolve: 28 days
Resolution success rate: 78% (eigenvalue improved > 30%)
Outcome:
Avg cost reduction: $127K per resolved conflict per quarter
Decision latency reduction: 1.8 days avg in affected Universes
Cross-Universe escalations: -41% after conflict resolution

Conclusion: Making Mathematics Actionable
The gap between mathematical detection and human action is the gap between knowing and doing. Eigendecomposition tells us that a conflict exists. Conflict Cards tell us what it means, how much it costs, who should fix it, and how. The generation algorithm bridges this gap automatically, producing governance-ready artifacts from raw linear algebra output.
The key design principle is that every mathematical quantity maps to a human-interpretable concept. Eigenvalues map to severity. Eigenvector signs map to opposing groups. Loadings map to participation strength. Correlation trends map to urgency. Financial exposure maps to priority. When the mathematics and the narrative align, governance teams can act with confidence that the recommended resolution paths are grounded in rigorous analysis, not intuition.