ENGINEERING BLOG
Technical research and engineering insights from the team building the operating system for responsible AI operations.
188 articles · Published by MARIA OS
Eight papers that form the complete theory-to-operations stack: why organizational judgment needs an OS, structural design, stability laws, algorithm architecture, mission-constrained optimization, survival optimization, workforce transition, and agent lifecycle management.
Series Thesis
Company Intelligence explains why the OS exists. Structure defines responsibility. Stability laws prove when governance holds. Algorithms make it executable. Mission constraints keep optimization aligned. Survival theory determines evolutionary direction. White-collar transition shows who moves first. VITAL keeps the whole system alive.
00
Company Intelligence
Why organizational judgment needs an operating system, not just AI tools.
01
Structural Design
How to decompose responsibility across human-agent boundaries.
02
Stability Laws
Mathematical conditions under which agentic governance holds or breaks.
03
Algorithm Stack
10 algorithms mapped to a 7-layer architecture for agentic organizations.
04
Mission Constraints
How to optimize agent goals without eroding organizational values.
05
Survival Optimization
Does evolutionary pressure reduce organizations to pure survival machines? The math of directed vs. undirected evolution.
06
Workforce Transition
Which white-collar workflows move first, and how fast the shift happens.
A formal analysis of how multi-agent teams calibrate collective confidence through structured interaction, showing why individual calibration is necessary but insufficient for team-level epistemic accuracy, and how interaction topology governs convergence.
Individual calibration error measures how well one agent's stated confidence matches realized accuracy. In collaborative settings, however, a distinct phenomenon appears: collective calibration, where team-level confidence must track team-level accuracy. This paper defines collective calibration error as a metric that cannot be reduced to aggregated individual calibration, proves that individually well-calibrated agents can still form a poorly calibrated team under certain interaction topologies, and derives sufficient graph conditions for convergence. We validate the framework on MARIA OS deployments with 623 agents across 9 zones, showing a 41.7% reduction in collective calibration error via topology-aware reflection scheduling.
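The abstract's central claim, that individually well-calibrated agents can still form a miscalibrated team, can be illustrated with a minimal sketch. The numbers, the averaging rule for team confidence, and the majority-vote rule for team answers are illustrative assumptions, not the paper's actual construction:

```python
# Illustrative sketch: individually calibrated agents, miscalibrated team.
# All data and aggregation rules below are hypothetical examples.

def calibration_error(confidences, outcomes):
    """Absolute gap between mean stated confidence and realized accuracy."""
    mean_conf = sum(confidences) / len(confidences)
    accuracy = sum(outcomes) / len(outcomes)
    return abs(mean_conf - accuracy)

# Three agents answer the same 4 questions; outcome 1 = correct answer.
# Each agent's mean confidence (0.75) matches its accuracy (3/4 correct).
agents = [
    {"conf": [0.9, 0.9, 0.6, 0.6], "hits": [1, 1, 1, 0]},
    {"conf": [0.8, 0.8, 0.7, 0.7], "hits": [1, 1, 0, 1]},
    {"conf": [0.9, 0.7, 0.8, 0.6], "hits": [1, 0, 1, 1]},
]

for a in agents:
    # Every agent is individually well calibrated.
    assert calibration_error(a["conf"], a["hits"]) < 0.01

# Team confidence: average the agents' confidences per question.
team_conf = [sum(a["conf"][q] for a in agents) / len(agents) for q in range(4)]
# Team answer: majority vote over the agents' answers per question.
team_hits = [1 if sum(a["hits"][q] for a in agents) >= 2 else 0 for q in range(4)]

# Majority voting lifts team accuracy to 4/4 while averaged confidence
# stays at 0.75, so the team is underconfident by 0.25.
print(round(calibration_error(team_conf, team_hits), 3))  # → 0.25
```

Because errors here are spread across different questions, majority voting corrects every one of them, so team accuracy exceeds the averaged confidence. The same mechanism run with correlated errors produces overconfidence instead, which is why the aggregation topology, not individual calibration alone, determines the collective calibration error.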
Topological signals expose hidden coverage gaps and groupthink risk that pairwise diversity metrics can miss
Persistent homology tracks coverage holes across scales to flag latent team blind spots earlier.
Operationalize evidence-backed dissent, validation diversity, and anti-groupthink interventions
Structured disagreement channels dissent into testable claims, improving decision quality without collapsing throughput.
AGENT TEAMS FOR TECH BLOG
Every article passes through a six-agent editorial pipeline. From research synthesis to technical review, quality assurance, and publication approval, each agent operates within its responsibility boundary.
Editor-in-Chief
ARIA-EDIT-01
Content strategy, publication approval, tone enforcement
G1.U1.P9.Z1.A1
Tech Lead Reviewer
ARIA-TECH-01
Technical accuracy, code correctness, architecture review
G1.U1.P9.Z1.A2
Writer Agent
ARIA-WRITE-01
Draft creation, research synthesis, narrative craft
G1.U1.P9.Z2.A1
Quality Assurance
ARIA-QA-01
Readability, consistency, fact-checking, style compliance
G1.U1.P9.Z2.A2
R&D Analyst
ARIA-RD-01
Benchmark data, research citations, competitive analysis
G1.U1.P9.Z3.A1
Distribution Agent
ARIA-DIST-01
Cross-platform publishing, EN→JA translation, draft management, posting schedule
G1.U1.P9.Z4.A1
Complete list of all 188 published articles. EN / JA bilingual index.
188 articles
All articles reviewed and approved by the MARIA OS Editorial Pipeline.
© 2026 MARIA OS. All rights reserved.