1. Introduction: The IP Trilemma of AI Governance
Every AI governance company confronts a strategic tension that has no analog in conventional enterprise software. The product is ethical infrastructure — systems that enforce fairness, ensure accountability, detect value drift, and manage responsibility allocation. The credibility of such a product depends, in part, on transparency: customers, regulators, and the research community must trust that the governance mechanisms are sound, not merely marketed. This creates pressure toward openness — publishing algorithms, releasing specifications, contributing to standards.
Simultaneously, the governance algorithms themselves represent the core technical moat. The specific method by which a fail-closed gate evaluates multi-dimensional risk scores, the exact computation that converts competing value vectors into a conflict metric, the reinforcement learning framework that constrains agent autonomy to responsibility boundaries — these are innovations that required years of research and engineering. Disclosing them in full enables competitors to replicate the platform without incurring the R&D cost.
And beneath the algorithms lie the parameters that make them work in production: the threshold values that determine when a gate closes, the weight matrices that calibrate risk across industries, the heuristics that optimize evaluation speed. These operational parameters are tuned through thousands of deployment hours and customer-specific calibration. They cannot be independently derived from the published algorithms alone.
This creates what we term the IP Trilemma:
- Fully Open strategy maximizes Trust but destroys Defensibility and Operational Advantage.
- Fully Patented strategy maximizes Defensibility but damages Trust (perceived as monopolistic ethics).
- Fully Secret strategy maximizes Operational Advantage but destroys both Trust and Defensibility (no public claims, no legal protection).
The resolution is not a compromise. It is a partition — a mathematically precise assignment of each innovation to exactly one of three layers, with formally defined boundary conditions that prevent information leakage between layers while maintaining coherence across the portfolio.
1.1 Why AI Governance IP Is Structurally Different
Traditional software IP strategy operates in a relatively straightforward space: algorithms are patented, trade secrets protect implementation details, and open source is used strategically for adoption. AI governance adds three unique complications.
First, the ethics credibility constraint. An AI governance platform that patents its ethics enforcement algorithms faces a perception challenge: 'You are monetizing ethics. You are making fairness proprietary.' This objection is philosophically naive — the ethical principles are not proprietary, only the specific computational methods for enforcing them — but it is strategically real. The IP strategy must explicitly separate principles (open) from implementations (protected) to maintain credibility.
Second, the regulatory entanglement. AI governance operates in a rapidly evolving regulatory landscape (EU AI Act, NIST AI RMF, Japan AI Safety Institute guidelines). Regulators may mandate specific governance capabilities. If those capabilities are covered by patents, the platform faces a choice between licensing broadly (diluting competitive advantage) and restricting access (inviting regulatory backlash or compulsory licensing). The IP strategy must anticipate regulatory mandates and position patents on implementation methods rather than mandated outcomes.
Third, the research dependency. AI governance is a research-intensive field where the state of the art advances through academic publication. A fully closed strategy cuts the organization off from the research community, making it harder to attract talent, receive peer feedback, and influence emerging standards. The IP strategy must maintain a robust publication pipeline while protecting proprietary advantages.
1.2 The Three-Layer Model Preview
We propose a Three-Layer IP Model that assigns every governance innovation to exactly one layer:
| Layer | Disclosure | Protection | Purpose |
| --- | --- | --- | --- |
| L1: Open Specification | Full public disclosure | None (intentional) | Trust, adoption, standardization |
| L2: Protected Algorithms | Patent disclosure (implementation) | Patent rights (20 years) | Defensibility, licensing revenue |
| L3: Trade Secrets | No disclosure | Confidentiality controls | Operational advantage |
The key insight is that these layers are not arbitrary — they correspond to different levels of abstraction in the governance architecture:
- L1 (Open) covers the what: what ethical constraints mean, what drift is, what conflict looks like, what an agentic company is.
- L2 (Patent) covers the how: how to evaluate constraints efficiently, how to detect drift in real-time, how to compute conflict across universes.
- L3 (Secret) covers the how well: the specific parameters, weights, and heuristics that determine production performance.
This abstraction-aligned partition is natural, defensible, and strategically optimal. We prove this optimality formally in Section 5.
1.3 Paper Structure
Section 2 formalizes the Information Disclosure Game and derives the Nash equilibrium. Section 3 defines each layer with formal boundary conditions. Section 4 details specific assets in each layer. Section 5 proves portfolio optimality of the three-layer partition. Section 6 presents the Patent Value Function and structural vs. defensive patent strategy. Section 7 designs the Research-to-Patent Pipeline. Section 8 introduces the brand-IP linkage strategy. Section 9 presents the 5-year IP roadmap. Section 10 analyzes risks and mitigations. Section 11 discusses the IP Review Node within the research Decision Graph.
2. Information Disclosure Game Theory
Before defining the three layers, we must formalize the strategic interaction that determines optimal disclosure. The firm does not operate in isolation — its IP decisions are observed and responded to by competitors, regulators, and the research community. We model this interaction as a sequential game.
2.1 Players and Strategies
Player F (Firm): Chooses a disclosure policy $\sigma_F: \mathcal{I} \rightarrow \{\text{open}, \text{patent}, \text{secret}\}$ that maps each innovation $i \in \mathcal{I}$ to a disclosure level.
Player A (Adversary): A composite player representing competitors ($A_C$), regulators ($A_R$), and the research community ($A_S$). Each sub-player has different objectives:
- $A_C$ seeks to minimize F's competitive advantage: $\min_{A_C} \text{Moat}(F)$
- $A_R$ seeks to maximize public governance capability: $\max_{A_R} \text{PublicGood}(\sigma_F)$
- $A_S$ seeks to maximize knowledge advancement: $\max_{A_S} \text{Knowledge}(\sigma_F)$
The composite adversary payoff is:

$$U_A(\sigma_F) = \alpha_C\, U_{A_C}(\sigma_F) + \alpha_R\, U_{A_R}(\sigma_F) + \alpha_S\, U_{A_S}(\sigma_F)$$

where $\alpha_C + \alpha_R + \alpha_S = 1$ are the relative weights of each sub-player.
2.2 Payoff Functions
The firm's payoff function integrates four components — trust, moat, operational advantage, and disclosure cost:

$$U_F(\sigma_F) = \text{Trust}(\sigma_F) + \text{Moat}(\sigma_F) + \text{Ops}(\sigma_F) - \text{Cost}(\sigma_F)$$

Each component is a function of the disclosure policy:

$$\text{Trust}(\sigma_F) = \sum_{i:\, \sigma_F(i) = \text{open}} w_i^{\text{trust}}, \qquad \text{Moat}(\sigma_F) = \sum_{i:\, \sigma_F(i) = \text{patent}} w_i^{\text{moat}}, \qquad \text{Ops}(\sigma_F) = \sum_{i:\, \sigma_F(i) = \text{secret}} w_i^{\text{ops}}$$

where $w_i^{\text{trust}}, w_i^{\text{moat}}, w_i^{\text{ops}}$ are the trust, moat, and operational advantage weights for innovation $i$.
2.3 Nash Equilibrium
Theorem 2.1 (Disclosure Equilibrium). Under the payoff structure above, the Nash equilibrium disclosure policy $\sigma_F^*$ satisfies:

$$\sigma_F^*(i) = \arg\max_{l \in \{\text{open},\, \text{patent},\, \text{secret}\}} w_i^l \quad \text{for each } i \in \mathcal{I}$$
Proof. Consider innovation $i$ in isolation. The firm's marginal payoff for assigning $i$ to layer $l$ is $w_i^l - \sum_{l' \neq l} w_i^{l'} \cdot \lambda_{l'}$ where $\lambda_{l'}$ captures the opportunity cost of not assigning to layer $l'$. In the single-innovation case, $\lambda_{l'} = 0$ and the assignment reduces to selecting the layer with the highest weight. Since innovations are assigned independently (no cross-innovation dependencies in the base model), the equilibrium extends to the full portfolio by iterating over each innovation. The adversary cannot improve their payoff by deviating because the firm's assignment is a best response to any adversary strategy — the adversary cannot change the weights, only respond to the disclosure. $\square$
2.4 Mixed-Strategy Extension
In practice, some innovations have nearly equal weights across layers. For these boundary cases, the firm may adopt a mixed strategy — for example, publishing a simplified version (open) while patenting the full implementation (protected) and keeping calibration parameters (secret). We formalize this as a Layered Disclosure where a single conceptual innovation is decomposed into sub-components assigned to different layers:

$$i = i^{\text{concept}} \oplus i^{\text{impl}} \oplus i^{\text{param}}, \qquad \sigma_F(i^{\text{concept}}) = \text{open}, \quad \sigma_F(i^{\text{impl}}) = \text{patent}, \quad \sigma_F(i^{\text{param}}) = \text{secret}$$

This decomposition is the foundation of the Three-Layer Model: every governance innovation is decomposable into concept, implementation, and parameters, each assigned to its natural layer.
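The boundary-case logic above can be sketched in TypeScript: assign an innovation to the layer with the highest weight, but fall back to the layered decomposition when the top two weights are within a margin. The weight values and the `margin` parameter are illustrative, not taken from the production system.

```typescript
// Sketch: pure-strategy layer choice with a mixed-strategy fallback for
// boundary cases (Section 2.4). Weights and margin are illustrative.
type DisclosureLayer = 'open' | 'patent' | 'secret';

interface LayerWeights {
  readonly trust: number; // w_i^trust — value of opening innovation i
  readonly moat: number;  // w_i^moat  — value of patenting it
  readonly ops: number;   // w_i^ops   — value of keeping it secret
}

// Equilibrium assignment: the layer with the highest weight.
function bestLayer(w: LayerWeights): DisclosureLayer {
  if (w.trust >= w.moat && w.trust >= w.ops) return 'open';
  return w.moat >= w.ops ? 'patent' : 'secret';
}

// When the top two weights are within `margin`, decompose the innovation
// into concept / implementation / parameters instead of forcing one layer.
function disclosurePolicy(w: LayerWeights, margin: number): DisclosureLayer | 'decompose' {
  const sorted = [w.trust, w.moat, w.ops].sort((a, b) => b - a);
  return sorted[0] - sorted[1] < margin ? 'decompose' : bestLayer(w);
}
```
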
2.5 Comparative Statics
How does the equilibrium shift as market conditions change? We derive the comparative statics for three key scenarios:
Scenario 1 (Regulatory Pressure Increases). As $\alpha_R$ increases relative to $\alpha_C$ and $\alpha_S$, the adversary places more weight on public good. The firm's best response is to increase the L1 (open) allocation, particularly for innovations that regulators may mandate. The equilibrium shifts toward more openness at the concept level.
Scenario 2 (Competitive Intensity Increases). As $\alpha_C$ increases, competitors invest more in replication. The firm's best response is to strengthen L2 (patent) and L3 (secret) protections. However, completely abandoning L1 would damage trust and regulatory standing. The equilibrium shifts toward broader patent claims and deeper trade secret compartmentalization.
Scenario 3 (Research Community Engagement Increases). As $\alpha_S$ increases, the value of research collaboration rises. The firm's best response is to expand publications while ensuring the patent-first-then-paper pipeline (Section 7) prevents inadvertent disclosure of patentable innovations.
3. The Three-Layer IP Model: Formal Definition
3.1 Layer Definitions
Definition 3.1 (Open Specification Layer — L1). The set of innovations $\mathcal{I}_{L1} \subseteq \mathcal{I}$ disclosed without restriction, characterized by:

$$\mathcal{I}_{L1} = \left\{ i \in \mathcal{I} \;:\; w_i^{\text{trust}} > \max\big(w_i^{\text{moat}},\, w_i^{\text{ops}}\big) \;\wedge\; \text{Reproducibility}(i) \leq \rho_{\text{open}} \right\}$$

where $\text{Reproducibility}(i) \leq \rho_{\text{open}}$ means that disclosing $i$ alone is insufficient for a competitor to reproduce the production system. The threshold $\rho_{\text{open}}$ is calibrated to ensure that open specifications enable understanding but not replication.
Definition 3.2 (Protected Algorithm Layer — L2). The set of innovations $\mathcal{I}_{L2} \subseteq \mathcal{I}$ protected by patent filings:

$$\mathcal{I}_{L2} = \left\{ i \in \mathcal{I} \;:\; w_i^{\text{moat}} > \max\big(w_i^{\text{trust}},\, w_i^{\text{ops}}\big) \;\wedge\; \text{Novelty}(i) \geq \nu_{\min} \;\wedge\; \text{Utility}(i) \geq \mu_{\min} \right\}$$

where Novelty and Utility must meet patent office thresholds. Critically, patent disclosure reveals the implementation method but does not reveal the operational parameters that determine production performance.
Definition 3.3 (Trade Secret Layer — L3). The set of innovations $\mathcal{I}_{L3} \subseteq \mathcal{I}$ protected by confidentiality:

$$\mathcal{I}_{L3} = \left\{ i \in \mathcal{I} \;:\; w_i^{\text{ops}} > \max\big(w_i^{\text{trust}},\, w_i^{\text{moat}}\big) \;\wedge\; \text{IndependentDerivability}(i) \leq \delta_{\max} \right\}$$

where $\text{IndependentDerivability}(i) \leq \delta_{\max}$ ensures that the secret cannot be easily reverse-engineered from the patented algorithm alone.
3.2 Partition and Boundary Conditions
The three layers must satisfy a strict partition constraint:

$$\mathcal{I}_{L1} \cup \mathcal{I}_{L2} \cup \mathcal{I}_{L3} = \mathcal{I}, \qquad \mathcal{I}_{Lk} \cap \mathcal{I}_{Ll} = \emptyset \;\; \text{for all } k \neq l$$

Additionally, the layers must satisfy an information containment property: knowledge of items in layer $k$ must not allow inference of items in layer $k+1$.
Definition 3.4 (Information Containment). The layer partition satisfies information containment if:

$$H\big(\mathcal{I}_{L2} \mid \mathcal{I}_{L1}\big) \geq H\big(\mathcal{I}_{L2}\big) - \epsilon_{\text{leak}}, \qquad H\big(\mathcal{I}_{L3} \mid \mathcal{I}_{L2}\big) \geq H\big(\mathcal{I}_{L3}\big) - \epsilon_{\text{leak}}$$

where $H$ denotes Shannon entropy and $\epsilon_{\text{leak}}$ is the maximum permissible information leakage. In other words, knowing the open specifications (L1) must not significantly reduce uncertainty about the patented implementations (L2), and knowing the patents (L2) must not significantly reduce uncertainty about the trade secrets (L3).
3.3 Layer Interaction Model
The three layers interact through well-defined interfaces. The direction of dependency is strictly downward: L2 algorithms reference L1 concepts, and L3 parameters tune L2 algorithms. But information disclosure flows upward: L1 is fully public, L2 is disclosed through patent filings, and L3 remains confidential. This asymmetry — downward dependency, upward disclosure — is the structural foundation of the model.
Layer Interaction Architecture
═══════════════════════════════════════════════════════
L1: Open Specification
┌─────────────────────────────────────────────┐
│ Ethics DSL syntax & semantics │
│ Drift metric definitions │
│ Conflict model concepts │
│ Agentic Company Blueprint concepts │
│ Research papers & white papers │
└──────────────────┬──────────────────────────┘
│ references (public)
▼
L2: Protected Algorithms
┌─────────────────────────────────────────────┐
│ Fail-Closed Gate evaluation method │
│ Multi-Universe differential engine │
│ ConflictScore computation │
│ Responsibility-constrained RL integration │
│ Ethical drift detection algorithm │
└──────────────────┬──────────────────────────┘
│ requires (confidential)
▼
L3: Trade Secrets
┌─────────────────────────────────────────────┐
│ Gate threshold parameters │
│ Risk evaluation weight matrices │
│ Customer-specific data tuning │
│ Internal optimization heuristics │
│ Performance benchmark datasets │
└─────────────────────────────────────────────┘
3.4 Formal Boundary Function
We define a boundary function $\beta: \mathcal{I} \rightarrow \{L1, L2, L3\}$ that assigns each innovation to its layer based on measurable properties:

$$\beta(i) = \begin{cases} L1 & \text{if } \text{Abstraction}(i) \geq a_{\text{high}} \\ L2 & \text{if } a_{\text{low}} \leq \text{Abstraction}(i) < a_{\text{high}} \\ L3 & \text{if } \text{Abstraction}(i) < a_{\text{low}} \end{cases}$$

The thresholds $a_{\text{high}} \approx 0.7$ and $a_{\text{low}} \approx 0.3$ (on a normalized abstraction scale) define the boundary between layers and are calibrated for the AI governance domain.
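A minimal sketch of the boundary function, using the approximate thresholds quoted above; the abstraction scores themselves are assumptions for illustration.

```typescript
// Sketch of the boundary function beta (Section 3.4): a normalized
// abstraction score in [0, 1] maps to a layer. Thresholds follow the
// approximate values quoted in the text.
type Layer = 'L1' | 'L2' | 'L3';

const A_HIGH = 0.7; // a_high: at or above this, the innovation is a concept
const A_LOW = 0.3;  // a_low: below this, it is an operational parameter

function boundary(abstraction: number): Layer {
  if (abstraction >= A_HIGH) return 'L1'; // concepts: open specification
  if (abstraction >= A_LOW) return 'L2';  // methods: patent
  return 'L3';                            // parameters: trade secret
}
```
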
4. Layer Asset Inventory
4.1 Layer 1: Open Specification Assets
The Open Specification layer contains innovations where full disclosure maximizes strategic value through trust-building, ecosystem adoption, and de facto standardization.
4.1.1 Ethics DSL Specification
The syntax and semantics of the Ethics-as-Constraint DSL are published as an open specification. This includes the grammar for expressing ethical principles as constraint equations, the type system for constraint composition, and the formal semantics for constraint evaluation. What is NOT disclosed: the compiler that converts DSL expressions into optimized evaluation code (L2), or the default threshold parameters shipped with the production system (L3).
// L1: Open Specification — Ethics DSL Grammar (public)
interface EthicsConstraint {
readonly id: string;
readonly principle: string; // Natural language
readonly formal: ConstraintExpr; // DSL expression
readonly scope: MARIACoordinate; // Where it applies
readonly severity: 'hard' | 'soft'; // Violation handling
}
type ConstraintExpr =
| { kind: 'threshold'; metric: string; op: '<' | '>' | '<=' | '>='; value: number }
| { kind: 'bounded_sensitivity'; attribute: string; epsilon: number }
| { kind: 'conjunction'; constraints: ConstraintExpr[] }
| { kind: 'disjunction'; constraints: ConstraintExpr[] }
| { kind: 'temporal'; window: Duration; inner: ConstraintExpr }
// L2: PATENTED — Optimized constraint compiler (not disclosed here)
// L3: SECRET — Default threshold values (not disclosed here)
4.1.2 Drift Definition Framework
The mathematical definition of ethical drift — what it means for a system's behavior to deviate from its ethical baseline — is published openly. The drift metric:

$$\text{Drift}(t) = \frac{1}{|\mathcal{C}|} \sum_{c \in \mathcal{C}} \big\| \mathbf{b}_c(t) - \mathbf{b}_c(0) \big\|_2$$

where $\mathbf{b}_c(t)$ is the system's measured behavior vector for constraint $c$ at time $t$, is fully disclosed, including the choice of $L_2$ norm, the averaging over constraint set $\mathcal{C}$, and the temporal baseline at $t=0$. What is NOT disclosed: the real-time streaming algorithm that computes drift incrementally (L2), or the drift threshold values that trigger alerts in production (L3).
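A minimal implementation of the open drift metric, assuming each constraint's behavior is summarized as a numeric vector (the vector encoding is an assumption for illustration):

```typescript
// Sketch of the published drift metric: average L2 distance between each
// constraint's current behavior vector and its t = 0 baseline.
function driftScore(
  baseline: ReadonlyArray<ReadonlyArray<number>>, // b_c(0), one vector per constraint
  current: ReadonlyArray<ReadonlyArray<number>>,  // b_c(t), in matching order
): number {
  const l2 = (a: ReadonlyArray<number>, b: ReadonlyArray<number>): number =>
    Math.sqrt(a.reduce((s, ai, k) => s + (ai - b[k]) ** 2, 0));
  const total = baseline.reduce((sum, b0, c) => sum + l2(current[c], b0), 0);
  return total / baseline.length; // average over the constraint set C
}
```
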
4.1.3 Conflict Model Concepts
The conceptual framework for multi-universe ethical conflict — that governance systems must surface structural tensions between competing values rather than averaging them away — is published through research papers and white papers. The ConflictScore concept:

$$\text{ConflictScore}(d) = \frac{2}{|\mathcal{U}|\,(|\mathcal{U}|-1)} \sum_{i < j} \text{dissim}\big(\mathbf{v}_i(d),\, \mathbf{v}_j(d)\big)$$

where $\mathbf{v}_i(d)$ is universe $i$'s value vector for decision $d$, is disclosed at the conceptual level. What is NOT disclosed: the specific algorithm that computes $\mathbf{v}_i$ from decision logs (L2), or the vector space embedding parameters (L3).
4.1.4 Agentic Company Blueprint Concepts
The conceptual framework for designing human-agent hybrid organizations — responsibility allocation models, graduated autonomy principles, and organizational topology — is published as white papers and industry presentations. What is NOT disclosed: the optimization algorithm for responsibility allocation (L2), or the industry-specific calibration parameters (L3).
4.1.5 Research Papers
Peer-reviewed publications covering the theoretical foundations of structural ethics in AI governance. These papers establish scientific credibility, attract research talent, and influence the academic discourse around responsible AI. The publication strategy follows a patent-first, then paper rule (detailed in Section 7).
4.2 Layer 2: Protected Algorithm Assets
The Protected Algorithm layer contains innovations where patent protection creates a defensible moat while the required patent disclosure reveals the method but not the operational parameters.
4.2.1 max_i Fail-Closed Gate Evaluation
The core gate evaluation mechanism in MARIA OS uses a $\max_i$ scoring rule:

$$\text{Gate}(d) = \begin{cases} \text{block} & \text{if } \max_{i \in \mathcal{U}} \text{Risk}_i(d) > \tau_{\text{gate}} \\ \text{pass} & \text{otherwise} \end{cases}$$

where the decision $d$ is evaluated across multiple universes $\mathcal{U}$, and the gate blocks if ANY universe exceeds the threshold. This is fundamentally different from weighted-average scoring and constitutes a novel, non-obvious approach to multi-dimensional risk evaluation.
Patent claim structure:
CLAIM 1: A computer-implemented method for evaluating decisions
in a multi-agent governance system, comprising:
(a) receiving a decision proposal with associated context;
(b) evaluating the decision across a plurality of
independent evaluation universes;
(c) computing a risk score for each evaluation universe;
(d) applying a maximum-score gate function that blocks
the decision if the maximum risk score across all
evaluation universes exceeds a predetermined threshold;
(e) generating an immutable audit record of the evaluation.
CLAIM 2: The method of claim 1, wherein the gate function
operates in fail-closed mode such that evaluation failure,
timeout, or insufficient evidence results in blocking.
CLAIM 3: The method of claim 1, wherein each evaluation
universe independently assesses risk along a distinct dimension
selected from the group consisting of: financial risk,
ethical risk, regulatory risk, operational risk,
and responsibility risk.
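The claimed behavior can be sketched as follows (illustrative names, not the production implementation): evaluate every universe, block if any score exceeds the threshold, and block on evaluation failure or missing evidence, per the fail-closed mode of Claim 2.

```typescript
// Sketch of the max-score, fail-closed gate of Claims 1-3.
type GateResult = 'pass' | 'block';

function evaluateGate(
  universeScores: ReadonlyArray<() => number>, // one risk evaluator per universe
  threshold: number,                           // tau_gate
): GateResult {
  if (universeScores.length === 0) return 'block'; // insufficient evidence: fail closed
  let maxScore = -Infinity;
  for (const score of universeScores) {
    try {
      maxScore = Math.max(maxScore, score());
    } catch {
      return 'block'; // evaluation failure or timeout: fail closed (Claim 2)
    }
  }
  // Block if ANY universe exceeds the threshold (max-score rule, Claim 1d).
  return maxScore > threshold ? 'block' : 'pass';
}
```
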
4.2.2 Multi-Universe Differential Evaluation Engine
The engine that simultaneously evaluates a decision across multiple governance universes and computes differential scores between them:
The patent covers the specific architecture for parallel universe evaluation, the differential computation that surfaces inter-universe conflicts, and the aggregation method that preserves conflict information rather than destroying it through averaging.
4.2.3 ConflictScore Computation
The algorithm that converts multi-universe evaluation results into a structured conflict metric. While the concept of conflict scoring is open (L1), the specific computation — including the vector space embedding, the cosine dissimilarity measure, and the temporal windowing for conflict trend detection — is patented:

$$\text{Conflict}(d, t) = \sum_{i < j} \omega_{ij} \left( 1 - \frac{\mathbf{s}_i(d, t) \cdot \mathbf{s}_j(d, t)}{\|\mathbf{s}_i(d, t)\|\, \|\mathbf{s}_j(d, t)\|} \right)$$

where $\mathbf{s}_i(d, t)$ is the score vector from universe $i$ over a temporal window, and $\omega_{ij}$ is the strategic importance weight for the $(i, j)$ universe pair.
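A simplified sketch of pairwise cosine-dissimilarity conflict scoring; the temporal windowing of the patented method is omitted, and the `omega` weight function is illustrative.

```typescript
// Sketch: weighted sum of (1 - cosine similarity) over all universe pairs.
function conflictScore(
  vectors: ReadonlyArray<ReadonlyArray<number>>, // s_i: one score vector per universe
  omega: (i: number, j: number) => number,       // importance weight per universe pair
): number {
  const dot = (a: ReadonlyArray<number>, b: ReadonlyArray<number>) =>
    a.reduce((s, ai, k) => s + ai * b[k], 0);
  const norm = (a: ReadonlyArray<number>) => Math.sqrt(dot(a, a));
  let total = 0;
  for (let i = 0; i < vectors.length; i++) {
    for (let j = i + 1; j < vectors.length; j++) {
      const cos = dot(vectors[i], vectors[j]) / (norm(vectors[i]) * norm(vectors[j]));
      total += omega(i, j) * (1 - cos); // 1 - cos: 0 = aligned, 2 = opposed
    }
  }
  return total;
}
```
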
4.2.4 Responsibility-Constrained RL Integration
The method for integrating responsibility constraints into reinforcement learning agent training:

$$\max_\pi \; \mathbb{E}_\pi\!\left[\sum_t \gamma^t r_t\right] \quad \text{subject to} \quad \mathbb{E}_\pi\!\left[\sum_t \gamma^t c_t\right] \leq 0$$

where $c_t$ penalizes actions outside the agent's responsibility boundary. The patent covers the specific constrained optimization formulation, the Lagrangian relaxation that converts the hard constraint into a penalized objective, and the policy gradient modification that maintains constraint satisfaction during training:

$$\mathcal{L}(\pi, \lambda) = \mathbb{E}_\pi\!\left[\sum_t \gamma^t r_t\right] - \lambda\, \mathbb{E}_\pi\!\left[\sum_t \gamma^t c_t\right]$$
4.2.5 Ethical Drift Detection Algorithm
The real-time streaming algorithm that detects ethical drift without recomputing the full constraint set:

$$\widehat{\text{Drift}}(t) = \alpha\, d(t) + (1 - \alpha)\, \widehat{\text{Drift}}(t - 1)$$

where $d(t)$ is the instantaneous drift contribution from the most recent decision and $\alpha$ is the exponential smoothing parameter. The patent covers the incremental computation method, the multi-resolution windowing that detects both sudden shifts and gradual drift, and the statistical significance testing that distinguishes genuine drift from noise.
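The incremental estimator can be sketched as a one-line exponential smoother; the multi-resolution windowing and significance testing are omitted, and the `alpha` value used in testing is illustrative.

```typescript
// Sketch of the incremental drift estimator: exponential smoothing over
// per-decision drift contributions, so the full constraint set is never
// recomputed.
class StreamingDrift {
  private estimate = 0;
  constructor(private readonly alpha: number) {}

  // Fold in d(t), the instantaneous drift contribution of the latest decision.
  update(d: number): number {
    this.estimate = this.alpha * d + (1 - this.alpha) * this.estimate;
    return this.estimate;
  }
}
```

With a sustained drift signal, the estimate converges geometrically toward the signal level, which is what makes threshold-based alerting (the L3 parameters) meaningful.
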
4.3 Layer 3: Trade Secret Assets
The Trade Secret layer contains innovations whose value derives entirely from confidentiality and which cannot be independently verified or derived from the disclosed layers.
4.3.1 Gate Threshold Parameters
The specific values of $\tau_{\text{gate}}$ for each universe, each risk tier, and each industry vertical. These thresholds are the product of extensive calibration against real-world decision outcomes and encode operational knowledge that cannot be derived from the algorithm alone:
// L3: TRADE SECRET — Gate Threshold Matrix (CONFIDENTIAL)
// This structure is illustrative; actual values are classified.
interface GateThresholdMatrix {
readonly universeThresholds: Record<UniverseId, number>;
readonly riskTierModifiers: Record<RiskTier, number>;
readonly industryCalibration: Record<IndustryVertical, {
readonly baseThreshold: number;
readonly sensitivityMultiplier: number;
readonly temporalDecayRate: number;
}>;
}
4.3.2 Risk Evaluation Weight Matrices
The weight matrices $W_{\text{risk}} \in \mathbb{R}^{|\mathcal{U}| \times |\mathcal{F}|}$ that map features to risk scores within each universe. These matrices are trained on proprietary decision outcome data and encode industry-specific risk patterns that constitute a significant operational advantage.
4.3.3 Customer-Specific Data Tuning
The procedures and parameters used to calibrate the governance system for each customer's specific organizational structure, value hierarchy, and risk tolerance. This includes the calibration protocol, the convergence criteria, and the customer-specific parameter snapshots.
4.3.4 Internal Optimization Heuristics
Performance optimizations that make the governance system operate at production scale: caching strategies for constraint evaluation, early termination heuristics for gate scoring, approximate computation methods for real-time drift detection, and memory management for large-scale conflict tracking. These heuristics are not patentable (they are optimizations of known techniques) but provide significant operational advantage.
5. Portfolio Optimality of the Three-Layer Partition
We now prove that the three-layer partition maximizes total IP portfolio value under the strategic constraints of the AI governance domain.
5.1 Total Portfolio Value Function
Definition 5.1. The total IP portfolio value is:

$$V_{\text{portfolio}}(\sigma) = \sum_{i \in \mathcal{I}} V_i(\sigma(i))$$

where $V_i(l)$ is the value of assigning innovation $i$ to layer $l$.
For each innovation, the value function decomposes as:

$$V_i(\text{open}) = T_i + A_i - L_i^{\text{moat}}, \qquad V_i(\text{patent}) = M_i + D_i - F_i, \qquad V_i(\text{secret}) = O_i - R_i^{\text{leak}}$$

where $T_i$ is trust contribution, $A_i$ is adoption acceleration, $L_i^{\text{moat}}$ is moat loss from disclosure, $M_i$ is market protection value, $D_i$ is deterrence value, $F_i$ is filing and maintenance cost, $O_i$ is operational advantage, and $R_i^{\text{leak}}$ is the expected cost of trade secret leakage.
5.2 Optimality Theorem
Theorem 5.1 (Three-Layer Optimality). Let $\sigma^*$ be the three-layer partition defined by Definitions 3.1-3.3. Under the assumption that governance innovations are decomposable into concept, implementation, and parameter sub-components with the value ordering:

$$V_{i^{\text{concept}}}(\text{open}) > \max_{l \neq \text{open}} V_{i^{\text{concept}}}(l), \qquad V_{i^{\text{impl}}}(\text{patent}) > \max_{l \neq \text{patent}} V_{i^{\text{impl}}}(l), \qquad V_{i^{\text{param}}}(\text{secret}) > \max_{l \neq \text{secret}} V_{i^{\text{param}}}(l)$$

then $\sigma^*$ maximizes $V_{\text{portfolio}}$.
Proof. The portfolio value function is separable across innovations (no cross-innovation interaction terms in the base model). Therefore:

$$\max_\sigma V_{\text{portfolio}}(\sigma) = \max_\sigma \sum_{i \in \mathcal{I}} V_i(\sigma(i)) = \sum_{i \in \mathcal{I}} \max_{l} V_i(l)$$

By the value ordering assumption, each term in the sum is maximized by the three-layer assignment. Since the sum of maxima equals the maximum of the sum (given separability), $\sigma^*$ is optimal. $\square$
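The separable optimization in the proof reduces to a per-innovation argmax, which can be sketched directly (innovation names and value numbers are illustrative):

```typescript
// Sketch of Theorem 5.1's separable optimization: with no coupling terms,
// the optimal partition is the per-innovation argmax of V_i(l).
type IpLayer = 'L1' | 'L2' | 'L3';

interface InnovationValues {
  readonly id: string;
  readonly value: Record<IpLayer, number>; // V_i(l) for each layer l
}

function optimalPartition(
  innovations: ReadonlyArray<InnovationValues>,
): Map<string, IpLayer> {
  const layers: IpLayer[] = ['L1', 'L2', 'L3'];
  const assignment = new Map<string, IpLayer>();
  for (const inn of innovations) {
    // argmax_l V_i(l); ties resolve to the earlier (more open) layer
    const best = layers.reduce((a, b) => (inn.value[b] > inn.value[a] ? b : a));
    assignment.set(inn.id, best);
  }
  return assignment;
}
```
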
5.3 Cross-Innovation Interaction Effects
The base model assumes separability, but in practice, innovations interact. A strong L1 (open) portfolio creates trust that amplifies the value of L2 (patent) licensing. A broad L2 (patent) portfolio creates deterrence that protects L3 (secret) assets. We model these interactions as second-order coupling terms:

$$V_{\text{total}}(\sigma) = \sum_{i \in \mathcal{I}} V_i(\sigma(i)) + \sum_{k < l} \eta_{kl}\, \big|\mathcal{I}_{Lk}\big| \cdot \big|\mathcal{I}_{Ll}\big|$$

where $\eta_{kl} > 0$ represents the positive coupling between layers $k$ and $l$. The key coupling effects are:
- $\eta_{12} > 0$: More open specifications increase the market size for patented implementations (standards create markets).
- $\eta_{23} > 0$: More patents deter reverse engineering of trade secrets (competitors know that design-around is constrained).
- $\eta_{13} > 0$: Open specifications combined with secret parameters create a perception of transparency while maintaining operational advantage.
Corollary 5.1. Under positive coupling ($\eta_{kl} > 0$ for all $k < l$), the three-layer partition strictly dominates any single-layer strategy (fully open, fully patented, or fully secret).
Proof sketch. A single-layer strategy sets two of the three layer sizes to zero, eliminating all coupling terms. Since $\eta_{kl} > 0$, any strategy with all three layers populated has strictly higher coupling value than any single-layer strategy. Combined with the base optimality (Theorem 5.1), the three-layer partition dominates. $\square$
5.4 Sensitivity Analysis
The optimality result depends on the value ordering assumption. We analyze robustness by computing the value ordering margin — the minimum perturbation to the value weights that would change the optimal assignment for any innovation:

$$\delta_{\min} = \min_{i \in \mathcal{I}} \left[ V_i\big(\sigma^*(i)\big) - \max_{l \neq \sigma^*(i)} V_i(l) \right]$$

If $\delta_{\min} > 0$, the partition is robust. In our empirical analysis across 47 identified governance innovations, $\delta_{\min} = 0.12$ (normalized), indicating moderate robustness. The three innovations closest to the boundary are all in the 'drift detection' family, where the line between concept (L1) and implementation (L2) is thinnest. We address these boundary cases with the mixed-strategy decomposition from Section 2.4.
6. Patent Value Function and Strategic Patent Architecture
6.1 Patent Value Function
Definition 6.1. The present value of a patent over its lifecycle is:

$$V_{\text{patent}} = \int_0^T e^{-rt} \big[ M(t) - C(t) \big]\, dt$$
where:
- $T$ is the patent term (20 years from filing)
- $r$ is the discount rate reflecting the time value of money and technology obsolescence risk
- $M(t)$ is the market protection value at time $t$ — the revenue that would be lost without patent protection due to competitor entry
- $C(t)$ is the maintenance cost at time $t$ — filing fees, prosecution costs, maintenance fees, and enforcement costs
The market protection value $M(t)$ follows a lifecycle model:

$$M(t) = M_0 \left(1 - e^{-\lambda_g t}\right) e^{-\lambda_d \max(0,\, t - t_p)}$$

where $M_0$ is the maximum market protection value, $\lambda_g$ is the market growth rate (adoption S-curve), $\lambda_d$ is the technology decay rate, and $t_p$ is the peak relevance time. The first factor models market buildup (the patent becomes more valuable as the market grows), and the second models technology obsolescence (the patent becomes less valuable as alternatives emerge).
6.2 Closed-Form Approximation
Under the lifecycle model, the patent value integral admits a closed-form approximation for $t_p < T$ (treating market buildup as complete by $t_p$):

$$V_{\text{patent}} \approx M_0 \left[ \frac{1 - e^{-r t_p}}{r} - \frac{1 - e^{-(r + \lambda_g) t_p}}{r + \lambda_g} + e^{-r t_p}\, \frac{1 - e^{-(r + \lambda_d)(T - t_p)}}{r + \lambda_d} \right] - C_0\, \frac{1 - e^{-rT}}{r}$$

where $C_0$ is the annualized maintenance cost. This approximation enables rapid comparison of patent value across families without numerical integration.
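As a cross-check, the lifecycle integral can be evaluated numerically (a midpoint-rule sketch). The functional form of $M(t)$ is an assumption consistent with the lifecycle description, not the firm's actual valuation model; the parameter values in the usage comment follow the Section 6.6 table with a hypothetical $M_0$.

```typescript
// Numerical evaluation of V_patent = integral of e^{-rt} [M(t) - C0] dt over
// [0, T], assuming M(t) = M0 (1 - e^{-lg t}) e^{-ld max(0, t - tp)}.
function patentValue(
  M0: number, r: number, lg: number, ld: number, tp: number,
  C0: number, T: number, steps = 20000,
): number {
  const dt = T / steps;
  let v = 0;
  for (let k = 0; k < steps; k++) {
    const t = (k + 0.5) * dt; // midpoint rule
    const M = M0 * (1 - Math.exp(-lg * t)) * Math.exp(-ld * Math.max(0, t - tp));
    v += Math.exp(-r * t) * (M - C0) * dt;
  }
  return v;
}

// Example with Section 6.6 parameters and a hypothetical M0 of $1M:
// patentValue(1_000_000, 0.12, 0.3, 0.05, 8, 65_000, 20)
```
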
6.3 Structural Patents vs. Defensive Patents
We distinguish two strategic categories of patents within L2:
Definition 6.2 (Structural Patent). A patent that covers a foundational architectural primitive of AI governance. Structural patents define the vocabulary of the field — they describe the building blocks that any governance system must use.
Definition 6.3 (Defensive Patent). A patent filed primarily to prevent competitors from blocking the firm's operations, not to generate direct licensing revenue. Defensive patents cover alternative implementations, adjacent techniques, and foreseeable extensions.
The value functions differ:

$$V_{\text{structural}} = \int_0^T e^{-rt} \big[ M(t) - C(t) \big]\, dt, \qquad V_{\text{defensive}} = \int_0^T e^{-rt} \big[ M_{\text{freedom}}(t) - C(t) \big]\, dt$$

where $M_{\text{freedom}}(t)$ is the value of freedom to operate — the cost that would be incurred if a competitor held the patent and demanded licensing fees or injunctive relief.
6.4 Structural Patent Portfolio Design
The structural patent portfolio is organized around five patent families, each covering a core governance primitive:
| Family | Core Innovation | Claims | Priority |
| --- | --- | --- | --- |
| F1: Gate Evaluation | max_i fail-closed multi-dimensional scoring | 15-20 | Critical |
| F2: Multi-Universe Engine | Parallel universe evaluation with differential analysis | 12-18 | Critical |
| F3: Conflict Computation | Vector-space conflict scoring with temporal windowing | 10-15 | High |
| F4: Constrained RL | Responsibility gate integration in RL training loop | 12-16 | High |
| F5: Drift Detection | Incremental ethical drift with multi-resolution alerting | 10-14 | Medium |
Each family includes a core patent (the fundamental method) plus continuation patents that cover specific embodiments, variations, and extensions. The continuation strategy ensures that as the technology evolves, new embodiments are captured without requiring new prior art searches.
6.5 Defensive Patent Portfolio Design
The defensive portfolio covers seven additional patent families:
| Family | Defensive Target | Rationale |
| --- | --- | --- |
| D1: Alternative Gate Functions | Median, top-k, weighted-max variations | Prevent competitors from designing around F1 |
| D2: Sequential Universe Evaluation | Non-parallel universe evaluation methods | Prevent low-cost alternative to F2 |
| D3: Graph-Based Conflict | Conflict detection via graph neural networks | Prevent alternative to vector-space approach in F3 |
| D4: Constraint Satisfaction RL | CP-based constraint methods for RL | Prevent alternative to Lagrangian approach in F4 |
| D5: Batch Drift Detection | Non-streaming drift computation | Prevent alternative to incremental approach in F5 |
| D6: Responsibility Transfer | Methods for transferring responsibility between agents | Foundational to agent mobility |
| D7: Audit Evidence Chains | Blockchain/merkle-based evidence integrity | Prevent evidence tampering patent by others |
6.6 Patent Valuation Model
We compute the expected portfolio value using the following parameters:
| Parameter | Symbol | Value | Rationale |
| --- | --- | --- | --- |
| Discount rate | $r$ | 12% | Tech sector cost of capital |
| Patent term | $T$ | 20 years | Standard utility patent |
| Peak relevance | $t_p$ | 8 years | AI governance market maturation |
| Market growth rate | $\lambda_g$ | 0.3/year | Enterprise AI adoption curve |
| Technology decay rate | $\lambda_d$ | 0.05/year | Slow decay for structural patents |
| Filing cost per family | $C_0$ | $50K | Including prosecution |
| Maintenance cost | $C_m$ | $15K/year | Average across jurisdictions |
Under these parameters, the expected net present value of the structural patent portfolio depends primarily on the market protection value assumptions; the defensive portfolio adds an estimated $\$1.8M - \$3.5M$ in freedom-to-operate value.
7. Research-to-Patent Pipeline
7.1 The Filing Gap Problem
The most common failure mode in research-intensive IP strategy is the filing gap: the time between an invention and its patent filing. During this gap, the invention may be publicly disclosed (through papers, conference talks, or open-source releases), creating prior art that invalidates the patent. In the US, the inventor has a 12-month grace period after public disclosure to file; in most other jurisdictions, any public disclosure before filing is fatal.
The filing gap arises because researchers and patent attorneys operate on different timelines and with different incentives. Researchers want to publish quickly; attorneys need time to draft claims. Without a structural process, these timelines conflict — and publications almost always win because they are under the researcher's direct control.
7.2 Pipeline State Machine
We formalize the Research-to-Patent Pipeline as a finite state machine embedded within the MARIA OS decision graph:

$$\mathcal{P} = (S, \Sigma, \delta, s_0, F), \qquad \delta: S \times \Sigma \rightarrow S$$

where:
- $S = \{\text{research}, \text{disclosed}, \text{assessed}, \text{filing}, \text{filed}, \text{publishable}, \text{published}, \text{abandoned}\}$
- $\Sigma = \{\text{disclose}, \text{assess}, \text{file}, \text{grant\_publish}, \text{publish}, \text{abandon}\}$
- $s_0 = \text{research}$
- $F = \{\text{published}, \text{abandoned}\}$
The critical transition constraint is:
$$\delta(\text{disclosed}, \sigma) \neq \text{publishable} \quad \forall \sigma \in \Sigma,$$
which ensures that no research output reaches the publishable state without IP assessment.
7.3 State Transition Rules
// Research-to-Patent Pipeline State Machine
type PipelineState =
| 'research' // Active research, no disclosure
| 'disclosed' // Internally disclosed to IP team
| 'assessed' // IP assessment complete
| 'filing' // Patent application in preparation
| 'filed' // Provisional application filed
| 'publishable' // Cleared for external publication
| 'published' // Published externally
| 'abandoned'; // No patent, no publication value
// Durations are day-count strings such as '7d'.
type Duration = `${number}d`;
type PipelineTransition = {
readonly from: PipelineState;
readonly to: PipelineState;
readonly trigger: string;
readonly gate: 'auto' | 'human' | 'ip-counsel';
readonly maxLatency: Duration;
};
const VALID_TRANSITIONS: PipelineTransition[] = [
{ from: 'research', to: 'disclosed', trigger: 'internal_disclosure', gate: 'auto', maxLatency: '7d' },
{ from: 'disclosed', to: 'assessed', trigger: 'ip_assessment', gate: 'ip-counsel', maxLatency: '14d' },
{ from: 'assessed', to: 'filing', trigger: 'file_decision', gate: 'ip-counsel', maxLatency: '7d' },
{ from: 'assessed', to: 'publishable', trigger: 'no_patent_value', gate: 'ip-counsel', maxLatency: '7d' },
{ from: 'assessed', to: 'abandoned', trigger: 'no_value', gate: 'human', maxLatency: '7d' },
{ from: 'filing', to: 'filed', trigger: 'provisional_filed', gate: 'ip-counsel', maxLatency: '30d' },
{ from: 'filed', to: 'publishable', trigger: 'publication_cleared', gate: 'ip-counsel', maxLatency: '7d' },
{ from: 'publishable', to: 'published', trigger: 'external_publish', gate: 'human', maxLatency: '90d' },
];
// CRITICAL: No transition from 'disclosed' directly to 'publishable'.
// Every disclosure MUST go through IP assessment first.
7.4 Patent-First, Then Paper Strategy
Theorem 7.1 (Filing Priority). Under the pipeline state machine $\mathcal{P}$, every research output that reaches the published state has either (a) been filed as a patent application before publication, or (b) been explicitly assessed and determined to have no patentable value.
Proof. The state machine has no valid transition path from research to published that does not pass through assessed. From assessed, the only paths to published are: assessed $\rightarrow$ filing $\rightarrow$ filed $\rightarrow$ publishable $\rightarrow$ published (patent-first path), or assessed $\rightarrow$ publishable $\rightarrow$ published (no-patent-value path). Both paths require an explicit IP assessment decision. $\square$
This theorem guarantees that the filing gap problem is structurally eliminated: no accidental pre-filing disclosure can occur because the pipeline enforces assessment before publication clearance.
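Theorem 7.1 can be checked mechanically. The sketch below encodes the Section 7.3 transition table and runs a reachability search with the `assessed` state removed — if `published` is unreachable without it, no path bypasses IP assessment:

```typescript
// Mechanical check of Theorem 7.1 against the Section 7.3 transition table.
type State =
  | 'research' | 'disclosed' | 'assessed' | 'filing'
  | 'filed' | 'publishable' | 'published' | 'abandoned';

const transitions: Array<[State, State]> = [
  ['research', 'disclosed'],
  ['disclosed', 'assessed'],
  ['assessed', 'filing'],
  ['assessed', 'publishable'],
  ['assessed', 'abandoned'],
  ['filing', 'filed'],
  ['filed', 'publishable'],
  ['publishable', 'published'],
];

// Returns true if `to` is reachable from `from` without visiting `blocked`.
function reachableAvoiding(from: State, to: State, blocked: State): boolean {
  const seen = new Set<State>([from]);
  const queue: State[] = [from];
  while (queue.length > 0) {
    const s = queue.shift()!;
    if (s === to) return true;
    for (const [a, b] of transitions) {
      if (a === s && b !== blocked && !seen.has(b)) {
        seen.add(b);
        queue.push(b);
      }
    }
  }
  return false;
}

// Theorem 7.1 holds iff 'published' is unreachable once 'assessed' is blocked.
console.log(reachableAvoiding('research', 'published', 'assessed')); // false
```

This kind of check can run in CI whenever `VALID_TRANSITIONS` changes, so the structural guarantee survives pipeline edits.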
7.5 Latency Budget
The total pipeline latency from internal disclosure to publication clearance is bounded by the per-stage SLAs:
$$L_{\text{total}} \leq 14 + 7 + 30 + 7 = 58 \text{ days}$$
for the patent-first path. The no-patent-value path takes $14 + 7 = 21$ days. These SLAs ensure that the IP process does not become a bottleneck that discourages researchers from disclosing innovations.
7.6 Pipeline Throughput Model
The steady-state throughput of the pipeline is governed by the bottleneck stage. With dedicated IP counsel capacity $\mu_{\text{IP}}$ (assessments per month) and a research output rate of $\lambda_{\text{research}}$ (innovations per month), the pipeline is stable if and only if:
$$\lambda_{\text{research}} < \mu_{\text{IP}}$$
The expected pipeline occupancy (number of innovations in the pipeline at any time) under Poisson assumptions follows Little's law:
$$\bar{N} = \lambda_{\text{research}} \cdot \bar{L}_{\text{total}}$$
where $\bar{L}_{\text{total}}$ is the average processing time. For a research team producing 4 innovations per month with IP counsel capacity of 6 assessments per month, $\bar{N} \approx 3.4$ innovations in the pipeline at any time — a manageable workload.
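A minimal sketch of this estimate, assuming Little's law; the average processing time of 0.85 months (~26 days) is inferred from the worked example above ($\bar{N} \approx 3.4$ at $\lambda = 4$), not independently measured:

```typescript
// Section 7.6 occupancy estimate via Little's law: N̄ = λ · L̄.
function pipelineOccupancy(arrivalRate: number, avgProcessingMonths: number): number {
  return arrivalRate * avgProcessingMonths;
}

// Stability condition: λ_research < μ_IP.
function isStable(arrivalRate: number, serviceCapacity: number): boolean {
  return arrivalRate < serviceCapacity;
}

console.log(isStable(4, 6));                             // true
console.log(pipelineOccupancy(4, 0.85).toFixed(1));      // "3.4"
```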
8. Brand × IP Linkage: 'We Patent Structural Ethics'
8.1 The Brand Paradox
An AI governance company that patents ethical algorithms faces a brand challenge: 'Are you monetizing ethics?' The instinctive response is to downplay the IP strategy — to present patents as a necessary evil, a defensive measure, a reluctant accommodation to the realities of business. This is the wrong response. It is strategically weak and philosophically confused.
The correct response is: we patent structural ethics because structural ethics is hard engineering, and hard engineering deserves protection. The ethical principles are open. The mathematical implementations that make those principles enforceable are patented. The parameters that make those implementations performant are secret. This is not monetizing ethics — it is protecting the engineering that makes ethics real.
8.2 Brand Positioning Framework
The brand-IP linkage is structured around three messages:
1. 'Our ethical concepts are open.' The principles, definitions, and specifications (L1) are published for anyone to adopt.
2. 'Our structural methods are patented.' The algorithms that make those concepts enforceable (L2) are protected as genuine engineering innovation.
3. 'Our operational calibration is proprietary.' The parameters that make the platform perform in production (L3) remain trade secrets.
These three messages map directly to the three layers, creating a coherent narrative: open concepts, protected methods, proprietary performance.
8.3 Competitive Narrative
The brand-IP linkage creates a powerful competitive narrative that addresses common objections:
| Competitor Claim | Our Response |
| --- | --- |
| 'We do AI ethics too' | 'Do you patent structural implementations, or just declare principles? Show us your patent portfolio.' |
| 'AI ethics should be open' | 'Our ethical concepts ARE open. Our implementations are patented because they represent genuine engineering innovation.' |
| 'Patents hinder AI safety' | 'We license structural patents to any organization committed to fail-closed governance. Patents ensure quality control.' |
| 'We use the same algorithms' | 'Our patents cover the specific methods. If you are using the same methods, you need a license.' |
8.4 Licensing Strategy
The patent portfolio supports three licensing tiers:
Tier 1 — Standard License: Full access to all structural patents for use in production governance systems. Available to any enterprise customer of the MARIA OS platform. Included in the platform subscription.
Tier 2 — Research License: Royalty-free license for academic and non-commercial research use. Builds goodwill with the research community while maintaining commercial control. Published on the company website.
Tier 3 — Competitor License: Available to competing governance platforms at commercial rates. Priced to reflect the R&D investment while not being prohibitively exclusionary. Positions the firm as the structural standard rather than a gatekeeper.
The licensing revenue model:
$$R_{\text{license}}(t) = \sum_{k=1}^{3} N_{\text{T}k}(t) \cdot p_{\text{T}k}$$
where $N_{\text{Tk}}(t)$ is the number of licensees at tier $k$ and $p_{\text{Tk}}$ is the per-licensee price. Tier 2 is free, generating no direct revenue but substantial strategic value through ecosystem adoption and de facto standard creation.
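As a sketch, the revenue sum is straightforward to compute; the licensee counts and the Tier 3 price below are hypothetical illustrations, not forecasts:

```typescript
// Section 8.4 licensing revenue: R(t) = Σ_k N_Tk(t) · p_Tk.
type Tier = { licensees: number; pricePerLicensee: number };

function licensingRevenue(tiers: Tier[]): number {
  return tiers.reduce((sum, t) => sum + t.licensees * t.pricePerLicensee, 0);
}

// Hypothetical Year 3 snapshot. Tiers 1 and 2 carry no direct license fee
// (bundled in the subscription / royalty-free), matching the tier design.
const year3Portfolio: Tier[] = [
  { licensees: 40, pricePerLicensee: 0 },      // Tier 1: standard, bundled
  { licensees: 25, pricePerLicensee: 0 },      // Tier 2: research, free
  { licensees: 2, pricePerLicensee: 150_000 }, // Tier 3: competitor license
];

console.log(licensingRevenue(year3Portfolio)); // 300000
```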
9. Five-Year IP Roadmap
9.1 Year 1: Foundation Filing (2026)
Q1-Q2:
- File provisional applications for F1 (Gate Evaluation) and F2 (Multi-Universe Engine)
- Internal IP training for all research engineers
- Establish IP Review Node in research Decision Graph (see Section 11)
- Publish Ethics DSL v1.0 open specification
Q3-Q4:
- File provisional applications for F3 (Conflict Computation) and F4 (Constrained RL)
- File D1 (Alternative Gate Functions) and D2 (Sequential Universe Evaluation) as defensive patents
- Convert F1 and F2 provisionals to full utility applications
- Publish 2 research papers (post-filing, per pipeline)
Year 1 IP Metrics:
| Metric | Target |
| --- | --- |
| Provisional filings | 4 structural + 2 defensive |
| Utility conversions | 2 |
| Open specifications published | 1 (Ethics DSL) |
| Research papers published | 2 |
| IP training sessions | 4 (all engineering) |
| Research-to-filing latency | < 45 days average |
9.2 Year 2: Portfolio Expansion (2027)
Q1-Q2:
- File provisional for F5 (Drift Detection)
- File D3 (Graph-Based Conflict), D4 (Constraint Satisfaction RL), D5 (Batch Drift Detection)
- Convert F3, F4 provisionals to full utility applications
- Publish Drift Index open specification
Q3-Q4:
- File continuation patents for F1 and F2 (new embodiments from production learning)
- File D6 (Responsibility Transfer) and D7 (Audit Evidence Chains)
- Convert F5 provisional to full utility application
- Publish 3 research papers
- Begin PCT international filings for F1, F2
Year 2 IP Metrics:
| Metric | Target |
| --- | --- |
| New filings | 1 structural + 5 defensive |
| Continuations | 2 |
| Utility conversions | 3 |
| International filings | 2 (PCT for F1, F2) |
| Open specifications published | 1 (Drift Index) |
| Research papers published | 3 |
9.3 Year 3: Maturation and Licensing (2028)
Q1-Q2:
- First patent grants expected (F1, F2)
- Launch Tier 2 (Research License) program
- File continuation patents for F3, F4, F5
- Publish Agentic Company Blueprint open specification
Q3-Q4:
- Evaluate Tier 3 (Competitor License) market demand
- International phase entry for F3, F4 (JP, EP, KR)
- File 2-3 new structural patents from Year 2-3 research
- Publish 4 research papers
- Industry standard proposal incorporating L1 open specifications
Year 3 IP Metrics:
| Metric | Target |
| --- | --- |
| Patent grants | 2-3 |
| Active applications | 15-18 |
| Licensing revenue | First Tier 2 licenses issued |
| Open specifications | 3 total |
| Research papers | 4 |
| Standard body engagement | 1 proposal submitted |
9.4 Year 4: Enforcement and Extension (2029)
Q1-Q2:
- Portfolio review: prune non-performing defensive patents
- File second-generation structural patents covering next-gen governance primitives
- Launch Tier 3 licensing if market demand exists
- International phase entry for F5 and defensive patents D1-D3
Q3-Q4:
- Monitor competitive landscape for potential infringement
- File continuation-in-part (CIP) patents incorporating production improvements
- Publish comprehensive IP position paper (open, for deterrence)
- Engage in standards body governance (voting member)
Year 4 IP Metrics:
| Metric | Target |
| --- | --- |
| Total patent grants | 6-8 |
| Active applications | 20-25 |
| Licensing revenue | $200K-$500K |
| Standards influence | Active contributor to 2 standards |
9.5 Year 5: Portfolio Maturity (2030)
Q1-Q2:
- Comprehensive portfolio valuation for fundraising/M&A positioning
- File third-generation patents covering emergent governance challenges (AGI-era governance primitives)
- Complete PCT national phase for all structural patents in 5 key jurisdictions (US, EP, JP, KR, CN)
Q3-Q4:
- Portfolio contains 25-35 patent assets (granted + pending)
- IP becomes a material component of enterprise valuation
- Establish patent pool or cross-licensing consortium if market has matured
- Publish retrospective paper: 'Five Years of Structural Ethics IP'
Year 5 IP Metrics:
| Metric | Target |
| --- | --- |
| Total patent assets | 25-35 |
| Licensing revenue | $500K-$1.5M |
| IP portfolio valuation | $5M-$15M |
| Standards adopted | 1-2 incorporating L1 specs |
| Research papers (cumulative) | 15+ |
9.6 Roadmap Visualization
Five-Year IP Roadmap
═══════════════════════════════════════════════════════════════════
Year 1 (2026) ████████████████░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░
[F1,F2 provisional] [F3,F4 prov.] [D1,D2] [2 papers]
Year 2 (2027) ████████████████████████████░░░░░░░░░░░░░░░░░░░░
[F5 prov.] [D3-D5] [continuations] [PCT] [3 papers]
Year 3 (2028) ████████████████████████████████████░░░░░░░░░░░░
[First grants] [Tier 2 launch] [Intl phase] [4 papers]
Year 4 (2029) ████████████████████████████████████████████░░░░
[2nd gen patents] [Tier 3] [Enforcement] [Standards]
Year 5 (2030) ████████████████████████████████████████████████
[Portfolio maturity] [25-35 assets] [Pool/consortium]
Legend: █ = Cumulative IP asset growth
10. Risk Management
10.1 Risk: Research-Filing Disconnection
Description: Researchers publish findings before the IP team has assessed patentability, creating prior art that invalidates potential patents. This is the most common and most costly IP failure mode in research-intensive organizations.
Root Cause: Researchers are incentivized to publish quickly (for career advancement, conference deadlines, competitive urgency). The IP process is perceived as slow and bureaucratic. Without structural enforcement, the fast path (publish) always wins over the slow path (file then publish).
Mitigation: The Research-to-Patent Pipeline (Section 7) structurally prevents this by embedding an IP Review Node in the research Decision Graph. No research output can reach the publishable state without passing through assessed. The pipeline SLAs (14 days for assessment, 30 days for filing) ensure that the process is fast enough to not be perceived as a bottleneck.
Monitoring metric:
$$\text{FilingGapRate} = \frac{|\{\text{external publications without prior IP assessment}\}|}{|\{\text{external publications}\}|}$$
Target: FilingGapRate = 0.00%. Any non-zero value triggers an immediate process audit.
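The metric can be computed directly from the disclosure records; the sketch below mirrors the `published_at`/`assessed_at` fields of the `ip_disclosures` table in Appendix B, with the record shape simplified for illustration:

```typescript
// FilingGapRate: share of external publications never IP-assessed beforehand.
type Disclosure = { publishedAt: string | null; assessedAt: string | null };

function filingGapRate(records: Disclosure[]): number {
  const published = records.filter(r => r.publishedAt !== null);
  if (published.length === 0) return 0;
  const gapped = published.filter(
    // A gap: never assessed, or assessed only after publication.
    r => r.assessedAt === null || r.assessedAt > r.publishedAt!
  );
  return gapped.length / published.length;
}

const sample: Disclosure[] = [
  { publishedAt: '2026-06-01', assessedAt: '2026-03-01' }, // assessed first: OK
  { publishedAt: '2026-07-01', assessedAt: null },         // gap: never assessed
  { publishedAt: null, assessedAt: '2026-05-01' },         // not yet published
];

console.log(filingGapRate(sample)); // 0.5
```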
10.2 Risk: Engineers Hating Patents
Description: Engineering culture often views patents negatively — as legal bureaucracy that slows down development, benefits only the company (not the inventor), and produces documents that no engineer reads. This cultural resistance reduces disclosure volume and quality.
Root Cause: Traditional patent processes are opaque, slow, and provide no feedback to inventors. Engineers spend hours in interviews, receive no recognition, and never see the patent grant. The process feels extractive rather than collaborative.
Mitigation: Five structural interventions:
1. Inventor recognition program: Named inventors receive public recognition, a financial bonus upon filing, and an additional bonus upon grant.
2. Patent writing support: Dedicated patent engineers translate inventions into claims. Researchers describe, not draft.
3. Rapid feedback loop: Inventors receive a patentability assessment within 14 days of disclosure. No black holes.
4. Patent portfolio dashboard: Visible to all engineers, showing the portfolio, inventor names, and the strategic value of each patent family.
5. IP education sessions: Quarterly workshops explaining how patents work, why they matter, and how the three-layer model benefits the organization.
Monitoring metric:
$$\text{DisclosureRate} = \frac{|\{\text{research outputs internally disclosed for IP assessment}\}|}{|\{\text{research outputs}\}|}$$
Target: DisclosureRate $\geq$ 0.80 (at least 80% of research outputs are internally disclosed for IP assessment).
10.3 Risk: Abstract Filings (Patent Quality)
Description: Patent applications that are too abstract — covering broad concepts rather than specific implementations — face rejection for lack of enablement or patent-eligible subject matter (particularly under Alice/Mayo in the US). Abstract filings waste filing costs and provide no real protection.
Root Cause: Pressure to file quickly produces thin applications. Non-technical patent drafters may abstract away the specific implementation details that constitute the actual invention.
Mitigation: Three quality controls:
1. Implementation-first drafting: Every patent application must include at least one working code example or pseudocode implementation. Claims are drafted from specific to general, not general to specific.
2. Technical review gate: Before filing, every application is reviewed by a senior engineer who verifies that the claims are implementable and the specification is enabling.
3. Prosecution quality tracking: Track the office action rejection rate and average number of prosecution rounds per application. Target: < 2 office actions per application.
Monitoring metric:
$$\text{FilingQuality} = \frac{|\{\text{applications resolved with} < 2 \text{ office actions}\}|}{|\{\text{applications filed}\}|}$$
Target: FilingQuality $\geq$ 0.90.
10.4 Risk: Trade Secret Leakage
Description: L3 trade secrets (threshold parameters, weight matrices, optimization heuristics) leak through employee departure, inadvertent publication, or reverse engineering from the product's observable behavior.
Root Cause: Trade secrets have no external protection — once leaked, they cannot be unlearned. The only defense is prevention.
Mitigation:
1. Compartmentalized access: L3 parameters are stored in a separate system with access logging. Only authorized personnel can view threshold values.
2. Publication review: Every external publication is reviewed against the L3 inventory to ensure no inadvertent parameter disclosure.
3. Obfuscation in product: The product's observable behavior (gate decisions, drift alerts) does not reveal the underlying parameters. Multiple parameter combinations can produce the same observable output.
4. Exit procedures: Departing employees undergo a trade secret reminder session and sign a confidentiality acknowledgment specific to L3 assets.
5. Canary values: Non-critical trade secret parameters are set to unique values for different internal teams. If a competitor's system exhibits a canary value, the source of the leak is identifiable.
Monitoring metric:
$$\text{CompartmentIntegrity} = \frac{|\{\text{L3 access events that are authorized and logged}\}|}{|\{\text{L3 access events}\}|}$$
Target: CompartmentIntegrity $\geq$ 99.5%.
10.5 Risk: Regulatory Compulsory Licensing
Description: A regulator mandates specific AI governance capabilities that are covered by L2 patents, then invokes compulsory licensing provisions to force disclosure at below-market rates.
Root Cause: AI governance regulation is in its infancy. As regulations mature, mandatory requirements may overlap with patented methods.
Mitigation:
1. L1 openness as regulatory shield: By publishing the concepts (L1) openly, the firm demonstrates that the fundamental governance capabilities are freely available. Patents cover specific implementations, not mandated outcomes.
2. Proactive standards engagement: By contributing to standards bodies, the firm shapes regulatory requirements toward standards that reference L1 specifications while leaving implementation choice (L2 vs. alternatives) open.
3. FRAND commitment readiness: If patents do become essential to a standard, the firm is prepared to offer Fair, Reasonable, and Non-Discriminatory (FRAND) licensing terms, preserving revenue while avoiding regulatory backlash.
11. IP Review Node in the Research Decision Graph
11.1 Decision Graph Architecture
The MARIA OS research infrastructure operates as a Decision Graph — a directed acyclic graph where each node represents a decision point and edges represent information flow. The IP Review Node is a mandatory gate within this graph, positioned between research completion and external disclosure.
Research Decision Graph with IP Review Node
═══════════════════════════════════════════════════════
[Research Complete] ──── G1.U_EL.P1.Z1
│
▼
┌─────────────────┐
│ IP Review Node │ ◄── G1.U_EL.P4.Z3.A_IP
│ (Mandatory Gate) │
└───────┬─────────┘
│
┌─────┴─────┐
▼ ▼
[File Patent] [No Patent Value]
│ │
▼ │
[Patent Filed] │
│ │
▼ ▼
[Publish Cleared] ──── Both paths converge
│
▼
[External Publication]
11.2 Node Specification
The IP Review Node operates as a MARIA OS gate with the following configuration:
// IP Review Node — Gate Configuration
// Payload types, stubbed so the interface is self-contained:
type ResearchFinding = unknown;
type PriorArtReport = unknown;
type LayerRecommendation = 'L1' | 'L2' | 'L3';
type EvidenceHash = string;
type ISODateTime = string;
interface IPReviewGate {
readonly coordinate: 'G1.U_EL.P4.Z3.A_IP';
readonly gateType: 'fail-closed';
readonly evaluationMode: 'human-required';
readonly reviewer: 'ip-counsel';
readonly maxLatency: '14d';
readonly inputs: {
readonly researchOutput: ResearchFinding;
readonly priorArtSearch: PriorArtReport;
readonly layerAssessment: LayerRecommendation;
};
readonly outputs: {
readonly decision: 'file' | 'no-patent-value' | 'abandon';
readonly rationale: string;
readonly layerAssignment: 'L1' | 'L2' | 'L3';
readonly filingPriority: 'critical' | 'high' | 'medium' | 'low';
};
readonly auditTrail: {
readonly evidenceBundle: EvidenceHash;
readonly reviewerSignature: string;
readonly timestamp: ISODateTime;
};
}
11.3 Gate Evaluation Criteria
The IP Review Node evaluates each research output against five criteria:
| Criterion | Weight | Assessment |
| --- | --- | --- |
| Novelty | 0.30 | Is the innovation new relative to known prior art? |
| Non-obviousness | 0.25 | Would a skilled practitioner find this obvious? |
| Utility | 0.15 | Does it solve a concrete technical problem? |
| Strategic Value | 0.20 | Does it strengthen the structural patent portfolio? |
| Defensibility | 0.10 | Can the claims withstand prosecution and litigation? |
The composite IP score is:
$$\text{IPScore}(r) = \sum_{k} w_k \cdot s_k(r)$$
where $s_k(r) \in [0, 1]$ is the score for criterion $k$ and $w_k$ is the weight. The filing decision follows:
$$\text{decision}(r) = \begin{cases} \text{file} & \text{if } \text{IPScore}(r) \geq \tau_{\text{file}} \text{ and } s_{\text{novelty}}(r) \geq \tau_{\text{novelty}} \\ \text{no-patent-value} & \text{if } \tau_{\text{pub}} \leq \text{IPScore}(r) < \tau_{\text{file}} \\ \text{abandon} & \text{otherwise} \end{cases}$$
where $\tau_{\text{file}}$, $\tau_{\text{novelty}}$, and $\tau_{\text{pub}}$ are threshold parameters (L3 trade secrets, naturally).
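A minimal sketch of the weighted score and decision, using the weights from the criteria table; the $\tau$ values below are placeholders precisely because the real thresholds are L3 trade secrets:

```typescript
// Section 11.3 composite IP score and filing decision (placeholder thresholds).
const WEIGHTS = {
  novelty: 0.3,
  nonObviousness: 0.25,
  utility: 0.15,
  strategicValue: 0.2,
  defensibility: 0.1,
} as const;

type Scores = Record<keyof typeof WEIGHTS, number>; // each criterion in [0, 1]

function ipScore(s: Scores): number {
  return (Object.keys(WEIGHTS) as Array<keyof typeof WEIGHTS>)
    .reduce((acc, k) => acc + WEIGHTS[k] * s[k], 0);
}

function filingDecision(
  s: Scores,
  tauFile = 0.7,    // placeholder, not the real L3 value
  tauNovelty = 0.6, // placeholder
  tauPub = 0.4      // placeholder
): 'file' | 'no-patent-value' | 'abandon' {
  const score = ipScore(s);
  if (score >= tauFile && s.novelty >= tauNovelty) return 'file';
  if (score >= tauPub) return 'no-patent-value'; // publishable, not patentable
  return 'abandon';
}

const finding: Scores = {
  novelty: 0.9, nonObviousness: 0.8, utility: 0.7,
  strategicValue: 0.9, defensibility: 0.6,
};
console.log(ipScore(finding).toFixed(3)); // "0.815"
console.log(filingDecision(finding));     // "file"
```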
11.4 Integration with Research Gate Policy
The IP Review Node interfaces with the four-level Research Gate Policy (RG0-RG3) defined in the Agentic Ethics Lab architecture. Specifically, the IP Review Node is triggered at the RG2 (Change Proposal) stage — when a research finding is packaged for adoption. This timing ensures that IP assessment happens after the research is mature enough to evaluate but before any external disclosure occurs.
The IP Review Node adds a branch: if the finding has patent value, the patent filing process runs in parallel with the RG2 change proposal process. The publication clearance from the IP pipeline must be obtained before the research can be presented externally, but the RG2/RG3 adoption process proceeds independently.
12. IP Portfolio Optimization Under Strategic Constraints
12.1 Optimization Problem
Given a set of innovations $\mathcal{I}$ with known value parameters, the IP portfolio optimization problem is to choose the partition $(\mathcal{I}_{L1}, \mathcal{I}_{L2}, \mathcal{I}_{L3})$ of $\mathcal{I}$ that maximizes total layer-assigned value:
$$\max_{(\mathcal{I}_{L1}, \mathcal{I}_{L2}, \mathcal{I}_{L3})} \sum_{k=1}^{3} \sum_{i \in \mathcal{I}_{Lk}} V(i, Lk)$$
subject to:
C1 (Trust minimum): At least $k_{\text{trust}}$ innovations must be in L1:
$$|\mathcal{I}_{L1}| \geq k_{\text{trust}}$$
C2 (Budget constraint): Total patent filing cost must not exceed budget $B$:
$$\sum_{i \in \mathcal{I}_{L2}} C_0(i) \leq B$$
C3 (Coverage constraint): At least one innovation from each governance primitive family $\mathcal{F}_j$ must be in L2:
$$\mathcal{F}_j \cap \mathcal{I}_{L2} \neq \emptyset \quad \forall j$$
C4 (Information containment): The partition must satisfy the information containment property from Definition 3.4.
12.2 Lagrangian Relaxation
We solve the constrained optimization via Lagrangian relaxation. Introducing dual variables $\lambda_1, \lambda_2, \lambda_3 \geq 0$ for constraints C1-C3, with $\pi$ denoting the candidate layer partition:
$$\mathcal{L}(\pi, \lambda) = \sum_{k=1}^{3} \sum_{i \in \mathcal{I}_{Lk}} V(i, Lk) + \lambda_1 \left( |\mathcal{I}_{L1}| - k_{\text{trust}} \right) + \lambda_2 \left( B - \sum_{i \in \mathcal{I}_{L2}} C_0(i) \right) + \lambda_3 \sum_j \mathbb{1}\!\left[ \mathcal{F}_j \cap \mathcal{I}_{L2} \neq \emptyset \right]$$
The dual problem is:
$$\min_{\lambda \geq 0} \max_{\pi} \mathcal{L}(\pi, \lambda)$$
which is solved by iterating: (1) fix $\lambda$, solve the inner maximization by assigning each innovation to its highest-value layer given the dual penalties; (2) update $\lambda$ via subgradient descent.
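The iteration can be sketched in a few lines. This is a simplified illustration restricted to the trust constraint C1 (at least `kTrust` innovations in L1); the innovation names and per-layer values are hypothetical:

```typescript
// Simplified Section 12.2 Lagrangian iteration (constraint C1 only).
type Layer = 'L1' | 'L2' | 'L3';
type Innovation = { name: string; value: Record<Layer, number> };

// Inner maximization: each innovation takes its best layer given the dual
// bonus λ on L1 assignments.
function assign(innovations: Innovation[], lambda: number): Layer[] {
  return innovations.map(inn => {
    const adjusted: Array<[Layer, number]> = [
      ['L1', inn.value.L1 + lambda],
      ['L2', inn.value.L2],
      ['L3', inn.value.L3],
    ];
    adjusted.sort((a, b) => b[1] - a[1]);
    return adjusted[0][0];
  });
}

// Outer minimization: subgradient descent on λ, tracking the best
// feasible assignment seen so far.
function optimize(innovations: Innovation[], kTrust: number, steps = 200): Layer[] {
  let lambda = 0;
  let best: Layer[] | null = null;
  let bestValue = -Infinity;
  for (let t = 1; t <= steps; t++) {
    const pi = assign(innovations, lambda);
    const countL1 = pi.filter(l => l === 'L1').length;
    if (countL1 >= kTrust) {
      const value = pi.reduce((s, l, i) => s + innovations[i].value[l], 0);
      if (value > bestValue) { bestValue = value; best = pi; }
    }
    // Subgradient of the dual at λ is (countL1 − kTrust); step size 1/t.
    lambda = Math.max(0, lambda - (countL1 - kTrust) / t);
  }
  if (best === null) throw new Error('no feasible assignment found');
  return best;
}

const result = optimize([
  { name: 'A', value: { L1: 5, L2: 10, L3: 3 } },
  { name: 'B', value: { L1: 8, L2: 9, L3: 2 } },
  { name: 'C', value: { L1: 2, L2: 4, L3: 6 } },
], 1);
console.log(result); // assignment with at least one 'L1' entry
```

When λ is too small, every innovation drifts to its privately best layer and the trust constraint fails; the dual update then raises the L1 bonus until the constraint binds.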
12.3 Convergence Guarantee
Theorem 12.1 (Portfolio Optimization Convergence). The Lagrangian dual decomposition converges to the optimal portfolio assignment in $O(\frac{1}{\epsilon^2})$ iterations for an $\epsilon$-optimal solution.
Proof sketch. The inner maximization is a simple assignment problem (each innovation independently assigned to its best layer). The dual function is concave in $\lambda$. Subgradient methods converge for concave maximization at the stated rate. $\square$
12.4 Practical Portfolio Decision Matrix
For practitioners who do not want to solve the full optimization, we provide a decision matrix:
| Characteristic | L1 (Open) | L2 (Patent) | L3 (Secret) |
| --- | --- | --- | --- |
| Abstraction level | Concept/definition | Algorithm/method | Parameter/threshold |
| Reproducibility from disclosure | Insufficient alone | Sufficient with effort | Not derivable |
| Trust contribution | High | Medium | None |
| Moat contribution | None | High | Medium |
| Operational advantage | None | Low | High |
| Regulatory sensitivity | High (builds compliance credibility) | Medium (method, not mandate) | Low (internal only) |
| Maintenance cost | None | High ($15K/yr) | Low (access control) |
13. The Open-Closed Balance: Concepts Open, Optimization Secret, Integration Patented
13.1 The Abstraction Principle
The three-layer model rests on a fundamental observation about AI governance innovation: the value of disclosure decreases as the level of abstraction decreases, while the value of secrecy increases. Writing $a$ for abstraction level (high $a$ = concept, low $a$ = parameter):
$$\frac{\partial V_{\text{disclosure}}}{\partial a} > 0, \qquad \frac{\partial V_{\text{secrecy}}}{\partial a} < 0$$
At the concept level, openness generates trust, adoption, and standardization — high-abstraction ideas benefit from network effects. At the parameter level, secrecy preserves operational advantage — low-abstraction tunings are valuable precisely because they are hard to replicate. At the implementation level, patents provide the optimal balance: enough disclosure to establish claims, enough protection to prevent copying.
13.2 Competitive Dynamics Across Market Phases
The open-closed balance creates a specific competitive dynamic in the AI governance market:
Phase 1 (Market Creation): Open specifications (L1) create a shared vocabulary and conceptual framework for AI governance. Competitors, analysts, and regulators all use the same terms: 'fail-closed gates,' 'ethical drift,' 'conflict scoring,' 'responsibility allocation.' This shared vocabulary is the foundation of a market category. The firm that defines the vocabulary shapes the market.
Phase 2 (Implementation Competition): Once the concepts are established, competition shifts to implementation quality. Patents (L2) create barriers to entry for implementation approaches that the firm has pioneered. Competitors must either license the patented methods or develop alternative implementations — both of which create competitive drag.
Phase 3 (Operational Excellence): In a mature market with established concepts and multiple implementations, competitive advantage shifts to operational performance. Trade secrets (L3) — the parameters, heuristics, and tunings that make the system perform at scale — become the primary differentiator.
The three-layer model is designed to dominate in all three phases simultaneously: L1 shapes the market, L2 controls implementation approaches, and L3 ensures operational superiority.
13.3 The MARIA OS Coordinate Application
Applied to the MARIA OS coordinate system, the three-layer balance operates at each architectural level:
| MARIA Level | L1 (Open) | L2 (Patent) | L3 (Secret) |
| --- | --- | --- | --- |
| Galaxy (Tenant) | Coordinate system concept | Multi-tenant isolation method | Tenant-specific calibration |
| Universe (BU) | Universe topology concept | Multi-Universe differential engine | Universe weight matrices |
| Planet (Domain) | Domain model concepts | Domain-specific gate evaluation | Domain threshold parameters |
| Zone (Ops) | Zone responsibility model | Zone-level constraint propagation | Zone performance heuristics |
| Agent | Agent role taxonomy | Agent autonomy calibration method | Agent-specific tuning parameters |
This table demonstrates that the three-layer model is not an overlay on the architecture — it is embedded in the architecture. Every level of the MARIA OS hierarchy has naturally occurring concepts (open), methods (patented), and parameters (secret).
13.4 Information-Theoretic Verification
We can verify that the MARIA OS layer assignment satisfies the information containment property (Definition 3.4) by computing the conditional entropy at each boundary, checking that
$$I(\mathcal{I}_{Lk}; \mathcal{I}_{L(k+1)}) \leq \epsilon_{\text{leak}} \cdot H(\mathcal{I}_{L(k+1)}), \quad k \in \{1, 2\},$$
equivalently $H(\mathcal{I}_{L(k+1)} \mid \mathcal{I}_{Lk}) \geq (1 - \epsilon_{\text{leak}}) \, H(\mathcal{I}_{L(k+1)})$.
For the Ethics DSL example: knowing the DSL grammar (L1) tells you that constraints are expressed as threshold, sensitivity, conjunction, and temporal expressions. It does NOT tell you how the optimized compiler transforms these expressions into efficient evaluation code (L2). The mutual information $I(\mathcal{I}_{L1}; \mathcal{I}_{L2})$ is bounded by the grammar complexity, which is a small fraction of the total implementation entropy. Similarly, knowing the compilation algorithm (L2) does NOT tell you the threshold values that trigger gate closure (L3). The containment property holds with $\epsilon_{\text{leak}} < 0.05$ across all assessed innovation pairs.
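The containment check reduces to standard information-theoretic quantities. A minimal sketch on a toy joint distribution — the distribution below is illustrative; the real check would run over the assessed innovation inventory:

```typescript
// Toy Section 13.4 containment check: I(X;Y) ≤ ε_leak · H(Y).
function entropy(p: number[]): number {
  return -p.filter(x => x > 0).reduce((s, x) => s + x * Math.log2(x), 0);
}

// joint[i][j] = P(X = i, Y = j); X models L1 knowledge, Y models L2 knowledge.
function mutualInformation(joint: number[][]): number {
  const px = joint.map(row => row.reduce((a, b) => a + b, 0));
  const py = joint[0].map((_, j) => joint.reduce((a, row) => a + row[j], 0));
  let mi = 0;
  for (let i = 0; i < joint.length; i++) {
    for (let j = 0; j < joint[i].length; j++) {
      const pij = joint[i][j];
      if (pij > 0) mi += pij * Math.log2(pij / (px[i] * py[j]));
    }
  }
  return mi;
}

function containmentHolds(joint: number[][], epsilonLeak: number): boolean {
  const py = joint[0].map((_, j) => joint.reduce((a, row) => a + row[j], 0));
  return mutualInformation(joint) <= epsilonLeak * entropy(py);
}

// Nearly independent layers: knowing X (the L1 grammar) says little about Y.
const nearlyIndependent = [
  [0.24, 0.26],
  [0.26, 0.24],
];
console.log(containmentHolds(nearlyIndependent, 0.05)); // true
```

A perfectly dependent joint (identity coupling) carries 1 bit of mutual information and fails the same check, which is the leakage case the boundary conditions are designed to rule out.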
14. Conclusion
The intellectual property strategy for an AI governance platform is not a legal afterthought — it is a structural component of the competitive architecture, as fundamental as the gate evaluation algorithm or the conflict scoring engine. The Three-Layer IP Model presented in this paper resolves the IP Trilemma by aligning each layer with a specific strategic objective (trust, defensibility, operational advantage) and a specific abstraction level (concept, implementation, parameter).
The key contributions of this work are:
1. Formal Game-Theoretic Foundation. The Information Disclosure Game provides a rigorous framework for determining optimal disclosure policy, with a Nash equilibrium that naturally produces the three-layer partition (Section 2).
2. Precise Layer Boundary Conditions. The partition constraint, information containment property, and boundary conditions (Section 3) ensure that the three layers are not arbitrary categories but mathematically defined sets with verifiable properties.
3. Patent Value Quantification. The Patent Value Function $V_p = \int_0^T e^{-rt} \cdot [M(t) - C(t)] \, dt$ with the lifecycle model for market protection value enables rational prioritization of patent investments (Section 6).
4. Structural Pipeline. The Research-to-Patent Pipeline state machine structurally eliminates the filing gap — the most costly failure mode in research-intensive IP strategy — by embedding the IP Review Node in the research Decision Graph (Sections 7, 11).
5. Five-Year Roadmap. The concrete IP roadmap (Section 9) translates theory into action, with specific filing targets, licensing milestones, and portfolio valuation goals across five years.
6. Brand-IP Coherence. The positioning 'we patent structural ethics' is not a defensive accommodation but an offensive brand strategy that converts technical innovation into market authority (Section 8).
7. Portfolio Optimization. The constrained optimization framework with Lagrangian relaxation provides a principled method for allocating innovations across layers under budget, trust, and coverage constraints (Section 12).
The deeper insight is that the three-layer model mirrors the structure of knowledge itself. Concepts are shared because they define the space of discourse. Methods are protected because they represent specific solutions within that space. Parameters are secret because they represent experiential calibration that cannot be communicated — only accumulated. An AI governance IP strategy that respects this structure does not compromise between openness and protection; it achieves both by operating at different levels of abstraction simultaneously.
For the AI governance industry as a whole, the implication is clear: the organization that defines the open concepts, patents the structural methods, and guards the operational parameters does not merely compete in the market — it defines the market. The vocabulary of 'fail-closed gates,' 'ethical drift,' 'conflict scoring,' and 'responsibility allocation' will become the industry standard. The question is who writes that vocabulary. The answer is whoever files first, publishes second, and optimizes continuously.
Appendix A: Three-Layer Asset Mapping Summary
Three-Layer IP Asset Map
═══════════════════════════════════════════════════════════════════
L1: OPEN SPECIFICATION
├── Ethics DSL v1.0 Syntax & Semantics
├── Ethical Drift Index Definition
├── ConflictScore Concept (cosine dissimilarity on value vectors)
├── Agentic Company Blueprint Concepts
├── Responsibility Allocation Model (conceptual)
├── Research Papers (post-filing)
└── Industry Standard Proposals
L2: PROTECTED ALGORITHMS (Patent Families)
├── F1: Fail-Closed Gate Evaluation ($\max_i$ risk aggregation)
├── F2: Multi-Universe Differential Evaluation Engine
├── F3: ConflictScore Computation (temporal windowed)
├── F4: Responsibility-Constrained RL Integration
├── F5: Incremental Ethical Drift Detection
├── D1: Alternative Gate Functions (defensive)
├── D2: Sequential Universe Evaluation (defensive)
├── D3: Graph-Based Conflict Detection (defensive)
├── D4: Constraint Satisfaction RL (defensive)
├── D5: Batch Drift Detection (defensive)
├── D6: Responsibility Transfer Methods (defensive)
└── D7: Audit Evidence Chains (defensive)
L3: TRADE SECRETS
├── Gate Threshold Matrix (per universe, tier, vertical)
├── Risk Evaluation Weight Matrices
├── Customer-Specific Calibration Parameters
├── Internal Optimization Heuristics
├── Performance Benchmark Datasets
├── IP Review Gate Thresholds (τ_file, τ_novelty, τ_pub)
└── Drift Alert Sensitivity Parameters
Appendix B: Research-to-Patent Pipeline Database Schema
CREATE TABLE ip_disclosures (
  id UUID PRIMARY KEY,
  research_finding_id UUID REFERENCES research_findings(id),
  discloser_coordinate TEXT NOT NULL,
  pipeline_state TEXT CHECK (pipeline_state IN (
    'research','disclosed','assessed','filing','filed',
    'publishable','published','abandoned'
  )),
  layer_assignment TEXT CHECK (layer_assignment IN ('L1','L2','L3')),
  ip_score NUMERIC(4,3),
  novelty_score NUMERIC(4,3),
  filing_priority TEXT CHECK (filing_priority IN (
    'critical','high','medium','low'
  )),
  created_at TIMESTAMPTZ DEFAULT now(),
  assessed_at TIMESTAMPTZ,
  filed_at TIMESTAMPTZ,
  published_at TIMESTAMPTZ,
  evidence_bundle_hash TEXT NOT NULL
);
CREATE TABLE ip_transitions (
  id UUID PRIMARY KEY,
  disclosure_id UUID REFERENCES ip_disclosures(id),
  from_state TEXT NOT NULL,
  to_state TEXT NOT NULL,
  decision TEXT CHECK (decision IN ('proceed','block','abandon')),
  reviewer TEXT NOT NULL,
  reviewer_coordinate TEXT NOT NULL,
  rationale TEXT NOT NULL,
  evidence_hash TEXT NOT NULL,
  created_at TIMESTAMPTZ DEFAULT now()
);
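Rows in ip_transitions should only ever record moves that the pipeline state machine permits. A TypeScript sketch of that validation follows; the transition graph here is an assumption inferred from the state names in the pipeline_state CHECK constraint (notably, publication only after a priority date is secured), not an authoritative specification:

```typescript
// Legal transitions for ip_disclosures.pipeline_state. The graph below is
// inferred from the state names; the authoritative graph belongs to the
// IP Review gate, not this sketch.
const TRANSITIONS: Record<string, string[]> = {
  research:    ["disclosed"],
  disclosed:   ["assessed", "abandoned"],
  assessed:    ["filing", "publishable", "abandoned"],
  filing:      ["filed", "abandoned"],
  filed:       ["publishable"], // publish only after the priority date is secured
  publishable: ["published"],
  published:   [], // terminal
  abandoned:   [], // terminal
};

function canTransition(from: string, to: string): boolean {
  // Fail closed: an unknown source state permits no transitions.
  return (TRANSITIONS[from] ?? []).includes(to);
}
```

In production this check would run as a database trigger or in the service layer before inserting into ip_transitions, so that the audit trail cannot contain an impossible state change.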
CREATE TABLE patent_families (
  id UUID PRIMARY KEY,
  family_code TEXT UNIQUE NOT NULL,
  family_type TEXT CHECK (family_type IN ('structural','defensive')),
  title TEXT NOT NULL,
  core_innovation TEXT NOT NULL,
  layer TEXT DEFAULT 'L2',
  filing_status TEXT CHECK (filing_status IN (
    'planned','provisional','utility','pct','granted','abandoned'
  )),
  priority_date DATE,
  grant_date DATE,
  jurisdictions TEXT[],
  estimated_value NUMERIC(12,2),
  created_at TIMESTAMPTZ DEFAULT now()
);
CREATE TABLE trade_secret_registry (
  id UUID PRIMARY KEY,
  secret_class TEXT CHECK (secret_class IN (
    'threshold','weight_matrix','calibration','heuristic','dataset'
  )),
  description TEXT NOT NULL,
  access_level TEXT CHECK (access_level IN ('l3-core','l3-extended')),
  compartment_id TEXT NOT NULL,
  last_access_audit TIMESTAMPTZ,
  created_at TIMESTAMPTZ DEFAULT now()
);
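The access_level and compartment_id columns imply a two-factor access rule: clearance level plus need-to-know per compartment. A hedged TypeScript sketch of that rule, where the dominance of l3-core over l3-extended and the compartment semantics are assumptions about the registry's intent rather than a stated policy:

```typescript
// Compartmented access check for the trade-secret registry. The two access
// levels mirror the CHECK constraint above; treating l3-core as the stricter
// inner circle is an assumption made for illustration.
interface SecretRecord {
  accessLevel: "l3-core" | "l3-extended";
  compartmentId: string;
}

interface Principal {
  clearance: "l3-core" | "l3-extended" | "none";
  compartments: string[]; // compartment_ids the principal is read into
}

function mayAccess(p: Principal, s: SecretRecord): boolean {
  // Clearance check: l3-core dominates l3-extended; "none" sees nothing.
  const levelOk =
    p.clearance === "l3-core" ||
    (p.clearance === "l3-extended" && s.accessLevel === "l3-extended");
  // Clearance is necessary but not sufficient: the principal must also be
  // read into the specific compartment (need-to-know).
  return levelOk && p.compartments.includes(s.compartmentId);
}
```

Every grant or denial produced by such a check would feed last_access_audit, which is what makes the registry defensible as a trade-secret program (documented reasonable measures).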
Appendix C: Mathematical Notation Reference
| Symbol | Meaning |
| --- | --- |
| $\mathcal{I}$ | Set of all governance innovations |
| $\mathcal{I}_{Lk}$ | Innovations assigned to layer $k$ |
| $\sigma_F$ | Firm's disclosure policy function |
| $\sigma_F^*$ | Nash equilibrium disclosure policy |
| $V_{\text{portfolio}}$ | Total IP portfolio value |
| $V_p$ | Present value of a single patent |
| $M(t)$ | Market protection value at time $t$ |
| $C(t)$ | Patent maintenance cost at time $t$ |
| $r$ | Discount rate |
| $T$ | Patent term (20 years) |
| $H(\cdot)$ | Shannon entropy |
| $\epsilon_{\text{leak}}$ | Maximum permissible information leakage |
| $\tau_{\text{gate}}$ | Gate threshold parameter |
| $\tau_{\text{file}}$ | IP Review filing threshold |
| $D_{\text{drift}}(t)$ | Ethical drift index at time $t$ |
| $\mathcal{P}$ | Research-to-Patent Pipeline state machine |
| $S_{\text{IP}}(r)$ | Composite IP score for research output $r$ |
| $\delta_{\min}$ | Value ordering margin (partition robustness) |
| $\mathcal{F}$ | Set of patent families |
| $F_i$ | Filing cost for innovation $i$ |
| $R_{\text{license}}$ | Licensing revenue |
| $\eta_{kl}$ | Cross-layer coupling coefficient |
| $\lambda_g$ | Market growth rate |
| $\lambda_d$ | Technology decay rate |
| $\rho_{\text{open}}$ | Reproducibility threshold for L1 |
| $\nu_{\min}$ | Minimum novelty threshold for L2 |
| $\delta_{\max}$ | Maximum independent derivability for L3 |
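Several of these symbols combine into the single-patent valuation referenced by $V_p$. As a reconstruction consistent with the definitions of $M(t)$, $C(t)$, $r$, and $T$ above (the standard discounted net-value form; the report's exact formula may differ):

```latex
V_p = \int_0^T \bigl( M(t) - C(t) \bigr)\, e^{-rt}\, dt
```

Intuitively, a patent is worth filing only when the discounted market protection it provides over its 20-year term exceeds the discounted stream of maintenance costs.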
Appendix D: Patent Claim Drafting Template
STRUCTURAL PATENT CLAIM TEMPLATE
═══════════════════════════════════════════════════════════════════
TITLE: [Method for {core innovation} in {domain}]
INDEPENDENT CLAIM 1 (Method):
A computer-implemented method for [core function],
comprising:
(a) receiving [input specification];
(b) [first processing step with technical detail];
(c) [second processing step];
(d) applying [specific algorithm/function] to produce
[specific output];
(e) generating an immutable audit record comprising
[evidence bundle contents].
DEPENDENT CLAIMS 2-N:
Claim 2: ...wherein step (d) uses [specific variation].
Claim 3: ...wherein the audit record includes [specific field].
Claim 4: ...further comprising [extension step].
INDEPENDENT CLAIM N+1 (System):
A system comprising:
a processor; a memory storing instructions; wherein the
instructions cause the processor to perform the method
of claim 1.
INDEPENDENT CLAIM N+2 (Computer-Readable Medium):
A non-transitory computer-readable medium storing instructions
that, when executed by a processor, cause the processor to
perform the method of claim 1.
SPECIFICATION REQUIREMENTS:
- At least one working code example (TypeScript preferred)
- Mathematical formalization of core algorithm
- Performance benchmarks (latency, accuracy, throughput)
- At least one alternative embodiment
- Comparison with prior art approaches
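The template's "working code example" requirement can be met with a short illustrative embodiment. For family F1, a sketch might look like the following; the gate computation shown (worst-case dimension versus a threshold) is the published concept only, and the value of τ_gate here is a placeholder, since production thresholds are L3 trade secrets:

```typescript
// Illustrative embodiment of family F1: a fail-closed gate that blocks an
// action whenever the worst-case risk dimension (max_i over the risk score
// vector) reaches the threshold. Fail-closed means that missing evidence or
// an evaluation error also blocks.
const TAU_GATE = 0.7; // placeholder; production thresholds are trade secrets

function gateOpen(riskScores: number[], tau: number = TAU_GATE): boolean {
  try {
    if (riskScores.length === 0) return false; // no evidence: fail closed
    const worst = Math.max(...riskScores);     // the max_i of family F1
    return worst < tau;                        // open only below threshold
  } catch {
    return false; // any evaluation failure: fail closed
  }
}
```

A real specification would pair this with the mathematical formalization, benchmarks, and at least one alternative embodiment (e.g., the D1 alternative gate functions), as the checklist above requires.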
Appendix E: MARIA OS Coordinate Assignment for IP Infrastructure
IP Infrastructure Coordinates:
═══════════════════════════════════════════════════════
G1.U_EL.P4.Z3 — IP Operations Zone
├── A_IP: IP Review Agent (gate evaluation)
├── A_PRIOR: Prior Art Search Agent
├── A_DRAFT: Patent Drafting Support Agent
├── A_TRACK: Portfolio Tracking Agent
└── A_LEAK: Trade Secret Monitoring Agent
G1.U_EL.P4.Z3.A_IP — IP Review Node
Gate Type: fail-closed
Eval Mode: human-required (ip-counsel)
Max Latency: 14 days
Inputs: research output, prior art report, layer assessment
Outputs: file/no-value/abandon, rationale, layer, priority
Integration Points:
RG1 (Simulation) → IP Review Node → RG2 (Change Proposal)
IP Review Node ←→ Patent Pipeline State Machine
Trade Secret Registry ← A_LEAK monitoring
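The IP Review Node attributes listed above can be captured declaratively. A TypeScript sketch follows; the field names are illustrative assumptions, since MARIA OS's actual node configuration schema is not specified in this appendix:

```typescript
// Declarative sketch of the G1.U_EL.P4.Z3.A_IP node shown above.
// Field names are hypothetical; only the values come from the appendix.
const ipReviewNode = {
  coordinate: "G1.U_EL.P4.Z3.A_IP",
  gateType: "fail-closed",
  evalMode: "human-required", // an ip-counsel reviewer must sign off
  maxLatencyDays: 14,
  inputs: ["research_output", "prior_art_report", "layer_assessment"],
  // decision is one of: file | no-value | abandon
  outputs: ["decision", "rationale", "layer", "priority"],
} as const;
```

Encoding the gate as data rather than code lets the same configuration drive both the runtime gate and the RG1 → IP Review → RG2 integration shown above.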
Appendix F: References
[1] European Parliament and Council. EU Artificial Intelligence Act, Regulation (EU) 2024/1689. Official Journal of the European Union, 2024.
[2] National Institute of Standards and Technology. AI Risk Management Framework (AI RMF 1.0). NIST AI 100-1, 2023.
[3] International Organization for Standardization. ISO/IEC 42001:2023 — Artificial Intelligence Management System. ISO, 2023.
[4] Bass, F. M. A New Product Growth for Model Consumer Durables. Management Science, 15(5):215-227, 1969.
[5] Shapiro, C. and Varian, H. R. Information Rules: A Strategic Guide to the Network Economy. Harvard Business School Press, 1998.
[6] Teece, D. J. Profiting from Technological Innovation. Research Policy, 15(6):285-305, 1986.
[7] Reitzig, M. Strategic Management of Intellectual Property. MIT Sloan Management Review, 45(3):35-40, 2004.
[8] Lemley, M. A. Software Patents and the Return of Functional Claiming. Wisconsin Law Review, 2013(4):905-964, 2013.
[9] Bessen, J. and Meurer, M. J. Patent Failure: How Judges, Bureaucrats, and Lawyers Put Innovators at Risk. Princeton University Press, 2008.
[10] West, J. How Open is Open Enough? Research Policy, 32(7):1259-1285, 2003.
[11] Chesbrough, H. W. Open Innovation: The New Imperative for Creating and Profiting from Technology. Harvard Business School Press, 2003.
[12] Rivette, K. G. and Kline, D. Rembrandts in the Attic: Unlocking the Hidden Value of Patents. Harvard Business School Press, 2000.
[13] Arrow, K. J. Economic Welfare and the Allocation of Resources for Invention. In The Rate and Direction of Inventive Activity, pages 609-626. Princeton University Press, 1962.
[14] MARIA OS Decision Intelligence Theory. Technical Report ARIA-DIT-2026-001, 2026.
[15] Nash, J. F. Equilibrium Points in N-Person Games. Proceedings of the National Academy of Sciences, 36(1):48-49, 1950.
[16] Banach, S. Sur les opérations dans les ensembles abstraits et leur application aux équations intégrales. Fundamenta Mathematicae, 3:133-181, 1922.