Abstract
Enterprise governance platforms produce immutable audit trails that record every decision, approval, escalation, and state transition across organizational hierarchies. These trails constitute a rich but underutilized source of relational knowledge: they encode who decided what, when, under whose authority, with what evidence, and with what outcome. Yet in most deployments, audit data remains trapped in relational tables optimized for append-only writes and point lookups, structurally incapable of supporting the multi-hop relational queries that governance traceability demands. This paper presents a formal framework for constructing knowledge graphs from decision audit trails within the MARIA OS governance platform. We formalize the audit-to-graph transformation as a pipeline comprising entity extraction from structured decision records, entity resolution across multiple agents and zones operating under the MARIA coordinate system G(galaxy).U(universe).P(planet).Z(zone).A(agent), temporal edge weighting via a hybrid exponential-polynomial decay function w(t) = alpha exp(-lambda t) + (1 - alpha) / (1 + t)^beta that captures both short-term recency and long-term structural relevance, and subgraph extraction optimized for compliance pattern queries. We introduce the Audit Entity Resolution (AER) algorithm that leverages coordinate proximity, decision co-participation, and embedding similarity to resolve entities across noisy multi-agent audit data, achieving 91.3% F1 on cross-zone resolution tasks. We derive theoretical bounds on temporal decay parameter selection from the decision half-life distribution and show that the hybrid decay model achieves AUC 0.946 on edge relevance prediction, outperforming pure exponential (AUC 0.891) and power-law (AUC 0.873) alternatives. Compliance subgraph extraction using pre-materialized temporal indices achieves 2.7x speedup over equivalent relational queries. The framework is validated on MARIA OS audit corpora spanning 147,000 decision records across 3 galaxies, 12 universes, and 84 zones.
1. Introduction
Every decision that traverses the MARIA OS pipeline generates an audit record. When an agent at coordinate G1.U2.P3.Z1.A4 proposes a procurement decision, the system logs the proposal timestamp, the proposing agent's identity and coordinate, the decision payload, and the initial state assignment. When the decision transitions from proposed to validated, another record captures the validator identity, the validation timestamp, the evidence bundle referenced, and any constraint violations detected. Each subsequent transition through approval_required, approved, executed, and completed or failed produces additional immutable records. A single decision that completes its lifecycle generates between 4 and 12 audit records depending on the number of approval gates, escalations, and evidence attachments encountered along the way.
Across an enterprise deployment, this audit generation rate is substantial. A mid-sized MARIA OS installation processing 500 decisions per day across 40 zones produces approximately 3,000 to 6,000 audit records daily, accumulating to over one million records per year. Each record contains structured fields (timestamps, coordinates, state identifiers, foreign keys to evidence bundles) alongside semi-structured fields (decision descriptions, approval rationale text, escalation justifications). The structured fields enable precise point queries: retrieve all transitions for decision D-4821, or list all approvals by agent G1.U2.P3.Z1.A4 in January 2026. But the relational queries that governance traceability actually requires are fundamentally different in character.
A compliance officer investigating a regulatory finding does not ask point questions. They ask relational questions: What chain of decisions led to this outcome? Which agents participated in decisions that share common evidence bundles? Are there responsibility gaps where decisions were approved without the required evidence coverage? How has the approval pattern for this category of decision changed over the past six months? These questions require traversing relationships between entities across multiple decision records, following temporal sequences, and aggregating patterns across organizational boundaries. Relational databases can answer these questions, but only through complex multi-join queries that are expensive to write, expensive to execute, and brittle to schema changes.
Knowledge graphs offer a natural alternative. By representing decision entities (agents, decisions, evidence bundles, approval records, zones, policies) as nodes and their relationships (proposed_by, approved_by, references_evidence, escalated_to, constrained_by) as typed edges, the audit trail becomes a queryable graph structure. Multi-hop relational queries that require 5 or 6 table joins in SQL become simple graph traversals. Temporal patterns become sequences of edges with monotonically increasing timestamps. Responsibility gaps become missing edges in expected subgraph patterns.
However, constructing a knowledge graph from audit trails is not a trivial transformation. Three fundamental challenges must be addressed. First, entity resolution: the same real-world entity may appear under different identifiers across audit records, particularly when agents are reassigned, zones are restructured, or decisions span multiple universes. Second, temporal edge weighting: not all relationships are equally relevant. A decision approved yesterday is more governance-relevant than one approved three years ago, but structural relationships (e.g., an agent's permanent assignment to a zone) do not decay in the same way. Third, efficient subgraph extraction: compliance queries typically target specific subgraph patterns (e.g., all approval chains for decisions exceeding a financial threshold), and these extractions must execute within interactive latency bounds to be useful in audit workflows.
This paper addresses all three challenges within a unified formal framework designed for the MARIA OS governance platform.
2. Formal Model: Audit Trails as Typed Temporal Graphs
2.1 Audit Record Schema
We formalize a single audit record as a tuple r = (id, decision_id, from_state, to_state, actor_coordinate, timestamp, evidence_refs, metadata). The set of all audit records R = {r_1, r_2, ..., r_N} constitutes the raw input to the knowledge graph construction pipeline. Each record encodes a single state transition in the MARIA OS decision pipeline, and the complete lifecycle of a decision is the ordered sequence of records sharing the same decision_id.
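For concreteness, the following is a minimal Python sketch of this record schema and lifecycle ordering. Field names mirror the tuple above; the timestamp unit (epoch seconds) and type choices are illustrative assumptions rather than the production schema.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass(frozen=True)
class AuditRecord:
    """One immutable state transition r = (id, decision_id, from_state, to_state,
    actor_coordinate, timestamp, evidence_refs, metadata)."""
    id: str                       # unique record identifier
    decision_id: str              # groups records into a decision lifecycle
    from_state: Optional[str]     # None for the initial 'proposed' record
    to_state: str
    actor_coordinate: str         # e.g. "G1.U2.P3.Z1.A4"
    timestamp: float              # epoch seconds (assumed unit)
    evidence_refs: tuple = ()     # identifiers of attached evidence bundles
    metadata: dict = field(default_factory=dict)

def decision_lifecycle(records: list[AuditRecord], decision_id: str) -> list[AuditRecord]:
    """The complete lifecycle of a decision is the time-ordered sequence of its records."""
    return sorted((r for r in records if r.decision_id == decision_id),
                  key=lambda r: r.timestamp)
```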
2.2 Entity Types and Node Schema
From the audit record schema, we extract the following entity types for knowledge graph nodes:
- Agent nodes (A): Unique actor identities extracted from actor_coordinate fields. Each agent node carries attributes including its MARIA coordinate, role, authority level, and historical activity summary.
- Decision nodes (D): Unique decision entities extracted from decision_id fields. Each decision node carries attributes including its current state, creation timestamp, category, financial magnitude, and risk classification.
- Evidence nodes (E): Unique evidence bundle identities extracted from evidence_refs fields. Each evidence node carries attributes including bundle type, creation timestamp, verification status, and content hash.
- Zone nodes (Z): Organizational units extracted from the zone component of actor coordinates. Each zone node carries attributes including its parent planet, universe, and galaxy coordinates.
- Policy nodes (P): Governance rules referenced in constraint violation metadata. Each policy node carries attributes including threshold values, applicability scope, and enforcement mode.
- State nodes (S): The six canonical pipeline states (proposed, validated, approval_required, approved, executed, completed/failed) represented as shared reference nodes.
The complete node set is V = A union D union E union Z union P union S, with each node v carrying a type label type(v) in {agent, decision, evidence, zone, policy, state}.
2.3 Edge Types and Relationship Schema
Edges are extracted from audit records by analyzing the semantic content of each transition. We define the following edge types:
- proposed_by(d, a, t): Agent a proposed decision d at time t.
- transitioned_by(d, a, s1, s2, t): Agent a transitioned decision d from state s1 to state s2 at time t.
- approved_by(d, a, t): Agent a approved decision d at time t (special case of transitioned_by where s2 = approved).
- references(d, e, t): Decision d references evidence bundle e, attached at time t.
- escalated_to(d, a, t): Decision d was escalated to agent a at time t.
- belongs_to(a, z): Agent a belongs to zone z (atemporal structural edge).
- constrained_by(d, p): Decision d is constrained by policy p.
- co_participated(a1, a2, d): Agents a1 and a2 both acted on decision d (derived edge).
The complete edge set is E_graph = {(u, v, type, t, w) : u, v in V}, where type is the edge type label, t is the timestamp (or null for atemporal edges), and w is the edge weight.
2.4 Typed Temporal Graph Definition
The resulting knowledge graph is a typed temporal graph G = (V, E_graph, T, W) where T : E_graph -> R union {null} assigns timestamps to temporal edges and W : E_graph -> R+ assigns weights. The graph admits both temporal queries (all edges in a time window) and structural queries (all paths of a given type pattern) as first-class operations.
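The following is a minimal in-memory sketch of such a typed temporal graph. The adjacency-list layout and method names are illustrative, not the production Neo4j-backed store described in Section 8; atemporal structural edges carry a null (None) timestamp as defined above.

```python
from dataclasses import dataclass
from typing import Optional
from collections import defaultdict

@dataclass(frozen=True)
class Edge:
    src: str             # source node id
    dst: str             # destination node id
    etype: str           # e.g. "proposed_by", "approved_by", "belongs_to"
    t: Optional[float]   # timestamp; None for atemporal structural edges
    w: float = 1.0       # edge weight (Section 4)

class TypedTemporalGraph:
    """G = (V, E_graph, T, W): typed nodes plus typed, optionally timestamped, weighted edges."""
    def __init__(self):
        self.node_type: dict[str, str] = {}                  # node id -> type label
        self.out_edges: dict[str, list[Edge]] = defaultdict(list)

    def add_node(self, node_id: str, ntype: str) -> None:
        self.node_type[node_id] = ntype

    def add_edge(self, edge: Edge) -> None:
        self.out_edges[edge.src].append(edge)

    def edges_in_window(self, t1: float, t2: float) -> list[Edge]:
        """Temporal query: all timestamped edges created within [t1, t2]."""
        return [e for es in self.out_edges.values() for e in es
                if e.t is not None and t1 <= e.t <= t2]
```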
3. Entity Resolution in Multi-Agent Audit Data
3.1 The Resolution Challenge
Entity resolution in audit trails differs from classical entity resolution in several important ways. First, entities are identified by structured coordinates (G1.U2.P3.Z1.A4) rather than free-text names, which provides strong structural signals but introduces brittleness when coordinates change due to organizational restructuring. Second, the same physical person may operate under multiple agent identifiers across different zones or universes. Third, evidence bundles may be duplicated or versioned across decisions, with different identifiers pointing to substantially overlapping content. Fourth, audit records are append-only, meaning that historical references to deprecated coordinates cannot be retroactively updated.
3.2 The AER Algorithm
We introduce the Audit Entity Resolution (AER) algorithm that combines three similarity signals into a unified resolution score. For two candidate entity mentions m_i and m_j, the resolution score is:
AER(m_i, m_j) = w_c CoordSim(m_i, m_j) + w_p ParticipSim(m_i, m_j) + w_e EmbedSim(m_i, m_j),
where w_c + w_p + w_e = 1 are learned weights, and the three component similarities are defined as follows.
Coordinate Similarity. For agent entities with MARIA coordinates, we define CoordSim as the normalized hierarchical distance: CoordSim(m_i, m_j) = 1 - d_H(c_i, c_j) / d_max, where d_H is the hierarchical distance between coordinates c_i and c_j (counting the number of differing levels from galaxy down to agent), and d_max = 5 is the maximum possible distance (all levels differ). Two agents in the same zone but with different agent IDs have CoordSim = 0.8; agents in the same planet but different zones have CoordSim = 0.6.
Participation Similarity. CoordSim captures structural proximity but not behavioral similarity. Two agents in distant zones may be the same person operating under different assignments. ParticipSim captures this by measuring the Jaccard similarity of decision participation sets: ParticipSim(m_i, m_j) = |D(m_i) intersection D(m_j)| / |D(m_i) union D(m_j)|, where D(m) is the set of decisions that mention m participated in.
Embedding Similarity. For evidence bundles and decisions with textual descriptions, EmbedSim is the cosine similarity between sentence embeddings of the associated text: EmbedSim(m_i, m_j) = cos(embed(text(m_i)), embed(text(m_j))).
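The three component similarities and their weighted combination can be sketched as follows. The hierarchical counting of d_H (all levels at and below the first divergence) is our reading of the worked examples above, and the mention representation (coordinate, decision set, embedding vector) is an assumed illustration, not the production feature schema.

```python
import math

def coord_sim(c_i: str, c_j: str, d_max: int = 5) -> float:
    """CoordSim = 1 - d_H/d_max. d_H is taken as the number of levels at or below the
    first level (galaxy -> agent) where the coordinates diverge, so same-zone agents
    with different IDs score 0.8 and same-planet agents in different zones score 0.6."""
    for k, (a, b) in enumerate(zip(c_i.split("."), c_j.split("."))):
        if a != b:
            return 1.0 - (d_max - k) / d_max
    return 1.0

def particip_sim(d_i: set[str], d_j: set[str]) -> float:
    """Jaccard similarity of the decision-participation sets."""
    union = d_i | d_j
    return len(d_i & d_j) / len(union) if union else 0.0

def embed_sim(v_i: list[float], v_j: list[float]) -> float:
    """Cosine similarity between precomputed text embeddings."""
    dot = sum(a * b for a, b in zip(v_i, v_j))
    norm = math.sqrt(sum(a * a for a in v_i)) * math.sqrt(sum(b * b for b in v_j))
    return dot / norm if norm else 0.0

def aer_score(m_i: dict, m_j: dict, w_c: float = 0.4, w_p: float = 0.35, w_e: float = 0.25) -> float:
    """AER = w_c*CoordSim + w_p*ParticipSim + w_e*EmbedSim, with weights summing to 1
    (defaults taken from the evaluation in Section 3.4)."""
    return (w_c * coord_sim(m_i["coord"], m_j["coord"])
            + w_p * particip_sim(m_i["decisions"], m_j["decisions"])
            + w_e * embed_sim(m_i["embedding"], m_j["embedding"]))
```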
3.3 Resolution Threshold and Clustering
Entity pairs with AER(m_i, m_j) above a threshold tau_r are considered matches. We determine tau_r by optimizing F1 on a held-out labeled dataset of known entity pairs. The resolved entities are then clustered using single-linkage agglomerative clustering: if m_i matches m_j and m_j matches m_k, then all three are resolved to the same canonical entity, even if AER(m_i, m_k) < tau_r. This transitive closure property is important for resolving chains of identity changes (e.g., an agent reassigned from Z1 to Z2 to Z3 over time).
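Single-linkage clustering with transitive closure is naturally realized with a disjoint-set (union-find) structure, as in the sketch below. The pairwise scoring function is passed in (e.g. aer_score from the sketch above), and tau_r = 0.75 is a placeholder for the threshold that the paper tunes on the held-out labeled set.

```python
class UnionFind:
    """Disjoint-set structure; union-ing every matched pair realizes single-linkage
    clustering with transitive closure."""
    def __init__(self):
        self.parent: dict[str, str] = {}

    def find(self, x: str) -> str:
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]   # path halving
            x = self.parent[x]
        return x

    def union(self, a: str, b: str) -> None:
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

def resolve_entities(mentions: dict, score, tau_r: float = 0.75) -> dict[str, str]:
    """Merge every mention pair whose score reaches tau_r; returns mention -> canonical id.
    Transitivity is automatic: if m_i matches m_j and m_j matches m_k, all three share
    one canonical id even if score(m_i, m_k) < tau_r."""
    uf = UnionFind()
    ids = list(mentions)
    for i, mi in enumerate(ids):
        for mj in ids[i + 1:]:
            if score(mentions[mi], mentions[mj]) >= tau_r:
                uf.union(mi, mj)
    return {m: uf.find(m) for m in ids}
```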
3.4 Experimental Results on Entity Resolution
We evaluated AER on a labeled dataset of 12,400 entity mention pairs extracted from MARIA OS audit logs spanning 3 galaxies and 12 universes. The dataset includes 2,180 positive pairs (same entity under different mentions) and 10,220 negative pairs. Results:
| Method | Precision | Recall | F1 |
|---|---|---|---|
| Coordinate Only | 94.2% | 71.8% | 81.5% |
| Participation Only | 78.1% | 83.4% | 80.7% |
| Embedding Only | 72.6% | 88.9% | 79.9% |
| AER (w_c=0.4, w_p=0.35, w_e=0.25) | 93.1% | 89.6% | 91.3% |
The combined AER score achieves 91.3% F1, substantially outperforming any individual signal. Coordinate similarity alone achieves high precision (94.2%) but low recall (71.8%) because it misses cross-zone reassignments. Participation similarity recovers many of those reassignments (recall 83.4%) because behavioral overlap is a robust indicator of identity regardless of coordinate changes, while embedding similarity attains the highest individual recall (88.9%) at the lowest precision (72.6%). The learned weight combination balances these strengths.
4. Temporal Edge Weighting: Decay Functions for Decision Relevance
4.1 The Temporal Relevance Problem
Not all edges in the knowledge graph are equally relevant to a given governance query. A decision approved yesterday is more likely to be relevant to a current compliance investigation than one approved three years ago. However, temporal relevance is not simply recency. Some relationships are structurally persistent: an agent's assignment to a zone, a policy's applicability to a decision category, or a decision's causal dependence on an antecedent decision may remain relevant indefinitely. The temporal weighting function must capture both the recency-driven decay of operational relevance and the persistence of structural relevance.
4.2 Hybrid Exponential-Polynomial Decay
We propose a hybrid decay function that combines exponential decay (capturing short-term recency) with polynomial decay (capturing long-term structural persistence):
w(t) = alpha exp(-lambda t) + (1 - alpha) / (1 + t)^beta,
where t is the time elapsed since the edge was created (in days), alpha in [0, 1] is the mixing parameter between exponential and polynomial components, lambda > 0 is the exponential decay rate, and beta > 0 is the polynomial decay exponent. The function satisfies w(0) = 1 (edges at creation have full weight) and w(t) -> 0 as t -> infinity (all edges eventually lose relevance, though the polynomial tail ensures they never reach exactly zero in finite time).
4.3 Parameter Selection from Decision Half-Life
The decay parameters can be derived from the empirical decision half-life, defined as the median time t_{1/2} at which decisions cease to be referenced in subsequent audit records. From a corpus of 147,000 decision records, we measured t_{1/2} = 42 days for operational decisions, t_{1/2} = 180 days for strategic decisions, and t_{1/2} = 730 days for policy decisions.
Setting w(t_{1/2}) = 0.5 by requiring each component to fall to half its initial contribution at the half-life (with alpha fixed from cross-validation), we obtain lambda = ln(2) / t_{1/2} for the exponential component and beta = ln(2) / ln(1 + t_{1/2}) for the polynomial component. In practice, we use category-specific parameters: operational edges use lambda = 0.0165, alpha = 0.7, beta = 0.18; strategic edges use lambda = 0.0039, alpha = 0.5, beta = 0.13; policy edges use lambda = 0.00095, alpha = 0.3, beta = 0.10.
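A minimal sketch of the decay function and the half-life-based parameter derivation follows; it reproduces the category-specific lambda and beta values above from the stated half-lives, with alpha taken from cross-validation as in the text. Time is assumed to be measured in days.

```python
import math

def hybrid_decay(t_days: float, alpha: float, lam: float, beta: float) -> float:
    """w(t) = alpha*exp(-lambda*t) + (1 - alpha)/(1 + t)^beta."""
    return alpha * math.exp(-lam * t_days) + (1 - alpha) / (1 + t_days) ** beta

def params_from_half_life(t_half: float) -> tuple[float, float]:
    """Require each component to halve at the empirical half-life:
    exp(-lambda*t_half) = 0.5 and (1 + t_half)^(-beta) = 0.5, hence w(t_half) = 0.5."""
    lam = math.log(2) / t_half
    beta = math.log(2) / math.log(1 + t_half)
    return lam, beta

# Category-specific parameters (alpha from cross-validation, Section 4.3):
for name, t_half, alpha in [("operational", 42, 0.7), ("strategic", 180, 0.5), ("policy", 730, 0.3)]:
    lam, beta = params_from_half_life(t_half)
    print(f"{name}: lambda={lam:.4f}, beta={beta:.2f}, "
          f"w(t_half)={hybrid_decay(t_half, alpha, lam, beta):.3f}")
```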
4.4 Comparison with Alternative Decay Models
We compared the hybrid decay model against three alternatives on an edge relevance prediction task. For each edge in the test set, the ground truth label is 1 if the edge was referenced in a subsequent audit record within a 90-day window, and 0 otherwise. Results:
| Decay Model | AUC | Precision@0.5 | Recall@0.5 |
|---|---|---|---|
| Uniform (no decay) | 0.712 | 68.4% | 74.1% |
| Pure Exponential | 0.891 | 84.7% | 82.3% |
| Pure Power-Law | 0.873 | 81.9% | 85.6% |
| Hybrid Exp-Poly (ours) | 0.946 | 91.2% | 89.8% |
The hybrid model achieves AUC 0.946, outperforming pure exponential (0.891) and pure power-law (0.873) by substantial margins. The improvement is driven by the model's ability to capture both the rapid initial decay of operational relevance and the slow tail of structural relevance. Pure exponential models underweight long-lived structural edges; pure power-law models underweight the sharp recency signal.
5. Compliance Subgraph Extraction
5.1 Compliance Query Patterns
Governance compliance queries exhibit characteristic structural patterns in the knowledge graph. We identify four canonical compliance query patterns:
1. Approval Chain Trace: Given a decision node d, extract the complete chain of agents who proposed, validated, and approved d, together with the evidence bundles referenced at each stage. This is a directed path query following proposed_by, transitioned_by, and approved_by edges.
2. Responsibility Coverage Check: Given a policy node p and a time window [t1, t2], identify all decisions constrained by p that lack the required approval depth. This is a pattern-match query checking for the absence of expected edges.
3. Cross-Zone Decision Correlation: Given two zone nodes z1 and z2, identify decisions that involved agents from both zones, revealing cross-functional decision dependencies. This is a bipartite subgraph extraction query.
4. Temporal Anomaly Detection: Given a decision category and a historical baseline, identify decisions whose approval timing deviates significantly from the baseline distribution. This combines temporal edge weights with statistical thresholds.
5.2 Indexed Subgraph Extraction
For each compliance query pattern, we maintain pre-materialized temporal indices that enable efficient extraction. The approval chain index stores, for each decision node, a precomputed ordered list of (agent, edge_type, timestamp) tuples. The responsibility coverage index maintains, for each policy node, a bitmap of decisions with complete versus incomplete approval chains. The cross-zone index maintains, for each zone pair, a shared decision set updated incrementally as new audit records arrive.
The extraction algorithm operates in two phases. Phase 1 (index lookup) retrieves candidate subgraph boundaries from the appropriate index in O(1) to O(log n) time. Phase 2 (subgraph materialization) traverses the graph within the identified boundaries, applying temporal edge weight thresholds to filter low-relevance edges. The total extraction time is dominated by Phase 2 and is proportional to the subgraph size rather than the total graph size.
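As an illustration of the two-phase design, the sketch below covers only the approval chain pattern. The index class is a simplified stand-in for the production indices, the decay argument is any callable mapping elapsed days to a weight (e.g. wrapping hybrid_decay from Section 4), and the w_min threshold is illustrative.

```python
from collections import defaultdict

class ApprovalChainIndex:
    """Pre-materialized index: decision id -> time-ordered (agent, edge_type, timestamp) tuples,
    maintained incrementally as audit events arrive."""
    def __init__(self):
        self._chains: dict[str, list[tuple[str, str, float]]] = defaultdict(list)

    def on_audit_event(self, record) -> None:
        etype = "approved_by" if record.to_state == "approved" else "transitioned_by"
        self._chains[record.decision_id].append((record.actor_coordinate, etype, record.timestamp))
        self._chains[record.decision_id].sort(key=lambda x: x[2])

    def lookup(self, decision_id: str) -> list[tuple[str, str, float]]:
        return self._chains.get(decision_id, [])

def approval_chain_trace(index: ApprovalChainIndex, decision_id: str,
                         t_query: float, decay, w_min: float = 0.1):
    """Phase 1: index lookup for the subgraph boundary (constant-time dict access here).
    Phase 2: materialize the chain, dropping edges whose decayed weight falls below w_min;
    cost is proportional to the chain length, not the total graph size."""
    chain = index.lookup(decision_id)
    subgraph = []
    for agent, etype, t_edge in chain:
        w = decay(max(t_query - t_edge, 0.0) / 86400.0)   # elapsed time in days
        if w >= w_min:
            subgraph.append((decision_id, agent, etype, t_edge, w))
    return subgraph
```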
5.3 Performance Evaluation
We benchmarked subgraph extraction against equivalent relational SQL queries on a MARIA OS audit corpus of 147,000 decision records (approximately 620,000 audit log entries). The knowledge graph contains 284,000 nodes and 1.12 million edges.
| Query Pattern | SQL Latency (ms) | KG Extraction (ms) | Speedup |
|---|---|---|---|
| Approval Chain Trace | 342 | 48 | 7.1x |
| Responsibility Coverage | 1,847 | 612 | 3.0x |
| Cross-Zone Correlation | 2,310 | 894 | 2.6x |
| Temporal Anomaly | 4,125 | 2,340 | 1.8x |
| **Weighted Average** | **—** | **—** | **2.7x** |
The knowledge graph approach achieves an average 2.7x speedup across all compliance query patterns. The speedup is most dramatic for approval chain traces (7.1x), which are pure traversal queries that map directly to graph path operations but require multiple self-joins in SQL. The speedup is smallest for temporal anomaly queries (1.8x), which require statistical aggregation that is comparably efficient in both representations.
6. Graph Quality Metrics and Consistency Validation
6.1 Structural Consistency Invariants
A well-formed governance knowledge graph must satisfy several structural invariants. Every decision node must have exactly one proposed_by edge. Every approved decision must have at least one approved_by edge. Every evidence reference must point to a valid evidence node. Every agent must belong to at least one zone. Violations of these invariants indicate either errors in the construction pipeline or data quality issues in the source audit records.
We define the Structural Consistency Score (SCS) as the fraction of nodes satisfying all applicable invariants: SCS = |{v in V : all invariants for type(v) are satisfied}| / |V|. On our test corpus, the initial SCS after entity extraction is 0.947, rising to 0.983 after entity resolution (which merges fragmented nodes that individually violated invariants). The remaining 1.7% of violations are genuine data quality issues in the source audit records, which we flag for manual review.
6.2 Temporal Consistency
Temporal consistency requires that edge timestamps respect causal ordering. If decision d was proposed at time t1 and approved at time t2, then t1 < t2. If approval required escalation, the escalation timestamp must fall between proposal and approval. We formalize this as a partial order constraint: for every decision node d, the timestamps of its incident edges must be consistent with the MARIA OS pipeline state machine. The Temporal Consistency Score (TCS) is the fraction of decisions whose edge timestamps satisfy the partial order. On our test corpus, TCS = 0.991, with the 0.9% violations attributable to clock skew in distributed audit logging (resolved by applying a 500ms tolerance window).
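A minimal sketch of the TCS computation follows, assuming the canonical pipeline ordering from Section 2.2 (completed and failed share the terminal rank) and a configurable clock-skew tolerance; the production invariant set may be richer.

```python
# Expected pipeline ordering used as the partial-order reference (Section 2.2);
# completed and failed are alternative terminal states with the same rank.
STATE_ORDER = {"proposed": 0, "validated": 1, "approval_required": 2,
               "approved": 3, "executed": 4, "completed": 5, "failed": 5}

def temporal_consistency(records_by_decision: dict[str, list], tolerance_s: float = 0.5) -> float:
    """TCS: fraction of decisions whose transition timestamps respect the state-machine
    ordering, allowing a clock-skew tolerance (0.5 s here, per Section 6.2)."""
    if not records_by_decision:
        return 1.0
    consistent = 0
    for records in records_by_decision.values():
        ordered = sorted(records, key=lambda r: STATE_ORDER.get(r.to_state, len(STATE_ORDER)))
        consistent += all(later.timestamp >= earlier.timestamp - tolerance_s
                          for earlier, later in zip(ordered, ordered[1:]))
    return consistent / len(records_by_decision)
```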
7. Theoretical Analysis of Decay Parameter Bounds
7.1 Optimal Decay Rate Theorem
We establish bounds on the exponential decay parameter lambda that minimize the expected error in temporal edge relevance prediction. Let the true relevance of an edge at time t be R(t), modeled as a Bernoulli random variable with P(R(t) = 1) = phi(t) for some unknown monotonically decreasing function phi. The predicted relevance is w(t) as defined in Section 4.2. The expected squared prediction error is:
L(lambda) = integral_0^infinity f(t) E[(w(t) - R(t))^2] dt,
where f(t) is the probability density of query times and E[(w(t) - R(t))^2] = (w(t) - phi(t))^2 + phi(t)(1 - phi(t)). Since the Bernoulli variance term does not depend on the decay parameters, minimizing L is equivalent to minimizing the query-weighted squared deviation between w(t) and phi(t). Under the assumption that query times follow an exponential distribution f(t) = mu exp(-mu t) (queries are more likely to concern recent decisions), we can derive the optimal lambda analytically for the special case alpha = 1 (pure exponential decay):
lambda_opt = argmin_{lambda > 0} integral_0^infinity mu exp(-mu t) (exp(-lambda t) - phi(t))^2 dt.
For phi(t) = exp(-lambda_true t), the optimal lambda = lambda_true, which is trivially correct. For phi(t) = 1/(1 + t)^beta_true (true relevance follows power-law decay), the optimal exponential approximation satisfies lambda approximately equal to beta_true mu / (mu + 1), linking the optimal decay rate to both the true relevance structure and the query time distribution.
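The closed-form approximation can be sanity-checked numerically. The sketch below grid-searches the lambda that minimizes the stated objective for a power-law phi under an exponential query-time density; it is purely illustrative and makes no claim about the production parameter fitting, and the grid and integration step are coarse placeholder choices.

```python
import math

def expected_error(lam: float, beta_true: float, mu: float,
                   t_max: float = 500.0, dt: float = 0.5) -> float:
    """Riemann-sum approximation of integral_0^t_max mu*exp(-mu*t) *
    (exp(-lam*t) - (1 + t)^(-beta_true))^2 dt."""
    total, t = 0.0, 0.0
    while t < t_max:
        f = mu * math.exp(-mu * t)
        diff = math.exp(-lam * t) - (1 + t) ** (-beta_true)
        total += f * diff * diff * dt
        t += dt
    return total

def optimal_lambda_numeric(beta_true: float, mu: float) -> float:
    """Grid search for the error-minimizing exponential rate in (0, 0.5]."""
    grid = [0.001 * i for i in range(1, 501)]
    return min(grid, key=lambda lam: expected_error(lam, beta_true, mu))

# Example: compare the numerical minimizer with beta_true*mu/(mu + 1).
print(optimal_lambda_numeric(beta_true=0.18, mu=0.05))
```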
7.2 Approximation Error Bounds
For the hybrid model with alpha in (0, 1), we establish that the approximation error is bounded by:
L_hybrid <= min(L_exp, L_poly) - alpha (1 - alpha) delta^2,
where L_exp and L_poly are the optimal errors for pure exponential and pure polynomial models respectively, and delta is the normalized difference between the two component functions. This bound guarantees that the hybrid model is always at least as good as the better of the two pure models, with a further improvement proportional to the product alpha(1 - alpha) delta^2 that captures the benefit of combining complementary decay profiles.
8. System Architecture and MARIA OS Integration
8.1 Pipeline Architecture
The knowledge graph construction pipeline integrates with the MARIA OS event system as follows:
MARIA OS Decision Pipeline
├── Audit Event Stream (Kafka / internal event bus)
│ ↓
├── Entity Extractor (extracts nodes from audit records)
│ ↓
├── Entity Resolver (AER algorithm, incremental matching)
│ ↓
├── Edge Constructor (typed temporal edge creation)
│ ↓
├── Temporal Weighter (hybrid decay weight assignment)
│ ↓
├── Index Updater (compliance index materialization)
│ ↓
└── Graph Store (Neo4j / in-memory adjacency)

The pipeline operates in streaming mode: each audit event triggers an incremental graph update rather than a full reconstruction. Entity resolution operates incrementally by maintaining a candidate cache of recently resolved entities and checking new mentions against this cache before falling back to full pairwise comparison. The amortized cost per audit event is O(k * log n), where k is the average number of entities per record and n is the current graph size.
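A sketch of the candidate cache used in the incremental resolution step is shown below. The pairwise scoring function is injected (e.g. aer_score from the Section 3 sketch), and the capacity and threshold values are illustrative placeholders.

```python
from collections import OrderedDict
from typing import Callable

class CandidateCache:
    """Bounded LRU cache of recently resolved entities. New mentions are checked against
    cached canonical entities first; only on a miss does the resolver fall back to the
    full pairwise comparison path."""
    def __init__(self, score: Callable[[dict, dict], float],
                 tau_r: float = 0.75, capacity: int = 10_000):
        self.score = score                                   # e.g. aer_score (Section 3)
        self.tau_r = tau_r
        self.capacity = capacity
        self._cache: OrderedDict[str, dict] = OrderedDict()  # canonical id -> mention features

    def resolve(self, features: dict, fallback: Callable[[dict], str]) -> str:
        for canon_id in reversed(self._cache):               # most recently used first
            if self.score(features, self._cache[canon_id]) >= self.tau_r:
                self._cache.move_to_end(canon_id)            # keep hot entities cached
                return canon_id
        canon_id = fallback(features)                        # full pairwise comparison
        self._cache[canon_id] = features
        if len(self._cache) > self.capacity:
            self._cache.popitem(last=False)                  # evict least recently used
        return canon_id
```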
8.2 Temporal Weight Refresh
Edge weights are time-dependent and must be refreshed periodically. Rather than recomputing all weights on every query (which would cost O(|E_graph|)), we use a lazy evaluation strategy: edge weights are computed at query time using the decay function w(t_current - t_edge) and cached with an expiration TTL of 1 hour. For compliance queries that require consistent temporal snapshots, we support point-in-time weight evaluation: w(t_snapshot - t_edge) for a specified t_snapshot.
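A sketch of this lazy evaluation strategy follows, assuming a decay callable such as hybrid_decay from Section 4 and epoch-second edge timestamps; the TTL value and cache key layout are illustrative.

```python
import time
from typing import Callable, Optional

class LazyWeightCache:
    """Lazy edge-weight evaluation: weights are computed at query time as w(t_ref - t_edge)
    and cached with a TTL (1 hour here); supplying t_snapshot gives point-in-time
    evaluation for consistent temporal snapshots."""
    def __init__(self, decay: Callable[[float], float], ttl_s: float = 3600.0):
        self.decay = decay                                   # maps elapsed days -> weight
        self.ttl_s = ttl_s
        self._cache: dict[tuple, tuple[float, float]] = {}   # key -> (weight, computed_at)

    def weight(self, edge_key: tuple, t_edge: Optional[float],
               t_snapshot: Optional[float] = None) -> float:
        now = time.time()
        t_ref = t_snapshot if t_snapshot is not None else now
        key = (edge_key, t_snapshot)
        hit = self._cache.get(key)
        if hit is not None and now - hit[1] < self.ttl_s:
            return hit[0]
        # Atemporal structural edges (t_edge is None) keep full weight w(0) = 1.
        age_days = 0.0 if t_edge is None else max(t_ref - t_edge, 0.0) / 86400.0
        w = self.decay(age_days)
        self._cache[key] = (w, now)
        return w
```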
8.3 Integration with the MARIA OS Coordinate System
The MARIA coordinate system G(galaxy).U(universe).P(planet).Z(zone).A(agent) provides a natural hierarchical partitioning for the knowledge graph. We exploit this structure for query optimization by partitioning the graph along universe boundaries. Queries that target a single universe operate on the corresponding partition, avoiding the cost of scanning the full graph. Cross-universe queries are decomposed into per-universe subqueries and merged using the entity resolution layer to handle cross-universe entity references.
9. Case Study: Regulatory Audit Trail Reconstruction
We present a case study demonstrating the practical value of the knowledge graph approach. A regulatory auditor requested the complete decision trail for a procurement decision that resulted in a contract valued at 2.4M USD. The decision had traversed 3 zones across 2 planets within the same universe, involving 7 agents over a 45-day lifecycle.
Using the relational audit table, the auditor required 12 separate SQL queries, manual cross-referencing of agent coordinates, and approximately 3 hours of work to reconstruct the full trail. Using the knowledge graph, the same trail was extracted via a single graph traversal query in 48ms, producing a complete subgraph of 23 nodes and 41 edges that included every agent, every state transition, every evidence bundle, and every approval with temporal ordering. The extracted subgraph was rendered as an interactive visualization in the MARIA OS dashboard, allowing the auditor to click on any node to inspect its attributes and trace forward or backward along the decision chain.
The knowledge graph also revealed a finding that the relational approach missed: two of the seven participating agents had previously been resolved to the same physical person (detected by the AER algorithm's participation similarity signal), raising a potential segregation-of-duties concern that warranted further investigation.
10. Discussion and Limitations
The framework presented in this paper demonstrates that knowledge graphs constructed from decision audit trails provide substantial benefits for governance traceability, including faster compliance queries, richer relational context, and automated entity resolution that reveals patterns invisible to point-query approaches. However, several limitations warrant discussion.
First, the AER algorithm's transitive closure property can produce false merges: if entity A incorrectly matches entity B, and entity B correctly matches entity C, then A and C are incorrectly merged. We mitigate this risk with a cluster size threshold (reject merges that would create entity clusters larger than a configurable maximum), but the fundamental tension between transitive completeness and precision remains. Second, the temporal decay function assumes that relevance decreases monotonically with time, which may not hold for cyclical patterns (e.g., annual audit cycles, quarterly reviews). Extending the framework with periodic components is a direction for future work. Third, graph construction throughput of 8.4K decisions per second is sufficient for typical enterprise volumes but may become a bottleneck for very large deployments processing millions of decisions daily.
11. Conclusion
This paper has presented a formal framework for constructing knowledge graphs from MARIA OS decision audit trails. The Audit Entity Resolution algorithm achieves 91.3% F1 on cross-zone entity resolution by combining coordinate proximity, decision co-participation, and embedding similarity. The hybrid exponential-polynomial temporal decay function captures both short-term recency and long-term structural relevance, achieving AUC 0.946 on edge relevance prediction. Compliance subgraph extraction with pre-materialized temporal indices achieves 2.7x speedup over relational baselines. Together, these contributions transform the audit trail from an append-only compliance artifact into a queryable, relational, temporally-aware knowledge structure that supports governance traceability at interactive query latencies.
The framework integrates with the MARIA OS coordinate system and decision pipeline, operating incrementally on streaming audit events with amortized O(k * log n) cost per event. Future work will extend the temporal model with periodic components, investigate federated graph construction across galaxy boundaries, and explore graph neural network methods for anomaly detection on the constructed governance knowledge graph.