# Glossary & FAQ

### Glossary

* **CEREBRO** — Composable Educational Reusable Explainable Brain Resource for Operations. Modular, pluggable brains for AI agents built on temporal knowledge graphs.
* **Temporal Knowledge Graph (TKG)** — A graph with timestamps/intervals on nodes and edges, modeling how knowledge evolves over time (see the sketch after this glossary).
* **Prerequisite edge (requires)** — Relation that encodes conceptual dependency and a minimum mastery threshold (e.g., fractions before algebra).
* **Mastery** — Probability-like measure of learner understanding per concept, updated through interactions and reinforcement cycles.
* **Explainability path** — Minimal, auditable sequence of nodes/edges (with sources and timestamps) that justify an answer.
* **Curation & Ingestion Design** — The process of authoring or transforming unstructured data into a structured CEREBRO. Like prompt design, the quality of the ingestion process determines the value of the brain.
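
To make these definitions concrete, here is a minimal sketch of how the terms above might map to data structures. The class and field names are illustrative assumptions, not the normative `.cerebro` schema:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ConceptNode:
    """A concept in the temporal knowledge graph."""
    concept_id: str
    label: str
    valid_from: datetime                  # when this knowledge became valid
    valid_to: datetime | None = None      # None = still current
    sources: list[str] = field(default_factory=list)  # provenance citations

@dataclass
class PrerequisiteEdge:
    """A 'requires' edge: the source concept depends on the target concept."""
    source: str                 # e.g. "algebra"
    target: str                 # e.g. "fractions"
    min_mastery: float = 0.8    # minimum mastery threshold on the target

# Mastery: a probability-like score per (learner, concept), updated over time.
mastery: dict[tuple[str, str], float] = {("learner-1", "fractions"): 0.92}

def can_start(learner: str, edge: PrerequisiteEdge) -> bool:
    """Check whether a learner meets the prerequisite's mastery threshold."""
    return mastery.get((learner, edge.target), 0.0) >= edge.min_mastery
```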

***

### FAQ

**How is this different from RAG?**

* Standard RAG retrieves semantically similar text fragments from vector databases. CEREBROs go further: they reason over explicit, explainable graph paths, integrate temporal knowledge, and model prerequisites and mastery. Each CEREBRO is effectively a **graph-based RAG protocol** with built-in pedagogy and explainability.
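
As a hedged illustration of the difference: standard RAG stops at the first step below, while a graph-based approach continues into explicit path traversal. The toy graph and function names are assumptions for the sketch, not CEREBRO's actual API.

```python
# Toy in-memory graph: concept -> list of (relation, neighbour) pairs.
GRAPH = {
    "algebra": [("requires", "fractions")],
    "fractions": [("requires", "division")],
}

def vector_retrieve(query: str) -> list[str]:
    """Stand-in for standard RAG's only step: semantic entry points.
    A real system would embed the query and search a vector index."""
    return ["algebra"] if "algebra" in query else []

def graph_rag_paths(query: str, hops: int = 2) -> list[list[str]]:
    """The extra CEREBRO step: expand explicit, auditable paths from entry nodes."""
    paths = []
    for start in vector_retrieve(query):
        frontier = [[start]]
        for _ in range(hops):
            # Follow typed edges outward; keep the old frontier at dead ends.
            frontier = [p + [nbr] for p in frontier
                        for _, nbr in GRAPH.get(p[-1], [])] or frontier
        paths.extend(frontier)
    return paths

print(graph_rag_paths("why do I struggle with algebra?"))
# -> [['algebra', 'fractions', 'division']]
```

The returned path doubles as the explainability path: every hop is an explicit edge that can be cited, rather than an opaque similarity score.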

**Can I use multiple CEREBROs at once?**

* Yes. Agents can load several domain-specific brains simultaneously, and orchestrators can merge or route across them to support cross-domain reasoning.
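
A minimal routing sketch, assuming each loaded brain exposes a `query` method and is registered under a domain tag (both assumptions for illustration):

```python
class Cerebro:
    """Toy stand-in for a domain brain with a query interface."""
    def __init__(self, domain: str):
        self.domain = domain
    def query(self, q: str) -> str:
        return f"[{self.domain}] answer to: {q}"

class Orchestrator:
    """Routes a query to one loaded brain, or fans out across all of them."""
    def __init__(self, brains: dict[str, Cerebro]):
        self.brains = brains
    def route(self, query: str, domain: str | None = None) -> list[str]:
        if domain in self.brains:
            return [self.brains[domain].query(query)]  # direct routing
        # Unknown or cross-domain query: fan out and merge the results.
        return [b.query(query) for b in self.brains.values()]

agent = Orchestrator({"math": Cerebro("math"), "history": Cerebro("history")})
print(agent.route("When was zero first used in arithmetic?"))
```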

**Do I need a specific LLM?**

* No. CEREBROs are LLM-agnostic. They integrate with any model via REST, GraphQL, MCP, or function-calling. The LLM provides the natural language interface; the CEREBRO ensures structure, reasoning, and memory.
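
For instance, a CEREBRO query could be surfaced to any function-calling model as a tool definition in the common JSON-Schema style. The tool name, parameters, and handler below are illustrative assumptions:

```python
# A tool definition in the JSON-Schema style used by most function-calling APIs.
# The LLM decides when to call it; the CEREBRO supplies structure and memory.
cerebro_tool = {
    "name": "query_cerebro",
    "description": "Query a loaded CEREBRO brain and return an answer "
                   "with its explainability path.",
    "parameters": {
        "type": "object",
        "properties": {
            "brain": {"type": "string", "description": "e.g. 'math', 'history'"},
            "question": {"type": "string"},
        },
        "required": ["brain", "question"],
    },
}

def handle_tool_call(brain: str, question: str) -> dict:
    """Dispatch the model's tool call and return a structured result.
    A real handler would call the CEREBRO's REST/GraphQL/MCP endpoint here."""
    return {"answer": "...", "explainability_path": ["node-1", "edge-a", "node-2"]}
```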

**Isn’t ingestion of data into graphs the hardest problem?**

* Ingestion is challenging, but we treat it as a **design process rather than a bottleneck**. Pipelines built with **n8n** and the **Graphiti** library support asynchronous ingestion. Constructor and Architect Agents orchestrate the process, chunking and curating data into graph form. Over time, semi-automated methods with human validation will make this scalable.
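
A simplified sketch of the asynchronous chunk-and-curate loop. The extraction step is a placeholder; in a real pipeline it would be delegated to an n8n workflow or Graphiti ingestion rather than implemented inline:

```python
import asyncio

def chunk(text: str, size: int = 500) -> list[str]:
    """Split raw source material into ingestible chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

async def curate(chunk_text: str) -> list[tuple[str, str, str]]:
    """Placeholder for the Constructor/Architect step: extract
    (subject, relation, object) triples for the graph. A real pipeline
    would use an LLM extractor or Graphiti episode ingestion here."""
    await asyncio.sleep(0)  # stand-in for an async extraction call
    return [("chunk", "mentions", chunk_text[:30])]

async def ingest(document: str) -> list[tuple[str, str, str]]:
    """Run curation over all chunks concurrently and collect the edges."""
    results = await asyncio.gather(*(curate(c) for c in chunk(document)))
    return [edge for triples in results for edge in triples]

edges = asyncio.run(ingest("Fractions must be mastered before algebra. " * 40))
print(len(edges), "candidate edges extracted")
```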

**What if the marketplace is empty at launch?**

* We seed it with flagship CEREBROs (Math, History, Coding) and incentivize contributions with **revenue sharing (70/30), verification badges, and peer review pipelines**. This ensures high-quality initial supply and community-driven growth.

**How is quality and trust ensured?**

* Every node and edge carries **provenance metadata** (citations, timestamps). Packages are cryptographically signable, and community review adds a second layer of trust. Sensitive domains (medicine, law, history) require expert validation before listing in the verified catalog.
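
Cryptographic signability could look like the following sketch, using Ed25519 from the `cryptography` package. The actual signing scheme and canonicalization for `.cerebro` packages are not specified here, so treat this as one possible approach:

```python
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# A node with provenance metadata: every claim carries citations and timestamps.
node = {
    "id": "concept:fractions",
    "sources": ["https://example.org/curriculum"],
    "updated_at": "2025-01-15T12:00:00Z",
}

# Publisher signs a canonical serialization of the package contents.
private_key = Ed25519PrivateKey.generate()
payload = json.dumps(node, sort_keys=True).encode()
signature = private_key.sign(payload)

# Consumers verify against the publisher's public key before trusting it.
try:
    private_key.public_key().verify(signature, payload)
    print("signature valid")
except InvalidSignature:
    print("tampered package")
```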

**Will this be too slow compared to simpler methods?**

* No. CEREBROs combine **hybrid vector + graph retrieval** with caching and sharding. Benchmarks target sub-200ms query times. This keeps them competitive with RAG while adding explainability and pedagogy.
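
A sketch of the hybrid pattern with a cache in front. The vector and graph steps are toy stand-ins, and a production deployment would add sharding across graph partitions:

```python
from functools import lru_cache

@lru_cache(maxsize=1024)
def hybrid_retrieve(query: str) -> tuple:
    """Cache-fronted hybrid retrieval: vector search finds entry points,
    graph traversal adds explainable structure around them."""
    entry_nodes = vector_search(query)             # fast ANN lookup (stub below)
    subgraph = expand_neighbourhood(entry_nodes)   # typed-edge traversal (stub)
    return tuple(subgraph)                         # tuples are hashable/cacheable

def vector_search(query: str) -> list[str]:
    return ["concept:fractions"]                   # toy stand-in for an ANN index

def expand_neighbourhood(nodes: list[str]) -> list[str]:
    return nodes + ["concept:algebra"]             # toy stand-in for graph expansion

print(hybrid_retrieve("prerequisites of algebra"))   # computed
print(hybrid_retrieve("prerequisites of algebra"))   # served from cache
```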

**Isn’t the system too complex to maintain?**

* Complexity is modularized. The architecture separates `cerebros-core`, APIs, marketplace, and MCP interop. Each component is independently testable and replaceable, ensuring long-term maintainability.

**Why standardize now? Won’t it risk obsolescence?**

* The **.cerebro package format** is open, JSON-LD/RDF-based, and versioned with semantic versioning. This ensures forward compatibility while allowing the ecosystem to evolve with the AI agent space.
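
For illustration only, here is a manifest with JSON-LD-style keys and a semver compatibility check, written as a Python dict; every field beyond `@context` and `@type` is an assumption rather than the normative schema:

```python
# Illustrative manifest fields; the normative .cerebro schema may differ.
manifest = {
    "@context": "https://schema.org/",   # JSON-LD context (example value)
    "@type": "Dataset",
    "name": "math-cerebro",
    "version": "1.4.2",                  # semantic versioning: MAJOR.MINOR.PATCH
}

def compatible(required: str, available: str) -> bool:
    """Semver-style check: same MAJOR, and available MINOR.PATCH >= required."""
    req, avail = (tuple(map(int, v.split("."))) for v in (required, available))
    return req[0] == avail[0] and avail[1:] >= req[1:]

print(compatible("1.2.0", manifest["version"]))  # True: 1.4.2 satisfies ^1.2.0
```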

***



# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available on this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://cerebros.gitbook.io/cerebros/glossary-and-faq.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present on the current page, when you need clarification or additional context, or when you want to retrieve related documentation sections.
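
For example, with Python's `requests` library, which URL-encodes the question automatically (a minimal sketch):

```python
import requests

URL = "https://cerebros.gitbook.io/cerebros/glossary-and-faq.md"
question = "What metadata does a .cerebro package manifest contain?"

# requests encodes the `ask` query parameter for us.
response = requests.get(URL, params={"ask": question}, timeout=30)
response.raise_for_status()
print(response.text)  # direct answer plus relevant excerpts and sources
```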
