Summary
Mneme HQ is an architectural governance layer for AI-assisted development. It stores your team’s decisions as version-controlled YAML co-located with your codebase, injects the relevant decisions into AI context at generation time, and provides a CLI to enforce them before code reaches review. It is open source, repo-native, and designed to work with Cursor and Claude Code.
The problem we set out to solve is specific. Not AI coding agents in general. Not the quality of model output. Not whether LLMs can replace developers. One problem: the decisions your team has already made disappear from view every time a new AI session starts, and there is currently no infrastructure to prevent that.
Every architectural decision your team reaches — about databases, about patterns, about dependencies, about security constraints — exists somewhere. In an ADR document, a Confluence page, a Slack thread, the memory of the engineer who was in the room when the choice was made. What it does not exist as is machine-readable context that an AI coding assistant can consult before generating code. That absence has a cost, and the cost compounds.
“LLMs forget. Projects do not.”
Mneme HQ is the infrastructure between those two states. The name comes from Mneme, the Greek goddess of memory and remembrance and one of the original three Muses. The idea is simple: your project has memory. Your AI assistant should too.
What the Problem Actually Looks Like
If you have used AI coding tools on a project with any history, you have encountered the symptoms even if you have not named the cause. The agent recommends a library you evaluated and rejected six months ago. It introduces a pattern that violates a decision made after a security review. It reaches for a different ORM adapter because it has seen it more frequently in training data than the one your codebase standardised on.
None of this is the model failing. The model is doing what it was designed to do: generating competent code from the context available to it. The problem is what is missing from that context. The decisions your team made are not in the context window, they are not machine-readable, and nothing checks the generated code against them.
Architectural drift is not the result of bad AI. It is the result of missing infrastructure.
The conventional response to this is to put more in the system prompt. Write a long CLAUDE.md. Add detailed comments. Re-explain your constraints at the start of every session. This is the right instinct applied to the wrong layer. It treats a structural problem as a documentation problem. The output is longer prompts that still fail to surface the right constraint at the right moment, because the mechanism for knowing which constraints are relevant to any given generation task does not exist.
The Three Things Mneme HQ Does
Mneme HQ does three things. They build on each other, but each has independent value.
Store decisions where the code lives. Decisions are recorded as structured YAML files in a .mneme/ directory at the root of your repository. They travel with the code, are version-controlled alongside it, and are auditable through standard git tooling. Recording a decision produces output like this:
> Decision saved: .mneme/decisions/ADR-001-storage.yaml
> Scope: global | Severity: error | Status: active
The decision is now part of the repository. It can be reviewed in pull requests. It can be referenced by ID. It has a status, a scope, and a severity level that determines how violations are treated downstream.
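The on-disk format is plain YAML. The exact schema is not shown above, so the field names in this sketch are assumptions inferred from the CLI output (id, scope, severity, status); the title and rationale are invented for illustration:

```yaml
# .mneme/decisions/ADR-001-storage.yaml
# Illustrative only -- field names inferred from the CLI output,
# not a documented schema.
id: ADR-001
title: Use Postgres as the primary datastore
status: active        # active | superseded | deprecated
scope: global         # which parts of the codebase the decision governs
severity: error       # error -> FAIL in enforcement; warning -> WARN
rationale: >
  Alternatives were evaluated and rejected; Postgres was chosen
  for transactional guarantees and team familiarity.
```

Because it is just a file, the decision can be edited, reviewed, and reverted like any other change in the repository.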
Inject decisions at generation time. Mneme HQ generates editor-level context from your stored decisions — Cursor rules, Claude Code configuration, and a structured context file your CI pipeline can consume. When you start a new AI session, the constraints already exist in context. You do not re-explain them. The model starts with them.
> Generated .cursor/rules/mneme.mdc from 9 decisions
> 9 active constraints injected into Cursor context
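The generated Cursor rule might look something like this. The frontmatter fields follow Cursor's project-rules (.mdc) format; the body content is illustrative, drawn from the check output shown later in this post:

```
---
description: Architectural decisions enforced by Mneme HQ
alwaysApply: true
---
- ADR-001 (error): Postgres is the only approved datastore; do not open new DB connections.
- ADR-003 (error): Authentication goes through the existing JWT middleware.
- ADR-007 (warning): New dependencies must come from the approved list.
```

The point is that the model reads these constraints at the start of every session, without anyone retyping them.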
Enforce before review. The third component is the enforcement CLI. Before opening a pull request, before pushing to a branch, or as a pre-commit hook, you can run a check that evaluates whether the current state of the codebase respects your recorded decisions.
> Checking 9 decisions against working tree...
> PASS ADR-001: Postgres constraint enforced — no new DB connections
> PASS ADR-003: JWT middleware unchanged — auth pattern respected
> PASS ADR-005: API versioning pattern followed
> WARN ADR-007: New dependency 'prisma' introduced — not in approved list
> FAIL ADR-004: Repository pattern bypassed in services/user.service.ts:142
> 3 passed · 1 warning · 1 failure
PASS. WARN. FAIL. Clear signals before code reaches human review. The architectural violation in user.service.ts gets caught before it becomes a PR comment, before it gets merged, before it becomes the assumption that three subsequent PRs build on.
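Because the checker is a plain CLI, wiring it into CI is a short workflow step. A sketch as a GitHub Actions job; note that the mneme check subcommand name, the install method, and the exit-code behaviour are all assumptions here, since only the check output is shown above:

```yaml
# .github/workflows/mneme.yml -- illustrative sketch, not documented config.
name: mneme
on: [pull_request]
jobs:
  decisions:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm install -g mneme   # assumed install method
      - run: mneme check            # assumed: non-zero exit on FAIL blocks the PR
```

The same command can run locally as a pre-commit hook, so violations surface before they ever reach a branch.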
Why Repo-Native Matters
We made a deliberate decision not to build this as a SaaS platform or an external service. Decisions live in your repository. They are committed with your code. They are subject to the same review process as everything else. There is no external state to synchronise, no API dependency to manage, no decision that exists in one system while the code it governs exists in another.
This matters for a few reasons. First, it means decisions are auditable through standard tooling — git log, git blame, pull request history. You can see when a decision was recorded, who recorded it, and what was in the codebase at the time. Second, it means the tool works in air-gapped environments and does not require new security approvals to deploy. Third, it means the format is open: YAML files that any tool can read, not a proprietary schema locked to a platform.
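Because decisions are ordinary files, the audit trail is whatever git already provides. For example, using the decision file path from earlier:

```shell
# When was the storage decision recorded, and by whom?
git log --follow -- .mneme/decisions/ADR-001-storage.yaml

# Who last touched each line of the decision?
git blame .mneme/decisions/ADR-001-storage.yaml

# Every decision changed on the current branch
git diff --stat main..HEAD -- .mneme/decisions/
```

No new tooling, no export step: the same commands your team already uses to audit code audit the decisions that govern it.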
Decisions that live outside the repository they govern are decisions waiting to become invisible.
Who This Is For
Mneme HQ is designed for engineering teams that are already using AI coding assistants on projects with architectural history — decisions made, patterns established, constraints that should persist across sessions and across developers. The tool gets more valuable as that history accumulates. There is less value on greenfield projects where every decision is still being made; there is significant value on a project six months in, where the architectural intent exists only in human memory and in documentation the AI cannot access.
It is also designed for the specific workflow of developers working with Cursor and Claude Code — the two tools where context injection and rule-based guidance have the most direct impact on generation quality. The integration is not bolted on; the Cursor rules and Claude Code configuration that Mneme HQ generates are designed to be the primary mechanism by which constraints travel from your decision store into the AI's context.
The Broader Argument
We wrote separately about why architectural review is becoming the bottleneck in AI-assisted development. Mneme HQ is a direct response to that argument. If the structural problem is that generation volume is growing faster than review throughput, the solution is not to review faster. It is to catch more violations before they reach review. Constraint enforcement at generation time is the mechanism. A repo-native, version-controlled decision store is the prerequisite.
This is not a finished product. It is an early version of an infrastructure layer that does not yet exist in the standard AI-assisted development toolchain. The CLI works. The Cursor integration works. The enforcement engine works on the decision types we have implemented so far. There are decision categories we have not yet modelled, edge cases we have not yet encountered, integrations we have not yet built.
Open source is the right model for this kind of infrastructure. The decision store format should be portable. The enforcement logic should be inspectable. The community of people building AI-assisted development workflows should be able to contribute to the tool they are depending on. That is the version of Mneme HQ we are building towards.
Key Takeaways
- Mneme HQ stores architectural decisions as YAML in your repository, version-controlled alongside the code they govern.
- Decisions are injected into AI context at generation time — Cursor rules and Claude Code configuration are generated automatically.
- A CLI enforcement engine checks the current codebase against stored decisions before code reaches pull request review.
- The repo-native approach means decisions are auditable through standard git tooling, portable, and independent of external services.
- Mneme HQ is open source and designed for teams already using AI coding assistants on projects with architectural history.
FAQ
- What is Mneme HQ?
- An architectural governance layer for AI-assisted development: version-controlled decision storage, context injection at generation time, and CLI enforcement before code reaches review.
- How does it integrate with Cursor and Claude Code?
- Mneme HQ generates Cursor rules and Claude Code configuration directly from your stored decisions. Run mneme cursor generate to create .cursor/rules/mneme.mdc, or mneme claude generate to update your CLAUDE.md with active constraints.
- Does it require a cloud service?
- No. Mneme HQ is entirely repo-native. All state lives in your repository as YAML files. There is no external service, no API dependency, and no synchronisation requirement.
- What kinds of decisions can it enforce?
- Currently: dependency constraints (approved/rejected libraries), architectural pattern requirements, file structure rules, and custom validators. The enforcement engine is extensible — new decision types can be added as YAML schema extensions.
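As an illustration of a dependency constraint, the approved-list check that produced the prisma warning earlier might be expressed as a decision like this. The constraint block and its field names are assumptions, not the documented schema, and the approved package names are invented:

```yaml
# .mneme/decisions/ADR-007-dependencies.yaml -- illustrative schema only
id: ADR-007
title: Approved data-access dependencies
status: active
scope: package.json
severity: warning          # deps outside the list produce WARN, not FAIL
constraint:
  type: dependency-allowlist   # hypothetical constraint type
  approved:
    - pg
    - knex
```

Setting severity to warning keeps the check advisory, which suits dependency reviews; a pattern violation like the repository-pattern bypass above would carry severity error instead.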