MemNexus

Capturing Memories That Actually Help

What to save, how to write it, and what to skip — so your memory store stays useful instead of noisy.

Most memory stores fail the same way: they fill up with things that seemed important in the moment but are useless later. A memory that says "worked on auth today" doesn't help anyone. This guide covers what to save, how to write it, and what to leave out.

What's worth saving

The test is simple: would a capable developer need this information to continue your work, or would they have to re-discover it?

If re-discovery would take real effort — reading code, running experiments, asking someone — it's worth saving. If it's obvious from the code or Google-able in thirty seconds, it's not.

Decisions with their reasoning

The most valuable memories aren't facts — they're decisions with the context behind them.

Compare these:

Not useful:

"We use Redis for session storage."

Useful:

"We use Redis for session storage instead of the database because session data is high-frequency read/write and we didn't want that load hitting Postgres. We evaluated memcached but chose Redis for persistence across restarts. The connection pool is configured at 20 connections in production — we found 50 caused latency spikes under load."

The first version is in your config file. The second contains reasoning you'd lose the moment the conversation ended.
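Captured with the CLI, the second version might look like this. The flags mirror the `mx memories create` invocation shown later in this guide; the topic names are illustrative, not prescribed:

```shell
mx memories create \
  --conversation-id "NEW" \
  --content "Session storage: chose Redis over the database because session data is high-frequency read/write and we didn't want that load hitting Postgres. Evaluated memcached; picked Redis for persistence across restarts. Production connection pool is 20 connections; 50 caused latency spikes under load." \
  --topics "redis,sessions,architecture"
```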

Debugging breakthroughs

When you finally find the root cause of a hard bug, save it. Include:

  • The symptoms
  • What you ruled out
  • The actual root cause
  • The fix
  • Any future warning signs

This is especially valuable for timing-dependent bugs, concurrency issues, or anything that took more than an hour to diagnose. When similar symptoms appear six months later, you want to find this memory immediately.
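A debugging memory that follows the checklist above might look like this. Everything in the `--content` string is a made-up scenario for illustration; the flag shapes match the `mx memories create` example later in this guide:

```shell
mx memories create \
  --conversation-id "NEW" \
  --content "Race condition in order worker: symptoms were intermittent duplicate confirmation emails under load. Ruled out: retry logic, SMTP client. Root cause: two workers claimed the same job because the claim query wasn't wrapped in a transaction. Fix: SELECT ... FOR UPDATE SKIP LOCKED in the claim query. Warning sign: duplicates only appear above roughly 50 jobs/sec." \
  --topics "concurrency,postgres,debugging,gotcha"
```

Note that the symptoms, the ruled-out causes, and the warning signs are all in the content itself, so a later search on any of them will surface this memory.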

Cross-project patterns

When you solve a class of problem — not just a specific instance — that knowledge applies across projects. If you figure out how to handle backpressure in a particular async pattern, or discover a reliable way to structure integration tests for a certain type of service, that's worth saving in a way that surfaces across projects.

Architectural constraints that aren't visible in the code

Why does this service have two database connection pools? Why does that endpoint bypass the normal auth middleware? Why is pagination done at the database layer instead of the application layer?

This reasoning exists nowhere in the codebase. Once it's forgotten, recovering it becomes archaeology.

How to write good memory content

Be specific

Include version numbers, configuration values, benchmark results, and exact error messages when they're relevant. Vague content is hard to find and hard to use.

Vague:

"Fixed CI issue with Docker build."

Specific:

"Docker multi-stage build was failing in CI because the base image (node:20-alpine) doesn't include git, which our postinstall script needed. Fixed by adding RUN apk add --no-cache git before the npm install step. Affected services: core-api, mcp-server."

Think about what terms you'd search when you need this memory later. Include the technology names, error messages, and symptoms explicitly — don't assume the search will infer them.

If you fixed a race condition in the token refresh flow, say "race condition", "token refresh", and "auth" explicitly. Don't write "fixed the timing issue in that auth thing."
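Written that way, the memory is reachable by the exact words a future search would use. A sketch using the `mx memories search --query` form shown later in this guide:

```shell
# Hits because the memory says "race condition", "token refresh", and "auth" explicitly
mx memories search --query "race condition token refresh auth"
```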

Include the date context when it matters

If a decision was made under constraints that might change — a cost constraint, a team size, a third-party API limitation — say so. "We chose this approach because the vendor's API didn't support batch operations at the time" ages better than "we chose this approach."

Name the conversation correctly

Use conversations to group related memories. A debugging session should be one conversation. A feature implementation should be one conversation. A design decision discussion should be one conversation.

This grouping matters because recap and digest operate on conversations — if your memories are scattered across random conversations, the temporal context is lost.
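In practice, grouping looks like starting a conversation with the first memory of a session and attaching follow-ups to it. This sketch assumes `--conversation-id` also accepts an existing ID and that the first call reports the ID it created — `conv_123` is a placeholder, not a real identifier:

```shell
# First finding of a debugging session starts a new conversation
mx memories create --conversation-id "NEW" \
  --content "Auth debugging: intermittent 401s traced to clock skew on the staging VM." \
  --topics "auth,debugging"

# Follow-up findings go into the same conversation
mx memories create --conversation-id "conv_123" \
  --content "Fix: enabled NTP sync on staging; skew was about 4 minutes." \
  --topics "auth,debugging"
```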

What to skip

Raw transcripts and chat exports

Saving a full conversation transcript creates noise without adding signal. The valuable parts are the conclusions, decisions, and discovered facts — not the back-and-forth. Extract those instead.

Transient state

What you're currently working on, what's in your local branch, what tab you have open — this isn't worth saving. It has no value after the session ends.

Things that are already in the codebase

If a fact is visible in a config file, a README, or a comment in the code, saving it as a memory just creates redundancy. The memory store is for knowledge that isn't written down anywhere else.

Speculative future plans

"We might refactor this to use microservices someday" adds noise. Save decisions that were made, not things that might be considered.

A real example: turning a recurring problem into institutional knowledge

Here's how memory prevents re-investigation of known issues.

Suppose your team hits a recurring lockfile problem — the same root cause manifesting in slightly different ways across multiple incidents. You could solve each one fresh, or you could search memories first:

mx memories search --query "lockfile CI failure package manager" --timeline

If those incidents were captured when they happened — root cause, the fix, the warning signs — the search returns a synthesized picture of the pattern. Your AI can read across those memories and surface: "Here's what's happening and why, based on four previous incidents."

That kind of pattern recognition is only possible if the individual incidents were saved well. Each memory needs enough detail that future-you can reconstruct what happened.

mx memories create \
  --conversation-id "NEW" \
  --content "CI lockfile failure: npm install inside the monorepo walked up to the parent node_modules/.pnpm/ store and added references that don't exist in the Docker build context. Fix: run npm install in an isolated temp dir when generating the lockfile. Affected: mcp-server Dockerfile. This is the third time we've hit this class of issue — root cause is always npm/pnpm cross-contamination in the monorepo." \
  --topics "ci,docker,lockfile,gotcha"

When the fourth incident happens, that memory is waiting.

The "gotcha" topic is useful for this pattern. When you search with --topics "gotcha", you get a curated list of the hard-won lessons — things that took real effort to figure out and are worth knowing before you run into them.
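Pulling that curated list is a one-liner, reusing the `--topics` flag from the create example above:

```shell
mx memories search --topics "gotcha"
```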

Next steps