
Behavioral Learning

How MemNexus learns from your memories and surfaces context-aware recommendations at the right moment.

MemNexus doesn't just store memories — it learns from them. Over time, it identifies recurring practices, preferences, and workflows, then surfaces relevant guidance at the moment you need it most.

How it works

Behavioral learning runs as a background pipeline with three stages:

Memories → Extract practices → Approve recommendations → Surface at the right moment

1. Extraction

MemNexus monitors your memories for evidence of recurring practices. Two sources feed the pipeline:

  • Explicit statements — When you directly state a practice: "Always run the integration tests before pushing"
  • Learned patterns — When the same approach appears repeatedly across memories: debugging in staging before production, checking computed styles when CSS is off, writing the test first

Extracted practices become recommendations: (triggerContext, practice) pairs. For example:

| When... | Practice |
|---|---|
| debugging frontend layout issues | check computed styles in browser devtools before modifying CSS |
| deploying to production | run integration tests first, then health-check /api/status after |
| starting a new API endpoint | include OpenAPI JSDoc annotations from the start |
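These pairs can be sketched as a data shape. A minimal sketch in TypeScript — the field names here are illustrative assumptions, not MemNexus's actual schema:

```typescript
// Illustrative shape of an extracted recommendation.
// Field names are assumptions, not MemNexus's actual schema.
interface Recommendation {
  triggerContext: string;                       // e.g. "debugging frontend layout issues"
  practice: string;                             // e.g. "check computed styles before modifying CSS"
  source: "explicit" | "learned";               // how the pipeline extracted it
  confidence: number;                           // 0..1, checked against minConfidence
  status: "pending" | "approved" | "dismissed"; // the approval gate (next section)
}

const rec: Recommendation = {
  triggerContext: "debugging frontend layout issues",
  practice: "check computed styles in browser devtools before modifying CSS",
  source: "learned",
  confidence: 0.8,
  status: "pending",
};
```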

2. Approval

New recommendations start as pending review (unless you configure requireApprovalFor: 'none'). Review and approve them in the portal, or via the CLI:

mx patterns list --pending        # See what's waiting for review
mx patterns approve <id>          # Activate a recommendation
mx patterns dismiss <id>          # Reject it

Only approved recommendations are surfaced to agents. This keeps behavioral guidance intentional — you're always in control of what influences your agent's behavior.

3. Surfacing

Approved recommendations reach you through complementary mechanisms: JIT nudges when you save a memory, build_context when you start a task, and compile_instructions for system-prompt injection.


Getting context: JIT vs. build_context

These two mechanisms serve different moments in your workflow. Understanding the difference helps you get the most out of behavioral learning.

JIT nudges — retrospective coaching

JIT (just-in-time) nudges fire when you save a memory. MemNexus matches the memory content against your active recommendations using semantic similarity, and if a relevant practice is found, appends it to the confirmation.

Example:

You save: "Spent 2 hours debugging a CSS grid issue in the customer portal — the columns weren't aligning because of an implicit auto-fit"

Response:

Memory saved. ID: mem_abc123

---
**Relevant behavior for this context:**
- **When** debugging frontend layout issues → check computed styles in browser devtools before modifying CSS
_(from your approved behavioral patterns)_

Key characteristic: JIT nudges are retrospective.

The nudge fires after you've completed the task — when you're writing it up. It's coaching for next time, not help for right now. This is by design: the memory save is the natural moment to reinforce a lesson while it's fresh.

Zero noise when no recommendation matches — the confirmation is unchanged.
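The matching step described above can be sketched with cosine similarity over embeddings. This is a minimal illustration under assumptions — `pickNudges`, the vector inputs, and the threshold are hypothetical, not MemNexus's actual implementation:

```typescript
// Cosine similarity between two embedding vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Hypothetical nudge selection: return only approved practices whose
// trigger-context embedding is close enough to the saved memory's embedding.
function pickNudges(
  memoryVec: number[],
  recs: { practice: string; approved: boolean; triggerVec: number[] }[],
  threshold = 0.6,
): string[] {
  return recs
    .filter((r) => r.approved && cosine(memoryVec, r.triggerVec) >= threshold)
    .map((r) => r.practice);
}
```

Note that an empty result means the save confirmation is left untouched, which is the "zero noise" behavior described above.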

build_context — prospective context

build_context fires when you start a task. Call it at session open with a description of what you're about to work on:

Use build_context: "debugging the checkout flow in the customer portal"

It returns:

  • Active work — What you were last doing in this area
  • Key facts — Extracted knowledge relevant to your context
  • Gotchas — Recurring warnings from your history
  • Recent activity — What happened in the last 24h
  • Related patterns — Behavioral patterns relevant to the task

Key characteristic: build_context is prospective.

It surfaces context before you need it, so you can apply it from the start. The timing is different from JIT — you're receiving guidance when you can still act on it, not after the fact.
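The sections listed above can be represented as a result shape. A sketch under assumed field names — the real build_context response format may differ:

```typescript
// Assumed shape of a build_context result; the fields mirror the
// documented sections but the names and types are illustrative.
interface BuildContextResult {
  activeWork: string[];      // what you were last doing in this area
  keyFacts: string[];        // extracted knowledge relevant to the context
  gotchas: string[];         // recurring warnings from your history
  recentActivity: string[];  // what happened in the last 24h
  relatedPatterns: string[]; // behavioral patterns relevant to the task
}

// Illustrative sample, not real output.
const ctx: BuildContextResult = {
  activeWork: ["refactoring the checkout flow"],
  keyFacts: [],
  gotchas: ["staging DB resets nightly"],
  recentActivity: [],
  relatedPatterns: [
    "debugging frontend layout issues → check computed styles first",
  ],
};
```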

Using both together

|  | JIT nudge | build_context |
|---|---|---|
| When | After saving a memory | Before starting a task |
| Direction | Retrospective — coaching for next time | Prospective — context for right now |
| Trigger | Automatic on create_memory | Manual call at session start |
| Best for | Reinforcing lessons, building habits | Orienting before diving in |

Neither replaces the other. A healthy workflow uses both:

  1. Session start: Call build_context with your task description
  2. During work: Save memories as you go — JIT nudges appear in responses when relevant
  3. Session end: Save a summary — JIT reinforces any applicable practices

compile_instructions

For agents that need behavioral context injected into their system prompt, compile_instructions compiles approved recommendations into a short, ordered list of instructions:

# Via CLI
mx patterns compile-instructions --current-query "debugging a TypeScript type error"

# Via MCP tool
patterns action='compile_instructions' currentQuery='debugging a TypeScript type error'

Pass currentQuery to activate embedding-based matching — the system finds recommendations whose trigger context is semantically similar to what you're actually doing. Without it, the system falls back to keyword-based matching against your stored patterns.

This is the mechanism behind the patterns.relatedPatterns section you see in build_context output.
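The budget-bounded compilation could look roughly like this. The ranking input, `estimateTokens` heuristic, and function names are assumptions for illustration, not the actual implementation:

```typescript
// Very rough token estimate (~4 characters per token); an assumption,
// not MemNexus's tokenizer.
function estimateTokens(s: string): number {
  return Math.ceil(s.length / 4);
}

// Compile already-ranked practices into an ordered instruction list,
// stopping once the token budget would be exceeded.
function compileInstructions(
  ranked: { practice: string; score: number }[],
  tokenBudget = 200, // mirrors the defaultTokenBudget setting
): string[] {
  const out: string[] = [];
  let used = 0;
  for (const r of [...ranked].sort((a, b) => b.score - a.score)) {
    const cost = estimateTokens(r.practice);
    if (used + cost > tokenBudget) break;
    out.push(r.practice);
    used += cost;
  }
  return out;
}
```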


Pattern detection (v1)

In addition to the recommendation engine, MemNexus can detect statistical patterns from your memory graph. These are coarser-grained than recommendations but useful for identifying broad themes:

mx patterns detect           # Run detection algorithms
mx patterns compile          # Extract named behavioral patterns
mx patterns analyze          # Get insights and trends

Detected patterns feed into the relatedPatterns section of build_context results.


Configuration

Control behavioral learning through your account's behavioral state:

mx behavior get              # View current configuration
mx behavior update           # Update settings

Key settings:

| Setting | Default | Description |
|---|---|---|
| enabled | true | Enable/disable behavioral learning entirely |
| requireApprovalFor | learned | Which recommendations require approval before activating (learned, all, or none) |
| minConfidence | 0.6 | Minimum confidence threshold for surfacing recommendations |
| defaultTokenBudget | 200 | Token budget for compile_instructions output |

Privacy

Behavioral recommendations are derived from your memories and scoped to your account. They are:

  • Private — Only accessible with your API key, never shared across accounts
  • Inspectable — View all recommendations with mx patterns list
  • Deletable — Dismiss or delete any recommendation at any time
  • Approval-gated — No recommendation influences agent behavior without your review (with default settings)

See Privacy & Security for more details.


Next steps