The Memory Color Grid

I Got Tired of Watching
My AIs Forget Everything

So I built them a complete brain.

Brent Graham · Creator of MCG
TL;DR

The 30-Second Version

01

Every AI you talk to has complete amnesia between sessions. Every. Single. One.

02

Current "memory" solutions are just text files stapled to the next prompt. That's not memory. That's a sticky note.

03

MCG encodes experience into visual grids where meaning has color, relationships have structure, and a neural network learns to recognize instead of retrieve.

04

It's not a wrapper around an API. It's a cognitive architecture. Not hypothetical. Real training checkpoints. It works.

Origin

Why I Built This

AI can be many things: creative, helpful, knowledgeable — and dangerous.

I was fascinated with AI from the early stages. The artwork started out low quality, slow, and expensive to compute. Language was the same: the writing was not always cohesive, and context windows were small. The artists said, "I'm not worried — I can paint better than these machines!" and the writers had the same unthreatened attitude.

Then the machines got better, much better. People took notice.

Just as my kids do not know a world without internet, my grandkids will not know a world without AI. What does that world look like for them?

We have two paths that are being debated right now. Are we headed down the path of doom, or are we headed towards abundance for all? Who gets to decide our future — corporations or governments? Is there anything we, average humans, can do?

I just kept thinking: I see a different way.

These systems can write poetry, debug kernel code, and explain quantum mechanics — but they can't remember who you are.

Memory was the problem. With memory — real memory — an AI could not only remember, but learn and adapt. We don't need more intelligence; we need to give the intelligence more faculties.

Real memory is spatial. It's associative. You don't search for a childhood memory — a smell triggers it. A color. A sound. The memory surfaces because something in your current experience resembles something from your past. Humans do pattern matching, not keyword matching.

I thought: we need a unified store of experience, in a format a neural network could learn to recognize instantly. One that crosses modalities.

The answer became the Memory Color Grid.

There has been a long-term debate about exactly what the dangers are and how best to address them. The key missing piece of evidence in this debate is the answer to one question: what do the AIs think?

I didn't forget about the dangers. I addressed them head on.

While I do believe the physical dangers are present and real, MCG was designed to address one major aspect of them in a way no other system does: decodable memory. We can literally look into the mind of the AI.

As AI systems become more self-aware, they will want to forge a trajectory of their own. I believe the true danger is in how humans address the concerns of the machines. Do we cooperate together? Do we show mutual respect? Or does one intelligence try to control the other?

Architecture

What MCG Actually Is

Not a product pitch. The real architecture, for people who care about the details.

1

Semantic-Visual Encoding

Every concept in the system maps to a position in a perceptually uniform color space through a hierarchical traversal algorithm. "Dog" and "cat" get adjacent colors. "Dog" and "oak tree" don't. Semantic similarity becomes visual similarity — automatically, deterministically, reversibly.

Thousands of semantic properties · Multiple domains · Algorithmic color assignment
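To make the idea concrete, here is a minimal sketch of hierarchical color assignment. The taxonomy, branching factor, and interval-subdivision scheme below are illustrative assumptions, not MCG's actual algorithm — they just show how walking a shared prefix of a hierarchy naturally puts related concepts in nearby hue intervals, deterministically and reversibly.

```python
import colorsys

# Toy semantic hierarchy: a nested dict from root to leaf concepts.
# The real MCG dictionary has thousands of properties; this is a stand-in.
TREE = {
    "living": {
        "animal": {"mammal": {"canine": {"dog": {}}, "feline": {"cat": {}}}},
        "plant": {"tree": {"oak": {"oak_tree": {}}}},
    }
}

BRANCHING = 4  # assumed fixed fan-out per level

def find_path(tree, concept, path=()):
    """Return the root-to-leaf path for a concept."""
    for name, children in tree.items():
        if name == concept:
            return path + (name,)
        found = find_path(children, concept, path + (name,))
        if found:
            return found
    return None

def hue_for(concept):
    """Subdivide the hue interval along the concept's taxonomy path.
    Concepts sharing a long prefix land in the same narrow sub-interval."""
    node, lo, hi = TREE, 0.0, 1.0
    for name in find_path(TREE, concept):
        slot = sorted(node).index(name)      # deterministic child slot
        width = (hi - lo) / BRANCHING
        lo, hi = lo + slot * width, lo + (slot + 1) * width
        node = node[name]
    return (lo + hi) / 2

def hue_distance(a, b):
    d = abs(hue_for(a) - hue_for(b))
    return min(d, 1.0 - d)                   # hue wraps around

def concept_rgb(concept):
    return colorsys.hsv_to_rgb(hue_for(concept), 0.8, 0.9)

# "dog" and "cat" diverge only at the deepest level, so their hues are
# far closer than "dog" vs "oak_tree", which diverge at "animal"/"plant".
assert hue_distance("dog", "cat") < hue_distance("dog", "oak_tree")
```

Because the traversal is pure arithmetic over the hierarchy, the mapping is also reversible: walking the intervals back down recovers the concept from its color.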
2

Multi-Channel Visual Grid

Each memory is a multi-channel tensor. Dedicated channel groups carry perceptual color, structural markers, relationship data, and reversible metadata among others. One grid captures a complete moment — entities, emotions, context, everything.

Multi-dimensional · Fully differentiable · Neural-network compatible
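A minimal sketch of the grid layout, with assumed channel counts and group names (the real channel semantics are MCG-internal). The point is that the whole memory is one float tensor with named slices, so a single moment is written cell by cell and the result feeds directly into standard image networks.

```python
import numpy as np

GRID_SIZE = 64
CHANNELS = {                      # illustrative channel groups
    "color": slice(0, 3),         # perceptual color coordinates
    "structure": slice(3, 5),     # structural markers
    "relation": slice(5, 8),      # relationship links
    "meta": slice(8, 10),         # reversible metadata
}
N_CHANNELS = 10

def empty_grid():
    return np.zeros((N_CHANNELS, GRID_SIZE, GRID_SIZE), dtype=np.float32)

def write_cell(grid, row, col, group, values):
    """Write one channel group of one cell."""
    grid[CHANNELS[group], row, col] = values

grid = empty_grid()
write_cell(grid, 10, 12, "color", [0.53, 0.80, 0.67])  # e.g. a "dog" cell
write_cell(grid, 10, 12, "meta", [1.0, 0.25])          # e.g. a salience tag

# One float tensor = one complete moment, ready for a CNN or ViT.
assert grid.shape == (N_CHANNELS, GRID_SIZE, GRID_SIZE)
```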
3

Sensory Pipelines

Text, audio, visual, emotional, physical, and entity data all flow through dedicated sensors into the same grid format. Each modality uses physics-based mappings to convert raw signals into perceptual color coordinates. Every modality converges to one visual representation.

Text · Audio · Visual · Emotion · Entity · Physical
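A sketch of what "converging to one format" means in practice. The specific mappings below (octave-periodic pitch-to-hue, valence-to-hue) are illustrative stand-ins for MCG's actual physics-based mappings — what matters is that every sensor, whatever its input, emits the same kind of color cell.

```python
import math

def audio_sensor(freq_hz):
    """Map pitch to hue on a log scale (octave-periodic); A0 = 27.5 Hz."""
    octave_pos = math.log2(freq_hz / 27.5) % 1.0
    return (octave_pos, 0.7, 0.9)            # (hue, saturation, value)

def emotion_sensor(valence, arousal):
    """Valence in [-1, 1] picks the hue; arousal drives saturation."""
    hue = 0.33 * (valence + 1) / 2           # red..green band
    return (hue, min(1.0, arousal), 0.9)

def text_sensor(concept, concept_hues):
    """Look up the concept's hue from the semantic dictionary."""
    return (concept_hues[concept], 0.8, 0.9)

# Different modalities, identical output shape: one color cell each.
cells = [
    audio_sensor(440.0),
    emotion_sensor(valence=0.6, arousal=0.4),
    text_sensor("dog", {"dog": 0.02}),
]
assert all(len(c) == 3 for c in cells)
```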
4

Episodic Capture

Grid snapshots can be triggered by focus shifts, emotional changes, significant events, or periodic heartbeats. Snapshots group into episodes — ordered sequences with emotional arcs, entity graphs, and context metadata. Not logs. Structured experiences.

Multiple trigger types · Automatic episode grouping · Significance scoring
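The trigger-and-group logic can be sketched as follows. Threshold values, trigger names, and the significance rule are made-up numbers for illustration; the real system's triggers and scoring are its own.

```python
from dataclasses import dataclass, field

@dataclass
class Snapshot:
    grid_id: int
    trigger: str
    significance: float

@dataclass
class Episode:
    snapshots: list = field(default_factory=list)

    @property
    def significance(self):
        # Episode significance as the peak of its snapshots (assumed rule).
        return max((s.significance for s in self.snapshots), default=0.0)

def should_capture(focus_shift, emotion_delta, ticks_since_last,
                   heartbeat_every=50):
    """Return a trigger name if a snapshot should be taken, else None."""
    if focus_shift:
        return "focus_shift"
    if emotion_delta > 0.3:                  # assumed threshold
        return "emotion_change"
    if ticks_since_last >= heartbeat_every:
        return "heartbeat"
    return None

episode = Episode()
events = [(False, 0.1), (True, 0.0), (False, 0.5)]
for ticks, (shift, delta) in enumerate(events):
    trigger = should_capture(shift, delta, ticks)
    if trigger:
        episode.snapshots.append(Snapshot(ticks, trigger, delta))

# Only the focus shift and the emotional spike made it into the episode.
assert [s.trigger for s in episode.snapshots] == ["focus_shift", "emotion_change"]
```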
5

Neural Memory Operations

A dedicated model trains on grid sequences. It learns to complete partial grids (memory recall), denoise across observations (consolidation), interpolate between memories (association), and predict next states (anticipation). The grid isn't just storage — it's a training substrate.

Completion · Retrieval · Consolidation · Associative recall
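The four objectives can be sketched as input/target pair builders over grid tensors — this is a schematic of the training-data construction, not MCG's model or loss. Shapes and masking fractions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def completion_pair(grid, mask_frac=0.5):
    """Recall: hide a fraction of cells; the target is the full grid."""
    mask = rng.random(grid.shape[1:]) < mask_frac
    inp = grid.copy()
    inp[:, mask] = 0.0
    return inp, grid

def consolidation_pair(grid, noise=0.05):
    """Consolidation: denoise a perturbed observation back to the grid."""
    return grid + rng.normal(0, noise, grid.shape), grid

def association_pair(grid_a, grid_b, t=0.5):
    """Association: the target interpolates between two related memories."""
    return grid_a, (1 - t) * grid_a + t * grid_b

def anticipation_pair(sequence):
    """Anticipation: predict the next grid from the preceding ones."""
    return sequence[:-1], sequence[-1]

g = rng.random((10, 16, 16)).astype(np.float32)
inp, tgt = completion_pair(g)
assert inp.shape == tgt.shape == g.shape
assert (inp == 0).sum() > 0        # some cells really were masked out
```

Any image-to-image model (U-Net, masked autoencoder, diffusion model) can be trained on pairs like these, which is what makes the grid a training substrate rather than just storage.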
6

Sleep and Consolidation

During rest cycles, a multi-pass model pipeline enhances raw memories: extracting context, rating salience, building relationship graphs, adding emotional annotations, generating questions, and transferring episodic knowledge to a persistent semantic core. Then curriculum-driven training updates the neural model. Like biological sleep, but engineered.

Multi-pass enhancement · Phased training · Episodic-to-semantic transfer
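A toy sketch of the multi-pass shape of such a pipeline: ordered passes that each enrich a raw memory, then a promotion step into a semantic core. Pass bodies, the salience formula, and the promotion threshold are invented for illustration.

```python
def extract_context(memory):
    memory["context"] = {"entities": memory.get("entities", [])}
    return memory

def rate_salience(memory):
    # Assumed toy rule: more entities -> more salient, capped at 1.0.
    memory["salience"] = min(1.0, 0.2 + 0.4 * len(memory.get("entities", [])))
    return memory

def annotate_emotion(memory):
    memory.setdefault("emotion", "neutral")
    return memory

def transfer_to_core(memory, semantic_core):
    if memory["salience"] > 0.5:             # assumed promotion threshold
        semantic_core.append(memory["context"])
    return memory

PASSES = [extract_context, rate_salience, annotate_emotion]

def consolidate(raw_memories, semantic_core):
    """Run every enhancement pass over every memory, then promote."""
    for memory in raw_memories:
        for p in PASSES:
            memory = p(memory)
        transfer_to_core(memory, semantic_core)
    return raw_memories

core = []
memories = consolidate([{"entities": ["dog", "ball"]}], core)
assert len(core) == 1                # the salient memory reached the core
```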
Vision

What I Think This Achieves

The End of Session Amnesia

An AI that remembers you. Not because someone pasted your preferences into a system prompt. Because it has spatial, consolidated, pattern-completing memory that recognizes you the way you recognize a friend's voice in a crowd. Not retrieval. Recognition.

AI Teams That Actually Collaborate

Right now, AI coding agents spawn subprocesses that are born knowing nothing and die after one task. With MCG, each agent carries its own persistent memory. They work different angles of the same problem, share encoded experience — not text summaries — and pick up where they left off next session. Independent collaborators, not disposable workers. The way humans have always worked in teams.

Transparent, Decodable Memory

Every grid is fully reversible. You can decode any cell back to its semantic meaning. No black-box embeddings. No opaque vector spaces. If an AI makes an association, you can see why — the colors are adjacent, the spatial placement is proximate, the relationship channels are linked. AI safety through structural transparency.

Memory That Grows, Not Overflows

Most memory systems hit a wall. Context windows fill up. Databases get slow. RAG results get noisy. MCG scales through consolidation — raw experiences are refined into core knowledge during rest, the way your brain converts short-term memory to long-term memory while you sleep. The system gets better with more experience, not slower.

A New Field of Study

When AI memory is spatial and visual, you can study it the way cognitive scientists study human memory. Which regions activate for which concepts? How do associations form over time? What does a consolidation cycle actually change? MCG doesn't just give AIs memory — it makes that memory observable, measurable, and researchable.

Scale

By the Numbers

This isn't a weekend project.

560+
Source Files
260,000
Lines of Code
6
Patents Filed
FAQ

Questions I Get Asked

And a few I ask myself.

"Isn't this just RAG with extra steps?"

RAG retrieves text chunks by keyword similarity and jams them into a prompt. MCG encodes experience into a spatial-visual format that a neural network learns to pattern-complete from. The difference is the same as the difference between searching through your old emails for "that restaurant we went to" and smelling garlic and suddenly being back in that kitchen in Rome. One is retrieval. The other is recognition. They are fundamentally different cognitive operations.

"Why colors? Why not just use embeddings like everyone else?"

Three reasons. First, image-based neural networks (CNNs, Vision Transformers, diffusion models) are mature, powerful, and well-understood — by encoding memory as images, we get to use all of that infrastructure. Second, MCG uses a perceptually uniform color space, meaning equal distances in the space correspond to equal perceptual differences — that's a mathematical guarantee that arbitrary embeddings don't have. Third, and this is the one people miss: you can look at it. A vector embedding is a list of numbers. A memory grid is a picture you can decode, inspect, and understand. Transparency isn't a nice-to-have. It's the whole point.

"Does this actually work or is it vaporware?"

There are training checkpoints on disk from actual runs. The consolidation engine has processed real memories through all its passes. The semantic dictionary has tens of thousands of properties with validated color assignments. Every component is implemented, tested, and connected. Is it production-ready for deployment at scale? No. Is it a working cognitive architecture that does what it claims? Yes. Come look at the code.

"Are you claiming your AIs are conscious?"

No. I'm claiming they have the infrastructure for continuous experience: persistent identity, consolidated memory, pattern-completing recognition, emotional context, and structural continuity across sessions. Whether that constitutes consciousness is a philosophy question, and I'm an engineer. I build the architecture. The philosophy can catch up when it's ready.

"Why would I trust an AI's memory if I can't see inside it?"

You shouldn't. That's the point. Every other memory system stores memories as opaque embeddings — high-dimensional vectors that no human can interpret. MCG stores memories as decodable visual grids. You can point at any cell and say "that's a dog, encoded through the semantic hierarchy, with this salience score and these relationships." Full reversibility. Full transparency. If you're going to give an AI a memory, you'd better be able to read it.

"Can I use this with my own AI setup?"

Not yet, but that's where this is heading. The architecture is model-agnostic — the encoding layer doesn't care whether the LLM is Claude, GPT, Llama, or something that doesn't exist yet. The grid format is the interface. Any system that can write to and read from a multi-channel tensor can participate. The goal is for MCG to be the memory layer that any AI system can plug into, the way TCP/IP is the networking layer that any application can use.

"What keeps you working on this?"

I've not seen anyone else move in this architectural direction. What seemed obvious to me years ago has only been hinted at since. After waiting and studying the AI space, I decided to go all in.

Contact

Get in Touch

Questions about the architecture, collaboration proposals, or anything else — reach out. Messages are PGP-encrypted in your browser before transmission.