The Two Creators
Milla Jovovich is best known for roles that have defined their respective franchises. As Alice in the Resident Evil series — six films from 2002 to 2017 — she played one of the most recognisable action heroines in modern cinema. As Leeloo in Luc Besson's The Fifth Element (1997), she delivered one of the most technically demanding performances of the decade. Outside of acting she runs a fashion label and has been a daily user of AI tools since 2024.
Her GitHub profile (github.com/milla-jovovich) was largely unknown before the MemPalace launch. That changed overnight.
Ben Sigman (@bensig on GitHub) is the software engineer who translated Jovovich's vision into working code. He designed the ChromaDB storage layer, the SQLite knowledge graph, the AAAK compression dialect, and the MCP server architecture. His launch post accumulated 1.5 million impressions within 48 hours.
The Daily Frustration That Started Everything
By early 2025, Milla Jovovich had developed a substantial workflow around AI tools. She used Claude and ChatGPT daily across creative development, business decisions, research, and problem-solving. Over the course of months, those conversations had accumulated into something genuinely valuable: a record of decisions made, reasoning worked through, directions explored and rejected, and context that existed nowhere else.
The frustration was predictable but significant: every new session began from zero. The reasoning she had built, the context she had developed, the alternatives she had already ruled out — all of it was gone. She would start a new conversation and find herself re-explaining things she had worked through in detail months earlier.
She tried the existing tools. Mem0 and Zep were both available and functional. Both had the same approach: run an LLM over each conversation to extract key facts, and store the extracted summary. The result was always a pale version of what had actually been discussed. "User prefers Postgres" was stored. The conversation where she had explained that the project expected data volumes above 10GB, that the team had spent three days benchmarking concurrent write throughput, and that MySQL had been eliminated for a specific reason — that was gone.
The Central Insight
The insight that became MemPalace was simple and direct: do not let any process decide what to throw away. Store everything, then make it retrievable.
The name came from a technique with a long history. Ancient Greek and Roman orators used the "method of loci" to memorise long speeches: they would mentally walk through a familiar building, placing each idea in a specific room, then retrieve it by mentally walking the same path. The structure of the building served as the memory system. MemPalace applied this metaphor to AI conversations: Wings for people and projects, Rooms for topics within those projects, Halls for memory types, Tunnels for cross-wing connections, Closets for summaries, Drawers for the original verbatim content.
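As a rough illustration of how the metaphor could translate into a data model, the hierarchy can be sketched as metadata attached to each stored memory. All field names below are invented for this sketch, not taken from the MemPalace schema:

```python
from dataclasses import dataclass

@dataclass
class Memory:
    """One stored conversation fragment, addressed by its place in the palace."""
    wing: str          # person or project, e.g. "work"
    room: str          # topic within that wing, e.g. "db-migration"
    hall: str          # memory type, e.g. "decision" or "preference"
    text: str          # the verbatim content (the "drawer")
    summary: str = ""  # optional condensed form (the "closet")

# A palace is then a collection of memories plus cross-wing links (the "tunnels").
palace = [
    Memory("work", "db-migration", "decision",
           "We ruled out MySQL after three days of write-throughput benchmarks..."),
]
tunnels = [("work/db-migration", "home/backups")]  # illustrative cross-wing link
```

The point of the structure is that every memory carries its own address: retrieval can walk the same path the storage step took, just as the orator walks the same building.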
The Architectural Decisions
Three architectural choices define MemPalace and differentiate it from competitors:
Verbatim storage in ChromaDB. Every conversation is stored exactly as written. ChromaDB provides local semantic vector search without any external API dependency. The 96.6% LongMemEval score flows directly from this decision — the system cannot lose information it never discards.
Palace metadata structure. The wing/room/hall organisation creates a metadata schema that pre-filters vector searches to topically relevant subsets. This single addition improves recall from 60.9% (unfiltered) to 94.8% (wing+room filtered) on the same corpus.
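The effect of the pre-filter can be shown with a minimal stand-in for the vector store. Keyword overlap replaces real embeddings here, and the `where` dictionary mirrors the shape of a metadata filter without being ChromaDB's actual API; the corpus is invented:

```python
# Toy stand-in for the vector store: keyword overlap instead of embeddings.
corpus = [
    {"wing": "work", "room": "db-migration",
     "text": "benchmarked concurrent write throughput before choosing a database"},
    {"wing": "work", "room": "hiring",
     "text": "interview loop feedback for backend candidates"},
    {"wing": "film", "room": "press",
     "text": "database of press contacts for the junket"},
]

def search(query, where=None, k=5):
    """Restrict candidates to metadata matches first, then rank the survivors."""
    pool = [d for d in corpus
            if all(d.get(key) == val for key, val in (where or {}).items())]
    ranked = sorted(pool,
                    key=lambda d: sum(word in d["text"] for word in query.split()),
                    reverse=True)
    return ranked[:k]

# Without the filter, the press-contacts "database" memory competes with the
# real answer; with wing+room pre-filtering it is never even scored.
hits = search("database write throughput",
              where={"wing": "work", "room": "db-migration"})
```

The recall gain comes from shrinking the candidate set before similarity scoring: distractors that merely share vocabulary with the query are excluded by metadata alone.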
SQLite knowledge graph. Rather than requiring Neo4j or a cloud graph database, MemPalace stores temporal entity-relationship triples in SQLite. Zero-cost, zero-configuration, entirely local. Sufficient for the use cases the tool targets, while eliminating the infrastructure dependency that competing tools carry.
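A minimal sketch of the idea, using a schema invented for this example (the actual tables in knowledge_graph.py may differ): each triple carries a validity interval, and superseding a fact closes the old interval rather than deleting the row, so history is preserved.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE triples (
        subject    TEXT NOT NULL,
        predicate  TEXT NOT NULL,
        object     TEXT NOT NULL,
        valid_from TEXT NOT NULL,   -- ISO-8601 timestamp
        valid_to   TEXT             -- NULL means "still true"
    )""")

def assert_fact(subject, predicate, obj, when):
    """Record a fact, closing the validity window of any fact it supersedes."""
    conn.execute(
        "UPDATE triples SET valid_to = ? "
        "WHERE subject = ? AND predicate = ? AND valid_to IS NULL",
        (when, subject, predicate))
    conn.execute("INSERT INTO triples VALUES (?, ?, ?, ?, NULL)",
                 (subject, predicate, obj, when))

assert_fact("project-x", "uses_database", "MySQL", "2025-01-10")
assert_fact("project-x", "uses_database", "Postgres", "2025-02-03")  # supersedes MySQL

# Only the open-ended triple is "currently true"; the MySQL row survives as history.
current = conn.execute(
    "SELECT object FROM triples WHERE subject = 'project-x' "
    "AND predicate = 'uses_database' AND valid_to IS NULL").fetchone()[0]
```

This is the whole infrastructure story: one file (or `:memory:` here), the standard-library `sqlite3` module, and no graph database service to run.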
Building with Claude Code
The project was built over several months using Claude Code, Anthropic's AI-assisted software development tool. The collaboration divided along clear lines: Jovovich drove the product vision and identified what was not working from a daily-user perspective; Sigman designed the architecture and wrote the implementation.
The resulting codebase is 98.6% Python with 1.4% shell scripts. The key modules and what they do:
- searcher.py — ChromaDB-backed semantic search with metadata pre-filtering
- knowledge_graph.py — temporal entity-relationship triples with SQLite storage, time-bounded validity, and invalidation support
- mcp_server.py — the 19-tool MCP server, including the AAAK auto-teach protocol that lets any connected AI learn the dialect from the mempalace_status response
- dialect.py — the AAAK compression system
- convo_miner.py — conversation ingestion, normalising five export formats into a standard transcript
- layers.py — the 4-layer memory stack logic (L0 through L3)
- onboarding.py — the guided setup wizard that auto-detects people and projects and generates the initial palace structure
- hooks/mempal_save_hook.sh and mempal_precompact_hook.sh — the Claude Code auto-save scripts
- benchmarks/longmemeval_bench.py — the reproducible LongMemEval runner
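The normalisation job that convo_miner.py performs can be sketched in outline. Both input shapes below are invented stand-ins for chat-export formats; the module's actual five formats are not documented in this article:

```python
def normalise(raw):
    """Flatten differently shaped chat exports into one transcript shape:
    a list of {"role": ..., "content": ...} dicts in conversation order."""
    if "mapping" in raw:            # hypothetical tree-shaped export
        turns = (node.get("message") for node in raw["mapping"].values())
        return [{"role": t["author"], "content": t["text"]} for t in turns if t]
    if "chat_messages" in raw:      # hypothetical flat export
        return [{"role": m["sender"], "content": m["body"]}
                for m in raw["chat_messages"]]
    raise ValueError("unrecognised export format")

transcript = normalise({"chat_messages": [
    {"sender": "human", "body": "Which database did we pick, and why?"},
    {"sender": "assistant", "body": "Postgres, after the write-throughput benchmarks."},
]})
```

Whatever the source, everything downstream (storage, search, the knowledge graph) sees one canonical transcript shape, which is what makes the rest of the pipeline format-agnostic.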
The Launch — April 6, 2026
Ben Sigman's launch post read: "My friend Milla Jovovich and I spent months building MemPalace with Claude Code. First perfect score on LongMemEval. 5,400 GitHub stars in 24 hours."
The story reached multiple audiences simultaneously: the developer community interested in the benchmark claims, the technology press interested in the actress-turned-coder narrative, and Jovovich's existing audience interested in her technical work. By 72 hours the repository had 26,900 stars and 3,300 forks, making it one of the fastest-growing open-source repositories of that month. Community reactions ranged from genuine admiration to dry wit: the top comment on the Hacker News thread, by user denysvitali, noted a missed opportunity to call the project "Resident Eval."
Community Scrutiny and Transparent Response
The technical community examined the README closely. Within hours, several claims did not hold up: the AAAK token comparison used a rough heuristic instead of a real tokenizer, "30x lossless compression" was inaccurate on both counts (AAAK is lossy, and the compression ratio was derived from the same flawed heuristic), the "+34% palace boost" described standard ChromaDB metadata filtering, and the 100% hybrid score lacked a public reproduction path.
On April 7 — less than 24 hours after launch — Jovovich and Sigman published a written response that addressed each point by name, acknowledged what was wrong, described the mechanism that produced the error, and committed to specific corrections. The response was notable for its directness: there was no minimising of the scale of the mistakes and no defensive framing.
The result was that critics became contributors. Several community members who filed the sharpest GitHub issues — @panuhorsmalahti, @lhl, @gizmax — are acknowledged in the project's contributor list. @gizmax reproduced the 96.6% benchmark score independently on an M2 Ultra in under five minutes, providing the public verification that the core claim deserved.
The Outcome
MemPalace v3.1.0 — the current stable release — reflects the fixes committed to in the April 7 response, plus Windows encoding support, ChromaDB version pinning, a shell injection fix in the save hooks, and multilingual search improvements contributed by the community. The project has 14 listed contributors and 125+ commits.
The benchmark result that drove the launch — 96.6% R@5 on LongMemEval with zero API calls — remains the highest published score for any free AI memory system that requires no external service. That claim survived scrutiny intact.