memlocal
your AI's memory belongs on your device.
locomo
memlocal posts an 80.0% pass rate and a 4.21/5 average llm score on the LoCoMo benchmark while keeping memory on-device.
the result matters because it combines benchmark quality with a local-first memory architecture: no server memory layer, no external vector database, and no requirement to hand user recall to a cloud vendor.
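the two headline numbers are simple aggregates over per-question judge outputs: pass rate is the share of questions judged correct, and the llm score is the mean judge rating. a minimal sketch of that aggregation, with field names and toy data invented for this example (not memlocal's actual evaluation code):

```python
# illustrative only: how LoCoMo-style headline metrics aggregate.
# the dict fields ("correct", "score") are assumptions for this sketch.

def summarize(judgments: list[dict]) -> tuple[float, float]:
    """judgments: per-question dicts with a boolean 'correct' flag
    and an llm-judge 'score' in 1..5."""
    passes = sum(1 for j in judgments if j["correct"])
    pass_rate = 100.0 * passes / len(judgments)
    avg_score = sum(j["score"] for j in judgments) / len(judgments)
    return pass_rate, avg_score

# toy data: 4 of 5 questions pass
sample = [
    {"correct": True, "score": 5},
    {"correct": True, "score": 4},
    {"correct": True, "score": 5},
    {"correct": True, "score": 4},
    {"correct": False, "score": 2},
]
print(summarize(sample))  # (80.0, 4.0)
```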
published scores

| system | architecture | locomo pass rate |
| --- | --- | --- |
| memlocal | local-first, embedded, on-device | 80.0% |
| pam | cloud agent, file-first memory | 74.35% |
| letta | server-based memory stack | 74.0% |
| mem0 | cloud or self-hosted multi-service setup | 66.9% |
| openai chatgpt memory | cloud memory tied to hosted product | 52.9% |
published benchmark write-ups can vary slightly in judge model or evaluation details, but the overall picture is stable: memlocal is competitive on quality without giving up local ownership.
why local-first memory matters

- single process: cozodb provides graph, vector, and full-text search in one embedded engine
- private by default: memory stays on-device, where the user lives
- offline recall: always on, with no network dependency for retrieval
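offline recall is the whole point: retrieval is a local computation over data already in the process, so no request ever leaves the device. a dependency-free toy sketch of that idea; memlocal itself uses cozodb's embedded vector index, and the names here (`MemoryStore`, `remember`, `recall`) are invented for this example:

```python
import math

# toy in-process vector store: illustrates retrieval with zero network
# dependency. this is NOT memlocal's API; memlocal uses cozodb's embedded
# vector index. all names and embeddings below are made up.

class MemoryStore:
    def __init__(self):
        self._items: list[tuple[str, list[float]]] = []

    def remember(self, text: str, embedding: list[float]) -> None:
        self._items.append((text, embedding))

    def recall(self, query: list[float], k: int = 3) -> list[str]:
        # rank stored memories by cosine similarity to the query vector
        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return dot / (na * nb) if na and nb else 0.0

        ranked = sorted(self._items, key=lambda it: cosine(query, it[1]),
                        reverse=True)
        return [text for text, _ in ranked[:k]]

store = MemoryStore()
store.remember("user prefers dark mode", [0.9, 0.1, 0.0])
store.remember("user's dog is named Rex", [0.0, 0.8, 0.2])
store.remember("user lives in Lisbon", [0.1, 0.1, 0.9])
print(store.recall([1.0, 0.0, 0.0], k=1))  # ['user prefers dark mode']
```

in a real local-first stack the embeddings would come from an on-device model, but the retrieval step stays exactly this shape: a pure function of local state.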