The Approximate Mind, Part 17: Memory Scaffolding
The Oldest Technology
Before writing, we had each other. The elder who remembered which berries were poison. The grandmother who knew the song for grinding grain.
Part 17 explored memory scaffolding: AI that holds what you need to remember.
Personality scaffolding goes deeper: AI that holds who you are.
Every AI you interact with is building a model of you.
Not explicitly, not consciously, but functionally. The recommendation system that learned you prefer morning emails. The chatbot that noticed you respond better to direct answers than hedged ones. The assistant that figured out you need extra context on financial topics but hate being over-explained on technical ones.
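To make "functionally, not explicitly" concrete, here is a minimal sketch of what such an implicit model might look like. Everything in it is illustrative rather than how any particular product works, from the class name ImplicitUserModel to the specific signals and the moving-average update. The point is only that the preferences are inferred from behavior the system observes, never stated by the user.

```python
from collections import defaultdict
from dataclasses import dataclass, field


@dataclass
class ImplicitUserModel:
    """Toy sketch of a user model built purely from behavioral signals.

    Nothing here is declared by the user; every field is inferred from
    what they did, not what they said. All names are hypothetical.
    """
    # Running counts of engagement, keyed by the hour an email was sent.
    email_opens_by_hour: dict = field(default_factory=lambda: defaultdict(int))
    # Exponential moving averages of satisfaction for two answer styles.
    direct_answer_score: float = 0.5
    hedged_answer_score: float = 0.5

    def record_email_open(self, hour_sent: int) -> None:
        """An opened email is a weak vote for that send time."""
        self.email_opens_by_hour[hour_sent] += 1

    def record_answer_feedback(self, style: str, satisfied: bool) -> None:
        """Nudge the moving average for whichever answer style was used."""
        alpha = 0.1  # step size for the moving average
        target = 1.0 if satisfied else 0.0
        if style == "direct":
            self.direct_answer_score += alpha * (target - self.direct_answer_score)
        else:
            self.hedged_answer_score += alpha * (target - self.hedged_answer_score)

    def preferred_send_hour(self) -> int | None:
        """Best guess at when this user actually reads email."""
        if not self.email_opens_by_hour:
            return None
        return max(self.email_opens_by_hour, key=self.email_opens_by_hour.get)

    def prefers_direct_answers(self) -> bool:
        return self.direct_answer_score >= self.hedged_answer_score


if __name__ == "__main__":
    model = ImplicitUserModel()
    for hour in (8, 8, 9, 20):               # mostly opens morning mail
        model.record_email_open(hour)
    model.record_answer_feedback("direct", satisfied=True)
    model.record_answer_feedback("hedged", satisfied=False)
    print(model.preferred_send_hour())        # -> 8
    print(model.prefers_direct_answers())     # -> True
```

The model never asks "do you prefer morning emails?" It just counts opens. That is the sense in which the systems around you are modeling you functionally.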
Aristotle gave us the language we still use for persuasion: logos, pathos, ethos. Logic, emotion, credibility. Part 12 of this series examined how AI systems learn to persuade, optimizing influence while (hopefully) respecting autonomy. But I glossed over something that deserves its own examination.
Everything I wrote in Part 22 assumed current AI architecture: stateless inference, no persistent self, each instance fresh. The system has no continuity. It doesn’t remember being reliable yesterday. It accumulates no history that could constitute character.
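Here is a hedged sketch of what "stateless inference" means in practice, using a stand-in generate() function rather than any real API. The model itself retains nothing between calls; any appearance of continuity has to be reconstructed by the caller on every request.

```python
def generate(messages: list[dict]) -> str:
    """Stand-in for a stateless language-model call.

    A real model would return a completion; this stub just reports how much
    context it was given, which is the only thing the model ever 'knows'.
    """
    return f"(reply conditioned on {len(messages)} message(s))"


# Call 1: the model sees only what we pass in.
first = generate([{"role": "user", "content": "My name is Margaret."}])

# Call 2: a brand-new inference. Nothing from call 1 persists inside the
# model; if the caller does not resend the history, the name is simply gone.
forgetful = generate([{"role": "user", "content": "What's my name?"}])

# Continuity is the caller's job: the appearance of memory comes entirely
# from replaying prior messages into each fresh, stateless call.
history = [
    {"role": "user", "content": "My name is Margaret."},
    {"role": "assistant", "content": first},
    {"role": "user", "content": "What's my name?"},
]
remembering = generate(history)

print(forgetful)     # conditioned on 1 message: no idea who Margaret is
print(remembering)   # conditioned on 3 messages: the 'memory' was resent
```

Nothing accumulates inside the model between the two calls; whatever history it appears to have is supplied from outside, which is exactly why it cannot accumulate anything like character on its own.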
You are not one person.
You are Margaret-the-grandmother when your daughter visits with the children. Different values, different priorities, different ways of speaking. The self that emerges in that context genuinely differs from other Margarets.