You spend an hour working with an AI agent. It learns your project structure, your naming conventions, your decision history. Then you close the session. Tomorrow, it knows nothing. This is the default state of every AI tool — and it's the first problem you need to solve.

The reason is architectural. Large language models are stateless. Each conversation is a clean slate with no memory of previous interactions. The context window — the amount of text the model can process at once — is the only "memory" it has, and it resets completely between sessions.

0 bytes
What an AI remembers between sessions by default. Every conversation starts from scratch.

Some tools partially address this. ChatGPT has "Memory." Claude has "Projects." These store snippets of context, but they're limited in capacity, opaque about what they store, and impossible to audit. You can't see exactly what the AI remembers, you can't control it, and the stored context is a thin summary, not a complete record.

For casual use, that's fine. For running a business, it's not.

The Session Logging Pattern

The fix is surprisingly simple: write things down. Not in a database. Not in a vector store. In files, on disk, that the AI can read at the start of the next session.

The pattern has two phases:

Phase 1: Log as you go. During every work session, the AI writes a timestamped entry to a temporary log file. Each entry captures what was done, what was decided, and what's outstanding. This happens in real time, as work is completed — not as a summary at the end.

## 14:30 - Updated email campaign targeting
- Switched Klaviyo segment from "all subscribers" to "engaged 90d"
- Excluded customers who purchased in last 14 days
- Subject line A/B test: "New arrival" vs "Back in stock"
- Decision: Go with 90d engaged segment for all campaigns going forward

## 15:15 - Reviewed Q1 revenue dashboard
- Revenue tracking 8% above target ($127K actual vs $117K target)
- Email channel up 23% — driven by segment change from Jan 15
- Blocker: Shopify webhook for inventory sync still failing
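In shell, the Phase 1 append step can be a one-function helper. This is a minimal sketch: the `log_entry` function name, the example project `shikohin`, and the default `LOG_DIR` are illustrative, not a prescribed interface.

```shell
# log_entry: append a timestamped entry to today's temp log.
# Filename convention: {project}-{operator}-{YYYY-MM-DD}.md
# LOG_DIR and the example project name are assumptions for this sketch.
LOG_DIR="${LOG_DIR:-.claude/temp-logs}"

log_entry() {
  project="$1"; title="$2"; shift 2
  logfile="$LOG_DIR/$project-$(whoami)-$(date +%F).md"
  mkdir -p "$LOG_DIR"
  {
    printf '\n## %s - %s\n' "$(date +%H:%M)" "$title"
    for bullet in "$@"; do printf -- '- %s\n' "$bullet"; done
  } >> "$logfile"   # append-only: fast, no formatting overhead
}

# Example: log a decision the moment it is made
log_entry shikohin "Updated email campaign targeting" \
  "Switched Klaviyo segment to engaged 90d" \
  "Decision: use 90d engaged segment for all campaigns"
```

Because the helper only appends, it never blocks on reading or reorganizing the file — that work is deferred to Phase 2.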

Phase 2: Consolidate later. At the end of the day (or week), a consolidation agent reads all temporary logs and merges them into each project's official session log. The temp logs are archived. The official logs become the canonical record.

Why two phases? Because logging during a work session needs to be frictionless — fast, append-only, no formatting overhead. But the official session log needs to be organized, deduplicated, and structured for future reference. Separating capture from curation lets each do its job: logging stays fast, and the record stays clean.

Why Files Beat Databases

When engineers hear "session persistence," they think databases, vector embeddings, RAG pipelines. These work, but they introduce complexity that creates its own failure modes.

Files have three advantages:

Inspectable. You can open the file and read exactly what the AI will read. No black box. No wondering "does it remember X?" Just open the session log and check. If something is wrong, you edit the file directly.

Portable. Files work on every operating system, with every AI tool, with no infrastructure. No database to maintain, no embeddings to reindex, no vector store to pay for. Move the folder to a new machine and everything comes with it.

Versionable. Files go into git. You get change history, diffs, and the ability to revert. If the AI writes something wrong to a session log, you can see exactly when it happened and roll it back. Try doing that with a vector database.
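What "versionable" looks like in practice, as a sketch — the workspace layout here mirrors the example later in this piece, but the repo name and commit message are illustrative:

```shell
# Version the official session logs with git. Paths are illustrative.
git init -q workspace && cd workspace
mkdir -p "Shikohin Inc"
echo "## 2026-02-23" > "Shikohin Inc/shikohin-session-log.md"
git add . && git -c user.name=op -c user.email=op@example.com \
  commit -qm "Session log: 2026-02-23"

# Later: see exactly when a log line changed...
git log --oneline -- "Shikohin Inc/shikohin-session-log.md"
# ...or roll back a bad consolidation entirely
git checkout HEAD -- "Shikohin Inc/shikohin-session-log.md"
```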


The tradeoff is search performance. A 50-page session log is slower to scan than a vector query. But for business operations — where the relevant context is usually in the last 2-3 sessions — sequential file reading is fast enough.

The Architecture

Here's how the logging system is structured in our workspaces:

.claude/temp-logs/
    shikohin-41fred-2026-02-23.md    # Today's temp log for e-commerce
    lunix-41fred-2026-02-23.md       # Today's temp log for consulting
    personal-41fred-2026-02-23.md    # Today's temp log for personal

Shikohin Inc/
    shikohin-session-log.md          # Official consolidated log

Lunix Leadership/
    lunix-session-log.md             # Official consolidated log

Personal/
    personal-session-log.md          # Official consolidated log

Each temp log is named with the project prefix, the operator's username, and the date. Multiple operators can work in the same workspace without collision. Multiple projects log independently.

The consolidation agent reads temp logs, groups entries by project, deduplicates overlapping entries, and appends them to each project's official session log with a date header. Temp logs are moved to an archive folder.
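A minimal shell sketch of that consolidation step. The deduplication and per-project grouping that an AI agent would do is omitted, and for brevity the official log is written to the current directory rather than into each project's folder:

```shell
# consolidate: merge temp logs into official session logs, then archive them.
# Sketch only -- dedup is omitted, and official logs land in the current
# directory instead of the per-project folders shown above.
TEMP=".claude/temp-logs"
ARCHIVE="$TEMP/archive"

consolidate() {
  mkdir -p "$ARCHIVE"
  for log in "$TEMP"/*.md; do
    [ -f "$log" ] || continue              # nothing to consolidate
    base=$(basename "$log" .md)            # e.g. shikohin-41fred-2026-02-23
    project=${base%%-*}                    # project prefix before first dash
    day=$(printf '%s' "$base" | grep -oE '[0-9]{4}-[0-9]{2}-[0-9]{2}')
    { printf '\n# %s\n' "$day"; cat "$log"; } >> "$project-session-log.md"
    mv "$log" "$ARCHIVE/"                  # archive, never delete
  done
}
```

Run `consolidate` at the end of the day; the filename convention does the grouping, since the project prefix and date can both be recovered from the name alone.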

Two-phase logging: fast capture during work (temp-logs/{project}-{date}.md), organized consolidation later ({project}-session-log.md)

What Gets Logged

Not everything deserves a log entry. The rule is: log what a future session would need to know.

Things that get logged:

  • Decisions made and the reasoning behind them
  • Tasks completed with enough detail to verify later
  • Blockers encountered and their current status
  • Configuration changes (settings, integrations, credentials)
  • Metrics reviewed and their takeaways

Things that don't get logged:

  • Exploratory conversations that didn't lead to decisions
  • Trivial fixes (typos, formatting)
  • Research that was inconclusive

The signal-to-noise ratio matters. A session log bloated with irrelevant entries is worse than a sparse one, because it wastes context window space when the AI reads it in the next session.

The logging test: Before writing a log entry, ask: "Would a future session need to know this?" If the answer is no, skip it. Sparse, decision-focused logs outperform verbose ones every time.

The Continuity Effect

Once session logging is in place, something shifts. The AI stops feeling like a new hire every morning and starts feeling like a colleague who was in the meeting yesterday.

When you start a session, the AI reads the project's session log and immediately knows:

  • What you worked on last time
  • What decisions were made and why
  • What's still outstanding
  • What blockers need attention

You don't explain any of this. It's just there, in the files, loaded automatically. The conversation starts at "what should we focus on today?" instead of "let me tell you about my business."



Implementation Checklist

If you're building an AI workspace that needs session persistence, here's the minimum viable implementation:

  1. Define the temp log path convention. Include project name, operator username, and date in the filename. This prevents collisions and makes consolidation straightforward.
  2. Add logging instructions to your config file. The AI needs to know when, where, and how to log. Be explicit about the format (timestamp + bullet points) and the criteria (decisions, completions, blockers).
  3. Build the consolidation step. This can be as simple as a script that concatenates temp logs into official logs, or as sophisticated as an AI agent that deduplicates and organizes entries.
  4. Load session logs at session start. Configure the AI to read the relevant project's session log before responding to anything. This is the step that creates continuity.
  5. Archive processed temp logs. Don't delete them — move them to an archive folder. You'll want them for debugging when something goes wrong with consolidation.

The entire system is plain text files and shell scripts. No infrastructure. No dependencies. No vendor lock-in. The AI forgets between sessions because it has nowhere to remember. Give it somewhere, and the forgetting stops.