5 comments

  • Real_Egor
    10 minutes ago
    I recommend installing Google's Antigravity and digging into its temp files in the user folder. You'll find some interesting ideas on how to organize memory there (the memory structure consists of: Brain / Conversation / Implicits / Knowledge items / Artifacts / Annotations / etc.).

    I'd also add that memory is best organized when it's "directed" (purpose-driven). You've already started asking questions where the answers become the memories (at least, you mention this in your description). So, it's really helpful to also define the structure of the answer, or a sequence of questions that lead to a specific conclusion. That way, the memories will be useful instead of turning into chaos.
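
    To make that concrete, here's a rough sketch of what a "directed" memory could look like (the field names are invented for illustration, not taken from any real tool):

        # A "directed" memory probe: the question fixes the shape of the
        # answer, so what gets stored stays structured instead of free-form.
        memory_probe = {
            "question": "What stack does the user prefer for new services?",
            "answer_schema": {"language": str, "framework": str, "last_confirmed": str},
            "follow_ups": [
                "Has this preference changed in recent sessions?",
                "Is there a project where they deviated, and why?",
            ],
        }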

    • marciopuga
      5 minutes ago
      That is an awesome lead! I'll explore how Antigravity organizes its memory. Thanks for that.
  • rodspeed
    1 hour ago
    I've been building persistent memory for Claude Code too, though with a narrower focus: the AI's model of the user specifically. Different goal, but I kept hitting what I think is a universal problem with long-lived memory: not all stored information is equally reliable, and nothing degrades gracefully.

    An observation from 30 sessions ago and a guess from one offhand remark just sit at the same level. So I started tagging beliefs with confidence scores and timestamps, and decaying ones that haven't been reinforced. The most useful piece ended up being a contradictions log where conflicting observations both stay on the record. Default status: unresolved.
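
    Roughly the shape of it, as a sketch (the names and the decay curve are illustrative, not my actual code):

        import time
        from dataclasses import dataclass

        HALF_LIFE = 30 * 86400  # unreinforced beliefs halve in confidence every ~30 days

        @dataclass
        class Belief:
            claim: str
            confidence: float   # 0..1, set when the belief is written
            observed_at: float  # unix timestamp of the last reinforcement

            def current_confidence(self, now=None):
                # exponential decay since the belief was last reinforced
                now = time.time() if now is None else now
                return self.confidence * 0.5 ** ((now - self.observed_at) / HALF_LIFE)

            def reinforce(self, boost=0.1):
                self.confidence = min(1.0, self.confidence + boost)
                self.observed_at = time.time()

        @dataclass
        class Contradiction:
            a: Belief
            b: Belief
            status: str = "unresolved"  # both sides stay on record until resolved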

    Tiered loading is smart for retrieval. Curious if you've thought about the confidence problem on top of it, like when something in warm memory goes stale or conflicts with something newer.

    • marciopuga
      43 minutes ago
      This is a really good observation and honestly one of the hardest problems I've hit too.

      Cog doesn't use confidence scores (yet — you're making me think about it), but the nightly pipeline is basically a proxy for the same thing. The /reflect pass runs twice a day and does consistency sweeps — it reads canonical files and checks that every referencing file still agrees. When facts drift (and they do, constantly), it catches and fixes them. The reinforcement signal is implicit: things that keep coming up in conversations get promoted to hot memory, things that go quiet eventually get archived to "glacier" (cold storage, still retrievable but not loaded by default).
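
      Conceptually the promotion/demotion step looks something like this (a paraphrased sketch, not the real pipeline code; the thresholds are made up):

          import time

          HOT_MENTIONS = 3          # mentions needed to stay in hot memory (made up)
          GLACIER_AGE = 90 * 86400  # untouched this long -> cold storage (made up)

          def retier(entry, now=None):
              """Pick a tier for a memory file during a consistency sweep."""
              now = time.time() if now is None else now
              if entry["recent_mentions"] >= HOT_MENTIONS:
                  return "hot"      # loaded into context by default
              if now - entry["last_touched"] > GLACIER_AGE:
                  return "glacier"  # archived: retrievable, but not loaded by default
              return "warm"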

      The closest thing to your contradictions log is probably the observations layer — raw timestamped events that never get edited or deleted. Threads (synthesis files) get rewritten freely, but the observations underneath are append-only. So when the AI's understanding changes, the old observations are still there as a paper trail.
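
      In code terms the invariant is just "observations get appended, threads get rewritten"; something like this (illustrative, not the actual file layout):

          import json, time
          from pathlib import Path

          OBS_LOG = Path("memory/observations.jsonl")  # hypothetical path

          def record_observation(event):
              # append-only: observations are never edited or deleted
              with OBS_LOG.open("a") as f:
                  f.write(json.dumps({"ts": time.time(), "event": event}) + "\n")

          def rewrite_thread(path, synthesis):
              # threads are derived views and can be rewritten freely; the raw
              # observations underneath remain as the paper trail
              path.write_text(synthesis)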

      Where I think you're ahead is making confidence explicit. My system handles staleness through freshness (timestamps, "as of" dates on entities, pipeline frequency) but doesn't distinguish between "I'm very sure about this" and "I inferred this once." That's a real gap. Would love to see what you're building — is it public?

    • samrus
      54 minutes ago
      This is really interesting. At this point you seem to be modelling real human memory.

      In my opinion, this should happen inside the LLM directly. Trying to scaffold it on top of the next-token predictor isn't going to be fruitful enough. It won't get us the robot butlers we need.

      But obviously that's really hard. That needs proper ML research, not prompt engineering.

      • rodspeed
        50 minutes ago
        You're probably right long term. If LLMs eventually handle memory natively with confidence and decay built in, scaffolding like this becomes unnecessary. But right now they don't, and the gap between "stores everything flat" and "models you with any epistemological rigor" is pretty wide. This is a patch for the meantime.

        The other thing is that even if the model handles memory internally, you probably still want the beliefs to be inspectable and editable by the user. A hidden internal model of who you are is exactly the problem I was trying to solve. Transparency might need to stay in the scaffold layer regardless.

  • CharlesW
    1 hour ago
    How is this different and/or more interesting than Superpowers' episodic-memory skill¹ or Anthropic's Auto Dream²?

    ¹ https://github.com/obra/episodic-memory
    ² https://claudefa.st/blog/guide/mechanics/auto-dream

    • marciopuga
      1 hour ago
      the biggest difference would be the /foresight
  • kixiQu
    1 hour ago
    I like the idea of various extensions of LLM context using transparent plaintext, automatic consolidation and summarization... but I just can't read this LLM-generated text documenting it. The style is so painful. If someone ends up finding this tooling useful I hope they write it up and I hear about it again!