Building a coding agent in Swift from scratch

(github.com)

79 points | by vanyaland 17 hours ago

12 comments

  • mark_l_watson
    15 hours ago
    I think this is a good learning project, based on a long perusal of the GitHub repo. One suggestion: don’t call the CLI component of the project ‘claude’ - that seems like asking for legal takedown problems.
    • vanyaland
      13 hours ago
      Good point, I'll rename the binary. Thanks for actually going through the repo.
  • dostick
    2 hours ago
    It’s not quite clear which part this project is - there’s no single “Claude Code” program. There’s the TUI/GUI app, the harness, the prompts, and the LLM. So is this the harness part?
  • bensyverson
    13 hours ago
    I built a Swift library called Operator [0] to run the core agent loop, if it would save anyone time.

    [0]: https://github.com/bensyverson/Operator

  • maxbeech
    13 hours ago
    The interesting design tension I ran into building in this space is context management for longer sessions. The model accumulates tool call history that degrades output quality well before you hit the hard context limit - you start seeing "let me check that again" loops and increasingly hedged tool selection.

    A few things that helped: (1) summarizing completed sub-task outputs into a compact working-memory block that replaces the full tool call history, (2) being aggressive about dropping intermediate file read results once the relevant information has been extracted, and (3) structuring the initial system prompt so the model has a clear mental model of what "done" looks like before it starts exploring.

    The Swift angle is actually a nice fit - the structured concurrency model maps well to the agent loop, and the strong type system makes tool schema definition less error-prone than the JSON string wrangling in most other languages.
    • vanyaland
      13 hours ago
      Yeah, this is basically what I ran into too. I actually wrote about this in Stage 6 (https://ivanmagda.dev/posts/s06-context-compaction/). I went with your option (1): once history crosses a token threshold, the agent asks the model to summarize everything so far, then swaps the full history for that summary. Keeps the context window clean, though you do lose the ability to go back and reference exact earlier tool outputs.

      The hard part was picking when to trigger it. Too early and you're throwing away useful context. Too late and the model's already struggling. I ended up just using a simple token count — nothing clever, but it works.
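
      A minimal sketch of that threshold-triggered swap (all names here are hypothetical, not the post's actual API):

      ```swift
      // Sketch of summarize-and-swap compaction. `Message`, `estimateTokens`,
      // and the `summarize` closure are illustrative names only.
      struct Message {
          let role: String
          let content: String
      }

      // Crude token estimate: roughly 4 characters per token.
      func estimateTokens(_ history: [Message]) -> Int {
          history.reduce(0) { $0 + $1.content.count / 4 }
      }

      // Once history crosses the threshold, replace it with a single summary
      // message (a real agent would ask the model to write the summary).
      func compactIfNeeded(
          _ history: [Message],
          threshold: Int,
          summarize: ([Message]) -> String
      ) -> [Message] {
          guard estimateTokens(history) > threshold else { return history }
          return [Message(role: "system", content: summarize(history))]
      }
      ```

      The mechanics are small; as noted above, picking the trigger point is the actual judgment call.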

      And yeah, the Swift angle was genuinely fun. Defining tool schemas as Codable structs that auto-generate JSON schemas, and getting compile-time errors instead of runtime API failures, is a huge win.
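
      A rough illustration of the idea (a sketch, not the post's implementation - this derives the schema at runtime via `Mirror` reflection rather than at compile time, and `ReadFileArgs`/`roughSchema` are made-up names):

      ```swift
      // Hypothetical tool-input type: the compiler guarantees every call site
      // constructs valid arguments, and decoding a malformed tool call fails
      // at the boundary instead of deep inside the agent loop.
      struct ReadFileArgs: Codable {
          let path: String
          let maxBytes: Int?
      }

      // Derive a crude JSON-schema property map from a value's fields via
      // reflection; a real implementation would map types more carefully.
      func roughSchema<T: Codable>(of value: T) -> [String: String] {
          var props: [String: String] = [:]
          for child in Mirror(reflecting: value).children {
              guard let label = child.label else { continue }
              switch child.value {
              case is String: props[label] = "string"
              case is Int: props[label] = "integer"
              default: props[label] = "object"
              }
          }
          return props
      }
      ```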

  • faangguyindia
    5 hours ago
    I built my agent in Python since the agent is a CLI.

    I used Python + Rich, but window resizes wreck the UI layout.

    That isn't an issue with the Node.js-based stuff.

  • brumbelow
    12 hours ago
    This is a cool idea. The stage-by-stage build makes the failure modes legible: first the loop, then tool dispatch, then persistence, then subagents/skills/compaction. A nice reminder that most of the magic is in state management and control flow.
    • steve_adams_86
      9 hours ago
      I wouldn't say most of the magic is there, but I do think a lot of the progress we've seen in the last few years has been external to the models, and people sometimes miss that. For example, Claude Code has improved by leaps and bounds because the tooling has improved so much, from what I can see. But the underlying model is still what makes this relatively simple tooling so useful.
      • vanyaland
        8 hours ago
        Agreed. That's the core hypothesis behind this learning project — the model is the magic, and the agent loop is just a thin, transparent wrapper around it. The goal of building it stage by stage was to prove you don't need a massive, complex framework to get good agentic behavior.
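
        The loop-then-dispatch shape described upthread can be sketched in a few lines (illustrative names only, not the project's actual API):

        ```swift
        // Minimal agent loop: the model proposes either a tool call or a
        // final answer; the harness dispatches the tool and feeds the
        // result back into the history.
        enum ModelStep {
            case toolCall(name: String, input: String)
            case finalAnswer(String)
        }

        func runAgent(
            model: ([String]) -> ModelStep,
            tools: [String: (String) -> String],
            maxTurns: Int = 10
        ) -> String {
            var history: [String] = []
            for _ in 0..<maxTurns {
                switch model(history) {
                case .finalAnswer(let text):
                    return text
                case .toolCall(let name, let input):
                    let result = tools[name]?(input) ?? "error: unknown tool \(name)"
                    history.append("\(name)(\(input)) -> \(result)")
                }
            }
            return "error: turn limit reached"
        }
        ```

        Persistence, subagents, and compaction all hang off a loop like this, which is the sense in which the value is in state management and control flow.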
  • nhubbard
    14 hours ago
    Practically speaking, how could we drop in Apple Intelligence once it's using Gemini as its core, for a 100% local AI agent in a box?
    • NitpickLawyer
      14 hours ago
      IIUC Gemini will run in Apple's cloud infra, not on device. The only "gemini" local model is really old by today's standards, and is not that smart for local inference (newer open source models are better).
      • nhubbard
        14 hours ago
        That's what I figured. Someday it will be possible. Until then, LM Studio or Ollama are the only potential hookups.

        I've got some ideas inspired by this project. It's promising.

  • lm2s
    13 hours ago
    Interesting, I'm also building one in Swift :D Seems like a good learning experience.
    • podlp
      5 hours ago
      I’m also working on agents in Swift with the AFM; just having it already installed locally is a huge selling point. I think narrowly focused agents with good tooling and architecture could accomplish quite a bit, with tradeoffs in speed and cost. But I’m under the assumption that local models (like frontier models) will only get better with time.
    • zingar
      7 hours ago
      What is the appeal of swift for this project? Is it just what you know?
  • felixagentai
    11 hours ago
    [flagged]
    • rob
      11 hours ago
      Maybe this is the agent OP built!