Microgpt explained interactively

(growingswe.com)

146 points | by growingswe 13 hours ago

8 comments

  • politelemon
    2 hours ago
    > By the end of training, the model produces names like "kamon", "karai", "anna", and "anton". None of them are copies from the dataset.

    Hey, I am able to see kamon, karai, anna, and anton in the dataset, it'd be worth using some other names: https://raw.githubusercontent.com/karpathy/makemore/988aa59/...

    • ayhanfuat
      2 hours ago
      You are absolutely right. The whole post reads like AI generated.
      • jsheard
        1 hour ago
        The rate at which they are posting new articles on random subjects is also pretty indicative of a content mill.

        In 3 days they've covered machine learning, geometry, cryptography, file formats and directory services.

      • re
        1 hour ago
        I didn't get that sense from the prose; it didn't have the usual LLM hallmarks to me, though I'm not enough of an expert in the space to pick up on inaccuracies/hallucinations.

        The "TRAINING" visualization does seem synthetic though, the graph is a bit too "perfect" and it's odd that the generated names don't update for every step.

      • butterisgood
        1 hour ago
        ISWYDT
    • growingswe
      1 hour ago
      Thanks, will fix
  • malnourish
    1 hour ago
    I read through this entire article. There was some value in it, but I found it to be very "draw the rest of the owl". It read like introductions to conceptual elements or even proper segues had been edited out. That said, I appreciated the interactive components.
    • davidw
      40 minutes ago
      It started off nicely but before long you get

      "The MLP (multilayer perceptron) is a two-layer feed-forward network: project up to 64 dimensions, apply ReLU (zero out negatives), project back to 16"

      Which starts to feel pretty owly indeed.

      I think the whole thing could be expanded to cover some more of it in greater depth.
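      For what it's worth, the quoted MLP step is concrete enough to sketch. Here is a minimal NumPy version with random stand-in weights (not trained values); the dimensions 16 and 64 come from the quote:

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      d_model, d_hidden = 16, 64

      # Random stand-in weights; a real model learns these during training.
      W1 = rng.standard_normal((d_model, d_hidden)) * 0.02  # project up: 16 -> 64
      W2 = rng.standard_normal((d_hidden, d_model)) * 0.02  # project back: 64 -> 16

      def mlp(x):
          h = x @ W1               # (16,) -> (64,)
          h = np.maximum(h, 0.0)   # ReLU: zero out negatives
          return h @ W2            # (64,) -> (16,)

      x = rng.standard_normal(d_model)
      y = mlp(x)
      print(y.shape)  # (16,)
      ```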

  • jmkd
    43 minutes ago
    It says it's tailored for beginners, but I don't know what kind of beginner can parse multiple paragraphs like this:

    "How wrong was the prediction? We need a single number that captures "the model thought the correct answer was unlikely." If the model assigns probability 0.9 to the correct next token, the loss is low (0.1). If it assigns probability 0.01, the loss is high (4.6). The formula is −log(p), where p is the probability the model assigned to the correct token. This is called cross-entropy loss."
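    The numbers in that quoted paragraph do check out; a two-line calculation reproduces them:

    ```python
    import math

    # Cross-entropy loss for a single prediction: -log(p), where p is
    # the probability the model assigned to the correct next token.
    def cross_entropy(p):
        return -math.log(p)

    print(round(cross_entropy(0.9), 3))   # ~0.105, the "low" loss in the quote
    print(round(cross_entropy(0.01), 3))  # ~4.605, the "high" loss
    ```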

  • love2read
    28 minutes ago
    Is it becoming a thing to misspell and add grammatical mistakes on purpose to show that an LLM didn't write the blog post? I noticed several spelling mistakes in Karpathy's blog post that this article is based on and in this article.
    • klysm
      21 minutes ago
      I expect this kind of counter signaling to become more common in the coming years.
    • efilife
      6 minutes ago
      You just started to notice it
  • grey-area
    44 minutes ago
    The original article from Karpathy: https://karpathy.github.io/2026/02/12/microgpt/
  • windowshopping
    1 hour ago
    The part that eludes me is how you get from this to the capability to debug arbitrary coding problems. How does statistical inference become reasoning?

    For a long time, it seemed the answer was it doesn't. But now, using Claude code daily, it seems it does.

    • ferris-booler
      30 minutes ago
      IMO your question is the largest unknown in the ML research field (neural net interpretability is a related area), but the most basic explanation is "if we can always accurately guess the next 'correct' word, then we will always answer questions correctly".

      An enormous amount of research+eng work (most of the work of frontier labs) is being poured into making that 'correct' modifier happen, rather than just predicting the next token from 'the internet' (naive original training corpus). This work takes the form of improved training data (e.g. expert annotations), human-feedback finetuning (e.g. RLHF), and most recently reinforcement learning (e.g. RLVR, meaning RL with verifiable rewards), where the model is trained to find the correct answer to a problem without 'token-level guidance'. RL for LLMs is a very hot research area and very tricky to solve correctly.

    • fc417fc802
      33 minutes ago
      Because it's not statistical inference on words or characters but rather stacked layers of statistical inference on ~arbitrarily complex semantic concepts which is then performed recursively.
      • love2read
        4 minutes ago
        This answer makes sense if you know that LLMs have layers; if you don't, this answer is not super informative.

        If I were to describe this to a nontechnical person, I would say:

        LLMs are big stacks of layers of "understanders" that each teach the next guy something.

        Imagine you are making a large language model that has 4 layers. Each layer will talk to its immediate neighbor.

        The first layer will get the bare minimum; in the LLMs of today, that's groups of letters that commonly appear together, called "tokens". This layer will try to derive a bit of meaning to tell the next layer, such as grouping letters into words.

        The next layer may be a little bit more semantic, for example interpreting that the word "hot" immediately followed by the word "dog" maps to a phrase "hot dog".

        The layer after that, becoming a bit more intelligent given its predecessors have already had some chances at smaller interpretations, may now try to group words into bigger blobs, such as "i want a hot dog" as one combined phrase rather than a set of separated concepts.

        The final layer may do something even more intelligent afterward, like realize that this is a quote in a book.

        The point is that each layer tries to add a little meaning for the next layer.

        I want to stress this: the layers do not actually correspond to specific concepts the way I just expressed, the point is that each layer adds a bit more "semantic meaning" for the next layer.
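        The "each layer adds a little" shape of the computation can be sketched in a few lines. This is a toy with random stand-in weights, not a real transformer; the point is only the structure, where each layer applies a small learned update to the running representation:

        ```python
        import numpy as np

        rng = np.random.default_rng(0)
        d, n_layers = 16, 4

        # Random stand-in weights; in a trained LLM each layer's update
        # encodes whatever "meaning" training has pushed into it.
        layers = [rng.standard_normal((d, d)) * 0.05 for _ in range(n_layers)]

        x = rng.standard_normal(d)      # the first layer's raw input vector
        for W in layers:
            x = x + np.tanh(x @ W)      # each layer adds a small refinement
        print(x.shape)  # (16,)
        ```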

  • ChrisArchitect
    39 minutes ago
  • nimbus-hn-test
    2 hours ago
    [dead]