83 comments

  • wilsmex
    12 hours ago
    Well this was interesting. As someone who was actually building similar websites in the late '90s, I threw this into Opus 4.5. Note that the original author is wrong about the original site, however:

    "The Space Jam website is simple: a single HTML page, absolute positioning for every element, and a tiling starfield GIF background.".

    This is not true: the site is built using tables, not positioning at all. CSS wasn't a thing back then...

    Here was its one-shot attempt at building the same type of layout (table based) with a screenshot and assets as input: https://i.imgur.com/fhdOLwP.png

    • thecr0w
      11 hours ago
      Thanks, my friend. I added a strikethrough of the error, a correction, and credited you.

      I'm keeping it in for now because people have made some good jokes about the mistake in the comments and I want to keep that context.

      • wilsmex
        5 hours ago
        You bet. Fun post and writeup; it took me a bit down memory lane. I built several sites with nested table-based layouts, 1x1 transparent GIF files set to strange widths to force layouts to certain sizes, little tricks with repeating gradient backgrounds for fancy 'beveled' effects. Under-construction GIFs, page counters, GUESTBOOKS!, Photoshop drop-shadows on everything. All the things, fond times. One or two I haven't touched in 20 years, but I keep them online for my own time-capsule memory :)
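
        For anyone who never had the pleasure, the spacer trick looked roughly like this (a sketch from memory, so the exact attributes may be off):

          <!-- spacer.gif: a 1x1 transparent GIF; the width attribute stretches it -->
          <table width="600" border="0" cellpadding="0" cellspacing="0">
            <tr>
              <td><img src="spacer.gif" width="180" height="1"></td>
              <td><img src="spacer.gif" width="420" height="1"></td>
            </tr>
            <tr>
              <td valign="top">nav went here</td>
              <td valign="top">content went here</td>
            </tr>
          </table>

        The invisible row forces the column widths, since table cells otherwise collapse to fit whatever is in them.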
        • testbjjl
          4 hours ago
          “Photoshop drop-shadows on everything.” I just time traveled for a few seconds there. Thank you for this comment.
    • govping
      4 hours ago
      The failure mode here (Claude trying to satisfy rather than saying 'this is impossible with the constraints') shows up everywhere. We use it for security research - it'll keep trying to find exploits even when none exist rather than admit defeat. The key is building external validation (does the POC actually work?) rather than trusting the LLM's confidence.
      • yard2010
        2 hours ago
        Ah! I see the problem now! AI can't see shit; it's a statistical model, not some form of human. It uses words, so like humans, it can say whatever shit it wants and it's true until you find out.

        The number one rule of the internet is don't believe anything you read. This rule was lost in history unfortunately.

        • falcor84
          19 minutes ago
          When reasoning about sufficiently complex mechanisms, you benefit from adopting the Intentional Stance regardless of whether the thing on the other side is "some form of human". For example, when I'm planning a competitive strategy, I'm reasoning about how $OTHER_FIRM might respond to my pricing changes, without caring whether there's a particular mental process on the other side.
    • manbash
      11 hours ago
      Ah, those days, where you would slice your designs and export them to tables.
      • chrisweekly
        11 hours ago
        I remember building really complex layouts w nested tables, and learning the hard way that going beyond 6 levels of nesting caused serious rendering performance problems in Netscape.
        • JimDabell
          8 hours ago
          I remember seeing a co-worker stuck on trying to debug Netscape showing a blank page. When I looked at it, it wasn’t showing a blank page per se, it was just taking over a minute to render tables nested twelve deep. I deleted exactly half of them with no change to the layout or functionality, and it immediately started rendering in under a second.
        • shomp
          9 hours ago
          Six nesting levels for tables? Cool, what were you making?
          • chrisweekly
            7 hours ago
            Upromise.com -- a service for helping families save $ for college. Those layouts, which I painstakingly hand-crafted in HTML, caused the CTO to say "I didn't know you could do that with HTML", and were served to the company's first 10M customers.
          • chimeracoder
            5 hours ago
            > Six nesting levels for tables?

            Hacker News uses nested tables for comments. This comment that you're reading right now is rendered within a table that has three ancestor tables.

            As late as 2016 (possibly even later), they did so in a way that resulted in really tiny text when reading comments on mobile devices in threads that were more than five or so layers deep. That isn't the case anymore - it might be because HN updated the way it generates the HTML, though it could also be that browser vendors updated their logic for rendering nested tables as well. I know that it was a known problem amongst browser developers, because most uses for nested tables were very different than what HN was (is?) using them for, so making text inside deeply nested tables smaller was generally a desirable feature... just not in the context of Hacker News.

      • reconnecting
        4 hours ago
        Why not! We did this in 2024 for our website (1) to have zero CSS.

        Still works; only Claude cannot understand what those tables mean.

        1. https://www.tirreno.com

        • lewiscollard
          3 hours ago
          That's a fun trick, but please consider adding ARIA roles (e.g. role="presentation" to <table>, role="heading" aria-level="[number]" to the <font> elements used for headings) to make your site understandable by screen readers.
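
          Something roughly like this (a sketch against generic markup, not your actual source):

            <table role="presentation">
              <tr>
                <td>
                  <!-- ARIA attributes are global, so they work even on <font> -->
                  <font size="6" role="heading" aria-level="1">Welcome</font>
                </td>
              </tr>
            </table>

          role="presentation" tells screen readers to skip the table semantics and just read the contents in document order.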
        • danielbarla
          3 hours ago
          > Why not!

          Responsive layout would be the biggest reason (mobile for one, but also a wider range of PC monitor aspect ratios these days than the 4:3 that was standard back then), probably followed by conflating the exact layout details with the content, and a separation of concerns / ease of being able to move things around.

          I mean, it's a perfectly viable thing if these are not requirements and preferences that you and your system have. But it's pretty rare these days that an app or site can say "yeah, none of those matter to me the least bit".

      • thecr0w
        11 hours ago
        I learned recently that this is still how a lot of email HTML gets generated.
        • mananaysiempre
          9 hours ago
          Apparently Outlook (the actual one, not the recent pretender) still uses some ancient WordHTML version as the renderer, so there isn’t much choice.
          • masklinn
            3 hours ago
            Fun fact: until Office 2007, outlook used IE’s engine for rendering html.
        • ricardonunez
          10 hours ago
          Oh yeah, recently I had to update a newsletter design like that, and older versions of Outlook still didn't render it properly.
      • gregoryl
        11 hours ago
        Gosh, there was a website where you'd submit a PSD + payment, and they'd spit out a sliced design. Initially tables, later CSS. Lifesaver.
      • jweir
        7 hours ago
        And use a single-px invisible gif to move things around.

        But was Space Jam using multiple images, or just one large image with an image map for links?
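
        If it was one big image, the markup would have been something along these lines (hypothetical file names, not the actual source):

          <!-- usemap points at the map; each area maps a region of the image to a link -->
          <img src="planets.gif" usemap="#nav" border="0">
          <map name="nav">
            <area shape="circle" coords="120,80,40" href="jam.htm">
            <area shape="rect" coords="300,200,380,260" href="bios.htm">
          </map>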

        • bot403
          6 hours ago
          The author said he had the assets and gave them to Claude. It would be obvious if he had one large image for all the planets instead of individual ones.
      • mmanfrin
        7 hours ago
        I yearn for those days. CSS was a mistake. Tables and DHTML is all one needs.
        • thomasz
          3 hours ago
          You jest, but it took forever to add a somewhat intuitive layout mechanism to CSS that allowed you to do what could be done easily with HTML tables. Vertically centering a div inside another was really hard, and very few people understood the techniques they were using instead of blindly copying them.

          It was beyond irony that the recommended solution was to tell the browser to render your divs as a table.
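
          Something like this, if memory serves (a sketch of the pre-flexbox workaround, with made-up class names):

            <style>
              .outer { display: table; height: 300px; width: 100%; }
              .inner { display: table-cell; vertical-align: middle; }
            </style>
            <div class="outer">
              <div class="inner">finally, vertically centered</div>
            </div>

          All that ceremony to get back what <td valign="middle"> gave you for free.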

        • bluSCALE4
          6 hours ago
          CSS was a mistake? JavaScript was a mistake, specifically JavaScript frameworks.
          • tobyjsullivan
            6 hours ago
            JavaScript? HTML and HTTP were the real mistakes.
            • someguyiguess
              5 hours ago
              HTML and HTTP? TCP was the real mistake.
              • insaider
                5 hours ago
                "In the beginning the universe was created. This made a lot of people angry and has widely been considered as a bad move."
    • johnebgd
      11 hours ago
      I cut my teeth developing for the web using GoLive and will never forget how that tool used tables to lay out a page…
  • thuttinger
    15 hours ago
    Claude/LLMs in general are still pretty bad at the intricate details of layouts and visual things. There are a lot of problems that are easy for a junior web dev to get right but impossible for an LLM. On the other hand, I was able to write a C program that added gamma color profile support to Linux compositors that don't support it (in my case Hyprland) within a few minutes! A seemingly hard task (for me, at least) that would have taken me a day or more if I hadn't let Claude write the code. With one prompt, Claude generated C code that compiled on the first try and that:

    - read an .icc file from disk

    - parsed the file and extracted the VCGT (video card gamma table)

    - wrote the VCGT to the video card for a specified display via amdgpu driver APIs

    The only thing I had to fix was the ICC parsing, where it would parse header strings in the wrong byte-order (they are big-endian).

    • jacquesm
      13 hours ago
      Claude didn't write that code. Someone else did and Claude took that code without credit to the original author(s), adapted it to your use case and then presented it as its own creation to you and you accepted this. If a human did this we probably would have a word for them.
      • mlinsey
        12 hours ago
        Certainly if a human wrote code that solved this problem, and a second human copied and tweaked it slightly for their use case, we would have a word for them.

        Would we use the same word if two different humans wrote code that solved two different problems, but one part of each problem was somewhat analogous to a different aspect of a third human's problem, and the third human took inspiration from those parts of both solutions to create code that solved a third problem?

        What if it were ten different humans writing ten different-but-related pieces of code, and an eleventh human piecing them together? What if it were 1,000 different humans?

        I think "plagiarism", "inspiration", and just "learning from" fall on some continuous spectrum. There are clear differences when you zoom out, but they are in degree, and it's hard to set a hard boundary. The key is just to make sure we have laws and norms that provide sufficient incentive for new ideas to continue to be created.

        • nitwit005
          10 hours ago
          Ask for something like "a first person shooter using software rendering", then search GitHub for the names of the rendering functions. Using Copilot I found code simply lifted from implementations of Doom, except that "int" was replaced with "int32_t" and similar.

          It's also fun to tell Copilot that the code will violate a license. It will seemingly always tell you it's fine. Safe legal advice.

          • martin-t
            9 hours ago
            And this is just the stuff you notice.

            1) Verbatim copying is first-order plagiarism.

            2a) Second-order plagiarism of written text would be replacing words with synonyms. Or taking a book paragraph by paragraph and for each one of them, rephrasing it in your own words. Yes, it might fool automated checkers but the structure would still be a copy of the original book. And most importantly, it would not contain any new information. No new positive-sum work was done. It would have no additional value.

            Before LLMs almost nobody did this because the chance that it would help in a lawsuit vs the amount of work was not a good tradeoff. Now it is. But LLMs can do "better":

            2b) A different kind of second-order plagiarism is using multiple sources and plagiarizing each of them only in part. Find multiple books on the same topic, take 1 chapter from each and order them in a coherent manner. Make it more granular. Find paragraphs or phrases which fit into the structure of your new book but are verbatim from other books. See how granular you can make it.

            The trick here is that doing this by hand is more work than just writing your own book. So nobody did it and copyright law does not really address this well. But with LLMs, it can be automated. You can literally instruct an LLM to do this and it will do it cheaper than any human could. However, how LLMs work internally is different yet again:

            n) Higher-order plagiarism is taking multiple source books, identifying patterns, and then reproducing them in your "new" book.

            If the patterns are sufficiently complex, nobody will ever be able to prove what specifically you did. What previously took creative human work now became a mechanical transformation of input data.

            The point is this ability to detect and reproduce patterns is an impressive innovation but it's built on top of the work of hundreds of millions[0] of humans whose work was used without consent. The work done by those employed by the LLM companies is minuscule compared to that. Yet all of the reward goes to them.

            Not to mention LLMs completely defeat the purpose of the (A)GPL. If you can take AGPL code and pass it through a sufficiently complex mechanical transformation such that the output does the same thing but copyright no longer applies, then free software is dead. No more freedom to inspect and modify.

            [0]: Github alone has 100 million users ( https://expandedramblings.com/index.php/github-statistics/ ) and we have reason to believe all of their data was used in training.

            • jacquesm
              9 hours ago
              If a human did 2a or 2b we would consider that a larger infraction than (1) because it shows intent to obfuscate the origins.

              As for your "free software is dead" argument: I think it is worse than that. It takes away the one payment that free software authors get: recognition. If a commercial entity can take the code, obfuscate it, and pass it off as their own copyrighted work to then embrace and extend it, that is the worst possible outcome.

              • martin-t
                8 hours ago
                > shows intent to obfuscate the origins

                Good point. Reminds me of how if you poison one person, you go to prison, but when a company poisons thousands, it gets a fine... sometimes.

                > it takes away the one payment that free software authors get: recognition

                I keep flip-flopping on this. I did most of my open source work not caring about recognition but about the principles of GPL and later AGPL. However, I came to realize it was a mistake - people don't judge you by the work you actually do but by the work you appear to do. I have zero respect for people who do something just for the approval of others but I am aware of the necessity of making sure people know your value.

                One thing is certain: credit/recognition affects all open source code; user rights (e.g. to inspect and modify) affect only the subset under (A)GPL.

                Both are bad in their own right.

            • fc417fc802
              1 hour ago
              You make several good points, and I appreciate that they appear well thought out.

              > What previously took creative human work now became a mechanical transformation of input data.

              At which point I find myself wondering if there's actually a problem. If it was previously permitted due to the presence of creative input, why should automating that process change the legal status? What justifies treating human output differently?

              > then free software is dead. No more freedom to inspect and modify.

              It seems to me that depends on the ideological framing. Consider a (still entirely hypothetical) world where anyone can receive approximately any software they wish with little more than a Q&A session with an expert AI agent. Rather than free software being dead, such a scenario would appear to obviate the vast majority of needs that free software sets out to serve in the first place.

              It seems a bit like worrying that free access to a comprehensive public transportation service would kill off a ride sharing service. It probably would, and the end result would also probably be a net benefit to humanity.

          • fransje26
            1 hour ago
            > It's also fun to tell Copilot that the code will violate a license. It will seemingly always tell you it's fine. Safe legal advice.

            Perfectly embodies the AI "startup" mentality. Nice.. /s

        • whatshisface
          12 hours ago
          The key difference between plagiarism and building on someone's work is whether you say "this is based on code by linsey at github.com/socialnorms" or "here, let me write that for you."
          • CognitiveLens
            11 hours ago
            but as mlinsey suggests, what if it's influenced in small, indirect ways by 1000 different people, kind of like the way every 'original' idea from trained professionals is? There's a spectrum, and it's inaccurate to claim that Claude's responses are comparable to adapting one individual's work for another use case - that's not how LLMs operate on open-ended tasks, although they can be instructed to do that and produce reasonable-looking output.

            Programmers are not expected to add an addendum to every file listing all the books, articles, and conversations they've had that have influenced the particular code solution. LLMs are trained on far more sources that influence their code suggestions, but it seems like we actually want a higher standard of attribution because they (arguably) are incapable of original thought.

            • saalweachter
              10 hours ago
              It's not uncommon, in a well-written code base, to see documentation on different functions or algorithms with where they came from.

              This isn't just giving credit; it's valuable documentation.

              If you're later looking at this function and find a bug or want to modify it, the original source might not have the bug, might have already fixed it, or might have additional functionality that is useful when you copy it to a third location that wasn't necessary in the first copy.

              • jacquesm
                2 hours ago
                This is why I'm still, even after decades of seeing it fail in the marketplace, a fan of literate programming.
            • sarchertech
              11 hours ago
              If the problem you ask it to solve has only one or a few examples, or if there are many cases of people copy pasting the solution, LLMs can and will produce code that would be called plagiarism if a human did it.
          • ineedasername
            10 hours ago
            Do you have a source for that being the key difference? Where did you learn your words? I don’t see the names of your teachers cited here. The English language has existed a while; why aren’t you giving a citation every time you use a word that already exists in a lexicon somewhere? We have a name for people who don’t coin their own words for everything and rip off the words that others painstakingly evolved over millennia of history. Find your own graphemes.
            • latexr
              9 hours ago
              What a profoundly bad faith argument. We all understand that singular words are public domain, they belong to everyone. Yet when you arrange them in a specific pattern, of which there are infinite possibilities, you create something unique. When someone copies that arrangement wholesale and claims they were the first, that’s what we refer to as plagiarism.

              https://www.youtube.com/watch?v=K9huNI5sBd8

              • ineedasername
                6 hours ago
                It’s not a bad faith argument. It’s an attempt to shake thinking that is profoundly stuck by taking that thinking to an absurd extreme. Until that’s done, quite a few people aren’t able to see past the assumptions they don’t know they’re making. And by quite a few people I mean everyone, at different times. A strong appreciation for the absurd will keep a person’s thinking much sharper.
              • tscherno
                2 hours ago
                It is possible that the concept of intellectual property could be classified as a mistake of our era by the history teachers of future generations.
                • latexr
                  2 hours ago
                  Intellectual property is a legal concept; plagiarism is ethical. We’re discussing the latter.
              • jacquesm
                9 hours ago
                This particular user does that all the time. It's really tiresome.
                • ineedasername
                  6 hours ago
                  It’s tiresome to see unexamined assumptions and self-contradictions tossed out by a community that can and often does do much better. Some light absurdism often goes further and makes clear that I’m not just trying to set up a strawman, since I’ve already gone and made a parody of my own point.
        • jacquesm
          2 hours ago
          > we have laws and norms that provide sufficient incentive for new ideas to continue to be created

          Indeed, and up until the advent of 'AI' we did. But that incentive is being killed right now and I don't see any viable replacement on the horizon.

        • nextos
          12 hours ago
          In case of LLMs, due to RAG, very often it's not just learning but almost direct real-time plagiarism from concrete sources.
          • doix
            10 hours ago
            Isn't RAG used for your code rather than other people's code? If I ask it to implement some algorithm, I'd be very surprised if RAG was involved.
          • sholain
            10 hours ago
            RAG and LLMs are not the same thing, but 'Agents' incorporate both.

            Maybe we could resolve the OP's bit of a conundrum by requiring 'agents' to give credit for things if they did RAG them or pull them off the web?

            It still doesn't resolve the 'inherent learning' problem.

            It's reasonable to suggest that if 'one person did it, we should give credit' - at least in some cases, and also reasonable that if 1K people have done similar things and the AI learns from that, well, I don't think credit is something that should apply.

            But a couple of considerations:

            - It may not be that common for an LLM to 'see one thing one time' and then have such an accurate assessment of the solution. It helps, but LLMs tend not to 'learn' things that way.

            - Some people might consider this the OSS dream - any code that's public is public and it's in the public domain. We don't need to 'give credit' to someone because they solved something relatively arbitrary - or - if they are concerned with that, then we can have a separate mechanism for that, aka they can put it on Github or Wikipedia even, and then we can worry about 'who thought of it first' as a separate consideration. But in terms of Engineering application, that would be a bit of a detractor.

            • martin-t
              9 hours ago
              > if 1K people have done similar things ad the AI learns from that, well, I don't think credit is something that should apply.

              I think it should.

              Sure, if you make a small amount of money and divide it among the 1000 people who deserve credit due to their work being used to create ("train") the model, it might be too small to bother.

              But if actual AGI is achieved, then it has nearly infinite value. If said AGI is built on top of the work of the 1000 people, then almost infinity divided by 1000 is still a lot of money.

              Of course, the real numbers are way larger; LLMs were trained on the work of at least 100M and perhaps over a billion people. But the value they provide over a long enough timespan is also claimed to be astronomical (evidenced by the valuations of those companies). It's not just their employees who deserve a cut but everyone whose work was used to train them.

              > Some people might consider this the OSS dream

              I see the opposite. Code that was public but protected by copyleft can now be reused in private/proprietary software. All you need to do is push it through enough matmuls and some nonlinearities.

        • geniium
          6 hours ago
          Thanks for writing this - love the way you explain the POV. I wish people would consider this angle more.
      • bsaul
        13 hours ago
        That's an interesting hypothesis: that LLMs are fundamentally unable to produce original code.

        Do you have papers to back this up? That was also my reaction when I saw some really crazy accurate comments on some vibe-coded piece of code, but I couldn't prove it, and thinking about it now I think my intuition was wrong (i.e., LLMs do produce original complex code).

        • jacquesm
          13 hours ago
          We can solve that question in an intuitive way: if human input is not what is driving the output then it would be sufficient to present it with a fraction of the current inputs, say everything up to 1970 and have it generate all of the input data from 1970 onwards as output.

          If that does not work then the moment you introduce AI you cap their capabilities unless humans continue to create original works to feed the AI. The conclusion - to me, at least - is that these pieces of software regurgitate their inputs, they are effectively whitewashing plagiarism, or, alternatively, their ability to generate new content is capped by some arbitrary limit relative to the inputs.

          • measurablefunc
            11 hours ago
            This is known as the data processing inequality. Non-invertible functions cannot create more information than what is available in their inputs: https://blog.blackhc.net/2023/08/sdpi_fsvi/. Whatever arithmetic operations are involved in laundering the inputs by stripping original sources & references cannot lead to novelty that wasn't already available in some combination of the inputs.
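
            Stated roughly: if X → Y → Z form a Markov chain (original sources → training corpus → model output), then I(X; Z) ≤ I(X; Y). No amount of processing the corpus can yield more information about the sources than the corpus already carries.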

            Neural networks can at best uncover latent correlations that were already available in the inputs. Expecting anything more is basically just wishful thinking.

            • xyzzy123
              8 hours ago
              Using this reasoning, would you argue that a new proof of a theorem adds no new information that was not present in the axioms, rules of inference and so on?

              If so, I'm not sure it's a useful framing.

              For novel writing, sure, I would not expect much truly interesting progress from LLMs without human input because fundamentally they are unable to have human experiences, and novels are a shadow or projection of that.

              But in math – and a lot of programming – the "world" is chiefly symbolic. The whole game is searching the space for new and useful arrangements. You don’t need to create new information in an information-theoretic sense for that. Even for the non-symbolic side of computing (say, diagnosing a network issue), AIs can interact with things almost as directly as we can by running commands, so they are not fundamentally disadvantaged in terms of "closing the loop" with reality or conducting experiments.

              • measurablefunc
                7 hours ago
                Sound deductive rules of logic cannot create novelty that exceeds the inherent limits of their foundational axiomatic assumptions. You cannot expect novel results from neural networks that exceed the inherent information capacity of their training corpus & the inherent biases of the neural network (encoded by its architecture). So if the training corpus is semantically unsound & inconsistent, then there is no reason to expect that it will produce logically sound & semantically coherent outputs (i.e. garbage inputs → garbage outputs).
                • xyzzy123
                  4 hours ago
                  Maybe? But it also seems like you are not accounting for new information at inference time. Let's pretend I agree the LLM is a plagiarism machine that can produce no novelty in and of itself that didn't come from what it was trained on, and produces mostly garbage (I only half agree lol, and I think "novelty" is under-specified here).

                  When I apply that machine (with its giant pool of pirated knowledge) _to my inputs and context_ I can get results applicable to my modestly novel situation which is not in the training data. Perhaps the output is garbage. Naturally if my situation is way out of distribution I cannot expect very good results.

                  But I often don't care if the results are garbage some (or even most!) of the time if I have a way to ground-truth whether they are useful to me. This might be via running a compile, a test suite, a theorem prover or mk1 eyeball. Of course the name of the game is to get agents to do this themselves and this is now fairly standard practice.

                  • measurablefunc
                    3 hours ago
                    I'm not here to convince you whether Markov chains are helpful for your use cases or not. I know from personal experience that even in cases where I have a logically constrained query I will receive completely nonsensical responses¹.

                    ¹https://chatgpt.com/share/69367c7a-8258-8009-877c-b44b267a35...

                    • jacquesm
                      2 hours ago
                      > Here is a correct, standard correction:

                      It does this all the time, but as often as not then outputs nonsense again, just different nonsense, and if you keep it running long enough it starts repeating previous errors (presumably because some sliding window is exhausted).

                      • measurablefunc
                        1 hour ago
                        That's been my general experience, and that was the most recent example. People keep forgetting that unless they can independently verify the outputs, they are essentially paying OpenAI for the privilege of being very confidently gaslighted.
            • cornel_io
              5 hours ago
              Theoretical "proofs" of limitations like this are always unhelpful because they're too broad, and apply just as well to humans as they do to LLMs. The result is true but it doesn't actually apply any limitation that matters.
              • measurablefunc
                4 hours ago
                You're confused about what applies to people & what applies to formal systems. You will continue to be confused as long as you keep thinking formal results can be applied in informal contexts.
          • andsoitis
            12 hours ago
            I like your test. Should we also apply it to specific humans?

            We all stand on the shoulders of giants and learn by looking at others’ solutions.

            • jacquesm
              12 hours ago
              That's true. But if we take your implied rebuttal, then current-level AI would be able to learn from current AI as well as it would learn from humans, just like humans learn from other humans. But so far that does not seem to be the case; in fact, AI companies do everything they can to avoid eating their own tail. They'd love eating their own tail if it was worth it.

              To me that's proof positive they know their output is mangled inputs, they need that originality otherwise they will sooner or later drown in nonsense and noise. It's essentially a very complex game of Chinese whispers.

              • handoflixue
                7 hours ago
                Equally, of course, all six year olds need to be trained by other six year olds; we must stop this crutch of using adult teachers
              • andsoitis
                11 hours ago
                I share that perspective.
          • andrepd
            12 hours ago
            Excellent observation.
          • ninetyninenine
            6 hours ago
            [dead]
          • bfffbgfdcb
            12 hours ago
            [flagged]
            • jacquesm
              12 hours ago
              I think my track record belies your very low value and frankly cowardly comment. If you have something to say at least do it under your real username instead of a throwaway.
        • fpoling
          12 hours ago
          Pick up a book about programming from the seventies or eighties that was unlikely to be scanned and fed into an LLM. Take a task from it that even a student can solve within 10 minutes and ask the LLM to write a program for it. If the problem was not really published before, the LLM fails spectacularly.
          • crawshaw
            12 hours ago
            This does not appear to be true. Six months ago I created a small programming language. I had LLMs write hundreds of small programs in the language, using the parser, interpreter, and my spec as a guide for the language. The vast majority of these programs were either very close to or exactly what I wanted. No prior source existed for the programming language because I had created it out of whole cloth days earlier.
            • jazzyjackson
              12 hours ago
              Obviously you accidentally recreated a language from the 70s :P

              (I created a template language for JSON and added branching and conditionals and realized I had a whole programming language. I was really proud of my originality until I was reading Ted Nelson's Computer Lib/Dream Machines and found out I'd reinvented TRAC, and to some extent XSLT. Anyway, LLMs are very good at reasoning about it because it can be constrained by a JSON schema. People who think LLMs only regurgitate haven't given them a fair shot)

              • zahlman
                11 hours ago
                FWIW, I think a JSON-based XSLT-like thing sounds far more enjoyable to use than actual XSLT, so I'd encourage you to show it off.
            • fpoling
              12 hours ago
              Languages with reasonable semantics are rather similar and LLMs are good at detecting that and adapting from other languages.
              • pertymcpert
                5 hours ago
                Sounds like creativity and intelligence to me.
          • handoflixue
            7 hours ago
            It's telling that you can't actually provide a single concrete example - because, of course, anyone skilled with LLMs would be able to trivially solve any such example within 10 minutes.

            Perhaps the occasional program that relies heavily on precise visual alignment will fail - but I dare say if we give the LLM the same grace we'd give a visually impaired designer, it can do exactly as well.

            • tovej
              4 hours ago
              I recently asked an LLM to give me one of the most basic and well-documented algorithms in the world: a blocked matrix multiply. It's essentially a few nested loops and some constants for the block size.

              It failed massively, spitting out garbage code, where the comments claimed to use blocking access patterns, but the code did not actually use them at all.

              LLMs are, frankly, nearly useless for programming. They may solve a problem every once in a while, but once you look at the code, you notice it's either directly plagiarized or bad quality (or both, I suppose, in the latter case).

          • anjel
            12 hours ago
            Sometimes it's generated, and many times it's not. Trivial to denote, but it's been deemed none of your business.
          • ahepp
            11 hours ago
            You've done this? I would love to read more about it
        • _heimdall
          11 hours ago
          I have a very anecdotal, but interesting, counterexample.

          I recently asked Gemini 3 Pro to create an RSS feed reader type of experience by using XSLT to style and lay out an OPML file. I specifically wanted it to use a server-side proxy for CORS, pass through caching headers in the proxy to leverage standard HTTP caching, and I needed all feed entries for any feed in the OPML to be combined into a single chronological feed.

          It initially told me multiple times that it wasn't possible (it also reminded me that Google is getting rid of XSLT). Regardless, after I reiterated multiple times that it is possible, it finally decided to make a temporary POC. That POC worked on the first try, with only one follow-up to standardize date formatting with support for Atom and RSS.

          I obviously can't say the code was novel, though I would be a bit surprised if it trained on that task enough for it to remember roughly the full implementation and still claimed it was impossible.

          • jacquesm
            11 hours ago
            Why do you believe that to be a counterexample? In fragmentary form all of these elements must have been present in the input; the question is really how large the largest re-usable fragment was, and whether or not, barring some transformations, you could trace it back to the original. I've done some experiments along the same lines to see what it spits out, and what I noticed is that from example to example the programming style changed drastically, to the point that I suspect it was mimicking even the style and not just the substance of the input data, and this over chunks of code long enough that they would definitely clear the bar for plagiarism.
            • handoflixue
              7 hours ago
              > In fragmentary form all of these elements must have been present in the input

              Yes, and Shakespeare merely copied the existing 26 letters of the English alphabet. What magical process do you think students are using when they read and re-combine learned examples to solve assignments?

        • checker659
          5 hours ago
          I think the burden of proof is on the people making the original claim (that LLMs are indeed spitting out original code).
        • martin-t
          10 hours ago
          The whole "reproduces training data vebatim" is a red herring.

          It reproduces _patterns from the training data_, sometimes including verbatim phrases.

          The work (to discover those patterns, to figure out what works and what does not, to debug some obscure heisenbug and write a blog post about it, ...) was done by humans. Those humans should be compensated for their work, not the owners of mega-corporations who found a loophole in copyright.

        • moron4hire
          9 hours ago
          No, the thing needing proof is the novel idea: that LLMs can produce original code.
          • marcus_holmes
            8 hours ago
            LLMs can definitely produce original other stuff: ask one to create an original poem on an extremely specific niche subject and it will do so. You can specify the niche subject to the point where it is incredibly unlikely that there is a poem on that subject in its training data, and it will still produce an original poem on that subject [0]. The well-known "otter using wifi on a plane" series of images [1] is another example: this was not in the training data (well, it is now, because well-known, but you get the idea).

            Is there something unique about code, that is different from language (or images), that would make it impossible for an LLM to produce original code? I don't believe so, but I'm willing to be convinced.

            I think this switches the burden of proof: we know LLMs can produce original content in other contexts. Why would they not be able to create original code?

            [0] Ever curious, I tested this assumption. I got Claude to write an original limerick about goats oiling their beards with olive oil, which was the first reasonable thing I could think of as a suitably niche subject. I googled the result and could not find anything close to it. I then asked it to produce another limerick on the same subject, and it produced a different limerick, so obviously not just repeating training data.

            [1] https://www.oneusefulthing.org/p/the-recent-history-of-ai-in...

            • jacquesm
              4 hours ago
              No, it transformed your prompt. Another person giving it the same prompt will get the same result when starting from the same state. f('your prompt here') is a transformation of your prompt based on hidden state.
              • marcus_holmes
                10 minutes ago
                This is also true of humans, see every debate on free will ever.

                The trick, of course, is getting to the exact same starting state.

          • handoflixue
            7 hours ago
            What's your proof that the average college student can produce original code? I'm reasonably certain I can get an LLM to write something that will pass any test that the average college student can, as far as that goes.
      • ekropotin
        12 hours ago
        > If a human did this we probably would have a word for them.

        What do you mean? A programmer's work is literally combining existing patterns into solutions for problems.

      • Mtinie
        13 hours ago
        > If a human did this we probably would have a word for them.

        I don’t think it’s fair to call someone an asshole for using Stack Overflow to find a similar answer with samples of code to copy into their project.

        • jacquesm
          13 hours ago
          Who brought Stack Overflow up? Stack Overflow does not magically generate code, someone has to actually provide it first.
          • Mtinie
            13 hours ago
            I generally agree with your underlying point concerning attribution and intellectual property ownership but your follow-up comment reframes your initial statement: LLMs generate recombinations of code from code created by humans, without giving credit.

            Stack Overflow offers access to other peoples’ work, and developers combined those snippets and patterns into their own projects. I suspect attribution is low.

            • jacquesm
              13 hours ago
              Stack Overflow deals with that issue by having a license agreement.
              • Mtinie
                9 hours ago
                GitHub, Bitbucket, GCE, AWS…all have licensing agreements for user contributions which the user flagged as “public” so I’m not exactly clear of your point if you are holding SO up as a bastion of intellectual property rights different from the other places LLM training sets were scraped from.
                • jacquesm
                  9 hours ago
                  I was not the person that introduced SO to the discussion.
              • mbesto
                11 hours ago
                To be fair, their license agreement is pretty much impossible to enforce.
        • bluedino
          13 hours ago
          It has been for the last 15 years.
        • sublinear
          13 hours ago
          Using stack overflow recklessly is definitely asshole behavior.
          • Mtinie
            9 hours ago
            Recklessly is a strong word. I’ll give you the benefit of the doubt and assume your comment in good faith.

            How do you describe the “reckless” use of information?

      • Aeolun
        12 hours ago
        Software engineer? You think I cite all the code I’ve ever seen before when I reproduce it? That I even remember where it comes from?
        • tovej
          4 hours ago
          You don't?

          If you reproduce something, usually you have to check the earlier implementation for it and copy it over. This would inevitably require you to look at the license and author of said code.

          Assuming of course, you're talking about nontrivial functionality, because obviously we're not talking about trivial one-liners etc.

      • ineedasername
        10 hours ago
        >we probably would have a word for them

        Student? Good learner? Pretty much what everyone does can be boiled down to reading lots of other code that’s been written and adapting it to a use case. Sure, to some extent models are regurgitating memorized information, but for many tasks they’re regurgitating a learned method of doing something and backfilling the specifics as needed— the memorization has been generalized.

      • raincole
        5 hours ago
        This is why ragebait was chosen as the word of 2025.

        > took that code without credit to the original author(s), adapted it to your use case

        Aka software engineering.

      • fooker
        12 hours ago
        > If a human did this we probably would have a word for them.

        Humans do this all the time.

      • FanaHOVA
        13 hours ago
        Are you saying that every piece of code you have ever written contains a full source list of every piece of code you previously read to learn specific languages, patterns, etc?

        Or are you saying that every piece of code you ever wrote was 100% original and not adapted from any previous codebase you ever worked in or any book / reference you ever read?

        • pests
          5 hours ago
          While I generally agree with you, these "LLM is a human" comparisons really are tiresome, I feel. It hasn't been proven, and I don't know how many other legal issues could have been solved if adding "like a human" made it okay. Google v Oracle? "oh, you've never learned an API??!?" Or take the original Google Books controversy - "it's reading books and memorizing them, like humans can". I do agree it's different, but I don't like this line of argument at all.
          • FanaHOVA
            4 hours ago
            I agree, that's why I was trying to point out that saying "if a person did that we'd have a word for them" is useless. They are not people, and people don't behave like that anyway. It adds nothing to the discussion.
        • jacquesm
          13 hours ago
          What's with the bad takes in this thread. That's two strawmen in one comment, it's getting a bit crowded.
          • DangitBobby
            12 hours ago
            Or the original point doesn't actually hold up to basic scrutiny and is indistinguishable from straw itself.
            • jacquesm
              12 hours ago
              HN has guidelines for a reason.
              • incr_me
                11 hours ago
                You're adhering to an excess of rules, methinks!
            • tovej
              4 hours ago
              The original point, that LLMs are plagiarising inputs, is a very common, common-sense opinion.

              There are court cases where this is being addressed currently, and if you think about how LLMs operate, a reasonable person typically sees that it looks an awful lot like plagiarism.

              If you want to claim it is not plagiarism, that requires a good argument, because it is unclear that LLMs can produce novelty, since they're literally trying to recreate the input data as faithfully as possible.

      • FeepingCreature
        9 hours ago
        This is not how LLMs work.
      • giancarlostoro
        13 hours ago
        You mean like copying and pasting code from Stack Overflow?
      • nvllsvm
        11 hours ago
        > Someone else did

        Who?

      • martin-t
        10 hours ago
        Programmers are willingly blind to this, at least until it's their code being stolen or they lose their job.

        _LLMs are lossily compressed archives of stolen code_.

        Trying to achieve AI through compression is nothing new.[0] The key innovation[1] is that the model[2] outputs not only the first-order input data but also the higher-order patterns from the input data.

        That is certainly one component of intelligence, but we need to recognize that the tech companies didn't build AI; they built a compression algorithm which, combined with the stolen input text, can reproduce the input data and its patterns in an intelligent-looking way.

        [0]: http://prize.hutter1.net/

        [1]: Oh, god, this phrase is already triggering my generated-by-LLM senses.

        [2]: Model of what? Of the stolen text. If 99.9999% of the work to achieve AI wasn't done by people whose work was stolen, they wouldn't be called models.

      • kevinsync
        10 hours ago
        I've been struggling with this throughout the entire LLM-generated-code arc we're currently living -- I agree that it is wack in theory to take existing code and adapt it to your use-case without proper accreditation, but I've also been writing code since Pulp Fiction was in theaters and a lot of it is taking existing code and adapting it to my use-case, sometimes without a fully-documented paper trail.

        Not to mention the moral vagaries of "if you use a library, is the complete articulation of your thing actually 100% your code?"

        Is there a difference between loading and using a function from ImageMagick, and a standalone copycat function that mimics a function from ImageMagick?

        What if you need it transliterated from one language to another?

        Is it really that different than those 1200 page books from the 90's that walk you through implementing a 3D engine from scratch (or whatever the topic might be)? If you make a game on top of that book's engine, is your game truly yours?

        If you learn an algorithm in some university class and then just write it again later, is that code yours? What if your code is 1-for-1 a copy of the code you were taught?

        It gets very murky very quick!

        Obviously I would encourage proper citation, but I also recognize the reality of this stuff -- what if you're fully rewriting something you learned decades ago and don't know who to cite? What if you have some code snippet from a website long forgotten that you saved and used? What if you use a library that also uses a library that you're not aware of because you didn't bother to check, and you either cite the wrapper lib or cite nothing at all?

        I don't have some grand theory or wise thoughts about this shit, and I enjoy the anthropological studies trying to ascertain provenance / assign moral authority to remarkable edge cases, but end of the day I also find it exhausting to litigate the use of a tool that exploited the fact that your code got hoovered up by a giant robot because it was public, and might get regurgitated elsewhere.

        To me, this is the unfortunate and unfair story of Gregory Coleman [0] -- drummer for The Winstons, who recorded "Amen, Brother" in 1969 (which gave us the most-sampled drum break in the world, spawned multiple genres of music, and changed human history) -- the man never made a dime from it, never even knew, and died completely destitute, despite his monumental contribution to culture. It's hard to reconcile the unjustness of it all, yet not that hard to appreciate the countless positive things that came out of it.

        I don't know. I guess at the end of the day, does the end justify the means? Feels pretty subjective!

        [0] https://en.wikipedia.org/wiki/Amen_break

        • jacquesm
          10 hours ago
          What amazes me is how many programmers have absolutely no concept about copyright at all. This should be taught as a basic component of any programming course.
      • goneskiiiing
        11 hours ago
        [flagged]
      • idiotsecant
        13 hours ago
        Yes, the word for that is software developer.
    • littlecranky67
      14 hours ago
      > Claude/LLMs in general are still pretty bad at the intricate details of layouts and visual things

      Because the rendered output (pixels, not HTML/CSS) is not fed in as training data. You will find tons of UI snippets and questions, but they rarely include screenshots. And if they do, they are not scraped.

      • Wowfunhappy
        13 hours ago
        Interesting thought. I wonder if Anthropic et al could include some sort of render-html-to-screenshot as part of the training routine, such that the rendered output would get included as training data.
        • btown
          12 hours ago
          Even better, a tool that can tell the rendered bounding box of any set of elements, and what the distances between pairs of elements are, so it can make adjustments if relative positioning doesn't match its expectation. This would be incredible for SVG generation for diagrams, too.
        • KaiserPro
          12 hours ago
          That's basically a VLM, but the problem is that describing the world requires a better understanding of the world. Hence why LeCun is talking about world models. (It's also cutting edge for teaching robots to manipulate and plan manipulations.)
      • ubercow13
        9 hours ago
        Why wouldn't they be?
    • chongli
      13 hours ago
      Why is this something a Wayland compositor (a glorified window manager) needs to worry about? Apple figured this out back in the 1990s with ColorSync and they did it once for the Mac OS and any application that wanted colour management could use the ColorSync APIs.
      • hedgehog
        12 hours ago
        Color management infrastructure is intricate. To grossly simplify: somehow you need to connect together the profile and LUT for each display, upload the LUTs to the display controller, and provide appropriate profile data for each window to their respective processes. During compositing, you then convert buffers that don't already match the output (unmanaged applications will probably be treated as sRGB; color-managed graphics apps will opt out of conversion and do whatever is correct for their purpose).
        • chongli
          12 hours ago
          Yes, but why is the compositor dealing with this? Shouldn't the compositor simply be deciding which windows go where (X, Y, and Z positions) and leave the rendering to another API? Why does every different take on a window manager need to re-do all this work?
          • hedgehog
            9 hours ago
            Turning the question around, what other part of the system _could_ do this job? And how would the compositor do any part of its job if it doesn't have access to both window contents and displays? I'm not super deep in this area but a straight-forward example of a non-managed app and a color-aware graphics app running on a laptop with an external display seems like it is enough to figure out how things need to go together. This neglects some complicating factors like display pixel density, security, accessibility, multi-GPU, etc, but I think it more or less explains how the Wayland authors arrived at its design and how some of the problems got there.
            • chongli
              7 hours ago
              I'm questioning the idea that people should be writing compositors at all. Why doesn't Wayland itself do the compositing and let everyone else just manage windows?

              It's like going to Taco Bell and they make you grind your own corn for your tortillas.

              • hedgehog
                3 hours ago
                Why? Probably better to ask the Wayland developers that. Maybe you're right. That said, whether everyone uses the same compositor and window management is modular, or not and shared code travels as libraries, I don't think the complexity of color management is much different.
          • zahlman
            11 hours ago
            I mean, when I hear the word "compositing" I definitely imagine something that involves "alpha" blending, and doing that nicely (instead of a literal alpha calculation) is going to involve colour management.
            • chongli
              7 hours ago
              That's on the Wayland team though. They drew up the new API boundaries and decided that all window managers would now be in the business of compositing.

              If I wanted to put it most uncharitably, I'd say they decided to push all of the hard parts out of Wayland itself and force everyone else to deal with them.

  • smoghat
    15 hours ago
    Ok, so here is an interesting case where Claude was almost good enough, but not quite. But I've been amusing myself by taking abandoned Mac OS programs from 20 years ago that I find on GitHub and bringing them up to date to work on Apple silicon. For example, jpegview, which was a very fast and simple slideshow viewer. It took about three iterations with Claude Code before I had it working. Then it was time to fix some problems, add some features like playing videos, a new layout, and so on. I may be the only person in the world left who wants this app, but that was fine for a day-long project that cooked in a window with some prompts from me while I did other stuff. I'll probably tackle ScanTailor Advanced next to clean up some terrible book scans. Again, I have real things to do with my time, but each of these mini projects just requires me to have a browser window open to a Claude Code instance while I work on more attention-demanding tasks.
    • skrebbel
      14 hours ago
      > Ok, so here is an interesting case where Claude was almost good enough, but not quite.

      You say that as if that’s uncommon.

      • jonplackett
        14 hours ago
        This should be the strap line for all AI (so far)
        • smoghat
          14 hours ago
          That's fair. But I always think of it as an intern I am paying $20 a month for or $200 a month. I would be kind of shocked if they could do everything as well as I'd hoped for that price point. It's fascinating for me and worth the money.

          I am lucky that I don't depend on this for work at a corporation. I'd be pulling my hair out if some boss said "You are going to be doing 8 times as much work using our corporate AI from now on."

          • jonplackett
            12 hours ago
            Don't get me wrong, doing 80% of my work for me is still great. And I'm actually quite glad I'm still needed for the other 20%
            • jasonkester
              4 hours ago
              The problem is that your intern in this case is doing 1600% of the work, and now it’s your job to find and remove that extra 1520% so that you’re left with something usable.
    • egeozcan
      15 hours ago
      Side note: as a person who started using a Mac in March, I found Phoenix Slides really good.
      • smoghat
        13 hours ago
        It is! I was really just curious if I could update this old codebase without getting my hands dirty.
  • yosito
    6 hours ago
    This has been my experience with almost everything I've tried to create with generative AI, from apps and websites, to photos and videos, to text and even simple sentences. At first glance, it looks impressive, but as soon as you look closer, you start to notice that everything is actually just sloppy copy.

    That being said, sloppy copy can make doing actual work a lot faster if you treat it with the right amount of skepticism and hand-holding.

    Its first attempt at the Space Jam site was close enough that it probably could have been manually fixed by an experienced developer in less time than it takes to write the next prompt.

    • jama211
      6 hours ago
      But my experience has also been that with every model they require less hand-holding and the code is less sloppy. If I'm careful with my prompts, GPT Codex 5.1 has recently been making a lot of TypeScript for me that is basically production-ready, in a way it couldn't even 2 months ago
  • sqircles
    14 hours ago
    > The Space Jam website is simple: a single HTML page, absolute positioning for every element...

    Absolute positioning wasn't available until CSS2 in 1998. This is just a table with crafty use of align, valign, colspan, and rowspan.

    • thecr0w
      11 hours ago
      Thanks, my friend. I added a strike through of the error, a correction, and credited you.

      I'm keeping it in for now because people have made some good jokes about the mistake in the comments and I want to keep that context.

    • DocTomoe
      14 hours ago
      Which would also render differently on every machine, based on browser settings, screen sizes, and available fonts.

      Like the web was meant to be. An interpreted hypertext format, not a pixel-perfect brochure for marketing execs.

      • masswerk
        13 hours ago
        Hum, table cells provide the max-width and images a min-width, heights are absolute (with table cells spilling over, as with CSS "overflow-y: visible"), aligns and maybe HSPACE and VSPACE attributes do the rest. As long as image heights exceed the effective line-height and there's no visible text, this should render pixel perfect on any browser then in use. In this case, there's also an absolute width set for the entire table, adding further constraints. Table layouts can be elastic, with constraints or without, but this one should be pretty stable.

        (Fun fact, the most amazing layout foot-guns, then: effective font sizes and line-heights are subject to platform and configuration (e.g., Win vs Mac); Netscape does paragraph spacing at 1.2em, IE at 1em (if this matters, prefer `<br>` over paragraphs); frame dimensions in Netscape are always calculated as integer percentages of window dimensions, even if you provide absolute dimensions in pixels, while IE does what it says on the tin (a rare example), so they will be the same only by chance and effective rounding errors. And, of course, screen gamma is different on Win and Mac, so your colors will always be messed up – aim for a happy medium.)

        • wilsmex
          5 hours ago
          Oh good times, the screen gamma issue got me many times back then, as I was the super odd kid on a Mac in the late 90's (father was in education). I'd pull my beautifully crafted table-soup site up on a friend's PC later and wonder why the colors were all wacky!
      • jeanlucas
        13 hours ago
        >Like the web was meant to be.

        what?

        • alternatex
          1 hour ago
          Presumably written by a person who wasn't a web developer back then or was developing solely for Internet Explorer.
        • DocTomoe
          4 hours ago
          Do you not remember the good old days when people who focussed on graphics design rather than content put 'Best used with Netscape/IE5.5' on their pages?
  • charcircuit
    9 hours ago
    >I'd like to preserve this website forever and there's no other way to do it besides getting Claude to recreate it from a screenshot.

    There are other ways, such as downloading an archive and preserving the file in one or more cloud storage services.

    https://archive.is/download/cXI46.zip

    • a2128
      5 hours ago
      The sentence immediately after that would imply sarcasm

      > Note: please help, because I'd like to preserve this website forever and there's no other way to do it besides getting Claude to recreate it from a screenshot. Believe me, I'm an engineering manager with a computer science degree. Please please please help (sad emoji)

  • sigseg1v
    16 hours ago
    Curious if you've tested something such as:

    - "First, calculate the orbital radius. To do this accurately, measure the average diameter of each planet, p, and the average distance from the center of the image to the outer edge of the planets, x, and calculate the orbital radius r = x - p"

    - "Next, write a unit test script that we will run that reads the rendered page and confirms that each planet is on the orbital radius. If a planet is not, output the difference you must shift it by to make the test pass. Use this feedback until all planets are perfectly aligned."

    • Aurornis
      15 hours ago
      This is my experience with using LLMs for complex tasks: If you're lucky they'll figure it out from a simple description, but to get most things done the way you expect requires a lot of explicit direction, test creation, iteration, and tokens.

      One of the keys to being productive with LLMs is learning to recognize when it's going to take much more effort to babysit the LLM into getting the right result than to simply do the work yourself.

      • jazzyjackson
        11 hours ago
        Re: tokens, there is a point where you have to decide what's worth it to you. I'd been unimpressed with what I could get out of chat apps, but when I wanted to do a Rails app that would have cost me thousands in developer time and some weeks of communication, Zoom meetings, and iteration... I bit the bullet and kept topping up the Claude API and spent about $500 on Opus over the course of a weekend, but the site is done and works great.
      • jacquesm
        13 hours ago
        It would not be the first time that an IT services provider makes more money the worse their products perform.
    • bluedino
      13 hours ago
      Congratulations, we finally created 'plain English' programming languages. It only took 1/10th of the world's electricity and 40% of the semiconductor production.
    • thecr0w
      15 hours ago
      Hm, I didn't try exactly this, but I probably should!

      Wrt the unit test script, let's take Claude out of the equation: how would you design the unit test? I kept running into either Claude or some library not being capable of consistently identifying planet vs non-planet, which was hindering Claude's ability to make decisions based on fine detail or "pixel coordinates", if that makes sense.

      • cfbradford
        14 hours ago
        Do you give Claude the screenshot as a file? If so I’d just ask it to write a tool to diff each asset to every possible location in the source image to find the most likely position of each asset. You don’t really need recognition if you can brute force the search. As a human this is roughly what I would do if you told me I needed to recreate something like that with pixel perfect precision.
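
        A rough sketch of that brute-force search (pure NumPy, illustrative only; it ignores GIF transparency and is slow compared to a real template matcher):

            import numpy as np

            def find_asset(asset, screenshot):
                # asset, screenshot: 2D grayscale float arrays
                ah, aw = asset.shape
                sh, sw = screenshot.shape
                best, best_pos = np.inf, None
                # slide the asset over every possible offset
                for y in range(sh - ah + 1):
                    for x in range(sw - aw + 1):
                        d = np.sum((screenshot[y:y + ah, x:x + aw] - asset) ** 2)
                        if d < best:
                            best, best_pos = d, (x, y)
                return best_pos  # top-left corner of the best match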
        • thecr0w
          11 hours ago
          Ok! Will give it a shot. In a few iterations I gave him screenshots, I gave him the ability to take screenshots, and I gave him the Playwright MCP. I kind of gave up on the path you're suggesting (though I didn't get super far along) because I felt like I would run into this problem eventually of needing a model to figure out what a planet is, where the edge of the planet is, etc.

          But if that could be done deterministically, I totally agree this is the way to go. I'll put some more time into it over the next couple weeks.

    • turnsout
      16 hours ago
      Yes, this is a key step when working with an agent—if they're able to check their work, they can iterate pretty quickly. If you're in the loop, something is wrong.

      That said, I love this project. haha

      • monsieurbanana
        15 hours ago
        I'm trying to understand why this comment got downvoted. My best guess is that "if you're in the loop, something is wrong" is interpreted as there should be no human involvement at all.

        The loop here, imo, refers to the feedback loop. And it's true that ideally there should be no human involvement there. A tight feedback loop is as important for llms as it is for humans. The more automated you make it, the better.

        • turnsout
          14 hours ago
          Yes, maybe I goofed on the phrasing. If you're in the feedback loop, something is wrong. Obviously a human should be "in the loop" in the sense that they're aware of and reviewing what the agent is doing.
  • stared
    1 hour ago
    Just use Playwright Skill (https://github.com/lackeyjb/playwright-skill). It is a game changer. Otherwise it is Claude the Blind, as OP mentioned.
  • 999900000999
    16 hours ago
    Space Jam website design as an LLM benchmark.

    This article is a bit negative. Claude gets close; it just can't get the order right, which is something OP can manually fix.

    I prefer GitHub Copilot because it's cheaper and integrates with GitHub directly. I'll have times where it'll get it right, and times when I have to try 3 or 4 times.

    • GeoAtreides
      16 hours ago
      >which is something OP can manually fix

      what if the LLM gets something wrong that the operator (a junior dev perhaps) doesn't even know is wrong? That's the main issue: if it fails here, it will fail with other things, in less obvious ways.

      • godelski
        12 hours ago
        I think that's the main problem with them. It is hard to figure out when they're wrong.

        As the post shows, you can't trust them when they think they solved something, but you also can't trust them when they think they haven't[0]. These things are optimized for human preference, which ultimately results in them being optimized to hide mistakes. After all, we can't penalize mistakes in training when we don't know the mistakes are mistakes. The de facto bias is that we prefer mistakes we don't know are mistakes over mistakes that we do.

        Personally I think a well designed tool makes errors obvious. As a tool user that's what I want and what makes tool use effective. But LLMs flip this on its head, making errors difficult to detect. Which is incredibly problematic.

        [0] I frequently see this in a thing it thinks is a problem but actually isn't, which makes steering more difficult.

        [1] Yes, conceptually unknown unknowns are worse. But you can't measure unknown unknowns, they are indistinguishable from knowns. So you always optimize deception (along with other things) when you don't have clear objective truths (most situations).

      • alickz
        9 hours ago
        >what if the LLM gets something wrong that the operator (a junior dev perhaps) doesn't even know it's wrong?

        the same thing that always happens if a dev gets something wrong without even knowing it's wrong - either code review/QA catches it, or the user does, and a ticket is created

        >if it fails here, it will fail with other things, in not such obvious ways.

        is infallibility a realistic expectation of a software tool or its operator?

    • smallnix
      16 hours ago
      That's not the point of the article. It's about Claude/LLMs being overconfident about recreating the site pixel-perfectly.
      • jacquesm
        13 hours ago
        All AIs are overconfident. It's impressive what they can do, but it is at the same time extremely unimpressive what they can't do while passing it off as the best thing since sliced bread. 'Perfect! Now I see the problem.' 'Thank you for correcting that, here is a perfect recreation of problem 'x' that will work with your hardware.' (never mind the 10 glaring mistakes).

        I've tried these tools a number of times and spent a good bit of effort on learning to maximize the return. By the time you know what prompt to write you've solved the problem yourself.

    • thecr0w
      16 hours ago
      Ya, this is true. Another commenter also pointed out that my intention was to one-shot; I didn't really go too deeply into trying multiple iterations.

      This is also fairly contrived, you know? It's not a realistic limitation to rebuild HTML from a screenshot because of course if I have the website loaded I can just download the HTML.

      • swatcoder
        15 hours ago
        > rebuild HTML from a screenshot

        ???

        This is precisely the workflow when a traditional graphic designer mocks up a web/app design, which still happens all the time.

        They sketch a design in something like Photoshop or Illustrator, because they're fluent in these tools and many have been using them for decades, and somebody else is tasked with figuring out how to slice and encode that design in the target interactive tech (HTML+CSS, SwiftUI, QT, etc).

        Large companies, design agencies, and consultancies with tech-first design teams have a different workflow, because they intentionally staff graphic designers with a tighter specialization/preparedness, but that's a much smaller share of the web and software development space than you may think.

        There's nothing contrived at all about this test and it's a really great demonstration of how tools like Claude don't take naturally to this important task yet.

        • thecr0w
          15 hours ago
          You know, you're totally right and I didn't even think about that.
      • Retric
        16 hours ago
        It’s not unrealistic to want to revert to an early version of something you only have a screenshot of.
    • bigstrat2003
      15 hours ago
      > it just can't get the order right which is something OP can manually fix.

      If the tool needs you to check up on it and fix its work, it's a bad tool.

      • markbao
        15 hours ago
        “Bad” seems extreme. The only way to pass the litmus test you’ve described is for a tool to be 100% perfect, so then the graph looks like 99.99% “bad tool” until it reaches 100% perfection.

        It’s not that binary imo. It can still be extremely useful and save a ton of time if it does 90% of the work and you fix the last 10%. Hardly a bad tool.

        It’s only a bad tool if you spent more time fixing the results than building it yourself, which sometimes used to be the case for LLMs but is happening less and less as they get more capable.

        • a4isms
          15 hours ago
          If you show me a tool that does a thing perfectly 99% of the time, I will stop checking it eventually. Now let me ask you: How do you feel about the people who manage the security for your bank using that tool? And eventually overlooking a security exploit?

          I agree that there are domains for which 90% good is very, very useful. But 99% isn't always better. In some limited domains, it's actually worse.

          • 999900000999
            13 hours ago
            Counterpoint.

            Humans don't get it right 100% of the time.

      • godelski
        11 hours ago
        I wouldn't go that far, but I do believe good tool design tries to make its failure modes obvious. I like to think of it similar to encryption: hard to do, easy to verify.

        All tools have failure modes and truthfully you always have to check the tool's work (which is your work). But being a master craftsman is knowing all the nuances behind your tools, where they work, and more importantly where they don't work.

        That said, I think that also highlights the issue with LLMs and most AI. Their failure modes are inconsistent and difficult to verify. Even with agents and unit tests you still have to verify and it isn't easy. Most software bugs are created from subtle things, often which compound. Which both those things are the greatest weaknesses of LLMs: nuance and compounding effects.

        So I still think they aren't great tools, but I do think they can be useful. But that also doesn't mean it isn't common for people to use them well outside the bounds of where they are generally useful. It'll be fine a lot of times, but the problem is that it is like an alcohol fire[0]; you don't know what's on fire because it is invisible. Which, after all, isn't that the hardest part of programming? Figuring out where the fire is?

        [0] https://www.youtube.com/watch?v=5zpLOn-KJSE

      • mrweasel
        14 hours ago
        That's my thinking. If I need to check up on the work, then I'm equally capable of writing the code myself. It might go faster with an LLM assisting me, and that feels perfectly fine. My issue is when people use the AI tools to generate something far beyond their own capabilities. In those cases, who checks the result?
      • wvenable
        15 hours ago
        Perfection is the enemy of good.
  • jackfranklyn
    1 hour ago
    The real kicker is that LLMs were trained on modern web dev content where "tables for layout" is a cardinal sin. So you're asking it to do something its training data actively told it NOT to do for years.

    Email HTML development is the last bastion of table-based layouts and it's wild that it's still necessary in 2024. Every time I touch email templates I feel like I'm time traveling.

  • Wowfunhappy
    17 hours ago
    Claude is not very good at using screenshots. The model may technically be multi-modal, but its strength is clearly in reading text. I'm not surprised it failed here.
    • fnordpiglet
      16 hours ago
      Especially since it decomposes the image into a semantic vector space rather than the actual grid of pixels. Once the image is transformed into patch embeddings, all sense of pixels is entirely destroyed. The author demonstrates a profound lack of understanding of how multimodal LLMs function that a simple query of one would elucidate immediately.

      The right way to handle this is not to feed it grids and whatnot, which all get blown away by the embedding encoding, but to instruct it to build image processing tools of its own and to mandate their use in constructing the required coordinates and computing the eccentricity of the pattern etc. in code and language space. Doing it this way you can even get it to write assertive tests comparing the original layout to the final one across various image processing metrics. This would assuredly work better, take far less time, be more stable on iteration, and fits neatly into how a multimodal agentic programming tool actually functions.

      • mcbuilder
        14 hours ago
        Yeah, this is exactly what I was thinking. LLMs don't have precise geometrical reasoning from images. Having an intuition of how the models work is actually a defining skill in "prompt engineering".
        • thecr0w
          11 hours ago
          Yeah, still trying to build my intuition. Experiments/investigations like this help me. Any other blogs or experiments you'd suggest?
          • fnordpiglet
            9 hours ago
            Asking your favorite LLM actually helps a lot. They generally are well trained on LLM papers unsurprisingly. In this case though it’s important to realize the LLM is incapable of seeing or hearing or reading. Everything has to be transformed into a vector space. Images are generally cut into patches (like 16x16) which are themselves transformed by several neural networks to convert them into a semantic space represented by the models parameters.

            But this isn't hugely different from your vision. You don't see the pixel grid either. You have to use tools to measure things. You have the ability over time to iteratively interact with the image by perhaps counting grid lines, but the LLM does not - it's a one-shot inference against this highly transformed image. They've gotten better at complex visual tasks, including types of counting, but it's not able to examine the image in any analytical way, or even in its original representation. It's just not possible.

            It can however make tools that can. It's very good at working with PIL and other image processing libraries, or even writing image processing code de novo, and then using those to ground itself. Likewise, it cannot do math, but it can write a calculator that can do highly complex mathematics on its behalf.

      • thecr0w
        11 hours ago
        Great, thanks for that suggestion!
    • dcanelhas
      15 hours ago
      Even with text, parsing content in 2D seems to be a challenge for every LLM I have interacted with. Try getting a chatbot to make an ascii-art circle with a specific radius and you'll see what I mean.
      • Wowfunhappy
        13 hours ago
        I don't really consider ASCII art to be text. It requires a completely different type of reasoning. A blind person can understand text if it's read out loud. A blind person really can't understand ASCII art if it's read out loud.
  • liampulles
    3 hours ago
    It seems to me that Claude's error here (which is not unique to it) is self-sycophancy. The model is too eager to convince itself it did a good job.

    I'd be curious to hear from experienced agent users whether there is some AGENTS.md stuff to make the LLM speak more plainly. I wonder if that would impact the quality of work.

    • aprilfoo
      1 hour ago
      > It seems to me that Claude's error here (which is not unique to it) is self-sycophancy. The model is too eager to convince itself it did a good job.

      It seems this applies to the whole AI industry, not just LLMs.

  • manlymuppet
    11 hours ago
    Couldn’t you just feed Claude all the raw, inspect element HTML from the website and have it “decrypt” that?

    The entire website is fairly small so this seems feasible.

    Usually there’s a big difference between a website’s final code and its source code because of post processing but that seems like a totally solvable Claude problem.

    Sure LLMs aren’t great with images, but it’s not like the person who originally wrote the Space Jam website was meticulously messing around with positioning from a reference image to create a circular orbit — they just used the tools they had to create an acceptable result. Claude can do the same.

    Perhaps the best method is to re-create, rather than replicate the design.

    • blks
      11 hours ago
      What do you mean? The raw HTML is the original website's source code.

      Modern web development has completely poisoned the younger generation

      • manlymuppet
        10 hours ago
        I'm using "source code" like it's used when referring to source code vs. executables. React doesn't simply spit out HTML, nor the JSX used to write said React code; it outputs a mixture of things that's the optimized HTML/CSS/JS version of the React you wrote. This is akin to source code and the optimized binaries we actually use.

        Perhaps the wrong usage of "source code". I probably should've been more precise. Forgive my lack of vocabulary to describe the difference I was referring to.

        • pastel8739
          10 hours ago
          For a website from 1996 though, there’s a very good chance that the page source is the source code
        • sailfast
          8 hours ago
          There were no binaries or packages. You wrote the HTML in notepad or maybe you used some "high speed IDE" with syntax highlighting and some buttons like Dreamweaver and then uploaded it via FTP to whatever server you were hosting it on. No muss, no fuss. It was a glorious time and I miss that internet a lot.
          • manlymuppet
            6 hours ago
            Nor are there binaries now. I was simply making an analogy between traditional binaries/source code, and the way the web is built today (frameworks).
    • personjerry
      11 hours ago
      If you have the raw HTML why would you need to do this at all?
      • manlymuppet
        10 hours ago
        I should've been more precise with my words.

        What I meant is doing inspect element on the Space Jam website, and doing select all + copy.

        • futuraperdita
          8 hours ago
          I think you're assuming a pattern existed in 1996 that didn't actually exist until the 2010s.

          In 1996 JavaScript was extremely limited; even server side processing was often limited to CGI scripts. There was nothing like React that was in common use at the time. The Space Jam website was almost certainly not dynamically compiled as HTML - it existed and was served as a static set of files.

          Even a decade later, React and the frontend-framework sort of thinking wasn't really a big thing. People had started to make lots of things with "DHTML" in the early 2000s, where JavaScript was used to make things spicier (pretty animations, some server-side loading with AJAX), and pages still often worked without JS enabled, in a pattern called graceful degradation.

          What you'd get from "View Source", or "Inspect Element", and what was literally saved on disk of spacejam.com, was almost certainly the same content.

          • manlymuppet
            6 hours ago
            I'm not trying to dispute this though. Although I appreciate the clarity, I am aware of the web's past.

            The only point I was trying to make was that this project could be better achieved by an LLM if spacejam.com's HTML is supplied.

            Why you'd want to do this rather than simply use the original code is up to the developer, but I'd expect a common reason to be the ease of modern frameworks. Some justifications for making Claude create the same code again in a different framework include:

            - Using <script> tags is bad practice in a lot of modern frameworks, and it's better to just translate to React and run your logic directly within components.

            - Perhaps you're using TailwindCSS, in which case it's a good idea to port over all the original CSS so you can have unified codebase.

            - Hosting on modern frameworks is often convenient.

            - Sometimes (although maybe not for a website this small) the source code with a framework is less verbose.

            You probably misunderstood me because I paraphrased "raw" HTML several times throughout my comments in this thread before I actually read the page source and realized it was the original source code.

    • manlymuppet
      10 hours ago
      https://pastebin.com/raw/F2jxZTeJ

      The HTML I'm referring to, copied from the website.

      Only about 7,000 characters or just 2,000 Claude tokens. This is feasible.

    • literalAardvark
      10 hours ago
      The space jam website used HTML tables for formatting and split images in each cell.

      CSS didn't exist.

  • soared
    15 hours ago
    I got quite close with Gemini 3 Pro in AI Studio. I uploaded a screenshot (no assets) and the results were similar to OP's. It failed to follow my fix initially, but I told it to follow my directions (lol) and it came quite close (though portrait mode distorted it; landscape was close to perfect).

    “Reference the original uploaded image. Between each image in the clock face, create lines to each other image. Measure each line. Now follow that same process on the app we’ve created, and adjust the locations of each image until all measurements align exactly.”

    https://aistudio.google.com/app/prompts?state=%7B%22ids%22:%...

  • p0w3n3d
    3 hours ago
    LLM stands for large LANGUAGE model, so I guess you could succeed if you had a correct LANGUAGE. Maybe radial coordinates? Or turtle graphics? I myself tried to get ChatGPT to generate an SVG with twelve radial dots, as on a clock face, and failed (a year ago). Now I think it would succeed; the question is still whether it succeeds because people trained it to do so.

    Also, I have noticed that AI generates things close to what you want, and it sticks really hard to that "close" qualifier, not wanting to cross any borders to get all the way there, so I'd be happy with the effect you have shown, as that is what AI does
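
    (For what it's worth, the deterministic version of the clock-dots task is a few lines of code, which is exactly the sort of tool the model could be asked to write instead of eyeballing pixels. A minimal sketch, with arbitrary numbers:)

        import math

        cx, cy, r = 100, 100, 80  # center and radius, arbitrary
        dots = []
        for i in range(12):
            # twelve points on the circle, like hour marks on a clock
            a = 2 * math.pi * i / 12 - math.pi / 2  # start at 12 o'clock
            x = cx + r * math.cos(a)
            y = cy + r * math.sin(a)
            dots.append(f'<circle cx="{x:.1f}" cy="{y:.1f}" r="4" />')

        svg = ('<svg xmlns="http://www.w3.org/2000/svg" '
               'width="200" height="200">\n  '
               + '\n  '.join(dots) + '\n</svg>')
        print(svg)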

  • voodooEntity
    3 hours ago
    Thanks for sharing this. Partly because I forgot about this great website :D and also because I would never have thought of giving this as an LLM task, because it's so simple that I probably would have just hacked it down myself :D

    I recently experimented a lot with agentic coding (mostly with Gemini + the IntelliJ plugin, the Copilot IntelliJ plugin, and IntelliJ's own Junie) and also considered giving it a try and feeding images to the AI, but all the tasks I tried so far were pure backend-ish, so it never came to that point.

    I'm really curious how Junie in particular will act, and I will give it a try with the very same task you gave it. We'll see how it ends :D

  • ettsvensktlogin
    2 hours ago
    This was very interesting. I've tried to create an "agent" Claude Code based system to generate design from screenshots, using Playwright and other tools to take screenshots for iterative improvements. So far I have failed despite weeks of struggles.

    Thanks to this post I now have a deeper understanding as to why. Thank you.

  • mxfh
    7 hours ago
    Everything feels wrong with that approach to me, starting with calling a perfectly time-appropriate website anachronistic.

    Anachronistic would be something like creating an apparent flash website for a fictional 90s internet related movie.

    • robomc
      7 hours ago
      He's using it correctly, in its secondary sense of "belonging or appropriate to an earlier period, especially so as to seem conspicuously old-fashioned or outdated."
      • mxfh
        2 hours ago
        Still not quite convinced that the adjective should be applied to the website itself; that's a quite loose use of the word.

        "Warner Bros anachronistically keeps this website online" would be a simple fix; here the word is used to point out that maintaining an untouched 1996 promotional site at its original location is not typical for the lifecycle of a website; usually the publisher would rather redirect clicks to some current offer.

        Otherwise there is no anachronism here with the website itself, just its location under the original URL and not in some archive only.

        The website itself fulfilled its purpose for promoting the movie when it was released and simply continues to exist.

        You wouldn’t call posters, magazines, or other artifacts from the ’90s anachronistic just for still existing. Being retrievable doesn’t make something outdated by itself.

        “Anachronistic” would apply only if a new promotional site were created today to look like this—though that would more likely be called “retro.”

        Or if the movie industry insisted on using CSS-free table layouts for all its promotional websites, similar to other norms or laws that feel anachronistic because they no longer match current needs.

        Sadly, the whole piece reads like it was written 80%+ by an LLM too. Seriously, why all the emojis? But apparently this is where content is heading in general.

  • badlogic
    1 hour ago
    Loved the fun write up. Now that we know that LLM-based vision is lossy, here's a different challenge:

    Give the LLM access to the site's DOM and let it recreate the site with modern CSS. LLMs are much better with source code, aka text, right? :)

  • theahura
    1 hour ago
    I was able to get Claude to do this, though it kinda sorta cheated. Blog post describing the output here: https://theahura.substack.com/p/i-successfully-recreated-the...

    TLDR:

    "The plan is designed to ‘autoformalize’ the problem by using Test Driven Development (TDD). TDD is incredibly important for getting good outputs from a coding agent, because it helps solve the context rot problem. Specifically, if you can write a good test when the model is most ‘lucid’, it will have an easier time later on because it is just solving the test instead of ‘building a feature’ or whatever high dimensional ask you originally gave it.

    From here, Nori chugged away for the better part of half an hour in yolo mode while I went to do other things. And eventually I got a little pop up notification saying that it was done. It had written a playwright test that would open an html file, screenshot it, diff it with the original screenshot, and output the final result...

    After trying a few ways to get the stars to line up perfectly, it just gave up and copied the screenshot in as the background image, then overlaid the rest of the HTML elements on top.

    I’m tempted to give this a pass for a few reasons.

    This obviously covers the original use case that tripped up Jonah.

    It also is basically exactly what I asked the model to do — that is, give me a pixel perfect representation — so it’s kind of my fault that I was not clearer.

    I’m not sure the model actually can get to pixel perfect any other way. The screengrab has artifacts. After all, I basically just used the default linux screenshot selection tool to get the original output, without even paying much attention to the width of the image.

    If you ask the model to loosen the requirements for the exact screengrab, it does the right thing, but the pixel alignment is slightly off. The model included this as index_tiled.html in the repo, and you can see the pixel diff in one of the output images..."

  • pfix
    15 hours ago
    I checked the source of the original (like maybe many of you) to see how they actually did it, and it was... simpler than expected. I drilled myself so hard to forget tables as layout... and here it is. So simple it's a marvel.
    • COAGULOPATH
      14 hours ago
      And they do hacky things like space elements vertically using <br> tags.
  • buchwald
    10 hours ago
    Claude is surprisingly bad at visual understanding. I did a similar thing to OP where I wanted Claude to visually iterate on Storybook components. I found outsourcing the visual check to Playwright in vision mode (as opposed to using the default a11y tree) and Codex for understanding worked best. But overall the idea of a visual inspection loop went nowhere. I blogged about it here: https://solbach.xyz/ai-agent-accessibility-browser-use/
    • MagMueller
      8 hours ago
      Interesting read. Agree that GUI is super hard for agents. Did you see "skills" from browser-use? We directly interact with network requests now.
  • ErrantX
    12 hours ago
    I just feel this is a great example of someone falling into the common trap of treating an LLM like a human.

    They are vastly less intelligent than a human and logical leaps that make sense to you make no sense to Claude. It has no concept of aesthetics or of course any vision.

    All that said; it got pretty close even with those impediments! (It got worse because the writer tried to force it to act more like a human would)

    I think a better approach would be to write a tool to compare screenshots, identify misplaced items, and output that as a text finding/failure state. Claude will work much better because you're dodging the bits that are too interpretive (which humans rock at and LLMs don't)

    • sallveburrpi
      12 hours ago
      > vastly less intelligent than a human

      I would more phrase it like that they are a completely alien “intelligence” that cant really be compared to human intelligence

      • oncallthrow
        12 hours ago
        Yes, currently they are both smarter and dumber than humans.
    • naet
      11 hours ago
      The blog frequently refers to the LLM as "him" instead of "it" which somehow feels disturbing to me.

      I love to anthropomorphize things like rocks or plants, but something about doing it to an AI that responds in human like language enters an uncanny valley or otherwise upsets me.

  • jdironman
    7 hours ago
    I am going to give this a shot, but using a method I have been using lately with subagents. Basically, what I do is have it create Architect, Executor, and Adjudicator subagents. The Architect breaks any ask down into atomic and testable subtasks that take 1-3 minutes of 'dev' time. The Executor (it can spawn more than one) implements them. Then the Adjudicator reviews that they are to spec / requirements. This all happens in subagent files + a runbook.json in the .claude folder of a project. It's based on a paper that was featured on here a while back, actually [1].

    [1] https://arxiv.org/abs/2511.09030

  • anorwell
    13 hours ago
    The article does not say at any point which model was used. This is the most basic important information when talking about the capabilities of a model, and probably belongs in the title.
    • thecr0w
      11 hours ago
      Whoops, I'm very dumb. It's Opus 4.1. I updated the blog post and credited you for the correction. Thank you!
      • dweekly
        7 hours ago
        That model does not exist. Do you mean Opus 4.5?
        • retsibsi
          5 hours ago
          > That model does not exist.

          It does (unless the previous comment was edited? Currently it says Opus 4.1): https://www.anthropic.com/news/claude-opus-4-1. You can see it in the 'more models' list on the main Claude website, or in Claude Console.

        • hu3
          6 hours ago
          Opus GPT 4.1 Pro Maverick DeepK2
  • torginus
    11 hours ago
    Not sure how good Claude is nowadays, but I remember using Claude 3.5 to do some fiction writing, and for a while I thought it was amazing at coming up with plots, setting ideas, and writing witty dialogue. Then after a short while I noticed it kept recycling the same ideas, phrases, etc., quickly becoming derivative and having 'tells', similar to the group-of-3 quirk, with some otherwise decent writing patterns showing up with great frequency.

    I've heard the same thing about it doing frontends - it produces gorgeous websites but it has similar 'tells', it does CSS and certain features the same way, and if you have a very concrete idea of what you want out of it, you'll end up fighting an uphill battle with it constantly trying to do things its own way.

    Which is part of the 'LLM illusion' - I guess. To an unskilled individual, or when starting from scratch, it seems great, but the more complex the project gets, the harder it becomes to have it contribute meaningfully, leading to an ever-mounting frustration, and eventually me just giving up and doing it by hand.

    • tyre
      11 hours ago
      My boy loves a neon gradient.

      To be fair, a lot of startup websites look very similar. And the number of Stripe blurples out there was a pre-LLM brand meme.

  • ajasmin
    5 hours ago
    I'm actually surprised Claude was able to do that much.

    I hadn't even considered handing it a visual mockup to work from, even though that workflow is par for the course for any web design team.

    I would assume there must be at least some prior work on locating individual assets in a larger canvas. It just needs to be integrated into the pipeline.

  • daemonologist
    16 hours ago
    Interesting - these models are all trained to do pixel-level(ish) measurement now, for bounding boxes and such. I wonder if you could railroad it into being accurate with the right prompt.
    • Lerc
      16 hours ago
      What models are good at this? I have tried passing images to models and asking them for coordinates for specific features, then overlaid dots on those points and passed that image back to the model so it has a perception of how far out it was. It had a tendency to be consistently off by a fixed amount without getting closer.

      I don't doubt that it is possible eventually, but I haven't had much luck.

      Something that seemed to assist was drawing a multi-coloured transparent chequerboard: if the AI knows the position of the grid colours, it can pick out some relative information from the grid.

      • ryoshu
        5 hours ago
        I can't do that either without opening up an image editing tool. Give the model a tool and goal with "vision". Should work better.
      • daemonologist
        15 hours ago
        I've found Qwen3-VL to be fairly accurate at detection (though it doesn't always catch every instance). Note that it gives answers as per-mille-ages, as if the image was 1000x1000 regardless of actual resolution or aspect ratio.

        I have also not had luck with any kind of iterative/guess-and-check approach. I assume the models are all trained to one-shot this kind of thing and struggle to generalize to what are effectively relative measurements.

    • sdenton4
      15 hours ago
      Feels like the "right" approach would be to have it write some code to measure how far off the elements are in the original vs recreated image, and then iterate using the numerical output of the program...
  • 960design
    13 hours ago
    Claude argued with me about the quadratic equation the other day. It vehemently felt a -c was required whereas a c was the correct answer. I pointed this out showing step by step and it finally agreed. I tried Grok to see if it could get it right. Nope, the exact same response as Claude, but Grok never backed down; even after the step by step explanation of the maths.
    • kristofferc
      13 hours ago
      Can you link to the conversation log?
    • pbhjpbhj
      13 hours ago
      In the generalised solution there is a '-c' term with coefficient '4a'...?

      I'm not well at the moment, perhaps your ML model has flu?!!

  • shortformblog
    14 hours ago
    Claude can't properly count the number of characters in a sentence. It's asking a lot to assume it can get pixel perfect.
  • handedness
    6 hours ago
    A site in '96 would have been built largely with tables, not CSS. CSS didn't become a thing until a couple of years later.

    I know this because I'm still salty about the transition. For all of CSS's advantages, we lost something when we largely moved away from tables.

    • chrisco255
      6 hours ago
      We lost something for a time but flex box and CSS grid (which came more recently) are objectively better than tables.
      • icommentedtoday
        2 hours ago
        How are they objectively better? They seem way too complex for me
    • yesitcan
      6 hours ago
      You can still use tables.
  • 1970-01-01
    6 hours ago
    This is a great under the radar test for AI. I would put money on it failing to recreate the majority of 90s movie websites as it wasn't trained on them. The old cowboy webmasters that built and ultimately abandoned them didn't write many books on the topic.
  • pluc
    15 hours ago
    I like how the author calls a script on the internet "him".
  • stwsk
    14 hours ago
    >Look, I still need this Space Jam website recreated.

    Now that's a novel sentence if I've ever read one.

  • victorbuilds
    13 hours ago
    Building something similar - using Claude API to generate mini games from text descriptions (https://codorex.com, still pretty rough).

    Can confirm: Claude is weirdly good at generating functional game logic from vague prompts, but spatial precision is a constant battle. Anything involving exact pixel positions needs validation/correction layers on top.

    The suggestion upthread about having it write its own measurement tools seems promising - haven't tried that approach yet.

    • victorbuilds
      12 hours ago
      Actually, here's a relevant example - had Claude generate an interactive solar system with circular orbits and moons: https://codorex.com/shared/yeABdJWvRHAKqHs2kxpRnZNZPWmqWSu9

      Interesting that it handles this fine (functional orbital mechanics, animation) but would probably struggle to recreate the exact pixel positions of the Space Jam layout. Confirms the pattern: good at "make something like X" but bad at "recreate X exactly."

      • thecr0w
        11 hours ago
        This is really cool.
  • mr_windfrog
    8 hours ago
    Maybe we could try asking Claude to generate code using <table>, <tr>, <td> for layout instead of relying on div + CSS. Feels like it could simplify things a lot.

    Would this actually work, or am I missing something?

    • thecr0w
      8 hours ago
      I think it probably gets you 80% but the last 20% of pixel perfection seems to evade Claude. But I'm pretty new to writing prompts so if you can nail it let me know and I'll link you in the post.
  • simonw
    14 hours ago
    I wonder if Gemini 3 Pro would do better at this particular test? They're very proud of its spatial awareness and vision abilities.
  • Aeolun
    12 hours ago
    I think claude could have easily used a script to calculate the positions of the planets exactly here, instead of trying to use the frankly horrible image recognition.
    • fluidcruft
      12 hours ago
      Claude could probably have written a script to do it. I'm hunting assets to confirm.

      Edit: yup very easy for claude. Most of the wall time was spent installing cv2 in a venv (uv dropped python 3.14 on claude, which seemed to get stuck compiling numpy or something so I interrupted and told claude to just use python 3.13 instead).

      ME: In this directory there are two images: @p-pressbox.gif and @screenshot.png. @p-pressbox.gif is an image that exists somewhere inside @screenshot.png. Write a script that finds the precise location.

      CLAUDE: Looking at the images, I can see that p-pressbox.gif (the spaceship with "PRESS BOX SHUTTLE" text) appears in the upper-left area of screenshot.png. I'll write a Python script using OpenCV to find the precise location.

      [ some venv stuff/telling claude to use uv and python 3.13 ]

      CLAUDE: The script found the precise location of p-pressbox.gif within screenshot.png:

        | Measurement         | Value           |
        |---------------------|-----------------|
        | Top-left corner     | (87, 110)       |
        | Bottom-right corner | (218, 166)      |
        | Center              | (152, 138)      |
        | Template size       | 131 × 56 pixels |
        | Match confidence    | 97.2%           |
      
        The script is at /tmp/search/find_image.py and uses OpenCV's template matching with normalized cross-correlation.
      
      ME> As part of the output create a visualization of the @p-pressbox.gif positioned on top of the original screenshot

      CLAUDE: Done! The visualization shows a green rectangle highlighting the exact location where p-pressbox.gif was found within the screenshot. The match is at coordinates (87, 110) to (218, 166).

      I confirmed the result was correct.
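
      For reference, the core of such a script is only a few lines with OpenCV. A minimal sketch of the approach described above (illustrative, not the actual find_image.py, though the filenames match the transcript):

          import cv2
          import numpy as np
          from PIL import Image

          # cv2.imread can't decode GIFs, so load the asset via PIL first
          tmpl = cv2.cvtColor(
              np.array(Image.open("p-pressbox.gif").convert("RGB")),
              cv2.COLOR_RGB2BGR)
          shot = cv2.imread("screenshot.png")

          # normalized template matching, as in the transcript above
          res = cv2.matchTemplate(shot, tmpl, cv2.TM_CCOEFF_NORMED)
          _, max_val, _, max_loc = cv2.minMaxLoc(res)

          h, w = tmpl.shape[:2]
          print("top-left:", max_loc)
          print("bottom-right:", (max_loc[0] + w, max_loc[1] + h))
          print(f"match confidence: {max_val:.1%}")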

      • thecr0w
        10 hours ago
        Oh what the heck. That worked really well for you. Would you be willing to recreate all the html and push it up to github? I'll drop the repo at the top of the blog post. It would be really cool for me to see this completely done and a great way to finish out the blog post. I obviously couldn't do it.
        • fluidcruft
          8 hours ago
          I got pretty far with this initial prompt:

              spacejam-1996.png is a full screenshot of the Space Jam 1996
              landing page. We want to recreate this landing page as faithfully
              as possible, matching the screenshot exactly.
          
              The asset directory contains images extracted from the original
              site. One of the images is tiled as the background of the landing
              page. The other images should appear once in the screenshot. Use
              these images as assets.
          
              Precise positioning is very important for this project, so you
              should write a script that finds the precise location of each
              asset image in screenshots. Use the tool to detect precise
              positions in the target and fine tune the generated webpage. Be
              sure to generate diagnostic images that can be easily reviewed by
              a human reviewer.
          
              Use python 3.13 and uv to create a venv while working.
          
          I just let Claude (Opus 4.5) do anything it wanted to do as it went.

          At this point all the image assets are pixel perfect but the footer is in the wrong place and I had to hold Claude's hand a bit to get the footer into the approximately correct spot:

              I noticed you were struggling to find the position of the footer
              text. You could try rendering two versions of the generated page, the
              second time with the footer text black. Subtracting those two images
              should give you a clean view of the footer text.
          
          At this point Claude was having trouble because it hadn't got a clean view of the target text location in the original screenshot (it was creating scripts that look at the red channel in the bottom half of the image to pull out the text, but that was also grabbing part of the site map logo. Interestingly it made a comment about this but didn't do anything about it). So I gave it this additional hint:

              You are getting confused with the site map when analyzing the
              original screenshot. You could blank out the positions of assets
              so that they are not interfering with your analysis.
          
          This got the footer in the correct location but the fonts/font sizes etc are not correct yet.
          • thecr0w
            8 hours ago
            > Interestingly it made a comment about this but didn't do anything about it

            Classic.

            This is awesome. Great work. Please follow up again if you happen to nail it.

            • fluidcruft
              6 hours ago
              It's now got everything close after adding this final prompt:

                  We are very close. The footer is positioned in roughly the correct location
                  but the fonts, font sizes, font color and line spacings are all slightly
                  off.
              
              This took quite a while and it built a few more tools to get there. And this was fine from a distance, but it was using a sans-serif when the screenshot has a serif, etc. So I decided to push. From here it got very messy...

              One of the issues is that Claude's text detection was getting tripped up by writing scripts using RGB space instead of something more hue-aware. It knew the text was red but was trying to isolate it by just looking at the red channel. But the grey dots from the background show up bright in the red channel so Claude would think those were center dots between the links that needed to be reproduced in the text. I gave it a hint:

                  I think dots from the background image are causing issues. Are you detecting the text
                  by looking only at the red channel in RGB space? The red channel will be bright on 
                  white pixels in RGB. You could try using hue to separate text from background or use
                  distance from the target RGB value.
              
              Claude decided to switch to HSV space. But it took quite a bit of effort to keep Claude remembering to use HSV because tools it had already written were still RGB and not updated (as were intermediate images that were not updated). Then it would try to step back and get a big picture as a sanity check and "discover" it had missed the dots that are obviously there. And when you would tell it there are no dots, you get the "You're absolutely right! They're vertical bars!" So it was a struggle. This is the closest I got:

              https://imgur.com/a/79Iv1jO

              Again, the top image stuff was done in the first shot with the first prompt. Everything else has been about the footer. Claude has been writing a lot of clever scripts to measure font metrics and pick fonts etc., but it keeps falling over those dots. I could probably get it to work better by adding directives for text handling to CLAUDE.md and nuking the context and some of the scripts it created.

  • rickcarlino
    13 hours ago
    I look forward to an alternative reality where AI vendors race to have the model with the best Space Jam Bench scores.
    • jacquesm
      13 hours ago
      It beats pelicans by a considerable margin.
  • nickdothutton
    14 hours ago
    I have recently been working on something "fun" in the terminal that mingles plain ASCII, ANSI "graphics", actual bitmaps (Sixel), and Nerdfonts in a TUI framework (Charm etc). After a week of smashing Claude's head against a wall, which is better than smashing my own, I've had to significantly alter my hopes and expectations.
  • vmg12
    15 hours ago
    We don't know how to build it anymore
  • manmal
    12 hours ago
    I would put Claude into a loop and let it make screenshots itself, diffing them against the original screenshot, until it has found the right arrangement at the planets‘ starting position (pixel perfect match).
    • fluidcruft
      12 hours ago
      I would try giving it a tool to work with projections and process axis by axis to see if it works better in 1D than 2D. I dunno how clever claude is about signal processing though. There's no noise here so... I mean really it's just template matching without rotation and without noise so... But I doubt claude can do or reason about basic image processing.
    • epgui
      12 hours ago
      At that point you may as well just do the work yourself.
  • syassami
    15 hours ago
    We've lost the capability to build such marvels.

    https://knowyourmeme.com/memes/my-father-in-law-is-a-builder...

    • barfoure
      15 hours ago
      Lost it at wooden structures at the playground.
  • johnfn
    14 hours ago
    Context is king. The problem is that you are the one currently telling Claude how close it is and what to do next. But if you give it the tools to do that itself, it will make a world of difference.

    Give Claude a way to iteratively poke at what it created (such as a Playwright harness), a screenshot of what you want, and maybe a way to take a screenshot in Playwright, and I think you will get much closer. You might even be able to one-shot it.

    I’ve always wondered what would happen if I gave it a screenshot and told it to iterate until the Playwright screenshot matched the mock screenshot, pixel perfect. I imagine it would go nuts, but after a few hours I think it would likely get it. (Either that or minor font discrepancies and rounding errors would cause it to give up…)
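
    A minimal sketch of that harness (sync Playwright plus PIL; the file names, viewport, and crude scoring are assumptions, not a recipe):

        from playwright.sync_api import sync_playwright
        from PIL import Image, ImageChops

        def render_and_score(html_path, reference_path):
            # html_path must be absolute for the file:// URL to resolve
            with sync_playwright() as p:
                browser = p.chromium.launch()
                page = browser.new_page(viewport={"width": 800, "height": 1000})
                page.goto(f"file://{html_path}")
                page.screenshot(path="attempt.png")
                browser.close()
            a = Image.open("attempt.png").convert("RGB")
            b = Image.open(reference_path).convert("RGB").resize(a.size)
            diff = ImageChops.difference(a, b)
            # mean per-pixel difference: a scalar the agent can drive to zero
            hist = diff.convert("L").histogram()
            return sum(i * n for i, n in enumerate(hist)) / (a.size[0] * a.size[1])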

    • alexandre_m
      8 hours ago
      The key is always feedback loop. If you give the AI the ability to verify itself, then it's able to iterate faster. Sure, it may take many iterations, but at least the iteration spans will be shorter than waiting for a human to validate each time.

      I'd be curious to see how Antigravity compares for the same task with its automatic browser agentic validation logic.

  • jacobsenscott
    15 hours ago
    > there's no other way to do it besides getting Claude to recreate it from a screenshot

    And

    > I'm an engineering manager

    I can't tell if this is an intentional or unintentional satire of the current state of AI mandates from management.

    • master_crab
      14 hours ago
      Honest question: does he know about F5? Or was it intentional to use screenshots when source is available?
      • Mashimo
        3 hours ago
        What is F5? Beside refresh in the browser?
    • chilmers
      14 hours ago
      You really can’t tell? Perhaps the bar for AGI is lower than I thought.
    • thecr0w
      15 hours ago
      lololol
    • dmd
      14 hours ago
      i can’t tell if your comment is satire or not
  • bdcravens
    16 hours ago
    A comparison with Codex would be good. I haven't done it with Codex, but when working through problems using ChatGPT, it does a great job when given screenshots.
  • subleq
    11 hours ago
    What if you gave it an image comparison tool that would xor two screenshots to check its work?
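
    Something like this, maybe (untested sketch; assumes both screenshots are the same size, and the file names are placeholders):

      import numpy as np
      from PIL import Image

      a = np.asarray(Image.open("reference.png").convert("RGB"))
      b = np.asarray(Image.open("attempt.png").convert("RGB"))
      xor = np.bitwise_xor(a, b)  # zero wherever the screenshots agree
      print("differing pixels:", int(xor.any(axis=-1).sum()))
      Image.fromarray(xor).save("xor_diff.png")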
    • poly2it
      11 hours ago
      I've tried doing stuff similar to the author, and it generally does not get better after the first attempt. I even tried supplying Claude with a delta view, i.e. the per-pixel difference, along with the reference and the current result, yet it was impossible for it to understand and remember the actual differences.
  • RagnarD
    13 hours ago
    Why not just feed it the actual instructions that create the site - the page source code, the HTML, CSS, JS if any?
    • masswerk
      12 hours ago
      This may be an even greater challenge: analysing a table layout and recreating it in CSS with absolute positioning.
  • sema4hacker
    13 hours ago
    > The total payload is under 200KB.

    Just out of curiosity, how big was what you considered Claude's best attempt to be?

  • micromacrofoot
    16 hours ago
    I wouldn't call it entirely defeated, it got maybe 90% of the way there. Before LLMs you couldn't get 50% of the way there in an automated way.

    > What he produces

    I feel like personifying LLMs more than they currently are is a mistake people make (though humans always do this): they're not entities, and they don't know anything. If you treat them as too human, you might eventually fool yourself a little too much.

    • thecr0w
      15 hours ago
      As a couple of other comments pointed out, it's also not fair to judge Claude based on a one-shot like this. I sort of assume these limitations would remain even if we went back and forth, but to be fair, I didn't try that more than a few times in this investigation. Maybe on try three it totally nails it.
  • isoprophlex
    14 hours ago
    I have a very weird tangential nit to pick: gendering LLMs. I swear I'm not pushing any sort of gender agenda (that discussion can be had anytime, anywhere else in the current age), but to me there is something quintessentially a-gendered about the output of a computer program.

    Calling Claude (or GPT-5 or Gemini or my bash terminal for that matter) a "he" seems absurd to the point of hilarity.

    In my mind, they've always firmly been "it"s.

    • retsibsi
      4 hours ago
      Does it happen much with non-Claude models? If someone genders ChatGPT, it makes me worry that they're taking the character it's playing too literally. But if someone genders Claude, that seems pretty normal, given that it has a man's name.
    • bojan
      14 hours ago
      This felt quirky to me as well, possibly because my native language is strictly gendered.
    • DocTomoe
      13 hours ago
      Hm, Claude is a common male given name, especially in Europe. That plays into it. Also, many people - including me - have personalised their AI chats, given them names, even something resembling a personality (it's easy with prefix prompts). Why others do it, who knows; I do it because I find it a lot less frustrating when ChatGPT fucks up while pretending to be a young adult female klutz.
    • fragmede
      11 hours ago
      Sounds like the setup for a sexist comedian's routine. "Y'know, ChatGPT is totally a woman, because she reminds me of my wife. She thinks she knows everything and is convinced she's right, when she's totally full of shit! And what's the deal with airline food?" Swap the gender depending on your target audience.

      In other languages, chairs have a gender, along with other everyday items like scissors, and it doesn't especially make logical sense, although you can squint and tell a story as to why something has the gender it's been assigned. Thus the gender of AI becomes simply a matter of "that's just how things are".

  • neuroelectron
    12 hours ago
    My web-dev friend saw the original Space Jam site. I asked him what it would cost to build something like that today. He paused and said:

    We can’t. We don’t know how to do it.

  • phplovesong
    3 hours ago
    This basically boils down to AI being unable to "center a div". I see this very often: AI-generated slop has LOTS of "off by one" kinds of bugs.
  • hestefisk
    15 hours ago
    Would be interesting to see whether Gemini could crack this problem.
  • zitterbewegung
    16 hours ago
    In actual workflows someone would accept a very close reproduction and fix the small issues. Generally I use these systems to get close enough to a scaffolding, and/or make small incremental improvements and direct the design.
  • yieldcrv
    10 hours ago
    Skill issue
  • fortyseven
    15 hours ago
    Look at that stupid dog. It's reading a book, but it's really trashy YA. It's not even Shakespeare. Dogs are stupid.
    • satisfice
      14 hours ago
      But the dog's owner keeps saying that it ONLY reads Shakespeare. The dog's owner wants millions of dollars for the dog on that basis.

      I'm not mad at the dogs, I'm mad at the stupid investors and the lying owners.

    • thecr0w
      15 hours ago
      lol, that is fair criticism
  • iwontberude
    15 hours ago
    Apropos given Warner Brothers Discovery just sold to Netflix
  • thenumpaduser
    14 hours ago
    We are actually spoiled at this point.
    • echelon
      14 hours ago
      [flagged]
      • dang
        13 hours ago
        Could you please stop posting this sort of indignant-sensational comment? It's not what this site is for, as you know (or should know).

        https://news.ycombinator.com/newsguidelines.html

        • echelon
          13 hours ago
          Dang, can you explain how this is indignant or sensational?

          Anthropic's leadership and researchers continue to this day to post messages saying engineering will be fully automated. I can go find recent messages on X if you'd like.

          This forum is composed mostly of engineers, who will be the most impacted if their vision of the world pans out.

          YC depends on innovation capital to make money. If the means of production are centralized, how does YC make any money at all from engineers? Such a world will be vertically and horizontally integrated, not democratically spread for others to take advantage of.

          Now I don't think that's what's going to happen, but that's what the messaging has been and continues to be from Anthropic's leadership, researchers, and ICs.

          Why should we support companies like this?

          Shouldn't we advocate for open models, where any market participant can fully utilize and explore the competitive gradients?

          I don't think I'm saying anything controversial here.

          Furthermore, if this pans out like it seems it will - a set of three or four AI hyperscalers - we'll also be in the same situation we have today with the big tech hyperscalers.

          Due to a lax regulatory environment, these companies put a ceiling on startup exits by funding internal competition, buying competitors, etc. I don't see how the situation will improve in an AI world.

          If you're a capitalist, you want competition to be fierce and fair. You don't want concentration of power.

          I can see how an Anthropic IC might not like this post, but this should be fairly reasonable for everyone else who would like to see more distribution of power.

          • dang
            9 hours ago
            "want to automate our jobs away", "Asmodai and team want you working in line at McDonald's", "You're a serf" all obviously fit that description.

            Low-information, high-indignation rhetoric is exactly what we don't want here, so please don't post like this.

  • supern0va
    16 hours ago
    Honestly, if you had showed this article to me even eighteen months ago, I would have been blown away at how good of a job Claude did.

    It's remarkable how high our expectations have been steadily creeping.

    • WhyOhWhyQ
      16 hours ago
      This comment is missing the point. The real goal of all this is not to amaze. It's to create better software. Let's graduate past the amazement phase into the realism phase as soon as possible. What parts of my project is the LLM for? That is the real question worth asking.
      • supern0va
        16 hours ago
        Oh, to be clear, this isn't a criticism. I think it's super cool that we're moving onto the nitpick/refinement phase of this tech. :)
  • stonecharioteer
    17 hours ago
    I'm curious. Did you ask it to use tables and no CSS?

    In 1996, we had only CSS1. Ask it to use tables to do this, perhaps.

    • lagniappe
      17 hours ago
      We actually had a myriad of competing specs. CSS proper wasn't released until December 1996.
      • thecr0w
        17 hours ago
        Any names for the competing specs? Maybe i could try re-prompting with that direction.
        • lagniappe
          16 hours ago
          Models are trained with content scraped from the net, for the most part. The availability of content pertaining to those specs is almost nil, and of no SEO value. Ergo, models for the most part will only have a cursory knowledge of a spec that your browser will never be able to parse because that isn't the spec that won.

          Nonetheless, here is a link to a list of the specs you asked for: https://www.w3.org/Style/History/Overview.en.html

          • boie0025
            14 hours ago
            Thanks for sharing that. I read through a lot of this. Interesting to read those perspectives in the context of today.
            • lagniappe
              14 hours ago
              Much obliged. Have a good weekend. Your new gray hairs are en route :)
        • wanderingstan
          15 hours ago
          There were specs competing for adoption, but only tables (the old way) and CSS were actually adopted by browsers. So there's no point in trying to use some other positioning technique.
    • thecr0w
      17 hours ago
      Yes yes great question!

      I tried your suggestion and also tried giving it various more general versions of the limitations presented by earlier generations.

      Claude's initial instinct was actually to limit itself to less modern web standards.

      Unfortunately, nothing got those planets to be in the right place.

  • vrighter
    5 hours ago
    "there is no other way to preserve it"

    Bullshit. Right click -> view source

    Or just press ctrl+s and the browser will also gather all the assets into a folder for you.

    The arrogance of thinking that the only way you know how is the only way....

    You literally forgot the save feature all browsers have, just because you set out to "solve" this using "AI".

    • OkGoDoIt
      5 hours ago
      The statement is almost certainly made in jest, since it is obviously untrue. Sometimes adding silly artificial constraints can be a fun way to spark creativity.
    • Mashimo
      3 hours ago
      Please read the sentence right after the one you quoted. You removed the context to shit on him :(
  • a-dub
    15 hours ago
    maybe ask it to use 1990s table based layout approaches?
    • al_borland
      15 hours ago
      Interesting. I just looked at the page source and it is in fact using a table layout. I always assumed it was an image map, which I imagine would be even more obscure for the LLM.
  • system2
    9 hours ago
    Use Claude for Python. That's it. Don't push it for the frontend; it won't do well.
  • computersuck
    13 hours ago
    Why not just host a copy from waybackmachine?
  • BiteCode_dev
    12 hours ago
    Tell Claude to put the screenshot up as a centered image, with the body having the starry background on repeat. Then define the links as boxes over each icon with a little old tech trick called an image map.

    Common at the time, before Flash took over.
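
    A sketch of the output, scripted (the coords and hrefs here are invented and would really come from the screenshot):

      # Write out a page that is just the screenshot plus an image map.
      page = """<body background="stars.gif">
      <center>
      <img src="spacejam.png" usemap="#nav" border="0">
      <map name="nav">
        <area shape="circle" coords="210,160,45" href="jump.html">
        <area shape="circle" coords="390,120,40" href="lineup.html">
      </map>
      </center>
      </body>"""

      with open("index.html", "w") as f:
          f.write(page)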

  • Madmallard
    13 hours ago
    I wrote a 20,000-line multiplayer battle-arena game in XNA back in 2015, with manually coded physics (so everything is there in the code), and I have tried several times with Claude, Gemini, Grok, DeepSeek, and GPT to translate it to JavaScript.

    They all fail massively, 100% of the time. Even if I break it down into chunks, once they get to the chunks that matter most (i.e. physics, collision detection and resolution, event handling, and game logic), they all break down horribly, and no amount of prompting back and forth will fix it.

  • tehjoker
    13 hours ago
    Hmm, you note that the problem is that the LLM doesn't have enough image context, but then you zoom the image in more?

    Why not downscale the image and feed it as a second input, so that entire planets fit into a patch, and instruct it to use the downsampled image for coarse coordinate estimation?
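
    E.g. (untested Pillow sketch; the file name and scale factor are assumptions):

      from PIL import Image

      full = Image.open("screenshot.png")
      scale = 4  # small enough that a whole planet fits in one patch
      coarse = full.resize((full.width // scale, full.height // scale), Image.LANCZOS)
      coarse.save("screenshot_coarse.png")
      # Coordinates found on the coarse image map back as (x*scale, y*scale).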

  • DocTomoe
    14 hours ago
    I keep wondering... is this a good benchmark? What is a practical use case for the skills Claude is supposed to demonstrate here? And if the author needs that particular website re-created with pixel-perfect accuracy, wouldn't it be simpler to just do it yourself?

    Sure, you can argue this is some sort of modern Acid test - but the Acid tests checked for real-world use cases. This feels more like 'I have this one very specific request, the machine doesn't perfectly fulfill it, so the machine is at fault.' Complaining from a high pedestal.

    I'm more surprised at how close Claude got in its reimagined Space Jam site.

  • th0ma5
    15 hours ago
    I personally don't understand why asking these things to do things we know they can't do is supposed to be productive. Maybe for getting around restrictions, or fuzzing... I don't see it as an effective benchmark unless it can link directly to the ways the models are being improved. To look at random results that are sometimes valid and think that more iterations of randomness will eventually give way to control is a maddening perspective to me, but perhaps I need better language to describe this.
    • thecr0w
      14 hours ago
      I think this is a reasonable take. For me, I like to investigate limitations like this in order to understand where the boundaries are. Claude isn't impossibly bad at analyzing images; it's just pixel-perfect corrections that seem to be a limitation. Maybe for some folks it's enough to just read that, but I like to feel like I have some good experiential knowledge about the limitations that I can keep in my brain and apply appropriately in the future.
  • Tokkemon
    11 hours ago
    Why do I feel like the old man yelling at clouds that programmers refuse to use their brains anymore?
    • mrwaffle
      7 hours ago
      It's funny: the joke that good programmers are lazy has always held, until now.
  • computersuck
    13 hours ago
    this is just AI brainrot disease

    Help, I can't recreate a website with AI! There's no other way, no way I could fix up some HTML code! Believe me, I'm an engineering manager with a computer science degree!

    Absolutely disgusting.

    • derektank
      13 hours ago
      I’m pretty confident the author was being sarcastic
  • animanoir
    7 hours ago
    [dead]
  • 486sx33
    15 hours ago
    [dead]
  • tyiz
    7 hours ago
    [dead]
  • koreanguy
    16 hours ago
    [dead]
  • djaowjxj
    11 hours ago
    [flagged]
  • throwaway314155
    17 hours ago
    Somehow I suspect Claude Code (in an interactive session with trial, error, probing, critiquing, perusing, and all the other benefits you get) would do better. This example seems to assume Claude can do things in "one shot" (even the later attempts all seem to conceal information like it's a homework assignment).

    That's not how to successfully use LLMs for coding, in my experience. It is, however, perhaps a good demonstration of Claude's poor spatial reasoning skills. Another good demonstration of this is twitch.tv/ClaudePlaysPokemon, where Claude has been failing to beat Pokémon for months now.

    • CharlesW
      16 hours ago
      Using https://github.com/anthropics/claude-code/tree/main/plugins/... with style-supporting instructions and context would've improved the outcome as well.
    • thecr0w
      16 hours ago
      Not a homework assignment, and no deliberate attempt to conceal information, just very long and repetitive logs. A lot of the same "insights" so I just didn't provide them here.

      > That's not how to successfully use LLM's for coding in my experience.

      Yeah, agreed. I think I was just a little surprised it couldn't one-shot this, given the simplicity.

  • dreadnip
    16 hours ago
    Why involve an LLM in this? Just download the site?
  • johncoatesdev
    16 hours ago
    You last-minute cancelled coffee with your friends to work on this? I'm not sure how I would feel if a friend did that to me.
    • fishtoaster
      15 hours ago
      Based on the later life updates, I suspect this was being humorous.

      > After these zoom attempts, I didn't have any new moves left. I was being evicted. The bank repo'd my car. So I wrapped it there.

  • docheinestages
    15 hours ago
    > Note: please help, because I'd like to preserve this website forever and there's no other way to do it besides getting Claude to recreate it from a screenshot.

    Why not use wget to mirror the website? Unless you're being sarcastic.

    $ wget --mirror --convert-links --adjust-extension --page-requisites --no-parent http://example.org

    Source: https://superuser.com/questions/970323/using-wget-to-copy-we...

    • thecr0w
      15 hours ago
      The stuff about not being able to download it is a bit of a joke, and I don't think the tone landed with everybody, haha. This was just an experiment to see if Claude could recreate a simple website from a screenshot; of course, to your point, you could download it if you wanted.
    • malfist
      15 hours ago
      Because that wasn't the goal of this exercise