I Don't Like Magic

(adactio.com)

84 points | by edent 3 days ago

20 comments

  • vladms
    1 hour ago
    The advantage of frameworks is to have a "common language" for achieving goals together with a team. A good framework hides some of the stupid mistakes you would make if you tried to develop that "language" from scratch.

    When you do a project from scratch, if you work on it long enough, you end up wishing you had started differently, and you refactor pieces of it. While using a framework I sometimes have moments where I suddenly get the underlying reasons and advantages of doing things a certain way, but that comes once you become more of a power user, not at the start, and only if you put in the effort to question things. And other times the framework is just bad and you have to switch...

    • sodapopcan
      1 hour ago
      The problem with this is that it means you have to read guides, which it seems no one wants to do. It drives me nuts.

      But ya, I hate when people say they don't like "magic." It's not magic, it's programming.

      • coldtea
        1 hour ago
        Most, however, are surely capable of understanding a simple metaphor, in which "magic" in the context of coding means "behavior occurring implicitly/as a black box".

        Yes, it's not magic as in Merlin or Penn and Teller. But it is magic in the aforementioned sense, which is also what people complain about.

      • monkpit
        30 minutes ago
        Magic refers to specific techniques used in programming, and people generally dislike these techniques once they have formed any opinion.
      • WJW
        1 hour ago
        Oh no! Reading!

        Sorry for the snark but why is this such a problem?

        • fragmede
          25 minutes ago
          Because people won't do it.
  • overgard
    13 minutes ago
    React is a weird beast. I've been using it for years. I think I like it? I use it for new projects too, probably somewhat as a matter of familiarity. I'm not entirely convinced it's a great way to code, though.

    My experience with it is that functional components always grow and end up with a lot of useEffect calls. Those useEffects make components extremely brittle and hard to reason about. Essentially it's very hard to know what parts of your code are going to run, and when.
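
    To illustrate the shape of the problem, here's a minimal sketch (the component and the fetchOrder/convert helpers are hypothetical): two effects whose dependencies feed each other, so predicting what runs after any given render means tracing the whole chain.

        import { useState, useEffect } from "react";

        function OrderDetails({ orderId, currency }) {
          const [order, setOrder] = useState(null);
          const [total, setTotal] = useState(0);

          // Runs after the first render and whenever orderId changes:
          // fetches the order (fetchOrder is a hypothetical helper).
          useEffect(() => {
            let cancelled = false;
            fetchOrder(orderId).then((o) => { if (!cancelled) setOrder(o); });
            return () => { cancelled = true; };
          }, [orderId]);

          // Runs whenever order *or* currency changes: recomputes the total,
          // which causes another render, which can re-trigger other effects.
          useEffect(() => {
            if (order) setTotal(convert(order.total, currency));
          }, [order, currency]);

          return <p>{total}</p>;
        }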

    I'm sure someone will argue: just refactor your components to be small, avoid useEffect as much as possible. I try! But I can't control other engineers. And in my experience, nobody wants to refactor large components, because they're too hard to reason about! And the automated IDE tools aren't really built to handle refactoring these things well, so either you ask AI to do it or it's kind of clunky by hand. (WebStorm is better than VSCode at this, but they're both not great.)

    The other big problem with it is that it's just not very efficient. I don't know why people think the virtual DOM is a performance boost. It's a performance hack to get around this being a really inefficient model. Yes, I know computers are fast, but they'd be a lot faster if we were writing with better abstractions.

  • socalgal2
    1 hour ago
    You could walk through the framework so that you then understand it. There are several "let's create React from scratch" articles:

    https://pomb.us/build-your-own-react/

    Certain frameworks were so useful they arguably caused an explosion in productivity. Rails seems like one. React might be too.

    • xp84
      20 minutes ago
      Thanks for this! I've mostly avoided getting too into React and its ilk, mainly because I hate how bloated the actual code generated by that kind of application tends to be. But also I am enjoying going through this. If I can complete it, I think I will be more informed about how React really works.
    • yellowapple
      59 minutes ago
      Thanks to that page letting me see how many dozens of lines of code React needs to do the equivalent of

          const element = document.createElement("h1");
          element.innerHTML = "Hello";
          element.setAttribute("title", "foo");
          const container = document.getElementById("root");
          container.appendChild(element);
      
      I now have even less interest in ever touching a React codebase, and will henceforth consider the usage of React a code smell at best.
      • htnthrow11220
        49 minutes ago
        To be fair, if all you need is to append child elements to the DOM, you don't need React.

        Maybe nobody needs React, I’m not a fan. But a trivial stateless injection of DOM content is no argument at all.

      • Mogzol
        19 minutes ago
        The "magic" of React though is in its name, it's reactive. If all you're doing is creating static elements that don't need to react to changes in state then yeah, React is overkill. But when you have complex state and need all your elements to update as that state changes, then the benefits of React (or similar frameworks) become more apparent. Of course it's all still possible in vanilla JS, but it starts to become a mess of event handlers and DOM updates and the React equivalent starts to look a lot more appealing.
      • madeofpalk
        49 minutes ago
        All of that is the JavaScript equivalent of

            <h1 title=foo>Hello</h1>
        
        I have even less interest in touching any of your codebases!
        • yellowapple
          13 minutes ago
          Well I'd hesitate to touch any of my codebases, too, so that's fair :)
      • fragmede
        25 minutes ago
        Given the verbosity of Java's hello world vs Python's, you'd walk away with the conclusion that Java should never be used for anything, but that would be a mistake.
        • ZeWaka
          20 minutes ago
          Clearly Java only belongs on things like credit cards and Minecraft /s
  • Klonoar
    35 minutes ago
    I feel like a lot of the comments here are from people who either weren't around for, or didn't grow up in, the era where adactio and the wider web dev scene (Zeldman, etc) were the driving force of things on the web.

    If you've only been in a world with React & co, you will probably have a more difficult time understanding the point they're contrasting against.

    (I'm not even saying that they're right)

    • insin
      26 minutes ago
      I was around for that era (I may have made an involuntary noise when Zeldman once posted something nice about a thing I made), but being averse to "abstraction in general" is a completely alien concept to me as a software developer.
  • SirMaster
    1 hour ago
    So you don’t like compilers? Or do you really fully understand how they work? How they transform your logic and your asynchronous code into machine code, etc.?
    • mgaunard
      1 hour ago
      I think most traditional software engineers do indeed understand what transformations compilers do.
      • clnhlzmn
        10 minutes ago
        I think you're mistaken on that. Maybe me and the engineers I know are below average on this, but even our combined knowledge of the kinds of things _real_ compilers get up to probably only scratches the surface. Don't get me wrong, I know what compilers do _in principle_. Hell, I've even built a toy compiler or two. But the compilers I use for work? I just trust that they know what they're doing.
      • advael
        41 minutes ago
        Yea, the pervasiveness of this analogy is annoying because it's wrong (because a compiler is deterministic and tends to be a single point of trust, rather than trusting a crowdsourced package manager or a fuzzy machine learning model trained on a dubiously-curated sampling of what is often the entire internet), but it's hilarious because it's a bunch of programmers telling on themselves. You can know, at least at a high level of abstraction, what a compiler is doing with some basic googling, and a deeper understanding is a fairly common requirement in computer science education at the undergrad level

        Don't get me wrong, I don't think you need or should need a degree to program, but if your standard of what abstractions you should trust is "all of them, it's perfectly fine to use a bunch of random stuff from anywhere that you haven't the first clue how it works or who made it" then I don't trust you to build stuff for me

      • UncleMeat
        43 minutes ago
        I'd wager a lot of money that the huge majority of software engineers are not aware of almost any transformations that an optimizing compiler does. Especially after decades of growth in languages where most of the optimization is done in JIT rather than a traditional compilation process.

        The big thing here is that the transformations maintain the clearly and rigorously defined semantics such that even if an engineer can't say precisely what code is being emitted, they can say with total confidence what the output of that code will be.

        • fragmede
          22 minutes ago
          They can't! They can fairly safely assume that the binary corresponds correctly to the C++ they've written, but they can't actually claim anything about the output other than "it compiles".
      • fragmede
        28 minutes ago
        Not in any great detail. Gold vs ld isn't something I bet most programmers know rigorously, and that's fine! Compilers aren't deterministic, but we don't care because they're deterministic enough. Debian started a reproducible computing project in 2013 and, thirteen years later, we can maybe have that happen if you set everything up juuuuuust right.
      • mberning
        34 minutes ago
        They also realize that adding two integers in a higher level language could look quite different when compiled depending on the target hardware, but they still understand what is happening. Contrast that with your average llm user asking it to write a parser or http client from scratch. They have no idea how either of those things work nor do they have any chance at all of constructing one on their own.
    • sigbottle
      52 minutes ago
      [Autovectorization is not a programming model](https://pharr.org/matt/blog/2018/04/18/ispc-origins).

      Sure, obviously, we will not understand every single little thing down to the tiniest atoms of our universe. There are philosophical assumptions underlying everything and you can question them (quite validly!) if you so please.

      However, there are plenty of intermediate mental models (or explicit contracts, like assembly, ELF, etc.) to open up, both in "engineering" land and "theory" land, if you so choose.

      Part of good engineering is also deciding exactly where the boundary between "don't cares" and "cares" is, and how you allow people to easily navigate the abstraction hierarchy.

      That is my impression of what people mean when they don't like "magic".

      • mananaysiempre
        8 minutes ago
        > Then, when it fails [...], you can either poke it in the right ways or change your program in the right ways so that it works for you again. This is a horrible way to program; it’s all alchemy and guesswork and you need to become deeply specialized about the nuances of a single [...] implementation

        In that post, the blanks reference a compiler’s autovectorizer. But you know what they could also reference? An aggressively opaque and undocumented, very complex CPU or GPU microarchitecture. (Cf. https://purplesyringa.moe/blog/why-performance-optimization-....)

    • eleventyseven
      1 hour ago
      At least compilers are deterministic
  • thestackfox
    42 minutes ago
    I get the sentiment, but "I don’t like magic" feels like a luxury belief.

    Electricity is magic. TCP is magic. Browsers are hall-of-mirrors magic. You’ll never understand 1% of what Chromium does, and yet we all ship code on top of it every day without reading the source.

    Drawing the line at React or LLMs feels arbitrary. The world keeps moving up the abstraction ladder because that’s how progress works; we stand on layers we don’t fully understand so we can build the next ones. And yes, LLM outputs are probabilistic, but that's how random CSS rendering bugs felt to me before React took care of them.

    The cost isn’t magic; the cost is using magic you don’t document or operationalize.

  • noelwelsh
    2 hours ago
    If you have this attitude I hope you write everything in assembly. Except assembly is compiled into micro-ops, so hopefully you avoid that by using an 8080 (according to a quick search, the last Intel CPU to not have micro-ops.)

    In other words, why is one particular abstraction (e.g. JavaScript, or the web browser) ok, but another abstraction (e.g. React) not? This attitude doesn't make sense to me.

    • pibaker
      5 minutes ago
      > In other words, why is one particular abstraction (e.g. JavaScript, or the web browser) ok, but another abstraction (e.g. React) not? This attitude doesn't make sense to me.

      Most moral panic over the Evil Big Frameworks is a symptom of mental illness — usually obsessive-compulsive disorder. It need not make sense. We don't negotiate with mental illnesses.

    • kens
      1 hour ago
      Did someone ask about Intel processor history? :-) The Intel 8080 (1974) didn't use microcode, but there were many later processors that didn't use microcode either. For instance, the 8085 (1976). Intel's microcontrollers, such as the 8051 (1980), didn't use microcode either. The RISC i860 (1989) didn't use microcode (I assume). The completely unrelated i960 (1988) didn't use microcode in the base version, but the floating-point version used microcode for the math, and the bonkers MX version used microcode to implement objects, capabilities, and garbage collection. The RISC StrongARM (1997) presumably didn't use microcode.

      As far as x86, the 8086 (1978) through the Pentium (1993) used microcode. The Pentium Pro (1995) introduced an out-of-order, speculative architecture with micro-ops instead of microcode. Micro-ops are kind of like microcode, but different. With microcode, the CPU executes an instruction by sequentially running a microcode routine, made up of strange micro-instructions. With micro-ops, an instruction is broken up into "RISC-like" micro-ops, which are tossed into the out-of-order engine, which runs the micro-ops in whatever order it wants, sorting things out at the end so you get the right answer. Thus, micro-ops provide a whole new layer of abstraction, since you don't know what the processor is doing.

      My personal view is that if you're running C code on a non-superscalar processor, the abstractions are fairly transparent; the CPU is doing what you tell it to. But once you get to C++ or a processor with speculative execution, one loses sight of what's really going on under the abstractions.

    • sevensor
      38 minutes ago
      A good abstraction relieves you of concern for the particulars it abstracts away. A bad abstraction hides the particulars until the worst possible moment, at which point everything spills out in a messy heap and you have to confront all the details. Bad abstractions existed long before React and long before LLMs.
    • kalterdev
      2 hours ago
      You can learn JavaScript and code for life. You can’t learn React and code for life.

      Yeah, JavaScript is an illusion (to be exact, a concept). But it’s the one that we accept as fundamental. People need fundamentals to rely upon.

      • satvikpendem
        1 hour ago
        > You can’t learn React and code for life.

        Sure you can, why can't you? Even if it's deprecated in 20 years, you can still run it and use it, fork it even to expand upon it, because it's still JS at the end of the day, which based on your earlier statement you can code for life with.

    • pessimizer
      2 hours ago
      Are you seriously saying that you can't understand the concept of different abstractions having different levels of usefulness? That's the law of averages taken to cosmic proportions.

      If this is true, why have more than one abstraction?

      • selridge
        1 hour ago
        I just think everyone who says they don't like magic should be forced to give an extemporaneous explanation of paging.
  • tokenless
    2 hours ago
    The AI-pilled view is that coding is knitting and AI is an automated loom.

    But it is not quite the case. The hand-coded solution may be quicker than AI at reaching the business goal.

    If there is an elegant, crafted solution that stays in prod for 10 years and just works, it is better than an initially quicker AI-coded solution that needs more maintenance and demands a team to maintain it.

    If AI (and especially bad operators of AI) codes you a city tower when you need a shed, the tower works and looks great, but now you have $500k/year in maintenance costs.

    • james_marks
      1 hour ago
      Doesn’t the loom metaphor still hold? A badly operated loom will create bad fabric the same way badly used AI will make unsafe, unscalable programs.

      Anything that can be automated can be automated poorly, but we accept that trained operators can use looms effectively.

      • tokenless
        50 minutes ago
        The difference is the loom is performing linear work.

        Programming is famously non-linear: small teams make billion-dollar companies due to tech choices that avoid needing to scale up people.

        Yes you need marketing, strategy, investment, sales etc. But on the engineering side, good choices mean big savings and scalability with few people.

        The loom doesn't have these choices. There is no making a billion t-shirts a day with a well-configured loom.

        Now AI might end up either side of this. It may be too sloppy to compete with very smart engineers, or it may become so good that like chess no one can beat it. At that point let it do everything and run the company.

      • sixtyj
        1 hour ago
        Loom is a good metaphor.
  • sigbottle
    54 minutes ago
    > And so now we have these “magic words” in our codebases. Spells, essentially. Spells that work sometimes. Spells that we cast with no practical way to measure their effectiveness. They are prayers as much as they are instructions.

    Autovectorization is not a programming model. This still rings true day after day.

  • wa008
    2 hours ago
    What I cannot build, I do not understand.
    • AlotOfReading
      1 hour ago
      I'm not sure this is a useful way to approach "magic". I don't think I can build a production compiler or linker. It's fair to say that I don't fully understand them either. Yet, I don't need a "full" understanding to do useful things with them and contribute back upstream.

      LLMs are vastly more complicated and unlike compilers we didn't get a long, slow ramp-up in complexity, but it seems possible we'll eventually develop better intuition and rules of thumb to separate appropriate usage from inappropriate.

  • vandahm
    1 hour ago
    I've used React on projects and understand its usefulness, but also React has killed my love of frontend development. And now that everyone is using it to build huge, clunky SPAs instead of normal websites that just work, React has all but killed my love of using the web, too.
  • hyperhopper
    1 hour ago
    This person's distinction between "library" and "framework" is frankly insane.

    React, which is just functions to make DOM trees and render them, is a framework? There is a reason there are hundreds of actual frameworks that exist to add structure around using these functions.
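
    For what it's worth, the core API really is just function calls; a minimal sketch without JSX:

        import React from "react";
        import { createRoot } from "react-dom/client";

        // Roughly the equivalent of <h1 title="foo">Hello</h1>, built and
        // rendered with plain function calls.
        const element = React.createElement("h1", { title: "foo" }, "Hello");
        createRoot(document.getElementById("root")).render(element);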

    At this point, he should stop using any high-level language! Java/Python are just big frameworks calling his bytecode; what magical frameworks!

  • xantronix
    2 hours ago
    Predicated upon the definition of "magic" provided in the article: What is it, if anything, about magic that draws people to it? Is there a process wherein people build tolerance and acceptance to opaque abstractions through learning? Or, is it acceptance that "this is the way things are done", upheld by cargo cult development, tutorials, examples, and the like, for the sake of commercial expediency? I can certainly understand that seldom is time afforded to building a deep understanding of the intent, purpose, and effect of magic abstractions under such conditions.

    Granted, there are limits to how deep one should need to go in understanding their ecosystem of abstractions to produce meaningful work on a viable timescale. What effect does it have on the trade to, on the other hand, have no limit to the upward growth of the stack of tomes of magical frameworks and abstractions?

    • pdonis
      1 hour ago
      > What is it, if anything, about magic that draws people to it?

      Simple: if it's magic, you don't have to do the hard work of understanding how it works in order to use it. Just use the right incantation and you're done. Sounds great as long as you don't think about the fact that not understanding how it works is actually a bug, not a feature.

      • wvenable
        1 hour ago
        > Sounds great as long as you don't think about the fact that not understanding how it works is actually a bug, not a feature.

        That's such a wrong way of thinking. There is simply a limit on how much a single person can know and understand. You have to specialize, otherwise you won't make any progress. Not having to understand how everything works is a feature, not a bug.

        You not having to know the chemical structure of gasoline in order to drive to work in the morning is a good thing.

        • xantronix
          41 minutes ago
          But having to know how a specific ORM composes queries targeting a specific database backend, however, is where the magic falls apart; I would rather go without than deal with such pitfalls. If I were to hazard a guess, things like this are where the author and I are aligned.
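
          The classic place this bites is the N+1 query pattern; a hedged sketch with a hypothetical ORM, where innocent-looking property access quietly issues one extra query per loop iteration:

              // Hypothetical ORM models, for illustration only.
              async function printOrders() {
                const orders = await Order.findAll();          // 1 query for all orders
                for (const order of orders) {
                  const customer = await order.getCustomer();  // 1 more query per order: the "N"
                  console.log(customer.name, order.total);
                }
              }
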
          • wvenable
            34 minutes ago
            > to know how a specific ORM composes queries targeting a specific database backend, however, is where the magic falls apart

            I've never found this to be a particular problem. Most ORMs are actually quite predictable. I've seen how my ORM constructs queries for my database and it's pretty ugly, but it's actually also totally good. I've never really gained any insight that way.

            But the sheer amount of time and effort I've saved by using an ORM to basically do the same boring load/save pattern over and over is immeasurable. I can't even imagine going back and doing that manually -- what a waste of time, effort, and experience that would be.

      • socalgal2
        1 hour ago
        Or it's just a specialization choice. Taxi drivers don't care how a car works; they hire a mechanic for that. Doctors don't care how a CAT scan works; they just care that it provides the data they need in a useful format.
        • xantronix
          35 minutes ago
          This analogy baffles me. I don't think anybody here is making the argument that we must know how all of our tools work at an infinitesimally fundamental level. Rather, I think software is an endless playground and refuge for people who like to make their own flavours of magic for the sake of magic.
        • c22
          1 hour ago
          I like the definition of magic I learned from Penn Jillette (paraphrased): magic is just someone spending way more resources to produce the result than you expected.
      • farley13
        1 hour ago
        I know magic has a nice Arthur C. Clarke ring to it, but I think arguing about magic obscures the actual argument.

        It's about layers of abstraction, the need to understand them, modify them, know what is leaking etc.

        I think people sometimes substitute "magic" when they mean "I suddenly need to learn a lower layer I assumed was much less complex". I don't think anyone is calling the Linux kernel magic. Everyone assumes it's complex.

        Another use of "magic" is when you find yourself debugging a lower layer because the abstraction breaks in some way. If it's highly abstracted and the inner loop gives you few starting points ( while (???) pickupWorkFromAnyWhere() )). It can feel kafkaesque.

        I sleep just fine not knowing how much software I use exactly works. It's the layers closest to application code that I wish were more friendly to the casual debugger.

        • xantronix
          42 minutes ago
          To me, it's much less of an issue when it works, obviously, but far more of a headache when I need to research the "magic" in order to make something work that would be fairly trivial to implement with fewer layers of abstraction.
    • 3form
      2 hours ago
      I think it's "this is the way things are done in order to achieve X". Where people don't question neither whether this is the only way to achieve X, nor whether they do really care about X in the first place.

      It seems common with regard to dependency injection frameworks. Do you need them for your code to be testable? No, even if it helps. Do you need them for your code to be modular? You don't, and do you really need modularity in your project? Reusability? Loose coupling?
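
      For example, plain parameter injection already gets you testable code with no framework at all (a sketch, all names made up):

          // Production code takes its dependency as an argument...
          function makeInvoiceService(paymentGateway) {
            return {
              charge: (invoice) => paymentGateway.charge(invoice.customerId, invoice.total),
            };
          }

          // ...so a test can hand in a fake instead of configuring a container.
          const calls = [];
          const fakeGateway = { charge: (id, amount) => calls.push([id, amount]) };
          const service = makeInvoiceService(fakeGateway);
          service.charge({ customerId: "c1", total: 42 });
          console.assert(calls.length === 1);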

  • sodapopcan
    1 hour ago
    If you are the only person who ever touches your code, fine, otherwise I despise this attitude and would insta-reject any candidate who said this. In a team setting, "I don't like magic" and "I don't want to learn a framework" means: "I want you to learn my bespoke framework I'm inevitably going to write."
  • lo_zamoyski
    5 minutes ago
    This reads like a transcript of a therapy session. He never gives any real reasons. It's mostly a collection of assertions. This guy must never have worked on anything substantial. He also must underestimate the difficulty of writing software as well as his reliance on the work of others.

    > I don’t like using code that I haven’t written and understood myself.

    Why stop with code? Why not refine beach sand to grow your own silicon crystal to make your own processor wafers?

    Division of labor is unavoidable. An individual human being cannot accomplish all that much.

    > If you’re not writing in binary, you don’t get to complain about an extra layer of abstraction making you uncomfortable.

    This already demonstrates a common misconception in the field. The physical computer is incidental to computer science and software engineering per se. It is an important incidental tool, but conceptually, it is incidental. Binary is not some "base reality" for computation, nor do physical computers even realize binary in any objective sense. Abstractions are not over something "lower level" and "more real". They are the language of the domain, and we may simulate them using other languages. In this case, physical computer architectures provide assembly languages as languages in which we may simulate our abstractions.

    Heck, even physical hardware like "processors" are abstractions; you cannot really say that a particular physical unit is objectively a processor. The physical unit simulates a processor model, its operations correspond to an abstract model, but it is not identical with the model.

    > My control freakery is not typical. It’s also not a very commercial or pragmatic attitude.

    No kidding. It's irrational. It's one thing to wish to implement some range of technology yourself to get a better understanding of the governing principles, but it's another thing to suffer from a weird compulsion to want to implement everything yourself in practice...which he obviously isn't doing.

    > Abstractions often really do speed up production, but you pay the price in maintenance later on.

    What? I don't know what this means. Good abstractions allow us to better maintain code. Maintaining something that hasn't been structured into appropriate abstractions is a nightmare.

  • cbeach
    56 minutes ago
    > I’ve always avoided client-side React because of its direct harm to end users (over-engineered bloated sites that take way longer to load than they need to).

    A couple of megabytes of JavaScript is not the "big bloated" application in 2026 that it was in 1990.

    Most of us have phones in our pockets capable of 500Mbps.

    The payload of a single-page app is trivial compared to the bandwidth available to our devices.

    I'd much rather optimise for engineer ergonomics than shave a couple of milliseconds off the initial page load.

    • nosefurhairdo
      40 minutes ago
      React + ReactDOM adds ~50kb to a production bundle, not even close to a couple of mbs. React with any popular routing library also makes it trivial to lazy load js per route, so even with a huge application your initial js payload stays small. I ship React apps with a total prod bundle size of ~5mb, but on initial load only require ~100kb.

      The idea that React is inherently slow is totally ignorant. I'm sympathetic to the argument that many apps built with React are slow (though I've not seen data to back this up), or that you as a developer don't enjoy writing React, but it's a perfectly fine choice for writing performant web UI if you're even remotely competent at frontend development.
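
      As a concrete sketch of the lazy-loading point (the SettingsPage path and component are made up): React.lazy plus Suspense splits a route's code into its own chunk, which is only downloaded when the component first renders.

          import { lazy, Suspense } from "react";

          // The settings bundle is split out and fetched on first render.
          const SettingsPage = lazy(() => import("./SettingsPage"));

          function App() {
            return (
              <Suspense fallback={<p>Loading…</p>}>
                <SettingsPage />
              </Suspense>
            );
          }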

  • skydhash
    2 hours ago
    I also don't like magic, but React is the wrong definition of magic in this case. It's an abstraction layer for UI, and one that is pretty simple when you think about it conceptually. The complexity comes from third-party libraries built on top of it, which propose complex machinery instead of simple solutions. Then you have a culture of complexity around simple technology.

    But it does seem that this culture of complexity is more pervasive lately. Things that could have been a simple gist or a config change become a whole program that pulls in tens of dependencies from who knows who.

  • huflungdung
    2 hours ago
    [dead]