URI:
        _______               __                   _______
       |   |   |.---.-..----.|  |--..-----..----. |    |  |.-----..--.--.--..-----.
       |       ||  _  ||  __||    < |  -__||   _| |       ||  -__||  |  |  ||__ --|
       |___|___||___._||____||__|__||_____||__|   |__|____||_____||________||_____|
                                                             on Gopher (inofficial)
  HTML Visit Hacker News on the Web
       
       
       COMMENT PAGE FOR:
  HTML   I Don't Like Magic
       
       
        atoav wrote 10 hours 37 min ago:
        This kind of "magical UX design" is the most common source of me
        cursing equipment I interact with. A good example
        
        When I design hardware interfaces one of my main rules is that user
        agency should be maximized where needed. That requires the manufacturer
        to trust (or better: ensure) that the user has a meaningful mental
        model of the device they are using. Your interface then has to honor
        this mental model at all times.
        
         So build a hammer and show the user how to use it effectively;
         don't build a SmartNailPuncher3000 that may or may not work
         depending on whether the user is holding it right or has
         accidentally selected the wrong mode by touching the wrong part.
       
        nickm12 wrote 13 hours 36 min ago:
         This is such a strange take. The definition of "magic" in this
         post is apparently "other people's code", and it even admits that
         no practical program can avoid depending on other people's code.
         I think what the author is really saying is that they like to
         minimize dependencies and abstractions, particularly in web
         client development, and then they throw in a connection to coding
         assistants.
        
         I don't see it: neither the notion that other people's code is to
         be avoided for its own sake, nor that depending on LLM-generated
         code is somehow analogous to depending on React.
       
        dnautics wrote 19 hours 20 min ago:
         the advantage of frameworks is that there are about 20-ish
         security/critical usage considerations, of which you will
         remember about 5.  if you don't use a framework, you are much
         more likely to get screwed.  you should use a framework when
         there's just shit you don't think of that could bite you in the
         ass[0].  for everything else, use libraries.
         
         [0] this includes, for example, int main(), which is a hook for a
         framework.  c does a bunch of stuff in __start (e.g. on linux; i
         don't know what the entrypoint is in other languages) that you
         honestly don't want to do every. single. time. for every single OS
       
        kosmotaur wrote 20 hours 30 min ago:
        > I get that. But I still draw a line. When it comes to front-end
        development, that line is for me to stay as close as I can to raw HTML,
        CSS, and JavaScript. After all, that’s what users are going to get in
        their browsers.
        
        No it’s not. They will get shown a collection of pixels, a bunch of
        which will occupy coordinates (in terms of an abstraction that holds
        the following promise) such that if the mouse cursor (which is yet
        another abstraction) matches those coordinates, a routine derived from
        a script language (give me an A!) will be executed mutating the DOM
        (give me a B!) which is built on top of more abstractions than it would
        take to give me the remaining S.T.R.A.C.T.I.O.N. three times over.
        Three might be incorrect, just trying to abstract away so that I
        don’t end up dumping every book on computers in this comment.
        
        Ignorance at a not so fine level. Reads like “I’ve established
        myself confidently in the R.A.C. band, therefore anything that comes
        after is yucky yucky”.
       
          dang wrote 19 hours 50 min ago:
          Please make your substantive points without calling names, shallow
          dismissals, or swipes.
          
          This is in the site guidelines: [1] .
          
  HTML    [1]: https://news.ycombinator.com/newsguidelines.html
       
        lo_zamoyski wrote 20 hours 50 min ago:
        This reads like a transcript of a therapy session. He never gives any
        real reasons. It's mostly a collection of assertions. This guy must
        never have worked on anything substantial. He also must underestimate
        the difficulty of writing software as well as his reliance on the work
        of others.
        
        > I don’t like using code that I haven’t written and understood
        myself.
        
        Why stop with code? Why not refine beach sand to grow your own silicon
        crystal to make your own processor wafers?
        
        Division of labor is unavoidable. An individual human being cannot
        accomplish all that much.
        
        > If you’re not writing in binary, you don’t get to complain about
        an extra layer of abstraction making you uncomfortable.
        
        This already demonstrates a common misconception in the field. The
        physical computer is incidental to computer science and software
        engineering per se. It is an important incidental tool, but
        conceptually, it is incidental. Binary is not some "base reality" for
        computation, nor do physical computers even realize binary in any
        objective sense. Abstractions are not over something "lower level" and
        "more real". They are the language of the domain, and we may simulate
        them using other languages. In this case, physical computer
        architectures provide assembly languages as languages in which we may
        simulate our abstractions.
        
         Heck, even physical hardware like "processors" is an abstraction;
         you cannot really say that a particular physical unit objectively
         is a processor. The physical unit simulates a processor model;
         its operations correspond to an abstract model, but it is not
         identical with the model.
        
        > My control freakery is not typical. It’s also not a very commercial
        or pragmatic attitude.
        
        No kidding. It's irrational. It's one thing to wish to implement some
        range of technology yourself to get a better understanding of the
        governing principles, but it's another thing to suffer from a weird
        compulsion to want to implement everything yourself in practice...which
        he obviously isn't doing.
        
        >  Abstractions often really do speed up production, but you pay the
        price in maintenance later on.
        
        What? I don't know what this means. Good abstractions allow us to
        better maintain code. Maintaining something that hasn't been structured
        into appropriate abstractions is a nightmare.
       
          ompogUe wrote 18 hours 24 min ago:
          >> Abstractions often really do speed up production, but you pay the
          price in maintenance later on.
          
          > What? I don't know what this means. Good abstractions allow us to
          better maintain code. Maintaining something that hasn't been
          structured into appropriate abstractions is a nightmare.
          
          100% agree with this. Name it well, maintain it in one place ...
          profit.
          
           What catches you is not abstracting up front: countless times I
           have been asked to add feature x as a one-off/PoC, which
           sometimes means it doesn't get the full TDD/IoC/feature-flag
           treatment (which isn't always available depending on the
           client's stack).
           
           Then, months later, I get asked to create an entire application
           or feature set on top of that. Abstracting that one-off up into
           a method/function/class tags and bags it: it is now named and
           better documented, visible in the IDE, callable from anywhere,
           and can be looped over if need be.
          
          There is obviously a limit to where the abstraction juice isn't worth
          the squeeze, but otherwise, it just adds superpowers as time goes on.
       
          antonvs wrote 20 hours 4 min ago:
          It all essentially amounts to saying, “I don’t have the
          personality to competently use technology.”
       
        overgard wrote 20 hours 58 min ago:
        React is a weird beast. I've been using it for years. I think I like
        it? I use it for new projects too, probably somewhat as a matter of
        familiarity. I'm not entirely convinced it's a great way to code,
        though.
        
        My experience with it is that functional components always grow and end
        up with a lot of useEffect calls. Those useEffects make components
        extremely brittle and hard to reason about. Essentially it's very hard
        to know what parts of your code are going to run, and when.
        
        I'm sure someone will argue, just refactor your components to be small,
        avoid useEffect as much as possible. I try! But I can't control for
        other engineers. And in my experience, nobody wants to refactor large
        components, because they're too hard to reason about! And the automated
        IDE tools aren't really built well to handle refactoring these things,
        so either you ask AI to do it or it's kind of clunky by-hand. (WebStorm
        is better than VSCode at this, but they're both not great)
        
        The other big problem with it is it's just not very efficient. I don't
        know why people think the virtual DOM is a performance boost. It's a
        performance hack to get around this being a really inefficient model.
         Yes, I know computers are fast, but they'd be a lot faster if we
         were writing with better abstractions.
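         To make the "what runs, and when" point concrete outside any
         framework: here is a toy, dependency-checked effect in plain
         JavaScript. It is a sketch of the idea behind useEffect's deps
         array, not React's actual implementation; the names are invented.
         The callback re-runs only when its deps change between "renders".

```javascript
// Toy version of useEffect's dependency check: re-run the callback
// only when the deps array changed since the previous "render".
function makeEffectRunner() {
  let prevDeps = null;
  return function effect(callback, deps) {
    const changed =
      prevDeps === null ||
      deps.length !== prevDeps.length ||
      deps.some((d, i) => !Object.is(d, prevDeps[i]));
    prevDeps = deps;
    if (changed) callback();
  };
}

const log = [];
const effect = makeEffectRunner();

// Simulate three renders with changing props: the middle render
// has identical deps, so its effect is skipped.
for (const user of ["ada", "ada", "lin"]) {
  effect(() => log.push(`fetch profile for ${user}`), [user]);
}
console.log(log); // → [ 'fetch profile for ada', 'fetch profile for lin' ]
```

         Even in this tiny sketch, whether the callback fires depends on a
         comparison the reader has to simulate in their head, which is
         exactly the brittleness being complained about.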
       
          Izkata wrote 17 hours 55 min ago:
          > so either you ask AI to do it
          
          I dunno, AI tools love adding not only useEffect but also unnecessary
          useMemo.
          
          > I don't know why people think the virtual DOM is a performance
          boost.
          
           It was advertised as one of the advantages when React was new:
           thanks to the diffing, browsers would only need to render the
           parts that changed instead of shoving a whole subtree into the
           page and then having to re-render all of it (remember, this
           came out in the era of jQuery and mustache.js generating
           strings of HTML from templates instead of doing targeted
           updates).
       
            MrJohz wrote 13 hours 48 min ago:
             Patching the DOM existed long before React; that wasn't a new
             technique.  IIRC, the idea was more that the VDOM helped by
             making batching easier and reducing layout thrashing, where
             you write to the DOM (scheduling an asynchronous layout
             update) and then read from the DOM (forcing that layout
             update to be executed synchronously, right now).
            
            That said, none of that is specific to the VDOM, and I think a lot
            of the impression that "VDOM = go fast" comes from very early
            marketing that was later removed. I think also people understand
            that the VDOM is a lightweight, quick-to-generate version of the
            DOM, and then assume that the VDOM therefore makes things fast, but
            forget about (or don't understand) the patching part of React,
            which is also necessary if you've got a VDOM and which is slow.
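             For what it's worth, the diff-then-patch idea itself fits in
             a few lines. A toy sketch (`h` and `diff` are made up here;
             React's real reconciler additionally handles keys, component
             types, props, event handlers, and batching):

```javascript
// Toy virtual-DOM nodes: plain objects, not real DOM elements.
const h = (tag, props, ...children) => ({ tag, props: props || {}, children });

// Diff two vnodes and return a list of patch operations, identifying
// nodes by their path of child indices from the root.
function diff(oldNode, newNode, path = []) {
  if (oldNode === undefined) return [{ op: "create", path, node: newNode }];
  if (newNode === undefined) return [{ op: "remove", path }];
  if (typeof oldNode === "string" || typeof newNode === "string") {
    return oldNode === newNode ? [] : [{ op: "replace", path, node: newNode }];
  }
  if (oldNode.tag !== newNode.tag) return [{ op: "replace", path, node: newNode }];
  const patches = [];
  const len = Math.max(oldNode.children.length, newNode.children.length);
  for (let i = 0; i < len; i++) {
    patches.push(...diff(oldNode.children[i], newNode.children[i], [...path, i]));
  }
  return patches;
}

// Only the changed text child produces a patch; the rest is untouched.
const before = h("ul", null, h("li", null, "one"), h("li", null, "two"));
const after = h("ul", null, h("li", null, "one"), h("li", null, "2"));
console.log(diff(before, after));
// → one patch: { op: "replace", path: [1, 0], node: "2" }
```

             The patch list still has to be applied to the real DOM, which
             is the slow part the comment above is pointing at.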
       
        Klonoar wrote 21 hours 20 min ago:
        I feel like a lot of the comments here are from people who either
        weren't around for, or didn't grow up in, the era where adactio and the
        wider web dev scene (Zeldman, etc) were the driving force of things on
        the web.
        
        If you've only been in a world with React & co, you will probably have
        a more difficult time understanding the point they're contrasting
        against.
        
        (I'm not even saying that they're right)
       
          insin wrote 21 hours 11 min ago:
          I was around for that era (I may have made an involuntary noise when
          Zeldman once posted something nice about a thing I made), but being
          averse to "abstraction in general" is a completely alien concept to
          me as a software developer.
       
            Klonoar wrote 20 hours 43 min ago:
             Yes, but I'm stating, in so many words, that that particular
             era of web dev was notorious for the "is this software
             engineering or not" debate.
            
            It's just such a different concept/vibe/whatever compared to modern
            frontend development. Brad Frost is another notable person in this
            overall space who's written about the changes in the field over the
            years.
       
        thestackfox wrote 21 hours 27 min ago:
        I get the sentiment, but "I don’t like magic" feels like a luxury
        belief.
        
        Electricity is magic. TCP is magic. Browsers are hall-of-mirrors magic.
        You’ll never understand 1% of what Chromium does, and yet we all ship
        code on top of it every day without reading the source.
        
        Drawing the line at React or LLMs feels arbitrary. The world keeps
        moving up the abstraction ladder because that’s how progress works;
        we stand on layers we don’t fully understand so we can build the next
         ones. And yes, LLM outputs are probabilistic, but that's how
         random CSS rendering bugs felt to me before React took care of
         them.
        
        The cost isn’t magic; the cost is using magic you don’t document or
        operationalize.
       
          Spivak wrote 15 hours 44 min ago:
          When everything is magic I think we need a new definition of magic or
          maybe a new term to encapsulate what's being described here.
          
          The key feature of magic is that it breaks the normal rules of the
          universe as you're meant to understand it. Encapsulation or
          abstraction therefore isn't, on its own, magical. Magic variables are
          magic because they break the rules of how variables normally work.
          Functional components/hooks are magic because they're a freaky DSL
          written in JS syntax where your code makes absolutely no sense taken
          as regular JS. Type hint and doctype based programming in Python is
          super magical because type hints aren't supposed to affect behavior.
       
          est wrote 15 hours 56 min ago:
          > Electricity is magic. TCP is magic.
          
          Hmm, they aren't if you have a degree.
          
          > Browsers are hall-of-mirrors magic
          
           More like Chromium, with billions of LoC of C++, is magic. I
           think browsers shouldn't be that complex.
       
            t-writescode wrote 14 hours 35 min ago:
            I took classes on how hardware works with software, and I still am
            blown away when I really think deeply about how on earth we can
            make a game render something at 200fps, even if I can derive how it
            should work.
            
            It’s quite magical.
       
            not_kurt_godel wrote 14 hours 36 min ago:
            > I think browser shouldn't be that complex.
            
            how is browser formed. how curl get internent
       
          dnautics wrote 19 hours 18 min ago:
          int main() is magic (and it's a framework).
       
        sigbottle wrote 21 hours 39 min ago:
        > And so now we have these “magic words” in our codebases. Spells,
        essentially. Spells that work sometimes. Spells that we cast with no
        practical way to measure their effectiveness. They are prayers as much
        as they are instructions.
        
        Autovectorization is not a programming model. This still rings true day
        after day.
       
        cbeach wrote 21 hours 41 min ago:
        > I’ve always avoided client-side React because of its direct harm to
        end users (over-engineered bloated sites that take way longer to load
        than they need to).
        
         A couple of megabytes of JavaScript is not the "big bloated"
         application in 2026 that it was in 1990.
        
        Most of us have phones in our pockets capable of 500Mbps.
        
         The payload of a single-page app is trivial compared to the
         bandwidth available to our devices.
        
        I'd much rather optimise for engineer ergonomics than shave a couple of
        milliseconds off the initial page load.
       
          blturner wrote 19 hours 35 min ago:
          Sure, amongst the wealthy. Suggest reading Alex Russell on this
          topic:
          
  HTML    [1]: https://infrequently.org/series/performance-inequality/
       
          nosefurhairdo wrote 21 hours 25 min ago:
          React + ReactDOM adds ~50kb to a production bundle, not even close to
          a couple of mbs. React with any popular routing library also makes it
          trivial to lazy load js per route, so even with a huge application
          your initial js payload stays small. I ship React apps with a total
          prod bundle size of ~5mb, but on initial load only require ~100kb.
          
          The idea that React is inherently slow is totally ignorant. I'm
          sympathetic to the argument that many apps built with React are slow
          (though I've not seen data to back this up), or that you as a
          developer don't enjoy writing React, but it's a perfectly fine choice
          for writing performant web UI if you're even remotely competent at
          frontend development.
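           The per-route lazy-loading claim boils down to a cache-on-
           first-use pattern. A toy sketch in plain JavaScript (the route
           table and `lazyRoute` are invented for illustration; in a real
           app, React.lazy plus dynamic import() fetch actual chunks over
           the network):

```javascript
// Sketch of route-based code splitting: each route's module "loads"
// only on first visit, then is cached for later visits.
function lazyRoute(loader) {
  let cached;
  let loaded = false;
  return () => {
    if (!loaded) {       // first visit pays the load cost
      cached = loader();
      loaded = true;
    }
    return cached;
  };
}

let chunkLoads = 0;
const routes = {
  "/": lazyRoute(() => "home UI"),
  "/settings": lazyRoute(() => { chunkLoads++; return "settings UI"; }),
};

routes["/"]();            // initial load: the settings chunk is untouched
console.log(chunkLoads);  // → 0
routes["/settings"]();    // first visit loads the chunk...
routes["/settings"]();    // ...subsequent visits reuse the cache
console.log(chunkLoads);  // → 1
```

           That is why the initial payload can stay ~100kb even when the
           total bundle is megabytes: most routes never load at all.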
       
        SirMaster wrote 21 hours 47 min ago:
         So you don't like compilers? Or do you really fully understand
         how they work? How they transform your logic and your
         asynchronous code into machine code, etc.?
       
          sigbottle wrote 21 hours 37 min ago:
          [Autovectorization is not a programming model]( [1] ).
          
           Sure, obviously, we will not understand every single little
           thing down to the tiniest atoms of our universe. There are
           philosophical assumptions underlying everything, and you can
           question them (quite validly!) if you so please.
          
           However, there are plenty of intermediate mental models (or
           explicit contracts, like assembly, ELF, etc.) to open up, in
           both "engineering" land and "theory" land, if you so choose.
          
           Part of good engineering is deciding exactly where the boundary
           between the "cares" and the "don't cares" lies, and how you let
           people easily navigate the abstraction hierarchy.
          
          That is my impression of what people mean when they don't like
          "magic".
          
  HTML    [1]: https://pharr.org/matt/blog/2018/04/18/ispc-origins
       
            Gibbon1 wrote 20 hours 35 min ago:
             A phrase I use is "spooky action at a distance": quantum
             entanglement, but with software.
       
            mananaysiempre wrote 20 hours 53 min ago:
            > Then, when it fails [...], you can either poke it in the right
            ways or change your program in the right ways so that it works for
            you again. This is a horrible way to program; it’s all alchemy
            and guesswork and you need to become deeply specialized about the
            nuances of a single [...] implementation
            
            In that post, the blanks reference a compiler’s autovectorizer.
             But you know what they could also reference? An aggressively
             opaque and undocumented, very complex CPU or GPU
             microarchitecture. (Cf. [1].)
            
  HTML      [1]: https://purplesyringa.moe/blog/why-performance-optimizatio...
       
          mgaunard wrote 21 hours 45 min ago:
          I think most traditional software engineers do indeed understand what
          transformations compilers do.
       
            clnhlzmn wrote 20 hours 55 min ago:
             I think you're mistaken on that. Maybe the engineers I know
             and I are below average on this, but even our combined
             knowledge of the kinds of things _real_ compilers get up to
             probably only scratches the surface. Don't get me wrong, I
             know what compilers do _in principle_. Hell, I've even built
             a toy compiler or two. But the compilers I use for work? I
             just trust that they know what they're doing.
       
            fragmede wrote 21 hours 13 min ago:
             Not in any great detail. Gold vs ld isn't something I bet
             most programmers know rigorously, and that's fine! Compilers
             aren't deterministic, but we don't care, because they're
             deterministic enough. Debian started a reproducible-builds
             project in 2013 and, thirteen years later, we can maybe have
             that happen if you set everything up juuuuuust right.
       
            mberning wrote 21 hours 19 min ago:
            They also realize that adding two integers in a higher level
            language could look quite different when compiled depending on the
            target hardware, but they still understand what is happening.
            Contrast that with your average llm user asking it to write a
            parser or http client from scratch. They have no idea how either of
            those things work nor do they have any chance at all of
            constructing one on their own.
       
            advael wrote 21 hours 25 min ago:
            Yea, the pervasiveness of this analogy is annoying because it's
            wrong (because a compiler is deterministic and tends to be a single
            point of trust, rather than trusting a crowdsourced package manager
            or a fuzzy machine learning model trained on a dubiously-curated
            sampling of what is often the entire internet), but it's hilarious
            because it's a bunch of programmers telling on themselves. You can
            know, at least at a high level of abstraction, what a compiler is
            doing with some basic googling, and a deeper understanding is a
            fairly common requirement in computer science education at the
            undergrad level
            
            Don't get me wrong, I don't think you need or should need a degree
            to program, but if your standard of what abstractions you should
            trust is "all of them, it's perfectly fine to use a bunch of random
            stuff from anywhere that you haven't the first clue how it works or
            who made it" then I don't trust you to build stuff for me
       
            UncleMeat wrote 21 hours 28 min ago:
            I'd wager a lot of money that the huge majority of software
            engineers are not aware of almost any transformations that an
            optimizing compiler does. Especially after decades of growth in
            languages where most of the optimization is done in JIT rather than
            a traditional compilation process.
            
            The big thing here is that the transformations maintain the clearly
            and rigorously defined semantics such that even if an engineer
            can't say precisely what code is being emitted, they can say with
            total confidence what the output of that code will be.
       
              skydhash wrote 19 hours 20 min ago:
              > the huge majority of software engineers are not aware of almost
              any transformations that an optimizing compiler does
              
              They may not, but they can be. Buy a book like "Engineering a
              Compiler", familiarize yourself with the Optimization chapters,
              study some papers and the compiler source code (most are OSS).
               Optimization techniques are not spells locked in a cave
               under a mountain, waiting for the chosen one.
              
               We can always verify the compiler that way, but it's
               costly. Instead, we trust the developers, just like we
               trust that the restaurant's chefs are not poisoning our
               food.
       
              fragmede wrote 21 hours 7 min ago:
               They can't! They can fairly safely assume that the binary
               corresponds correctly to the C++ they've written, but they
               can't actually claim anything about the output other than
               "it compiles".
       
          eleventyseven wrote 21 hours 45 min ago:
          At least compilers are deterministic
       
            ZeWaka wrote 21 hours 4 min ago:
            mostly
       
              skydhash wrote 19 hours 18 min ago:
              So you're saying on two runs with the same source code and the
              same environment, it may produce different opcodes?
       
        hyperhopper wrote 21 hours 49 min ago:
        This person's distinction between "library" and "framework" is frankly
        insane.
        
         React, which is just functions to make DOM trees and render
         them, is a framework? There's a reason hundreds of actual
         frameworks exist to provide structure around those functions.
         
         At this point, he should stop using any high-level language!
         Java/Python are just big frameworks calling his bytecode. What
         magical frameworks!
       
          dnautics wrote 19 hours 15 min ago:
           library vs framework (you call a library; a framework calls
           you) is a pretty typical and arguably very useful distinction.
          
          calling a framework necessarily magic is the weird thing.
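           That inversion of control is easy to show in a few lines of
           JavaScript (all names here are invented for illustration):

```javascript
// Library: your code stays in control and calls in.
const slugLib = {
  slugify: (s) => s.toLowerCase().trim().replace(/\s+/g, "-"),
};
console.log(slugLib.slugify("Hello World")); // → "hello-world"

// Framework: you hand over control, and it calls you.
// miniFramework owns the loop; your handlers are hooks it invokes.
function miniFramework(routes, requests) {
  const results = [];
  for (const req of requests) {
    const handler = routes[req.path] || (() => "404");
    results.push(handler(req)); // the framework calls *your* code
  }
  return results;
}

const out = miniFramework(
  { "/hi": (req) => `hi, ${req.user}` },
  [{ path: "/hi", user: "ada" }, { path: "/nope" }]
);
console.log(out); // → [ 'hi, ada', '404' ]
```

           Same code both times; the only difference is who holds the
           control flow, which is the whole distinction.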
       
        socalgal2 wrote 22 hours 21 min ago:
         You could walk through the framework so that you then understand
         it. There are several "let's create React from scratch" articles
         [1]. Certain frameworks were so useful that they arguably caused
         an explosion in productivity. Rails seems like one. React might
         be, too.
        
  HTML  [1]: https://pomb.us/build-your-own-react/
       
          xp84 wrote 21 hours 5 min ago:
          Thanks for this! I've mostly avoided getting too into React and its
          ilk, mainly because I hate how bloated the actual code generated by
          that kind of application tends to be. But also I am enjoying going
          through this. If I can complete it, I think I will be more informed
          about how React really works.
       
          yellowapple wrote 21 hours 44 min ago:
          Thanks to that page letting me see how many dozens of lines of code
          React needs to do the equivalent of
          
              const element = document.createElement("h1");
              element.innerHTML = "Hello";
              element.setAttribute("title", "foo");
              const container = document.getElementById("root");
              container.appendChild(element);
          
          I now have even less interest in ever touching a React codebase, and
          will henceforth consider the usage of React a code smell at best.
       
            llbbdd wrote 20 hours 31 min ago:
            This code only works if run in a browser composed of millions of
            lines of C++.
       
              lioeters wrote 15 hours 53 min ago:
              I felt a great disturbance in the Force, as if millions of React
              devs cried out in terror and were suddenly silenced.
       
            Mogzol wrote 21 hours 4 min ago:
            The "magic" of React though is in its name, it's reactive. If all
            you're doing is creating static elements that don't need to react
            to changes in state then yeah, React is overkill. But when you have
            complex state and need all your elements to update as that state
            changes, then the benefits of React (or similar frameworks) become
            more apparent. Of course it's all still possible in vanilla JS, but
            it starts to become a mess of event handlers and DOM updates and
            the React equivalent starts to look a lot more appealing.
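             The "benefits become apparent" point can be sketched without
             any DOM at all: a minimal subscribe-and-re-render store in
             plain JavaScript (invented names; React adds the diffing that
             makes re-rendering the whole view affordable):

```javascript
// A minimal reactive store: set state, and every subscribed "view"
// re-renders from scratch — the core idea React wraps around the DOM.
function createStore(initial) {
  let state = initial;
  const subscribers = new Set();
  return {
    get: () => state,
    set(next) {
      state = { ...state, ...next };
      subscribers.forEach((fn) => fn(state)); // notify every view
    },
    subscribe(fn) {
      subscribers.add(fn);
      fn(state); // render once immediately
    },
  };
}

const store = createStore({ count: 0 });
const frames = [];
// The "view" is a pure function of state, like a React component.
store.subscribe((s) => frames.push(`count is ${s.count}`));
store.set({ count: 1 });
store.set({ count: 2 });
console.log(frames); // → [ 'count is 0', 'count is 1', 'count is 2' ]
```

             With hand-written event handlers, each state change has to
             remember to update every affected element; here, the view
             simply re-derives itself from state.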
       
            fragmede wrote 21 hours 10 min ago:
            Given the verbosity of Java's hello world vs Python's, you'd walk
            away with the conclusion that Java should never be used for
            anything, but that would be a mistake.
       
              oftenwrong wrote 15 hours 20 min ago:
                   #!/usr/bin/env java --source 25
                   void main() {
                       IO.println("Hello, World!");
                   }
       
                t-writescode wrote 14 hours 38 min ago:
                 Much of the hatred of Java comes from old Java. The rest
                 comes from Spring.
       
              ZeWaka wrote 21 hours 5 min ago:
              Clearly Java only belongs on things like credit cards and
              Minecraft
              /s
       
                mey wrote 18 hours 12 min ago:
                Why do you have to remind us that Java Card exists?
       
            htnthrow11220 wrote 21 hours 34 min ago:
             To be fair, if all you need is to append elements to a
             parent, you don't need React.
            
            Maybe nobody needs React, I’m not a fan. But a trivial stateless
            injection of DOM content is no argument at all.
       
            madeofpalk wrote 21 hours 34 min ago:
            All of that is the JavaScript equivalent of
            
                 <h1 title="foo">Hello</h1>
            
            I have even less interest in touching any of your codebases!
       
              yellowapple wrote 20 hours 58 min ago:
              Well I'd hesitate to touch any of my codebases, too, so that's
              fair :)
       
                bossyTeacher wrote 20 hours 24 min ago:
                Stop touching my vanilla.js codebase, you naughty!
       
        sodapopcan wrote 22 hours 28 min ago:
        If you are the only person who ever touches your code, fine, otherwise
        I despise this attitude and would insta-reject any candidate who said
        this.  In a team setting, "I don't like magic" and "I don't want to
        learn a framework" means: "I want you to learn my bespoke framework I'm
        inevitably going to write."
       
          llbbdd wrote 20 hours 9 min ago:
           Every non-React app eventually contains a lesser version of
           React by another name.
       
        vandahm wrote 22 hours 29 min ago:
        I've used React on projects and understand its usefulness, but also
        React has killed my love of frontend development. And now that everyone
        is using it to build huge, clunky SPAs instead of normal websites that
        just work, React has all but killed my love of using the web, too.
       
        vladms wrote 22 hours 43 min ago:
         The advantage of frameworks is having a "common language" for
         achieving goals together with a team. A good framework hides some
         of the stupid mistakes you would make if you tried to develop
         that "language" from scratch.
        
         When you do a project from scratch, if you work on it long
         enough, you end up wishing you had started differently, and you
         refactor pieces of it. While using a framework, I sometimes have
         moments where I suddenly get the underlying reasons for and
         advantages of doing things a certain way, but that comes once you
         become more of a power user, not at the start, and only if you
         put in the effort to question. And other times the framework is
         just bad and you have to switch...
       
          goatlover wrote 19 hours 41 min ago:
           It's funny how Lisp has been criticized for its ability to create
           a lot of macros and DSLs, and then Java & JavaScript came along
           and there was an explosion of frameworks and transpiled languages
           on the JVM, in Node, or in the browser.
       
            bitwize wrote 15 hours 58 min ago:
            "The problem with Scheme is all of the implementations that are
            incompatible with one another because they each add their own
            nonstandard feature set because the standard language is too
            small." Sometimes with an added subtext of "you fools, you should
            have just accepted R6RS, that way all Schemes would look like Chez
            Scheme or Racket and you'd avoid this problem".
            
            Meanwhile in JavaScript land: Node, Deno, Bun, TypeScript, JSX, all
            the browser implementations which may or may not support certain
            features, polyfills, transpiling, YOLOOOOO
       
          jv22222 wrote 20 hours 41 min ago:
          I used Claude to document, in great detail, a 500k-line codebase in
          about an hour of well-directed prompts. Just fully explained it, how
          it all worked, how to get started working on it locally, the nuance
          of the old code, pathways, deployments using salt-stack to AWS, etc.
          
          I don't think the moat of "future developers won't understand the
          codebase" exists anymore.
          
             This works well for devs who write their codebase using React,
             etc., and also the ones rolling their own JavaScript (which I
             personally prefer).
       
            taneq wrote 11 hours 42 min ago:
            How did you vet the quality of the documentation? I have no doubt
            that an LLM could produce a great deal of plausible-sounding
            documentation in short order. Even assuming you’re already
            completely familiar with the code base, reading through that
            documentation and fact checking it would take a great deal of
            effort.
            
            What’s the quality like? I’d expect it to be riddled with
            subtly wrong explanations. Is Claude really that much better than
            older models (eg. GPT-4)?
            
            Edit: Oops, just saw your other comment saying you’d verified it
            manually.
       
            sriku wrote 18 hours 0 min ago:
            This - I even ran Claude to produce a security eval of openclaw for
            fun and it was mostly spot on -
            
  HTML      [1]: https://sriku.org/files/openclaw-secreport-claude-13feb202...
       
            hyencomper wrote 18 hours 8 min ago:
            If your project is on Github, you can also use [1] . I have used it
            to get an overview of a new codebase quickly.
            
  HTML      [1]: https://deepwiki.com/
       
            pazimzadeh wrote 19 hours 9 min ago:
            Hey, I also sent this to feedback@nugget.one, but just in case it
            doesn't arrive:
            
            I wasn't able to get into your 'startup ideas' site.
            
             Signing in with Google led to an internal server error, and
             when signing in with a password I never received the
             verification email.
            
            Thought I would let you know. Can't wait to get those sweet startup
            ideas....!
       
              jv22222 wrote 19 hours 0 min ago:
               Thanks, I've been very focused on lightwave and as a result
               let that one slide a bit. I'll try to get it working in the
               next week or so.
       
            vladms wrote 20 hours 4 min ago:
             To make a parallel with human language: you can understand a
             foreign language well and yet not be able to speak it at the
             same level.
            
            I found myself in that situation with both foreign languages and
            with programming languages / frameworks - understanding is much
            easier than creating something good. You can of course revert to a
            poorer vocabulary / simpler constructions (in both cases), but an
            "expert" speaker/writer will get a better result. For many cases
            the delta can be ignored, for some cases it matters.
       
            bossyTeacher wrote 20 hours 20 min ago:
            > I used Claude to document, in great detail, a 500k-line codebase
            in about an hour of well-directed prompts
            
             Yes, but have you fully verified that the generated
             documentation matches the code? This is like me saying I used
             Claude to generate a year-long workout plan. That is lovely,
             but the generated thing needs to match what you wanted it for,
             and for that you need verification. For all you know, half of
             your document is not just nonsense, but nonsense that isn't
             obviously nonsense until you run the relevant code and see the
             mismatch.
       
              jv22222 wrote 20 hours 7 min ago:
              Yes, since I spent over 10 years writing it in the first place it
              was easy to verify!
       
                sodapopcan wrote 5 hours 30 min ago:
                This is a key piece of information you left out of your
                original post.
       
          sodapopcan wrote 22 hours 31 min ago:
          The problem with this is that it means you have to read guides which
          it seems no one wants to do.  It drives me nuts.
          
          But ya, I hate when people say they don't like "magic."  It's not
          magic, it's programming.
       
            bryanrasmussen wrote 16 hours 36 min ago:
             [1] In my experience, among the personality types of
             programmers, both laborers and artists are opposed to reading
             guides: I think the laborers due to laziness and the artists
             due to a high susceptibility to boredom, since most guides are
             not written to the intellectually engaging level of SICP.
            
            Craftsmen are naturally the type to read the guide through.
            
             Of course, if you spend enough time in the field you end up
             just reading the docs, more or less, because everybody ends up
             adopting craftsman habits over time.
            
  HTML      [1]: https://medium.com/luminasticity/laborers-craftsmen-and-ar...
       
            monkpit wrote 21 hours 15 min ago:
             Magic refers to specific techniques used in programming, and
             people generally dislike these techniques once they have
             formed any opinion about them.
       
              bryanrasmussen wrote 16 hours 41 min ago:
               Do people generally dislike magic once they have formed an
               opinion, or are people who dislike magic simply more prone
               to voicing that opinion? And if magic is disliked by people
               experienced enough to form opinions, why does it keep coming
               back around?
               
               I would suppose the people who create "magic" solutions have
               at least voiced an opinion that they like magic, and the
               same goes for the people who take up those solutions. For
               the record, I too dislike magic, but my feeling is that I am
               somewhat in the minority on that.
       
                sodapopcan wrote 5 hours 35 min ago:
                 There is a HUGE difference between framework/library
                 "magic" and business logic "magic."  When framework/library
                 "magic" is documented, it's awesome; you just need to take
                 the time to learn it.
       
            WJW wrote 22 hours 4 min ago:
            Oh no! Reading!
            
            Sorry for the snark but why is this such a problem?
       
              fragmede wrote 21 hours 10 min ago:
              Because people won't do it.
       
                WJW wrote 8 hours 31 min ago:
                Sounds like a them problem. If they can't be bothered to learn
                how to use their tools, it won't be a surprise that they then
                won't know how to use them. A free advantage to those of us
                that do dedicate the time to read the docs I guess.
       
                  sodapopcan wrote 5 hours 33 min ago:
                   At least in web development it seems to have become
                   widely accepted, at many places anyway, that people
                   aren't expected to be anywhere near expert in the tools
                   they use every day.
       
            coldtea wrote 22 hours 5 min ago:
             Most, however, are surely capable of understanding a simple
             metaphor, in which "magic" in the context of coding means
             "behavior occurring implicitly / as a black box".
            
            Yes, it's not magic as in Merlin or Penn and Teller. But it is
            magic in the aforementioned sense, which is also what people
            complain about.
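             
             A minimal JavaScript sketch of that sense of "magic" (the
             mini-ORM here is hypothetical, purely for illustration): a
             Proxy trap conjures methods that are defined nowhere in the
             source, so behavior appears implicitly.

```javascript
// "Magic" in the implicit/black-box sense: a Proxy that synthesizes
// finder methods that are never explicitly defined anywhere.
const users = [
  { id: 1, name: "Ada" },
  { id: 2, name: "Grace" },
];

// Hypothetical mini-ORM: any method named findByX is conjured on demand.
const repo = new Proxy({}, {
  get(_target, prop) {
    if (typeof prop !== "string") return undefined;
    const match = /^findBy(\w+)$/.exec(prop);
    if (!match) return undefined;
    // Lower-case the first letter: findByName -> "name".
    const field = match[1][0].toLowerCase() + match[1].slice(1);
    return (value) => users.filter((row) => row[field] === value);
  },
});

// repo.findByName appears nowhere in the source, yet it "works":
console.log(repo.findByName("Ada")); // [ { id: 1, name: 'Ada' } ]
```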
       
        tokenless wrote 22 hours 49 min ago:
         The AI-pilled view is that coding is knitting and AI is an
         automated loom.
        
        But it is not quite the case. The hand coded solution may be quicker
        than AI at reaching the business goal.
        
        If there is an elegant crafted solution that stays in prod 10 years and
        just works it is better than an initially quicker AI coded solution
        that needs more maintenance and demands a team to maintain it.
        
         If AI (and especially a bad operator of AI) builds you a city
         tower when you need a shed, the tower works and looks great, but
         now you have $500k/yr in maintenance costs.
       
          james_marks wrote 22 hours 12 min ago:
          Doesn’t the loom metaphor still hold? A badly operated loom will
          create bad fabric the same way badly used AI will make unsafe,
          unscalable programs.
          
          Anything that can be automated can be automated poorly, but we accept
          that trained operators can use looms effectively.
       
            bigstrat2003 wrote 18 hours 4 min ago:
            The difference is that one can make good cloth with a loom using
            less effort than before. With AI one has to choose between less
            effort, or good quality. You can't get both.
       
              rerdavies wrote 6 hours 12 min ago:
              It really isn't that hard to get good code with less effort using
              an AI.
       
            WJW wrote 20 hours 43 min ago:
             Anything that can be automated can be automated poorly,
             indeed. But while it has been proven that textile
             manufacturing can be automated well (or at least better than a
             hand weaver ever could), the jury is still out on whether
             programming can be sufficiently automated at all. Even if it
             can, it's also unclear whether the current LLM strategy will
             be enough or whether we'll have another 30-year AI winter
             before something better comes along.
       
            tokenless wrote 21 hours 35 min ago:
            The difference is the loom is performing linear work.
            
            Programming is famously non-linear. Small teams making billion
            dollar companies due to tech choices that avoid needing to scale up
            people.
            
            Yes you need marketing, strategy, investment, sales etc. But on the
            engineering side, good choices mean big savings and scalability
            with few people.
            
             The loom doesn't have these choices. There is no "make a
             billion t-shirts a day" option for a well-configured loom.
            
            Now AI might end up either side of this. It may be too sloppy to
            compete with very smart engineers, or it may become so good that
            like chess no one can beat it. At that point let it do everything
            and run the company.
       
            sixtyj wrote 22 hours 1 min ago:
            Loom is a good metaphor.
       
        noelwelsh wrote 22 hours 53 min ago:
        If you have this attitude I hope you write everything in assembly.
        Except assembly is compiled into micro-ops, so hopefully you avoid that
        by using an 8080 (according to a quick search, the last Intel CPU to
        not have micro-ops.)
        
         In other words, why is one particular abstraction (e.g.
         JavaScript, or the web browser) OK, but another abstraction (e.g.
         React) not? This attitude doesn't make sense to me.
       
          jemmyw wrote 8 hours 40 min ago:
          And actually further to your point, I would assume that many more
          people who code in Javascript have read the React codebase and not
          the v8 codebase.
          
          I've read the react source, and some of v8. Imagine how you'd
          implement hooks, you're probably not too far away. It's messier than
          you'd hope, but that's kind of the point of an abstraction anyway.
          It's really not magic, I really dislike that term when all you're
          doing is building on something that is pretty easy to read and
          understand. v8 on the other hand is much harder, although I will say
          I found the code better organised and explained than React.
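           
           To make "imagine how you'd implement hooks" concrete, here is a
           toy sketch (not React's actual implementation; just the core
           trick of keeping state outside the component, lined up by call
           order):

```javascript
// Toy useState: state lives in an array outside the component,
// indexed by the order in which hooks are called during a render.
let states = [];
let cursor = 0;
let rerender = () => {};

function useState(initial) {
  const i = cursor++;
  if (states[i] === undefined) states[i] = initial;
  const setState = (value) => {
    states[i] = value;
    rerender(); // re-run the component against the updated state array
  };
  return [states[i], setState];
}

function render(component) {
  rerender = () => {
    cursor = 0; // reset so hooks line up by call order again
    component();
  };
  rerender();
}

// Usage: a component that "renders" to a string instead of the DOM.
let lastOutput;
function Counter() {
  const [count, setCount] = useState(0);
  lastOutput = `count: ${count}`;
  Counter.increment = () => setCount(count + 1);
}

render(Counter);
Counter.increment();
console.log(lastOutput); // count: 1
```

           This is also why the "rules of hooks" exist: calling a hook
           conditionally would shift the call order and misalign the state
           array.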
       
          ookblah wrote 12 hours 52 min ago:
          his line (admittedly he acknowledges) is just purely arbitrary and
          thus basically boils down to his own comfort and opinion.  i guess we
          are all entitled to that, so maybe nothing to really take away from
          all this.  has he read the whole react codebase line by line to
          understand what works and doesn't? just handwaves it away as some
          unneeded "abstraction".
       
          sevensor wrote 21 hours 23 min ago:
          A good abstraction relieves you of concern for the particulars it
          abstracts away. A bad abstraction hides the particulars until the
          worst possible moment, at which point everything spills out in a
          messy heap and you have to confront all the details. Bad abstractions
          existed long before React and long before LLMs.
       
          kens wrote 22 hours 0 min ago:
          Did someone ask about Intel processor history? :-) The Intel 8080
          (1974) didn't use microcode, but there were many later processors
          that didn't use microcode either. For instance, the 8085 (1976).
          Intel's microcontrollers, such as the 8051 (1980), didn't use
          microcode either. The RISC i860 (1989) didn't use microcode (I
          assume). The completely unrelated i960 (1988) didn't use microcode in
          the base version, but the floating-point version used microcode for
          the math, and the bonkers MX version used microcode to implement
          objects, capabilities, and garbage collection. The RISC StrongARM
          (1997) presumably didn't use microcode.
          
          As far as x86, the 8086 (1978) through the Pentium (1993) used
          microcode. The Pentium Pro (1995) introduced an out-of-order,
          speculative architecture with micro-ops instead of microcode.
          Micro-ops are kind of like microcode, but different. With microcode,
          the CPU executes an instruction by sequentially running a microcode
          routine, made up of strange micro-instructions. With micro-ops, an
          instruction is broken up into "RISC-like" micro-ops, which are tossed
          into the out-of-order engine, which runs the micro-ops in whatever
          order it wants, sorting things out at the end so you get the right
          answer. Thus, micro-ops provide a whole new layer of abstraction,
          since you don't know what the processor is doing.
          
          My personal view is that if you're running C code on a
          non-superscalar processor, the abstractions are fairly transparent;
          the CPU is doing what you tell it to. But once you get to C++ or a
          processor with speculative execution, one loses sight of what's
          really going on under the abstractions.
       
            noelwelsh wrote 12 hours 33 min ago:
            That was interesting. Thanks!
       
          pessimizer wrote 22 hours 46 min ago:
          Are you seriously saying that you can't understand the concept of
          different abstractions having different levels of usefulness? That's
          the law of averages taken to cosmic proportions.
          
          If this is true, why have more than one abstraction?
       
            antonvs wrote 20 hours 6 min ago:
            Are you seriously saying you can’t understand the parallel being
            drawn here?
            
            If you “don’t like magic”, you can’t use a compiler.
       
              skydhash wrote 19 hours 29 min ago:
               Is a compiler magic? Did compilers come from an electronic
               heaven? There are plenty of books, papers, courses, etc.
               that explain how compilers work. When people talk about
               "magic", it usually means choosing a complex solution over a
               simple one, with an abstraction that is ill-fitted. Then
               they use words like user-friendly, easy to install with
               curl|bash, etc. to lure us into using it.
       
                antonvs wrote 18 hours 42 min ago:
                I’m referring to what it actually says in the article, such
                as, “I don’t like using code that I haven’t written and
                understood myself.”
                
                Reading comprehension is not magic.
       
            selridge wrote 22 hours 40 min ago:
            I just think everyone who says they don't like magic should be
            forced to give an extemporaneous explanation of paging.
       
          kalterdev wrote 22 hours 48 min ago:
          You can learn JavaScript and code for life. You can’t learn React
          and code for life.
          
          Yeah, JavaScript is an illusion (to be exact, a concept). But it’s
          the one that we accept as fundamental. People need fundamentals to
          rely upon.
       
            dnlzro wrote 17 hours 56 min ago:
             The only reason you regard JavaScript as "fundamental" is that
             it's built into the browser. Sure, you can draw that line, but
             at least acknowledge that there are many places to draw it.
            
            I’d rather make comparative statements, like “JavaScript is
            more fundamental than React,” which is obviously true. And then
            we can all just find the level of abstraction that works for us,
            instead of fighting over what technology is “fundamental.”
       
            satvikpendem wrote 21 hours 50 min ago:
            > You can’t learn React and code for life.
            
            Sure you can, why can't you? Even if it's deprecated in 20 years,
            you can still run it and use it, fork it even to expand upon it,
            because it's still JS at the end of the day, which based on your
            earlier statement you can code for life with.
       
        skydhash wrote 22 hours 56 min ago:
         I also don't like magic, but React is the wrong example of magic
         in this case. It's an abstraction layer for UI, and one that is
         pretty simple when you think about it conceptually. The complexity
         comes from third-party libraries that build on top of it but
         propose complex machinery instead of simple solutions. Then you
         have a culture of complexity around a simple technology.
        
         But it does seem that this culture of complexity has become more
         pervasive lately. Things that could have been a simple gist or a
         config change are now whole programs that pull in tens of
         dependencies from who knows where.
       
        wa008 wrote 22 hours 59 min ago:
         What I cannot build, I do not understand.
       
          zem wrote 19 hours 47 min ago:
          if only the opposite were true!
       
          AlotOfReading wrote 21 hours 53 min ago:
          I'm not sure this is a useful way to approach "magic". I don't think
          I can build a production compiler or linker. It's fair to say that I
          don't fully understand them either. Yet, I don't need a "full"
          understanding to do useful things with them and contribute back
          upstream.
          
          LLMs are vastly more complicated and unlike compilers we didn't get a
          long, slow ramp-up in complexity, but it seems possible we'll
          eventually develop better intuition and rules of thumb to separate
          appropriate usage from inappropriate.
       
        xantronix wrote 23 hours 14 min ago:
         Predicated upon the definition of "magic" provided in the article:
         what is it, if anything, about magic that draws people to it? Is
         there a process wherein people build tolerance and acceptance of
         opaque abstractions through learning? Or is it acceptance that
         "this is the way things are done", upheld by cargo-cult
         development, tutorials, examples, and the like, for the sake of
         commercial expediency? I can certainly understand that seldom is
         time afforded to building a deep understanding of the intent,
         purpose, and effect of magic abstractions under such conditions.
        
         Granted, there are limits to how deep one should need to go in
         understanding their ecosystem of abstractions to produce
         meaningful work on a viable timescale. What effect does it have on
         the trade, on the other hand, to have no limit to the upward
         growth of the stack of tomes of magical frameworks and
         abstractions?
       
          pdonis wrote 22 hours 39 min ago:
          > What is it, if anything, about magic that draws people to it?
          
          Simple: if it's magic, you don't have to do the hard work of
          understanding how it works in order to use it. Just use the right
          incantation and you're done. Sounds great as long as you don't think
          about the fact that not understanding how it works is actually a bug,
          not a feature.
       
            wvenable wrote 21 hours 53 min ago:
            > Sounds great as long as you don't think about the fact that not
            understanding how it works is actually a bug, not a feature.
            
            That's such a wrong way of thinking.  There is simply a limit on
            how much a single person can know and understand.  You have to
            specialize otherwise you won't make any progress.  Not having to
            understand how everything works is a feature, not a bug.
            
            You not having to know the chemical structure of gasoline in order
            to drive to work in the morning is a good thing.
       
              xantronix wrote 21 hours 26 min ago:
               But having to know how a specific ORM composes queries
               targeting a specific database backend is where the magic
               falls apart; I would rather go without than deal with such
               pitfalls. If I were to hazard a guess, things like this are
               where the author and I are aligned.
       
                wvenable wrote 21 hours 19 min ago:
                > to know how a specific ORM composes queries targetting a
                specific database backend, however, is where the magic falls
                apart
                
                 I've never found this to be a particular problem. Most
                 ORMs are actually quite predictable. I've seen how my ORM
                 constructs queries for my database, and it's pretty ugly
                 but also totally fine. I've never really gained any
                 insight that way.
                 
                 But the sheer amount of time and effort I've saved by
                 using an ORM to do the same boring load/save pattern over
                 and over is immeasurable. I can't even imagine going back
                 and doing that manually -- what a waste of time, effort,
                 and experience that would be.
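                 
                 As a hedged sketch of that boring load/save pattern (the
                 table and field names are hypothetical), this is roughly
                 the SQL-building an ORM repeats for every entity, and why
                 the generated queries are so predictable:

```javascript
// Hand-rolled once to show what an ORM generates per entity: a
// parameterized SELECT and UPDATE built from plain objects.
function buildSelect(table, where) {
  const keys = Object.keys(where);
  const clause = keys.map((k, i) => `${k} = $${i + 1}`).join(" AND ");
  return {
    text: `SELECT * FROM ${table} WHERE ${clause}`,
    values: keys.map((k) => where[k]),
  };
}

function buildUpdate(table, id, fields) {
  const keys = Object.keys(fields);
  const sets = keys.map((k, i) => `${k} = $${i + 1}`).join(", ");
  return {
    text: `UPDATE ${table} SET ${sets} WHERE id = $${keys.length + 1}`,
    values: [...keys.map((k) => fields[k]), id],
  };
}

console.log(buildSelect("users", { email: "a@b.c" }).text);
// SELECT * FROM users WHERE email = $1
console.log(buildUpdate("users", 7, { name: "Ada", active: true }).text);
// UPDATE users SET name = $1, active = $2 WHERE id = $3
```

                 Multiply this boilerplate by every table and every
                 load/save call site, and the time an ORM saves becomes
                 obvious.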
       
            farley13 wrote 22 hours 0 min ago:
            I know magic has a nice Arthur C. Clarke ring to it, but I think
            arguing about magic obscures the actual argument.
            
            It's about layers of abstraction, the need to understand them,
            modify them, know what is leaking etc.
            
             I think people sometimes substitute "magic" when they mean "I
             suddenly need to learn a lower layer I assumed was much less
             complex". I don't think anyone is calling the Linux kernel
             magic. Everyone assumes it's complex.
             
             Another use of "magic" is when you find yourself debugging a
             lower layer because the abstraction breaks in some way. If
             it's highly abstracted and the inner loop gives you few
             starting points ( while (???) pickupWorkFromAnyWhere() ), it
             can feel Kafkaesque.
            
            I sleep just fine not knowing how much software I use exactly
            works. It's the layers closest to application code that I wish were
            more friendly to the casual debugger.
       
              xantronix wrote 21 hours 27 min ago:
              To me, it's much less of an issue when it works, obviously, but
              far more of a headache when I need to research the "magic" in
              order to make something work which would be fairly trivially
              implemented with fewer layers of abstraction.
       
            socalgal2 wrote 22 hours 19 min ago:
             Or it's just a specialization choice. Taxi drivers don't care
             how a car works; they hire a mechanic for that. Doctors don't
             care how a CAT scan works; they just care that it provides the
             data they need in a useful format.
       
              xantronix wrote 21 hours 20 min ago:
               This analogy baffles me. I don't think anybody here is
               making the argument that we must know how all of our tools
               work at an infinitesimally fundamental level. Rather, I
               think software is an endless playground and refuge for
               people who like to make their own flavours of magic for the
               sake of magic.
       
                socalgal2 wrote 18 hours 38 min ago:
                 I feel like I'm responding more to the OP. Maybe a more
                 concrete example: there are several hit games, Undertale
                 being one I know personally, where the creator is an
                 artist who learned just enough programming in a relatively
                 high-level language to ship a hit and beloved game. They
                 didn't need to know the details of how graphics get put on
                 the screen, nor did they need to learn memory management
                 or bytes and bits.
                
                > I don’t like using code that I haven’t written and
                understood myself.
                
                 Maybe it's true for the author, but it's not true for lots
                 of productive people in every field, and there are plenty
                 of examples of excellence operating at a higher level.
       
              c22 wrote 22 hours 7 min ago:
               I like the definition of magic I learned from Penn Jillette
               (paraphrased): magic is just someone spending way more
               resources to produce the result than you expected.
       
          3form wrote 22 hours 56 min ago:
           I think it's "this is the way things are done in order to
           achieve X", where people question neither whether this is the
           only way to achieve X, nor whether they really care about X in
           the first place.
          
           It seems common with regard to dependency injection frameworks.
           Do you need them for your code to be testable? No, even if they
           help. Do you need them for your code to be modular? You don't.
           And do you really need modularity in your project? Reusability?
           Loose coupling?
       
       
   DIR <- back to front page