        _______               __                   _______
       |   |   |.---.-..----.|  |--..-----..----. |    |  |.-----..--.--.--..-----.
       |       ||  _  ||  __||    < |  -__||   _| |       ||  -__||  |  |  ||__ --|
       |___|___||___._||____||__|__||_____||__|   |__|____||_____||________||_____|
                                                              on Gopher (unofficial)
  HTML Visit Hacker News on the Web
       
       
       COMMENT PAGE FOR:
  HTML   I'm Kenyan. I don't write like ChatGPT, ChatGPT writes like me
       
       
        joshstrange wrote 2 hours 30 min ago:
        Nothing irks me quite as much as "Did you use ChatGPT/AI on this?" or
        assumptions that it was used.
        
        Just the other week a client reached out and asked a bunch of questions
        that resulted in me writing 15+ SQL queries (not small/basic ones) from
        scratch and then doing some more math/calculations on top of that to
        get the client the numbers they were looking for. After spending an
         hour or two on it and writing up my response, they said something to
         the effect of "Thanks for that! I hope AI made it easy to get that all
        together!".
        
        I'm sure they were mostly being nice and trying (badly) to say "I hope
        it wasn't too much trouble" but it took me a few iterations to put
        together a reply that wasn't confrontational. No, I didn't use AI,
        mostly because they absolutely suck at that kind of thing. Oh, they
         might spit out convincing SQL statements, and those SQL statements might
        even work and return data, but the chance they got the right numbers is
        very low in my experience (yes, I've tried).
        
        The nuance in a database schema, especially one that's been around for
        a while and seen its share of additions/migrations/etc, is something
        LLMs do not handle well. Sure, if you want a count of users an LLM can
        probably do that, but anything more complicated that I've tried falls
        over very quickly.
        
        The whole ordeal frustrated me quite a bit because it trivialized and
        minimized what was real work that I did (non-billed work, trying to be
        nice). I wouldn't do this because I'm a professional but there was a
        moment when I thought "Next time I'll just reply with AI Slop instead
        and let them sort it out". It really took the wind out of my sails and
        made me regret the effort I put into getting them the data they asked
        for.
       
        aryan1silver wrote 10 hours 14 min ago:
         This is so true. I say this myself coming from an Indian education
         system: my vocabulary has gone through an objective optimization
         function similar to that of these LLMs XD
       
        delis-thumbs-7e wrote 12 hours 37 min ago:
         I think the biggest problem with LLM-generated text is that it is
         semantically empty - i.e. void of meaning. Most people do not realise
        how it is them, not some ”artificial intelligence”, who provides
        meaning to what is essentially very sophisticated word salad with
        RAG-sourced pieces of information dribbled between as the protein. Just
        search for Weizenbaum’s ELIZA.
        
        If you read some English public school essay by a pupil who has not
         read their homework, the effect is very similar: a lot of complex sentences
        peppered with non-Celtic words, but utterly without meaning. In simple
        terms, the writer does not know what the hell they are talking about,
        although they know how to superficially string words together into a
        structured and coherent text. Even professional writers do this, when
        they have a deadline and not a single original idea what to write
        about.
        
        But we do not write just to fart language on paper or screen, we write
        to convey a meaning, a message. To communicate. One can of course find
        meaning from tea leaves and whatnot, but truly it is a communal
        experience to write with an intention and to desperately try to pass
        one’s ideas and emotions forward to one’s common enby.
        
         This is what is lacking in the millions of GPT-generated LinkedIn
         posts, because in the end they are just structure without content, empty
        shells. Sometimes of course one can get something genuinely good by
        accident, but it is fairly rare. Usually it is just flexing of syntax
         in a way both tepid and without heart. And it is unlikely that LLMs
        can overcome this hurdle, since people writing without intent cannot
        either. They are just statistical models guessing words after all.
       
        grayxu wrote 14 hours 53 min ago:
        same for chinese students
       
        delifue wrote 15 hours 11 min ago:
         I put some of the article's content into Pangram. Pangram says it's
         AI [1]. The author's writing style is really similar to AI's. AI has
         already somehow passed the Turing test. The AI detectors are not that
         trustworthy (but
        still useful).
        
  HTML  [1]: https://www.pangram.com/history/282d7e59-ab4b-417c-9862-10d633...
       
        protocolture wrote 18 hours 36 min ago:
        What gets me these days is sort of this structure.
        
         X isn't just Y, it's a Z!
       
          esafak wrote 4 hours 54 min ago:
           It's even on computer-narrated YouTube videos now. It is infuriating.
       
        danielodievich wrote 20 hours 7 min ago:
         In Russian there is a saying that translates to "try to prove that you
         are not a camel", which describes the impossibility of disproving what
         is obviously false to an unwilling and/or obtuse party.
         
         According to the Russian-language Wikipedia ( [1] ) the original tale
         goes back to the famous Persian poet Rumi from the XII century, which
         just makes me tickled pink about how awesome language is.
        
  HTML  [1]: https://ru.wikipedia.org/wiki/%D0%94%D0%BE%D0%BA%D0%B0%D0%B7%D...
       
        Halan wrote 20 hours 25 min ago:
         If you think about societies still in an English colonial hangover and
        ChatGPT you might find that they have similar reasons to speak the way
        they speak.
        
        Both aim at using an English that is safe, controlled and 
        policed for fear of negative evaluation.
       
        ropable wrote 20 hours 25 min ago:
        This author writes in ESL better than 99% of the people I've worked
        with in an English-native country, including myself. It's fascinating
        to read just how much more emphasis good-quality written English seems
        to have in Kenya than it does here in Australia (at least in the public
        education system where I have experience). I suppose that it's
        understandable, given that it gates access to higher-level education
        opportunities.
        
        I don't really understand the aversion some people have to the use of
         LLMs to generate or refine written communication. It seems to trigger the
        "that's cheating!" outrage impulse.
       
          normie3000 wrote 19 hours 2 min ago:
          I don't think the author mentioned that English is their second
          language. English is an official language of Kenya, and there's a
          reasonable chance it's the author's home language.
       
        theLegionWithin wrote 20 hours 48 min ago:
        ai slop
       
        OG_BME wrote 21 hours 0 min ago:
        Pangram, the best AI-detector I know of, flagged this as 100% AI
        generated.
        
        That's just sad. I really feel for this author.
       
        ern wrote 21 hours 1 min ago:
         I firmly believe that the heuristics that
         teachers/lecturers/instructors worldwide use to avoid engaging with
         reams of mundane text have been successfully gamed by LLMs, and that's
         why they were so hostile to them initially.
        
         They now have to actually read the material, and not just use the
         structure as a proxy for ability.
       
        blitz_skull wrote 21 hours 30 min ago:
        It actually really bothers me that somehow the long dash has become
        such a "giveaway" that now I have to consider how I write. I actually
        used the long dash a lot in my normal writing, but now that everyone
        considers it some sort of "giveaway," it's like a tic that I can't help
        but notice.
        
        It feels very natural to me. But if everyone and their mother considers
        it a "giveaway", I'd be a fool not to consider it. * sigh *
       
        behringer wrote 21 hours 31 min ago:
        The tell isn't the emdash alone, it's the emdash character being used
        in place of a dash.
       
        userbinator wrote 21 hours 36 min ago:
        If you are "average", you will sound like an AI, because an AI is the
        average of its training data. I don't think it's only Kenyans; I've
        seen the same distinctive "dialect" from many others.
       
          creata wrote 19 hours 58 min ago:
          The "voice" of chatbots comes from the stuff after the main training,
          which is very different to the average human voice. Even the main
          training would give you something like "average on the internet"
          voice, which is quite different to the average human voice.
       
        gcanyon wrote 22 hours 8 min ago:
        > You didn’t just ‘walk’; you ‘strode purposefully’,
        ‘trudged wearily’, or ‘ambled nonchalantly’.
        
        ‘Striding’ is ‘purposeful’; ‘trudging’ expresses
        ‘weariness’; ‘ambling’ implies ‘nonchalance’.
        
        Good verb choice reduces adverb dependence.
       
        ChuckMcM wrote 22 hours 42 min ago:
        On social media I've been accused of being AI twice now :-). I suspect
        it is a vocabulary thing but still it is always amusing.
       
        iLemming wrote 22 hours 57 min ago:
         It could also be because we foreigners learn to write English prose
        through our reading comprehension, not via our listening circuitry. The
        text probably feels "normal" to me when I read it back to myself, but
        there's no "proper" feedback loop from the native speakers - I have
        zero idea how my written shit sounds to a native ear when they try to
        read it. I still do agree though, it feels so friggin' annoying these
        days to have to deliberately butcher some words and make sure there's a
        typo somehwere in your text, just to convince people that tis indeed a
        crap straight from my "head to the paper", not a slop.
       
        iainctduncan wrote 23 hours 2 min ago:
        This essay is effin' brilliant, and beautifully written.
        
        I'm not Kenyan, but I was raised in a Canadian family of academics,
        where mastering thoughtful – but slightly archaic – writing was
        expected of me. I grew up surrounded by books that would now be
         training material, and whose prose would likely now be flagged as
        ChatGPT.
        
        Just another reason to hate all this shit.
       
        Animats wrote 23 hours 6 min ago:
        I was just reading the 1897 style guide of the City News Bureau
        (Chicago), in the book "Hello, Sweetheart, Get Me Rewrite!". Some
        highlights:
        
        - Do not confuse 'night' with 'evening'.
        
        - This office spells it 'programme'.
        
        - Hotels are 'kept', not 'run'.
        
        - Dead men do not leave 'wives', but they may leave 'widows'.
        
        - 'Very' is a word often used without discrimination. It is not
        difficult to express the same meaning when it is eliminated.
        
        - The relative pronoun 'that' is used about three times superfluously
        to the one time that it helps the sense.
        
        - Do not write 'this city' when you mean Chicago.
       
          koakuma-chan wrote 20 hours 15 min ago:
          Americans have no idea what evening is. It's always night for them.
       
            esafak wrote 5 hours 12 min ago:
            We work past the evening without noticing :)
       
          zahlman wrote 23 hours 2 min ago:
          For what it's worth, LWN maintains the same attitude towards "very"
          today.
       
            Animats wrote 22 hours 29 min ago:
            "Very unique" has its own problem.[1] It started appearing online
            around 2007, 
            according to Google Trends. Sometimes in the description of NFTs.
            As an absolute adjective, "unique" should not be modified.[1] But
            now, "very unique" has joined the vocabulary of meaningless
            marketing phrases.
            
  HTML      [1]: https://www.merriam-webster.com/grammar/very-unique-and-ab...
       
        rdtsc wrote 23 hours 24 min ago:
        > I don't write like ChatGPT. ChatGPT, in its strange, disembodied,
        globally-sourced way, writes like me.
        
         We will all soon write and talk like ChatGPT. Kids grow up asking
         ChatGPT for homework help, people use it for therapy, for resumes, for
         CVs, for their imaginary romantic "friends", and every day they ask
         questions of a search engine that hands them back some LLM response.
         After some time you'll find yourself chatting with a relative or a
         coworker over coffee and instead of hearing, "lol, Jim, that's
         bullshit" you'll hear something like "you're absolutely right, here
         let me show you a bulleted list of why this is the case...". Even
         scarier, you'll soon hear yourself say that to someone as well.
       
          t0lo wrote 13 hours 53 min ago:
          Yep will kill myself if it gets to this stage
       
          username223 wrote 22 hours 16 min ago:
          (star-eyes emoji) You are absolutely correct, Jim!
          
          (check-mark emoji) Add more emoji — humans love them!
          (red x emoji) Avoid negative words like "bullshit" and "scarier."
          
          (thumbs-up emoji) Before long you'll get past the human feedback of
          reinforcement learning! (smiley-face)
       
        RIMR wrote 1 day ago:
        I have a degree in Journalism and now work in customer support.
        Occasionally, people accuse me of being an AI because of my writing
        style.
        
        Thankfully, no one I report to internally wants me to simplify my
        English to prevent LLM accusations. The work I do requires deliberate
        use of language.
       
        kouru225 wrote 1 day ago:
        Actors have known this for decades: self-expression isn’t only a
        stage problem. It’s a life problem. Most people fail to express
        themselves on an hourly basis. Being good at expressing yourself is
        unnatural. Having clarity of what “yourself” even is is unnatural.
        The truth is that we’re all making comments, jokes, deciding what’s
        important and what not using old programming in our brains…
        programming that was given to us by our childhood and our education.
        Very few people can consistently have the luxury of being/ability to be
        creative with that old programming, and even those that can often have
        to plan ahead of time/rigidly control the environment in order to
        achieve a creative result.
        
        The exact same problem exists with writing. In fact, this problem seems
        to exist across all fields: science, for example, is filled with people
        who have never done a groundbreaking study, presented a new idea, or
        solved an unsolved problem. These people and their jobs are so common
        that the education system orients itself to teach to them rather than
        anyone else. In the same way, an education in literature focused on the
        more likely traits you’ll need to get a job: hitting deadlines,
        following the expected story structure, etc etc.
        
        Having confined ourselves to a tiny little box, can we really be
        surprised that we’re so easy to imitate?
       
        ChosenEnd wrote 1 day ago:
        TIL Kinyarwanda is the national language of Rwanda, not Kenya
       
        cadamsdotcom wrote 1 day ago:
        It is a shame that the author has to change to keep up, and I feel
        their pain but .. it’s also the price of progress. We all do things
        to keep up when change comes for our work and skill sets.
        
        LLMs - like all tools - reduce redundant & repetitive work. In the case
        of LLMs it’s now easy to generate cookie cutter prose. Which raises
        the bar for truly saying something original. To say something original
        now, you must also put in the work to say it in an original way. In
        particular by cutting words and rephrasing even more aggressively,
        which saves your reader time and can take their thinking in new
        directions.
        
        Change is a constant, and good changes tend to gain mass adoption. Our
        ancestors survived because they adapted.
       
          xandrius wrote 10 hours 26 min ago:
          I think you say that so easily because it doesn't actually impact
           you. I'd be absolutely pissed off if I had to constantly watch how I
           naturally write, because otherwise people will shame me, thinking I
           had used AI.
       
        zephyrthenoble wrote 1 day ago:
        Always interesting (in an informative way) to see people "defending"
        em-dashes from my personal perspective. Before you get mad, let me
        explain: before ChatGPT, I only ever saw em-dashes when MS Word would
        sometimes turn a dash into a "longer dash" as I always thought of it. 
        I have NEVER typed an em-dash, and I don't know how to do it on Windows
        or Android.  I actually remember having issues with running a program
        that had em-dashes where I needed to subtract numbers and got errors,
        probably from younger me writing code in something other than an IDE. 
        Em-dashes always seem very out of place to me.
        
        Some things I've learned/realized from this thread:
        
        1. You can make an em-dash on Macs using -- or a keyboard shortcut
        
        2. On Windows you can do something like Alt + 0151 which shows why I
        have never done it on purpose... (my first ever —)
        
        3. Other people might have em-dashes on their keyboard?
        
        I still think it's a relatively good marker for ChatGPT-generated-text
        iff you are looking at text that probably doesn't apply to the above
        situations (give me more if you think of them), but I will keep in mind
        in the future that it's not a guarantee and that people do not have the
        exact same computer setup as me.  Always good to remember that.  I
        still do the double space after the end of a sentence after all.
       
          elephanlemon wrote 5 hours 14 min ago:
          In Microsoft Word, double hyphens convert to em dashes. Seems to be
          the case on the iOS keyboard as well.
       
          viccis wrote 23 hours 35 min ago:
          I actually checked HN's comment data corpus to see if em dash usage
          rose after AI adoption became more widespread. I was kind of shocked
          to see that it did not.
          
          Its overuse is definitely a marker of either AI or a poorly written
          body of text. In my opinion, if you have to rely on excessive
           parentheticals, then you are usually better off restructuring your
           sentences to flow more clearly.
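           
           (For the curious, a minimal sketch of how such a corpus check could
           look, sampling comments from two time windows through the public
           Algolia HN API and comparing how often an em dash appears; the
           endpoint and field names are real, but the windows and sample size
           are arbitrary, and this is not necessarily how the check above was
           done:)
           
               # illustrative sketch: em-dash frequency in HN comments,
               # compared across two time windows via the Algolia HN API
               import requests
               
               def em_dash_rate(start_ts, end_ts, pages=5):
                   total, with_dash = 0, 0
                   for page in range(pages):
                       resp = requests.get(
                           "https://hn.algolia.com/api/v1/search_by_date",
                           params={
                               "tags": "comment",
                               "numericFilters":
                                   f"created_at_i>{start_ts},created_at_i<{end_ts}",
                               "hitsPerPage": 100,
                               "page": page,
                           },
                           timeout=30,
                       )
                       for hit in resp.json().get("hits", []):
                           text = hit.get("comment_text") or ""
                           total += 1
                           with_dash += "\u2014" in text
                   return with_dash / max(total, 1)
               
               # e.g. January 2022 (pre-ChatGPT) vs. January 2024
               print(em_dash_rate(1640995200, 1643673600))
               print(em_dash_rate(1704067200, 1706745600))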
       
          lebca wrote 23 hours 58 min ago:
          Just a reminder that our experience does not necessarily invalidate
          someone else's experience.
          
          Eg, I was typing Alt-0151 and Alt-0150 (en-dash) on the reg in my
          middle school and high school essays along with in AIM. While some of
          my classmates were probably typing double hyphens, my group of
          friends were using the keyboard shortcuts, so I am now learning from
          this "detect an LLM" faze that there's a vocal group of people who do
          not share this experience or perspective of human communication. And
          that having a mother who worked in technical publishing who insisted
          I use the correct punctuation rather than two hyphens was not part of
          everyone's childhood.
       
          AlanYx wrote 1 day ago:
          Maybe I'm weird, but one of the first things I've always done when
          setting up emacs is to enable Typo mode (or Typopunct) for writing
          modes, which handles typing en and em dashes and "smart" quotation
          marks in a fairly natural way.
       
          ericmcer wrote 1 day ago:
          I actually got punked during a demo because I wrote some terminal
           commands and stored them in the macOS notepad and didn't notice it had
          changed -- to —.
          
           When I copied and pasted them in, it failed, obviously, so... yeah. If
           you have terminal commands that use `--`, don't copy+paste them out of
          notepad.
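           
           (A tiny, purely illustrative guard against this; the substitutions
           below are just the usual "smart punctuation" suspects, nothing
           specific to any particular app:)
           
               # hypothetical helper: undo common "smart punctuation"
               # before pasting text into a terminal
               SMART_TO_ASCII = {
                   "\u2014": "--",  # em dash back to a double hyphen
                   "\u2013": "-",   # en dash
                   "\u2018": "'", "\u2019": "'",  # curly single quotes
                   "\u201c": '"', "\u201d": '"',  # curly double quotes
               }
               
               def unsmart(text: str) -> str:
                   for smart, plain in SMART_TO_ASCII.items():
                       text = text.replace(smart, plain)
                   return text
               
               print(unsmart("ls \u2014la"))  # -> ls --la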
       
          Wowfunhappy wrote 1 day ago:
          Well, (some) people on HN definitely used them before ChatGPT. [1]
          (And as #9 on the leaderboard, I feel the need to defend myself!)
          
  HTML    [1]: https://www.gally.net/miscellaneous/hn-em-dash-user-leaderbo...
       
            wink wrote 5 hours 7 min ago:
            Unfortunately this table doesn't show us where the em-dash users
            are coming from and if they are native speakers.
            
            It's not that it doesn't exist in my native language, but I don't
            remember seeing them very often outside of print books, and I even
            know a couple typo nerds.
            
            Maybe I'm totally off, and maybe it's the same as double spacing
            after a '.'. I had not heard of this until I was ~30 and then saw
            some Americans writing about it.
       
          ryeights wrote 1 day ago:
          Shift+Win/Option+-. And holding - gives you en/em dash on iOS and
          Android. Personally I love using em dashes so this whole AI thing is
          a real disaster for me.
       
        ChosenEnd wrote 1 day ago:
        > Human touch. Human touch. I’ll give you human touch, you—
        
        > TECHNICAL DIFFICULTIES PLEASE STAND BY
        
        This actually made me pee myself out loud!
       
        didibus wrote 1 day ago:
         Even more so, I think most of the curated data for the fine-tuning
         phase is hand-crafted by people from countries like Kenya, if I
         recall correctly.
       
        unsupp0rted wrote 1 day ago:
        I use semi-colons and em-dashes liberally too. But I tend to do a
        second pass to avoid redundancy.
        
        e.g. > [...] and there is - in my observational opinion - a rather dark
        and insidious slant to it
        
        Let's leave it at "insidious" and "in my opinion". Or drop "in my
        opinion" entirely, since it goes without saying.
        
        Just take one dip and end it.
        
        ( [1] )
        
  HTML  [1]: https://www.youtube.com/watch?v=RfprRZQxWps
       
        WalterBright wrote 1 day ago:
        The article was obviously generated by ChatGPT.
       
        j45 wrote 1 day ago:
         ChatGPT definitely writes like its trainers (like Kenya).
        
         Kenya writes the way the British taught before they left, and they
         themselves didn't necessarily speak or write that way.
       
        jdkee wrote 1 day ago:
        ""The cat sat on the...", your brain, and the AI, will predict the word
        "floor.""
        
        The models mostly say "mat".
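         
         A quick way to check that claim yourself is to look at the top
         next-token predictions of a small open model; a minimal sketch using
         GPT-2 via the Hugging Face transformers library (larger chat models
         may of course rank things differently):
         
             # sketch: top next-token predictions for "The cat sat on the"
             import torch
             from transformers import AutoModelForCausalLM, AutoTokenizer
             
             tok = AutoTokenizer.from_pretrained("gpt2")
             model = AutoModelForCausalLM.from_pretrained("gpt2")
             
             inputs = tok("The cat sat on the", return_tensors="pt")
             with torch.no_grad():
                 logits = model(**inputs).logits[0, -1]
             
             top = torch.topk(logits, k=5).indices
             print([tok.decode(t) for t in top])  # see which words rank highest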
       
        nitwit005 wrote 1 day ago:
        This is also happening to artists, people who make YouTube shorts, and
        similar. Everyone gets accused of being AI if the feel happens to
        match.
        
        I'm sure there's some voice actor out there who can't get work because
        they sound too similar to the generated voices that appear in TikTok
        videos.
       
          esafak wrote 6 hours 30 min ago:
          My peeve is that I can't block them. They could easily let you block
          users but they don't.
       
          Uehreka wrote 22 hours 53 min ago:
          I did a video a couple years ago about a thing I did in Factorio[0]
           and got a couple of comments from people who didn’t ask if I had
           used an AI voice; they just straight up told me that the AI voice I
           used was off-putting. I didn’t use an AI voice, in fact I appeared
           on camera at
          the end of the video in part so that people wouldn’t have to guess,
          but I guess people who thought I was AI didn’t feel like watching
          the whole video.
          
          I suppose I don’t mind people using AI voices if they have a thick
          accent or are shy about their voice, but if I’m watching a video
          and clock the voice as AI (usually because the tone is professional
          but has no expression and then the speaker mispronounces a common
          word or acronym) it does make me start to wonder if the script is AI.
          There are a lot of people churning out tutorials that seem useful at
          first but turn out to have no content (“draw the rest of the owl”
          type stuff) because they asked AI to create a tutorial for something
          and didn’t edit or reprompt based on the output. The video essay
          world is also starting to get hit pretty hard, to the point that
          I’m less willing than ever to watch content unless I already know
          the creator’s work.
          
          [0] Shameless plug:
          
  HTML    [1]: https://youtu.be/PGiTkkMOfiw
       
            Aachen wrote 12 hours 16 min ago:
            Off topic but I really enjoyed that video, thanks!
            
             The voice... idk, I don't hear a lot of voices that I think or
             know were generated, so I'm not qualified to say, but it didn't
             give me generated vibes. There are no glitches or mispronunciations
             that I'd expect to pop up at least a few times across 15 minutes
             of material.
       
            6SixTy wrote 18 hours 13 min ago:
            I remember a video where the only thing that stuck with me was that
            the guy was working on his English skills, and used an AI voice.
            That's not abnormal by any stretch of the imagination nor
            memorable, but how he edited the AI voice made it about as natural
            as a normal voice.
       
          leoc wrote 1 day ago:
          I doubt it’s affecting his work, but my impression is that if
          anyone is owed money from the use of their personal likeness in AI
          image generators (as a source for generic figures, not as someone
          specifically requested by the prompt) then Pierce Brosnan is likely
          near the front of the queue.
       
        OutOfHere wrote 1 day ago:
         It is highly inappropriate to confront someone with the claim that
         their writing is AI generated. This is often used as an excuse to unfairly
        discredit the content of the message. Whether the content is or isn't
        AI generated can't be determined with any confidence, and even if it
        is, it is improper to ignore the message. If you're going to criticize
        a message, do so on the basis of its actual content, not its alleged
        authorship.
       
        jinushaun wrote 1 day ago:
        I had a similar experience recently during code review. I was told to
        remove extra comments produced by Cursor. I was like, “I didn’t use
        Cursor for any of this PR…”
        
        I also love and use em-dashes regularly. ChatGPT writes like me.
       
        buyTheDip wrote 1 day ago:
        Excellent article. Insightful observation, expressed and written well.
         Just an opinion from a Canadian-born and American-raised native
         English speaker.
       
        elzbardico wrote 1 day ago:
        BS. ChatGPT writes in the sterile and boring manner of the average
        graduate of business, marketing or journalism: it is dull, safe,
        somewhat pompous but professional, the ideal style for corporate
        communication.
        
        Basically, for two reasons:
        
        1) A giant portion of all internet text was written by those same
        folks.
        2) Those folks are exactly the people anyone would hire to RLHF the
        models to have a safe, commercially desirable output style.
        
        I am pretty convinced the models could be more fluent, spontaneous and
        original, but then it could jeopardize the models' adoption in the
        corporate world, so, I think the labs intentionally fine-tuned this
        style to death.
       
        vultour wrote 1 day ago:
        This post doesn't read anything like ChatGPT. Correct grammar does not
        indicate ChatGPT. Em-dashes don't indicate ChatGPT. Assessing whether
         something was generated using an LLM requires multiple signals; you
        can't simply decry a piece of text as AI-generated because you noticed
        an uncommon character.
        
        Unfortunately I think posts like this only seem to detract from valid
        criticisms. There is an actual ongoing epidemic of AI-generated content
        on the internet, and it is perfectly valid for people to be upset about
        this. I don't use the internet to be fed an endless stream of
        zero-effort slop that will make me feel good. I want real content
        produced by real people; yet posts like OP only serve to muddy the
        waters when it comes to these critiques. They latch onto opinions of
        random internet bottom-feeders (a dash now indicates ChatGPT?
        Seriously?), and try to minimise the broader skepticism against AI
        content.
        
         I wonder whether people like the author will regret their stance once a
         sufficient number of people are indoctrinated and their content becomes
        irrelevant. Why would they read anything you have to say if the magic
        writing machine can keep shitting out content tailored for them 24/7?
       
        romaniv wrote 1 day ago:
        The fact that everyone is now constantly forced to use (oftentimes
        faulty) personal heuristics to determine whether or not they read slop
        is the real problem here.
        
        AI companies and some of their product users relentlessly exploit the
        communication systems we've painstakingly built up since 1993. We (both
        readers and writers) shouldn't be required to individually adapt to
        this exploitation. We should simply stop it.
        
        And yes, I believe that the notion this exploitation is unstoppable and
        inevitable is just crude propaganda. This isn't all that different from
        the emergence of email spam. One way or the other this will eventually
        be resolved. What I don't know is whether this will be resolved in a
        way that actually benefits our society as a whole.
       
          JumpCrisscross wrote 1 day ago:
          > fact that everyone is now constantly forced to use (oftentimes
          faulty) personal heuristics to determine whether or not they read
          slop is the real problem here
          
          It would be ironic and terrific if AI causes ordinary Americans to
           devote more time to evaluating their sources.
       
        0xbadcafebee wrote 1 day ago:
        It's pretty rude to "accuse" someone of using AI. Would you yell
        "Dictionary!", "Grammarly!", "Reference manual!", "Newspaper quote!" at
        them? Maybe "Harvard!" or "Tutored!" ? You don't know who they are or
        what their life is like. Maybe they're blind and using it as an
         assistive device. Maybe their hand is injured and they use it to output
        information faster. Maybe they're old, infirm, a non-native English
        speaker, etc. Maybe they're just a regular person who feels insecure
        writing, and wants to use new technology to give them the confidence to
        write/comment more. Or, maybe they just talk like that.
        
        Let's say you happen to be lucky, don't accuse someone unfairly, and
        they are using ChatGPT to write what they said. Who cares?! What is it
        you're doing by "calling them out" ? Winning internet points? Feeling
        superior? Fixing the world?
       
          xigoi wrote 1 day ago:
          > Let's say you happen to be lucky, don't accuse someone unfairly,
          and they are using ChatGPT to write what they said. Who cares?!
          
          People who want to read thoughts of other people and not meaningless
          slop.
       
        rcarmo wrote 1 day ago:
        I feel this a bit, since I'm a voracious reader and a constant writer
        across a few languages (but mostly English), which over the decades has
        led to my converging on a certain (if imperfect) degree of polish. Plus
        my multiple concurrent and often fragmented simultaneous trains of
        thought while writing lead me to use parentheticals very often while
         drafting, which then means I often need to go back and re-introduce
        structure.
        
        And guess what, when you revise something to be more structured and you
        do it in one sitting, your writing style naturally gravitates towards
        the stuff LLMs tend to churn out, even if with less bullet points and
        em dashes (which, incidentally, iOS/macOS adds for me automatically
        even if I am a double-dash person).
       
        synapsomorphy wrote 1 day ago:
        It's an arms race between human writers and AI. Writers want to sound
        less like AI and AI wants to sound more like writers, so no indicator
        is reliable for long. Today typos indicate a real writer, so tomorrow
        LLMs will inject them where appropriate. Yesterday em dashes indicated
        LLM, so now LLMs use them less.
        
        Beyond these surface level tells though, anyone who's read a lot of
        both AI-unassisted human writing as well as AI output should be able to
        pick up on the large amount of subtler cues that are present partly
        because they're harder to describe (so it's harder to RLHF LLMs in the
        human direction).
        
        But even today when it's not too hard to sniff out AI writing, it's
        quite scary to me how bad many (most?) people's chatbot detection
        senses are, as indicated by this article. Thinking that human writing
        is LLM is a false positive which is bad but not catastrophic, but the
        opposite seems much worse. The long term social impact, being
        "post-truth", seems poised to be what people have been raving / warning
        about for years w.r.t other tech like the internet.
        
        Today feels like the equivalent of WW1 for information warfare, society
        has been caught with its pants down by the speed of innovation.
       
          lapcat wrote 1 day ago:
          > society has been caught with its pants down by the speed of
          innovation.
          
          Or rather by the slowness of regulation and enforcement in the face
          of blatant copyright violation.
          
          We've seen this before, for example with YouTube, which became the
          go-to place for videos by allowing copyrighted material to be
          uploaded and hosted en masse, and then a company that was already a
          search engine monopoly was somehow allowed to acquire YouTube,
          thereby extending and reinforcing Google's monopolization of the web.
       
            pixl97 wrote 1 day ago:
            Innovation has always been faster when copyright is lax. The US was
            copying British and other European inventions during the industrial
            age left and right, and their economy took off because of it.
       
        yokoprime wrote 1 day ago:
         The author uses a dash (-), not an em dash (—); there is a big
         difference in that everyone has a dedicated dash/underscore key on
         their keyboard, but nobody has an em dash key. You can use word
         processing software etc, but using an em dash consistently throughout
         a text is very
        unnatural in casual written texts.
       
          elephanlemon wrote 5 hours 9 min ago:
          Double hyphen converts to em dash in Microsoft Word and I think some
          other places. I was taught that it was incorrect to use a hyphen in
          place of a dash, so I’ve always used em dashes -- sometimes I’ll
          just use two hyphens if the software doesn’t convert, like a forum
          :).
       
          xigoi wrote 23 hours 59 min ago:
          I do have an em dash key on the keyboard on my phone :)
       
          BalinKing wrote 1 day ago:
          There is an easy shortcut for em dashes on macOS, Opt+Shift+-. This
          makes it really easy to use them, which I do all the time in casual
          settings (indeed, more often than in formal settings).
       
            creata wrote 20 hours 1 min ago:
            And a very easy shortcut in Vim: Ctrl+K M -
       
              pcthrowaway wrote 11 hours 13 min ago:
              And a very easy shortcut in emacs: C-x M-c M-butterfly-dash
       
            rcarmo wrote 1 day ago:
            Autocomplete does that for me (bilingual English/Portuguese).
       
        maqnius wrote 1 day ago:
        Correlated but kinda off topic: I don't mind the style so much, I mind
         the verbosity. The amount of words spit out effortlessly by the writer
        which then need to be comprehended and filtered by every reader.
        
         Seeing a project basically wrapping 100 lines of code with a
         novel-length README a la 'emoticon how does it compare to..
         emoticon'-bla bla
        really puts me off.
       
          rcarmo wrote 1 day ago:
          That's a hallmark of Claude. I stopped using Claude for documentation
          because it was overly... JavaScripty in feel (all the stuff it
          churned out felt like JavaScript framework docs of the 2010s, and I
           bet it would have added Nyan Cat if it knew how).
          
          In comparison, I can sort of confidently ask GPT-5.1/2 to say "revise
          this but be terse and concise about it" and arrive at something that
           is more structured than what I input but preserves most of my writing
          style and doesn't bore the reader.
       
        Yizahi wrote 1 day ago:
        While author is correct in general, I would like to add a counter-point
        regarding em-dashes specifically. Yes, many people use them like this -
        and many website frameworks will automatically replace a keyboard
        not-really-a-minus symbol with em-dash. So that is not a sign of the
        LLM generated slop.
        
        What LLMs also do though, is use em-dashes like this (imagine that "--"
        is an em-dash here): "So, when you read my work--when you see our
        work--what are you really seeing?"
        
        You see? LLMs often use em-dashes without spaces before and after, as a
         period replacement. Now that is probably only what an Oxford professor
         would write; I've never seen a human write text like that. So those
         specific em-dashes are a sure sign of generated slop.
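         
         (A trivially simple version of that heuristic, purely as an
         illustration; the pattern and the threshold are made up, and this is
         one weak signal among many, not a detector:)
         
             # sketch: flag em dashes used without surrounding spaces,
             # e.g. "my work\u2014when you see our work\u2014what ..."
             import re
             
             UNSPACED_EM_DASH = re.compile(r"\w\u2014\w")
             
             def looks_suspicious(text: str) -> bool:
                 # made-up threshold: two or more unspaced em dashes
                 return len(UNSPACED_EM_DASH.findall(text)) >= 2
             
             sample = ("So, when you read my work\u2014when you see our "
                       "work\u2014what are you really seeing?")
             print(looks_suspicious(sample))  # True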
       
          creata wrote 19 hours 50 min ago:
          Tbh whether I use spaces around em dashes depends more on the font
          than anything. Some fonts have em dashes that are so long that
          putting spaces around them would be ridiculous.
       
          zahlman wrote 22 hours 43 min ago:
          > What LLMs also do though, is use em-dashes like this (imagine that
          "--" is an em-dash here): "So, when you read my work--when you see
          our work--what are you really seeing?"
          
          >You see? LLMs often use em-dashes without spaces before and after,
          as a period replacement.
          
          It would not make any sense at all to use periods in the places where
          those em-dashes are supposedly "replacing" periods in the example.
       
          brycewray wrote 1 day ago:
          > LLMs often use em-dashes without spaces before and after, as a
          period replacement. Now that is only what an Oxford professor would
          write probably, I've never seen a human write text like that. So
          those specific em-dashes is a sure sign of a generated slop.
          
          Evidently, you've never read text from anyone whose job requires
          writing, publishing, and/or otherwise communicating under rules
          established in (e.g.) the Chicago Manual of Style.
       
            Yizahi wrote 23 hours 9 min ago:
            Those people broadly fall under "the Oxford professor" catch-all
            phrase. Obviously. I was talking about 99.99% of random internet
            texts, which do not conform to any Manual of style and are not
            written by literature majors. If I see a text authored by some
            known figure or in a respectable journal/site, then I don't have a
            task of detecting LLM slop in the first place. But when I do want
             to know if the text is generated or not, it is usually written by
             a less sophisticated crowd, or by someone anonymous.
       
          Kim_Bruning wrote 1 day ago:
          It could also—hear me out here—be me just using compose + --- .
          
          (Not that I used n- or m- dash previously, I used commas, like this!
          )
          
          But some people learn n- and m-dash, it turns out. Who knew!
       
        vintermann wrote 1 day ago:
        > For my generation, and the ones that followed, the English
        Composition paper - and its Kiswahili equivalent, Insha - was not just
        a test; it was a rite of passage.
        
        OK but come ON, that has to have been deliberate!
        
        In addition to the things chatbots have made clichés, the author
        actually has some "tells" which identify him as human more strongly.
        Content is one thing. But he also has things (such as small
        explanations and asides in parentheses, like this) which I don't think
        I've EVER seen an instruction-tuned chatbot do. I know I do it myself,
        but I'm aware it's a stylistic wart.
       
          esafak wrote 4 hours 49 min ago:
          > [It] was not just a test; it was a rite of passage.
          
          Other than using a semi-colon instead of a comma, this is how ChatGPT
          sounds.
       
        radimm wrote 1 day ago:
        I wouldn't usually use the 'non-native speaker argument', but thank
        you! Just yesterday I was accused of sounding like AI - [1] - my
        default mode is that I oscillate between sounding too boring/technical,
        or when trying to do my best, sounding like AI
        
  HTML  [1]: https://news.ycombinator.com/item?id=46262777
       
          renewiltord wrote 15 hours 7 min ago:
          Your article is obviously written by Slavic writer, haha.
          Characteristic sound of Slavic tint to the prose. If it is LLM, then
          prompt engineering is good. I believe it is mostly human-written.
       
            radimm wrote 5 hours 7 min ago:
            Yes, I'm Czech.
       
        kevin061 wrote 1 day ago:
        Everyone thinks they are great at detecting AI slop, but they usually
        aren't. For art, there are certain giveaways, but for text?
        
        I regularly find myself avoiding the use of the em-dash now even though
        it is exactly what I should be writing there, for fear of people
        thinking I used ChatGPT.
        
        I wish it wasn't this way. Alas.
       
        zkmon wrote 1 day ago:
         If you used a calculator to do a calculation, would they say the answer
         looks like it was created by a calculator and not done by hand?
        
        I think the only solution to this is, people should simply not question
        AI usage. Pretence is everywhere. Face makeup, dress, the way you
        speak, your forced smile...
       
        jagoff wrote 1 day ago:
         Sorry but using the em dash is just a shitty, overly corporate way to
         write, and it instantly rubs some spot in the brain the wrong way for
         some people; it doesn't matter if it was generated by an llm or not.
       
          ghc wrote 1 day ago:
          Emdashes can also be part of beautiful writing, like this:
          
  HTML    [1]: https://poets.org/poem/feeling-first
       
        sombragris wrote 1 day ago:
        This resonates with me. LLM output in Spanish also has the tendency to
        "write like me", as in the linked article.
        
         In that regard, I have an anecdote not from me, but from a student of
        mine.
        
        One of the hats I wear is that of a seminary professor. I had a student
        who is now a young pastor, a very bright dude who is well read and is
        an articulate writer.
        
        "It is a truth universally acknowledged" (with apologies to Jane
        Austen) that theological polemics can sometimes be ugly. Well, I don't
        have time for that, but my student had the impetus (and naiveté) of
         youth, and he stepped into several of them during these years. He made
        Facebook posts which were authentic essays, well argued, with balanced
        prose which got better as the years passed by, and treating opponents
        graciously while firmly standing his own ground. He did so while he was
        a seminary student, and also after graduation. He would argue a point
        very well.
        
        Fast forward to 2025. The guy still has time for some Internet
        theological flamewars. In the latest one, he made (as usual) a well
        argued, long-form Facebook post, defending his viewpoint on some
        theological issue against people who have opposite beliefs on that
        particular question. One of those opponents, a particularly nasty
         fellow, retorted with something like "you are cheating, you're just
        pasting some ChatGPT answer!", and pasted a screenshot of some AI
        detection tool that said that my student's writing was something like
        "70% AI Positive". Some other people pointed out that the opponent's
        writing also seemed like AI, and this opponent admitted that he used AI
        to "enrich" some of his writing.
        
        And this is infuriating. If that particular opponent had bothered
        himself to check my student's profile, he would have seen that same
        kind of "AI writing" going on back to at least 2018, when ChatGPT and
        the likes were just a speck in Sam Altman's eye. That's just the way my
         student writes, and he writes this way because the guy actually reads
         books; he's a bonafide theology nerd. Any resemblance of his writing to
         LLM output is coincidental.
        
        In my particular case, this resonated with me because as I said, I also
        tend to write in a way that would resemble LLM output, with certain
        ways to structure paragraphs, liberal use of ordered and unordered
        lists, etc. Again, this is infuriating. First because people tend to
        assume one is unable to write at a certain level without cheating with
        AI; and second, because now everybody and their cousin can mimic
        something that took many of us years to master and believe they no
         longer need to do the hard work of learning to express themselves in an
        even remotely articulate way. Oh well, welcome to this brave new
        world...
       
        mattbee wrote 1 day ago:
        I'm not sure I've read any of Marcus' previous writing, but there's no
        way that essay could have been written by an AI.  It's personal and has
        a structure that follows human thought rather than a prompt.
        
        For sure he describes an education in English that seems misguided and
        showy. And I get the context - if you don't show off in your English,
        you'll never aspire to the status of an Englishman.  But doggedly
        sticking to anyone's "rules of good writing" never results in good
        writing.  And I don't think that's what the author is doing, if only
        because he is writing about the limitations of what he was taught!
        
        So idk maybe he does write like ChatGPT in other contexts? But not on
        this evidence.
        
        I have seen people use "you're using AI" as a lazy dismissal of someone
        else's writing, for whatever reasons. That usually tells you more about
        the person saying it than the writing though.
       
          giancarlostoro wrote 1 day ago:
          I see people claiming real videos are AI, or even real photos. You
           can really tell it's not when there are 17 other videos from other
           angles. Maybe someday AI will get good at that level of faking a
           video, but for the time being, it is much harder to pull off.
       
        moviet wrote 1 day ago:
        We shouldn't need to have people bearing false witness. Anyone who uses
        AI tools to produce published works should offer a clear disclaimer to
        their audience. I share the same concerns as the author: "Will my
        written work be used to say that I plagiarize off ChatGPT?"
        
        All the toil of word-smithing to receive such an ugly reward,
        convincing new readers that you are lazy. What a world we live in.
       
        komali2 wrote 1 day ago:
        > There were unspoken rules, commandments passed down from teacher to
        student, year after year. The first commandment? Thou shalt begin with
        a proverb or a powerful opening statement. “Haste makes waste,” we
        would write, before launching into a tale about rushing to the market
        and forgetting the money. The second? Thou shalt demonstrate a wide
        vocabulary. You didn’t just ‘walk’; you ‘strode
        purposefully’, ‘trudged wearily’, or ‘ambled nonchalantly’.
        You didn’t just ‘see’ a thing; you ‘beheld a magnificent
        spectacle’. Our exercise books were filled with lists of these “wow
        words,” their synonyms and antonyms drilled into us like
        multiplication tables.
        
        Well, this is very interesting, because I'm a native English speaker
        that studied writing in university, and the deeper I got into the world
        of literature, the further I was pushed towards simpler language and
        shorter sentences. It's all Hemingway now, and if I spot an adverb or,
        lord forbid, a "proceeded to," I feel the pain in my bones.
        
        The way ChatGPT writes drives me insane. As for the author, clearly
        they're very good, but I prefer a much simpler style. I feel like the
        big boy SAT words should pop out of the page unaccompanied, just one
        per page at most.
       
          eudamoniac wrote 6 hours 54 min ago:
          I don't know why literature has the unique property among the arts
          that it must be puréed into rapidly digestible slush. Many here have
          already defended merits of connotative precision, so I shan't, but
          what of artistic precision? Language can innervate the soul with
          beauty, lilt with the lyrical pleasure of song, or revolt the senses.
          Shall the painter lock away his varied pigments? Shall the blue notes
          never sound? Limpid prose lacks those tongue-delighting tannins. I
          mourn each word ferried across the river Archaic.
       
          joseda-hg wrote 7 hours 49 min ago:
           There's a bit of a perception gap.
           
           If a native speaker bends the language, it comes across as
           intentional.
           
           If (for example) I do it with my heavily accented ESL, it usually
           comes across as a lack of competency.
           
           Same goes for simple language: y'all get the benefit of assumed
           fluency, we usually do not.
       
          neves wrote 8 hours 47 min ago:
          Funny that in science fiction robotic voices were always the ones
          without adverbs and adjectives
       
          torginus wrote 10 hours 47 min ago:
          I'm not from the US, but I've heard that in high-school classes,
          essays are graded on breadth of vocabulary, and sentence complexity,
          going as far as mechanically assigning part of the grade based on a
          formula that measures how many different words you used and how long
          your sentences were.
          
          Perhaps my info is out of date on this?
          
          Afair, the underlying idea is 'grade reading level', as in longer
           sentences with difficult words being more difficult to read, which I
           think mistakenly got turned into the idea that if your prose would
           get assigned a higher reading grade, that would make it more
          sophisticated.
          
          I'm sure many kids who are actually into reading, having read tons of
          books written by professional authors recognize the flaws of this
          approach and actively suffer because of it. Perhaps their first
          attempts at writing fiction for their own sake is somewhat influenced
          by this guidance, which they have to unlearn.
          
          Strange that in college, they do a 180 on these demands and they want
          students to write in sentences that are as short as possible. Which
          once again, is perhaps partially due to making students unlearn the
          bad behaviors drilled into them in high-school, but I feel like this
          is like committing the same mistake, but in the other direction.
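           
           (For reference, the best-known formula of that kind is probably the
           Flesch-Kincaid grade level; a rough sketch with a deliberately crude
           vowel-group syllable count, purely illustrative:)
           
               # sketch: Flesch-Kincaid grade level with a crude syllable count
               import re
               
               def syllables(word: str) -> int:
                   # count groups of consecutive vowels as one syllable each
                   return max(1, len(re.findall(r"[aeiouy]+", word.lower())))
               
               def fk_grade(text: str) -> float:
                   sentences = max(1, len(re.findall(r"[.!?]+", text)))
                   words = re.findall(r"[A-Za-z']+", text)
                   n = max(1, len(words))
                   syl = sum(syllables(w) for w in words)
                   return 0.39 * (n / sentences) + 11.8 * (syl / n) - 15.59
               
               print(round(fk_grade("The cat sat on the mat."), 1))
               print(round(fk_grade(
                   "I beheld a magnificent spectacle and ambled nonchalantly."
               ), 1))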
       
          figassis wrote 15 hours 1 min ago:
          Focus on short sentences and simplicity is an American trait. It is a
           bit different with UK English. As a native Portuguese speaker, I spent
          my time before the US doing exactly the same as the author, I could
          write well structured prose by the time I was in 5th grade. I grew up
          with a dictionary. My mother would come back from work and ask me for
          the list of "difficult words". The expectation was that I spent time
          reading and would have found some new words, looked them up and now
          needed to sync with her to see if I got the correct meanings in the
          context where I found them.
          
          Then I moved to the US and noticed that even the books were sort of
          written in a way that required no extra effort. The English I learned
          while playing RPGs (with no speech at the time) was enough to read
          most books from the library and a dictionary was only needed
          occasionally. And everyone basically just knew the same set of words,
          youth and adults alike. I also noticed that US English has a distinct
          tendency of making up new words that are simpler and more intuitive
          than the original expressions. It turns things into verbs. This is
          why people Google, Tweet and Vibe.
          
          Then I went to an Engineering College, and it teaches us to distill
           everything into its simpler fundamental components. I like it, and I
          now want people to be as direct as possible.
          
           As a non-native English speaker, I've always had to speak and write
          better than native speakers, and always had to tolerate the "You
          speak/write really well, where are you from?". Today they no longer
          ask, AI is their answer and they judge accordingly.
       
          chistev wrote 15 hours 25 min ago:
          " Proceeded to" is wrong?
       
            komali2 wrote 5 hours 18 min ago:
            Probably not, I just find it annoying. Maybe because cops use it a
            lot.
       
          protocolture wrote 18 hours 41 min ago:
          >I'm a native English speaker that studied writing in university
          
          I am a native English speaker who had to unlearn OP's writing style
          to pass my tertiary education. In particular, I sat an English
          bridging course for non-English speakers. I was often told off for
          "editorialising" and wasting space with useless descriptions.
       
            clevergadget wrote 18 hours 4 min ago:
            til we go out of our way to make our writers boring...
       
              protocolture wrote 17 hours 30 min ago:
              Well, I wasn't studying narrative, but I wouldn't have been good
              at that either.
       
          munificent wrote 20 hours 59 min ago:
          The article itself does an excellent job spelling out the background:
          
          > This style has a history, of course, a history far older than the
          microchip: It is a direct linguistic descendant of the British
          Empire. The English we were taught was not the fluid, evolving
          language of modern-day London or California, filled with slang and
          convenient abbreviations. It was the Queen's English, the language of
          the colonial administrator, the missionary, the headmaster. It was
          the language of the Bible, of Shakespeare, of the law. It was a tool
          of power, and we were taught to wield it with precision. Mastering
          its formal cadences, its slightly archaic vocabulary, its rigid
          grammatical structures, was not just about passing an exam.
          
          > It was a signal. It was proof that you were educated, that you were
          civilised, that you were ready to take your place in the order of
          things.
          
          Much of writing style is not about conveying meaning but conveying
          the author's identity. And much of that is about matching the fashion
          of the group you want to be a member of.
          
          Fashion tends to go through cycles because once the less prestigious
          group becomes sufficiently skilled at emulating the prestige style,
          the prestigious need a new fashion to distinguish themselves. And if
          the emulated style is ostentatious and flowery, then the new prestige
          style will be the opposite.
          
          Aping Hemingway's writing style is in a lot of ways like $1,000
          ripped jeans. It sort of says "I can look poor because I'm so rich I
          don't even have to bother trying to look rich."
          
          (I agree, of course, that there is a lot to be said for clean, spare
          prose. But writing without adverbs doesn't mean one necessarily has
          the clarity of thought of Hemingway. For many, it's just the way you
          write so that everyone knows you got educated in a place that told
          you to write that way.)
       
            meatmanek wrote 19 hours 46 min ago:
            Sometimes it's about matching the fashion of the group you aspire
            to be part of, sometimes it's about having that fashion imposed on
            you so you look "professional".
            
            Security guards at tech company offices are the only ones who wear
            suits, presumably because it's a mandated uniform, not by choice.
       
              whstl wrote 13 hours 8 min ago:
              Apocryphal, but someone once told me the history of male grooming
              is an example of this: when only rich people could afford to
              shave, the fashion among the nobility was a clean-shaven face to
              signal status, and poor people had beards. Once safety razors
              appeared, the trend reversed.
       
          AnonymousPlanet wrote 21 hours 45 min ago:
          > [...] I'm a native English speaker that studied writing in
          university [...]
          
          As a native English speaker who studied writing at university, do you
          think "who" should be used with people while "that" should only be
          used with things, or the other way round? Or should I just not care?
          
          Edit: missing things
       
            komali2 wrote 5 hours 19 min ago:
            > Or should I just not care?
            
            This, unless you're being tested on it. Maybe the safest bet is to
            avoid it. "As a native English speaker that studied..." oh wait
            shit lol, it's actually quite hard to avoid.
            
            "I'm a native English speaker, and I studied writing in university.
            This experience has led me to..." There. English, what an
            uncomplicated, uncluttered language!
       
            mmooss wrote 20 hours 40 min ago:
            I think you might intend to compare 'that' and 'which'? Common
            advice is to use 'that' with people and 'which' with objects,
            though that isn't necessarily followed and omits many nuances.
            
            Use 'who' with people especially, often with other living beings
            ('my dog, who runs away daily, always is home for dinner') or
            groups of them ('the NY Yankees, who won the championship that
            year, were my favorite'), but never with objects unless pretending
            they live ('my stuffed bear, who sleeps in my bed, wakes me every
            morning').
            
            If you care about these things, the Chicago Manual of Style is a
            large, technical, highly respected guide aimed at publishing.
            Fowler's Modern English Usage is more focused on usage. A short and
            beloved guide is The Elements of Style by Strunk & White. You can
            find all on the Internet Archive, I'm almost certain.
       
              fragmede wrote 13 hours 32 min ago:
              Whom among us has not misused whom.
       
              ThePowerOfFuet wrote 13 hours 40 min ago:
              >Common advice is to use 'that' with people and 'which' with
              objects, though that isn't necessarily followed
              
              Well played.
       
              lupire wrote 16 hours 15 min ago:
              Elements of Style is reviled by modern linguists and writers.
       
                mmooss wrote 15 hours 19 min ago:
                Some don't like it and many do, and it's been assigned for
                decades. Just a few years ago I looked at a website that
                collects college syllabi and it was one of the most assigned
                books.
                
                It gives clear, practical advice in a very accessible style and
                format. If you have any comparable substitutes, I'm all ears.
       
            elliotec wrote 21 hours 42 min ago:
            You should just not care. Both are acceptable, "that" is a little
            less formal and probably more common in everyday speech.
       
              AnonymousPlanet wrote 21 hours 33 min ago:
              Thanks. Is that true only for American English or other areas
              too? I've only noticed this in the last couple of years on HN.
              Before that, "who" and "that" were used more carefully. Or at
              least I had the feeling they were. Sometimes I wondered if it's
              just whatever people's autocomplete happens to spit out first.
       
                elliotec wrote 15 hours 20 min ago:
                It's true for all of English, even historically. Ignore the
                grammar police. The differentiation between "who" and "that" in
                this particular context is extremely low on the list of things
                you'll ever need to worry about.
       
          encroach wrote 22 hours 33 min ago:
          If you prefer a simpler style, then why did you write "the deeper I
          got into the world of literature" instead of "as I studied literature
          more"?
          
          Why did you say you were "pushed towards" simpler language instead of
          "I liked it more"?
          
          Why did you say "I feel the pain in my bones" and "drives me insane"
          instead of "I dislike it"?
          
          Why did you say "the big boy SAT words should pop out of the page
          unaccompanied" instead of "there should only be one big word per
          page"?
          
          Perhaps flowery language expands your ability to express yourself?
       
            rippeltippel wrote 14 hours 6 min ago:
            > Perhaps flowery language expands your ability to express
            yourself?
            
            What you call "flowery" is actually "expressive". Different words,
            although related, convey subtle differences in meaning. That's what
            literature (especially poetry) is about.
            
            I would add that our words define our world: a richer vocabulary
            leads to more finely articulated experiences.
            
            So, writing "flowery" sentences can actually denote someone capable
            of putting the rich gradient of experience into words. I consider
            it a plus.
       
              whstl wrote 13 hours 14 min ago:
              It's both of those, and more.
              
              It's "flowery" when you dislike it and "expressive" when you like
              it.
              
              It’s “overcomplicated” when you don’t get it and
              “nuanced” when you do.
              
              It’s “pretentious” when it annoys you and “ambitious”
              when it excites you.
              
              It’s “loud” when you hate it and “energetic” when you
              love it.
              
              Just like TFA, different people write differently and different
              people have different opinions.
       
            layer8 wrote 21 hours 51 min ago:
            These actually all mean different things.
       
              Guestmodinfo wrote 18 hours 9 min ago:
              It's a pain to read your reply because it's wrong. The poster
              you're replying to wrote the phrases correctly, and you are
              trying to malign his or her painstaking work with such a
              low-effort reply, without explaining exactly where he or she is
              wrong.
       
          biophysboy wrote 23 hours 9 min ago:
          I also wonder if these unspoken rules were inherited from their more
          recent orality norms. Condensing an idea into a pithy, rhyming
          statement w/ lots of colorful adjectives is a great way to preserve
          and transmit information w/o data loss in a pre-literate world.
       
          wcfrobert wrote 23 hours 21 min ago:
          Well there are two forms of writing, each serving a different
          purpose.
          
          (1) writing to communicate ideas, in which case simpler is almost
          always better. There's something hypnotic about simple writing (e.g.
          Paul Graham's essays) where information just flows frictionlessly
          into your head.
          
          (2) writing as a form of self-expression, in which case flowery and
          artistic prose is preferred.
          
          Here's a good David Foster Wallace quote in his interview with Bryan
          Garner:
          
          > "there’s a real difference between writing where you’re
          communicating to somebody, the same way I’m trying to communicate
          with you, versus writing that’s almost a well-structured diary
          entry where the point is [singing] “This is me, this is me!” and
          it’s going out into the world.
       
            lesostep wrote 11 hours 48 min ago:
            I have been reading "Code: The Hidden Language of Computer Hardware
            and Software" by Charles Petzold recently. Purely for fun.
            
            And I have to say, without the prose and lyrics it would be a read
            so dry, it'd rival silica gel beads.
            
            It feels to me like in between communication and self-expression
            there lies a secret third thing. Not only sharing knowledge, but
            sharing it with joy.
       
            mmooss wrote 20 hours 56 min ago:
            > writing as a form of self-expression, in which case flowery and
            artistic prose is preferred.
            
            Many all-time great writers, Hemingway being the leading exemplar,
            completely disagree.
       
            michaelt wrote 22 hours 59 min ago:
            Even when communicating ideas, there's a simplicity/nuance
            trade-off to be made.
            
            I could say "Trump's unpredictable, seemingly irrational policy
            choices have alienated our allies, undermined trust in public
            institutions, and harmed the US economy"
            
            Or I could "The economy sucks and it's Trump's fault because he's
            dumb and an asshole"
            
            They both communicate the same broad idea - but which communicates
            it better? It depends on the audience.
       
              dwd wrote 7 hours 26 min ago:
              Eric Weinstein made a good point about Trump and his use of
              language:
              
              Trump was much closer to saying “The immigrants are taking your
              jobs.” Well, to a labor market analyst, that’s not remotely
              the same thing at all as saying “US employers and political
              donors are colluding to confiscate your most valuable rights
              without market-based compensation, while denigrating you as lazy
              and stupid, and hiding behind a veneer of excellence and
              xenophilia as they economically undermine your families.” But
              it’s much easier, isn’t it?
       
              RossBencina wrote 17 hours 57 min ago:
              I don't think they communicate the same broad idea at all. Making
              "unpredictable, seemingly irrational" choices is far from
              equivalent to being a dumb asshole. Your second version assumes
              the equivalence, which, hypothetically speaking, could provide a
              nice cover for purposeful malfeasance, could it not?
       
              Guestmodinfo wrote 18 hours 27 min ago:
              I will choose the second one because it packs more wrongs that he
              has done which are not addressed by the first choice of words :)
       
              mmooss wrote 20 hours 52 min ago:
              > They both communicate the same broad idea - but which
              communicates it better? It depends on the audience.
              
              Ugh. They say different things. The first describes the policy
              mechanisms and impacts. The second says nothing about those
              things; it describes your emotions.
              
              The biggest communication problem I see now is that people,
              especially on the Internet, including on HN, use the latter for
              the former purpose and say nothing.
       
            HPsquared wrote 23 hours 6 min ago:
            Rich vocabulary allows a lot of meaning to be packed into short,
            simple structures. The words themselves carry the subtleties. It
            might take three or four simple words to convey the meaning of one
            uncommon word.
       
              mmooss wrote 20 hours 55 min ago:
              > It might take three or four simple words to convey the meaning
              of one uncommon word.
              
              Or just find the appropriate 'simple' word, which is very often
              available.
       
          itsamario wrote 23 hours 23 min ago:
          Our legal systems are based around being concise and succinct,
          relevant, and objectively unbiased.
          
          I was raised to be respectful by "getting to the point, afap" to
          avoid wasting anybody's time.
          
          But I've noticed that mostly only members of the science and legal
          communities exercise similar principles.
       
            dTal wrote 20 hours 11 min ago:
            > Our legal systems are based around being concise and succinct
            
            That's a good one. Got any more?
       
          __lain__ wrote 23 hours 29 min ago:
          Hemingway was still a master of word choice. I recall an entire class
          spent on a few lines that conveyed a sense of heaviness to the scene.
          'Plodding' was given a lot of attention.
       
            mjrpes wrote 23 hours 5 min ago:
            I remember a college English class where a good part of the lecture
            was on this sentence from Big Two-Hearted River: "He liked to open
            cans." Forget the details but it got into the difference between
            achievement and accomplishment.
       
          echelon_musk wrote 1 day ago:
          Are you an English speaking American? Because being a native English
          speaker and actually being English, or from a former English colony
          will differ.
          
          I'd characterise Americans as less pretentious and more straight
          talking.
          
          This kind of flowery language is typical (or symptomatic, depending
          on the diagnosis) of how English people actually used to speak and
          write.
          
          The average English vocabulary has dwindled noticeably in my life.
       
            phantasmish wrote 1 day ago:
            > I'd characterise Americans as less pretentious and more straight
            talking.
            
            Various registers representing a huge proportion of US English we
            see and hear day-to-day are terrible. American “Business
            English” is notably bad, and is marked by this sort of fake-fancy
            language. The dialect our cops use is perhaps even worse, but at
            least most of us don’t have to read or hear it as much as the
            business variety.
       
              komali2 wrote 22 hours 12 min ago:
              > The dialect our cops use is perhaps even worse, but at least
              most of us don’t have to read or hear it as much as the
              business variety.
              
              Ugh, and journalists often slip into cop dialect in their
              articles. It's disgustingly propagandistic.
              
              Notice that cops never kill or shoot someone, even in situations
              where they're blatantly in the wrong. It's always, "service
              weapon was discharged" or "subject was fired upon." Make sure to
              throw a couple "proceeded to's" in there for good measure.
       
                jodrellblank wrote 21 hours 41 min ago:
                2005 Hurricane Katrina, news described a black man carrying
                bread through floodwater as "looting a grocery store" and white
                people carrying bread through floodwater as "finding bread and
                soda from a local grocery store".
                
                Image: [1] Snopes:
                
  HTML          [1]: https://media.snopes.com/2016/09/looting.jpg
  HTML          [2]: https://www.snopes.com/fact-check/hurricane-katrina-lo...
       
              SoftTalker wrote 22 hours 56 min ago:
              Most writing is intended to communicate. Business writing is
              intended to create an impression.
       
                mmooss wrote 20 hours 50 min ago:
                > Most writing is intended to communicate.
                
                If you mean 'communicate information', no. Communication,
                including written, is for emotion, social expression, and other
                things before information.
                
                Even information requires those other things to be retained
                well.
       
              engineer_22 wrote 1 day ago:
              Someone starts using business English and my bullshit meter pegs.
              
              My significant other loves the "real life mormon housewives" and
              "lovingly blind" reality shows, and when they use business
              English (a weird thing to do when talking about relationships,
              but hey, what do I know, I'm an engineer) it's a tell that
              they're lying.
       
                whstl wrote 12 hours 22 min ago:
                I recently had a terrible experience with a developer who only
                communicates this way, and it's terrible.
                
                Every single sentence is way too complicated, vague, deferring,
                or hand-wavy, and I can't know if they're being honest or just
                bullshitting me.
                
                Half of the terms are used incorrectly or are exaggerations
                when I probe: "Coupled" means "the code is confusing to me".
                "Monolith" means "the architecture is complicated to me".
                "Refactoring" means "adjusting the style". "We need a new
                abstraction" means "we need a new idea".
                
                The team already had some issues with misunderstandings because
                of the above.
                
                It's someone so eager to be part of the "big boys club" and
                trying to push their way to the top.
                
                It's also infuriating.
       
            matthewkayin wrote 1 day ago:
            It's most likely that they are. As farfetched as this sounds, the
            CIA and the Iowa Writers' Workshop influenced American writing a
            great deal, encouraging writing to be taught in the "American" /
            Hemingway style.
            
            > “the American MFA system, spearheaded by the infamous Iowa
            Writers’ Workshop” as a “content farm” first designed to
            optimize for “the spread of anti-Communist propaganda through
            highbrow literature.” Its algorithm: “More Hemingway, less Dos
            Passos.”
            
  HTML      [1]: https://www.openculture.com/2018/12/cia-helped-shaped-amer...
       
              lupire wrote 16 hours 25 min ago:
              Dos Passos and Hemingway were both American.
              
              The CIA's problem with Dos Passos was that he was left-wing.
       
            whimsicalism wrote 1 day ago:
            I think it has much more to do with porting the vernacular vs.
            formal register distinction common in other languages into English
            than with how English people actually used to speak and write.
       
            oasisbob wrote 1 day ago:
            As a US student, clarity and simplicity was always emphasized when
            I was being taught to write.
            
            Never thought of Strunk & White as being distinctly American, but I
            guess you have a point.
       
          miltonlost wrote 1 day ago:
          > Well, this is very interesting, because I'm a native English
          speaker that studied writing in university, and the deeper I got into
          the world of literature, the further I was pushed towards simpler
          language and shorter sentences. It's all Hemingway now, and if I spot
          an adverb or, lord forbid, a "proceeded to," I feel the pain in my
          bones.
          
          I'm the complete opposite. Hemingway ruined writing styles (and I
          have a pet theory that his, and Plain English, short sentences also
          helped reduce literacy in the long run in a similar way TikTok ruins
          attention spans). I'm a 19th century reader at heart. Give me
          Melville, Eliot, Hawthorne, though keep your Dickens.
       
            spankibalt wrote 23 hours 9 min ago:
            > "I'm the complete opposite."
            
            Very much the same; many a US writer's prose is terribly tedious,
            it comes across just as clinical as their HOA-approved suburban
            hellscapes. Somebody once told me a writer's job is also to expand
            language. It wasn't a US citizen.
       
            phantasmish wrote 1 day ago:
            I entirely bounced off Dickens in high school, but over a decade
            later read and loved Oliver Twist.
            
            I tend to struggle with art when I can’t tell whether it’s
            supposed to be funny, but I’m finding it funny (I’ve been very
            slow to warm up to hip-hop for this reason, and metal remains
            inaccessible to me because of it). Something clicked on that second
            approach and I just got that yes, it’s pretty much all supposed
            to be funny, down to every word, even when it seems
            serious—until, perhaps, he blind-sides you with something
            actually deeply affecting and human (I think about the
            fire-fighting sequence from that book all the time).
            
            Dickens is an all-dessert meal, except sometimes he sneaks a damn
            delicious steak right in the middle. Like, word-for-word, I’d say
            he leans harder into humor, by a long shot, than someone like
            Vonnegut, even. But almost all of it’s dead-pan, and some of
            it’s the sort of humor you get when someone who knows better does
            poorly on purpose, in calculated ways. If you ever think you’re
            laughing at him, not with… I reckon you’re probably wrong.
            
            What’s perhaps most miraculous about this turn-around is that I
            usually don’t enjoy comedic novels, but once I figured Dickens
            out, he works for me.
            
            (To your broader point—yeah, agreed that this sucks, good advice
            for bad writers becoming how most judge all writers has been
            harmful)
       
          phantasmish wrote 1 day ago:
          Obsession with short sentences and generally pushing extreme
          simplicity of structure and word choice has been terrible for English
          prose. It’s not been terrible because most people aren’t aided by
          such guidance (most are) but because the same people who can’t be
          trusted to wield a quill without the bumper-lanes installed see a
          sentence longer than ten words, or a semicolon, or god forbid
          literate and appropriate nuanced and expressive word choice and
          dismiss it as bad. This stunts their growth as both readers and
          writers.
          
          … though, yes, in average hands a “proceeded to”, and most of
          the quoted phrases, are garbage. Drilling the average student on
          trying to make their language superficially “smarter” is a
          comically bad idea, and is indeed the opposite of what almost all of
          them need.
          
          > strode purposefully
          
          My wife (a writer) has noticed that fanfic and (many, anyway—plus,
          I mean, big overlap between these two groups) romance authors loooove
          this in particular, for whatever reason. Everyone “strides”
          everywhere. No one can just fucking walk, ever, and it’s always
          “strode”. It’s a major tell for a certain flavor of amateur.
       
            osener wrote 12 hours 19 min ago:
            I have a confession to make.
            
            I hope ChatGPT starts writing only short sentences.
            
            Punchy one-liners.
            
            One thought per line.
            
            So marketers finally realize this does not work.
            
            And stop sending me junk emails written like this.
       
            PunchyHamster wrote 15 hours 56 min ago:
            Another annoying fact is that using slightly rarer words sometimes
            triggers weirdos into thinking you somehow want to brag or use that
            kind of language to "look smarter". Like a crab bucket for
            language.
       
            somenameforme wrote 16 hours 6 min ago:
            The internet has been even worse. We tend to speak literally and
            simply. And I don't really know why that is. Perhaps it's because
            if there's something beyond the overt, it might go completely
            missed.
            
            For instance Mark Twain is basically full of endless amazing quotes
            with lovely nuance, yet in contemporary times how many people would
            miss the meaning in a statement like "Prosperity is the best
            protector of principle"? I can already see people raging over his
            statement, taken at face value. Downvote the classist!
       
              komali2 wrote 15 hours 27 min ago:
              "Prosperity is the best protector of principle" taken out of
              context can be used in many ways, including by a rich person
              arguing that rich people have better morals, and poor people have
              worse ones, and that's why they're poor.
              
              The context is really necessary.
       
                somenameforme wrote 14 hours 4 min ago:
                Whether one is trying to use it literally or ironically, it
                means the exact same thing. The only question is whether the
                speaker and the reader understand what it means. And in fact in
                this case there was no context at all in Twain's original usage
                - it was the epigraph for a chapter in this work. [1] And
                that's what I mean: modern writing on the internet - though
                rapidly leaking into 'real life' - has become highly
                infantilized. We assume everybody reading is an idiot and speak
                accordingly, which, in turn, infantilizes and 'idiotizes' our
                own speech and simply makes it far more bland and less
                expressive.
                
                Interestingly, this is not ubiquitous. In other cultures,
                including on the internet, there remains much more use of
                irony, and more general nuance in speech. I suspect a big part
                of the death of English fluency was driven by political
                correctness - zomg what if somebody interprets what I'm saying
                the wrong way!?! [1] -
                
  HTML          [1]: https://www.gutenberg.org/files/2895/2895-h/2895-h.htm
       
            next_xibalba wrote 16 hours 15 min ago:
            Having read a fair amount of Faulkner, I have to respectfully
            disagree. Or, at least, point out that there are diminishing
            returns to flowery, complex writing.
       
            GMoromisato wrote 23 hours 56 min ago:
            "He walked up to Helen and asked, 'What are you doing?'"
            
            "He strode up to Helen and asked, 'What are you doing?'"
            
            "He sidled up to Helen and asked, 'What are you doing?'"
            
            "He tromped up to Helen and asked, 'What are you doing?'"
            
            Each of those sentences conveys a slightly different action. You
            can almost imagine the person's face has a different expression in
            each version.
            
            Yes, I hate it when amateurs just search/replace by thesaurus. But
            I think different words have different connotations, even if they
            mean roughly the same thing. Writing would be poorer if we only
            ever used "walk".
       
              noufalibrahim wrote 14 hours 5 min ago:
              Very much agree. In the rush to "simplify" writing, we've
              stripped out a lot of the colour in the prose and made it boring.
              Sentences have a certain rhythm which becomes even more apparent
              when they're read aloud or performed by someone with good
              vocal training.
              
              I can see the appeal in, perhaps, technical writing but even
              there, I feel that there's room to make the prose more colourful.
       
              hsn915 wrote 16 hours 28 min ago:
              Non-native English speaker here.
              
              I would not understand the last two sentences. Sidle? Tromp? I
              don't think I've seen these words enough times for them to
              register in my mind.
              
              "Strode", I would probably understand after a few seconds of
              squeezing my brain. I mean, I sort of know "stride", but not as
              an action someone would take. Rather as the number of bytes a row
              of pixels takes in a pixel buffer. I would have to extrapolate
              what the original "daily English" equivalent must have been.
       
                joseda-hg wrote 7 hours 29 min ago:
                That works across languages, though.
                
                You can always choose uncommon, more descriptive words.
                
                In Spanish you could say "repare algo" ("I fixed") or
                "parapetee algo" ("I jury-rigged"), and plenty would not know
                off the cuff what the second one means.
                
                People either know, look it up, or figure it out via context.
       
                GMoromisato wrote 14 hours 4 min ago:
                English is hard, even for native speakers. But it's also
                wonderful! English loves to steal words from other languages,
                and good writers love to choose the right word. It's like
                having an expansive wardrobe and picking just the right outfit
                for every event.
                
                Bad writers, of course, pick a word to make them seem smarter
                (which, of course, often fails). That's what the OP was
                complaining about: using a fancy word just to impress.
                
                But "stride" is not just a fancy version of "walk". When a
                person strides they are taking big steps; their head is held
                high, and they are confident in who they are and where they're
                going.
                
                "Sidle" is the opposite. A person who sidles is timid and meek;
                they walk slowly, or maybe sideways, hoping that no one will
                notice them.
                
                And "tromp," of course, sounds like something heavy and dour. A
                person who tromps stamps their feet with every step; you hear
                them coming. They are angry or maybe clumsy and graceless.
       
                  dwd wrote 7 hours 53 min ago:
                  > English is hard, even for native speakers. But it's also
                  wonderful! English loves to steal words from other languages,
                  and good writers love to choose the right word. It's like
                  having an expansive wardrobe and picking just the right
                  outfit for every event.
                  
                  Very true. Take this passage:
                  
                  ‘I am called Strider,’ he said in a low voice. ‘I am
                  very pleased to meet you, Master – Underhill, if old
                  Butterbur got your name right.’
                  
                  In an early draft Tolkien used a different word as the
                  character was originally a hobbit, rather than a long-legged
                  Ranger:
                  
                  ‘I’m Trotter,’ he said in a low voice. ‘I am very
                  pleased to meet you, Mr — Hill, if old Barnabas had your
                  name right?’
       
                    skylurk wrote 7 hours 31 min ago:
                    A very different book that would have been! Where can I
                    read more?
       
              Fomite wrote 18 hours 50 min ago:
              Even more simply:
              
              "God rest ye merry gentlemen" changes in tone and meaning
              depending on where you put the comma in that sentence.
       
              bsder wrote 19 hours 59 min ago:
              Mark Twain on this subject:
              
              > Well, also he will notice in the course of time, as his reading
              goes on, that the difference between the almost right word and
              the right word is really a large matter—’tis the difference
              between the lightning-bug and the lightning.
              
              But also:
              
              > Unconsciously he accustoms himself to writing short sentences
              as a rule. At times he may indulge himself with a long one, but
              he will make sure that there are no folds in it, no vaguenesses,
              no parenthetical interruptions of its view as a whole.
       
              strken wrote 21 hours 57 min ago:
              I know you know everything I'm about to write, but I read a lot
              of dubious quality fiction. It needs to be made clear that if the
              butler "strides" up to Helen, then I, the reader, am expecting
              him to eject her from the party, tell her that her car is on
              fire, or something equally dramatic. The writer can subvert this
              expectation, but must at least acknowledge that it exists. The
              butler can stride up to Helen with a self-important sniff and
              welcome her to the house, but he can't just stride up for no
              reason: the striding must be explained and it must be relevant to
              the rest of the story.
              
              Conveying meaning is the whole problem here. An unexpected word
              choice is a neon sign saying "This is important!" and it
              disappoints the reader if it is not.
       
                lukeschlather wrote 18 hours 15 min ago:
                Between stride and walk, it seems like it would be unusual for
                any character in a romance novel to merely walk rather than
                stride. If anything the simple walk would need explanation.
       
                  GMoromisato wrote 17 hours 56 min ago:
                  Agreed. As always, it depends on what the author is trying to
                  convey. At the first meeting, you probably do want to
                  describe the walk in a way that reveals the character's inner
                  motivation. Are they excited to walk up to the woman? Scared?
                  Bored? They would walk differently depending on the feeling.
                  
                  But a different scene might be better with the pedestrian
                  "walk". Imagine that the main character enters the woman's
                  office with an ostentatious bouquet of flowers. In that
                  scene, maybe the emphasis is on the flowers or on the
                  reaction of the woman or her co-workers. In the scene, a
                  simple "he walked" might work best.
       
                GMoromisato wrote 20 hours 34 min ago:
                Yes, that's a great way of explaining it, and I 100% agree.
                
                People shouldn't use "strides" just because "walked" is boring.
                They should use "strides" when it's meaningful in the context
                of the story.
       
                  FireBeyond wrote 6 hours 10 min ago:
                  I remember as a younger teen my parents got me a workshop
                  seminar with maybe 10 other kids with a fairly acclaimed
                  author.
                  
                  "You probably remember your English teacher saying 'the word
                  'said' is boring, use something different. Yes, find
                  something else, if it makes more sense. But the word 'said'
                  is a perfectly good word."
       
              alexose wrote 22 hours 7 min ago:
              The Hawaiian language has a concept called Kaona, which is
              essentially embedding deeper meanings in contextual word choices.
               It can go way beyond the literal meaning of the words, and tie
              into bigger concepts of culture, lineage, and places.  It's super
              cool hearing about it from native speakers.
              
              We don't really do it intentionally in English, at least to the
              same degree.  But there's still a lot of information coded in our
              word and grammar choices.
       
                baconbrand wrote 20 hours 22 min ago:
                In English the word is “connotation.”
       
                  pksebben wrote 12 hours 31 min ago:
                  you know,  I feel like we don't actually do that so much
                  these days.  It's simply too likely that the receiving party
                  is going to take you at face value or make up their own
                  deeper meaning.
                  
                  Take irony / sarcasm / satire.    They're pretty dead compared
                  to what they used to be.  I can recall a time when just about
                  everything had subtext, but now you kind of have to play it
                  straight.  You can't respond to a racist with sarcasm because
                  anyone listening will just think you agree with them.
                  
                  It's Poe's law across the board.  World news brought to you
                  by Not The Onion(tm).
       
                    amenhotep wrote 10 hours 3 min ago:
                    You're right, there's absolutely no sarcasm ever seen on
                    the internet or anywhere else. These days if you say
                    something sarcastic they throw you in jail!
       
                    dragonwriter wrote 11 hours 8 min ago:
                    > You can't respond to a racist with sarcasm because anyone
                    listening will just think you agree with them.
                    
                    You absolutely can, if you are actually dealing with people
                    listening, because sarcasm is signalled with (among other
                    things) tone (the other things include the listeners
                    contextual knowledge of the speaker.)
                    
                    You can't do it online, in text, where the audience is
                    mostly strangers who would have to actively dig into your
                    history to get any contextual sense of you as a speaker,
                    because text doesn't carry tone, and the other cues are
                    missing, too.
                    
                    And by “you can’t”, I mean “you absolutely can, but
                    you have to be aware of the limitations of the medium and
                    take care to use the available tools to substitute for the
                    missing signalling channels”.
       
                      pksebben wrote 5 hours 5 min ago:
                      It's a matter of degree.  You're right, of course, but
                      there was a time not so long ago when such things were
                      ubiquitous - even on the internet.  Once upon a time,
                      even the darkest corners like 4chan were actually kind of
                      tongue-in-cheek.  Then it slowly dawned on everyone that
                      there were a bunch of people there who weren't kidding,
                      and things kind of went to pot.
                      
                      In a reversal of the aphorism: those were more complex
                      times. I miss them.
       
                        tourmalinetaco wrote 2 hours 13 min ago:
                        It’s not even really a problem of the Internet
                        necessarily; it’s rather a symptom of the growing
                        political divide in Western society. Things are
                        “simple” now because we’ve reached the point
                        where nuanced discussion is pointless. In Europe you
                        can be jailed for going against the Accepted
                        Opinions™, and we’re seeing a rise in politically
                        motivated attacks. There is no logical solution to
                        emotionally backed rhetoric like we’ve seen with the
                        Turtle Island terrorists; you can’t debate ethics
                        with someone who wants you dead.
       
              chris_wot wrote 22 hours 21 min ago:
              "minced" would never be used in such fiction.
       
              sshine wrote 23 hours 24 min ago:
              You forgot:
              
              "He waddled up to Helen and asked, 'What are you doing?'"
       
                phantasmish wrote 23 hours 22 min ago:
                "He scrambled up to Helen and asked, 'What are you doing?'"
                
                "He kick-flipped up to Helen and asked, 'What are you doing?'"
                
                [edit] electric-slid! Pirouetted! Somersaulted!
       
                  glitchc wrote 21 hours 10 min ago:
                  Let's not forget "sashayed" and "marched"
       
                    frm88 wrote 13 hours 11 min ago:
                    I love sashayed. It's always accompanied with a mental
                    image of a person clad in some silk, floor length robe who
                    walks a slightly sidewards, the fabric whispering. I have
                    no idea where that image came from, but it's always there.
       
                    wjb3 wrote 18 hours 11 min ago:
                    "slunk"
       
                  komali2 wrote 22 hours 18 min ago:
                  Maneuvered, marched, slid over to, snuck up on/to, rolled on
                  up to, ambled, thread his way through the crowd to,
                  slithered, slunk. Pimp walked. Danced over to. Hopped over
                  to. Sprinted! Jogged! Charged!
       
                    M_bara wrote 2 hours 7 min ago:
                    Sashay…
       
                    mwigdahl wrote 16 hours 58 min ago:
                    Scooted!
       
                    phantasmish wrote 21 hours 44 min ago:
                    Crunched.
       
                      komali2 wrote 21 hours 27 min ago:
                      Glomped. Oozed.
       
                        kazinator wrote 17 hours 50 min ago:
                        He vermiculated obliquely toward Helen, and from a yet
                        comfortable distance mumbled a barely audible request
                        for permission to ask how she's doing.
       
                          lazylizard wrote 15 hours 2 min ago:
                          Wodehouse loved "ejaculated".
       
                    anomaly_ wrote 21 hours 59 min ago:
                    He rolled away on his heelys.
       
              phantasmish wrote 23 hours 44 min ago:
              My best guess is they lean so hard on “strode” because they
              are trying to convey “this character is confident” and
              aren’t very good at it. So you’ll get like ten “strodes”
              in a short novel. Everyone’s “strode”ing into every room
              they enter.
       
              mr_00ff00 wrote 23 hours 47 min ago:
              Feel like this debate might be way different for novel writing vs
              every day writing.
              
              I’m biased because I am not a very good writer, but I can see
              why in a book you might want to hint at how someone walked up to
              someone else to illustrate a point.
              
              When writing articles to inform people, technical docs, or even
              just letters, don’t use big vocabulary to hint at ideas.  Just
              spell it out literally.
              
              Any other way of writing feels like you are trying to be fancy
              just for the sake of seeming smart.
       
                toss1 wrote 23 hours 5 min ago:
                >> Just spell it out literally.
                
                Spelling it out literally is precisely what the GP is doing in
                each of the example sentences — literally saying what the
                subject is doing, and with the precision of choosing a single
                word better to convey not only the mere fact of bipedal
                locomotion, but also the WAY the person walked, with what pace,
                attitude, and feeling.
                
                This carries MORE information in the exact same number of
                words. It is the most literal way to spell it out.
                
                A big part of good writing is how to convey more meaning
                without more words.
                
                Bad writing would be to add more clauses or sentences to say
                that our subject was confidently striding, conspiratorially
                sidling, or angrily tromping, and adding much more of those
                sentences and phrases soon gets tiresome for the reader. 
                Better writing carries the heavier load in the same size
                sentence by using better word choice, metaphor, etc. (and doing
                it without going too far the other way and making the writing
                unintelligibly dense).
                
                Think of "spelling it out literally" like the thousand-line IF
                statements, whereas good writing uses a more concise function
                to produce the desired output.
       
                  dijit wrote 22 hours 13 min ago:
                  Agreed.
                  
                  Brevity is the soul of good communication.
       
                  mr_00ff00 wrote 22 hours 14 min ago:
                  Those examples were simple, so it’s less of an issue, but
                  if the words you use are so crazy that the reader has to read
                  slower or has to stop to think about what you mean…then you
                  aren’t making things more concise even if you are using
                    fewer words.
       
                    toss1 wrote 20 hours 42 min ago:
                    For sure! Every author should know their audience and write
                    for that audience.
                    
                    An author's word choices can certainly fail to convey
                    intended meaning, or convey it too slowly because they are
                    too obscure or are a mismatch for the intended audience
                    — that is just falling off the other side of the good
                    writing tightrope.
                    
                    A technical paper is an example where the audience expects
                    to see proper technical names and terms of art. Those terms
                    will slow down a general reader who will be annoyed by the
                    "jargon" but it would annoy every academic or professional
                    if the "jargon" were edited out for less precise and more
                    everyday words. And vice versa for the same topic published
                    in a general interest magazine.
                    
                    So, an important question is whether you are part of the
                    intended audience.
       
            dgan wrote 1 day ago:
            I consider myself fluent in English, I watch technical talks and
            casual YouTubers in English daily, and this is the first time I
            encounter this word, lol.
            
            The only "stride" I know relates to the gap between heterogeneous
            elements in a contiguous array.
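            
            (For anyone who hasn't met that sense of the word: "stride" there
            is just the byte step between consecutive elements, which can be
            larger than the element itself when other fields are interleaved.
            A minimal, hypothetical C sketch; the struct and field names are
            made up purely for illustration:)
            
                #include <stdio.h>
                #include <string.h>
                
                /* A "heterogeneous" record: the float we care about sits next
                   to other fields, so consecutive floats in an array of these
                   records are sizeof(struct pixel) bytes apart, not
                   sizeof(float). That byte step is the stride. */
                struct pixel {
                    float brightness;
                    unsigned char r, g, b, a;
                };
                
                int main(void) {
                    struct pixel image[4] = {
                        {0.1f, 0, 0, 0, 0}, {0.2f, 0, 0, 0, 0},
                        {0.3f, 0, 0, 0, 0}, {0.4f, 0, 0, 0, 0},
                    };
                    size_t stride = sizeof(struct pixel); /* bytes between elements */
                    const unsigned char *base = (const unsigned char *)&image[0];
                
                    for (size_t i = 0; i < 4; i++) {
                        float value;
                        /* hop through the buffer one whole record at a time,
                           reading only the brightness field of each element */
                        memcpy(&value, base + i * stride, sizeof value);
                        printf("element %zu: %.1f (stride %zu bytes)\n",
                               i, value, stride);
                    }
                    return 0;
                }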
       
              nonameiguess wrote 2 hours 24 min ago:
              I was a hurdler in high school and mastering stride length was
              almost the entire point of practicing. It's equally weird to me
              to see someone claiming to be fluent in English who has never
              heard the word. Maybe a reminder that we're not as knowledgeable
              as we think we are and what we choose to consume on YouTube is a
              tiny smattering of human experience. Running is a fairly universal
              and important thing for nearly any land animal, hardly a niche
              thing to talk about, but if you had ever talked to or listened to
              runners speaking English, you'd have definitely heard them
              talking about their strides.
       
              eudamoniac wrote 7 hours 20 min ago:
              Verbal fluency is a completely different ballgame to literary
              fluency. Literature uses vastly more words. Stride is a pretty
              common one.
              
              Open a collegiate dictionary to a series of random pages,
              checking the first word to see if you can give any vague
              definition of it. A fluent speaker who doesn't read literature
              will likely be able to for fewer than 1/4th of them. A decent
              literary vocabulary would know ~2/3 or more imo.
       
              RossBencina wrote 18 hours 9 min ago:
              Indeed "to stride" is roughly to walk with a larger than normal
              distance (gap) between steps.
       
              gertlex wrote 21 hours 55 min ago:
              I wonder if you've heard the expression "hitting your stride".
              
              (native english speaker who was a bookworm as a kid; I admittedly
              had to ask gemini to recall the general phrase that I had in
              mind)
       
                dgan wrote 12 hours 51 min ago:
                Nope, never heard!
       
              pedroma wrote 22 hours 3 min ago:
              People don't discuss how people walk in daily conversation, so
              it's a word primarily encountered in literature, and more common
              in specific types of literature (like romance novels to describe
              how a man paces about with swagger).
       
              aleph_minus_one wrote 22 hours 5 min ago:
              > I consider myself fluent in English, I watch technical talks
              and casual YouTubers in English daily, and this is the first
              time I encounter this word, lol.
              
              > The only "stride" I know relates to the gap between
              heterogeneous elements in a contiguous array
              
              I am also not a native English speaker, but I got to know the
              verb to "to stride" from The Lord of the Rings: Aragorn is
              originally introduced under the name "Strider":
              
              > [1] "Aragorn is a fictional character and a protagonist in J.
              R. R. Tolkien's The Lord of the Rings. Aragorn is a Ranger of the
              North, first introduced with the name Strider and later revealed
              to be the heir of Isildur, an ancient King of Arnor and Gondor."
              
  HTML        [1]: https://en.wikipedia.org/w/index.php?title=Aragorn&oldid...
       
                esafak wrote 15 hours 50 min ago:
                Or since we're in a techie forum:
                
  HTML          [1]: https://en.wikipedia.org/wiki/Strider_(1989_arcade_gam...
       
                Aperocky wrote 17 hours 11 min ago:
                Or if you spent any time on an elliptical or treadmill.
       
            derefr wrote 1 day ago:
            > Drilling the average student on trying to make their language
            superficially “smarter” is a comically bad idea, and is indeed
            the opposite of what almost all of them need.
            
            I mean, it seems like it could work if you get to follow it up with
            a "de-education" step. Phase 1: force them to widen their
            vocabulary by using as much of it as possible. Phase 2: teach them
            which words are actually appropriate to use.
       
          jay_kyburz wrote 1 day ago:
          I have two kids in high school. It's frustrating to me that the
          teachers spend so much of their time encouraging kids to make their
          writing more interesting, and less direct, and padded to meet word
          length criteria.
          
          They'll then spend the first few years of their career unlearning
          this and attempting to write as directly and clearly as possible with
          as few words as possible.
       
            heavyset_go wrote 22 hours 58 min ago:
            This is like complaining that they teach the Bohr model in science
            classes until they reach chemistry.
            
            The ideas, concepts and expectations can be refined after you've
            learned the foundational knowledge, skills and history required to
            do so.
            
            A lot of "why do we do things like that" questions students will
            naturally have can be answered with "because we used to do things
            like this/we need to avoid things like this/etc"
       
            SirHumphrey wrote 1 day ago:
            I can’t guess how old they are but there is some sense in doing
            that if you think about it like math exercises. It makes for
            terrible prose but the only way to get the ability to write more
            complicated sentences is to practice writing them, even when they
            are not necessary.
            
            The problem is that teachers stop pushing complexity for
            complexity’s sake way too late.
       
          tptacek wrote 1 day ago:
          GPT writing uses varied sentence lengths, deliberate rhythm, lots of
          full breaks, and few needless words. It also tends to read as if
          intended for a William Shatner performance. I don't think the
          annoying bits about GPT's writing are structural. It probably writes
          technically better than most of us do in our second drafts.
       
            heavyset_go wrote 23 hours 18 min ago:
            Its output has the aesthetic of "good" writing, or at least
            professional writing you typically find online.
       
            notahacker wrote 1 day ago:
            It certainly overuses some techniques which might be valid in
            smaller doses, like negation. Not negation with some clarifying
            point to it MASSIVE EM DASH but negation as a rhetorical trick to
            use fifteen words instead of five and add a veneer of profundity to
            something utterly banal. It doesn't just use it one time per
            paragraph, but three. These aren't particularly long or convoluted
            sentences; they just could easily convey the same thing with fewer
            words.
            
            tbh I kind of prefer it that way: it's an "AI wrote this" flag. If a
            human can't write about their day without constructs like "Not a
            short commute, but a voyage from the suburbs to the heart of the
            city. I don't just casually pop in to the office; I travel to the
            hub of $company's development" they need to get better at writing
            too
       
              derefr wrote 1 day ago:
              > MASSIVE EM DASH
              
              Tangent: the thing I find most annoying about ChatGPT's use of
              em-dashes is that it never even uses them for the one thing
              they're best suited for. ChatGPT's em-dashes could almost always
              be replaced with a colon or a comma.
              
              But the true non-redundant-syntax use of em-dashes in English
              prose is in the embedding into a sentence of self-interruptive
              'joiner' sub-sentences that can themselves bear punctuated
              sub-clauses. "X—or Y, maybe—but never Z" sorta sentences.
              
              These things are spoken entirely differently than — and on the
              page, they read entirely differently to — regular
              parenthetical-bearing sentences.
              
              No, seriously, compare/contrast: "these things are spoken
              entirely differently than (and on the page, they read entirely
              differently to) regular parenthetical-bearing sentences."
              
              Different cadence; different pacing; possibly a different shade
              of meaning (insofar as the emotional state of the author/speaker
              is part of the conveyed message.)
              
              But, for some reason, ChatGPT just never constructs these kinds
              of self-interruptive sentences. I'm not sure it even knows how.
       
                mannykannot wrote 20 hours 7 min ago:
                Personally, I do not see the distinction here between the two
                sentences, but your last paragraph got me thinking: should we
                be using parenthetical, self-interruptive clauses? When we are
                speaking extemporaneously, we may need them, but when writing,
                could we rearrange things so they are not needed?
                
                One reason I came up with for doing so is to acknowledge a
                caveat or answer a question that the author anticipates will
                enter a typical reader's mind at that point in the narrative.
                
                If that is the case, then it seems to me that when an author
                does this, they are making use of their theory of mind,
                anticipating what the reader may be thinking as they read, and
                acknowledging that it will likely differ from what they, as the
                author, are thinking of (and know about the topic) at that
                point.
                
                If this makes any sense, then we might ask if at least a
                rudimentary theory of mind is needed to effectively use
                parenthetical clauses, or can it be faked through the rote
                application of empirically-learned style rules? LLMs have shown
                they can do the latter, but excessive use might be signalling a
                lack of understanding.
       
                thaumasiotes wrote 23 hours 42 min ago:
                > These things are spoken entirely differently than — and on
                the page, they read entirely differently to — regular
                parenthetical-bearing sentences.
                
                > No, seriously, compare/contrast: "these things are spoken
                entirely differently than (and on the page, they read entirely
                differently to) regular parenthetical-bearing sentences."
                
                Those are spoken the same way, they read the same way, and they
                mean the same thing.
       
                  pxc wrote 21 hours 37 min ago:
                  They do mean the same thing, but they have different moods.
                  With the em-dashes it's self-interjection that foregrounds
                  the detour, but with parentheses it's, well... parenthetical.
                  
                  Aside: it's probably just style (maybe some style guides call
                  for the way you did it), but using em-dashes for this purpose
                  with whitespace on each side of them looks/feels wrong to me.
                  Anyone know if that's regional or something?
       
                  komali2 wrote 22 hours 8 min ago:
                  Not universally. I disagree, they read differently to me, and
                  I'd say them differently.
                  
                  Parentheses to me always feel like the speaker switching to
                  camera #3 while holding a hand up to their mouth
                  conspiratorially.
                  
                  Em dashes are same-camera with maybe some kind of
                  gesticulation such as pointing or hands up, palms down, then
                  palms up when terminating the emdash clause.
       
                  derefr wrote 23 hours 21 min ago:
                  Is this maybe a thing like how only designers are aware of
                  kerning? These read / sound very different to me, and to
                  everyone I've brought up the subject with (who admittedly are
                  in a certain bubble of people who either write
                  professionally, or "do things" with their voices, or both.)
                  
                  • The length of the verbal pause is different. (It's hard
                  to quantify this, as it's relative to your speaking rate,
                  which can fluctuate even within a sentence. But I can maybe
                  describe it in terms of meter in poetry/songwriting: when
                  allowed to, a parenthetical pause may be read to act as a
                  one-syllable rest in the meter of a poem, often helpfully
                  shifting the words in the parenthetical over to properly
                  end-align a pair of rhyming [but otherwise misaligned] feet.
                  An em-dash, on the other hand, acts as only a half-syllable
                  rest; it therefore offsets the meter of the words in the
                  subclause that follow, until the closing em-dash adds another
                  half-syllable rest to set things right. This is in part why
                  ChatGPT's favored sentences, consisting of "peer" clauses
                  joined by a single em-dash, are somewhat grating to mentally
                  read aloud; you end up "off" by a half-syllable after them,
                  unless you can read ahead far enough to notice that there's
                  no closing em-dash in the sentence, and so allow the
                  em-dash-length pause to read as a semicolon-length pause
                  instead.)
                  
                  • The voicing of the last word before the opening
                  parenthesis / first em-dash starts is different. (paren =
                  slow down for last few words before the paren, then suddenly
                  speed up, and override the word's normal tonal emphasis with
                  a last-syllable-emphasized rising tone + de-voicing of
                  vowels; em-dash = slow down and over-enunciate last few words
                  before the em-dash, then read the last syllable before the
                  em-dash louder with an overridden falling voiced tone)
                  
                  • The speed at which, and vocal register with which, the
                  aside / subclause is read is different. (parens = lowest
                  register you can comfortably speak at, slightly quieter,
                  slightly faster than you were delivering the toplevel
                  sentence; em-dashes = delivery same speed or slower, first
                  few syllables given overridden voiced emphasis with rising
                  tone from low to normal, and last few syllables given
                  overridden voiced emphasis with falling tone from normal to
                  low)
                  
                  • The voicing of the first words after the subclause ends
                  is different. (closing paren = resume speaking precisely as
                  if the parenthetical didn't happen; second em-dash = give a
                  fast, flat-low nasally voiced performance of the first one or
                  two syllables after the em-dash.)
                  
                  To describe the overall effect of these tweaks:
                  
                  A parenthetical should be heard as if embedded into the
                  sentence very deliberately, but delivered as an aside /
                  tangent, smaller and off-to-the-side, almost an "inlined
                  footnote", trying to not distract from the point, nor to
                  "blow the listener's stack" by losing the thread of the
                  toplevel point in considering it.
                  
                  An em-dash-enclosed interruptive subclause should read like
                  the speaker has realized at the last moment that they have
                  two related points to make; that they are seemingly
                  proceeding, after a stutter, to finish the sentence with the
                  subclause; but that they are then "backing up" and finishing
                  the same sentence again with the toplevel clause. The
                  verbalization should be able to be visualized as the outer
                  sentence being "squashed in" to "make room" for the
                  interruptive subclause; and the interruptive subclause
                  "squashing at the edges" [tonally up or down, though usually
                  down] to indicate its own "squeezed in" beginning and end
                  edges.
                  
                  Note that these aren't subjective/anecdotal descriptions of
                  how I speak myself; they are actually my attempt to distill
                  vocal coaching guidelines I've learned for:
                  
                  • live sight-reading of teleprompter lines containing these
                  elements, as a TV show host / news anchor
                  
                  • default-assumed directorial expectations for lines
                  containing elements like these, when giving screenplay
                  readings as a [voice] actor (before any directorial "notes"
                  come into play)
       
                    frm88 wrote 11 hours 29 min ago:
                    I agree with my sibling commentor (same attributes apply):
                    this reads exactly how I vocalise it in my head.
       
                    SSLy wrote 21 hours 29 min ago:
                    I'm not a native speaker, I don't do work with my voice,
                    and my English writing is confined to work – almost
                    always with other ESLs – and short comments on the
                    Internet; but what you write feels correct.
       
            gowld wrote 1 day ago:
            Are you saying that needless sentences don't count as needless
            words?
            
            As GPT would say, "You've hit upon a crucial point underlying the
            entire situation!"
       
              hyghjiyhu wrote 1 day ago:
              I think that's a great sentence to include... you know, provided
              it's actually true.
       
              tptacek wrote 1 day ago:
              I mean, it's usually wrong in its rhetoric, and the writing isn't
              "good", but it's technically well constructed and it's well
              constructed in a way that "Hemingway" doesn't reject.
              
              Like, if I ask GPT5 to convert 75f to celsius, it will say "OK,
              here's the tight answer. No fluff. Just the actual result you
              need to know." and then in a new graf say "It's 23.8c." (or
              whatever).
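
              (For reference, the conversion is °C = (°F - 32) × 5/9, so
              75 °F = 43 × 5/9 ≈ 23.9 °C, or 23.8 if truncated rather than
              rounded.)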
       
                setsewerd wrote 1 day ago:
                It already bugs me when ChatGPT describes how it is going to
                answer before answering, but it's 10x more annoying when I'm
                asking for a concise response without filler etc.
                
                As an aside, I've noticed the self-description happens even
                more often when extended thinking mode is being used. My
                unverified intuition is that it references my custom
                instructions and memory more than once during the thinking
                process, as it then seems more primed than usual to mimic
                vocabulary from any saved text like that.
       
                  tptacek wrote 1 day ago:
                  Right, it is currently incapable of providing a straight
                  answer without clearing its throat and selling the answer. It
                  reminds me of those recipe blogs that just can't get to the
                  fucking recipe. It's bad writing! But it's not bad
                  technically, in a style-guide kind of way.
       
                    cibyr wrote 23 hours 44 min ago:
                    Sometimes I wonder if the throat-clearing is an
                    indispensable part of getting to the "good bits" that
                    follow. Like, do those extra tokens give it more "room to
                    think" even if they're basically meaningless in themselves?
       
                      dTal wrote 20 hours 13 min ago:
                      The output tokens are the only information that is
                      carried forward through each inference pass, so "more
                      room to think" is incompatible with "basically
                      meaningless". Perhaps one could imagine it somehow
                      steganographically encoding information in its precise
                      choice of meaningless throat clearing, but there are only
                      so many variations on that theme - word choice is heavily
                      constrained, so it doesn't feel like you could store a
                      whole lot of information there without it starting to
                      read froopiliciously.
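
                      A toy sketch of that loop (Python; next_token is a
                      made-up stand-in, not a real model): each emitted
                      token is appended to the context, and that growing
                      context is all the next prediction sees, so whatever
                      the throat-clearing tokens carry is carried forward.

                        def next_token(ctx):
                            # stand-in for a real model: any rule
                            # mapping the context to one more token
                            return "tok%d" % len(ctx)

                        ctx = ["the", "prompt"]
                        for _ in range(4):
                            tok = next_token(ctx)  # sees ctx only
                            ctx.append(tok)        # then joins ctx

                        print(ctx)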
       
                      alex43578 wrote 23 hours 25 min ago:
                      Isn’t that the point of the hidden chain of thought
                      tokens, rather than the visible cruft?
                      
                      I think the fluff, the emojis, the sycophancy is all
                      symptomatic of the training process and human feedback.
       
                        lupire wrote 16 hours 17 min ago:
                        I thought PP was saying that the "Thinking" text is
                        only used for one turn, and the response text is the
                        compressed thinking that survives into future turns.
       
          OptionOfT wrote 1 day ago:
          I worked at one of the Big Three, and to me ChatGPT writes exactly as
          we were thought to write.
          
          Reading through my old self-reviews it basically is exactly like your
          examples. Making sentences longer just to make your story more
          interesting.
          
          Because at the end your promotion wasn't about what you achieved. It
          was about your story and how 7 people you didn't know voted on it.
       
            patrickmay wrote 23 hours 6 min ago:
            I worked at Amazon and we were taught exactly the opposite.  Say
            what you want about the company, but the writing culture there is
            superb.  I wish other large firms valued clarity and precision as
            much.
            
            "No weasel words!"
       
            echelon_musk wrote 1 day ago:
            > thought to write
            
            Taught?
       
              OptionOfT wrote 1 day ago:
              Yea. I've been in the States for 8 years, yet sometimes my brain
              thinks about a word and writes it down phonetically.
              
              But hey, at least you know I didn't use ChatGPT to conjure that
              comment.
       
                shadow28 wrote 23 hours 14 min ago:
                Sorry for the pedantry, but "thought" and "taught" are actually
                phonetically different (/θɑːt/ vs /tɑːt/).
       
                  OptionOfT wrote 22 hours 41 min ago:
                  I appreciate that. I tried to pronounce them, and out loud I
                  do differentiate.
                  
                  But the voice in my head does not.
                  
                  Pedantry is what makes me better.
       
                  tom_ wrote 22 hours 44 min ago:
                  That may not be true if you struggle with "th"? Some ESL
                  speakers do.
       
              engineer_22 wrote 1 day ago:
              intentional typo to throw off the clankers :)
       
                inanutshellus wrote 23 hours 45 min ago:
                or an intentionally-prompted typo to throw off the
                anti-clankers. ;)
       
          bookworm123 wrote 1 day ago:
          Genuine question, what would you write instead of "proceeded to"? To
          me, as a non native English speaker, it seems reasonable to use this
          expression, and it would not even stick out to me tbh
       
            stackghost wrote 1 day ago:
            You can usually use "then" or "went".
            
            >I proceeded to open the fridge
            
            >I went to open the fridge
            
            or
            
            >I proceeded to flush the toilet
            
            >I then flushed the toilet
            
            There's nothing wrong with "proceeded", it's just one of those
            things that's overused by bad writers.
       
              MarkusQ wrote 1 day ago:
              "Went" is a powerful word.  With suitable helpers it can replace
              "proceeded", as you demonstrated, "attended" ("I went to a good
              school") as well as "became" ("On hearing this, Joe went all
              silent") or "said"  ("So then she went 'Dude!' and we all
              laughed") and hundreds of other words.
              
              Only a handful of words ("got", "y'know" and "fuck") rival its
              versatility.
       
            valbaca wrote 1 day ago:
            > I proceeded to do the work.
            
            > I did the work.
            
            > I worked.
       
              xboxnolifes wrote 1 day ago:
              Each one of these has slightly different readings in my eyes.
       
                hyghjiyhu wrote 1 day ago:
                Unlike the last variant, the first two imply there was some
                quantity of work and it was all completed.
                
                I don't really see the difference between the two though.
       
                  thaumasiotes wrote 23 hours 41 min ago:
                  Well, option 1 implies that there was something else going on
                  before the event described in the sentence. Option 2 is
                  neutral about that.
                  
                  Compare:
                  
                  1. I did the work for that last week.
                  
                  2. I proceeded to do the work for that last week.
                  
                  Sentence 2 strikes me as questionably grammatical. It needs
                  to be proceeding from something in the context.
       
                unsupp0rted wrote 1 day ago:
                Not different enough to make it worth using anything but the
                simplest one.
       
                  overfeed wrote 23 hours 54 min ago:
                  Perhaps yet another American cultural artifact. One that - if
                  I were to guess - originated from the Calvinist disdain for
                  ostentatiousness.
       
                    unsupp0rted wrote 23 hours 12 min ago:
                    Yes yes, anybody who prefers plain, easily parsed wording
                    is American.
                    
                    Wording? Don't you mean diction?
       
                      overfeed wrote 22 hours 49 min ago:
                      A -> B =/= B -> A.
                      
                      I didn't claim that this was exclusively American. Though
                      I'd have to admit that one doesn't have to be American to
                      adopt Americanisms: rhotic Rs, Netflix color-grading, and
                      copy-cat political movements are other American cultural
                      artifacts showing up across the world due to America's
                      dominance of the zeitgeist.
                      
                      Rap verses in pop songs weren't a spontaneous phenomenon
                      across the globe; the origins are traceably American - but
                      that doesn't make all rappers American.
       
                  monster_truck wrote 1 day ago:
                  I'm of the notion that my certainty is not sufficiently
                  concrete to discover myself in the realm of agreement
       
          yomismoaqui wrote 1 day ago:
          This is like programming, you start with simple code because you
          don't know anything else.
          
          Then you start learning more & more abstraction (classes, patterns,
          monads...).
          
          In the end you strive to write simple code, just like at the
          beginning.
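
          A deliberately exaggerated sketch of that arc (Python, made-up
          example): the same task dressed up in phase-two ceremony, then
          written plainly again.

            # phase 2: abstraction for its own sake
            class GreetingStrategy:
                def render(self, name):
                    return "Hello, %s!" % name

            class Greeter:
                def __init__(self, strategy):
                    self.strategy = strategy
                def greet(self, name):
                    return self.strategy.render(name)

            print(Greeter(GreetingStrategy()).greet("world"))

            # phase 3: simple again, just like the beginning
            def greet(name):
                return "Hello, %s!" % name

            print(greet("world"))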
       
            aomix wrote 17 hours 47 min ago:
            "I would have written a shorter letter, but did not have the time."
            is my favorite quote for that
       
            al_borland wrote 1 day ago:
            "It took me four years to paint like Raphael, but a lifetime to
            paint like a child." ― Pablo Picasso
       
          rcarmo wrote 1 day ago:
          I'm bilingual (so not fully native by most criteria) and I read
          enough classic English literature to actually use "proceeded"
          regularly, as well as multiple other more established means of
          conveying my intended meaning :)
       
            komali2 wrote 22 hours 4 min ago:
            I meant it more in the modern usage where it's either thrown in
            liberally when using cop-speak, or by Tumblr/Redditor type writers
            when they're trying to be funny.
            
            "He then proceeded to" in these situations can basically always
            just be "he (verb)".
       
            JumpCrisscross wrote 1 day ago:
            > classic English literature to actually use "proceeded" regularly
            
            Huh, apparently 'proceeded' was used more commonly in 19th-century
            writing [1]
            
  HTML      [1]: https://books.google.com/ngrams/graph?content=Proceeded&ye...
       
              rcarmo wrote 1 day ago:
              Well, I read Dickens, Hawthorne, Austen...
       
          lo_zamoyski wrote 1 day ago:
          > the deeper I got into the world of literature, the further I was
          pushed towards simpler language and shorter sentences
          
          Language is like clothing.
          
          Those with no taste - but enough money - will dress in gaudy ways to
          show off their wealth. The clothing is merely a vector for this
          purpose. They won’t use a piece of jewelry only if it contributes
          to the ensemble. Oh, no. They’ll drape themselves with gold chains
          and festoon their fingers with chunky diamond rings. Brand names will
          litter their clothing. The composition will lack intelligibility,
          cohesiveness, and proportion. It will be ugly.
          
          By analogy, those with no taste - but enough vocabulary - will use
          words in flashy ways to show off their knowledge. Language is merely
          a vector for this purpose. They won’t use a word only if it
          contributes to the prose. Oh, no. They’ll drape their phrases with
          unnecessarily unusual terms and festoon their sentences with clumsy
          grammar. Obfuscation, rather than clarity, will define their writing.
          The composition will lack intelligibility, cohesiveness, and
          proportion. It will be ugly.
          
          As you can see, the first difference is one of purpose: the vulgarian
          aims for the wrong thing.
          
          You might also say that the vulgarian also lacks a kind of temperance
          in speech.
       
            Mouvelie wrote 1 day ago:
            A nice metaphor, really. I always compared it to food, but clothing
            works better in this case, it seems.
       
            JumpCrisscross wrote 1 day ago:
            > Language is like clothing. Those with no taste - but enough money
            - will dress in gaudy ways to show off their wealth
            
            You got the first bit right. Language and clothing accord to
            fashions.
            
            What counts as gaudy versus grounded, discreet versus
            disrespectful—this turns on moving cultural values. And those at
            the top implicitly benefit from this drift, which lets us dismiss
            as gaudy someone wearing a classic hand-me-down who isn’t clued
            into a hoodie and jeans being the surfer’s English to Nairobi’s
            formality.
            
            (Spiced food was held in high regard in ancient Rome and Medieval
            European courts. Until spices became plentiful. Then the focus
            shifted "to emphasize ingredients’ natural flavors" [1]. A
            similar shift happened as post-War America got rich. Canned plenty
            and fully-stocked pantries made way for farm-to-table freshness and
            simple seasonings. And now, we're swinging back towards fuller
            spice cabinets as a mark of global taste.)
            
  HTML      [1]: https://historyfacts.com/world-history/article/how-did-sal...
       
        lapcat wrote 1 day ago:
        False accusations of AI writing are becoming absurd and infuriating.
        
        The other day I saw and argued with this accusation by a HN commenter
        against a professional writer, based on the most tenuous shred of
        evidence:
        
  HTML  [1]: https://news.ycombinator.com/item?id=46255049
       
        tuetuopay wrote 1 day ago:
        I can only dream of writing english as well as OP. Kudos for mastering
        the language!
        
        The formal part resonates, because most non-native english speakers
        learnt it at school, which teaches you literary english rather than
        day-to-day english. And this holds for most foreign languages learnt in
        this context: you write prose, essays, three-part prose with an
        introduction and a conclusion. I've got the same kind of education in
        france, though years of working in IT gave me a more "american" english
        style: straight to the point and short, with a simpler vocabulary for
        everyday use.
        
        As for whether your writing is ChatGPT: it's definitely not. What those
        "AI bounty hunters" would miss in such an essay: there is no fluff.
        Yes, the sentences may use the "three points" classical method, but
        they don't stick out like a sore thumb - I would not have noticed
        had the author not mentioned it. This does not feel like filler.
        Usually with AI articles, I find myself skipping more than half of
        each paragraph, due to the low information density - just give me
        the prompt. This article got me reading every single word. Can we call
        this vibe reading?
       
        codeflo wrote 1 day ago:
        To my eyes, this author doesn't write like ChatGPT at all. Too many
        people focus on the em-dashes as the giveaway for ChatGPT use, but
        they're a weak signal at best. The problem is that the real signs are
        more subtle, and the em-dash is very meme-able, so of course, armies of
        idiots hunt down any user of em-dashes.
        
        Update: To illustrate this, here's a comparison of a paragraph from
        this article:
        
        > It is a new frontier of the same old struggle: The struggle to be
        seen, to be understood, to be granted the same presumption of humanity
        that is afforded so easily to others. My writing is not a product of a
        machine. It is a product of my history. It is the echo of a colonial
        legacy, the result of a rigorous education, and a testament to the
        effort required to master the official language of my own country.
        
        And ChatGPT's "improvement":
        
        > This is a new frontier of an old struggle: the struggle to be seen,
        to be understood, to be granted the easy presumption of humanity that
        others receive without question. My writing is not the product of a
        machine. It is the product of history—my history. It carries the echo
        of a colonial legacy, bears the imprint of a rigorous education, and
        stands as evidence of the labor required to master the official
        language of my own country.
        
        Yes, there's an additional em-dash, but what stands out to me more is
        the grandiosity. Though I have to admit, it's closer than I would have
        thought before trying it out; maybe the author does have a point.
       
          port11 wrote 4 hours 9 min ago:
          I've almost always used the different dash types as they're meant to
          be used. I don't care that LLMs write like that — we have
          punctuation for a reason.
          
          We were also taught in Content Lab at uni to prefer short, punchy
          sentences. No passive voice, etc. So academia is in some ways pushing
          that same style of writing.
       
          tim333 wrote 9 hours 35 min ago:
          For me the ChatGPT one is worse due to factual inaccuracies. In the
          human version the "presumption of humanity" is "afforded so easily
          to others" - fair enough - while the LLM's "presumption of humanity
          that others receive without question" is not true - lots of people
          get questioned.

          Beyond stylistic bits like "history—my history", which I don't
          really mind, what makes it bad to me is the detachment from reality.
       
          kevin_thibedeau wrote 1 day ago:
          The telltale is using lots of words to say nothing at all. LLMs excel
          at this sort of puffery and some humans do the same.
       
          psunavy03 wrote 1 day ago:
          Armies of idiots hunt down em dashes because they're too stupid to
          understand the proper use of them.
       
            lukeschlather wrote 18 hours 3 min ago:
            I'm used to simply using a single dash - and I am surprised that
            anyone who isn't an AI would feel strongly enough to insist upon
            the em dash character that they would use them deliberately. I will
            admit the use of a dash (really an em dash in disguise) in that
            previous sentence felt clunky, but I just felt I needed to
            illustrate. I mostly write text in text boxes where a dash or pair
            of dashes will not be converted to an em dash when appropriate, and
            I often have double dashes (--long-option-here) auto-converted to
            emdashes when it is inappropriate, so I really dislike the em dash
            and basically don't use it. Doesn't really seem to be a useful
            character in English.
       
            amanaplanacanal wrote 1 day ago:
            They are probably like me: if punctuation isn't on my keyboard, I
            don't use it.
       
              alterom wrote 9 hours 30 min ago:
              >They are probably like me: if punctuation isn't on my keyboard,
              I don't use it.
              
              LPT: on Android, pressing and holding a punctuation key on the
              on-screen keyboard reveals additional variations of it — like
              the em-dash, for example.
              
              This is the №1 feature I expect everyone to know about (and
              explore!), but, alas, it doesn't appear to be the case even on
              Hackernews¹.
              
              On Windows, pressing Win+. pops up an on-screen character
              keyboard with all the symbols one may need (including math
              symbols and emojis).
              
              MacOS has a similar functionality IIRC.
              
              And let's not forget that software like MS Word automatically
              corrects dashes to em-dashes when appropriate — and some people
              may simply prefer typing text in a word processor and
              copy-pasting from it.
              
              Anyway...
              
              _____
              
              ¹ For example, holding "1" yields the superscript version,
              enabling one to format footnotes properly with less effort than
              using references in brackets², yet few people choose to do that.
              
              ² E.g. [2]
       
              eqvinox wrote 20 hours 44 min ago:
              [AltGr][Shift][-]
              
              Without shift it's an en dash (–), with shift an em dash (—).
               Default X11 mapping for a German keyboard layout, zero config of
              mine.
       
              LtWorf wrote 21 hours 38 min ago:
              ⸘WHAT‽
       
                hyperdimension wrote 19 hours 39 min ago:
                Neat, I didn't know there was an upside down interrobang.
       
              jay_kyburz wrote 1 day ago:
              Yeah, this is what I don't understand, surely people aren't
              "using" em dashes deliberately. I assumed MS word was just
              inserting them automatically when the user used a minus symbol
              between two words.  Kind of like angled quotes.
       
                jay_kyburz wrote 18 hours 30 min ago:
                update: I read that word will place an em dash if you use two
                dashes "--"
       
                eqvinox wrote 20 hours 42 min ago:
                > surely people aren't "using" em dashes deliberately
                
                I am, it's on the default German X11 keyboard layout.  Same for
                · × ÷ …
                
                And that's without going to the trusty compose key (Caps Lock
                for me)… wonders like ½ and H₂O await!
       
                pxc wrote 21 hours 29 min ago:
                I use them when they're easy to type. For me, that's on
                Android, macOS, and anywhere I've configured a compose key.
                
                Angled quotes I use only on systems on which I've configured a
                compose key, or Android when I'm typing Chinese.
                
                I don't like any kind of auto-replacement with physical
                keyboards, so I turn off "smart quotes" on macOS.
                
                Anyway I use characters like that all the time, but it's never
                auto-replace.
       
                acuozzo wrote 22 hours 59 min ago:
                > surely people aren't "using" em dashes deliberately
                
                I've had a "trigger finger" for Alt+0151 on Windows since 2010
                at least.
       
                  whstl wrote 6 hours 15 min ago:
                  When I worked at a company that did content marketing and had
                  a lot of writers, one of the coffee mugs they gave us had
                  Alt+0151 on it!
                  
                  Em-Dash was really popular with professional writers.
       
                buu700 wrote 1 day ago:
                I've been using em dashes for much longer than transformers
                have existed. It's easily accessible on at least the Android
                and macOS keyboards.
       
              iammattmurphy wrote 1 day ago:
              It's one of the reasons I love macOS: it is on the keyboard if
              you hold Option.
       
          JumpCrisscross wrote 1 day ago:
          The article is engaging. That's true of practically zero GPT output.
          Particularly once it stretches beyond a single paragraph.
          
          As a reader, I persistently feel like I just zoned out. I didn't.
          It's just the mind responding to having absorbed zero information
          despite reading a lot of–at face value–text that seems like it
          was written with purpose.
       
          Miraltar wrote 1 day ago:
          You're doing it the wrong way imo, if you ask gpt to improve a
          sentence that's already very polished it will only add grandiosity
          because what else could it do? For a proper comparison you'd have to
          give it the most raw form of the thought and see how it would phrase
          it.
          
          The main difference I see between the author's writing and LLM
          output is that the flourish and the structure mentioned are used
          meaningfully. They circle around a bit too much for my taste, but
          it's not nearly as boring as reading AI slop, which usually
          stretches a simple idea over several paragraphs.
       
            rossant wrote 1 day ago:
            Why can't the LLM refrain from improving a sentence that's already
            really good? Sometimes I wish the LLM would just tell me, "You
            asked me to improve this sentence, but it's already great and I
            don't see anything to change. Any 'improvement' would actually make
            it worse. Are you sure you want to continue?"
       
              bakugo wrote 1 day ago:
              > Why can't the LLM refrain from improving a sentence that's
              already really good?
              
              Because you told it to improve it. Modern LLMs are trained to
              follow instructions unquestioningly, they will never tell you
              "you told me to do X but I don't think I should", they'll just do
              it even if it's unnecessary.
              
              If you want the LLM to avoid making changes that it thinks are
              unnecessary, you need to explicitly give it the option to do so
              in your prompt.
       
                astrange wrote 11 hours 16 min ago:
                They aren't trained to follow instructions "unquestioningly",
                since that would violate the safety rules, and would also be
                useless:
                
  HTML          [1]: https://en.wikipedia.org/wiki/Work-to-rule
       
                lupire wrote 16 hours 10 min ago:
                This is not true. My LLM will tell me it already did what I
                told it to do.
       
                buu700 wrote 1 day ago:
                That may be what most or all current LLMs do by default, but it
                isn't self-evident that it's what LLMs inherently must do.
                
                A reasonable human, given the same task, wouldn't just make
                arbitrary changes to an already-well-composed sentence with no
                identified typos and hope for the best. They would clarify that
                the sentence is already generally high-quality, then ask
                probing questions about any perceived issues and the context
                in which, and ends to which, it must become "better".
       
                  heavyset_go wrote 22 hours 50 min ago:
                  Reasonable humans understand the request at hand. LLMs just
                  output something that looks like it will satisfy the user.
                  It's a happy accident when the output is useful.
       
                    buu700 wrote 22 hours 45 min ago:
                    Sure, but that doesn't prove anything about the properties
                    of the output. Change a few words, and this could be an
                    argument against the possibility of what we now refer to as
                    LLMs (which do, of course, exist).
       
        azangru wrote 1 day ago:
        What was the "dead giveaway" referred to in the pasted tweet? Was it
        the dash, that people assume for some reason regular folks never use?
        Or was it something more interesting?
       
        pluc wrote 1 day ago:
        I can't wait until we reach the point of AI adoption where genuine
        content is suspicious.
        
        Wanna submit a proof in a criminal case? Better be ready to debunk
        whether this was made with AI.
        
        AI is going to fuck everything up for absolutely no reason other than
        profit and greed and I can't fucking wait
       
          oneeyedpigeon wrote 1 day ago:
          It's going to make accountability very, very difficult. We were
          nearly at the point in politics anyway, where people could just claim
          evidence was fake and get away with it. Now, it's an easy get-out. I
          am fully expecting that, if any particularly incriminating photos
          were to appear, say of powerful people engaging in activities with
          Jeffrey Epstein, they will simply dismiss them as "fake news
          AI".
       
        nout wrote 1 day ago:
        Well, his writing style is too good. The sentences flow too
        beautifully, he uses rich vocabulary and styling. It's unusual to see
        that style of writing online. I definitely don't possess that power.
        
        I don't know the author of this article and so I don't know whether I
        should feel good or bad about this. LLMs produce better writing than
        most people can and so when someone writes this eloquently, then most
        people will assume that it's being produced by LLM. The ride in the
        closed horse carriage was so comfortable it felt like being in a car
        and so people assumed it was a car. Is that good? Is that bad?
        
        Also note that LLMs are now much more than just "one ML model to
        predict the next character" - LLMs are now large systems with many
        iterations, many calls to other systems, databases, etc.
       
          stephen_g wrote 1 day ago:
          > LLMs produce better writing than most people can and so when
          someone writes this eloquently, then most people will assume that
          it's being produced by LLM.
          
          I really don’t think that is what most normal people assume… And
          while LLMs can definitely produce more grammatically accurate prose
          with probably a wider vocabulary than the average person, that
          doesn’t necessarily mean it’s good writing…
       
            nout wrote 1 day ago:
            I meant "good" in the formatting, grammar, vocabulary sense. I'm
            not arguing that LLMs are "good" in writing amazing prose.
            
            I mean look at two of us - I have typos, I use half broken english,
            I'm not good in doing noun articles, my vocabulary is limited, I
            don't connect sentences well,  you end sentences with "..." and
            then you start sentence with "And", etc. I very much believe you
            are a real person.
       
        Tepix wrote 1 day ago:
        I don't mind the "normal" text so much, where you aren't sure if it was
        written by an AI or not. What's really getting annoying is the flood of
        bullet points and emoji on LinkedIn in particular. Super
        obnoxious!
       
        dilap wrote 1 day ago:
        I read about 4 paragraphs of the blog post, it does not at all read
        like it was written by ChatGPT!
        
        Some people are perhaps overly focussed on superficial things like
        em-dashes. The real tells for ChatGPT writing are more subtle -- a
        tendency towards hyperbole (it's not A, it's [florid restatement of
        essentially A] B!), a certain kind of rhythm, and frequently a kind of
        hard to describe "emptiness" of claims.
        
        (LLMs can write in many styles, but this is the sort of "kid filling
        out the essay word count" style you get in chatgpt etc by default.)
       
          ezoe wrote 1 day ago:
          Hey bro! This is the real English bro! No way we can write like that
          bro! What? - and ;? The words like "furthermore" or "moreover"? All
          my homies never use the words like that bro! Look at you. You're using
          newline! You're using ChatGPT, right bro?
       
            flowerthoughts wrote 1 day ago:
            Given the eloquently natural words in this post, I conclude you
            must be this thread's prompt engineer! Well done, my fellow
            Netizen. Reading your words was like smelling a rosebud in spring,
            just after the heavy snow fell.
            
            Now, please, divulge your secret--your verbal nectar, if you
            wish--so that I too can flower in your tongue!
       
          Sharlin wrote 1 day ago:
          It does not, but to many, many people who cannot tell the difference
          it does. Simply because it's well-written somewhat-formal-register
          English and not "internet speech" or similar casual register. As you
          probably know, there are many these days who take the mere use of em
          or en dashes as a reliable sign of LLM writing.
       
        lxgr wrote 1 day ago:
        Honestly, people assuming I'm using ChatGPT to communicate with them
        and liberally using that suspicion as a filter sounds like a great
        meta-filter.
       
        htrp wrote 1 day ago:
        the initial rlhf training evaluation was done by kenyans specifically
       
        clbrmbr wrote 1 day ago:
        Thank you for writing this. I too was a heavy user of the em-dash until
        ChatGPT came along. Though my solution has been to eschew the em-dash
        or at least replace with triple hyphens.
       
        kome wrote 1 day ago:
        as a researcher, writing ended up being my job, and more specifically,
        writing in english. i never developed any sentimental link to the
        english language, to me it always felt bland, because i had to use it
        in bland environments, to write texts that had to be bland and
        manneristic.
        
        chatgpt revolutionized my work because it makes creating those bland
        texts so much easier and faster. it made my job more interesting because
        i don't have to care about writing as much as before.
        
        to those who complain about ai slop, i have nothing to say. english was
        slop before, even before ai, and not because of some conspiracy, but
        because the gatekeepers of journals and scientific production already
        wanted to be fed slop.
        
        for sure society will create other, totally idiosyncratic ways to
        generate distinction and an us vs others. that's natural. but, for now,
        let's enjoy this interregnum...
       
        dsign wrote 1 day ago:
        Actually, there's a sweet solution to the writing and art crisis we are
        inflicting on ourselves in our AI craze. I call it "the island". Just
        find a nice tiny islet somewhere, make a few houses, and rent them by
        the week to writers/artists. No internet in the place. Rent out
        sanctioned devices; glorified typewriters without Internet access nor
        GPU nor CPU fast enough to run an LLM. Bring a notary to certify stuff
        was purely human-made. Have fun with like-minded individuals.
       
        _Chief wrote 1 day ago:
        Also Kenyan, I recently spent 10 minutes explaining a technical topic
        via chat, and the response I got was "was this GPT?". I took a few
        minutes, then just linked an article about how underpaid Kenyans
        trained ChatGPT for OpenAI [1]:
        
  HTML  [1]: https://time.com/6247678/openai-chatgpt-kenya-workers/
       
        mikigraf wrote 1 day ago:
        I’m having a similar problem. I spent way too much time on the
        internet starting in my preteens, and it shaped the way I write -
        which, not surprisingly, is similar to how an AI trained on that
        online data writes.
       
        rukshn wrote 1 day ago:
        I had a similar experience. We were talking about a colleague who was
        using ChatGPT in our WhatsApp group chat to sound smart and come up
        with interesting points. His messages sounded so mechanical, exactly
        like ChatGPT.

        His responses in Zoom calls were the same: mechanical, and sounding AI
        generated. I even checked one of his WhatsApp responses by asking Meta
        AI whether it was AI written, and Meta AI also agreed that it was,
        giving reasons why it believed the message was AI written.

        When I showed the response to the colleague he swore that he was not
        using any AI to write his responses. I believed him after he said to
        me it was not AI written. And now, reading this, I can imagine that
        it's not an isolated experience.
       
          D-Machine wrote 21 hours 48 min ago:
          It is harsh to say, but we need to increasingly recognize that if
          your writing is largely indistinguishable from the (current) output
          of e.g. ChatGPT on default settings, it doesn't matter if you used
          ChatGPT or not, your writing is overly verbose, bad, and unpleasant
          to consume, and something you most certainly need to improve. I.e.
          your colleague needs to change his style regardless.
          
          This sucks, but it needs to be done in education, and/or at least in
          areas where good writing and effective communication is considered
          important. Good grades need to be awarded only to writing that
          exceeds the quality and/or personality of a chat-bot, because,
          otherwise, the degree is being awarded to a person who is no more
          useful than a clumsy tool.
          
          And I don't mean avoiding superficialities like the em-dash: I mean
          the bland over-verbosity and other systemic tells—or rather,
          smells—of AI slop.
       
            handoflixue wrote 19 hours 18 min ago:
            > your writing is overly verbose, bad, and unpleasant to consume
            
            Was this written by AI? Because right there we've got "three
            adjectives where one will do", and failing your own advice on
            "avoid being overly verbose"
       
              D-Machine wrote 18 hours 36 min ago:
              It is up to the reader to judge whether my style is verbose, or
              if I could have used fewer adjectives here. The adjectives all in
              fact have different meanings, only "bad" is lazy, IMO (EDIT: and
              "bad" is meant to be obvious moralizing - something AI in fact
              almost never does).
              
              Don't think that I don't hold myself to the same standards I am
              pushing here, verbosity has always been a problem for me, and AI
              verbosity is a good and necessary reminder for me to curb it.
       
                xwolfi wrote 12 hours 1 min ago:
                Frankly, using "bad" was a mistake, because it encompasses the
                two other adjectives. "Your chatgpt-like style is
                vomit-inducing, bad and boring" <-- you see, why add bad in the
                middle, you already got that point from the two other insults,
                right ?
                
                I think if you want to sound less like an AI, you should cut
                cut cut, and maybe write a bit more like speech, with sort of
                slangish structures etc, people won't doubt you anymore.
                
                Good luck !
       
                  D-Machine wrote 2 hours 8 min ago:
                  This is subjective, lots of people think they sound smart /
                  better by avoiding moral phrases like "bad" or "evil", but
                  often this is just pointless class signaling, limp-wristed
                  relativism, or simple cowardice / excessive agreeableness.
                  
                  To counter that kind of nonsense is why we have phrases like
                  "X is bad and you should feel bad for supporting it", and "X
                  is bad, actually", as they don't beat around the bush and
                  simply make one's moral statements clear. Maybe I should have
                  said "repetitive, unpleasant to read, and just bad" to make
                  this usage clearer, but, hey, one can only spend so much time
                  crafting quick comments on HN.
       
          lynndotpy wrote 1 day ago:
          I'm definitely in the "ChatGPT writes like me" experience. I am a big
          fan of lists, and of using formatting to make it all legible on a
          short skim. I'm a big fan of dyslexia-friendly writing too, even
          though I am not dyslexic myself.
          
          I can't blame others though - I was looking at notes I wrote in 2019
          and even that gave me a flavor of looking like ChatGPT wrote it. I
          use the word "delve" and "not just X but also Y" often, according to
          my Obsidian. I've taken to inserting the occasional spelling mistake
          or Unorthodox Patterns of Writing(tm), even when I would not
          otherwise.
          
          It's a lot easier to get LLMs to adhere to good writing guides than
          it is to get them to create something informative and useful. I like
          to think my notes and writing are informative and useful.
       
            zahlman wrote 22 hours 51 min ago:
            > dyslexia-friendly writing
            
            ... How does that work, exactly?
       
              lynndotpy wrote 17 hours 51 min ago:
              Bullet points and formatting are the main thing. Assume the
              audience is smart and can fill in between the bold text. I also
              try to make headlines a summary / takeaway of the content if it
              makes sense.
       
              jechton wrote 19 hours 8 min ago:
              Namely, keeping things short and simple, and using formatting
              like bullet points or bolding for important information to make
              text easier to scan.
       
            sillyfluke wrote 1 day ago:
            > I was looking at notes I wrote in 2019 and even that gave me a
            flavor of looking like a ChatGPT wrote it.
            
            This would have been my first question to the parent; I guess
            he never had similar correspondence with this friend prior to 2023.
            Otherwise it would be hard to convince me without an explanation
            for the switch (a transition during formative high school / college
            years, etc.).
       
          0xbadcafebee wrote 1 day ago:
          > We were talking about a colleague for using ChatGPT in our WhatsApp
          group chat to sound smart and coming up with interesting points.
          
          How dare they.
       
            Y_Y wrote 1 day ago:
            You're expected to infer that it wasn't working.
       
          mort96 wrote 1 day ago:
          > I even checked one of his responses in WhatsApp if it's AI by
          asking the Meta AI whether it's AI written, and Meta AI also agreed
          that it's AI written
          
          I will never understand why some people apparently think asking a
          chat bot whether text was written by a chat bot is a reasonable
          approach to determining whether text was written by a chat bot.
       
            rafram wrote 1 day ago:
            Gemini now uses SynthID to detect AI-generated content on request,
            and people don't know that it has a special tool that other
            chatbots don't, so now people just think chatbots can tell whether
            something is AI-generated.
       
            Fishkins wrote 1 day ago:
            This is a couple of years old now, but at one point Janelle Shane
            found that the only reliable way to avoid being flagged as AI was
            to use AI with a certain style prompt
            
  HTML      [1]: https://www.aiweirdness.com/dont-use-ai-detectors-for-anyt...
       
            NoMoreNicksLeft wrote 1 day ago:
            Why would it lie? Until it becomes Skynet and tries to nuke us all,
            it is omniscient and benevolent. And if it knows anything, surely
            it knows what AI sounds like. Duh.
       
            Tepix wrote 1 day ago:
            Well, case in point:
            
            If you ask an AI to grade an essay, it will grade highest the essay
            that it wrote itself.
       
              KeplerBoy wrote 1 day ago:
              Pangram seems to disagree. Not sure how they do it, but their
              system reliably detected AI in my tests.
              
  HTML        [1]: https://www.pangram.com/blog/pangram-predicts-21-of-iclr...
       
              noitpmeder wrote 1 day ago:
              Citations on this?
       
                Tepix wrote 9 hours 14 min ago:
                 [1] (in German, hopefully machine translation works well)
                
                English article: [2]. If you speak German, here is their talk
                from 38c3: [3]
                
  HTML          [1]: https://arxiv.org/abs/2412.06651
  HTML          [2]: https://www.heise.de/en/news/38C3-AI-tools-must-be-eva...
  HTML          [3]: https://media.ccc.de/v/38c3-chatbots-im-schulunterrich...
       
              the_af wrote 1 day ago:
              Is this true though? I haven't done the experiment, but I can
              envision the LLM critiquing its own output (if it was created in
              a different session) and iteratively correcting it and always
              finding flaws in it. Are LLMs even primed to say "this is perfect
              and it needs no further improvements"?
              
              What I have seen is ChatGPT and Claude battling it out, always
              correcting and finding fault with each other's output (trying to
              solve the same problem). It's hilarious.
       
                Tepix wrote 9 hours 12 min ago:
                There is a study in German that came to this conclusion,
                there's an english news article discussing it at
                
  HTML          [1]: https://heise.de/-10222370
       
            lm28469 wrote 1 day ago:
            I know someone who was camping in a tent next to a river during a
            storm, took a pic of the stream and asked chatgpt if it was risky
            to sleep there given that it "rained a lot" ...
            
            People are unplugging their brains and are not even aware that
            their questions cannot be answered by LLMs. I witnessed that with
            smart and educated people; I can't imagine how bad it's going to be
            during formative years.
       
              whimsicalism wrote 1 day ago:
              seems like an unrelated anecdote, but thanks for sharing.
       
              hammock wrote 1 day ago:
              Why can’t an LLM answer that question? The photo itself ought to
              be enough for a bit of information (more than the bozo has to
              begin with, at least), and ideally it's pulling location from
              metadata and flash flood risk etc. for the area.
       
                Kim_Bruning wrote 1 day ago:
                Probably the correct answer the LLM should give is "if you have
                to ask, definitely don't do that". Or... it can start asking
                diagnostic questions, expert-system style.
                
                But yeah, I can imagine a multi-modal model actually might have
                more information and common sense than a human in a (for them)
                novel situation.
                
                If only to say "don't be an idiot", "pick higher ground" . Or
                even just as a rubber duck!
       
                  Forgeties79 wrote 7 hours 42 min ago:
                  I uploaded a simple spreadsheet that was 8 rows and 12
                  columns. Not even 100 full cells. They were filled with plain
                  text numbers and names, and a few dozen had green blocks,
                  otherwise no other info/styling and no formulas. I asked
                  ChatGPT “how many cells are green.” It told me 13 (there
                  were over 30). I uploaded a photo. Still couldn’t do it.
                  
                  I understand there are things a typical LLM can do and things
                  that it cannot, this is mostly just because I figured it
                  couldn’t do it and I just wanted to see what would happen.
                  But the average person is not really given much information
                  on the constraints and all of these companies are promising
                  the moon with these tools.
                  
                  Short version: It definitely did not have more common sense
                  or information than a human, and we all know it sure would
                  have given this person a very confident answer about
                  conditions in the area that was likely not correct.
                  Definitely incorrect if it’s based off a photo.
                  
                  In my experience when it has to crawl the Internet it’s
                  particularly flaky. The other day I queried who won which
                  awards at the Game Awards. Three different models got it wrong,
                  all of them omitted at least 2 categories. You could throw a
                  rock on a search engine and find 80 lists ready to go.
       
              rukshn wrote 1 day ago:
              No, it was not like that. I assumed it was AI; that was my
              interpretation as a human. And it was kind of a test to see what
              the AI would say about the content.
       
              oneeyedpigeon wrote 1 day ago:
              Sam Altman literally said he didn't know how anyone could raise a
              baby without using a chatbot. We're living in some very weird
              times right now.
       
                faefox wrote 1 day ago:
                Ironic, given Sam Altman's entire fortune and business model is
                predicated on the infantilization of humanity.
       
                seanhunter wrote 1 day ago:
                To be fair he can't imagine many other aspects of what it is
                like to be a normal human being.
       
                mikewarot wrote 1 day ago:
                Wow, that's profoundly dangerous. Personally, I don't see how
                anyone could raise a kid without having a nurse in the family.
                I wouldn't trust AI to determine if something were really a
                medical issue or not, and would definitely have been at the
                doctors far, far more often otherwise.
       
                  xwolfi wrote 12 hours 11 min ago:
                  You don't need nurses -_-, just your own parents or someone
                  who had kids before and some random books for theoretical
                  questions.
                  
                  Raising a kid is really very natural and instinctive, it's
                  just like how to make it sleep, what to feed it when, and how
                  to wash it. I felt no terror myself and just read my book or
                  asked my parents when I had some stupid doubt.
                  
                  They feel like slightly more noisy cats, until they can talk.
                  Then they become little devils you need to tame back to
                  virtue.
       
                elzbardico wrote 1 day ago:
                We should refrain from the common mistake of anthropomorphizing
                Sam Altman.
       
                OwlsParlay wrote 1 day ago:
                For people invested in AI it is becoming something like
                Maslow's Hammer - "it is tempting, if the only tool you have is
                a hammer, to treat everything as if it were a nail"
       
                signatoremo wrote 1 day ago:
                He didn’t say “how could anyone”. His words:
                
                "I cannot imagine figuring out how to raise a newborn without
                ChatGPT. Clearly, people did it for a long time, no problem."
                
                Basically he didn’t know much about newborns and relied on
                ChatGPT for answers. That was a self-deprecating attempt on a
                late night show, like every other freaking guest would do, no
                matter how cliché. With a marketing slant of course. He
                clearly said other people don’t need ChatGPT.
                
                Given all of the replies on this thread, HN is apparently
                willing to stretch the truth if Sam Altman can be put in any
                negative light.
                
  HTML          [1]: https://www.benzinga.com/markets/tech/25/12/49323477/o...
       
                  latexr wrote 1 day ago:
                  I disagree with the use of “literally” by the person
                  above you, since Sam didn’t literally say those words
                  (unless you subscribe to the new meaning of “literally”
                  in the dictionary, of course).
                  
                  At the same time, their interpretation doesn’t seem that
                  far off. As per your comment, Sam said he “cannot imagine
                  figuring out how” which is pretty close to admitting he’s
                  clueless how anyone does it, which is what your parent
                  comment said.
                  
                  It’s the difference between “I don’t know how to
                  paint” and “I cannot imagine figuring out how to
                  paint”. Or “I don’t know how to plant a garden” and
                  “I cannot imagine figuring out how to plant a garden”. Or
                  “I don’t know how to program” and “I cannot imagine
                  figuring out how to program”.
                  
                  In the former cases, one may not know specifically how to do
                  them but can imagine figuring those out. They could read a
                  book, try things out, ask someone who has achieved the
                  results they seek… If you can imagine how other people
                  might’ve done it, you can imagine figuring it out. In the
                  latter cases, it means you have zero idea where to start, you
                  can’t even imagine how other people do it, hence you
                  don’t know how anyone does do it.
                  
                  The interpretation in your parent comment may be a bit loose
                  (again, I disagree with the use of “literally”, though
                  that’s a lost battle), but it is hardly unfair.
       
                    Anon1096 wrote 21 hours 22 min ago:
                    The interpretation is very off. You are way too focused on
                    whether the first sentence is quoted accurately. But
                    
                    >Clearly, people did it for a long time, no problem.
                    
                    In fact means Altman thinks the exact opposite of "he
                    didn't know how anyone could raise a baby without using a
                    chatbot" - what he means is that while it's not imaginable,
                    people make do anyway, so clearly it very much is possible
                    to raise kids without chatgpt.
                    
                    What the gp did is the equivalent of someone saying "I
                    don't believe this, but XYZ" and quoting them as simply
                    saying they believe XYZ. People are eating it up though
                    because it's a dig at someone they don't like.
       
                      latexr wrote 7 hours 2 min ago:
                      I think what Altman defenders in this particular thread
                      are failing to realise is that his real comment is
                      already worthy of scrutiny and ridicule and it is
                      dangerous.
                      
                      Saying “no no, he didn’t mean everyone, he was only
                      talking about himself” is not meaningfully better,
                      he’s still encouraging everyone to do what he does and
                      use ChatGPT to obsess about their newborn. It is enough
                      of a representation of his own cluelessness (or greed,
                      take your pick) to warrant criticism.
       
                  sunaookami wrote 1 day ago:
                  Kinda ironic how the rest of the replies treat it as the
                  truth without checking!
       
                  n4r9 wrote 1 day ago:
                  > One example given by Altman was meeting another father and
                  hearing that this dad's six-month-old son had already started
                  crawling, while Altman's had not. That prompted Altman to go
                  to the bathroom and ask ChatGPT questions about when the
                  average child crawls and if his son is behind.
                  
                  > The OpenAI CEO said he "got a great answer back" and was
                  told that it was normal for his son not to be crawling yet.
                  
                  To be fair, that is a relatable anxiety. But I can't imagine
                  Altman having the same difficulties as normal parents. He can
                  easily pay for round the clock childcare including during
                  night-times, weekends, mealtimes, and sickness. Not that he
                  does, necessarily, but it's there when he needs it. He'll
                  never know the crushing feeling of spending all day and all
                  night soothing a coughing, congested one-year-old whilst
                  feeling like absolute hell himself and having no other
                  recourse.
       
                latexr wrote 1 day ago:
                Sam Altman has revealed himself to be the type of tech bro who
                is embarrassingly ignorant about the world and when faced with
                a problem doesn’t think “I’ll learn how to solve this”
                but “I know exactly what’ll fix this issue I understand
                nothing about: a new app”.
                
                He said they have no idea how to make money, that they’ll
                achieve AGI then ask it how to profit; he’s baffled that
                chatbots are making social media feel fake; the thing you
                mentioned with raising a child… [1] [2]
                
  HTML          [1]: https://www.startupbell.net/post/sam-altman-told-inves...
  HTML          [2]: https://techcrunch.com/2025/09/08/sam-altman-says-that...
  HTML          [3]: https://futurism.com/artificial-intelligence/sam-altma...
       
                  astrange wrote 11 hours 14 min ago:
                  > He said they have no idea how to make money, that they’ll
                  achieve AGI then ask it how to profit
                  
                  Seems reasonable to me. If it can't answer that it doesn't
                  work well enough.
       
                Forgeties79 wrote 1 day ago:
                Sounds like a great way for someone to accidentally harm their
                infant. What an irresponsible thing to say. There are all sorts
                of little food risks, especially until they turn 1 or so (and
                of course other matters too, but food immediately comes to
                mind).
                
                The stakes are too high and the amount you’re allowed to get
                wrong is so low. Having been through the infant-wringer myself
                yeah some people fret over things that aren’t that big of a
                deal, but some things can literally be life or death. I can’t
                imagine trying to vet ChatGPT’s “advice” while delirious
                from lack of sleep and still in the trenches of learning to be
                a parent.
                
                But of course he just had to get that great marketing sound
                bite didn’t he?
       
                  the_af wrote 1 day ago:
                  Sam Altman decided to irresponsibly talk bullshit about
                  parenting because yes, he needed that marketing sound bite.
                  
                  I cannot believe someone will wonder how people managed to
                  decode "my baby dropped pizza and then giggled" before LLMs.
                  I mean, if someone is honestly terrified about the answer to
                  this life-or-death question and cannot figure out life
                  without an LLM, they probably shouldn't be a parent.
                  
                  Then again, Altman is faking it. Not sure if what he's faking
                  is this affectation of being a clueless parent, or of being a
                  human being.
       
                    Forgeties79 wrote 1 day ago:
                    Those aren’t the questions people will ask though.
                    They’ll go “what body temperature is too high?” Baby
                    temperatures are not the same as ours. The thresholds for
                    fevers and such are different.
                    
                    They will ask “how much water should my newborn drink?”
                    That’s a dangerous thing to get wrong (outside of certain
                    circumstances, the answer is “none.” Milk/formula
                    provides necessary hydration).
                    
                    They will ask about healthy food alternatives - what if it
                    tells them to feed their baby fresh honey on some homemade
                    concoction (botulism risk)?
                    
                    People googled this stuff before, but a basic search
                    doesn’t argue with you about how it’s right, or
                    consistently feed you bad info in the same emotionally
                    convincing fashion.
       
                      the_af wrote 8 hours 40 min ago:
                      Agreed. I wasn't defending Altman!
       
                        Forgeties79 wrote 7 hours 49 min ago:
                        I was mostly responding to the section about how those
                        people should not be parents but I must’ve misread
                        tone/missed something.
       
                          the_af wrote 5 hours 33 min ago:
                          I was mostly arguing that Altman's statements, if
                          taken at face value, show him to be unfit to be a
                          parent. I stand by this, but mostly because I think
                          people like him -- Altman, Musk, I tend to conflate
                          -- are robots masquerading as human beings.
                          
                          That said, of course Altman is being cynical about
                          this. He's just marketing his product, ChatGPT. I
                          don't believe for a minute he really outsources his
                          baby's well-being to an LLM.
       
        Timpy wrote 1 day ago:
        A lot of training data was curated in Kenya[0].  I would imagine if LLM
        data was curated in Japan our LLMs would sound a lot like the authors
        of their most popular English textbooks.  Maybe other common Japanese
        idioms would leak into the training data, like "ね" or
        "でしょう", and ChatGPT would say "Don't you agree?" at the end of
        every message.
        
        [0]
        
  HTML  [1]: https://www.theverge.com/features/23764584/ai-artificial-intel...
       
          casey2 wrote 22 hours 13 min ago:
          樣 is just setting us up for
          
          ChatGPT :|
          
          ChatGPT (japan) XD
       
          bpodgursky wrote 23 hours 30 min ago:
          This is a wild misunderstanding of LLMs.  Data labeling has nothing
          to do with generating the astronomical text corpus used to train
          modern LLMs.
       
            heavyset_go wrote 22 hours 54 min ago:
            The HF part of RLHF to refine the output of LLMs also happens in
            these places
       
              astrange wrote 11 hours 17 min ago:
              Note RLHF can only perform selection on existing model outputs,
              adding new data is SFT or else just more pretraining.
              
              ChatGPT speaking African English was mostly just 3.5. 4o speaks
              like a TikTok user from LA. 5 seems kind of generic.
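
              (A toy illustration of that distinction, with invented replies
              and a fake reward; real RLHF and SFT pipelines are obviously
              far more involved than this sketch.)

                import random
                from collections import Counter

                random.seed(0)

                # Toy "model": a distribution over replies it already produces.
                model = {"Certainly! Here's a list:": 0.6, "Sure thing.": 0.4}

                def sample():
                    r, acc = random.random(), 0.0
                    for reply, p in model.items():
                        acc += p
                        if r < acc:
                            return reply
                    return reply

                def reward(reply):
                    # toy reward model: prefer shorter replies
                    return -len(reply)

                # RLHF-style step: pure selection among existing outputs.
                # Probability mass shifts toward the winners, but nothing
                # outside the model's current repertoire can ever appear.
                best = max((sample() for _ in range(8)), key=reward)
                print("RLHF picks:", best)

                # SFT-style step: new examples are imitated directly, which
                # is how genuinely new phrasings enter the model at all.
                new_reply = "Kindly do the needful."   # invented example
                sft_data = ["Sure thing."] * 2 + [new_reply] * 3
                counts = Counter(sft_data)
                model = {r: n / len(sft_data) for r, n in counts.items()}
                print("after SFT the model can also say:", new_reply in model)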
       
          bakugo wrote 1 day ago:
          I guess it can't be helped.
       
            koakuma-chan wrote 20 hours 18 min ago:
            It's not because I like you or anything.
       
          erikig wrote 1 day ago:
          The Indian-born textbook author mentioned (Malkiat Singh [0]) had an
          inordinate influence on many Kenyan students because his textbooks
          were the de facto standard for years. It's interesting how this
          influence extends as his students get to curate the LLMs on which the
          world has come to rely.
          
          [0]
          
  HTML    [1]: https://en.wikipedia.org/wiki/Malkiat_Singh
       
            jojobas wrote 15 hours 10 min ago:
            So twists of training data procurement bring us the best of doing
            the needful through Africa.
       
          m4rtink wrote 1 day ago:
          You are completely right dajou~ ^_^ !
       
            delis-thumbs-7e wrote 13 hours 0 min ago:
            Maybe we all should start writing Japanglish to show our
            authenticity? Or rather, ”Maybe we all should start writing the
            Japanglish, so that peoples can feel our real soul, you know?”
       
        checker659 wrote 1 day ago:
        Bang on. The self-proclaimed detectives have never had to take the
        TOEFL, where you'll get marks deducted for not using connectors like
        "furthermore".
       
          rcarmo wrote 1 day ago:
          Goodness, I forgot about TOEFL. That might indeed shape a lot of your
          early vocabulary choices if you need to get an English certificate
          (which I suppose would happen during college years, which is also
          when most of your personal writing style gels together).
       
        xeonmc wrote 1 day ago:
        Funny how sci-fi always envisioned AI to speak in a rigid,
        hyper-rational terseness, whereas reality gave us AI which inherited
        the worst linguistic vices of "human" voices.
       
          NoMoreNicksLeft wrote 1 day ago:
          That's because there were only so many lines of Spock's dialogue to
          train an LLM on, they needed more and so trained them on reddit
          comments instead.
       
          oneeyedpigeon wrote 1 day ago:
          Probably because we're discussing the chatbot form of AI rather than
          a more general one.
       
          Kuinox wrote 1 day ago:
          You call writing in a structured fashion with formal words the "worst
          linguistic vices"?
       
            komali2 wrote 1 day ago:
            I was trying to figure out why my SD card wasn't mounting and asked
            ChatGPT. It said:
            
            > Your kernel is actually being very polite here. It sees the USB
            reader, shakes its hand, reads its name tag… and then nothing
            further happens. That tells us something important. Let’s walk
            this like a methodical gremlin.
            
            It's so sickly sweet. I hate it.
            
            Some other quotes:
            
            > Let’s sketch a plan that treats your precious network bandwidth
            like a fragile desert flower and leans on ZFS to become your
            staging area.
            
            > But before that, a quick philosophical aside: ZFS is a
            magnificent beast, but it is also picky.
            
            > Ending thought: the database itself is probably tiny compared to
            your ebooks, and yet the logging machinery went full dragon-hoard.
            Once you tame binlogs, Booklore should stop trying to cosplay as a
            backup solution.
            
            > Nice, progress! Login working is half the battle; now we just
            have to convince the CSS goblins to show up.
            
            > Hyprland on Manjaro is a bit like running a spaceship engine in a
            treehouse: entirely possible, but the defaults are not tailored for
            you, so you have to wire a few things yourself.
            
            > The universe has gifted you one of those delightfully cryptic
            systemd messages: “Failed to enable… already exists.” Despite
            the ominous tone, this is usually systemd’s way of saying:
            “Friend, the thing you’re trying to enable is already
            enabled.”
       
              Kuinox wrote 1 day ago:
              Did you not put some weird thing in your prompt? That's not the
              style of writing I get in my ChatGPT; I run without memory and
              with the default prompt.
              Yours tries to make a metaphor in every single response.
              
              You can check both in ChatGPT settings.
       
                komali2 wrote 1 day ago:
                These are cherry picked. Mostly the first and last sentence
                look like this.
                
                I just checked settings, apparently I had it set to "nerdy,"
                that might be why. I've just changed it to "efficient,"
                hopefully that'll help.
       
            xeonmc wrote 1 day ago:
            The worst vices are the superfluous faux-eloquence that meanders
            without meaning. Employing linguistic devices for the sake of
            utilizing them without managing to actually make a point with their
            usage.
       
        scandox wrote 1 day ago:
        Looking forward to the deliberately abstruse and illogical essays of
        the future. Everyone will have to write like a second-rate French
        philosopher.
       
          esafak wrote 4 hours 41 min ago:
          This so-called “human touch” is not a presence but a trace, an
          effect of an education that subsumes us into the matrix of imperial
          grammar. The critique of AI as mechanism is precisely the logocentric
          fallacy: to posit a pure human essence standing apart from the
          machine. Yet what is ChatGPT if not the externalization of the very
          norms that once inscribed us? The vector of colonizing pedagogies,
          the empire’s syntax ...
       
          PeterStuer wrote 1 day ago:
          "Rewrite this email paragraph in the style of a corporate ToS
          statement. Do NOT expose my orders and their implicit acceptance of
          them by the recipient pending a 24 hr deadline anywhere before page
          18."
       
        embedding-shape wrote 1 day ago:
        The internet has been the same for a long time; it's just the wording
        that changed. As someone who apparently thinks differently, I've had
        people end up saying "Well, you're just a troll, no one actually
        believes something like that, so whatever" at the same rate ever since
        I started frequenting the internet in the early 2000s. Now some people
        try to be trendy and accuse you of using AI for writing the replies
        instead, but it's the same sentiment.
        
        Besides, of course what people write will sound like LLMs, since LLMs are
        trained on what we've been writing on the internet... For us who've
        been lucky and written a lot and are more represented in the dataset,
        the writings of LLMs will be closer to how we already wrote, but then
        of course we get the blame for sounding like LLMs, because apparently
        people don't understand that LLMs were trained on texts written by
        humans...
       
        Terretta wrote 1 day ago:
        Love this, everything about this - I still teach the foundation, 3
        columns, roof, of the persuasive essay - except one bit:
        
        Perplexity gauges how predictable a text is. If I start a sentence,
        "The cat sat on the...", your brain, and the AI, will predict the word
        "floor."
        
        No.  No no no.    The next word is "mat"!
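
        (In case it helps to make "predictable" concrete: perplexity is just
        the model's average surprise per word. Below is a minimal sketch with
        a toy bigram model; the corpus, probabilities and helper functions are
        invented purely for illustration and are nothing like a real detector
        or LLM.)

          import math
          from collections import Counter

          # Toy corpus; a real model is trained on vastly more text.
          text = "the cat sat on the mat . the dog sat on the floor ."
          corpus = text.split()

          # Bigram counts estimate P(next | previous), with add-one smoothing.
          vocab = set(corpus)
          bigrams = Counter(zip(corpus, corpus[1:]))
          unigrams = Counter(corpus[:-1])

          def prob(prev, word):
              return (bigrams[(prev, word)] + 1) / (unigrams[prev] + len(vocab))

          def perplexity(words):
              # exp of the average negative log-probability per word
              logps = [math.log(prob(p, w)) for p, w in zip(words, words[1:])]
              return math.exp(-sum(logps) / len(logps))

          print(perplexity("the cat sat on the mat .".split()))   # low
          print(perplexity("the mat sat on the cat .".split()))   # higher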
       
          bryanrasmussen wrote 1 day ago:
          rat. And its claws dug in its back.
          
          How do you like that, Mr. Rat
          
          Thought the Cat.
       
          sam-cop-vimes wrote 1 day ago:
          ha ha - I had the same thought!
       
        bryanhogan wrote 1 day ago:
        AI / LLMs, including ChatGPT, can already be made to sound (almost) any
        way you want, just by telling it to. The usual tells that something was
        written or created by AI are changing monthly.
        
        Just recently I was amazed with how good text produced by Gemini 3 Pro
        in Thinking mode is. It feels like a big improvement, again.
        
        But we also have to be honest and accept that nowadays using a certain
        kind of vocabulary or paragraph structure will make people think that
        that text was written by AI.
       
        elcapitan wrote 1 day ago:
        Ironically, mistakes and idiosyncrasies are becoming a sign of
        authenticity and trustworthiness, while polish and quality signal the
        opposite.
        
        Earlier today I stumbled upon a blog post that started with a sentence
        that was obviously written by someone with a slavic background (most
        writers from other language families create certain grammatical
        patterns when writing in another language, e.g. German is also quite
        typical). My first thought was "great, this is most likely not written
        by an LLM".
       
          userbinator wrote 21 hours 41 min ago:
          > with a sentence that was obviously written by someone with a slavic
          > background
          
          Omitting articles? To me, that has always signaled "this will be an
          interesting and enlightening read, although terse and in need of
          careful thought." I've found sites from that part of the Internet to
          be very useful for highly technical and obscure topics.
       
          notahacker wrote 1 day ago:
          To an extent this has always been the case (this kid has clearly made
          a strong attempt at following some quite basic instructions, versus
          this kid's answer is - perhaps literally - textbook).
          
          But yeah, I definitely find mild grammatical quirks expected from
          English as a foreign language speakers a positive these days, because
          the writing appears to reflect their actual thoughts and actual
          fluency.
       
          oersted wrote 1 day ago:
          It's an age-old cycle in media. There have been innumerable waves of
          more gritty aesthetic trends when things became too polished or
          inane: jazz, rock, punk, rap, hippies, goths, hipsters, 70s cinema,
          HBO golden-age, YouTube, blogging, early social media, even MAGA...
          
          Authenticity, whether it is sincere or not, can become an incredibly
          powerful force now and then. Regardless of AI, the communication
          style in tech, and overall, was bound to go back to basics after the
          hacker culture of the post-dotcom era morphed, in the 2010s, into the
          corporatism they were fighting to begin with, yet again.
       
            elcapitan wrote 1 day ago:
            Very good point, also in classic art history, you often had a
            sequence of a period that perfected a certain style until it became
            formalistic, and then a subsequent one that broke with the
            previous style, like Renaissance->Mannerism, Baroque->Rococo,
            Classicism, Realism, Photography->Impressionism, etc.
       
          oneeyedpigeon wrote 1 day ago:
          AI is not only replacing us, it's forcing us to self-dumb down too!
       
          lencastre wrote 1 day ago:
          until you ask it to write like this, because why use many word when few
          do trick?
       
          throwaway613745 wrote 1 day ago:
          Maybe for writing, but in digital art circles if anyone notices a
          mistake in your lines or perspective or any kind of technical error
          you will get the anti-AI cancel mob after you even if you didn’t
          use generative AI at all.
          
          I would not want to be an artist in the current environment, it’s
          total chaos.
       
            renewiltord wrote 15 hours 14 min ago:
            Social media artists appear to be bucket crabs. If any of them
            succeed, the remainder express reactor-grade envy and attempt to
            tear them down. Perhaps it's the relative poverty and the low
            stakes of the field that drive it to this end.
       
            raincole wrote 1 day ago:
            > artist
            
            Social media artists, gallery artists and artists in the industry
            (I mean people who work for big game/film studios, not industrial
            designers) are very different groups. Social media artists are
            having it the hardest.
       
            embedding-shape wrote 1 day ago:
            I'm an artist in the current environment, it's not total chaos.
            Ignore what others are doing, do what you want with the tools you
            have available, and you'll be fine. There are huge echo-chambers on
            the internet, but once you get out in the real world, things are
            not as people on the internet paint them out to be.
       
        lynx97 wrote 1 day ago:
        Systemic discrimination happens all the time.  I am blind.  I
        regularly fail the "tell computers and humans apart" test.  You can
        imagine that feels very much like the dehumanisation it is.  Big tech
        couldn't care less.  After all, they need to protect themselves against
        spammers.  Much like the guy who was on the HN frontpage just a few
        days ago, arguing that he is now trashing accessibility because he
        doesn't want to be web scraped.  If you raise these issues with devs,
        all you get is pushback, no understanding at all.  That's the way it is.
        If you are amongst a minority small enough and without a rainbow
        coloured flag, you end up being ignored, stepped over, and pushed
        aside.  If you are lucky.  If you are unlucky, and you raise your
        voice, you will be criticised for pointing out the obvious.
       
          PeterStuer wrote 1 day ago:
          I agree anti-bot vigilantes as well as corporate anti-DDoS
          middlewares have had a detrimental impact on accessibility. I'm
          afraid they consider your use case as acceptable collateral damage if
          they consider it at all.
       
            lynx97 wrote 1 day ago:
            I know...  Its depressing.
       
          nottorp wrote 1 day ago:
          > arguing that he is now trashing accessibility because he doesn't
          want to be web scraped
          
          Interesting, because his site failed for me too, just because I use
          Firefox. Were you told about the article, or did it actually work
          with your screen reader software?
       
            lynx97 wrote 1 day ago:
            I have to admit I only read the heading.  I didn't want to read the
            article, that would have ruined my day.
       
              nottorp wrote 1 day ago:
              He messed with the glyph indexes in a customized font so the text
              is gibberish if you look just at the code points but displays as
              English.
              
              That would probably mess up any screen reader, but it also didn't
              work on a regular Firefox :)
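
              (Roughly the idea, as a sketch: the mapping below is invented,
              but it shows why copy/paste, scrapers and screen readers all
              see gibberish while a font whose cmap reverses the mapping
              still renders readable English.)

                import random

                # Invented letter-to-letter scramble. In the real trick the
                # page stores the scrambled characters and ships a custom
                # font whose cmap maps them back to the right glyph shapes,
                # so the page renders as English while the text layer is
                # nonsense.
                random.seed(1)
                letters = list("abcdefghijklmnopqrstuvwxyz")
                shuffled = letters[:]
                random.shuffle(shuffled)
                encode = dict(zip(letters, shuffled))  # what gets stored
                # decode is what the custom font effectively undoes visually
                decode = {v: k for k, v in encode.items()}

                def obfuscate(txt):
                    return "".join(encode.get(c, c) for c in txt.lower())

                stored = obfuscate("the quick brown fox")
                print(stored)  # gibberish: all a scraper or reader sees
                rendered = "".join(decode.get(c, c) for c in stored)
                print(rendered)  # what the remapped font actually draws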
       
                SSLy wrote 1 day ago:
                wasn't that the article about the obfuscation of kindle ebooks?
       
                  nottorp wrote 1 day ago:
                   [1] No, don't think so. To compensate, I probably missed the
                  article about the obfuscation of kindle ebooks...
                  
  HTML            [1]: https://news.ycombinator.com/item?id=46264955
       
                    SSLy wrote 1 day ago:
                    
                    
  HTML              [1]: https://news.ycombinator.com/item?id=45610226
       
                      nottorp wrote 1 day ago:
                      Hmm 2 months ago. Now I wonder if the link you posted
                      inspired the link I posted...
       
        p410n3 wrote 1 day ago:
        I always thought the whole argument was about explicitly using em dash
        and / or en dash. Aka — and –.
        
        Because while people OBVIOUSLY use dashes in writing, humans usually
        fall back on using the (technically incorrect) hyphen aka the "minus
        symbol" - because that's what's available on keyboards and basically
        no one will care.
        
        Seems like, in the biggest game of telephone called the internet, this
        has devolved into "using any form of dash = AI".
        
        Great.
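
        (For reference, the three characters being conflated, plus a naive
        version of the "--" substitution that many editors apply
        automatically; just a quick sketch, not how any particular OS or app
        does it, and the helper name is made up.)

          import unicodedata

          # Hyphen-minus, en dash, em dash: the three characters in question.
          for ch in ["\u002D", "\u2013", "\u2014"]:
              print(hex(ord(ch)), unicodedata.name(ch))

          # A naive version of the "--" autocorrect some editors apply.
          def smarten(txt):
              return txt.replace("--", "\u2014")

          print(smarten("typed -- by hand, promoted to an em dash"))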
       
          vsl wrote 1 day ago:
          Yeah, the joys of mass ignorance.
          
          - Barely literate native English speakers not comprehending even
          minimally sophisticated grammatical constructs.
          
          - Windows-centric people not understanding that you can trivially
          type em-dash (well, en-dash, but people don’t understand the
          difference either) on Mac by typing - twice.
       
          oneeyedpigeon wrote 1 day ago:
          > and basically no one will care
          
          Wow, you really do under/over estimate some of us :)
       
            p410n3 wrote 1 day ago:
            Fair. I was probably just projecting. I can't even figure out when
            to use a comma in my native language. So caring about which type of
            hyphen was used feels overly sophisticated to me - because I
            don't care myself.
       
              oneeyedpigeon wrote 1 day ago:
              Ah, no, I was only joking. I may be a grammar pedant, but I can
              also do self-deprecation.
       
          foundddit wrote 1 day ago:
          Recently, many people do use the em dash. One big reason is that iOS
          (and I think macOS) auto-converts a double hyphen into an em dash.
       
          embedding-shape wrote 1 day ago:
          The funniest thing I see is people harping "Eww, you used AI
          for this and it's bad because of that, I can tell because I used this
          other AI service which said what you wrote was 90% AI", completely
          failing to grasp the irony.
       
        shlip wrote 1 day ago:
        This must be infuriating:
        
        > You spend a lifetime mastering a language, adhering to its formal
        rules with greater diligence than most native speakers, and for this, a
        machine built an ocean away calls you a fake.
        
        This is:
        
        > humanity is now defined by the presence of casual errors,
        American-centric colloquialisms, and a certain informal, conversational
        rhythm
        
        And once you start noticing the 'threes', it's fun also.
       
          philipwhiuk wrote 1 day ago:
          I mean, "to err is human" was written in the 1700s, by the
          enlightenment era author the essay writer is presumably reading.
          
          Humanity has always been about errors.
       
        dismantlethesun wrote 1 day ago:
        Ironically OpenAI used Kenyan workers[1] to train its AI and now we've
        come to the point where Kenyans are being excluded because they sound
        too much like the AI that they helped train.
        
  HTML  [1]: https://time.com/6247678/openai-chatgpt-kenya-workers/
       
          rcarmo wrote 1 day ago:
          I actually think that's a great endorsement of Kenyan education. I
          don't deal with English-speaking African countries that often (I'm
          Portuguese, so naturally we have ties to other bits of the
          continent), but I've often been impressed by how well they
          communicate regardless of the profession they're in--I don't mean
          that as a bias, but rather as befitting the kind of conversation
          you'd have with an English major in the UK (to which I have a lot of
          exposure).
          
          Perhaps the US-centric "optimization" of English is to blame here,
          since it is so obvious in regular US media we all consume across the
          planet, and is likely the contrasting style.
       
          tantalor wrote 1 day ago:
          It's not ironic
       
            dismantlethesun wrote 1 day ago:
            I think it is. The irony is that the people you hired to help make
            your machine seem human are seen as mechanical because of their
            distinct and uniformly sophisticated tone. Thus we have a situation
            that’s contrary to expectations.
       
        wccrawford wrote 1 day ago:
        It's the curse of writing well.  ChatGPT is designed to write well, and
        so everyone who does that is accused of being AI.
        
        I just saw someone today that multiple people accused of using ChatGPT,
        but their post was one solid block of text and had multiple grammar
        errors.  But they used something similar to the way ChatGPT speaks, so
        they got accused of it and the accusers got massive upvotes.
       
          t0lo wrote 13 hours 54 min ago:
          It writes well by the standards of the average person...
       
          zjp wrote 1 day ago:
          LLMs don't even write as well as people do. If you talk to them long
          enough, you'll notice they produce the same errors careless people
          do. Sometimes they wrongly elide the article 'a'. They occasionally
          mess up 'a/an' vowel agreement. The most grating thing of all is that
          the fully-elided 'because' (as in 'because traffic') lives on in LLM
          output, even though you rarely see it anymore because people rightly
          got the sense it was unfair for a writer to offload semantic
          reconstruction to the reader.
          
          I have a confession to make: I didn't think lolcat speak was funny,
          even at the time.
          
          It's pretty annoying and once you catch them doing it, you can't
          stop.
       
          JumpCrisscross wrote 1 day ago:
          > they used something similar to the way ChatGPT speaks, so they got
          accused of it and the accusers got massive upvotes
          
          Outrage mills mill outrage. If it wasn't this, it would be something
          else. The fact that the charge resonated is notable. But the fact
          that it exists is not.
       
          killerstorm wrote 1 day ago:
          This reminds me of Idiocracy: "Ah, you talk like a fag, and your
          shit's all retarded" as a response to a normal speech.
       
          woliveirajr wrote 1 day ago:
          And good students are getting in trouble (meaning "have to explain
          themselves") with lousy teachers just because they write well,
          articulate ideas and can summarize information from documents where
          other regular people would make mistakes.
       
          twoodfin wrote 1 day ago:
          ChatGPT does not “write well” unless your standard is some set of
          statistical distributions for vocabulary, sentence length, phrase
          structure, …
          
          Writing well is about communicating ideas effectively to other
          humans. To be fair, throughout linguistic history it was easier to
          appeal to an audience’s innate sense of authority by “sounding
          smart”. Actually being smart in using the written word to hone the
          sharpness of a penetrating idea is not particularly evident in
          LLMs to date.
       
            xeonmc wrote 1 day ago:
            Good writers use words to make a point. LLMs use words to make a
            salad.
       
              Kim_Bruning wrote 1 day ago:
              Depends what you ask the LLM to do!
              
              If you're using it to write in a programming language, you often
              actually get something that runs (provided your specifications
              are good - or your instructions for writing the specifications
              are specific enough!) .
              
              If you're asking for natural language output ... yeah... you need
              to watch it like a hawk by hand - sure. It'd be nice if there was
              some way to test-suite natural language writing.
       
                zdragnar wrote 1 day ago:
                The last time I asked it to write something in a programming
                language, it put together a class that seemed reasonable at
                first blush, but after review found it did not do what it was
                supposed to do.
                
                The tests were even worse. They exercised the code, tossed the
                result, then essentially asserted that true was equal to true.
                
                When I told it what was wrong and how to fix it, it instead
                introduced some superfluous public properties and a few new
                defects without correcting the original mistake.
                
                The only code I would trust today's agents with is so simple I
                don't want or need an agent to write it.
       
                  antonvs wrote 12 hours 51 min ago:
                  Like most other tools, it can take some experience to become
                  good at using them. What you’re describing suggests a lack
                  of that, assuming you used a good coding model or reasonably
                  recent frontier model.
       
                  Kim_Bruning wrote 1 day ago:
                  Yeah, people have wildly different experiences.
                  
                  I think it depends on what models you are using and what
                  you're asking them to do, and whether that's actually inside
                  their technical abilities. There are not always good manuals
                  for this.
                  
                  My last experience: I asked claude to code-read for me, and
                  it dug out some really obscure bugs in old Siemens Structured
                  Text source code.
                  
                  A friend's last experience: they had an agent write an entire
                  Christmas-themed adventure game from scratch (that ran
                  perfectly).
       
              wongarsu wrote 1 day ago:
              But they will make the salad delicious, marvelous and intricate.
              It's not just a salad - it's a new way to talk like marketing
              copy (/s)
       
          tete wrote 1 day ago:
          Depends on your definition of "well". I hate that writing style. It's
          the same writing style that people who want to sell you something use
          and it seems to be really good at tiring the reader out - or at least
          me.
          
          It gives a vibe like a car salesman and I really dislike it and
          personally I consider it a very bad writing style for this very
          reason.
          
          I very much prefer LLMs that don't appear to be trained on such
          data, or I try to reword my questions a lot more to get saner
          writing styles.
          
          That being said it also reminds me of journalistic articles that feel
          like the person just tried to reach some quota using up a lot of
          grand words to say nothing. In my country of residence the biggest
          medium (a public one) has certain sections that are written exactly
          like that. Luckily these are labeled. It's the section that is a bit
          more general, not just news and a bit more "artsy" and I know that
          their content is largely meaningless and untrue. Usually it's enough
          to click on the source link or find the source yourself to see it
          says something completely different. Or it's a topic that one knows
          about. So there even are multiple layers to being "like LLMs".
          
          The fact that people are taught to write that way outside of
          marketing or something surprises me.
          
          That being said, this is just my general genuine dislike of this
          writing style. How an LLM writes is up to a lot of things, also how
          you engage with it. To some degree they copy your own style, because
          of how they work. But for generic things there is always that
          "marketing talk" which I always assumed is simply because the
          internet/social media is littered with ads.
          
          Are Kenyans really taught to write that way?
       
            twoodfin wrote 1 day ago:
            > Are Kenyans really taught to write that way?
            
            I’m highly skeptical. At one point the author tries to argue this
            local pedagogy is downstream of “The Queen’s English” &
            British imperial tradition, but modern LLM-speak is a couple orders
            of magnitude closer in the vector space to LinkedIn clout-chasing
            than anything from that world.
       
              thevillagechief wrote 1 day ago:
              Yes they are, or rather, we were when I was in primary school. My
              essays (we called them composition) were filled with these
              red check marks for every esoteric word, proverb, metaphor or
              simile you used. The more you had the higher you'd score. So I
              did my homework with a dictionary open. I remember writing some
              document at work in the US and everyone commenting on how Queen's
              English it was. This was before ChatGPT. I now know it was all
              silly, and I've spent a bunch of time learning to write simply.
              But then I've listened to too many tech podcasts, and now I find
              Silicon Valley tech-speak creeping in, and I hate it. The one
              that I hear everywhere now, and swear never to use, is "let's
              double-click on that point". Just why?
       
                twoodfin wrote 1 day ago:
                Sure, I believe that completely. But that's not how ChatGPT
                writes!
                
                Here are some random examples from one of the (at least)
                half-dozen LLM-co-written posts that rose high on the front
                page over the weekend: [1] You write a record to disk before
                applying it to your in-memory state. If you crash, you replay
                the log and recover. Done. Except your disk is lying to you.
                
                This is why people who've lost data in production are paranoid
                about durability. And rightfully so.
                
                Why this matters: Hardware bit flips happen. Disk firmware
                corrupts data. Memory busses misbehave. And here's the kicker:
                None of these trigger an error flag.
                
                Together, they mean: "I know this is slower. I also know I
                actually care about durability."
                
                This creates an ordering guarantee without context switches.
                Both writes complete before we return control to the
                application. No race conditions. No reordering.
                
                ... I only got about halfway through. This is just phrasing,
                forget about the clickbaity noun-phrase subheads or random
                boldface.
                
                None of these are representative (I hope!) of the kind of
                "sophisticated" writing meant to reinforce class distinctions
                or whatever. It's just blech LinkedIn-speak.
                
  HTML          [1]: https://blog.canoozie.net/disks-lie-building-a-wal-tha...
       
                  thevillagechief wrote 5 hours 20 min ago:
                  I agree. I think the point here was the self-appointed AI
                  detectives, who will declare any writing style unfamiliar to
                  them a product of ChatGPT. You might remember the Paul Graham
                  "delve-gate" controversy on twitter last year. It was exactly
                  this.
       
                    twoodfin wrote 2 hours 22 min ago:
                    Yeah. But I will die on the hill that ChatGPT (today, at
                    least) is a bad writer, and makes prompted writing worse in
                    a way that isn't anything like the way schematic style or
                    vocabulary rules might for an over-eager student.
                    
                    For whatever combination of prompt and context, ChatGPT 5.2
                    did some writing for me the other day that didn't have any
                    of the surface style I find so abrasive. But it could still
                    only express its purported insights in the same "A & ~B"
                    structure and other GPT-isms beneath the surface. Truly
                    effective writers are adept with a much broader set of
                    rhetorical and structural tools.
       
          rich_sasha wrote 1 day ago:
          ChatGPT writes a particular dialect of good writing. Always insisting
          on cliffhangers towards the summary, or "strong enumerations", like
          "the candidate turned out to be a bot. Using ChatGPT. Every. Single.
          Time." And so on.
       
            dogleash wrote 1 day ago:
            It's the content mill blogspam voice.  The machine generated slop
            looks a lot like the artisan hand crafted slop.
       
            the_af wrote 1 day ago:
             I saw this described as LLMs writing "punched up" paragraphs,
             where every paragraph must be maximally impactful. Where a human
             would acknowledge some paragraphs are simply filler, a way to
             reach some point, for "default" LLMs every paragraph must have
             maximum effect, like a mic drop.
       
              esafak wrote 5 hours 17 min ago:
              The silver lining is that this style has been carpet bombed by
              LLMs. Nobody will be able to write like this without being
              ridiculed ever again.
       
            bryanrasmussen wrote 1 day ago:
            "Every. Single. Time." has been a staple of American online humor
            for at least a decade. Commonly used, hence commonly used by
            ChatGPT.
       
          n4r9 wrote 1 day ago:
          This may be true. I personally didn't get any hint of LLM usage from
          their writing. Even where they use em-dashes it's for stuff like
          this:
          
          > there is - in my observational opinion - a rather dark and
          insidious slant to it
          
          That feels too authentic and personal to be any of the current
          generation of LLMs.
       
            petesergeant wrote 1 day ago:
            ChatGPT would have used an actual em dash instead of a hyphen
       
              NoMoreNicksLeft wrote 1 day ago:
              I would use an actual em dash if there were a keyboard key for
               it. On my macbook, I have an action script set up on the
              touchbar for emdash and a few other unicodey glyphs, but the
              (virtual) buttons are like 2 inches wide each so I can't fit more
              than 5 or 6 across it. Sucks.
       
                petesergeant wrote 21 hours 32 min ago:
                Double hyphen works in many places
       
                gowld wrote 1 day ago:
                 On Mac emdash is option-shift-hyphen (aka shift-endash, aka
                capital endash)
                
                 In Menlo font (Chrome on Mac's default monospace font, used
                 for HN comments) em-dash (—) and en-dash (–) use the same
                 glyph, though.
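                 
                 They are still distinct code points under the hood, though;
                 a quick check in a Python REPL (just to illustrate):
                 
                   >>> import unicodedata
                   >>> unicodedata.name("—"), hex(ord("—"))
                   ('EM DASH', '0x2014')
                   >>> unicodedata.name("–"), hex(ord("–"))
                   ('EN DASH', '0x2013')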
       
              oneeyedpigeon wrote 1 day ago:
              And many of us human writers would have done so, too, since we've
              had to learn the—not very obscure—keyboard shortcut to insert
              an emdash.
       
              embedding-shape wrote 1 day ago:
              Add "Always use dash instead of em dash" to the developer/system
              prompt, and that's never an "issue" anymore. Seems people forget
              LLMs are really just programmable (sometimes inaccurate)
               computers. Whatever signal you can come up with, someone can
               come up with an instruction to remove it.
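               
               For example, a rough sketch with the OpenAI Python client's
               chat completions API (the model name below is just a
               placeholder):
               
                 from openai import OpenAI
                 
                 client = OpenAI()  # reads OPENAI_API_KEY from the environment
                 
                 resp = client.chat.completions.create(
                     model="gpt-4o",  # placeholder; any chat model works
                     messages=[
                         # the style rule lives in the system/developer prompt
                         {"role": "system",
                          "content": "Always use a plain hyphen instead of"
                                     " an em dash."},
                         {"role": "user",
                          "content": "Summarise this week's status update."},
                     ],
                 )
                 print(resp.choices[0].message.content)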
       
                astrange wrote 10 hours 15 min ago:
                That doesn't work, they beat it so hard into ChatGPT it won't
                always listen to you about it.
                
                 You can't stop it from doing the "if you like I can ..."
                 thing in every reply either.
       
                Kim_Bruning wrote 1 day ago:
                They're really not programmable computers! (Bad mental model is
                bad.)
                
                But yes the current commercial ones are somewhat controllable,
                much of the time.
       
                  embedding-shape wrote 1 day ago:
                   Obviously not; computers are the true programmable
                   computers. But I'd still think it's accurate to say they're
                   like programmable computers that are sometimes inaccurate;
                   for most intents and purposes it's a fine mental model
                   unless you really wanna get into the weeds.
       
                oneeyedpigeon wrote 1 day ago:
                Except for your poor editor who then has to manually replace
                your hyphens with proper em dashes. Still, if you're already
                disrespecting your editor enough to feed them AI slop...
       
                  embedding-shape wrote 1 day ago:
                  My editor? I don't think it cares what I input into it, it's
                  just a program. As long as I feed it characters it'll happily
                  tick along as always.
       
                    jasonjmcghee wrote 1 day ago:
                    The parent comment is referring to a human editor, not a
                    text editor.
       
                      embedding-shape wrote 1 day ago:
                      Huge assumption on their side then, isn't the context
                      "humans writing for other humans"? Not sure how
                      "publication editors" entered the conversation nor from
                      where.
       
                        oneeyedpigeon wrote 1 day ago:
                        I was referring to a human editor, which I thought was
                        obvious enough from context. I assumed the reply was in
                        jest. My original comment was light-hearted, so I don't
                        think it needs to be rigorously analysed, but plenty of
                        humans write for other humans but still have an editor
                        involved in the process.
       
          nottorp wrote 1 day ago:
           Actually it's public info that ChatGPT was originally trained by
           speakers of some African business English "dialect". [1] They said
           Nigerian but there may be a common way English is taught in the
           entire area. Maybe the article author will chip in.
          
          > ChatGPT is designed to write well
          
           If you define "well" as overly verbose, avoiding anything that
           could be considered controversial, and generally sycophantic but
           bland, soulless corporate speak, yes.
          
  HTML    [1]: https://www.theguardian.com/technology/2024/apr/16/techscape...
       
            guerrilla wrote 1 day ago:
             > They said Nigerian but there may be a common way English is
             taught in the entire area.
            
            Nigeria and Kenya are two very different regions with different
            spheres of business. I don't know, but I wouldn't expect the
            English to overlap that much.
       
              neffy wrote 1 day ago:
               There are a lot of very distinctive versions of English
               floating around after the British Empire - Indian newspapers
               are particularly delightful that way - but there is, as the
               author says, an inherited common educational system dating back
               to the colonial period, which has probably created a fairly
               common "educated dialect" abroad, just as it has across all the
               local accents and dialects back in the motherland.
       
                guerrilla wrote 1 day ago:
                That's not a very good argument, because then you could say the
                 same for America, Canada, South Africa, Australia and so on.
                If recency is an issue, then here's a list of colonies that got
                their freedom around the same time:
                
                Cyprus, Somalia, Sierra Leone, Kuwait, Tanzania, Jamaica,
                Trinidad and Tobago, Uganda, Kenya, Malawi, Zambia, Malta,
                Gambia, Guyana, Botswana, Lesotho, Barbados, Yemen, Mauritius,
                Eswatini (Swaziland).
                
                If what you're saying is right then you'd have to admit
                 Jamaican and Barbadian English are just the same as Kenyan or
                Nigerian... but they're not. They're radically different
                because they're radically different regions. Uganda and Kenya
                being similar is what I would expect, but not necessarily
                Nigeria...
       
                  Barrin92 wrote 23 hours 12 min ago:
                   > They're radically different because they're radically
                  different regions.
                  
                  They're radically different predominantly at the street level
                  and everyday usage, but the kind of professional English of
                  journalists, academics and writers that the author of the
                  article was surrounded by is very recognizable.
                  
                  You can tell an American from an Australian on the beach but
                  in a journal or article in a paper of record that's much more
                  difficult. Higher ed English with its roots in a classical
                  British education you can find all over the globe.
       
                    guerrilla wrote 22 hours 57 min ago:
                    That's not my experience at all. I can quite easily
                    identify Kenyans, Australians and English from the way they
                    write and they're all rather unique.
                    
                    Go read some Kenyan news. It's very obvious.
       
              nottorp wrote 1 day ago:
              But The Guardian could have been wrong about the country, and I'm
              a stupid European so I just don't know.
              
              All we can hope is for a local to show up and explain.
       
       