URI:
        _______               __                   _______
       |   |   |.---.-..----.|  |--..-----..----. |    |  |.-----..--.--.--..-----.
       |       ||  _  ||  __||    < |  -__||   _| |       ||  -__||  |  |  ||__ --|
       |___|___||___._||____||__|__||_____||__|   |__|____||_____||________||_____|
                                                              on Gopher (unofficial)
  HTML Visit Hacker News on the Web
       
       
       COMMENT PAGE FOR:
  HTML   Optophone
       
       
         Animats wrote 2 hours 19 min ago:
         The concept of measuring how much ink appears as the text passes a
         vertical slot came back in the 1950s. MICR codes, the numbers that
         appear on checks, are read that way. [1] Or at least they were in
         the original implementation. The ink was magnetized and the paper
         went past a one-track magnetic tape head. The waveform for each
         symbol is unique, so the recognizer is more like a bar code reader
         than an OCR system.
        
        There are only 14 characters in that font - the digits 0-9 and four
        special field identification symbols. The 1970s "futuristic" text fonts
        which look like MICR symbols are purely decorative.
        
  HTML  [1]: https://en.wikipedia.org/wiki/Magnetic_ink_character_recogniti...
       
         zaius wrote 5 hours 9 min ago:
         After reading Project Hail Mary, I wondered how reasonable it would
         be for someone to truly understand a language based on tones and
         chords alone. Maybe 60 words per minute would be enough to
         communicate, but it sure would be frustrating.
       
          rtkwe wrote 3 hours 30 min ago:
           I think you could get faster with a language actually meant to be
           'sung', instead of this rough translation of English characters
           into audio.
       
            quizzical8432 wrote 2 hours 35 min ago:
            My first thought was: “oh, that’s an interesting concept, I
            wonder how hard it would be to learn?”
            
            Then I saw the frequency/time graph, and realised that didn’t
            seem to have been a consideration at all. This was obviously
            designed by a sighted person who cared more about what the pictures
            looked like!
            
            Blind person: “But how do I know which letter is which?”
            Designer: “Oh, that’s easy! Just look at the picture!”
            
            I love the idea of a sung language, though!
       
              rtkwe wrote 2 hours 20 min ago:
               Take a look at when this was invented; that's a critical
               detail in evaluating all this. It was 1913! They were working
               with the very limited technology they had: they couldn't
               detect the letters and map each one to a new tone or chord
               that might be easier to understand, because that tech just
               wasn't possible [0]. They had to directly translate the image
               of the letters, via simple photoreceptors, into corresponding
               frequency values.
              
               [0] As I was writing this I did have the wild thought that,
               if you already had the weights, you could in theory implement
               a very basic character-recognition neural net in analog
               circuitry using vacuum tubes, one that could recognize
               letters for direct mapping to sound. But it's entirely
               impractical to create from scratch in a reasonable time
               frame. Maybe over the span of decades you could manually tune
               one?
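               The direct image-to-frequency translation described above can
               be sketched in a few lines. This is my reconstruction of the
               principle, not the 1913 hardware; the row frequencies and the
               letter bitmap are illustrative.

```python
# Minimal sketch of the optophone principle: a vertical column of
# photodetectors scans the printed letter left to right; each detector
# row is wired to a fixed tone, and inked cells sound their row's tone.
# Each scan column therefore produces a chord -- the set of frequencies
# whose rows currently see ink.

ROW_FREQS = [800, 700, 600, 500, 400]  # Hz per row, top first (illustrative)

# Hypothetical 5x4 bitmap of the letter "L" (1 = ink).
LETTER_L = [
    [1, 0, 0, 0],
    [1, 0, 0, 0],
    [1, 0, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
]

def scan(bitmap):
    """Yield the chord (list of Hz) for each column, left to right."""
    for c in range(len(bitmap[0])):
        yield [ROW_FREQS[r] for r, row in enumerate(bitmap) if row[c]]

for chord in scan(LETTER_L):
    print(chord)
```

               Note there is no recognition step at all: the sound is a
               direct function of the ink pattern, which is why two letters
               that look alike also sound alike, and why the reader, not the
               machine, has to learn the mapping.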
       
        altruios wrote 5 hours 17 min ago:
        Is this a lighthearted jab at computer vision being reduced to tokens?
       
        ge96 wrote 5 hours 21 min ago:
         I take it this was before the Speak & Spell.
       
          rtkwe wrote 3 hours 46 min ago:
          This was before integrated circuits and was all analog.
       
       
   DIR <- back to front page