
    Upgrading plugins to Ruby 2.0 for SketchUp 2014

    Developers' Forum
    • tt_su

      @marksup said:

      Would it be possible for SketchUp to provide a function to reenable foolproof reading of text files? (i.e. to reinstate the automatic encoding recognition functionality that Ruby 2 is missing compared to Ruby 1.8)

      There never was automatic detection - Ruby 1.8 simply treated all strings as 8-bit byte sequences.

      To provide a proper answer to you I need to know a little bit more about what type of files you are opening.

      If they are binary files:

      
      filemode = 'rb'
      if RUBY_VERSION.to_f > 1.8
        filemode << ':ASCII-8BIT'
      end
      File.open(file_name, filemode) {|file|
        # read file
      }
      
      

      If you know the file is UTF-8:

      
      filemode = 'rb'
      if RUBY_VERSION.to_f > 1.8
        filemode << ':UTF-8'
      end
      File.open(file_name, filemode) {|file|
        # read file
      }
      
      

      If you know the file is ISO-8859-1 but you want it as UTF-8:

      
      filemode = 'rb'
      if RUBY_VERSION.to_f > 1.8
        filemode << ':ISO-8859-1:UTF-8'
      end
      File.open(file_name, filemode) {|file|
        # read file
      }
      
      

      I recommend reading up on the IO class and Encoding class:
      http://www.ruby-doc.org/core-2.1.2/IO.html#method-c-new

      Forcing an encoding is error-prone - it amounts to brute-forcing and crossing your fingers. By being explicit about what encoding you expect, you will catch incorrectly encoded strings early, at the point where they enter your code.

      • marksup

        Hi, thanks, but you may be missing the point...

        I would wish to read a simple text (not binary) file, (typically in CSV format).

        I do not know what encoding a user may use to create the text file - and most likely neither will the user!

        Thus the file may be UTF-8 already, or it may be something else, I just wish to reliably and simply process it whatever the encoding, as was apparently possible with earlier releases.

        I have no attachment to force_encoding - merely that it was suggested and it does appear to work, and will happily comply with any preferred alternative.

        So, what extra logic is required for anyone and everyone to reliably process any text file (which might include £ or m², for example), regardless of how it might be encoded?

        • tt_su

          Your force_encoding example will only work if the input CSV file actually is ISO-8859-1, which uses a single byte per character. If you open a file saved as UTF-8 that contains characters above the ASCII range (127), it will contain multi-byte characters - forcing ISO-8859-1 onto those bytes and then converting to UTF-8 will mangle them.
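
          A quick illustration of that mangling (a minimal sketch; the £ sign is the two UTF-8 bytes 0xC2 0xA3):

          
          # "£" in a UTF-8 string is two bytes: 0xC2 0xA3.
          pound = "£"
          p pound.bytes                  # => [194, 163]
          # Forcing ISO-8859-1 relabels each byte as its own character...
          mislabelled = pound.dup.force_encoding('ISO-8859-1')
          # ...so transcoding to UTF-8 produces the classic mojibake:
          p mislabelled.encode('UTF-8')  # => "Â£"
          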

          If you have no idea of, or control over, the input data, then I would use a try/rescue approach: first read the file in the encoding it is most likely to be in; if encoding errors are thrown, catch them and try the next likely encoding. You can then fall back to just reading it as binary:

          
          filemode = 'rb'
          if RUBY_VERSION.to_f > 1.8
            filemode << ':ASCII-8BIT'
          end
          File.open(file_name, filemode) {|file|
            file.seek(80, IO::SEEK_SET)
            face_count = file.read(4).unpack('i')[0]
          }
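
          The try/rescue cascade itself could be sketched like this - a hypothetical helper, not part of the SketchUp API; the helper name and the candidate encoding list are assumptions to be adapted to your users:

          
          # Hypothetical helper illustrating the fallback approach described above.
          # Order the candidate encodings by what your users most likely produce.
          def read_text_with_fallback(file_name, candidates = ['UTF-8', 'ISO-8859-1'])
            candidates.each { |encoding|
              begin
                content = File.open(file_name, "rb:#{encoding}") { |file| file.read }
                # Invalid bytes may only surface when the string is used later,
                # so validate explicitly before accepting this encoding.
                next unless content.valid_encoding?
                return content.encode('UTF-8')
              rescue EncodingError
                next # try the next candidate
              end
            }
            # Last resort: hand the raw bytes back as binary.
            File.open(file_name, 'rb:ASCII-8BIT') { |file| file.read }
          end
          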
          
          

          Look at these errors and see if they might be thrown:

          Encoding::CompatibilityError
          Encoding::ConverterNotFoundError
          Encoding::InvalidByteSequenceError
          Encoding::UndefinedConversionError
          EncodingError
          

          If you are not familiar with Unicode and how it is represented as byte data, I would recommend reading up on that as well. The reason it has worked for you so far is probably that you have had ASCII-compatible data. UTF-8 is byte-compatible with ASCII, using only one byte per character there - but the moment you go outside ASCII (US-ASCII, to be precise) you get multi-byte characters.

          For your testing purposes I would strongly recommend testing with non-English characters. For good measure, go outside European languages as well - try Japanese or Chinese, for instance, which can take three or even four bytes per character in UTF-8.
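
          Those byte widths are easy to check directly (a small sketch, assuming the script itself is saved as UTF-8):

          
          # UTF-8 stays at one byte per character only within US-ASCII.
          p 'A'.bytesize   # => 1  (US-ASCII)
          p '£'.bytesize   # => 2  (Latin-1 supplement)
          p '日'.bytesize  # => 3  (CJK, Basic Multilingual Plane)
          p '😀'.bytesize  # => 4  (outside the BMP)
          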
