Your force-encoding example will only work if the input text (CSV) file actually is ISO-8859-1. ISO-8859-1 uses a single byte (8 bits) per character. However, if you open a file that was saved with UTF-8 encoding and it contains characters above the ASCII range (127), then it will contain multi-byte characters - if you then force ISO-8859-1 encoding and convert that to UTF-8 you will mangle the characters.
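You can see the mangling directly in irb; a minimal sketch (the string is just an illustrative example, run in a UTF-8 source/locale):

utf8 = "é"                                   # one character, two bytes in UTF-8 (0xC3 0xA9)
mislabelled = utf8.dup.force_encoding('ISO-8859-1')
mislabelled.encode('UTF-8')                  # => "Ã©" - the two UTF-8 bytes are now treated
                                             #    as two separate Latin-1 characters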
If you have no idea of - or control over - the input data, then I would use a try/catch approach (begin/rescue in Ruby): first read the file in the encoding it is most likely to be in; if an encoding error is thrown, catch it and try the next likely encoding (see the sketch after the error list below). You can then fall back to just reading it as binary:
filemode = 'rb'  # plain binary read
if RUBY_VERSION.to_f > 1.8
  filemode << ':ASCII-8BIT'
end
File.open(file_name, filemode) {|file|
  file.seek(80, IO::SEEK_SET)               # skip the fixed-size header
  face_count = file.read(4).unpack('i')[0]  # read a 4-byte integer
}
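(ASCII-8BIT is Ruby's "raw bytes / no encoding" label, so nothing gets transcoded or validated while reading; the version check is there because Ruby 1.8 mode strings don't accept an encoding suffix.)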
Look at these errors and see if they might be thrown:
Encoding::CompatibilityError
Encoding::ConverterNotFoundError
Encoding::InvalidByteSequenceError
Encoding::UndefinedConversionError
EncodingError
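All of these inherit from EncodingError, so a single rescue EncodingError will catch the whole family. Here is a minimal sketch of the fallback chain on Ruby 1.9+ (the method name, candidate list and valid_encoding? check are my own illustrative choices):

def read_with_fallback(file_name, candidates = ['UTF-8', 'ISO-8859-1'])
  candidates.each do |enc|
    begin
      text = File.read(file_name, :encoding => enc)
      return text if text.valid_encoding?  # reject reads that only appeared to work
    rescue EncodingError
      # try the next candidate encoding
    end
  end
  # Last resort: raw bytes, no interpretation at all.
  File.open(file_name, 'rb') { |f| f.read }
end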
If you are not familiar with Unicode and how it's represented in byte data, I would recommend reading up on that as well. The reason it has worked for you so far is probably that you have had ASCII-compatible data. UTF-8 is byte-compatible with ASCII in that it uses only one byte per character for the ASCII range - but the moment you go outside ASCII (US-ASCII, to be precise) you get multi-byte characters.
For your testing purposes I would strongly recommend you test with non-English characters. For good measure, make sure you go outside of European languages as well - try Japanese or Chinese for instance, which take three or even four bytes per character in UTF-8.
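On Ruby 1.9+ you can check the byte counts directly with String#bytesize; the strings below are just illustrative test data:

'Hello'.bytesize       # => 5  - one byte per character (ASCII range)
'Üben'.bytesize        # => 5  - 'Ü' alone takes two bytes
'こんにちは'.bytesize    # => 15 - three bytes per character (Japanese)
'😀'.bytesize           # => 4  - four bytes (outside the Basic Multilingual Plane)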