postgresql - column headers are corrupted when querying with pyodbc on Ubuntu -
I'm using Postgres on Ubuntu, with unixODBC and pyodbc 4.0.16 to access the data. There seems to be an issue related to Unicode: when querying the DB, the column headers appear corrupted.
Here's an example:
import pyodbc

conn = pyodbc.connect("DSN=local_postgres")
conn.setdecoding(pyodbc.SQL_CHAR, encoding='utf-8')
conn.setdecoding(pyodbc.SQL_WCHAR, encoding='utf-8')

# conn.execute('CREATE SCHEMA test')
conn.execute('CREATE TABLE test.uni_test (column1 VARCHAR)')
conn.execute("INSERT INTO test.uni_test (column1) VALUES ('my value')")

results = conn.execute('SELECT * FROM test.uni_test')
print results.description
columns = [column[0].decode('latin1') for column in results.description]
print "Columns: " + str(columns)
print list(results)

Result:

((u'c\x00\x00\x00o\x00\x00', <type 'str'>, None, 255, 255, 0, True),)
Columns: [u'c\x00\x00\x00o\x00\x00']
[(u'my value', )]
I'm not sure what the issue is. BTW - the same behavior is observed on Mac (El Capitan).
Thanks in advance, Alex
u'c\x00\x00\x00o\x00\x00'
is the first 7 bytes of 'column1' in UTF-32LE encoding. (The value was apparently truncated at 7 bytes because 'column1' is 7 characters long.)
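To see that this is exactly the corruption in the question, the garbled header can be reproduced by encoding the expected column name as UTF-32LE and keeping only the first 7 bytes (a minimal sketch; the byte values come from the output above):

```python
# Encode the column name as UTF-32LE: each character occupies
# 4 bytes, least-significant byte first.
encoded = u'column1'.encode('utf-32-le')

# Truncating to the first 7 bytes (one byte per character of the
# name) reproduces the corrupted header from results.description.
truncated = encoded[:7]
print(repr(truncated))  # the bytes c \x00 \x00 \x00 o \x00 \x00
```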
pyodbc received a significant upgrade to its Unicode handling in the 4.x versions, and one of the things the developers discovered was the surprising variety of ways that ODBC drivers can mix and match encodings when returning values. The pyodbc wiki page on Unicode recommends the following for PostgreSQL ODBC under Python 2.7:
cnxn.setdecoding(pyodbc.SQL_CHAR, encoding='utf-8')
cnxn.setdecoding(pyodbc.SQL_WCHAR, encoding='utf-8')
but in this case the following was also required:
cnxn.setdecoding(pyodbc.SQL_WMETADATA, encoding='utf-32le')
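Putting the three calls together, a complete connection setup might look like this (a sketch only; it assumes the `local_postgres` DSN from the question and needs a live database to actually run):

```python
import pyodbc

# DSN name taken from the question; substitute your own.
cnxn = pyodbc.connect("DSN=local_postgres")

# Ordinary character data returned by the driver is UTF-8.
cnxn.setdecoding(pyodbc.SQL_CHAR, encoding='utf-8')
cnxn.setdecoding(pyodbc.SQL_WCHAR, encoding='utf-8')

# Column metadata (the names in results.description) arrives as
# UTF-32LE, so it needs its own decoding setting.
cnxn.setdecoding(pyodbc.SQL_WMETADATA, encoding='utf-32le')
```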