amazon s3 - Read/write parquet files with AWS Lambda?


Hi, I need a Lambda function that reads and writes parquet files and saves them to S3. I tried to make a deployment package with the libraries I needed to use pyarrow, but I am getting an initialization error from the cffi library:

module initialization error: [Errno 2] No such file or directory: '/var/task/__pycache__/_cffi__x762f05ffx6bf5342b.c'

Can I make parquet files with AWS Lambda? Did anyone have a similar problem?

I tried this:

import pyarrow as pa
import pyarrow.parquet as pq
import pandas as pd

df = pd.DataFrame([data])  # data is a dictionary
table = pa.Table.from_pandas(df)
# note: only /tmp is writable inside a Lambda container
pq.write_table(table, '/tmp/test.parquet', compression='snappy')
table = pq.read_table('/tmp/test.parquet')
table.to_pandas()
print(table)

Or with any other method; I need to be able to read and write parquet files compressed with snappy.

Can you please open an issue to discuss this on the Arrow issue tracker? https://issues.apache.org/jira/projects/arrow

