python - Dealing with a large binary file (3 GB) in Docker and Jenkins


I'm using a Google News word2vec model (a binary file of around 3 GB) in a Dockerfile, using Jenkins to build and deploy to a production server. The rest of the code is pulled from a Bitbucket repo.

Here is the part of the Dockerfile that downloads and unzips the file. This only happens once, since the command is cached as a layer.

FROM python:2.7.13-onbuild

RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install --assume-yes apt-utils
RUN apt-get update && apt-get install -y curl
RUN apt-get update && apt-get install -y unzip
RUN curl -o - https://s3.amazonaws.com/dl4j-distribution/GoogleNews-vectors-negative300.bin.gz \
    | gunzip > /usr/src/app/googlenews-vectors-negative300.bin

Everything works fine when I build and run the Docker image on my local machine. However, when I make a patch version and push the changes to the production server through Jenkins, the build process fails at the end. The setup, build, and test phases work fine, but the post-build phase fails. (The build process pushes the changes to the repo, and according to the logs, the commands in the Dockerfile run fine.) The failure happens after that, with the following error in the logs.

18:49:27 654f45ecb7e3: layer exists
18:49:27 2c40c66f7667: layer exists
18:49:27 97108d083e01: pushed
18:49:31 35a4b123c0a3: pushed
18:50:10 1e730b4fb0a6: pushed
18:53:46 error parsing HTTP 413 response body: invalid character '<' looking for beginning of value: "<html>\r\n<head><title>413 Request Entity Too Large</title></head>\r\n<body bgcolor=\"white\">\r\n<center><h1>413 Request Entity Too Large</h1></center>\r\n<hr><center>nginx/1.10.1</center>\r\n</body>\r\n</html>\r\n"

Could the file be too large?

Before the addition of this file, Docker and Jenkins were working fine too.

I am wondering whether there are limitations in Docker/Jenkins when handling a large file like this, or whether I am breaking things by the way I am approaching it.

Update: Increasing client_max_body_size solved this specific error. However, I am now getting an error at:

ssh -o StrictHostKeyChecking=no root@ipaddress "cd /root/ourapi && docker-compose pull api && docker-compose -p somefolder up -d"

The docker-compose pull fails here with an unexpected EOF. It tries to download the image (1.6 GB), cancels after getting close to the full size, retries, and eventually ends with the EOF error.
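
As an aside, a hedged note: if the pull were timing out on the client side, docker-compose honors the COMPOSE_HTTP_TIMEOUT environment variable (in seconds, default 60) for its requests to the Docker daemon. In this case the root cause turned out to be a server-side time-out (see update 2 below), so this may not apply:

# only relevant if the time-out is on the client side
COMPOSE_HTTP_TIMEOUT=600 docker-compose pull api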

Which brings me back to the original question: do large files need to be handled differently in this situation?

Update 2: The issue has been resolved. I needed to increase client_max_body_size to 4 GB, and I also needed to increase the time-out parameter for pulling the repository from our own repository server. Adjusting these two parameters resolved the problem.

The problem was caused by the following:

  • The default value of client_max_body_size in the nginx server configuration was too low (1 MB by default), so we could not upload a file of 3.6 GB. We increased the value to 4 GB (see the snippet after this list).
  • We run a Jetty server on our repository management system to serve HTTP traffic, so we needed to increase the time-out there for Jenkins to pull the relevant Docker files from it.
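
For reference, a minimal sketch of the nginx change. Per the docs linked below, the directive can live in the http, server, or location context; placing it in http, as shown here, is an assumption about where the registry is proxied:

# nginx.conf — raise the allowed request body size for docker push
http {
    # default is 1m; larger request bodies get the 413 error above.
    # a value of 0 disables the check entirely.
    client_max_body_size 4g;
}

Setting it to 0 avoids picking a fixed ceiling, at the cost of removing the protection the limit provides.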

This answers it in the context of this specific problem. However, the question of how to handle such files in a better way remains open. Moreover, it is not clear whether increasing client_max_body_size to 4 GB is a good idea in general.

Relevant docs for client_max_body_size: http://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size

