Akka HTTP file streaming not writing more than 'n' bytes from different clients
`akka-http 10.0.6; max-content-length set to 6000m`
I am using Akka HTTP streaming to upload huge files (sent as an octet-stream to the service), accepting the incoming bytes and writing them to a file sink. I observed the following in my experiments. From my limited understanding of the documentation, a client should be able to keep sending data unless it is told otherwise explicitly through Akka HTTP's back-pressure mechanisms. I have been searching online to understand this behavior, but I have not yet found an explanation for what follows. Is something missing in my code? How can I debug this further? Also, via ScalaTest I am able to upload large files; if anyone can shed more light on what the difference in behavior is between ScalaTest and curl/HTTP clients, that would help.
- Through "curl", I can stream a file of at most 1 KB. Anything larger hangs, and the following message is logged after the timeout no matter how long I wait (20 seconds, 5 minutes, 10 minutes, etc.): "Sending 2xx 'early' response before end of request was received... Note that the connection will be closed after this response. Also, many clients will not read early responses! Consider only issuing this response after the request data has been completely read!"
- Through the "Apache HTTP client", streaming works for files up to 128 KB. Anything larger hangs, with the same message as in #1 from the Akka HTTP service.
- Through a "Python client", streaming works for files up to ~26 KB. Anything larger hangs, with the same message as in #1 from the Akka HTTP service.
- Through ScalaTest inside the service itself, I am able to upload files of 200 MB, 400 MB, and more.
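For comparison, the in-process test that succeeds looks roughly like the sketch below (the route path, payload sizes, and the use of `Sink.ignore` in place of the file sink are my assumptions, not the original test). With `ScalatestRouteTest` from akka-http-testkit, the entity source is handed to the route directly by the test's own materializer rather than arriving over a real TCP connection, which may be why network back-pressure effects never show up in this path:

```scala
import akka.http.scaladsl.model.{ContentTypes, HttpEntity, StatusCodes}
import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.testkit.ScalatestRouteTest
import akka.stream.scaladsl.{Sink, Source}
import akka.util.ByteString
import org.scalatest.{Matchers, WordSpec}

class UploadSpec extends WordSpec with Matchers with ScalatestRouteTest {

  // A minimal stand-in for the route in the question; the file sink is
  // replaced with Sink.ignore so the sketch is self-contained.
  val route = put {
    withoutSizeLimit {
      extractDataBytes { bytes =>
        onComplete(bytes.runWith(Sink.ignore)) { _ =>
          complete(StatusCodes.OK)
        }
      }
    }
  }

  "the upload route" should {
    "accept a large streamed entity" in {
      val chunk = ByteString(Array.fill[Byte](1024 * 1024)(0))
      val body  = Source.repeat(chunk).take(200) // 200 x 1 MB
      val entity = HttpEntity(ContentTypes.`application/octet-stream`, body)
      Put("/upload", entity) ~> route ~> check {
        status shouldBe StatusCodes.OK
      }
    }
  }
}
```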
Here’s the code:
```scala
put {
  withoutSizeLimit {
    extractDataBytes { bytes =>
      implicit val system = ActorSystem()
      implicit val materializer = ActorMaterializer()
      // also tried the system dispatcher:
      implicit val executionContext = system.dispatchers.lookup("dispatcher")
      val sink = FileIO.toPath(Paths.get("/file.out"))
      val action = bytes.runWith(sink).map {
        case ior if ior.wasSuccessful =>
          complete(StatusCodes.OK, s"${ior.count} bytes written")
        case ior =>
          complete(StatusCodes.EnhanceYourCalm, ior.getError.toString)
      }
      Await.result(action, 300.seconds)
    }
  }
}
```
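Two things in the code above stand out: `Await.result` blocks a dispatcher thread while the entity is still being read, and a fresh `ActorSystem`/`ActorMaterializer` is created on every request, so the entity bytes are not drained on the server connection's own materializer. A sketch of a non-blocking variant, under the assumption that these are related to the hang (the system/materializer are created once at startup, and `onComplete` defers the response until the whole entity has been written):

```scala
import java.nio.file.Paths

import akka.actor.ActorSystem
import akka.http.scaladsl.model.StatusCodes
import akka.http.scaladsl.server.Directives._
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.FileIO

import scala.util.{Failure, Success}

// Created once, at server startup, not per request.
implicit val system = ActorSystem("upload")
implicit val materializer = ActorMaterializer()

val route =
  put {
    withoutSizeLimit {
      extractDataBytes { bytes =>
        val sink = FileIO.toPath(Paths.get("/file.out"))
        // onComplete keeps the route non-blocking: the response is only
        // sent after the entity has been fully drained to disk, which
        // should also avoid the "early response" warning.
        onComplete(bytes.runWith(sink)) {
          case Success(ior) if ior.wasSuccessful =>
            complete(StatusCodes.OK, s"${ior.count} bytes written")
          case Success(ior) =>
            complete(StatusCodes.EnhanceYourCalm, ior.getError.toString)
          case Failure(ex) =>
            complete(StatusCodes.InternalServerError, ex.getMessage)
        }
      }
    }
  }
```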