ASP.NET Core on AWS ECS requires VIRTUAL_HOST
I'm deploying an ASP.NET Core Web API app as a Docker image to AWS ECS, using a task definition file for that.
It turns out the app only works if I specify the environment variable VIRTUAL_HOST with the
public DNS of the EC2 instance (as highlighted here: http://docs.servicestack.net/deploy-netcore-docker-aws-ecs); see the taskdef.json
below:
{ "family": "...", "networkmode": "bridge", "containerdefinitions": [ { "image": "...", "name": "...", "cpu": 128, "memory": 256, "essential": true, "portmappings": [ { "containerport": 80, "hostport": 0, "protocol": "http" } ], "environment": [ { "name": "virtual_host", "value": "ec2-xx-xxx-xxx-xxx.compute-1.amazonaws.com" } ] } ] }
Once the app is deployed to AWS ECS, I can hit its endpoints, e.g. http://ec2-xx-xxx-xxx-xxx.compute-1.amazonaws.com/v1/ping.
With the actual public DNS of the EC2 instance in VIRTUAL_HOST it works fine; without the env variable I get "503 Service Temporarily Unavailable" (nginx/1.13.0), and if I put an empty string in VIRTUAL_HOST I get "502 Bad Gateway" (nginx/1.13.0).

Now, I'd like to avoid specifying the virtual host in the taskdef file. Is that possible? Is the problem ASP.NET Core related or nginx related?
Amazon ECS has a secret management system using Amazon S3. You have to create the secret in the ECS interface, and then you are able to reference it in your configuration as an environment variable.
{ "family": "...", "networkmode": "bridge", "containerdefinitions": [ { "image": "...", "name": "...", "cpu": 128, "memory": 256, "essential": true, "portmappings": [ { "containerport": 80, "hostport": 0, "protocol": "http" } ], "environment": [ { "name": "virtual_host", "value": "secret_s3_virtual_host" } ] } ] }
Store the secrets on Amazon S3, and use AWS Identity and Access Management (IAM) roles to grant access to the stored secrets from ECS.
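As a minimal sketch of that last step, assuming the secret is stored as an object in an S3 bucket you control (the bucket name and object key below are hypothetical placeholders, not from the original post), the IAM policy attached to the ECS task or instance role could grant read access like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::my-secrets-bucket/virtual_host"
    }
  ]
}

The container (or its entrypoint script) can then read that object at startup and expose it as VIRTUAL_HOST.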
Alternatively, you could make your own nginx Docker image that contains the environment variable:
FROM nginx
LABEL maintainer="your_email"
ENV VIRTUAL_HOST="ec2-xx-xxx-xxx-xxx.compute-1.amazonaws.com"
Then you have to build it, ship it privately, and use it in your configuration.
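For instance, the container definition in the taskdef would then point at the privately shipped image instead of the stock nginx one; the account ID, region, and repository name below are hypothetical placeholders:

"containerDefinitions": [
  {
    "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-nginx-proxy:latest",
    "name": "nginx-proxy",
    ...
  }
]

Since the VIRTUAL_HOST value is baked into that image, it no longer needs to appear in the "environment" section of the task definition.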