Docker Compose on AWS



  • I have no experience with AWS and I would like to know the best approach for deploying an environment built with Docker Compose (a simple application with two services, an API and Redis). These are the options I have come across so far:

    • creating an EC2 instance, connecting via SSH, configuring the host, and building and running the environment manually (can this be automated somehow?)
    • using ECS: pushing the Docker image to ECR, then setting up a cluster and a task definition/service to run the environment from the image in ECR
    • using Elastic Beanstalk: creating a Docker application and managing it from the CLI (eb init, create, deploy), possibly with CodePipeline and maybe GitHub Actions on top. How do these approaches differ?

    Which approach do your companies use in situations like this, and what should I be most careful about?



  • This question risks being flagged as "too broad" and invites opinion-based answers. With that caveat out of the way, I'll try to answer in an unbiased way.

    In these cases, it's best to go back to first principles and architect the application from scratch using the relevant AWS services. If you're deploying an application that is really just a few services that can be composed, you're probably best off using Fargate.

    In any case, I would not suggest using EC2 instances directly.

    In terms of how to go about doing that:

    • Create the VPC if you need one
    • Define the security groups that control what traffic is allowed
    • Set up the ECR repositories for the images in the application stack
    • Convert the services in the docker-compose file to ECS task definitions (see the sketch after this list)
    • Decide how you will manage persistent data. Since part of the application is Redis, you might consider using an ElastiCache component in the stack
    • Decide how you want to expose the services (an ENI? a load balancer?)
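
    To make the task-definition step concrete, here is a rough Terraform sketch of a Fargate task for the API container. All names, ports and sizes are placeholders, and it assumes the ECR repository, execution role and ElastiCache cluster are defined elsewhere in the same configuration:

      # Hypothetical Fargate task definition for the API service from the compose file.
      resource "aws_ecs_task_definition" "api" {
        family                   = "api"
        requires_compatibilities = ["FARGATE"]
        network_mode             = "awsvpc"
        cpu                      = 256
        memory                   = 512
        execution_role_arn       = aws_iam_role.task_execution.arn

        container_definitions = jsonencode([
          {
            name      = "api"
            image     = "${aws_ecr_repository.api.repository_url}:latest"
            essential = true
            portMappings = [
              { containerPort = 8080, protocol = "tcp" }
            ]
            environment = [
              # Point the API at ElastiCache rather than running Redis as a container.
              { name = "REDIS_HOST", value = aws_elasticache_cluster.redis.cache_nodes[0].address }
            ]
          }
        ])
      }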

    There is also the issue of IAM policies that need to be attached to various entities in order to read from / write to ECR, etc.
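
    A minimal sketch of what that might look like, again in Terraform, using the AWS-managed AmazonECSTaskExecutionRolePolicy so the tasks can pull images from ECR and write logs (the role name is a placeholder):

      # Execution role assumed by ECS tasks.
      resource "aws_iam_role" "task_execution" {
        name = "api-task-execution"

        assume_role_policy = jsonencode({
          Version = "2012-10-17"
          Statement = [{
            Effect    = "Allow"
            Action    = "sts:AssumeRole"
            Principal = { Service = "ecs-tasks.amazonaws.com" }
          }]
        })
      }

      # AWS-managed policy granting ECR pull and CloudWatch Logs permissions.
      resource "aws_iam_role_policy_attachment" "task_execution" {
        role       = aws_iam_role.task_execution.name
        policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
      }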

    I would write this all as one or two Terraform modules: one for the network configuration (VPC, subnets) and one for the application itself (ECS cluster, capacity providers, load balancers, ECR repositories, etc.). This is a purely personal choice though, and you could just as well create a CloudFormation stack.
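
    As a sketch of how a root configuration might wire those two modules together (the module paths, variables and outputs here are entirely hypothetical):

      module "network" {
        source     = "./modules/network"         # VPC, subnets, security groups
        cidr_block = "10.0.0.0/16"
      }

      module "application" {
        source          = "./modules/application"   # ECS cluster, ECR, load balancer, ElastiCache
        vpc_id          = module.network.vpc_id
        private_subnets = module.network.private_subnet_ids
      }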

    Continuous deployment could indeed be done with GitHub Actions, if that's where you are storing the code that describes the app.



