This module builds resources required to support an AWS ECS Service. It is designed to work in conjunction with the module `terraform-aws-architecture-ecs`. The two modules implement the architecture described below.

A CloudFront distribution allows access to the service via an Application Load Balancer. The ECS Services themselves run on EC2 instances. Both CloudFront and the Load Balancer use certificates stored in AWS Certificate Manager.
Note that this module has been designed so that several ECS services can be built on top of the same base architecture (the resources built by module `terraform-aws-architecture-ecs`). The input `alb_listener_arn` is key to this arrangement: a default Load Balancer Listener must have been built externally so that multiple services can listen on the default HTTPS port 443.
This module uses the input `allow_private_access` to control whether additional resources for private access are built. This defaults to false.

When set to true, this module creates an additional Security Group for private access, intended for use by consumers inside the VPC where the ECS Service has been built. This is exposed in the output `security_group_private_access_id`. Alongside this, a rule is added to the security group indicated by the input `asg_security_group_id`, allowing access from the private access security group on the port indicated by the input `alb_target_group_port`. An egress rule on the private access security group allows outbound access to `asg_security_group_id` on the same port.
When `allow_private_access` is set to true this module also builds two AWS Cloud Map resources: a private DNS namespace, and a Service Discovery service. The private DNS namespace automatically creates a managed Route 53 Hosted Zone, and the Service Discovery service adds Route 53 records to that managed hosted zone. This allows consumers to resolve the private IP(s) of the ECS Service instances.

When the `ecs_network_mode` input is set to "bridge" (the default value) the Route 53 record type is SRV. This automatically maps to a managed "A" record in the format `<ecs task id>.<ecs service container>.<namespace>`. When `ecs_network_mode` is set to "awsvpc" the Service Discovery service is able to map the IP address directly and creates an "A" record in the format `<ecs service container>.<namespace>`. In either case the name of the "A" record is exposed in the output `private_access_host`.
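The private access arrangement above can be sketched as follows. This is a hypothetical root-module fragment: the security group ID and port are example values, and the required inputs not relevant to private access are omitted.

```hcl
module "my_service" {
  source = "github.com/cambridge-collection/terraform-aws-workload-ecs.git?ref=1.0.0"

  allow_private_access  = true
  asg_security_group_id = "sg-0123456789abcdef0" # example ID of the ASG security group
  alb_target_group_port = 8080                   # example port opened for private access
  # ... other required inputs omitted for brevity
}

# In-VPC consumers can attach module.my_service.security_group_private_access_id
# and resolve the service via the Cloud Map "A" record:
output "internal_endpoint" {
  value = "http://${module.my_service.private_access_host}:${module.my_service.private_access_port}"
}
```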
This module makes use of two providers: the default provider is used to build most resources in the AWS Region of your choice. The other must be configured in the `us-east-1` region and have the alias `us-east-1`. This provider is used to build CloudFront resources, along with an AWS Certificate Manager certificate. To configure SSL certificates with CloudFront, the certificate must exist in the `us-east-1` region.
Add this provider to your root module to make use of this module. For example:
```hcl
provider "aws" {
  region = "us-east-1"
  alias  = "us-east-1"

  default_tags {
    tags = var.tags
  }
}
```
This can then be passed to the module, for example:
```hcl
module "my_service" {
  source = "github.com/cambridge-collection/terraform-aws-workload-ecs.git?ref=1.0.0"

  providers = {
    aws.us-east-1 = aws.us-east-1
  }
}
```
An ECS Service can be configured with a custom IAM role. This role allows the ECS Service to interact with the load balancer on your behalf.

If no IAM role is supplied using the input `ecs_service_iam_role`, the default ECS service-linked role will be used. This role has an ARN in the format `arn:aws:iam::<account number>:role/aws-service-role/ecs.amazonaws.com/AWSServiceRoleForECS`. The role may not exist in your account, in which case passing the ARN to the `ecs_service_iam_role` input will cause it to be created. However, AWS will raise an error on apply if the ARN is used and the role already exists.
If using the `awsvpc` networking mode, this module will not allow a custom IAM role to be specified: this is expected behaviour for AWS.
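As a sketch, the service-linked role ARN described above could be supplied like this. The account ID is a placeholder, and this is only valid for non-awsvpc network modes.

```hcl
module "my_service" {
  source = "github.com/cambridge-collection/terraform-aws-workload-ecs.git?ref=1.0.0"

  ecs_network_mode     = "bridge" # a custom role cannot be used with "awsvpc"
  ecs_service_iam_role = "arn:aws:iam::123456789012:role/aws-service-role/ecs.amazonaws.com/AWSServiceRoleForECS"
  # ... other required inputs omitted for brevity
}
```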
The input `ecr_repositories_exist` can be used to refer to pre-existing ECR Repositories. This defaults to false, meaning this module will create the ECR Repositories listed in the input `ecr_repository_names`. Currently, the user needs to make sure that images matching the values specified in the ECS container definitions are available in these repositories.
If `ecr_repositories_exist` is set to true the module will look up the repositories listed in `ecr_repository_names` and no additional ECR resources will be created.

An output `ecr_repository_urls` shows the URIs indicated by the input `ecr_repository_names`. These have the format `<aws account id>.dkr.ecr.<aws region>.amazonaws.com/<repository name>`.
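For example, the module could be asked to create two repositories. The repository names here are hypothetical.

```hcl
module "my_service" {
  source = "github.com/cambridge-collection/terraform-aws-workload-ecs.git?ref=1.0.0"

  ecr_repositories_exist = false # default; the module creates the repositories
  ecr_repository_names   = ["my-app", "my-app-sidecar"]
  # ... other required inputs omitted for brevity
}

# The resulting URIs can be interpolated into container definitions, e.g.
# module.my_service.ecr_repository_urls["my-app"]
# has the format <aws account id>.dkr.ecr.<aws region>.amazonaws.com/my-app
```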
Setting the input `ecs_service_capacity_provider_name` allows scaling of the ECS service to be managed by an ECS capacity provider. When this input is unset, the `launch_type` property of the ECS service defaults to "EC2".
An ECS capacity provider can be created in Terraform using an `aws_ecs_capacity_provider` resource, and associated with an ECS cluster using an `aws_ecs_cluster_capacity_providers` resource. The `aws_ecs_capacity_provider` must be connected to an Auto Scaling Group, allowing it to manage capacity in the ASG. When associated with the ECS service using the `ecs_service_capacity_provider_name` input, the capacity provider responds to deployments in the ECS service. When a service is deployed, the capacity provider provisions new EC2 instances to meet the estimated requirements of the deployment. To allow this, the Auto Scaling Group must have a maximum capacity slightly greater than its desired capacity, allowing the desired capacity to increase to accept the new deployment. If the deployment is successful, connections are drained from the old tasks and unused EC2 instances are terminated as the desired capacity is reduced. The deployment lifecycle is managed entirely by the combination of the capacity provider, Auto Scaling Group and ECS service.
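The external resources described above could be sketched as follows. This assumes a pre-existing Auto Scaling Group (`aws_autoscaling_group.ecs`) and ECS cluster (`aws_ecs_cluster.this`); those names and the capacity provider name are illustrative.

```hcl
resource "aws_ecs_capacity_provider" "this" {
  name = "my-capacity-provider"

  auto_scaling_group_provider {
    auto_scaling_group_arn = aws_autoscaling_group.ecs.arn

    # Let ECS manage the ASG desired capacity during deployments
    managed_scaling {
      status          = "ENABLED"
      target_capacity = 100
    }
  }
}

resource "aws_ecs_cluster_capacity_providers" "this" {
  cluster_name       = aws_ecs_cluster.this.name
  capacity_providers = [aws_ecs_capacity_provider.this.name]
}

# The provider name is then passed to this module:
# ecs_service_capacity_provider_name = aws_ecs_capacity_provider.this.name
```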
See the article https://aws.amazon.com/blogs/containers/deep-dive-on-amazon-ecs-cluster-auto-scaling/ for further details of ECS Cluster Auto Scaling using capacity providers.
The module can optionally create an EFS file system, mount targets and an access point, as well as a dedicated Security Group for the EFS mount targets. The input `use_efs_persistence` should be set to `true` if this is desired. An EFS mount target is created for each subnet in the input `vpc_subnet_ids`, allowing EFS to be accessed from inside the subnets specified. Note that the list input `vpc_subnet_ids` must have a non-zero length if `use_efs_persistence` is true, as ECS services deployed in a VPC require the mount targets to exist in each subnet used: the mount targets allow the DNS address of the EFS file system to be resolved.
A reference to the EFS file system is created in the ECS Task Definition. If `use_efs_persistence` is set to `true`, a reference is created between the volume and the EFS file system for each item in the input `ecs_task_def_volumes`, effectively mounting the ECS volume on the EFS file system.
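A minimal sketch of enabling EFS persistence, with example subnet IDs and a hypothetical volume name:

```hcl
module "my_service" {
  source = "github.com/cambridge-collection/terraform-aws-workload-ecs.git?ref=1.0.0"

  use_efs_persistence = true

  # One EFS mount target is created per subnet; must be non-empty here
  vpc_subnet_ids = ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"]

  # Each named volume is mapped to the EFS file system in the task definition
  ecs_task_def_volumes = ["service-data"]
  # ... other required inputs omitted for brevity
}
```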
The EFS Access Point is used to modify the permissions on the EFS file system. In testing, this was necessary to enable ECS to mount the file system correctly. By default the access point sets the mount to be owned by the root user on the host, with permissions allowing read, write and execute access. Changing the default settings may leave ECS unable to modify the permissions on the root directory, or prevent Docker on the host from creating files in the file system when the container user is non-root.
Note that services deployed in a VPC that need access to EFS may need a VPC Endpoint for the service `elasticfilesystem` if they don't have a route to the public interface for EFS.
Files can be stored in S3 if needed by ECS. There are two distinct S3 bucket inputs, for use by either ECS tasks or ECS task execution.

The inputs `s3_task_execution_bucket` and `s3_task_execution_additional_buckets` are used to control IAM permissions for task execution requiring access to S3. The input `s3_task_execution_bucket` is the name of the main bucket required by ECS task execution, which can be omitted if no S3 permissions are needed. The input `s3_task_execution_additional_buckets` is a list of additional bucket names that may also be needed for task execution. Objects can be uploaded to the bucket named in the input `s3_task_execution_bucket` using the input `s3_task_execution_bucket_objects`, which is a map of bucket paths and file contents. Note that if the input `s3_task_execution_bucket_objects` is supplied, `s3_task_execution_bucket` must also be defined.
The input `s3_task_bucket` is used to control IAM permissions for ECS tasks requiring access to S3. Objects can be uploaded to the task bucket using the input `s3_task_bucket_objects`: this is a map of bucket paths and file contents. If the input `s3_task_bucket_objects` is supplied, the input `s3_task_bucket` must also be defined.
Both the `s3_task_execution_bucket_objects` and `s3_task_bucket_objects` inputs are marked `sensitive`, meaning the contents of the uploaded files are not displayed in Terraform output.
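For instance, static configuration could be uploaded alongside the service like this. The bucket name, object key and local file path are hypothetical.

```hcl
module "my_service" {
  source = "github.com/cambridge-collection/terraform-aws-workload-ecs.git?ref=1.0.0"

  s3_task_execution_bucket = "my-static-config-bucket"

  # Map of bucket keys to file contents; the input is marked sensitive,
  # so file contents are not shown in Terraform output
  s3_task_execution_bucket_objects = {
    "my-service/app.conf" = file("${path.module}/files/app.conf")
  }
  # ... other required inputs omitted for brevity
}
```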
The input `datasync_s3_objects_to_efs` can be used to enable AWS DataSync between an S3 bucket and EFS. Note this has no effect if the input `use_efs_persistence` is set to false.
If `datasync_s3_objects_to_efs` and `use_efs_persistence` are both true, DataSync source and target locations will be built. The DataSync source is the S3 bucket named in the input `s3_task_execution_bucket`; the target is the EFS file system created when `use_efs_persistence` is set to `true`. A DataSync task will be created allowing data to be transferred between S3 and EFS. Additionally, security group rules will be created on the security group `aws_security_group.efs` allowing traffic on port 2049 (the NFS protocol) to and from the VPC CIDR: this is a requirement for DataSync communication.
Note that if `datasync_s3_objects_to_efs` is set to `true`, the input `s3_task_execution_bucket` must be supplied.
The input `datasync_s3_subdirectory` can be set to sync a specific path in S3. If omitted this defaults to the `name_prefix` path: it is assumed that the `s3_task_execution_bucket` will be shared by several services, and by default the `name_prefix` is used to distinguish them.
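Putting the DataSync inputs together, a sketch with a hypothetical bucket name and subdirectory:

```hcl
module "my_service" {
  source = "github.com/cambridge-collection/terraform-aws-workload-ecs.git?ref=1.0.0"

  use_efs_persistence        = true                      # required, or DataSync inputs have no effect
  datasync_s3_objects_to_efs = true
  s3_task_execution_bucket   = "my-static-config-bucket" # the DataSync source bucket
  datasync_s3_subdirectory   = "/my-service/"            # defaults to the name_prefix path if omitted
  # ... other required inputs omitted for brevity
}
```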
When `ecs_network_mode` is set to "awsvpc", AWS assigns the task a private IP address inside the VPC. This allows the task to be assigned its own network configuration. This is configured as a dynamic `network_configuration` block on the `aws_ecs_service.this` resource. Subnets must be specified with the `vpc_subnet_ids` input variable. This input is not required when `ecs_network_mode` is set to "bridge" (the default value). When using the `awsvpc` network mode, additional security groups for the task can be specified with the optional input `vpc_security_groups_extra`.
Attempting to use the `network_configuration` block in `aws_ecs_service.this` when `ecs_network_mode` is set to anything other than `awsvpc` leads to an error:

```
InvalidParameterException: Network Configuration is not valid for the given networkMode of this task definition.
```
The inputs `vpc_subnet_ids` and `vpc_security_groups_extra` are ignored if the `ecs_network_mode` value is not "awsvpc".
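A sketch of running the service in awsvpc mode. The subnet and security group IDs are example values.

```hcl
module "my_service" {
  source = "github.com/cambridge-collection/terraform-aws-workload-ecs.git?ref=1.0.0"

  ecs_network_mode = "awsvpc"

  # Required when ecs_network_mode is "awsvpc"
  vpc_subnet_ids = ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"]

  # Optional extra security groups attached to the task's network interface
  vpc_security_groups_extra = ["sg-0123456789abcdef0"]
  # ... other required inputs omitted for brevity
}
```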
No requirements.
| Name | Version |
|------|---------|
| aws | n/a |
| aws.us-east-1 | n/a |
| external | n/a |
No modules.
| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| account_id | AWS Account ID used to interpolate ECS Service IAM role ARN | string | n/a | yes |
| acm_certificate_arn | ARN of an existing certificate in AWS Certificate Manager | string | null | no |
| acm_certificate_arn_us-east-1 | ARN of an existing certificate in the us-east-1 AWS Region of AWS Certificate Manager, for use by the CloudFront Distribution | string | null | no |
| acm_certificate_validation_timeout | Length of time to wait for the public ACM certificate to validate | string | "10m" | no |
| acm_create_certificate | Whether to create a certificate in AWS Certificate Manager | bool | true | no |
| alb_arn | ARN of the ALB used by the listener | string | n/a | yes |
| alb_dns_name | DNS name of the ALB used by the CloudFront distribution | string | n/a | yes |
| alb_listener_arn | The Application Load Balancer Listener ARN to add the forward rule and certificate to | string | n/a | yes |
| alb_listener_rule_priority | The priority for the rule, between 1 and 50000. Leaving it unset automatically sets the rule with the next available priority after the currently existing highest rule. A listener can't have multiple rules with the same priority | string | null | no |
| alb_security_group_id | ID of the ALB Security Group for creating ingress to the ALB | string | n/a | yes |
| alb_target_group_deregistration_delay | Amount of time for the ELB to wait before changing the state of a deregistering target from draining to unused | number | 300 | no |
| alb_target_group_health_check_healthy_threshold | The number of checks before a target is registered as healthy | number | 2 | no |
| alb_target_group_health_check_interval | Time in seconds between health checks | number | 60 | no |
| alb_target_group_health_check_path | Path for health checks on the service | string | "/" | no |
| alb_target_group_health_check_status_code | HTTP status code to use in the target group health check | string | "200" | no |
| alb_target_group_health_check_timeout | Time in seconds after which no response from a target means a failed health check | number | 10 | no |
| alb_target_group_health_check_unhealthy_threshold | The number of checks before a target is registered as unhealthy | number | 5 | no |
| alb_target_group_port | Port number to use for the target group | number | n/a | yes |
| alb_target_group_protocol | Protocol to use for the target group | string | "HTTP" | no |
| alb_target_group_slow_start | Amount of time for targets to warm up before the load balancer sends them a full share of requests | number | 0 | no |
| allow_private_access | Whether to allow private access to the service | bool | false | no |
| alternative_domain_names | List of additional domain names to add to the ALB listener rule and CloudFront distribution | list(string) | [] | no |
| asg_name | Name of the Auto Scaling Group for registering with the ALB Target Group | string | n/a | yes |
| asg_security_group_id | ID of the ASG Security Group for creating ingress from the ALB | string | n/a | yes |
| cloudfront_access_logging | Whether to log CloudFront requests | bool | false | no |
| cloudfront_access_logging_bucket | S3 bucket name for CloudFront access logs | string | null | no |
| cloudfront_allowed_methods | List of methods allowed by the CloudFront Distribution | list(string) | [ | no |
| cloudfront_cached_methods | List of methods cached by the CloudFront Distribution | list(string) | [ | no |
| cloudfront_origin_connection_attempts | Number of times that CloudFront attempts to connect to the origin. Must be between 1 and 3 | number | 3 | no |
| cloudfront_origin_read_timeout | Read timeout for the CloudFront origin | number | 60 | no |
| cloudfront_viewer_request_function_arn | ARN of a CloudFront Function to add to the CloudFront Distribution on viewer request | string | null | no |
| cloudfront_viewer_response_function_arn | ARN of a CloudFront Function to add to the CloudFront Distribution on viewer response | string | null | no |
| cloudfront_waf_acl_arn | ARN of the WAF Web ACL for use by CloudFront | string | n/a | yes |
| cloudmap_associate_vpc_ids | List of VPC IDs to associate with Cloud Map Service Discovery | list(string) | [] | no |
| cloudwatch_log_group_arn | ARN of the CloudWatch Log Group for adding to the IAM task execution role policy | string | n/a | yes |
| datasync_bytes_per_second | Limits the bandwidth used by a DataSync task | number | -1 | no |
| datasync_overwrite_mode | Specifies whether DataSync should modify or preserve data at the destination location | string | "ALWAYS" | no |
| datasync_preserve_deleted_files | Specifies whether files in the destination location that don't exist in the source should be preserved | string | "PRESERVE" | no |
| datasync_s3_objects_to_efs | Whether to use DataSync to replicate S3 objects to the EFS file system | bool | false | no |
| datasync_s3_source_bucket_name | Name of an S3 bucket to use as the DataSync source | string | null | no |
| datasync_s3_subdirectory | Allows a custom S3 subdirectory for the DataSync source to be specified | string | "/" | no |
| datasync_s3_to_efs_pattern | Pattern to filter the DataSync transfer task from S3 to EFS | string | null | no |
| datasync_transfer_mode | By default DataSync copies only data or metadata that is new or has different content from the source location to the destination location | string | "CHANGED" | no |
| domain_name | Domain name to be used for the ACM certificate and Route 53 record | string | n/a | yes |
| ecr_repositories_exist | Whether the ECR repositories in ecr_repository_names already exist | bool | false | no |
| ecr_repository_force_delete | Whether to delete non-empty ECR repositories | bool | false | no |
| ecr_repository_names | List of names of ECR repositories required by this workload | list(string) | [] | no |
| ecs_cluster_arn | ARN of the ECS cluster to which this workload should be deployed | string | n/a | yes |
| ecs_network_mode | Networking mode specified in the ECS Task Definition. One of host, bridge, awsvpc | string | "bridge" | no |
| ecs_service_capacity_provider_name | Name of an ECS Capacity Provider | string | null | no |
| ecs_service_container_name | Name of the container to associate with the load balancer configuration in the ECS service | string | n/a | yes |
| ecs_service_container_port | Container port number associated with the load balancer configuration in the ECS service. This must match a container port in the container definition port mappings | number | n/a | yes |
| ecs_service_deployment_maximum_percent | Maximum percentage of tasks allowed to run during a deployment (percentage of desired count) | number | 200 | no |
| ecs_service_deployment_minimum_healthy_percent | Minimum percentage of tasks to keep running during a deployment (percentage of desired count) | number | 100 | no |
| ecs_service_desired_count | Sets the desired count for the ECS Service | number | 1 | no |
| ecs_service_iam_role | ARN of an IAM role to call the load balancer for non-awsvpc network modes. AWSServiceRoleForECS is suitable, but AWS will generate an error if the value is used and the role already exists in the account | string | null | no |
| ecs_service_max_capacity | Sets the maximum capacity for the ECS Service | number | 2 | no |
| ecs_service_min_capacity | Sets the minimum capacity for the ECS Service | number | 1 | no |
| ecs_service_scheduling_strategy | ECS Service scheduling strategy, either REPLICA or DAEMON | string | "REPLICA" | no |
| ecs_task_def_container_definitions | Container definition string for the ECS Task Definition. See https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_ContainerDefinition.html | string | n/a | yes |
| ecs_task_def_cpu | Number of CPU units used by the task | number | null | no |
| ecs_task_def_memory | Amount (in MiB) of memory used by the task. Note if this is unset, all container definitions must set memory and/or memoryReservation | number | 1024 | no |
| ecs_task_def_volumes | List of volume names to attach to the ECS Task Definition | list(string) | [] | no |
| efs_access_point_id | ID of an existing EFS Access Point | string | null | no |
| efs_access_point_posix_user_gid | POSIX group ID used for all file system operations using the EFS access point. The default maps to the root user on Amazon Linux | number | 0 | no |
| efs_access_point_posix_user_secondary_gids | Secondary POSIX group IDs used for all file system operations using the EFS access point | list(number) | [] | no |
| efs_access_point_posix_user_uid | POSIX user ID used for all file system operations using the EFS access point. The default maps to the root user on Amazon Linux | number | 0 | no |
| efs_access_point_root_directory_path | Root directory for the EFS access point | string | "/" | no |
| efs_access_point_root_directory_permissions | POSIX permissions to apply to the EFS root directory, in the format of an octal number representing the mode bits | number | 777 | no |
| efs_create_file_system | Whether to create an EFS File System to persist data | bool | false | no |
| efs_file_system_id | ID of an existing EFS File System | string | null | no |
| efs_file_system_provisioned_throughput | The throughput, measured in MiB/s, to provision for the file system | number | null | no |
| efs_file_system_throughput_mode | Throughput mode for the file system. Valid values: bursting, provisioned, or elastic | string | "bursting" | no |
| efs_nfs_mount_port | NFS protocol port for EFS mounts | number | 2049 | no |
| efs_security_group_id | ID of an existing EFS Security Group to allow access to the ASG | string | null | no |
| efs_use_existing_filesystem | Whether to use an existing EFS file system | bool | false | no |
| efs_use_iam_task_role | Whether to use the Amazon ECS task IAM role when mounting EFS | bool | true | no |
| iam_task_additional_policies | Map of IAM policies to add to the ECS task permissions. Values should be policy ARNs; keys are descriptive strings | map(string) | {} | no |
| ingress_security_group_id | ID of a security group to grant access to container instances | string | null | no |
| name_prefix | Prefix to add to resource names | string | n/a | yes |
| route53_zone_id | ID of the Route 53 Hosted Zone for records | string | n/a | yes |
| s3_task_bucket_objects | Map of S3 bucket keys (file names) and file contents for upload to the task bucket | map(string) | {} | no |
| s3_task_buckets | Names of the S3 buckets for use by ECS tasks on the host (i.e. running containers) | list(string) | [] | no |
| s3_task_execution_additional_buckets | Names of additional buckets for adding to the task execution IAM role permissions | list(string) | [] | no |
| s3_task_execution_bucket | Name of the bucket for storage of static data for services | string | null | no |
| s3_task_execution_bucket_objects | Map of S3 bucket keys (file names) and file contents for upload to the task execution bucket | map(string) | {} | no |
| ssm_task_execution_parameter_arns | ARNs of SSM parameters for adding to the task execution IAM role permissions | list(string) | [] | no |
| tags | Map of tags for adding to resources | map(string) | {} | no |
| update_ingress_security_group | Whether to update an external security group by creating an egress rule to this service | bool | false | no |
| vpc_id | ID of the VPC for the deployment | string | n/a | yes |
| vpc_security_groups_extra | Additional VPC Security Groups to add to the service | list(string) | [] | no |
| vpc_subnet_ids | VPC Subnet IDs to use with EFS Mount Points | list(string) | [] | no |
| Name | Description |
|------|-------------|
| ecr_repository_urls | Map of ECR Repository name keys and Repository URLs |
| link | Link to connect to the service |
| name_prefix | This is a convenience for recycling into the task definition template |
| private_access_host | Route 53 record name for the A record created by Cloud Map Service Discovery |
| private_access_port | Port number for accessing the service via the private access host name |