# Amazon Web Services Installation
This tutorial shows how to set up a private Amazon Elastic Kubernetes Service (Amazon EKS) cluster with full Capact installation using Terraform.
## Architecture
NOTE: For now the worker nodes are deployed only in a single availability zone.
## Prerequisites
- S3 bucket for the remote Terraform state file
- AWS account with AdministratorAccess permissions on it
- A domain name for the Capact installation
- Terraform 0.15 or newer
- AWS CLI v2
To configure the AWS CLI, follow this guide. If you use AWS SSO for your account, you can configure SSO for the AWS CLI instead of creating an IAM user. This page shows how to configure the AWS CLI with AWS SSO.
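A minimal sketch of the AWS CLI setup, assuming an IAM user (or an SSO profile) with AdministratorAccess; the exact prompts depend on your AWS CLI version:

```bash
# Configure static credentials interactively (access key, secret key, region, output format):
aws configure

# ...or, if your account uses AWS SSO, configure an SSO profile instead:
aws configure sso

# Verify that the credentials work and point at the expected account:
aws sts get-caller-identity
```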
## Installation
Set the required environment variables by running:
```bash
export CAPACT_NAME={name_of_the_environment}
export CAPACT_REGION={aws_region_in_which_to_deploy_capact}
export CAPACT_DOMAIN_NAME={domain_name_used_for_the_capact_environment}
export TERRAFORM_STATE_BUCKET={s3_bucket_for_the_remote_statefile} # bucket needs to exist
```

Configure optional parameters.
To select a specific Capact version, set the following environment variable:
```bash
export CAPACT_VERSION={capact_version} # possible values: @local, @latest, x.y.z e.g. 0.4.0
```
By default, the cluster worker nodes are created in a single availability zone. To increase the number of availability zones in which the cluster worker nodes are created, run:
```bash
export EKS_AZ_COUNT={number_of_availability_zones}
```
To enable Amazon Elastic File System configuration for the EKS cluster, run:
```bash
export EKS_EFS_ENABLED=true
```
If this option is enabled, after following this tutorial the `efs-sc` StorageClass will be available to use in your Kubernetes cluster.

To add custom flags for the `terraform apply` command, set the `CAPACT_TERRAFORM_OPTS` environment variable. For example, run:

```bash
export CAPACT_TERRAFORM_OPTS="-var worker_group_max_size=4"
```
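If you enabled EFS, the following is an illustrative sketch of how the `efs-sc` StorageClass mentioned above could be used once the cluster is up. The claim name, access mode, and size are assumptions, and whether dynamic provisioning works out of the box depends on how the StorageClass is configured by the installation:

```bash
# Hypothetical example: create a PersistentVolumeClaim backed by the efs-sc StorageClass.
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-example # illustrative name
spec:
  accessModes:
    - ReadWriteMany # EFS supports shared read-write access
  storageClassName: efs-sc
  resources:
    requests:
      storage: 1Gi
EOF
```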
Clone the `capact` repository:

```bash
git clone https://github.com/capactio/capact
cd capact
```

Run the `./hack/eks/install.sh` script. When you see the "Do you want to perform these actions?" question, type `yes` in the command line and press Enter.

NOTE: This operation can take around 20 minutes to finish.
Configure the name servers for the Capact Route53 Hosted Zone in your DNS provider. To get the name servers for the hosted zone, check the generated `hack/eks/config/route53_zone_name_servers` file:

```bash
cat hack/eks/config/route53_zone_name_servers
```

```json
{
  "aws-1.cluster.capact.dev": [
    "ns-1260.awsdns-29.org",
    "ns-1586.awsdns-06.co.uk",
    "ns-444.awsdns-55.com",
    "ns-945.awsdns-54.net"
  ]
}
```

Wait for the DNS propagation.
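One way to check whether the delegation has propagated is to query the NS records for the Capact domain and compare them with the generated file; `dig` is assumed to be available on your machine:

```bash
# The returned name servers should match the ones listed in route53_zone_name_servers.
dig +short NS "${CAPACT_DOMAIN_NAME}"
```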
Export the KUBECONFIG environment variable pointing to the newly created EKS cluster:
```bash
export KUBECONFIG=$PWD/hack/eks/config/eks_kubeconfig
```
Verify that Cert Manager issued a certificate for the Gateway. Run:

```bash
kubectl get secret -n capact-system gateway-tls
```
If there is no such Secret resource, see the logs of the Cert Manager controller:

```bash
kubectl logs -l=app.kubernetes.io/component=controller -l=app=cert-manager -n capact-system
```
Cert Manager may have difficulties detecting the updated name servers. To solve this, delete the pod:

```bash
kubectl delete pod -l=app.kubernetes.io/component=controller -l=app=cert-manager -n capact-system
```
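To see why a certificate has not been issued yet, you can also inspect the cert-manager `Certificate` resources in the namespace. This is a generic cert-manager troubleshooting step rather than something specific to this installation:

```bash
# Shows the readiness and status conditions of the certificates requested for Capact.
kubectl get certificates -n capact-system
kubectl describe certificates -n capact-system
```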
## Access API server from the bastion host
The bastion host has `kubectl` preinstalled and a kubeconfig configured for the EKS cluster API server. SSH to the bastion host using the following command:

```bash
ssh -i hack/eks/config/bastion_ssh_private_key ubuntu@$(cat hack/eks/config/bastion_public_ip)
```
Now you should be able to query the API server:
```bash
kubectl get nodes
```
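As an additional sanity check, not part of the original steps, you can list the pods in the `capact-system` namespace to confirm that the Capact components are running; the exact pod names depend on the Capact version, so treat this as a rough verification:

```bash
# All pods should eventually reach the Running or Completed state.
kubectl get pods -n capact-system
```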
## Use Capact CLI from the bastion host
The bastion host can access the Capact Gateway and has the Capact CLI preinstalled, along with the `kubectl`, Argo, and Helm binaries.
SSH to the bastion host:
```bash
ssh -i hack/eks/config/bastion_ssh_private_key ubuntu@$(cat hack/eks/config/bastion_public_ip)
```
Verify that you can query the Capact Gateway and list all Interfaces in the Hub:

```bash
capact hub interfaces search
```
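If the CLI on the bastion host is not already authenticated against the Gateway, you may need to log in first. The Gateway URL and the credential placeholders below follow the login command shown later in this tutorial and are assumptions about your setup:

```bash
# Log the Capact CLI in to the Gateway before querying the Hub.
capact login https://gateway.{domain_name_used_for_the_capact_environment} -u {user} -p {password}
```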
## Connect to Capact Gateway from local machine
Only the bastion host can access the Capact Gateway. To connect to the Gateway from your local machine, you need to proxy your traffic through it.
Open an SSH tunnel:

```bash
ssh -f -M -N -S /tmp/gateway.${CAPACT_DOMAIN_NAME}.sock -i hack/eks/config/bastion_ssh_private_key ubuntu@$(cat hack/eks/config/bastion_public_ip) -L 127.0.0.1:8081:gateway.${CAPACT_DOMAIN_NAME}:443
```
Add a new entry to `/etc/hosts`:

```bash
export LINE_TO_APPEND="127.0.0.1 gateway.${CAPACT_DOMAIN_NAME}"
export HOSTS_FILE="/etc/hosts"
grep -qxF -- "$LINE_TO_APPEND" "${HOSTS_FILE}" || (echo "$LINE_TO_APPEND" | sudo tee -a "${HOSTS_FILE}" > /dev/null)
```

Test the connection:
- Using the Capact CLI:

  ```bash
  capact login https://gateway.${CAPACT_DOMAIN_NAME}:8081 -u {user} -p {password}
  ```

- Using a browser: navigate to the Gateway GraphQL Playground at `https://gateway.${CAPACT_DOMAIN_NAME}:8081/graphql`.
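A quick, tool-agnostic way to confirm that the tunnel and the `/etc/hosts` entry work is to send a plain HTTPS request to the Gateway; any HTTP response, even an authentication error, indicates that the connection itself is fine. This is only a connectivity sketch, not part of the original tutorial; add `-k` if the certificate is not yet trusted on your machine:

```bash
# Prints the HTTP status code returned by the Gateway through the SSH tunnel.
curl -sS -o /dev/null -w '%{http_code}\n' https://gateway.${CAPACT_DOMAIN_NAME}:8081/graphql
```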
When you are done, close the connection:
```bash
ssh -S /tmp/gateway.${CAPACT_DOMAIN_NAME}.sock -O exit $(cat hack/eks/config/bastion_public_ip)
```
## Cleanup
Remove the `ingress-nginx` and `public-ingress-nginx` Helm releases. This is required to deprovision the AWS ELBs. Run:

```bash
helm delete -n capact-system ingress-nginx
helm delete -n capact-system public-ingress-nginx
```

Remove the records from the Route53 Hosted Zone in the AWS Console. Only the entries for the apex SOA and NS should be left.
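If you prefer to verify from the command line which records remain, one possible check with the AWS CLI looks like the following; the `--query` expressions are just one way to format the output:

```bash
# Look up the hosted zone ID for the Capact domain and strip the "/hostedzone/" prefix.
ZONE_ID=$(aws route53 list-hosted-zones-by-name --dns-name "${CAPACT_DOMAIN_NAME}" \
  --query 'HostedZones[0].Id' --output text | cut -d'/' -f3)

# List the remaining record sets; only the apex SOA and NS entries should be left.
aws route53 list-resource-record-sets --hosted-zone-id "${ZONE_ID}" \
  --query 'ResourceRecordSets[].[Name,Type]' --output table
```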
Deprovision the EKS cluster and VPC:

```bash
cd hack/eks/terraform
# This command might fail. See the "Limitations and bugs" section.
terraform destroy -var domain_name=$CAPACT_DOMAIN_NAME
```

If the previous command failed, execute the following commands:

```bash
terraform state rm 'module.eks.kubernetes_config_map.aws_auth[0]'
terraform state rm 'kubernetes_storage_class.efs_storage_class'
terraform state rm 'kubernetes_service_account.efs_csi_driver_ctrl_sa'
terraform destroy -var domain_name=$CAPACT_DOMAIN_NAME
```
## Limitations and bugs
- There is an issue with the EKS module, where `terraform destroy` fails on the resource `module.eks.kubernetes_config_map.aws_auth[0]`. You don't have to worry about this; just remove the resource manually from the state file using `terraform state rm 'module.eks.kubernetes_config_map.aws_auth[0]'` and run `terraform destroy` again.