...
install terraform https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli
ensure you have AWS CLI installed and configured https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html
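To confirm the CLI is authenticated before going further, a quick sanity check (this call simply returns the account ID of the configured credentials):

```shell
# should print your 12-digit AWS account ID; an error means the CLI is not configured
aws sts get-caller-identity --query Account --output text
```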
download the terraform script found at https://github.com/Panintelligence/terraform-ecs
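One way to fetch the script is with git, using the repository URL above (the checkout directory name is assumed to match the repository):

```shell
# clone the Terraform configuration and work from its root directory
git clone https://github.com/Panintelligence/terraform-ecs.git
cd terraform-ecs
```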
create the “hosted_zone_edit_role” role and its permissions policy
```bash
# requires HOSTED_ZONE_ID and ACCOUNT_ID to be exported first (see the environment variables step).
# Heredocs are used so the variables expand; they would not inside single-quoted JSON.
aws iam create-policy \
  --policy-name pi-hosted-zone-edit \
  --policy-document "$(cat <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Action": [
        "route53:ListTagsForResource",
        "route53:ListResourceRecordSets",
        "route53:GetHostedZone",
        "route53:ChangeResourceRecordSets"
      ],
      "Resource": "arn:aws:route53:::hostedzone/${HOSTED_ZONE_ID}"
    },
    {
      "Sid": "",
      "Effect": "Allow",
      "Action": "route53:ListHostedZones",
      "Resource": "*"
    },
    {
      "Sid": "",
      "Effect": "Allow",
      "Action": "route53:GetChange",
      "Resource": "arn:aws:route53:::change/*"
    }
  ]
}
EOF
)"

# create the role that will carry the policy
aws iam create-role \
  --role-name hosted_zone_edit_role \
  --assume-role-policy-document "$(cat <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::${ACCOUNT_ID}:root" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
)"

# attach the policy so the role actually grants the Route 53 permissions
aws iam attach-role-policy \
  --role-name hosted_zone_edit_role \
  --policy-arn "arn:aws:iam::${ACCOUNT_ID}:policy/pi-hosted-zone-edit"
```
configure your access key and secret key before executing these scripts
```bash
export AWS_ACCESS_KEY_ID="anaccesskey"
export AWS_SECRET_ACCESS_KEY="asecretkey"
export AWS_REGION="us-west-1"
export DEPLOYMENT_NAME="sampledeployment"
export HOSTED_ZONE_ID="your aws hosted zone id"
export CERTIFICATE_ARN="your certificate arn"
export HOSTED_ZONE_EDIT_ROLE_ARN="role that permits editing of your hosted zone"
export DASHBOARD_DOCKER_TAG="2024_04"
export RENDERER_DOCKER_TAG="2024_04"
export PIRANA_DOCKER_TAG="2024_04"
export DB_PASSWORD="5UP3RsECUR3p455W0Rd123!"
export DB_USERNAME="pi_db_admin"
export DOCKER_USERNAME="yourgithubusername"
export DOCKER_PASSWORD="yourgithubaccesstoken"
export LICENCE_KEY="panintelligence-licence"
export DOMAIN="example.com"
export ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
export STATE_BUCKET="${ACCOUNT_ID}-panintelligence-tfstate"
```
create the s3 state bucket
```bash
aws s3api create-bucket \
  --bucket "${STATE_BUCKET}" \
  --create-bucket-configuration LocationConstraint="${AWS_REGION}"
# note: omit --create-bucket-configuration when AWS_REGION is us-east-1
```
create the efs_prep lambda function
```bash
<project_dir>/build_lambda.sh
```
create your ACM certificate as per instructions https://docs.aws.amazon.com/acm/latest/userguide/gs-acm-request-public.html
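If you prefer the CLI to the console walkthrough linked above, a request might look like the following sketch. The subdomain `dashboard.${DOMAIN}` is an assumption — use whichever name your deployment will serve; with DNS validation you must then add the validation records to your hosted zone before the certificate is issued.

```shell
# request a public certificate via DNS validation; prints the certificate ARN
# (export this value as CERTIFICATE_ARN for the terraform steps)
aws acm request-certificate \
  --domain-name "dashboard.${DOMAIN}" \
  --validation-method DNS \
  --query CertificateArn --output text
```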
initialise the terraform configuration
```bash
terraform init \
  -backend-config="bucket=${STATE_BUCKET}" \
  -backend-config="region=${AWS_REGION}" \
  -backend-config="key=pi_dashboard/${DEPLOYMENT_NAME}-terraform.tfstate"
```
plan the changes
```bash
terraform plan -out=plan \
  -var="deployment_name=${DEPLOYMENT_NAME}" \
  -var="hosted_zone_id=${HOSTED_ZONE_ID}" \
  -var="certificate_arn=${CERTIFICATE_ARN}" \
  -var="hosted_zone_edit_role_arn=${HOSTED_ZONE_EDIT_ROLE_ARN}" \
  -var="dashboard_docker_tag=${DASHBOARD_DOCKER_TAG}" \
  -var="renderer_docker_tag=${RENDERER_DOCKER_TAG}" \
  -var="dashboard_db_password=${DB_PASSWORD}" \
  -var="dashboard_db_username=${DB_USERNAME}" \
  -var="docker_hub_credentials={\"username\":\"${DOCKER_USERNAME}\",\"password\":\"${DOCKER_PASSWORD}\"}" \
  -var="licence_key=${LICENCE_KEY}" \
  -var="region=${AWS_REGION}"
```
apply the configuration to your target aws account
```bash
terraform apply plan
```
invoke the configuration lambda
```bash
aws lambda invoke \
  --function-name "${DEPLOYMENT_NAME}_dashboard_prep" \
  --payload '{}' \
  --log-type Tail \
  out
# with AWS CLI v2, also pass: --cli-binary-format raw-in-base64-out
```
Deleting EFS
remove the EFS backup vault
```bash
EFS_VAULT_NAME="panintelligence_efs_backup_${DEPLOYMENT_NAME}"
EFS_BACKUP_ARN=$(aws backup list-recovery-points-by-backup-vault \
  --backup-vault-name "${EFS_VAULT_NAME}" \
  --query 'RecoveryPoints[].RecoveryPointArn' \
  --output text)
aws backup delete-recovery-point \
  --backup-vault-name "${EFS_VAULT_NAME}" \
  --recovery-point-arn "${EFS_BACKUP_ARN}"
```
tear down using terraform scripts
```bash
terraform plan -destroy -out=plan \
  -var="deployment_name=${DEPLOYMENT_NAME}" \
  -var="hosted_zone_id=${HOSTED_ZONE_ID}" \
  -var="certificate_arn=${CERTIFICATE_ARN}" \
  -var="hosted_zone_edit_role_arn=${HOSTED_ZONE_EDIT_ROLE_ARN}" \
  -var="dashboard_docker_tag=${DASHBOARD_DOCKER_TAG}" \
  -var="renderer_docker_tag=${RENDERER_DOCKER_TAG}" \
  -var="dashboard_db_password=${DB_PASSWORD}" \
  -var="dashboard_db_username=${DB_USERNAME}" \
  -var="docker_hub_credentials={\"username\":\"${DOCKER_USERNAME}\",\"password\":\"${DOCKER_PASSWORD}\"}" \
  -var="licence_key=${LICENCE_KEY}" \
  -var="region=${AWS_REGION}"

# apply the destroy plan to remove the deployed resources
terraform apply plan
```
remove s3 terraform state files and bucket
```bash
aws s3 rm "s3://${STATE_BUCKET}" --recursive
aws s3api delete-bucket --bucket "${STATE_BUCKET}"
```
Resilience
It’s a good idea to deploy your dashboard across more than one availability zone and to employ auto-scaling on the analytics, renderer, and dashboard services. You must only ever run a single scheduler task. Because the dashboard is Java-based and ringfences its memory at startup, memory utilisation is a poor scaling signal; use CPU utilisation as the metric that triggers scaling.
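As a sketch of CPU-based scaling via Application Auto Scaling, the commands below register the dashboard service as a scalable target and attach a target-tracking policy on average CPU. The cluster and service names (`${DEPLOYMENT_NAME}_cluster`, `${DEPLOYMENT_NAME}_dashboard`) and the 70% target are assumptions — substitute the names your Terraform deployment actually created, and repeat for the analytics and renderer services (but never the scheduler).

```shell
# register the ECS service so its desired count can be scaled between 2 and 4 tasks
aws application-autoscaling register-scalable-target \
  --service-namespace ecs \
  --resource-id "service/${DEPLOYMENT_NAME}_cluster/${DEPLOYMENT_NAME}_dashboard" \
  --scalable-dimension ecs:service:DesiredCount \
  --min-capacity 2 --max-capacity 4

# track average CPU across the service's tasks, scaling out above ~70%
aws application-autoscaling put-scaling-policy \
  --service-namespace ecs \
  --resource-id "service/${DEPLOYMENT_NAME}_cluster/${DEPLOYMENT_NAME}_dashboard" \
  --scalable-dimension ecs:service:DesiredCount \
  --policy-name "${DEPLOYMENT_NAME}_dashboard_cpu_scaling" \
  --policy-type TargetTrackingScaling \
  --target-tracking-scaling-policy-configuration '{
    "TargetValue": 70.0,
    "PredefinedMetricSpecification": {
      "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
    }
  }'
```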
...