Deploying Using Containers

Containers are a repeatable, immutable, and portable way to run software on-premises or in the cloud. Coupled with container orchestration, they offer the opportunity to deploy software in a highly resilient and scalable fashion. At Panintelligence, we offer our software in containers to support a stable deployment.

Architecture overview

What are microservices?

Traditionally built software required components to be deployed next to each other, with mutual dependencies. A failure in any part of that software, or a package update that caused issues with other parts, would create difficulties. Containers make it possible to split software into smaller components that can still talk to each other but are deployed in a way that is no longer mutually reliant on other functions. Pi is designed this way.

Five distinct microservices make up the Pi architecture. These are:

Dashboard

This is the nucleus of the Pi infrastructure; very little will work without it.

Scheduler

This handles report generation and task scheduling for the dashboard.

Analytics

This is the analytics engine for the dashboard, formerly known as Pirana. This microservice is fundamental to producing the mushroom maps in the dashboard.

Renderer

This is the service that takes HTML charts and renders them into whatever shape the end user requires. The more resources you can dedicate to the renderer, the better. Fewer resources means slower renders.

excel-reader

This stages an Excel file into a SQLite database for consumption by the dashboard.

Why would I want to use separate containers?

Some elements of any software solution are more heavily accessed than others. By breaking it into smaller chunks, you have better control over scaling each part. Additionally, in the event of a failure, you don’t lose access to other functioning areas of the software, as you would with a monolith. Likewise, a misbehaving component will not collapse your entire stack. It also permits you to monitor individual components more closely for failures and develop automated recovery plans.

How do I migrate from a single container-based installation to separate containers?

Database

You will need to be able to execute mysqldump and mysql commands against your database. You could install MySQL locally, or you could make use of Docker by running the following commands on your local machine (note that the official MariaDB image needs a root password environment variable in order to start).

docker run -dt --name database -e MARIADB_ROOT_PASSWORD=changeme mariadb:latest
docker exec -it database bash

This will put you into an interactive session on the container, which has the SQL tools you need to perform a backup and restore. Please note that container filesystems are ephemeral: if you remove your container, any extract you've taken will be lost unless you copy it to your host filesystem. You can copy files from your container (for example with docker cp) by following this set of instructions.
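
For example, a minimal sketch: the path inside the container is an assumption (wherever you wrote the dump), and the container name matches the command above.

# copy the dump out of the container onto the host so it survives container removal
docker cp database:/sqldump{datenow}.sql .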

Panintelligence uses a repository database containing metadata that defines elements of the dashboard, such as categories, charts, users, user roles and permissions, and audit information. To back this up, you will need to perform the following action:

mysqldump \
  --add-drop-table \
  --add-drop-database \
  --databases \
  -u{db_username} \
  -p{db_password} \
  -h{db_host} \
  -P{db_port} \
  {db_schema_name} \
  --ignore-table={db_schema_name}.mis_user_cat_access_view_pi \
  --ignore-table={db_schema_name}.test_user_access \
  --ignore-table={db_schema_name}.mis_user_cat_access_view \
  > sqldump{datenow}.sql

You will then need to insert this data into your new database. Be mindful that if you have already created your database by launching Panintelligence and are trying to overwrite it, you will need to stop your dashboard server first. This is because the dashboard application holds table locks that will prevent the drop statements in the SQL script you generated when creating your backup.

mysql \
  -u{db_username} \
  -p{db_password} \
  -h{db_host} \
  -P{db_port} \
  {db_schema_name} < {yoursqlfile}.sql

Persistent items

You will now want to take a backup of the following items and mount the volumes into your new deployment (see the example after this list). You can find information about Docker volumes here.

  • Themes

    • ${source_dir}/Dashboard/tomcat/webapps/panMISDashboardResources/themes

  • Images

    • ${source_dir}/Dashboard/tomcat/webapps/panMISDashboardResources/images

  • Custom JDBC

    • ${source_dir}/Dashboard/tomcat/custom_jdbc_drivers

  • SVG

    • ${source_dir}/Dashboard/tomcat/webapps/panMISDashboardResources/svg
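
As a hedged illustration, the backed-up directories can be bind-mounted into the new dashboard container. The image tag is taken from elsewhere in this document, the internal paths match the Volumes section later on, and the PI_* environment variables described under "Configuring your containers" still need to be supplied.

# mount the persistent items from the old installation into the new dashboard container
docker run -d --name dashboard \
  -v ${source_dir}/Dashboard/tomcat/webapps/panMISDashboardResources/themes:/var/panintelligence/Dashboard/tomcat/webapps/panMISDashboardResources/themes \
  -v ${source_dir}/Dashboard/tomcat/webapps/panMISDashboardResources/images:/var/panintelligence/Dashboard/tomcat/webapps/panMISDashboardResources/images \
  -v ${source_dir}/Dashboard/tomcat/custom_jdbc_drivers:/var/panintelligence/Dashboard/tomcat/custom_jdbc_drivers \
  -v ${source_dir}/Dashboard/tomcat/webapps/panMISDashboardResources/svg:/var/panintelligence/Dashboard/tomcat/webapps/panMISDashboardResources/svg \
  panintelligence/server:2024_08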

 

Quick Start

Linked here is a docker-compose file including a MariaDB container, which is our reference deployment with only our software plus MariaDB (i.e. the most straightforward setup utilizing multiple containers).
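
As a minimal sketch of the shape such a deployment takes: the service names, passwords, licence value, and image tags are placeholders/assumptions, while the PI_* variables, port 8224, and volume paths are described later in this document.

version: "3.8"

services:
  database:
    image: mariadb:latest
    command: --lower_case_table_names=1   # matches the database setting used elsewhere in this document
    environment:
      MARIADB_ROOT_PASSWORD: changeme
      MARIADB_DATABASE: dashboard
      MARIADB_USER: pi_db_admin
      MARIADB_PASSWORD: changeme
    volumes:
      - db_data:/var/lib/mysql

  dashboard:
    image: panintelligence/server:2024_08
    depends_on:
      - database
    ports:
      - "8224:8224"
    environment:
      PI_EXTERNAL_DB: "true"
      PI_DB_HOST: database
      PI_DB_PORT: "3306"
      PI_DB_SCHEMA_NAME: dashboard
      PI_DB_USERNAME: pi_db_admin
      PI_DB_PASSWORD: changeme
      PI_LICENCE: "your licence here"
    volumes:
      - themes:/var/panintelligence/Dashboard/tomcat/webapps/panMISDashboardResources/themes
      - images:/var/panintelligence/Dashboard/tomcat/webapps/panMISDashboardResources/images
      - keys:/var/panintelligence/Dashboard/keys

  renderer:
    image: panintelligence/renderer:2024_04
    environment:
      RENDERER_DASHBOARD_URL: http://dashboard:8224/pi

volumes:
  db_data:
  themes:
  images:
  keys: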

Common Deployment Pattern

AWS - ECS

Terraform

  • install terraform Install Terraform | Terraform | HashiCorp Developer

  • ensure you have AWS CLI installed and configured https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html

  • download the terraform script found at https://github.com/Panintelligence/terraform-ecs

  • create the “hosted_zone_edit_role” and permission

    aws iam create-policy \
      --policy-name pi-hosted-zone-edit \
      --policy-document '{
        "Version": "2012-10-17",
        "Statement": [
          {
            "Sid": "",
            "Effect": "Allow",
            "Action": [
              "route53:ListTagsForResource",
              "route53:ListResourceRecordSets",
              "route53:GetHostedZone",
              "route53:ChangeResourceRecordSets"
            ],
            "Resource": "arn:aws:route53:::hostedzone/${HOSTED_ZONE_ID}"
          },
          {
            "Sid": "",
            "Effect": "Allow",
            "Action": "route53:ListHostedZones",
            "Resource": "*"
          },
          {
            "Sid": "",
            "Effect": "Allow",
            "Action": "route53:GetChange",
            "Resource": "arn:aws:route53:::change/*"
          }
        ]
      }'

    aws iam create-role \
      --role-name MyExampleRole \
      --assume-role-policy-document '{
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Principal": { "AWS": "arn:aws:iam::${ACCOUNT_ID}:root" },
            "Action": "sts:AssumeRole"
          }
        ]
      }'
  • configure your key and secret key before executing these scripts

    export AWS_ACCESS_KEY_ID="anaccesskey"
    export AWS_SECRET_ACCESS_KEY="asecretkey"
    export AWS_REGION="us-west-1"
    export DEPLOYMENT_NAME="sampledeployment"
    export HOSTED_ZONE_ID="your aws hosted zone id"
    export CERTIFICATE_ARN="your certificate arn"
    export HOSTED_ZONE_EDIT_ROLE_ARN="role that permits editing of your hosted zone"
    export DASHBOARD_DOCKER_TAG="2024_04"
    export RENDERER_DOCKER_TAG="2024_04"
    export PIRANA_DOCKER_TAG="2024_04"
    export DB_PASSWORD="5UP3RsECUR3p455W0Rd123!"
    export DB_USERNAME="pi_db_admin"
    export DOCKER_USERNAME="yourgithubusername"
    export DOCKER_PASSWORD="yourgithubaccesstoken"
    export LICENCE_KEY="panintelligence-licence"
    export DOMAIN="example.com"
    export ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
    export STATE_BUCKET="${ACCOUNT_ID}-panintelligence-tfstate"
  • create the s3 state bucket

    aws s3api create-bucket --bucket ${STATE_BUCKET} --create-bucket-configuration LocationConstraint=$AWS_REGION
  • create the efs_prep lambda function

    <project_dir>/build_lambda.sh
  • create your ACM certificate as per instructions https://docs.aws.amazon.com/acm/latest/userguide/gs-acm-request-public.html

  • initialise the terraform configuration

    terraform init \
      -backend-config="bucket=${STATE_BUCKET}" \
      -backend-config="region=${AWS_REGION}" \
      -backend-config="key=pi_dashboard/${DEPLOYMENT_NAME}-terraform.tfstate"
  • plan the changes

    terraform plan -out=plan \
      -var="deployment_name=${DEPLOYMENT_NAME}" \
      -var="hosted_zone_id=${HOSTED_ZONE_ID}" \
      -var="certificate_arn=${CERTIFICATE_ARN}" \
      -var="hosted_zone_edit_role_arn=${HOSTED_ZONE_EDIT_ROLE_ARN}" \
      -var="dashboard_docker_tag=${DASHBOARD_DOCKER_TAG}" \
      -var="renderer_docker_tag=${RENDERER_DOCKER_TAG}" \
      -var="dashboard_db_password=${DB_PASSWORD}" \
      -var="dashboard_db_username=${DB_USERNAME}" \
      -var="docker_hub_credentials={\"username\":\"${DOCKER_USERNAME}\",\"password\":\"${DOCKER_PASSWORD}\"}" \
      -var="licence_key=${LICENCE_KEY}" \
      -var="region=${AWS_REGION}"
  • apply the configuration to your target aws account

    terraform apply plan
  • invoke the configuration lambda

    aws lambda invoke --function-name ${DEPLOYMENT_NAME}_dashboard_prep --payload '{}' out --log-type Tail

Deleting EFS

  • remove the EFS backup vault

    EFS_VAULT_NAME="panintelligence_efs_backup_${DEPLOYMENT_NAME}"
    EFS_BACKUP_ARN=$(aws backup list-recovery-points-by-backup-vault --backup-vault-name "${EFS_VAULT_NAME}" --query 'RecoveryPoints[].RecoveryPointArn' --output text)
    aws backup delete-recovery-point --backup-vault-name "${EFS_VAULT_NAME}" --recovery-point-arn "${EFS_BACKUP_ARN}"
  • tear down using terraform scripts

    terraform plan -destroy -out=plan \
      -var="deployment_name=${DEPLOYMENT_NAME}" \
      -var="hosted_zone_id=${HOSTED_ZONE_ID}" \
      -var="certificate_arn=${CERTIFICATE_ARN}" \
      -var="hosted_zone_edit_role_arn=${HOSTED_ZONE_EDIT_ROLE_ARN}" \
      -var="dashboard_docker_tag=${DASHBOARD_DOCKER_TAG}" \
      -var="renderer_docker_tag=${RENDERER_DOCKER_TAG}" \
      -var="dashboard_db_password=${DB_PASSWORD}" \
      -var="dashboard_db_username=${DB_USERNAME}" \
      -var="docker_hub_credentials={\"username\":\"${DOCKER_USERNAME}\",\"password\":\"${DOCKER_PASSWORD}\"}" \
      -var="licence_key=${LICENCE_KEY}" \
      -var="region=${AWS_REGION}"
  • remove s3 terraform state files and bucket

    aws s3 rm s3://${STATE_BUCKET} --recursive
    aws s3api delete-bucket --bucket ${STATE_BUCKET}

Resilience

It’s a good idea to deploy your dashboard across more than one availability zone and to employ an auto-scaling group on the analytics, renderer, and dashboard services. You must only run a single Scheduler task. The dashboard is Java-based, which ring-fences its memory; as a result, you must use CPU as the metric that triggers scaling.
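
For example, with ECS service auto scaling via the AWS CLI, a CPU-based target-tracking policy might look like the following sketch (the cluster name, service name, capacities, and target value are assumptions):

# register the dashboard service as a scalable target
aws application-autoscaling register-scalable-target \
  --service-namespace ecs \
  --scalable-dimension ecs:service:DesiredCount \
  --resource-id service/panintelligence-cluster/dashboard \
  --min-capacity 1 \
  --max-capacity 3

# scale on average CPU utilisation rather than memory
aws application-autoscaling put-scaling-policy \
  --service-namespace ecs \
  --scalable-dimension ecs:service:DesiredCount \
  --resource-id service/panintelligence-cluster/dashboard \
  --policy-name dashboard-cpu-target-tracking \
  --policy-type TargetTrackingScaling \
  --target-tracking-scaling-policy-configuration '{
    "TargetValue": 70.0,
    "PredefinedMetricSpecification": { "PredefinedMetricType": "ECSServiceAverageCPUUtilization" }
  }'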

Container insights

To gain more insight into the performance of your instances running in an ECS cluster, it is important to enable Container Insights. Be aware that this incurs an additional cost; without Container Insights you will still receive basic telemetry on your service performance.
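
For example, Container Insights can be switched on per cluster with the AWS CLI (the cluster name is an assumption):

# enable Container Insights on an existing ECS cluster
aws ecs update-cluster-settings \
  --cluster panintelligence-cluster \
  --settings name=containerInsights,value=enabled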

Logs

You must enable a log group for each of your task definitions to ensure you receive logs; without this, the logs from the container will not be captured. As a prerequisite, create the custom log group before referencing it in your task definition (see the example command after the task definition snippet below).

 

{
  "containerDefinitions": [
    {
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/panintelligence/dashboard",
          "awslogs-region": "eu-west-1",
          "awslogs-create-group": "true",
          "awslogs-stream-prefix": "panintelligence"
        }
      }
    }
  ]
}
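
A hedged example of creating that log group up front with the AWS CLI (the region is an assumption; the group name matches the task definition above):

# create the custom log group referenced by the task definition
aws logs create-log-group --log-group-name /panintelligence/dashboard --region eu-west-1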

Kubernetes

Persistence

Panintelligence requires some files to be shared amongst pods. To achieve this, you will need to create a persistent volume with a storage capacity of at least 5Gi; however, it’s advisable to leave some room to grow (should you wish to include more custom JDBC drivers, images, or themes). An example is shown below.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: panintelligence-standard
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName:
  nfs:
    path: /volume1/pi
    server: your_nfs_server

For simplicity, you can copy this to a file pi-pv.yaml and execute kubectl apply -f pi-pv.yaml to create the persistent volume for use by the Panintelligence deployment in your cluster.
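
Pods consume this volume through a PersistentVolumeClaim; here is a minimal hedged sketch. The claim name is an assumption, and because the PersistentVolume above leaves storageClassName empty, the claim requests the empty class explicitly so the two can bind.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: panintelligence-standard-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 20Gi

Save this as pi-pvc.yaml and apply it with kubectl apply -f pi-pvc.yaml, then reference the claim from your pod or deployment volumes.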

Secrets

To access Panintelligence containers, we will need to whitelist your user; please contact your CSM, who will grant you this access. You will then need to create a secret for use by the Helm charts.

kubectl create secret docker-registry pi-docker \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<your-name> \
  --docker-password=<your-password>

Helm

Please find our helm charts at https://github.com/Panintelligence/helm

Ingress

Here is an example ingress for the dashboard.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: panintelligence-ingress
spec:
  rules:
    - http:
        paths:
          - path: /pi
            pathType: Prefix
            backend:
              service:
                name: testdashboard-dashboard-service
                port:
                  number: 8224
          - path: /panMISDashboardResources
            pathType: Prefix
            backend:
              service:
                name: testdashboard-dashboard-service
                port:
                  number: 8224

Azure

Azure allows for multiple deployment methodologies. The most commonly used are App Service and Kubernetes.

App Service

As previously stated, Panintelligence can be deployed as distinct web applications, which allows you to choose which components you wish to deploy based on your requirements, tailoring your deployment plan to your current needs.

Please note: Azure has deprecated its managed MariaDB service. We’re working to update this document to reflect this. You’re welcome to make these changes yourself in the meantime!

Execute the following statements:

Prerequisites

Please set these environment variables before executing any other scripts.

# --- modify these values below --- #
export NAME=sample
export LOCATION=uksouth
export DATABASEUSERNAME=sample
export DATABASEPASSWORD=5up3r5ecur3p455w0rd!
export DATABASENAME=pansample
export ENV=dev
export PI_LICENCE="some licence"
export DOCKERREGISTRYSERVERUSERNAME=docker-username
export DOCKERREGISTRYSERVERPASSWORD=docker-password
# --- modify these values above --- #
Dashboard

Please run the prerequisites before running this script in the Azure CLI.

az group create --name $NAME --location $LOCATION
az storage account create --resource-group $NAME --name "acct${NAME}" --location $LOCATION
export STORAGEKEY=$(az storage account keys list --resource-group $NAME --account-name "acct${NAME}" --query "[0].value" --output tsv)
az storage share create --name "themes${NAME}" --account-name "acct${NAME}" --account-key $STORAGEKEY
az storage share create --name "images${NAME}" --account-name "acct${NAME}" --account-key $STORAGEKEY
az storage share create --name "svg${NAME}" --account-name "acct${NAME}" --account-key $STORAGEKEY
az storage share create --name "customjdbc${NAME}" --account-name "acct${NAME}" --account-key $STORAGEKEY
az storage share create --name "keys${NAME}" --account-name "acct${NAME}" --account-key $STORAGEKEY
az appservice plan create --name $NAME --resource-group $NAME --sku B3 --is-linux
az mariadb server create --resource-group $NAME --name $DATABASENAME --location $LOCATION --admin-user $DATABASEUSERNAME --admin-password $DATABASEPASSWORD --sku-name B_Gen5_1 --version 10.3
az mariadb server firewall-rule create --name allAzureIPs --server $DATABASENAME --resource-group $NAME --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0
az mariadb server update --resource-group $NAME --name $DATABASENAME --ssl-enforcement Disabled
DATABASEHOST=$(az mariadb server list --query "[?name=='$DATABASENAME'].fullyQualifiedDomainName" --output tsv)
az mariadb server configuration set --resource-group $NAME --server $DATABASENAME --name lower_case_table_names --value 1
az mariadb server configuration set --resource-group $NAME --server $DATABASENAME --name sql_mode --value ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_AUTO_VALUE_ON_ZERO,NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
az mariadb server configuration set --resource-group $NAME --server $DATABASENAME --name log_bin_trust_function_creators --value ON
az webapp create --resource-group $NAME --plan $NAME --name "pan${NAME}${ENV}" --deployment-container-image-name "panintelligence/server:2024_08"
DEFAULTDOMAIN=$(az webapp show --name "pan${NAME}${ENV}" --resource-group $NAME --query 'defaultHostName' --output tsv)
az webapp config appsettings set --name "pan${NAME}${ENV}" --resource-group $NAME --settings PI_DB_HOST=$DATABASEHOST
az webapp config appsettings set --name "pan${NAME}${ENV}" --resource-group $NAME --settings PI_DB_USERNAME="$DATABASEUSERNAME@$DATABASEHOST"
az webapp config appsettings set --name "pan${NAME}${ENV}" --resource-group $NAME --settings PI_DB_PASSWORD=$DATABASEPASSWORD
az webapp config appsettings set --name "pan${NAME}${ENV}" --resource-group $NAME --settings PI_DB_SCHEMA_NAME=dashboard
az webapp config appsettings set --name "pan${NAME}${ENV}" --resource-group $NAME --settings PI_DB_PORT=3306
az webapp config appsettings set --name "pan${NAME}${ENV}" --resource-group $NAME --settings PI_EXTERNAL_DB="true"
az webapp config appsettings set --name "pan${NAME}${ENV}" --resource-group $NAME --settings PI_LICENCE=${PI_LICENCE}
az webapp config appsettings set --name "pan${NAME}${ENV}" --resource-group $NAME --settings PI_PROXY_ENABLED=true
az webapp config appsettings set --name "pan${NAME}${ENV}" --resource-group $NAME --settings PI_PROXY_SCHEME=https
az webapp config appsettings set --name "pan${NAME}${ENV}" --resource-group $NAME --settings PI_PROXY_HOST=$DEFAULTDOMAIN
az webapp config appsettings set --name "pan${NAME}${ENV}" --resource-group $NAME --settings PI_PROXY_PORT=443
az webapp config appsettings set --name "pan${NAME}${ENV}" --resource-group $NAME --settings PI_PROXY_IS_SECURE="true"
az webapp config appsettings set --name "pan${NAME}${ENV}" --resource-group $NAME --settings PI_TOMCAT_COOKIE_NAME=PROFESSIONALCOOKIENAME
az webapp config appsettings set --name "pan${NAME}${ENV}" --resource-group $NAME --settings WEBSITES_PORT=8224
az webapp config appsettings set --name "pan${NAME}${ENV}" --resource-group $NAME --settings DOCKER_REGISTRY_SERVER_USERNAME=$DOCKERREGISTRYSERVERUSERNAME
az webapp config appsettings set --name "pan${NAME}${ENV}" --resource-group $NAME --settings DOCKER_REGISTRY_SERVER_PASSWORD=$DOCKERREGISTRYSERVERPASSWORD
az webapp config appsettings set --name "pan${NAME}${ENV}" --resource-group $NAME --settings DOCKER_REGISTRY_SERVER_URL=https://index.docker.io
az webapp config storage-account add --resource-group $NAME --name pan${NAME}${ENV} --custom-id themes --storage-type AzureFiles --share-name themes${NAME} --account-name acct${NAME} --access-key $STORAGEKEY --mount-path /var/panintelligence/Dashboard/tomcat/webapps/panMISDashboardResources/themes
az webapp config storage-account add --resource-group $NAME --name pan${NAME}${ENV} --custom-id images --storage-type AzureFiles --share-name images${NAME} --account-name acct${NAME} --access-key $STORAGEKEY --mount-path /var/panintelligence/Dashboard/tomcat/webapps/panMISDashboardResources/images
az webapp config storage-account add --resource-group $NAME --name pan${NAME}${ENV} --custom-id svg --storage-type AzureFiles --share-name svg${NAME} --account-name acct${NAME} --access-key $STORAGEKEY --mount-path /var/panintelligence/Dashboard/tomcat/webapps/panMISDashboardResources/svg
az webapp config storage-account add --resource-group $NAME --name pan${NAME}${ENV} --custom-id customjdbc --storage-type AzureFiles --share-name customjdbc${NAME} --account-name acct${NAME} --access-key $STORAGEKEY --mount-path /var/panintelligence/Dashboard/tomcat/custom_jdbc_drivers
az webapp config storage-account add --resource-group $NAME --name pan${NAME}${ENV} --custom-id keys --storage-type AzureFiles --share-name keys${NAME} --account-name acct${NAME} --access-key $STORAGEKEY --mount-path /var/panintelligence/Dashboard/keys
az webapp config set --resource-group $NAME --name "pan${NAME}${ENV}" --generic-configurations '{"healthCheckPath": "/pi/version"}'
Renderer

Please run the prerequisites before running these scripts.

DATABASEHOST=$(az mariadb server list --query "[?name=='$DATABASENAME'].fullyQualifiedDomainName" --output tsv)
az webapp create --resource-group $NAME --plan $NAME --name "pan${NAME}${ENV}renderer" --deployment-container-image-name "panintelligence/renderer:2023_07.4"
az webapp config set --resource-group $NAME --name "pan${NAME}${ENV}renderer" --generic-configurations '{"healthCheckPath": "/version"}'
az webapp config appsettings set --name "pan${NAME}${ENV}renderer" --resource-group $NAME --settings WEBSITES_PORT=9915
az webapp config appsettings set --name "pan${NAME}${ENV}renderer" --resource-group $NAME --settings DOCKER_REGISTRY_SERVER_USERNAME=$DOCKERREGISTRYSERVERUSERNAME
az webapp config appsettings set --name "pan${NAME}${ENV}renderer" --resource-group $NAME --settings DOCKER_REGISTRY_SERVER_PASSWORD=$DOCKERREGISTRYSERVERPASSWORD
az webapp config appsettings set --name "pan${NAME}${ENV}renderer" --resource-group $NAME --settings DOCKER_REGISTRY_SERVER_URL=https://index.docker.io

After your service is verified as available, set up the renderer by logging into the dashboard and setting the PAN_RENDERER_URL to the URL of the renderer you’ve just deployed. It should look something like “pan<your environment name><environment>renderer”. If you apply a custom domain, this will change.

Scheduler
DEFAULTDOMAIN=$(az webapp show --name "pan${NAME}${ENV}" --resource-group $NAME --query 'defaultHostName' --output tsv)
DATABASEHOST=$(az mariadb server list --query "[?name=='$DATABASENAME'].fullyQualifiedDomainName" --output tsv)
az webapp create --resource-group $NAME --plan $NAME --name "pan${NAME}${ENV}scheduler" --deployment-container-image-name "panintelligence/scheduler:2023_07.4"
az webapp config set --resource-group $NAME --name "pan${NAME}${ENV}scheduler" --generic-configurations '{"healthCheckPath": "/version"}'
az webapp config appsettings set --name "pan${NAME}${ENV}scheduler" --resource-group $NAME --settings SCHEDULER_DASHBOARD_URL="https://${DEFAULTDOMAIN}/pi"
az webapp config appsettings set --name "pan${NAME}${ENV}scheduler" --resource-group $NAME --settings PI_DB_HOST=$DATABASEHOST
az webapp config appsettings set --name "pan${NAME}${ENV}scheduler" --resource-group $NAME --settings PI_DB_PASSWORD=$DATABASEPASSWORD
az webapp config appsettings set --name "pan${NAME}${ENV}scheduler" --resource-group $NAME --settings PI_DB_USERNAME="$DATABASEUSERNAME@$DATABASEHOST"
az webapp config appsettings set --name "pan${NAME}${ENV}scheduler" --resource-group $NAME --settings PI_DB_SCHEMA_NAME=dashboard
az webapp config appsettings set --name "pan${NAME}${ENV}scheduler" --resource-group $NAME --settings PI_DB_PORT=3306
az webapp config appsettings set --name "pan${NAME}${ENV}scheduler" --resource-group $NAME --settings WEBSITES_PORT=9917
az webapp config appsettings set --name "pan${NAME}${ENV}scheduler" --resource-group $NAME --settings DOCKER_REGISTRY_SERVER_USERNAME=$DOCKERREGISTRYSERVERUSERNAME
az webapp config appsettings set --name "pan${NAME}${ENV}scheduler" --resource-group $NAME --settings DOCKER_REGISTRY_SERVER_PASSWORD=$DOCKERREGISTRYSERVERPASSWORD
az webapp config appsettings set --name "pan${NAME}${ENV}scheduler" --resource-group $NAME --settings DOCKER_REGISTRY_SERVER_URL=https://index.docker.io
az webapp config storage-account add --resource-group $NAME --name pan${NAME}${ENV}scheduler --custom-id keys --storage-type AzureFiles --share-name keys${NAME} --account-name acct${NAME} --access-key $STORAGEKEY --mount-path /var/panintelligence/Dashboard/keys/
Analytics
az webapp create --resource-group $NAME --plan $NAME --name "pan${NAME}${ENV}pirana" --deployment-container-image-name "panintelligence/pirana:2023_07.4"
az webapp config set --resource-group $NAME --name "pan${NAME}${ENV}pirana" --generic-configurations '{"healthCheckPath": "/version"}'
az webapp config appsettings set --name "pan${NAME}${ENV}pirana" --resource-group $NAME --settings DOCKER_REGISTRY_SERVER_USERNAME=$DOCKERREGISTRYSERVERUSERNAME
az webapp config appsettings set --name "pan${NAME}${ENV}pirana" --resource-group $NAME --settings DOCKER_REGISTRY_SERVER_PASSWORD=$DOCKERREGISTRYSERVERPASSWORD
az webapp config appsettings set --name "pan${NAME}${ENV}pirana" --resource-group $NAME --settings DOCKER_REGISTRY_SERVER_URL=https://index.docker.io

Pulling the images to a local repository

For business continuity, we recommend not relying on a third party to host your container images. We therefore highly recommend pulling, retagging, and then pushing our images to your private image repository.

docker pull panintelligence/server:latest
docker tag panintelligence/server:latest your.private.repo:5000/your_image_name:latest
docker push your.private.repo:5000/your_image_name:latest

We’d also advise against using the “latest” tag in production; use specific versions so you can control when and how you migrate to the latest version of our software.

Healthchecks

How do we know a container is functioning? We use a health check! For orchestration, you can use the following endpoints, which all return an HTTP status code of 200 when the software is functioning well (see the curl example after this list):

Server

https://<your dashboard url>/pi/health

Renderer

https://<your renderer url>/version

Scheduler

https://<your scheduler url>/version

Pi Analytics

https://<your pi analytics url>/version
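
For example, you can probe these endpoints from the command line (the URLs are placeholders); each command should print 200 when the corresponding service is healthy:

# each endpoint should return HTTP 200 when the service is healthy
curl -sS -o /dev/null -w "%{http_code}\n" https://your-dashboard-url/pi/health
curl -sS -o /dev/null -w "%{http_code}\n" https://your-renderer-url/version
curl -sS -o /dev/null -w "%{http_code}\n" https://your-scheduler-url/version
curl -sS -o /dev/null -w "%{http_code}\n" https://your-analytics-url/version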

Configuring your containers

Containers are immutable. If a container stops working (as determined by its healthcheck), you will want to start another container immediately. Some configuration therefore needs to be persisted using environment variables, and some using volumes.

Environment Variables

Our example scripts below show the minimum environment variables you will need to get up and running; there are, however, many other options, which are listed on our Environment Variables documentation page. A run-time example follows the table below.

Example of a standard environment configuration

PI_DB_HOST
Description: The database host that hosts the repository database for the dashboard.
Example value: mypidatabase.mydns.com

PI_DB_PASSWORD
Description: The password for the database that hosts the dashboard repository.
Example value: my5up3r53cur3P4$$w0rd!

PI_DB_USERNAME
Description: The username for the database that hosts the dashboard repository.
Example value: megadbusername

PI_DB_SCHEMA_NAME
Description: The schema name of the dashboard repository database. Defaults to dashboard.
Example value: myawesomedashboardrepo

PI_DB_PORT
Description: The port for the repository database.
Example value: 3306

PI_EXTERNAL_DB
Description: Sets the dashboard to use an external repository database rather than a supplied internal one (note that the “server” container does not ship with an internal database).
Example value: “true”

PI_LICENCE
Description: Your dashboard licence as supplied by your CSM.

PI_TOMCAT_MAX_MEMORY
Description: The maximum memory allocated to the Java heap for the application. N.B. when defining this, you will need to leave the recommended allowance for the OS; for our containers, this is 1GB.
Example value: 3076

PI_PROXY_ENABLED
Description: Tells the dashboard that it is behind a proxy.
Example value: “true”

PI_PROXY_SCHEME
Description: Used to dress redirects with the correct scheme (HTTP/HTTPS).
Example value: HTTPS

PI_PROXY_HOST
Description: Used to dress redirects with the correct URL.
Example value: spankingdashboard.com

PI_PROXY_PORT
Description: Used to set the redirect port.
Example value: 443

PI_TOMCAT_MAX_TOTAL_DB_CONNECTIONS
Description: The maximum number of concurrent connections the application can hold open to the repository database (be warned, most databases set a small limit for this that will need increasing).
Example value: 40

PI_TOMCAT_MAX_THREADS
Description: The maximum number of threads the Tomcat application can run. For best stability, set this to a number equal to or below PI_TOMCAT_MAX_TOTAL_DB_CONNECTIONS.
Example value: 40

PI_PROXY_IS_SECURE
Description: Determines whether the proxy is TLS/SSL enabled.
Example value: “true”

PI_TOMCAT_COOKIE_NAME
Description: If your users will connect to more than one instance of Pi at the same time, it’s a good idea to set this to something unique for each deployment.
Example value: PIISGREAT

PI_TOMCAT_FRAME_ANCESTORS
Description: A space-separated whitelist of domains that are permitted to embed the dashboard in iframes.
Example value: https://mycoolwebsite.com

RENDERER_DASHBOARD_URL
Description: The renderer needs to find your dashboard installation; this value is passed to the renderer from the dashboard. It should be the full path to your dashboard.
Example value: https://spankingdashboard.com/pi
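
As a hedged illustration of how these variables are supplied to the dashboard container at run time (the host name, credentials, and image tag are placeholders taken from the table above):

# supply the repository connection details and licence to the dashboard container
docker run -d --name dashboard -p 8224:8224 \
  -e PI_EXTERNAL_DB="true" \
  -e PI_DB_HOST=mypidatabase.mydns.com \
  -e PI_DB_PORT=3306 \
  -e PI_DB_SCHEMA_NAME=dashboard \
  -e PI_DB_USERNAME=megadbusername \
  -e PI_DB_PASSWORD='my5up3r53cur3P4$$w0rd!' \
  -e PI_LICENCE="your licence here" \
  panintelligence/server:2024_08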

Volumes

Certain items used by Panintelligence software need to be stored on a file system (these are read-write items). Below is a list of the volumes you may need to set up and what they are used for (see the example after this list). Our example scripts always include these.

Dashboard

Themes

Internal Location: /var/panintelligence/Dashboard/tomcat/webapps/panMISDashboardResources/themes

Use: Custom UI themes for Panintelligence software

Images

Internal Location: /var/panintelligence/Dashboard/tomcat/webapps/panMISDashboardResources/images

Use: custom images

Keys

Internal Location: /var/panintelligence/Dashboard/keys/

Use: Generated keys used for communication between dashboard and scheduler services

Custom JDBC

Internal Location: /var/panintelligence/Dashboard/tomcat/custom_jdbc_drivers

Use: Additional JDBC drivers that are not bundled, used for free-format jdbc connections

SVG

Internal Location: /var/panintelligence/Dashboard/tomcat/webapps/panMISDashboardResources/svg

Use: SVG images for custom maps functionality
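
As a hedged sketch of how two of these volumes might be attached with Docker named volumes (the volume names and image tag are assumptions; the target paths are the internal locations listed above):

# named volumes survive container replacement; create them once...
docker volume create pi_themes
docker volume create pi_keys

# ...and attach them whenever a dashboard container is started
docker run -d --name dashboard \
  --mount source=pi_themes,target=/var/panintelligence/Dashboard/tomcat/webapps/panMISDashboardResources/themes \
  --mount source=pi_keys,target=/var/panintelligence/Dashboard/keys \
  panintelligence/server:2024_08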

Specifications

Server

CPU: 2
RAM: 4096MB
Min Count: 1
Desired Count: 3

Notes

We’ve stress-tested this combination to 45 concurrent users. The simulated users made constant requests to the dashboard. This is heavy, almost excessive use of the dashboard, and not precisely the usage pattern you’d find from real users. As a result, under real-world conditions, this configuration will likely support many more users.

Renderer

CPU: 2
RAM: 4096MB
Min Count: 1
Desired Count: 1

Notes

If you’re rendering in “real time” then you will want to give the renderer as much resource as you can spare. This is because the more resources you give the renderer, the faster the results.

Scheduler

CPU: 0.5
RAM: 1024MB
Min Count: 1
Max Count: 1

Analytics

CPU: 0.25
RAM: 512MB
Min Count: 1
Desired Count: 1

Questions and Answers

When configured to use Redis for persistent session sharing amongst application nodes, how do I verify it’s working?

You can verify this in one of three ways. The first is to drain all your application nodes and restart them; once they’re reporting as healthy, if you’re able to log in without needing to authenticate again, then your session data has persisted.

The second method is far less invasive: when the dashboard starts, it writes a log entry stating that a connection to the Redis service is assumed.

The third method is to connect to the Redis system and watch the records changing. You may wish to add an agent that reacts to changes in your Redis collection for the dashboard session data and reports this back as a metric to your monitoring solution.
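
For the third approach, a minimal sketch with redis-cli (the host and key pattern are assumptions; the exact key names depend on how session persistence is configured):

# list keys that look like session entries
redis-cli -h your-redis-host --scan --pattern '*session*'

# or watch commands issued by the dashboard nodes in real time
redis-cli -h your-redis-host monitor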

Shared volumes

The dashboard holds some persistent files that dictate the look and feel of Pi. Here are some suggestions on how you can manage this data within your Pi deployment.

Writing changes to files

You may wish to guard your production system from changes and push modifications through source control. When deploying with Kubernetes, it could be worth deploying your theme assets as a ConfigMap. This would make themes non-editable in production, so it would be worth generating these themes on a local machine or a development instance. Please contact your CSM to obtain a licence for this application.

Write operations to themes are normally performed by an administrator, which is a restricted role, so the chance of conflict during updates is greatly reduced; however, conflicts do sometimes occur.
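
If you take the ConfigMap route, a minimal sketch, assuming your generated theme files sit in a local ./themes directory (the ConfigMap name is an assumption):

# package theme files generated on a development instance into a ConfigMap
kubectl create configmap pi-themes --from-file=./themes/

# the ConfigMap can then be mounted read-only in the dashboard deployment at the themes path
# listed under Volumes: /var/panintelligence/Dashboard/tomcat/webapps/panMISDashboardResources/themes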

What sort of storage do I need in my shared volume?

This depends on how many themes, directly served images, locales, and SVGs you have, and on the number and size of your custom JDBC drivers. That said, with the exception of custom JDBC drivers, most of these files tend to be small; for example, your images should be optimised for the web, so shouldn’t be more than a few KB in size. In general, less than 100MB (and often far less) of storage is required. Additionally, the data does not require fast access, so backing the volume with cloud object storage is an option if you’re worried about scale.

What about backing up these volumes?

This data tends to be slow moving; once set, it will not change often. If you’re subjecting this data to a change-control process, it becomes even easier to manage configuration drift. You should back up your data to three locations, including one that is immutable.

Can I use my Redis service to host images and themes?

It’s not recommended to use Redis to store images and themes, and this is presently not a feature that Pi supports.

Scaling

Dashboard

State for the dashboard app nodes is held in your persistent mounted volumes as detailed above, with session data held in Redis where configured. If you do not have session sync enabled, you will need to take advantage of sticky sessions on your load balancer.

Renderer

The renderer is stateless. You can scale the renderer either vertically (speed up renders) or horizontally (meet demand profile more accurately). You may wish to bring additional renderer nodes online if you know reports are run at specific times during the day.

Scheduler

Currently Pi only needs a single scheduler application service. Scaling this to more than one node may have adverse effects, such as sending multiple reports where one was expected. We have several feature upgrades planned for the scheduler over the coming months, so speak to your CSM.