Amazon Web Services resource prerequisites
- 1.Create a VPC
- 1.Create subnets in at least two Availability Zones
- 2.Create a security group in that VPC with these inbound rules:
- 1.HTTP
- 2.HTTPS
- 3.SSH
- 4.PostgreSQL
- 5.Custom TCP rule: Port 6379
- 6.Custom TCP rule: Port 3022
- 7.Custom TCP rule: Port 8800
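If you prefer the AWS CLI to the console, the security group and its inbound rules can be sketched as below. The group name, VPC ID, security group ID, and source CIDR are placeholders; restrict the source range according to your security policy.

```shell
# Create the security group in your VPC (IDs are placeholders)
aws ec2 create-security-group \
  --group-name dash-enterprise-sg \
  --description "Dash Enterprise inbound rules" \
  --vpc-id vpc-0123456789abcdef0

# Open each required port: HTTP, HTTPS, SSH, PostgreSQL, 6379, 3022, 8800
for port in 80 443 22 5432 6379 3022 8800; do
  aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port "$port" \
    --cidr 10.0.0.0/16
done
```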
- 3.Create an IAM policy with these permissions:
- 1.Service: Elastic Container Registry; Actions: All Elastic Container Registry actions; Resources: Any for repository
- 2.Service: EKS; Actions: All EKS actions; Resources: Any for cluster
- 3.Service: Systems Manager; Actions: GetParameter; Resources: Any
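The permissions above can also be created from the CLI. The policy document below is a sketch of those three statements; the policy name is a placeholder, and you may want to scope the `Resource` fields more tightly than the wildcards shown.

```shell
# Write the policy document described above (a sketch; tighten Resource as needed)
cat > dash-enterprise-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Action": "ecr:*", "Resource": "*" },
    { "Effect": "Allow", "Action": "eks:*", "Resource": "*" },
    { "Effect": "Allow", "Action": "ssm:GetParameter", "Resource": "*" }
  ]
}
EOF

aws iam create-policy \
  --policy-name dash-enterprise-policy \
  --policy-document file://dash-enterprise-policy.json
```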
- 4.Create these IAM roles:
- 1.One “registry manager” role with the EC2 use case and the new policy from Step 3 attached
- 2.One “cluster manager” role created with the EKS - Cluster use case and AmazonEKSClusterPolicy attached
- After this role is created, click on its name, then Attach Policy and select AmazonEKSServicePolicy
- 3.One “worker node” role with the EC2 use case and these policies attached: AmazonEKSWorkerNodePolicy, AmazonEC2ContainerRegistryReadOnly, AmazonEKS_CNI_Policy
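As an example, the “worker node” role (Step 4c) can be sketched from the CLI as follows; the role name is a placeholder, and the three policy ARNs are AWS managed policies:

```shell
# Trust policy letting EC2 assume the role (the EC2 use case)
cat > ec2-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "ec2.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
EOF

aws iam create-role \
  --role-name dash-worker-node \
  --assume-role-policy-document file://ec2-trust.json

# Attach the three managed policies listed in Step 4c
for policy in AmazonEKSWorkerNodePolicy AmazonEC2ContainerRegistryReadOnly AmazonEKS_CNI_Policy; do
  aws iam attach-role-policy \
    --role-name dash-worker-node \
    --policy-arn "arn:aws:iam::aws:policy/$policy"
done
```

The other two roles follow the same pattern with their respective trust policies and attached policies.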
- 5.Create an EC2 instance provisioned with the Plotly AMI according to Installation on Amazon Web Services (This instance will act as the Replicated Management Node), with these Instance Details changed from the default:
- 1.Set the Network to the VPC from Step 1 and select a Subnet from that VPC
- 2.Set Auto-assign public IP to Disable
- 3.Set the IAM role to the “registry manager” EC2 service account from Step 4a
- 4.Select an existing security group and assign the group you created in Step 2
- Depending on your network, you may need to assign additional security groups; please consult your cloud infrastructure administrator if unsure
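A CLI equivalent of the instance settings above looks roughly like this. All IDs are placeholders: substitute the Plotly AMI ID, the subnet from your Step 1 VPC, the “registry manager” instance profile, and the Step 2 security group.

```shell
# Launch the Replicated Management Node with the Step 5 settings:
# private IP only, Step 1 subnet, registry-manager profile, Step 2 group
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type m5.xlarge \
  --subnet-id subnet-0123456789abcdef0 \
  --no-associate-public-ip-address \
  --iam-instance-profile Name=dash-registry-manager \
  --security-group-ids sg-0123456789abcdef0
```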
- 7.
- 8.Create an RDS PostgreSQL instance:
- 1.During configuration:
- Choose the Production template
- Leave the Master username as postgres and record the password for use in the next step
- Under Availability & Durability, ensure Create a standby instance is selected
- Under Connectivity, select the VPC you created in Step 1 and its associated VPC subnet group
- Expand Additional Configuration and set the Initial database name to dashauth
- 2.Once the RDS instance is ready, SSH into the EC2 instance you created and connect to the database (guide), then:
- Create a second database named dash_deployment_server
- Assign all privileges to the postgres user for the dashauth and dash_deployment_server databases
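From the EC2 instance, this step can be done with psql. The RDS endpoint below is a placeholder; you will be prompted for the password recorded earlier.

```shell
# Placeholder endpoint; copy the real one from the RDS console
RDS=mydb.abc123.us-east-1.rds.amazonaws.com

# Create the second database, then grant privileges on both
psql -h "$RDS" -U postgres -d dashauth \
  -c "CREATE DATABASE dash_deployment_server;"
psql -h "$RDS" -U postgres -d dashauth \
  -c "GRANT ALL PRIVILEGES ON DATABASE dashauth TO postgres;"
psql -h "$RDS" -U postgres -d dashauth \
  -c "GRANT ALL PRIVILEGES ON DATABASE dash_deployment_server TO postgres;"
```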
- 9.In the ElastiCache dashboard, create a new DB subnet group.
- 1.Use your VPC ID from Step 1
- 2.In the Availability Zone or outpost drop-down, select each availability zone you created in Step 1a and Add their Subnet IDs, then select Create
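The same subnet group can be created from the CLI; the group name is a placeholder and the subnet IDs are those from Step 1a:

```shell
aws elasticache create-cache-subnet-group \
  --cache-subnet-group-name dash-redis-subnets \
  --cache-subnet-group-description "Dash Enterprise Redis subnets" \
  --subnet-ids subnet-0123456789abcdef0 subnet-0fedcba9876543210
```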
- 10.Create an ElastiCache Redis cluster:
- 1.In the Advanced Redis settings, ensure Multi-AZ with Auto-Failover is selected
- 2.Select the Subnet group you just created
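A CLI sketch of a Redis replication group with those two settings (Multi-AZ with auto-failover, and the subnet group from Step 9); the group ID and node type are placeholders:

```shell
aws elasticache create-replication-group \
  --replication-group-id dash-redis \
  --replication-group-description "Dash Enterprise Redis" \
  --engine redis \
  --cache-node-type cache.m5.large \
  --num-cache-clusters 2 \
  --automatic-failover-enabled \
  --multi-az-enabled \
  --cache-subnet-group-name dash-redis-subnets
```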
- 11.Create an Elastic Kubernetes Service cluster (guide) with the following settings changed from default:
- 1.
- 2.Cluster Service Role: the “cluster manager” IAM Role created in Step 4b
- 3.VPC: the VPC you created in Step 1
- 4.Security groups: all groups suggested by the wizard
- 5.Cluster endpoint access: private
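The settings above map onto the CLI roughly as follows; the cluster name, role ARN, subnet IDs, and security group ID are placeholders for the resources from Steps 1, 2, and 4b:

```shell
# Private-endpoint EKS cluster in the Step 1 VPC,
# using the "cluster manager" role from Step 4b
aws eks create-cluster \
  --name dash-enterprise \
  --role-arn arn:aws:iam::111122223333:role/dash-cluster-manager \
  --resources-vpc-config \
subnetIds=subnet-0123456789abcdef0,subnet-0fedcba9876543210,securityGroupIds=sg-0123456789abcdef0,endpointPublicAccess=false,endpointPrivateAccess=true
```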
- 12.Select the cluster name, then select the Cluster security group (under the Networking tab) and add an Inbound rule allowing traffic on port 443 from your EC2 instance’s subnet.
- 13.From your EC2 instance’s command line, add your new cluster’s configuration to your kubeconfig (guide).
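Concretely, this is a single command; the cluster name and region are placeholders:

```shell
# Adds the new cluster's credentials and endpoint to ~/.kube/config
aws eks update-kubeconfig --name dash-enterprise --region us-east-1

# Verify connectivity (requires the port 443 rule from Step 12)
kubectl get svc
```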
- 14.Create an EKS node group (guide). We recommend separate node groups: one for CPU and one for GPU.
- 1.Node group compute configuration
- If using a GPU node pool:
- Managed node instance type: any of G2, G3, G3S, G4DN, P2, P3, P3DN, or others that support GPU processing (reference)
- Once the node group has been created, install the NVIDIA GPU device plugin for Kubernetes. This can take up to 10 minutes.
- If not using a GPU node pool:
- AMI: Amazon Linux 2 (AL2_x86_64)
- 2.Node IAM Role Name: the “worker node” IAM Role created in Step 4c
- 3.Allow remote access to nodes from selected security groups and choose the group you created in Step 2
- 4.Disk size: 200 GiB
- 5.At least 4 nodes
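A CLI sketch of a CPU node group matching the settings above; the cluster name, node group name, role ARN, subnet IDs, and instance type are placeholders (for a GPU group, swap the instance type for a GPU-capable one and omit `--ami-type`, then install the NVIDIA device plugin as noted):

```shell
aws eks create-nodegroup \
  --cluster-name dash-enterprise \
  --nodegroup-name dash-cpu-nodes \
  --node-role arn:aws:iam::111122223333:role/dash-worker-node \
  --subnets subnet-0123456789abcdef0 subnet-0fedcba9876543210 \
  --instance-types m5.xlarge \
  --ami-type AL2_x86_64 \
  --disk-size 200 \
  --scaling-config minSize=4,maxSize=6,desiredSize=4
```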
- 15.From your EC2 instance’s command line, add the Roles you created in Step 4 to the cluster configmap (guide), following the example below.
- 1.Replace WORKER_NODE_ROLE_ARN with the ARN of the role you created in Step 4c
- 2.Replace REGISTRY_MANAGER_ROLE_ARN with the ARN of the role you created in Step 4a
- 3.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: WORKER_NODE_ROLE_ARN
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
    - rolearn: REGISTRY_MANAGER_ROLE_ARN
      username: kubectl-access-user
      groups:
        - system:masters
Save the above as a file named aws.config, then run:
kubectl apply -f aws.config
The output should be similar to:
Warning: resource configmaps/aws-auth is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
configmap/aws-auth configured
Next, create an Elastic Container Registry repository (guide) to contain the images for your Dash apps.
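From the CLI this is one command; the repository name is a placeholder:

```shell
# Repository to hold your Dash app images
aws ecr create-repository --repository-name dash-apps
```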
Finally, go to Dash Enterprise Kubernetes additional required configuration and follow the steps under "Update base domain DNS record to use the load balancer IP" to ensure the base domain name is correct.