Dash Enterprise for Kubernetes

Memory and CPU usage on the Replicated Management Node will increase as users create and use more workspaces on the platform. We recommend that you plan to scale up as usage increases. To monitor usage, configure an external memory and CPU monitor such as Amazon CloudWatch, Google Cloud Monitoring, or Azure Monitor. We recommend setting an alert at 70% memory usage so that you have time to plan an upgrade or ask your users to optimize their workspace usage.
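
As an illustration, here is a minimal sketch of such a 70% memory alert using boto3 and CloudWatch. It assumes the CloudWatch agent is publishing the mem_used_percent metric under the CWAgent namespace; the instance ID and SNS topic ARN are placeholders, and your metric names may differ depending on your agent configuration.

```python
import boto3

# Sketch: alarm when the management node's memory usage reaches 70%.
# Assumes the CloudWatch agent publishes "mem_used_percent" under the
# "CWAgent" namespace; the instance ID and SNS topic are placeholders.
cloudwatch = boto3.client("cloudwatch")
cloudwatch.put_metric_alarm(
    AlarmName="dash-enterprise-management-node-memory",
    Namespace="CWAgent",
    MetricName="mem_used_percent",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,               # evaluate 5-minute averages
    EvaluationPeriods=2,      # require two consecutive breaches
    Threshold=70.0,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```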

Memory and CPU usage on the Kubernetes cluster nodes will increase as users create and deploy more user-managed services to Dash Enterprise on Kubernetes.

These recommendations are a lower bound on memory and CPU requirements. Your users may require much more memory or CPU, depending on the types of applications they build and their viewership. See Behind the Recommendations below for details.

Capacity Planning Tables

The following tables serve as a guide for your capacity planning, with suggested machine types on AWS and Azure.

| Replicated Management Node | Minimum CPU | Minimum Memory | AWS | Azure |
| --- | --- | --- | --- | --- |
| 5 Apps & Workspaces | 4 | 16GB | m5.xlarge | Standard_D4_v4 |
| 10 Apps & Workspaces | 8 | 27GB | m5.2xlarge | Standard_D8_v4 |
| 25 Apps & Workspaces | 16 | 61GB | m5.4xlarge | Standard_D16_v4 |
| 50 Apps & Workspaces | 32 | 117GB | m5.8xlarge | Standard_D32_v4 |

| Cluster Worker Nodes | Minimum CPU | Minimum Memory | AWS | Azure |
| --- | --- | --- | --- | --- |
| 5 Apps & Workspaces | 8 | 23GB | m5.2xlarge | Standard_D8_v4 |
| 10 Apps & Workspaces | 11 | 33GB | c5.4xlarge | Standard_F16s_v2 |
| 25 Apps & Workspaces | 18 | 63GB | m5.4xlarge | Standard_D16_v4 |
| 50 Apps & Workspaces | 31 | 113GB | m5.8xlarge | Standard_D32_v4 |

Behind The Recommendations

Memory Usage

Dash apps, Workspaces, and Job Queues consume memory while idle and awaiting requests. Depending on the user's code, actual memory usage may be substantially larger than these baseline requirements.

Replicated Management Node

Dash Enterprise Core Services requirements: 4GB

This is the memory required on the Replicated Management node to run Dash Enterprise without any user-created Dash Apps, Workspaces, or Job Queues.

Standard Dash application service requirements:

  • Barebones Workspace Container with minimal* data: 1.5GB

  • Recommended memory expansion factor: 1.5

* Minimal data: a dataframe with 10,000 rows and 3 columns loaded into memory, with about a dozen pandas operations applied to it. Memory usage will be lower if no data is loaded into memory, or higher if more data is loaded.
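
For reference, the sketch below approximates such a baseline. The exact operations behind the published numbers are not specified, so the operations here are illustrative assumptions:

```python
import numpy as np
import pandas as pd

# Rough stand-in for the "minimal data" baseline: 10,000 rows x 3 columns
# with a handful of pandas operations applied. The specific operations are
# assumptions for illustration, not the ones used for the published figures.
df = pd.DataFrame(np.random.rand(10_000, 3), columns=["a", "b", "c"])
df["total"] = df[["a", "b", "c"]].sum(axis=1)
df = df.sort_values("total")
summary = df.describe()
print(f"In-memory size: {df.memory_usage(deep=True).sum() / 1e6:.2f} MB")
```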

Consider the minimal data together with the app's baseline usage as one "application unit," and consider measuring your own applications' resource consumption; the level of detail is up to you. Estimate the number of application units that will be hosted on Dash Enterprise and apply this formula:

Memory = 4GB + (Number of application units × 1.5GB × 1.5)

With this data, 5 Dash Snapshots-enabled apps, each with a workspace, would require:

Memory = 4GB + (5 × 1.5GB × 1.5) = 15.25GB
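
This formula translates directly into a small helper. The sketch below uses assumed names and reproduces the 15.25GB figure above:

```python
def management_node_memory_gb(app_units: int,
                              core_services_gb: float = 4.0,
                              unit_gb: float = 1.5,
                              expansion_factor: float = 1.5) -> float:
    """Estimated minimum memory (GB) for the Replicated Management Node."""
    return core_services_gb + app_units * unit_gb * expansion_factor

print(management_node_memory_gb(5))   # 15.25 -- the worked example above
print(management_node_memory_gb(50))  # 116.5 -- ~117GB, matching the table
```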

Cluster Worker Nodes

Dash Enterprise Core Services requirements: 13GB

This is the memory required on each Kubernetes Cluster Worker node to run Dash Enterprise without any user-created Dash Apps, Workspaces, or Job Queues.

Standard Dash application service requirements:

  • Barebones Dash App Container with minimal data, running four processes*: 1GB.

  • Barebones Job Queue Container (required for the Dash Snapshot Engine and for updating data in the background): 1GB.

Memory = 13GB + (Number of application units × 2GB)

* Each app is scaled using four "preloaded" gunicorn workers that share memory, rather than being scaled with additional containers.
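
Expressed the same way as the management-node formula, a sketch of the worker-node estimate (function and parameter names are assumed), which reproduces the table rows above:

```python
def worker_nodes_memory_gb(app_units: int,
                           core_services_gb: float = 13.0,
                           app_container_gb: float = 1.0,
                           job_queue_gb: float = 1.0) -> float:
    """Estimated minimum memory (GB) across the Cluster Worker Nodes."""
    return core_services_gb + app_units * (app_container_gb + job_queue_gb)

print(worker_nodes_memory_gb(5))   # 23.0 -- the 5 Apps & Workspaces row
print(worker_nodes_memory_gb(25))  # 63.0 -- the 25 Apps & Workspaces row
```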

CPU Usage

Dash apps, Workspaces, and Job Queues will not consume CPU while idle and awaiting requests.

Cluster Worker Nodes

Dash Enterprise Core Services requirements: 6 CPUs

This is the number of CPUs required on each Kubernetes Cluster Worker node to run Dash Enterprise without any user-created Dash Apps, Workspaces, or Job Queues.

Since CPU is shared across the App Manager and all of the applications, the number of CPUs you need depends on the maximum number of viewers and users interacting with apps or the App Manager at any given moment, and on how long the computations in the deployed applications take.

This is difficult to predict, which is why we recommend monitoring CPU usage and scaling up when it exceeds 75% of the machine's capacity.

For example, if your apps are distributed to 100 people every morning at 9 AM, they may require anywhere between 1 and 10 CPUs, depending on how the traffic is spread out (see Behind these numbers below).

As a lower bound, consider 1 CPU for every Dash app and 1 CPU for every Workspace, plus 4 CPUs for Dash Enterprise baseline services. Since CPU is shared, higher-traffic apps will use as many idle CPUs as are available. If all of your apps and workspaces are in high demand at the same time, you will need to scale up. Monitoring your CPU usage is the best way forward.
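
Written out as code, this lower bound is simply (a sketch with assumed names):

```python
def cpu_lower_bound(n_apps: int, n_workspaces: int, baseline_cpus: int = 4) -> int:
    """Lower-bound CPU estimate: one CPU per Dash app, one per Workspace,
    plus the Dash Enterprise baseline services."""
    return n_apps + n_workspaces + baseline_cpus

print(cpu_lower_bound(5, 5))  # 14 CPUs as a starting point
```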

Behind these numbers:

10 CPUs: All 100 people load the web page at exactly the same time. Each backend request takes 3 seconds and the server times out after 30 seconds, so you need to serve everyone across ten 3-second batches. Ten batches of 10 people each means 10 concurrent requests, or 10 CPUs. Some users will wait 27 seconds before seeing the page load.

1 CPU: All 100 people load the web page evenly across 5 minutes, from 9:00 to 9:05. This allows for 300 seconds / 3 seconds = 100 3-second batches of one person each. A single CPU can process one 3-second request at a time and serve all 100 users within 5 minutes. This is slow, but it shows what a lower bound looks like for the above example.
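
The arithmetic behind both scenarios can be captured in a few lines; this sketch (with assumed names) reproduces the 10-CPU and 1-CPU results:

```python
import math

def cpus_needed(users: int, request_seconds: int, window_seconds: int) -> int:
    """Lower-bound CPU count to serve `users` requests of `request_seconds`
    each within `window_seconds`, assuming one request per CPU at a time."""
    batches = window_seconds // request_seconds  # sequential batches that fit in the window
    return math.ceil(users / batches)

print(cpus_needed(100, 3, 30))   # 10 -- everyone served before the 30-second timeout
print(cpus_needed(100, 3, 300))  # 1  -- load spread evenly across 5 minutes
```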

That said, the minimum requirement for the number of CPUs is 4, but we recommend 8.

Disk Space

For the following reasons, we recommend using the same disk capacity planning for the Dash Enterprise for Kubernetes Replicated Management Node as for the single-server installation:

  • Application Docker images are built on the Replicated Management Node before being pushed to the Docker registry.

  • App metadata and code are stored on the Replicated Management Node's disk.

  • Other architectural differences in disk space usage between Dash Enterprise for Kubernetes and Dash Enterprise Single Server are marginal.
