On-premise installation on own server

This section applies to installing Dash Enterprise on a single server hosted on your own network. For installing the Kubernetes version of the software, see Dash Enterprise Kubernetes installation.

Server requirements

To install Dash Enterprise using this method, you need root access to a server that meets the following requirements:

  • 64-bit Linux server

  • 4 CPU (or vCPU) cores

  • 32 GB RAM and 32 GB swap space

  • 2 TB disk space

    • If your team is larger or smaller, feel free to scale this number up or down as needed, but never make it less than 200 GB

  • 80 GB minimum available disk space for the Docker data directory (defaults to /var/lib/docker)

  • 20 GB minimum available disk space for the Server Manager data directory (/var/lib/replicated)

If you’re using AWS, the closest instance size is m3.xlarge; in that case, however, consider an AMI-based installation instead.
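To check whether a candidate server meets these baseline requirements, a few standard Linux commands are usually enough (an illustrative check only; output formatting varies by distribution):

    # Number of CPU (or vCPU) cores
    nproc

    # Total RAM and swap
    free -h

    # Available disk space on the root filesystem
    df -h /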

Supported Linux distributions

We support the server editions of the following distributions or operating systems:

  • Debian 7.7+

  • Ubuntu 16.04 / 18.04 / 20.04

  • Red Hat Enterprise Linux 7+

  • CentOS 7+

  • Oracle Linux 7+

Plotly has the most experience supporting Ubuntu 18.04 and RHEL/CentOS 7, but any of the above are acceptable. If multiple editions exist, choose the server edition and not the desktop edition.

Additional requirements and recommendations

To avoid any issues that may arise as a result of running multiple applications on the same server, we strongly recommend using a dedicated server or VM to run Dash Enterprise.

We do not recommend enabling SELinux on the server.

You must have root access to the server you’re using.
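On RHEL, CentOS, and Oracle Linux, you can confirm the current SELinux mode and your root access before proceeding (a minimal check, assuming the standard SELinux utilities are installed; Debian and Ubuntu do not enable SELinux by default):

    # Prints Enforcing, Permissive, or Disabled
    getenforce

    # Verify that you can run commands as root
    sudo -v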

The server requires Internet access to download installation files (proxy servers are supported). If Internet access is not an option, see Fully Offline Installation.

Ensure the Docker data directory (/var/lib/docker) has enough space allocated to it: at least 80 GB.

Certain Docker storage drivers are not supported as they are either deprecated or not intended to be used in production environments. This includes the zfs, btrfs, and devicemapper (in loopback mode) storage drivers. If you are uncertain whether your Docker storage driver is fully supported, please contact Dash Enterprise support.
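If Docker is already installed on the server, you can confirm the free space backing its data directory and the storage driver in use (illustrative commands; adjust the path if you have moved the Docker data directory):

    # Free space on the filesystem that holds the Docker data directory
    df -h /var/lib/docker

    # Storage driver currently in use (for example, overlay2)
    docker info --format '{{.Driver}}'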

Memory and CPU needs will increase as you deploy more Dash applications and workspaces, so plan to monitor and scale up your server resources correspondingly. We recommend configuring an external memory and CPU monitor (AWS Cloudwatch, Google Cloud metrics, or Azure Monitor) and setting an alert for when memory usage reaches 70%. This should provide adequate warning to plan an upgrade or communicate with your user base to rescale their applications. If you find yourself rescaling frequently, consider Dash Enterprise Kubernetes to enable autoscaling functionality in your cluster.
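If an external monitor is not yet in place, a minimal local check along these lines can act as a stopgap (an illustrative sketch only, not a substitute for CloudWatch, Google Cloud metrics, or Azure Monitor; the 70% threshold matches the recommendation above):

    # Log a warning to syslog when memory usage exceeds 70%
    used_pct=$(free | awk '/^Mem:/ {printf "%d", $3 / $2 * 100}')
    if [ "$used_pct" -ge 70 ]; then
      logger -t dash-enterprise "Memory usage at ${used_pct}% - consider scaling up"
    fi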

The recommendations in the table below are a lower bound on memory and CPU requirements. Your users may require much more memory or CPU depending on the types of applications they build and their viewership. See “Behind the recommendations” below for an explanation of how to determine resource allocation.

                          Minimum CPU    Minimum Memory
  1 App & Workspace       8              9 GB
  5 Apps & Workspaces     14             29 GB
  10 Apps & Workspaces    24             54 GB
  25 Apps & Workspaces    54             128 GB
  50 Apps & Workspaces    104            252 GB
  >50 Apps & Workspaces   We recommend running apps across multiple servers with Dash Enterprise Kubernetes

Behind the recommendations

Memory usage

Dash apps, workspaces, and job queues consume memory while idle and awaiting requests. Depending on the app’s code, memory usage can be substantially larger than these baseline requirements.

Dash Enterprise core services require 4 GB RAM to run without any user-created Dash apps, workspaces, or job queues.

For each Dash app deployed, standard barebones memory requirements are:

  1. Dash app container with minimal data running 4 processes: 0.75 GB

    1. “Minimal data” example: a dataframe with 10,000 rows and 3 columns loaded into memory, with about a dozen pandas operations applied to it

    2. Memory usage is lower if data isn’t loaded into memory and correspondingly higher the more data is loaded into memory

    3. Each app is scaled using 4 preloaded gunicorn workers that share memory, rather than being scaled with containers

  2. Job queue container (for Dash Snapshot Engine and updating data in the background): 0.75 GB

  3. Workspace container with minimal data: 1.5GB

  4. Redis container with minimal data: 200MB

    1. See 1a for “minimal data” example

    2. A Redis database’s memory footprint varies widely depending on how it’s used (caching, shared session store, job queue message interchange)

  5. Postgres container without any data: negligible (roughly 11MB)

  6. Recommended memory expansion factor: 1.5x

To estimate how much RAM to provision your server with, use the formula below, assuming your apps match the “minimal data” profile described in 1a. When counting the number of apps you expect to host, treat each app’s associated workspace and job queue containers as part of that app; for example, one app with a workspace and a job queue counts as one app:

Server memory = 4GB + (number of apps * 3.3GB per barebones app * 1.5 expansion factor)

As an example, five Dash apps, each with snapshots enabled and a workspace, require a server with:

Server memory = 4GB + (5 apps * 3.3GB per barebones app * 1.5 expansion factor) = 28.75GB

For a more accurate prediction, consider measuring your application’s actual resource consumption and using that information in your calculation in place of the barebones estimate.
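To plug your own app count into the formula, a one-line shell calculation is enough (a sketch of the arithmetic above; 3.3 GB and 1.5 are the barebones estimates, so substitute measured values where you have them):

    # Estimated server memory in GB for a given number of barebones apps
    apps=5
    awk -v n="$apps" 'BEGIN {printf "Estimated server memory: %.2f GB\n", 4 + n * 3.3 * 1.5}'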

CPU usage

Dash apps, workspaces, and job queues do not consume CPU while idle and awaiting requests.

The Dash App Manager and all Dash apps share the same set of CPU resources, so you will need to consider the following when allocating CPUs to your server:

  • The maximum number of viewers and users expected to interact with apps or the Dash App Manager at any given moment

  • How long the deployed apps’ computations take

Because this is difficult to predict, we recommend implementing a CPU usage monitor and scaling up when CPU use reaches 75% of maximum.

As an example, if your apps are served to 100 people every morning at 9 a.m., this may require anywhere from one to 10 CPUs. Here are two scenarios with different CPU counts:

  • Higher-bound example using 10 CPUs: all 100 people attempt to load the webpage at the exact same time

    • Each backend request to load the web page takes about three seconds

    • The server times out after 30 seconds, meaning the requests must be fulfilled within 10 three-second batches

    • Processing 100 people’s requests in 10 batches needs 10 CPUs

    • Some users will wait 27 seconds before seeing the page load

  • Lower-bound example using one CPU: all 100 people attempt to load the webpage across five minutes from 9 a.m. to 9:05 a.m.

    • Five minutes equals 300 seconds, so the 100 three-second requests add up to 300 seconds of compute

    • A single CPU can process one request every three seconds

    • All 100 users will have had their web requests complete after five minutes.
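The same back-of-envelope arithmetic generalizes to other traffic patterns (an illustrative calculation, assuming three-second requests and the 30-second timeout described above):

    # CPUs needed so that every request completes before the timeout:
    # ceil(concurrent requests * seconds per request / timeout in seconds)
    concurrent=100; per_request=3; timeout=30
    awk -v c="$concurrent" -v p="$per_request" -v t="$timeout" \
      'BEGIN {cpus = c * p / t; printf "CPUs needed: %d\n", (cpus == int(cpus)) ? cpus : int(cpus) + 1}'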

Your actual Dash Enterprise use will probably be a mix of the above scenarios. As a lower bound for CPU count, consider:

  • 4 CPUs for Dash Enterprise core services

  • 1 CPU per Dash app

    • Since CPU is shared, higher-traffic apps will use as many idle CPUs as are available

  • 1 CPU per workspace

If all of your apps and workspaces are in high demand at the same time, you will need to scale your CPU allocation up. That said, the minimum CPU count is 4, but we recommend 8 in production.
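Applying this lower bound to the sizing table earlier in this section, five apps that each have a workspace work out as follows (illustrative arithmetic only):

    # 4 CPUs for core services + 1 per app + 1 per workspace
    apps=5; workspaces=5
    echo "$((4 + apps + workspaces)) CPUs"   # 14, matching the 5 Apps & Workspaces row above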

Open ports

The following open ports are required:

  • Port 443 (HTTPS): Required for creating, viewing, or administering Dash Apps.

  • Port 8800 (Server Manager UI, via HTTPS): Required for administrators to install, upgrade, and configure Dash Enterprise. You may restrict access to this port to administrators only.

The following ports are optional but recommended:

  • Port 80 (HTTP): All Dash Enterprise requests are made over HTTPS, so this port is not strictly required for full functionality. However, opening this port will allow Dash Enterprise to automatically redirect HTTP requests to use HTTPS; without this port, HTTP requests will simply fail.

  • Port 3022 (Dash app deployment via SSH): To support app deployment over SSH, Dash Deployment Server requires a port for SSH connections (default port 3022, configurable in Server Manager Settings). If you do not open a port for Dash app deployment via SSH, you will still be able to deploy Dash apps using HTTPS.

    • We do not recommend changing this to port 22, since you would need to disable SSH on the server.
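How you open these ports depends on your firewall. For example, with ufw on Ubuntu or firewalld on RHEL/CentOS/Oracle Linux, the commands look roughly like this (adjust to your environment; open port 3022 only if you plan to deploy over SSH):

    # Ubuntu / Debian (ufw)
    sudo ufw allow 80/tcp
    sudo ufw allow 443/tcp
    sudo ufw allow 3022/tcp
    sudo ufw allow 8800/tcp

    # RHEL / CentOS / Oracle Linux (firewalld)
    sudo firewall-cmd --permanent --add-port=80/tcp --add-port=443/tcp --add-port=3022/tcp --add-port=8800/tcp
    sudo firewall-cmd --reload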

Installation using Plotly script

  1. SSH into your Linux server

  2. Create a data directory to hold all of the Dash Enterprise data

    • We recommend /plotly for ease of reference, but you may choose any name that suits your needs:

      sudo mkdir /plotly
  3. Run the installation script:

    curl -sSL https://get.plot.ly | sudo bash
    • If your server uses a proxy to access the Internet, see Configuring a proxy

    • If prompted for the server’s service IP address, press Enter to accept the default
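Once the script finishes, you can optionally confirm that the Server Manager UI is listening before moving on (replace the placeholder hostname with your server’s address; the -k flag skips TLS verification, which is typically necessary until you configure a trusted certificate):

    curl -kI https://your-server-address:8800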

Your server is now ready for you to upload your license and configure Dash Enterprise; see Configuration to continue.
