Dash Enterprise Architecture and Internals

A Dash Enterprise instance currently requires a single server running a supported Linux distribution, in both Single Server and Kubernetes modes. (See the Before you install page for the list of supported distributions.) The installation is managed by the Replicated Native Scheduler, a commercial product included with Dash Enterprise. All components of Dash Enterprise and Replicated, including the Dash Apps themselves, run in Docker containers.

Docker Containers and Docker Engine

Docker Engine (usually referred to as “Docker”) is a widely used container engine. Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and ship it all out as one package. In the Docker world, these packages are called “images”, and the term “container” is used to refer to something that’s actually running on a given machine. In other words, a container is an instance of an image. Containers have their own filesystems and process spaces, isolated from other containers and from the server itself.

Dash Enterprise typically uses the Community Edition (CE) of Docker Engine. Docker Engine CE is normally installed on the server by Replicated as part of its bootstrapping process; that said, if a suitable version of Docker is already installed before Replicated, it can be used instead.

For more information on Docker, start with this overview.

Normally, it’s not necessary to interact directly with Docker to manage Dash Enterprise; however, doing so can be useful for exploration and troubleshooting.

Replicated Native Scheduler

The Replicated Native Scheduler (usually referred to as “Replicated”) consists of several Docker containers plus some support scripts. It handles bootstrapping and installing Dash Enterprise, updates, configuration, licensing, support bundle generation, and snapshot-based backups.

Replicated’s containers can be identified by name: all Replicated containers have names starting with replicated or retraced. These containers are managed by Replicated, and the Dash Enterprise administrator doesn’t normally need to interact with them, except when Replicated itself is inaccessible or appears to be misbehaving. In those cases, an advanced Dash Enterprise administrator may want to examine the logs for the main Replicated containers, as explained on an upcoming page.

The main Replicated containers are:

  • replicated, which manages the overall state of the system;

  • replicated-ui, which provides a web-based UI (on port 8800) to manage the system; and

  • replicated-operator, which manages containers running on each node (we currently support only one node).
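
On a live server, these containers can be listed with docker ps --filter "name=replicated". As a minimal sketch of the naming convention described above (the sample names below are illustrative, not live output):

```shell
# Classify container names by the prefix convention described above.
# The sample names are illustrative; on a real server you would feed in
# the output of: docker ps --format '{{.Names}}'
for name in replicated replicated-ui replicated-operator retraced-api dash haproxy; do
  case "$name" in
    replicated*|retraced*) echo "$name: managed by Replicated" ;;
    *)                     echo "$name: not a Replicated container" ;;
  esac
done
```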

More information on the Replicated Native Scheduler (mostly aimed at integrators like Plotly) can be found in their help documentation.

Dash Enterprise Containers

Dash Enterprise itself consists of several components that run as Docker containers. These containers are started and managed by Replicated.

  • dash: holds the Dash App Manager and Portal backends, the daemons used to receive apps when they are pushed (sshd and git-http-backend), a web server (nginx) that routes requests to individual Dash Apps, and the backend services needed to support the Dash App Manager and Portal (including serving the frontend assets). Parts of the dash container are based on Dokku, an open-source project described in more detail below.

  • dashauth: provides the authentication server used to control access to individual Dash Apps, the Dash App Manager, and the Portal.

  • haproxy: runs HAProxy, an HTTP gateway used to perform SSL termination and route requests to the dash and streambed containers.

  • postgres and redis: supporting databases used by dash and dashauth, each running in its own container.

  • plotly-image-exporter: if the optional Dash Snapshot Engine is licensed, one or more randomly named containers, with plotly-image-exporter in their image names, are spun up to provide that service. Also:

    • A second HAProxy container, haproxy_imageserver, is spun up to route requests between these containers; and

    • a licensed-fonts container is generated to provide proprietary fonts included as part of Dash Snapshot Engine.

  • workspace-container-event-listener, dash-app-container-event-listener: watch for workspace and app container events, such as restarts that cause IP address changes, and rebuild the routing information accordingly. This prevents routing issues caused by container restarts initiated from outside the Dash Enterprise system.

Dash App and Service Containers

Each running Dash App consists of one or more containers. These are automatically started and managed by Dash Enterprise from within the dash container.

Dash App container names take the form APPNAME.TYPE.N (for example dash-bio.web.1), where:

  • APPNAME is the name of the Dash App (dash-bio in this example)

  • TYPE is the process type, as defined in the Procfile in the app’s root directory. Common process types are:

    • web, a web application server

    • worker, a background task processor such as Celery

  • N is a number starting from 1, used to differentiate the containers since more than one container of each type can be run.

Services used by Dash Apps also run in their own containers; currently, only Redis and Postgres databases are supported as services. Redis container names take the form dokku.redis.NAME (for example dokku.redis.dash-bio), and Postgres container names take the form dokku.postgres.NAME (for example dokku.postgres.dash-bio), where NAME is the name of the Redis or Postgres service.

Workspaces also run in their own containers. Each workspace is associated with a Dash app, and each app has at most one workspace. Workspace containers are named APPNAME.workspace (for example dash-bio.workspace).
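
The naming conventions above can be summarized in a small sketch. The describe helper below is hypothetical, not part of Dash Enterprise:

```shell
# Hypothetical helper that interprets a container name using the
# conventions described above: APPNAME.TYPE.N, dokku.SERVICE.NAME,
# and APPNAME.workspace.
describe() {
  case "$1" in
    dokku.redis.*)    echo "Redis service for '${1#dokku.redis.}'" ;;
    dokku.postgres.*) echo "Postgres service for '${1#dokku.postgres.}'" ;;
    *.workspace)      echo "workspace for app '${1%.workspace}'" ;;
    *.*.*)            app=${1%.*.*}
                      rest=${1#"$app".}
                      echo "app '$app', process type '${rest%.*}', instance ${rest#*.}" ;;
    *)                echo "unrecognized name: $1" ;;
  esac
}
describe dash-bio.web.1        # app 'dash-bio', process type 'web', instance 1
describe dokku.redis.dash-bio  # Redis service for 'dash-bio'
describe dash-bio.workspace    # workspace for app 'dash-bio'
```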


Dokku

Dokku is an open source project (sponsored by Plotly and others) that provides a Heroku-like “Platform as a Service” (PaaS) based on Docker containers. A PaaS is a service that handles all the steps needed to build, deploy, and run a webapp; Dash Enterprise itself is an example of a PaaS. Heroku was one of the earliest PaaS products on the market, and it popularized the simple git push deployment workflow that Dash Enterprise uses.

Dokku, in turn, uses a build system called Herokuish. This system automatically detects the type of app being built and builds it using a language-specific tool called a buildpack. While many open-source buildpacks exist, Dash Enterprise uses custom buildpacks maintained by Plotly in order to allow airgapped (fully offline) app builds and improve debugging.

All Dokku commands run in the dash container described above. Dokku is used to build and deploy apps as well as manage them; the Dash App Manager runs Dokku commands to perform many of its operations such as initializing apps, and Dash Enterprise exposes a carefully selected subset of Dokku commands to advanced users via SSH.

For more information on Dokku, see its documentation. Please note that not all of Dokku’s capabilities are currently supported in Dash Enterprise.

How an App is Deployed

Dash Apps are deployed via the git push command. This in turn uses either SSH or HTTPS to communicate with the server.

For an SSH push, git connects to the SSH daemon (sshd) running in the dash container.

For an HTTPS push, git connects to the git-http-backend daemon, again running in the dash container. (The connection passes through the haproxy container, which performs SSL termination as usual, and through the nginx web server in the dash container.)

In both cases, after receiving the new or updated code, git (on the server) runs Dokku via a hook. Dokku builds a Docker image for the app using its build system and runs it in one or more Docker containers.
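
As a concrete sketch of the push workflow (the hostname dash.example.com, the app name my-app, and the remote URL format are hypothetical; the exact remote URL for your server is shown in the Dash App Manager):

```shell
# Register the Dash Enterprise server as a git remote and inspect it.
# Hostname, app name, and URL format are hypothetical placeholders.
repo=$(mktemp -d) && cd "$repo" && git init -q
git remote add plotly dokku@dash.example.com:my-app   # SSH transport -> sshd in the dash container
git remote -v
# An actual deployment would then run: git push plotly main
```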

How an HTTP(S) Request is Served

HTTP(S) requests to Dash Apps first reach the HAProxy HTTP gateway running in the haproxy container. If HAProxy receives a plain HTTP (non-SSL/TLS) request, it redirects it to the corresponding HTTPS (SSL/TLS) URL.

In the case of an HTTPS request, HAProxy performs SSL termination. In other words, it receives the HTTPS request, handles all the details of the SSL connection, and sends an HTTP request to the nginx service running in the dash container. (We do not use SSL here because both services run on the same host, so there is no possibility of an outsider eavesdropping on the connection.)
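
A minimal HAProxy sketch of this behavior might look as follows; the bind addresses, certificate path, and backend address are assumptions for illustration, not Dash Enterprise’s actual configuration:

```
frontend www
    bind :80
    bind :443 ssl crt /etc/ssl/dash.pem   # hypothetical certificate path
    # Redirect plain-HTTP requests to HTTPS
    redirect scheme https code 301 if !{ ssl_fc }
    default_backend dash_nginx

backend dash_nginx
    # SSL is terminated above; traffic to nginx in the dash container is plain HTTP
    server dash 127.0.0.1:8080            # hypothetical address and port
```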

The nginx service in the dash container receives the HTTP request from HAProxy and routes it based on the request’s pathname:

  • Requests related to authentication (e.g. login or the user administration site) go to the authentication server running in the dashauth container.

  • Requests intended for the Portal and Dash App Manager go to the Dash Enterprise backend running in the dash container.

  • git HTTPS pushes go to the git HTTP backend running in the dash container.

  • Requests for Dash Apps running on this server are handled as described in the next paragraph.

  • All remaining requests result in an “app not found” error.
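
The routing rules above can be sketched as a simple dispatch on the request path; the path prefixes used here are illustrative placeholders, not nginx’s actual location rules:

```shell
# Illustrative sketch of the nginx routing decision.  The path prefixes
# are hypothetical stand-ins for the real location rules.
route() {
  case "$1" in
    /login*|/admin*)  echo "dashauth container (authentication server)" ;;
    /portal*)         echo "dash container (Dash Enterprise backend)" ;;
    /GIT/*)           echo "dash container (git-http-backend)" ;;
    /my-app*)         echo "containers of the Dash App 'my-app'" ;;
    *)                echo "app not found" ;;
  esac
}
route /GIT/my-app   # dash container (git-http-backend)
route /nonexistent  # app not found
```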

nginx handles requests to Dash Apps by first checking authentication and authorization using a subrequest. This means that nginx makes a request to the Dash Enterprise backend, which in turn uses the user’s session information (found using a session cookie) and the app name to determine if the user is authorized to access the app. If the user is logged in and authorized to access the app, the subrequest instructs nginx to continue with the rest of the process. Otherwise:

  • If the user is not logged in, they are prompted to log in; or

  • If the user is logged in but does not have the required permissions for the app, an “app not found” error is sent.
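
The outcome of the subrequest can be summarized as a function of the two checks above; authorize here is a hypothetical sketch, not Dash Enterprise code:

```shell
# Hypothetical sketch: $1 = logged in? (yes/no), $2 = authorized for the app? (yes/no)
authorize() {
  if [ "$1" != yes ]; then
    echo "prompt the user to log in"
  elif [ "$2" != yes ]; then
    echo "send an 'app not found' error"
  else
    echo "continue: forward the request to an app container"
  fi
}
authorize yes no   # send an 'app not found' error
```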

If the user has access to the app, nginx sends the request to one of the app’s containers. The web server inside the app container receives this request and handles it.
