MCE Deployment Overview

MCE can be deployed in multiple configurations, depending on the required functionality and the available topology. Currently it can run as a standalone deployment, or target a Skype for Business (SfB) environment, with or without Persistent Chat (PChat) enabled.

Please note that MCE chat rooms can currently only be accessed and used via a product of the MindLink suite, either a client (Anywhere, Desktop, Mobile) or an integration (MLAPI). Native or third-party SfB clients will not be able to interact with MCE rooms.

Deployment options#

The MindLink Chat Engine is shipped as a module within the core MindLink product suite. The complete MindLink solution is therefore split into two application roles:

  1. The front end services - serving as the gateway for MindLink clients
  2. The back end services - hosting the MindLink Chat Engine

One-box deployment#

The most basic deployment runs both front end and back end services in a single process.

One-box MindLink Deployment

You can also form a cluster of these one-box deployments for high availability and resiliency.

Clustered MindLink Deployment

Network topology#

A one-box deployment makes it trivial to deploy MindLink Anywhere with MCE capabilities as all services are provided by the same host process.

One-box network topology

In a multi-node deployment the cluster nodes must communicate with each other, which requires additional ports to be exposed.

Cluster network topology

High availability#

A single one-box deployment offers no high availability. To achieve scalability and high availability, multiple servers can be installed in a cluster.

The front end services offer a stateless gateway to the running MCE system and can be safely placed behind a load balancer to ensure high availability.

The back end services are built on a distributed actor-based platform that is horizontally scalable and auto-load balancing. This means that nodes can be added and removed from the running system dynamically and the system will continue to operate without additional configuration.

The communication between the two tiers also leverages the auto-load balancing platform, ensuring that no additional configuration is required to direct traffic between them.

The front end and back end tiers have different requirements to continue running in the face of n failures:

  • Front end requires n + 1 servers deployed
  • Back end requires n + 2 servers deployed
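These rules amount to a simple sizing calculation. The sketch below is purely illustrative (not part of the product) and encodes the two requirements above to compute the minimum server count per tier for a desired failure tolerance:

```python
def min_servers(tier: str, n_failures: int) -> int:
    """Minimum servers for a tier to survive n_failures simultaneous failures.

    The front end is a stateless gateway behind a load balancer, so one
    spare suffices (n + 1). The back end is a distributed actor-based
    cluster and needs a larger margin (n + 2).
    """
    if tier == "front-end":
        return n_failures + 1
    if tier == "back-end":
        return n_failures + 2
    raise ValueError(f"unknown tier: {tier}")

# To survive 2 simultaneous failures:
print(min_servers("front-end", 2))  # 3 servers
print(min_servers("back-end", 2))   # 4 servers
```

For a cluster of one-box nodes, where every server hosts both tiers, the larger back-end figure is the one to follow.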

Even though every node in the deployment hosts both tiers, you can ensure that the back-end tier nodes do not take on any front-end load by configuring the load balancer to target only a subset of the cluster.

For a cluster of one-box nodes you should follow the more restrictive back-end requirements (n + 2 servers).

High availability deployment

In this example deployment with 5 nodes (2 acting as front-end service nodes), the system can survive 2 total failures and a maximum of 1 front-end service failure.

High availability deployment with failure

Tiered deployment#

It is possible to deploy the front end services and back end services in separate tiers. The front-end tier should follow the MindLink Anywhere deployment recommendations to provide high availability and resiliency. The back-end tier should be deployed as a self-managing cluster.

Multi-tier MindLink Deployment

Network topology#

A tiered deployment requires configuring one or more servers as a front-end cluster (MindLink Anywhere) and one or more servers as a back-end cluster (MCE).

The front-end cluster and back-end cluster have different networking requirements, although there is significant overlap.

The front-end cluster has the following networking requirements:

Front-end network topology

The back-end cluster has the following network requirements:

Back-end network topology

High availability#

A tiered deployment does not necessarily offer high availability: a single front-end node paired with a single back-end node provides none. To achieve scalability and high availability, multiple servers can be installed in a cluster at either tier.

The front end services offer a stateless gateway to the running MCE system and can be safely placed behind a load balancer to ensure high availability.

The back end services are built on a distributed actor-based platform that is horizontally scalable and auto-load balancing. This means that nodes can be added and removed from the running system dynamically and the system will continue to operate without additional configuration.

The communication between the two tiers also leverages the auto-load balancing platform, ensuring that no additional configuration is required to direct traffic between them.

The front end and back end tiers have different requirements to continue running in the face of n failures:

  • Front end requires n + 1 servers deployed
  • Back end requires n + 2 servers deployed

High availability tiered deployment

In this example deployment with 5 nodes (2 acting as front-end service nodes), the system can survive 1 front-end service failure and 1 back-end service failure.

Administration Services#

Currently the MCE administration services perform Windows account authentication and can authorize against a specific UPN, an Active Directory group, or a security attribute. It is recommended to deploy at least one node that provides the administration services on a firewalled port that is not accessible outside localhost.

In a one-box cluster this means having at least one node that is not used to load-balance MindLink Anywhere traffic and hosts only the MCE administration service (the same port is used for both services). Access to the MCE administration services can be restricted by ensuring that all traffic from outside localhost routes through a reverse proxy configured to block the administration URL prefix mce/management. The administration services are then available only from localhost on the administration node.
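As an illustration of the reverse-proxy approach, a rule along the following lines would block the administration prefix. This is a hypothetical nginx sketch; the upstream name is an assumption and your proxy configuration will differ:

```nginx
# Hypothetical sketch: deny the MCE administration URL prefix at the
# reverse proxy so it remains reachable only via localhost on the node.
location /mce/management {
    deny all;                            # external clients receive 403
}

location / {
    proxy_pass http://mindlink_frontend; # assumed upstream name
}
```

Any reverse proxy or load balancer with path-based rules can achieve the same effect; the key point is that requests for mce/management never reach the service from outside the node.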

In a tiered deployment this means having back-end service nodes host the MCE administration module and ensuring that the web services port (default: 9080) is not accessible outside localhost.
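A quick way to sanity-check that the web services port is only reachable locally is a TCP connect test. The helper below is an illustrative sketch; the port number follows the stated default of 9080:

```python
import socket

def is_port_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run on the node itself: should report True.
# print(is_port_reachable("127.0.0.1", 9080))
# Run from any other machine against the node's address: should report
# False if the firewall is configured correctly.
```

The same check can be scripted across all nodes after deployment to confirm that only the intended hosts expose the administration port.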

Sole Persistent Chat Service#

In this configuration MCE acts as the sole persistent chat engine. The MindLink server is connected to the SfB topology for the IM and Presence workloads.

Side-by-side with Skype for Business Persistent Chat#

In this configuration Skype for Business Persistent Chat is enabled and working on the Skype topology. The MindLink server is connected to the topology and is configured to connect to Persistent Chat. MindLink clients can seamlessly interact with either SfB Persistent Chat or MCE-backed rooms.

Attribute servers#

MCE utilises external security attribute systems to synchronize security attributes. The administration engine is integrated with Active Directory and, optionally, a third-party attribute server. The third-party attribute server is utilised by the content classification, communities of interest, and IM ethical wall features.