Prerequisites
- A shared file system based on NAS or SAN. Note that the shared file system is a single point of failure.
The recording solution has multiple components; high availability for each component is described below.
Recorder Failover
The core recorder service is deployed on two different servers with identical configurations, and two SIP trunks are configured in CUCM accordingly. CUCM keeps track of the recorder servers and sends SIP invites to one of the active recorders. The active recorder saves the recordings on the shared network file system.
| Scenario | Behavior |
|---|---|
| Recorder-A (active) is down | CUCM starts sending SIP invites to Recorder-B and makes it active |
| Recorder-A is restored | CUCM makes Recorder-A active again and sends new invites to it |
| Both Recorder-A and Recorder-B are down | No recordings are made until one of the recorders is restored |
| The link between Recorder-A (active) and CUCM is down | CUCM makes Recorder-B active |
| Recorder-A (active) goes down with calls in progress | The active recordings are marked as abnormal calls, with no recording files |
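How CUCM decides that a recorder is unreachable is configured on the CUCM side (SIP trunk monitoring), not in the recording solution itself. Purely as an illustration of that kind of liveness check, the sketch below sends a single SIP OPTIONS request over UDP and treats any reply as "alive"; the recorder addresses and ports are hypothetical placeholders, not values from this deployment.

```python
import socket
import uuid

def sip_options_probe(host: str, port: int = 5060, timeout: float = 2.0) -> bool:
    """Send one SIP OPTIONS request over UDP and report whether any reply arrives."""
    local_ip = socket.gethostbyname(socket.gethostname())
    msg = (
        f"OPTIONS sip:{host}:{port} SIP/2.0\r\n"
        f"Via: SIP/2.0/UDP {local_ip}:5060;branch=z9hG4bK{uuid.uuid4().hex[:16]}\r\n"
        f"From: <sip:probe@{local_ip}>;tag={uuid.uuid4().hex[:8]}\r\n"
        f"To: <sip:{host}:{port}>\r\n"
        f"Call-ID: {uuid.uuid4().hex}@{local_ip}\r\n"
        "CSeq: 1 OPTIONS\r\n"
        "Max-Forwards: 70\r\n"
        "Content-Length: 0\r\n\r\n"
    )
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(msg.encode(), (host, port))
        try:
            sock.recv(4096)   # any SIP response (e.g. 200 OK) counts as "alive"
            return True
        except socket.timeout:
            return False

# Hypothetical recorder addresses, for illustration only
for name, addr in [("Recorder-A", "10.0.0.11"), ("Recorder-B", "10.0.0.12")]:
    print(name, "reachable" if sip_options_probe(addr) else "not responding")
```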
High availability for the other services, such as the Mixer, REST Server, and Archival process, is achieved with Docker Swarm.
A swarm consists of multiple Docker hosts which run in swarm mode and act as managers (to manage membership and delegation) and workers (which run swarm services). A given Docker host can be a manager, a worker, or perform both roles. When you create a service, you define its optimal state (number of replicas, network and storage resources available to it, ports the service exposes to the outside world, and more). Docker works to maintain that desired state. For instance, if a worker node becomes unavailable, Docker schedules that node’s tasks on other nodes. A task is a running container which is part of a swarm service and managed by a swarm manager, as opposed to a standalone container.
To learn more about Docker Swarm mode, see Docker Swarm Concepts.
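As a concrete illustration of "declaring a desired state", the sketch below uses the Docker SDK for Python to create a replicated service. In practice the equivalent `docker service create` command or a stack file would normally be used, and the image name, service name, and port here are placeholders rather than the product's real components; a swarm is assumed to be already initialized.

```python
import docker

client = docker.from_env()

# Declare the desired state: two replicas of a placeholder "mixer" image,
# with port 8080 published to the outside world.
service = client.services.create(
    image="registry.example.com/mixer:latest",   # placeholder image
    name="mixer",                                # placeholder service name
    mode=docker.types.ServiceMode("replicated", replicas=2),
    endpoint_spec=docker.types.EndpointSpec(ports={8080: 8080}),
)

# Swarm now works to keep two replicas running; if a worker node becomes
# unavailable, its tasks are rescheduled on the remaining nodes.
print(service.name, service.attrs["Spec"]["Mode"])
```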
When the products are deployed in Docker Swarm mode, Docker Swarm takes care of container orchestration. Hence, if any of the Docker containers goes down, Swarm starts the container on an available Docker engine in the cluster.
If the Docker engine itself goes down, it is usually restarted automatically by the Linux OS. If it does not restart, manual intervention is required to start it. While the engine is down, the containers that are already running continue to run, but the container orchestration features are unavailable; for example, no node can be added to or removed from the cluster until the Docker engine is back up.
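To see where Swarm has scheduled the replicas (for example, after a worker node goes down and its tasks are rescheduled), a quick check like the following can be run against a manager node, again using the Docker SDK for Python; the service name is the same placeholder as above.

```python
import docker

client = docker.from_env()

# List swarm nodes with their role and current state ("ready" or "down")
for node in client.nodes.list():
    hostname = node.attrs["Description"]["Hostname"]
    role = node.attrs["Spec"]["Role"]
    state = node.attrs["Status"]["State"]
    print(f"node {hostname}: role={role}, state={state}")

# List the tasks (running containers) of the placeholder "mixer" service
# and the node each one is currently scheduled on
service = client.services.get("mixer")
for task in service.tasks():
    print("task", task["ID"][:12],
          "on node", task.get("NodeID", "?"),
          "state:", task["Status"]["State"])
```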