
Deployment Guide version 12.3


Purpose

To provide the installation prerequisites, system requirements, and installation instructions for Supervisor Tools.

Solution Prerequisites

Supervisor Tools is compatible with the following hardware and software.

Hardware Requirements

For HA deployment, each machine in the cluster should meet the following minimum hardware specifications:

  • CPU: 2 vCPU cores

  • RAM: 4 GB

  • Disk: 100 GB mounted on /

  • NICs: 1 NIC
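
You can quickly check whether a machine meets these specifications with standard Linux commands (nproc for CPU cores, free for RAM, df for the disk mounted on /):

$ nproc

$ free -h

$ df -h /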

Software Requirements

OS Compatibility

Supervisor Tools is tested with the following Linux-based operating systems.

  • Red Hat Enterprise Linux (RECOMMENDED), release 8.4 or higher. Administrative privileges (root) are required to follow the deployment steps.

  • CentOS Linux 7.7.1908 (Core). Not recommended, as it has many known vulnerabilities and is no longer maintained. Administrative privileges (root) are required to follow the deployment steps.

Database Requirements

  • SQL Server 2019 Express/Standard/Enterprise. The customer or partner is responsible for installing it and making it accessible on the local system or LAN. To support High Availability, the partner/customer must set up MS SQL Server in failover cluster mode.

Docker Engine Requirements

  • Docker CE: version 18 or higher

  • Docker Compose: version 1.23.1
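
You can verify the versions installed on each machine with the standard version commands:

$ docker --version

$ docker-compose --version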

Browser Compatibility

  • Chrome 120.x: compatible.

  • Firefox: not tested. The UI of some pages might not render properly in Firefox.

Cisco Unified CC Compatibility

  • UCCX: 12.5, 12.0 (SSO)

  • UCCE: 12.6

  • CUCM: 12.5.1

Installation

  • Place all files provided with the release in any folder (preferably /root) on all the machines where the application is being deployed.

  • Go to the folder where you copied all the tar files and load them by running the following command (the command starts after the $ sign):

$ for file in *.tar; do docker load < "$file"; done
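
Once the loop finishes, you can confirm that the images were loaded by listing the local Docker images:

$ docker images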


Configurations

Application Configurations

Create new DBs in the MSSQL server, with any names, corresponding to all the services in the eabc-compose.yml file. Then update the configurations in the compose file (the values under the environment tag in each service) as described below:

Generic configurations in all services:

DB_URL

URL of the database for the microservice.

For MSSQL:

://<host>:<port>/<database-name>

Example:

://192.168.1.92:1433/<database-name>

DB_DRIVER

Database driver.

For MSSQL:


DB_DIALECT

Database dialect.


For MSSQL:


DB_USER

Database user that has full access to the configured DB

DB_PASS

Database password corresponding to the above user

TZ

Time zone configuration. Use the TZ database name for your region from the following URL:

https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List

PRIM_IP

IP of the primary machine in the HA cluster, or the IP of the machine itself if not deployed in HA.

Example:

192.168.1.76

SEC_IP

IP of the Secondary machine in the HA cluster.

Use the same value as PRIM_IP if not deployed in HA.

Example:

192.168.1.80

UMM-specific configurations:

In addition to the above configurations, set the following in the UMM service in the compose file (an example sketch follows these entries):

PROMPT_PORT

Port of Prompt microservice. Defaults to 8081

EABC_PORT

Port of EABC microservice. Defaults to 8082

TAM_PORT

Port of TAM microservice. Defaults to 8083

CIM_PORT

Port of the CIM microservice (omit if CIM is not deployed). Defaults to 8084
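
As a reference only, the following sketch shows how these variables might look under the environment tag of the umm service in the compose file. All values shown (user, password, IPs, time zone) are illustrative placeholders, and the database URL, driver, and dialect must follow the MSSQL formats described above:

 umm:
   ...                                # image, ports, etc. remain as provided
   environment:
     DB_URL: "<MSSQL URL for the UMM database>"
     DB_DRIVER: "<MSSQL driver class>"
     DB_DIALECT: "<MSSQL dialect>"
     DB_USER: "umm_user"              # illustrative user with full access to the DB
     DB_PASS: "change-me"             # illustrative password
     TZ: "Europe/London"              # any TZ database name
     PRIM_IP: "192.168.1.76"
     SEC_IP: "192.168.1.80"           # same as PRIM_IP if not deployed in HA
     PROMPT_PORT: "8081"
     EABC_PORT: "8082"
     TAM_PORT: "8083"
     CIM_PORT: "8084"                 # omit if CIM is not deployed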

HA Configurations

Edit the keep file to configure the HA attributes. The following describes each attribute; an example sketch follows the list.


KEEPALIVED_UNICAST_PEERS

IP of the other machine in the cluster (the cluster being an HA cluster of two machines on which Supervisor Tools is deployed). It must not be the IP of the machine you are configuring.

Example:

192.168.1.233

KEEPALIVED_VIRTUAL_IPS

Virtual IP used to access the application. It should be available in the same subnet as the HA machines.

Example:

192.168.1.245

KEEPALIVED_PRIORITY


Priority of the node. A higher number represents a higher priority; the node with the higher priority becomes the master and the other becomes the backup by default.

Example:

100

KEEPALIVED_INTERFACE

The network interface of the machine on which the virtual IP will be configured. On Linux, you can list the available interfaces with, for example, $ ip addr

On a typical installation, it would be:

eth0

CLEARANCE_TIMEOUT

UMM startup time in seconds. On a 2 core machine, UMM takes around 3 minutes or 180 seconds to boot up.

Example:

180

KEEPALIVED_ROUTER_ID

Router ID of the cluster. This should be the same on both the machines in the cluster.

Example:

51

SCRIPT_VAR

This is the health check script. It should remain as is unless you change the UMM port in the compose file; in that case, replace 6060 with the port you are using. Example:

&& -O index.html localhost:6060/umm/base/index.html
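
As a reference only, the following sketch shows how the HA attributes for the primary node might look, assuming the keep file accepts environment-style KEY=VALUE entries. All IP addresses are illustrative, and SCRIPT_VAR is left as shipped:

KEEPALIVED_UNICAST_PEERS=192.168.1.233
KEEPALIVED_VIRTUAL_IPS=192.168.1.245
KEEPALIVED_PRIORITY=101
KEEPALIVED_INTERFACE=eth0
KEEPALIVED_ROUTER_ID=51
CLEARANCE_TIMEOUT=180

On the secondary node, set KEEPALIVED_UNICAST_PEERS to the primary machine's IP and use a lower KEEPALIVED_PRIORITY (for example, 100).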

Firewall Configuration


You must allow all the ports corresponding to the services in the compose file for HTTP and HTTPS, for both the external and internal network zones. By default, these are ports 8080-8084 (HTTP), 8443-8444 (HTTPS), and port 27017 (if required).


On CentOS 7, you can execute the following commands to enable the default ports in the public zone:

$ firewall-cmd --add-port=8080-8084/tcp --permanent

$ firewall-cmd --add-port=8443-8444/tcp --permanent

$ firewall-cmd --add-port=27017/tcp --permanent

$ firewall-cmd --reload
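
You can confirm that the ports were opened by listing them:

$ firewall-cmd --list-ports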


Consult your network admin for SLES.

SSL Configuration


To configure SSL/TLS in your application, place your certificate and key files (the key file must be named server.key) in a directory on root and mount a volume corresponding to that directory inside the UMM container. A default self-signed certificate is included in the build, but production certificate files should be provided by the customer.


For example, if you place your SSL files in /root/ssl, update the umm service in the eabc-compose.yml file as follows:


 umm:

   image: umm:12.1.11

   ...               # All the other configurations remain the same

   volumes:

      - /root/ssl:/usr/local/tomcat/webapps/umm/base/

Do NOT make the above changes if you want to use the default self-signed certificate for the server.
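
If you only need to test the SSL volume mount before the customer provides production certificates, you can generate a temporary self-signed pair with OpenSSL. The certificate file name used here (server.crt) and the subject are assumptions for illustration; confirm the expected file names before relying on this in a real deployment:

$ openssl req -x509 -newkey rsa:2048 -nodes -days 365 -subj "/CN=supervisor-tools" -keyout /root/ssl/server.key -out /root/ssl/server.crt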

Translations

For Supervisor Tools


Download this folder, update the translations in the respective JSON files, and place the folder in the root directory of the machine (all machines if deploying in HA). The following folders correspond to the languages below:


  • en: English

  • fr: French

  • de: German

  • it: Italian


The translation folder should have the following hierarchy:


  • translations

    • Announcement

      • i18n

        • en

        • fr

        • de

        • it

    • Base

      • i18n

        • en

        • fr

        • de

        • it


After updating and placing the translations folder, mount the following volume in the umm service in the eabc-compose.yml file:


volumes:


For CIM (if installed)

Download this folder, update the translations in the respective JSON files, and place the folder in the root directory of the machine (all machines if deploying in HA). The following folders correspond to the languages below:

  • en: English

  • fr: French

  • de: German

  • it: Italian


The translation folder should have the following hierarchy:

  • translations-cim

    • i18n

      • en

      • fr

      • de

      • it


After updating and placing the translations-cim folder, mount the following volume in the umm service in the eabc-compose.yml file:

volumes:

Log Rotation

Add the following lines to the daemon.json file in the /etc/docker/ directory (create the file if it isn't there already) and restart the Docker daemon using systemctl restart docker on all machines. The following configuration sets a maximum of three log files with a maximum size of 20 MB each.

{ "log-driver": "json-file",

"log-opts": {

"max-size": "20m",

"max-file": "3"

}

}
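
After saving daemon.json, restart the Docker daemon and, optionally, confirm the active logging driver. Note that the new log options apply only to containers created after the restart:

$ systemctl restart docker

$ docker info --format '{{.LoggingDriver}}'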

Execution

CentOS

  • Start the HA cluster on all the machines with the following command:

$ chmod +x keep-command.sh && ./keep-command.sh

  • Start the compose service on all machines using the following command:

$ docker-compose -f eabc-compose.yml up -d

SLES

  • Start a swarm cluster on the machine if you haven’t done that already (for some other application stack) with the following command:

$ docker swarm init

  • Start the HA cluster on all the machines with the following command:

$ chmod +x keep-command.sh && ./keep-command.sh

  • Start the compose service on all machines using the following command:

$ docker stack deploy -c eabc-compose.yml st

If everything goes well, you should be able to access the application on http://Virtual-IP:8080/umm OR https://Virtual-IP:8443/umm after the application boot time (takes around 3 minutes on a 2 core machine).

With a non-redundant (non-HA) deployment, the access URL uses the physical machine's IP instead of the virtual IP.
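
As a quick sanity check after startup, you can confirm that the containers are running and that the UMM endpoint responds (replace Virtual-IP with your virtual or physical machine IP; -k skips validation of the self-signed certificate):

$ docker ps

$ curl -k -I https://Virtual-IP:8443/umm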

Gadget Deployment

  1. Download the latest gadget specification files from here and extract them to the desired directory.

  2. Open the EFGadget.js file and, on line 64, update the URL with your virtual IP and the port of the umm service as per your deployment, as shown below:


  3. Run FileZilla or any other FTP client and log in to the Finesse server using the following credentials:

    1. Host: IP of the Finesse server where the gadget is to be deployed

    2. Username: 3rdpartygadget  

    3. Password: ask the Finesse administrator for the password

    4. Port: 22

  4. Create a folder (e.g., EFGadget) in the files directory on the Finesse server and upload the specification files to it.

  5. Repeat the same steps for deploying the gadget on the secondary Finesse server.

  6. Open cfadmin for your Finesse using https://finesse.ip:8445/cfadmin, log in, and follow the steps below to add the Supervisor Tools gadget to your team:

    1. Click the Team Resources tab at the top

    2. Click on the team for which you want to add the gadget

    3. Scroll down to the “Desktop Layout XML” section at the bottom of the screen, locate <role>supervisor</role>, and add the following block after the <tabs> tag.

      <tab>
          <id>ST</id>
          <label>Supervisor Tools</label>
          <columns>
              <column>
                  <gadgets>
                      <gadget>/3rdpartygadget/files/<folder>/EFGadget.xml</gadget>
                  </gadgets>
              </column>
          </columns>
      </tab>


Troubleshooting

Logs for each container are available both in log files and through the Docker daemon. To see the logs for any container on STDOUT, execute docker ps to get the ID of the container, then view its logs using docker logs <container_id>.
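
For example, to view the most recent output of a container and follow new log lines as they arrive:

$ docker ps

$ docker logs --tail 100 -f <container_id>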

At any given time, the active machine is the one with the highest keepalived priority, so the relevant logs are stored there. In an HA deployment, you can also identify the active machine from the keepalived logs.