

Purpose

To provide installation prerequisites, system requirements, and installation instructions for the Supervisor Tools.

Solution Prerequisites

Hardware Sizing

The following lists the machine specifications for a maximum of 100 concurrent agents.

vCPU                               vRAM    vDisk
2 cores (2.60 GHz, 2 processors)   8 GB    1x100 GB

To support HA, two machines with the same specifications are required for a redundant deployment.

To get machine specifications for a larger agent volume and/or with other ExpertFlow products in a co-resident deployment, please check here.

Software Requirements

The customer/partner is required to install the following software on the server.

OS Compatibility

Use either of the following operating systems; both are compatible with the Supervisor Tools.

Item                        Version
CentOS Linux                7.7.1908 (Core), updated to the latest packages (yum update)
Red Hat Enterprise Linux    release 8.4 (Ootpa)

Database Requirements

Item                                           Notes
SQL Server 2019 Express/Standard/Enterprise    The customer or partner must install it and make it accessible on the local system or LAN. To support High Availability, the partner/customer must set up MS SQL Server in failover cluster mode.

Docker Engine Requirements

Item        Notes
Docker CE   Docker CE 18+ and docker-compose (in the case of CentOS)

Browser Compatibility

Item      Version      Notes
Chrome    120.x
Firefox   Not tested   The UI of some pages might not render properly in Firefox.

Cisco Unified CC Compatibility

UCCX and UCCE have been tested without Single Sign-On (SSO).

Item    Version
UCCX    12.0, 12.5
UCCE    12.6

Port Utilization

Type     Source Host                Source Port   Destination Host           Destination Port
HTTP     Supervisor Tools web app   any           CCX-A, CCX-B               80
HTTPS    Supervisor Tools web app   any           CCX-A, CCX-B               443
TCP      Supervisor Tools web app   any           MS SQL Server              1433
TCP      Supervisor Tools web app   any           MySQL                      3306
HTTP/S   Enterprise web user        any           Supervisor Tools web app   8080

System Access Requirements

Integration with CCX

To synchronize changes with CCX, an admin user is required for the Express Configuration APIs to read and write data on CCX.

Machine Access

Administrative privileges (root) on the machine are required to follow the deployment steps.

Database access rights

The following database and privileged SQL user are required to connect the EF application to its database. The application creates its tables and other schema objects itself after connecting.

  • Create a wallboard application database with the name EFSupervisorTools.

  • Create an SQL Server User EFUser with a database role db_owner on the EFSupervisorTools database.
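For reference, a minimal sqlcmd sketch for creating these objects, assuming a reachable SQL Server instance and placeholder passwords of your choosing (adjust to your environment):

$ sqlcmd -S <sql-server-ip> -U sa -P '<SA_PASSWORD>' -Q "CREATE DATABASE EFSupervisorTools;"
$ sqlcmd -S <sql-server-ip> -U sa -P '<SA_PASSWORD>' -Q "CREATE LOGIN EFUser WITH PASSWORD = '<STRONG_PASSWORD>';"
$ sqlcmd -S <sql-server-ip> -U sa -P '<SA_PASSWORD>' -d EFSupervisorTools -Q "CREATE USER EFUser FOR LOGIN EFUser; ALTER ROLE db_owner ADD MEMBER EFUser;"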

Time Synchronization Requirements

If the date/time is not synchronized across the servers, the system can produce unpredictable results. Therefore, the EF Team Administration application and Cisco UCCX should have their time zone and date/time properly configured according to the geographic region, and must be kept synchronized with a reliable NTP source.

To configure the time zone, see the instructions from the hardware or software manufacturer of the NTP server. The application servers should be synchronized, and this synchronization should be maintained continuously and validated regularly. For security reasons, we recommend Network Time Protocol (NTP) v4.1+.
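For example, on CentOS/RHEL you can set the time zone and check synchronization as follows (a minimal sketch, assuming chronyd is the time client; Europe/Berlin is only an illustrative zone name):

$ timedatectl set-timezone Europe/Berlin   # use the TZ database name for your region
$ timedatectl status                       # confirm time zone and NTP sync state
$ chronyc sources -v                       # list NTP sources and their offsets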

Installation

  • Place all files provided with the release in any folder (preferably /root) on all the machines where the application is being deployed.

  • Go to the folder where you copied all the tar files and load them by running the following command (the command starts after the $ sign):

$ for file in *.tar; do docker load < $file; done
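Once loading completes, you can verify that the images are available (a generic Docker command, not specific to this release):

$ docker images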


Configurations

Application Configurations

Create new DBs in the MS SQL Server, with names corresponding to all the services in the compose file. Then update the configurations in the compose file (the values under the environment tag in each service), as described below:

Generic configurations in all services:

DB_URL

URL of the database for the microservice.

For MSSQL, the URL has the form <jdbc-prefix>://<IP>:<PORT>/<DB_NAME>, e.g. with the jTDS driver:

jdbc:jtds:sqlserver://<IP>:<PORT>/<DB_NAME>

Example:

jdbc:jtds:sqlserver://192.168.1.92:1433/<DB_NAME>

DB_DRIVER

Database driver class.

For MSSQL, e.g. the jTDS driver class:

net.sourceforge.jtds.jdbc.Driver

DB_DIALECT

Database dialect.

For MSSQL, e.g. the Hibernate SQL Server dialect:

org.hibernate.dialect.SQLServerDialect
DB_USER

Database user which has full access to the configured DB.

DB_PASS

Database password corresponding to the above user.

TZ

Time zone configuration. Use the TZ database name for your region from the following URL:

https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List

PRIM_IP

IP of the primary machine in the HA cluster, OR the IP of the machine itself in a non-HA deployment.

Example:

192.168.1.76

SEC_IP

IP of the Secondary machine in the HA cluster.

Use the same value as PRIM_IP if not deployed in HA.

Example:

192.168.1.80
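After editing, you can sanity-check the compose file syntax and the resolved environment values with a standard docker-compose facility:

$ docker-compose -f eabc-compose.yml config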

UMM-specific configurations:

In addition to the above configurations, set the following in the UMM service in the compose file.

PROMPT_PORT

Port of Prompt microservice. Defaults to 8081

EABC_PORT

Port of EABC microservice. Defaults to 8082

TAM_PORT

Port of TAM microservice. Defaults to 8083

CIM_PORT

Port of CIM microservice (omit if CIM is not deployed). Defaults to 8084

HA Configurations

Edit the keep.env file to configure the HA attributes. The following table describes the details.


KEEPALIVED_UNICAST_PEERS

IP of the other machine in the cluster (an HA cluster of 2 machines where the Supervisor Tools are being deployed). It must not be the IP of the machine you are currently on.

Example:

192.168.1.233

KEEPALIVED_VIRTUAL_IPS

Virtual IP (should be provided by the customer) to be used to access the application. This should be available in the same subnet as the HA machines.

Example:

192.168.1.245

KEEPALIVED_PRIORITY


Priority of the node, from 0-255. A higher number represents a higher priority. The node with the higher priority becomes the master; the other becomes the backup by default.

Example:

100

KEEPALIVED_INTERFACE

The network interface of the machine that is connected to the LAN. On Linux, you can list interfaces by executing $ ip addr show

On a typical SuSE installation, it would generally be:

eth0

CLEARANCE_TIMEOUT

UMM startup time in seconds. On a 2-core machine, UMM takes around 3 minutes (180 seconds) to boot up.

Example:

180

KEEPALIVED_ROUTER_ID

Router ID of the cluster. This should be the same on both the machines in the cluster.

Example:

51

SCRIPT_VAR

This is the health-check script. It should remain as is unless you change the UMM port in the compose file; in that case, replace 6060 with your UMM port. Example:

pidof dockerd && wget -O index.html localhost:6060/umm/base/index.html
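Putting the above together, a keep.env for the primary node might look like this, assembled purely from the example values in this table (adjust per node; the exact quoting expected depends on how keep-command.sh consumes the file):

KEEPALIVED_UNICAST_PEERS=192.168.1.233
KEEPALIVED_VIRTUAL_IPS=192.168.1.245
KEEPALIVED_PRIORITY=100
KEEPALIVED_INTERFACE=eth0
CLEARANCE_TIMEOUT=180
KEEPALIVED_ROUTER_ID=51
SCRIPT_VAR=pidof dockerd && wget -O index.html localhost:6060/umm/base/index.html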

Firewall Configuration


You must allow all the ports corresponding to the services in the compose file, for both HTTP and HTTPS, in both the external and internal network zones. By default, these are ports 8080-8084 (HTTP), 8443-8444 (HTTPS), and port 27017 (if MongoDB is installed).


On CentOS 7, you can execute the following commands to enable the default ports in public zone:


$ firewall-cmd --zone=public --add-port=8080-8084/tcp --permanent


$ firewall-cmd --zone=public --add-port=8443-8444/tcp --permanent


$ firewall-cmd --zone=public --add-port=27017/tcp --permanent


$ firewall-cmd --reload
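You can then confirm the open ports with standard firewalld usage:

$ firewall-cmd --zone=public --list-ports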


Consult your network admin for SLES.

SSL Configuration


To configure SSL/TLS in your application, place your certificate and key files (named server.crt and server.key) in a directory under /root and mount a volume mapping that directory into the UMM container. A default self-signed certificate is included in the build, but the production certificate files should be provided by the customer.


For example, if you place your SSL files in /root/ssl, update the umm part of the eabc-compose.yml file as below:


umm:
  image: umm:12.1.11
  ...               # All the other configurations remain the same
  volumes:
    - /root/ssl:/usr/local/tomcat/webapps/umm/base/https

Do NOT make the above changes if you want to use the default self-signed certificate for the server.
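If you want to test with a certificate of your own before the production files arrive, a self-signed pair can be generated with standard openssl (the CN value below is only illustrative):

$ mkdir -p /root/ssl
$ openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout /root/ssl/server.key -out /root/ssl/server.crt \
    -subj "/CN=<Virtual-IP-or-hostname>"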

Translations

For Supervisor Tools


Download this folder, update the translations in the respective JSON files, and place it in the root directory of the machine (all machines if deploying in HA). The files map to their corresponding languages as below:


  • en.json: English

  • fr.json: French

  • gr.json: German

  • it.json: Italian


The translation folder should have the following hierarchy:


  • translations

    • Announcement

      • i18n

        • en.json

        • fr.json

        • gr.json

        • it.json

    • Base

      • i18n

        • en.json

        • fr.json

        • gr.json

        • it.json


After updating and placing the translations folder, mount the following volume in the umm service in the eabc-compose.yml file:


volumes:


For CIM (if installed)

Download this folder, update the translations in the respective JSON files, and place it in the root directory of the machine (all machines if deploying in HA). The files map to their corresponding languages as below:

  • en.json: English

  • fr.json: French

  • gr.json: German

  • it.json: Italian


The translation folder should have the following hierarchy:

  • translations-cim

    • i18n

      • en.json

      • fr.json

      • gr.json

      • it.json


After updating and placing the translations-cim folder, mount the following volume in the umm service in the eabc-compose.yml file:

volumes:

Log Rotation

Add the following lines to the daemon.json file in the /etc/docker/ directory (create the file with the name daemon.json in /etc/docker/ if it isn't there already) and restart the Docker daemon using systemctl restart docker on all machines. The following configuration sets a maximum of three log files, with a maximum size of 20 MB each.

{ "log-driver": "json-file",

"log-opts": {

"max-size": "20m",

"max-file": "3"

}

}
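After restarting Docker, you can confirm that newly started containers pick up these options with a generic docker inspect query:

$ docker inspect -f '{{ .HostConfig.LogConfig }}' <container_id>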

Execution

CentOS

  • Start the HA cluster on all the machines with the following command:

$ chmod +x keep-command.sh && ./keep-command.sh

  • Start the compose service on all machines using the following command:

$ docker-compose -f eabc-compose.yml up -d
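You can confirm that the services came up with a standard docker-compose status check:

$ docker-compose -f eabc-compose.yml ps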

SLES

  • Start a swarm cluster on the machine if you haven’t done that already (for some other application stack) with the following command:

$ docker swarm init

  • Start the HA cluster on all the machines with the following command:

$ chmod +x keep-command.sh && ./keep-command.sh

  • Start the compose service on all machines using the following command:

$ docker stack deploy -c eabc-compose.yml st
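You can confirm the stack's services are running with a standard swarm status check:

$ docker stack services st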

If everything goes well, you should be able to access the application at http://<Virtual-IP>:8080/umm OR https://<Virtual-IP>:8443/umm after the application boot time (around 3 minutes on a 2-core machine).

With a non-redundant (non-HA) deployment, the access URL will have the physical machine IP instead of the virtual IP.

Gadget Deployment

  1. Download the latest gadget specification files from here and extract them to the desired directory.

  2. Open the EFGadget.js file and update the URL on line 64 with your virtual IP and the port of the umm service, as per your deployment.


  3. Run FileZilla or any other SFTP client and log in to Finesse using the following credentials (a command-line upload sketch follows this list):

    1. Host: IP of the Finesse server where the gadget is to be deployed

    2. Username: 3rdpartygadget  

    3. Password: ask the Finesse administrator for the password

    4. Port: 22

  4. Create a folder, e.g. EFGadget, inside the files directory on the Finesse server and upload the specification files into it.

  5. Repeat the same steps to deploy the gadget on the secondary Finesse server.

  6. Open cfadmin for your Finesse using https://<finesse-ip>:8445/cfadmin, log in, and follow the steps below to add the Supervisor Tools gadget to your team:

    1. Click on Team Resources tab on top

    2. Click on the team for which you want to add the gadget

    3. Scroll down to the "Desktop Layout XML" section at the bottom of the screen, locate <role>supervisor</role>, and add the following block after the <tabs> tag.

<tab>
    <id>ST</id>
    <label>Supervisor Tools</label>
    <columns>
        <column>
            <gadgets>
                <gadget>/3rdpartygadget/files/<folder>/EFGadget.xml</gadget>
            </gadgets>
        </column>
    </columns>
</tab>
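For steps 3-4, a minimal command-line upload sketch, assuming OpenSSH's sftp client and the EFGadget folder name used above (FileZilla achieves the same result):

$ sftp -P 22 3rdpartygadget@<finesse-ip>   # password from the Finesse administrator
sftp> cd files                             # gadget files are served from the files directory
sftp> mkdir EFGadget                       # folder referenced in the desktop layout XML
sftp> cd EFGadget
sftp> put <path-to-extracted-files>/*      # upload the gadget specification files
sftp> bye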


Troubleshooting

Logs for each container are available both in log files and via the Docker daemon. To see the logs of any container on STDOUT, execute docker ps to get the ID of the container, then view its logs using docker logs <container_id>.

At any given time, the active machine is the one with the highest keepalived priority, so the logs will be stored there. In an HA deployment, you can also identify the active machine from the keepalived logs.
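For example (generic Docker commands; find the keepalived container's ID with docker ps first):

$ docker ps                                      # list running containers and their IDs
$ docker logs -f <container_id>                  # follow a container's logs on STDOUT
$ docker logs <keepalived_container_id> 2>&1 | grep -i 'master\|backup'   # keepalived state changes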


