(13.9) Deployment Guide
Solution Prerequisites
Hardware Sizing
The following table lists the machine specifications for up to 100 concurrent agents, with no other EF components installed on the VM.

| vCPU | vRAM | vDisk | Notes |
|------|------|-------|-------|
| 2 cores (2.60 GHz, 2 processors) | 4 GB | 1x100 GB mounted on / | If MSSQL Server is installed on a separate machine |
| 4 cores (2.60 GHz, 4 processors) | 8 GB | 1x100 GB mounted on / | If MSSQL Server is installed on the same machine as the wallboard stack |
For HA, two machines with the same specifications are required.
To get machine specifications for a larger agent volume and/or with other ExpertFlow products in a co-resident deployment, please get a quote from ExpertFlow or contact your ExpertFlow Account Manager.
Software Requirements
OS Compatibility
The following OS software is required on the server:
| Item | Version | Notes |
|------|---------|-------|
| CentOS | 7.x, updated to the latest packages (`yum update`) | Administrative (root) privileges are required to follow the deployment steps. |
Database Requirements
| Item | Notes |
|------|-------|
| MS SQL Database Server 2016/2017 Express/Standard/Enterprise | For a clustered SQL Server installation, the customer is responsible for providing the cluster. For a stand-alone DB deployment, the database (SQL Server 2017 Express) is included within the Docker containers. |
| Microsoft SQL Server 2017 | Microsoft SQL Server 2017 Express Edition (64-bit) on Linux (Ubuntu 16.04.5 LTS). Valid only for a non-HA deployment. |
Docker Engine Requirements
| Item | Notes |
|------|-------|
| Docker CE 18.06.0+ | |
Browser Compatibility
| Item | Version | Notes |
|------|---------|-------|
| Firefox | 73.0.1 (64-bit) | |
| Chrome | Not tested | Some UI elements might not render properly. A full testing cycle can be carried out on demand. |
| IE | Not tested | Not supported |
Wallboard Reporting/ Display Device
The provisioning of reporting devices is the partner/customer's responsibility. One of the following hardware options must be provided to display wallboard views in the production environment:
- An existing PC with a pre-installed OS, Monitor or LCD for wallboard display OR
- A Smart LCD with an embedded browser to open the wallboard view OR
- A Raspberry Pi device with an OS installed, connected to an LCD for display.
For Raspberry Pi device details, see https://www.raspberrypi.org/products/raspberry-pi-3-model-b/
Port Utilization
The following ports should remain open on the firewall. The local security policy and any antivirus software should also allow communication on these ports.

| Type | Source Host | Source Port | Destination Host | Destination Port |
|------|-------------|-------------|------------------|------------------|
| HTTP | Enterprise web user | any | Dashboards & Wallboards server | 80 |
| HTTPS | Dashboards & Wallboards server | any | Dashboards & Wallboards server | 443 |
| UDP/TCP | Dashboards & Wallboards server | any | MS SQL Server (CCE DB) | 1433 |
Internet access must be available on the machine where the application is being installed, and connections on port 9242 must be allowed in the network firewall to carry out the installation steps. Commands starting with a # require root privileges to execute; the # itself is not part of the command.
Configure Log Rotation
Add the following lines to the /etc/docker/daemon.json file (create the file if it does not already exist) and restart the Docker daemon using `systemctl restart docker`.
Perform this step on all the machines in the cluster in case of HA deployment.
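The configuration block itself did not survive in this copy of the guide. A typical daemon.json log-rotation stanza (the size and file-count values below are illustrative assumptions, not taken from this guide) looks like:

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "50m",
    "max-file": "5"
  }
}
```

With these options, Docker caps each container log file at the given size and keeps a fixed number of rotated files.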
Installing Application
- Download the deployment script deploy-wallboard.sh and place it in the /root directory. This script will:
  - delete the wallboard directory in the present working directory if it exists;
  - clone the wallboard repository from GitLab into the present working directory.
To execute the script, give it execute permissions and run it:
# chmod +x deploy-wallboard.sh
# ./deploy-wallboard.sh
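The script's contents are not reproduced in this guide; the following is only a sketch of the two actions described above. The repository URL lives inside the real script, so it is taken as a parameter here:

```shell
#!/bin/sh
# Sketch of what deploy-wallboard.sh is described to do (not the real script).
# The GitLab repository URL is internal to the actual script; it is passed
# in here so the behaviour can be illustrated.
deploy_wallboard() {
    repo_url="$1"
    # Delete the wallboard directory in the present working directory if it exists
    rm -rf wallboard
    # Clone a fresh copy of the wallboard repository
    git clone -q "$repo_url" wallboard
}
```

Because the old checkout is removed first, re-running the script always yields a clean copy of the repository.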
- Create two databases in your DB server, one for wallboard and one for umm. This step is only required if you're using your own DB server/cluster. If you're using the built-in MSSQL database server, no action is needed.
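For example (the database names here are illustrative; use whatever names you then reference in the DB_URL variables below):

```sql
CREATE DATABASE wallboard_db;
CREATE DATABASE umm_db;
```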
- Update the environment variables in the following files inside the /root/wallboard/docker/environment_variables folder.

common-variables.env

Do not change the default values for a non-HA deployment or if you want to use the built-in database. For HA, use the SQL Server cluster settings instead of the defaults. If you want to use your own MSSQL instance anyway, update the following variables accordingly.

| Name | Description |
|------|-------------|
| DB_URL | Wallboard database connection URL. For example: `jdbc:jtds:sqlserver://<MACHINE-IP or FQDN>:PORT/db_name` or `jdbc:jtds:sqlserver://<MACHINE-IP or FQDN>:PORT/db_name;instanceName=SomeInstance` |
| DB_USER | Database user |
| DB_PASSWORD | Database password |
| DB_DRIVER | JDBC driver, e.g., net.sourceforge.jtds.jdbc.Driver |
| DB_DIALECT | Database dialect, e.g., org.hibernate.dialect.SQLServer2008Dialect |
| CISCO_TYPE | `uccx` for UCCX, `ucce` for UCCE/PCCE |
| SSO_ENABLED | false |
| SLA_decimal | true: the SLA value is displayed up to 2 decimal places. false: the SLA value is rounded off. |
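A filled-in common-variables.env might look like the following (the host, credentials, and database name are illustrative placeholders, not defaults from this guide):

```
DB_URL=jdbc:jtds:sqlserver://192.168.1.50:1433/wallboard_db
DB_USER=wallboard_user
DB_PASSWORD=StrongPassword1
DB_DRIVER=net.sourceforge.jtds.jdbc.Driver
DB_DIALECT=org.hibernate.dialect.SQLServer2008Dialect
CISCO_TYPE=uccx
SSO_ENABLED=false
SLA_decimal=true
```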
umm-variables.env

Do not change the default values for a non-HA deployment or if you want to use the built-in database. For HA, use the SQL Server cluster settings instead of the defaults. If you want to use your own MSSQL instance anyway, update the following variables accordingly.

| Name | Description |
|------|-------------|
| DB_URL | UMM database connection URL. For example: `jdbc:jtds:sqlserver://<MACHINE-IP or FQDN>:PORT/db_name` or `jdbc:jtds:sqlserver://<MACHINE-IP or FQDN>:PORT/db_name;instanceName=SomeInstance` |
| DB_USER | Database user |
| DB_PASS | Database password |
| DB_DRIVER | JDBC driver, e.g., net.sourceforge.jtds.jdbc.Driver |
| DB_DIALECT | Database dialect, e.g., org.hibernate.dialect.SQLServer2008Dialect |

Change the following variables as per your environment:

| Name | Description |
|------|-------------|
| PRIM_FINESSE_IP | Primary Finesse URL, including the port if not 80 or 443 |
| SEC_FINESSE_IP | Secondary Finesse URL, including the port if not 80 or 443 |
| FINESSE_USER | Finesse administrator user |
| FINESSE_PASS | Finesse administrator password |
| ADMIN_PASS | Password of the admin user |
synchronizer-variables.env

| Name | Description |
|------|-------------|
| CC_TYPE | Decides whether the Cisco contact center is UCCX or UCCE. Holds "UCCX" for UCCX and "UCCE" for UCCE. Default is UCCX. |
| REDUNDANT_DEPLOYMENT | Decides if the synchronizer deployment is redundant. Holds "true" or "false"; set it to "true" for HA. Default is false. |
| INSTANCE_NAME | Differentiates instances in a redundant deployment. It can be any string, but it must be different on the two machines, as only one synchronizer service is active at a time. |
| SYNC_AGENTS | Set to "true" to enable agent sync. Default is true. |
| SYNC_AGENTS_STATS | Enables/disables agent stats sync. Default is true. |
| SYNC_AGENTS_EMAIL_STATS | Enables/disables agent email stats. Default is false. |
| SYNC_QUEUES_STATS | Enables/disables queue stats. Default is true. |
| SYNC_QUEUES | Enables/disables queues/skill groups in case of UCCE. Default is true. |
| DB_RETRY_ATTEMPTS | 2 |
| DB_TIMEOUT_CONNECTION | 1800 |

The following variables are used when CC_TYPE = "UCCX":

| Name | Description |
|------|-------------|
| UCCX_PUB_IP | Primary UCCX IP |
| UCCX_PUB_USERNAME | Primary UCCX admin username |
| UCCX_PUB_PASSWORD | Primary UCCX admin password |
| UCCX_SUB_USERNAME | Secondary UCCX admin username |
| UCCX_SUB_PASSWORD | Secondary UCCX admin password |
| UCCX_PUB_DB_PASSWORD | Primary UCCX database reporting user (hruser) password |
| UCCX_SUB_DB_PASSWORD | Secondary UCCX database reporting user (hruser) password |
| UCCX_REAL_TIME_PORT | UCCX real-time APIs port. Default is 9080. |
| CCX_AGENT_STATE_REST_OPERATION | GET for UCCX 12.0, POST for other versions |
| SLA_FORMULA | 1 = (Calls answered / Calls presented) * 100; 2 = (Calls answered met SL / Calls presented) * 100; 3 = (Calls answered met SL / (Total calls - Calls abandoned met SL)) * 100; 4 = ((Calls answered met SL + Calls abandoned met SL) / Calls presented) * 100 |
| CCX_SSL | false if CCX APIs are available on HTTP |
| CCX_API_TIMEOUT | 9000. This value can be increased if CCX API responses are very slow. |

The following variables are used when CC_TYPE = "UCCE":

| Name | Description |
|------|-------------|
| CCE_DB_URL | UCCE awdb database URL, e.g., jdbc:jtds:sqlserver://192.168.1.87:1433/ucce_awdb |
| CCE_DB_USER | CCE database user |
| CCE_DB_PASSWORD | CCE database password |
| SYNC_AGENTS_EMAIL_STATS | false |
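To make the SLA_FORMULA options concrete, here is a worked example of option 2 with illustrative numbers (integer arithmetic for simplicity; the actual SLA_decimal setting above controls decimal display):

```shell
# SLA_FORMULA option 2: (Calls answered met SL / Calls presented) * 100
# Sample numbers: 100 calls presented, 70 answered within the service level.
answered_met_sl=70
presented=100
sla=$(( answered_met_sl * 100 / presented ))
echo "SLA = ${sla}%"   # prints: SLA = 70%
```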
- The following configuration must be done on UCCX. It allows the UCCX system to write real-time data to the tables that Synchronizer queries for the gadgets.
  - Go to Tools > "Real Time Snapshot Config" in the UCCX Administration UI.
  - Enable all three checkboxes.
  - Select "5" from the dropdown against "Data Writing Interval".
  - In the field against "Server Name" under "Wallboard System", provide the IP address of the machine where Synchronizer will run (comma-separated IPs in case of HA, otherwise only one IP).
  - Click the Update button.
- Get domain/CA-signed SSL certificates for the wallboard FQDN/CN and place the files in the /root/wallboard/docker/certificates folder. The file names should be server.crt and server.key.
- Update the translation files inside the /root/wallboard/docker/translations folder for a multilingual UI. The file names in the translations folder must remain unchanged.
- With the environment configuration done, copy the wallboard directory to the /root directory of the second machine using the following command:

# scp -r /root/wallboard root@<machine-ip>:/root/
- Having copied the wallboard directory to the second machine, go to /root/wallboard/docker/environment_variables and update the INSTANCE_NAME variable in synchronizer-variables.env. Set it to any value other than the one set on the first machine. This differentiates the two VMs, as only one synchronizer service is active at a time; if one goes down, the other starts synchronizing data from UCCE/UCCX.
Execute the following commands inside /root/wallboard directory on both machines.
# chmod 755 install.sh
# ./install.sh
Run the following command to ensure that all the components are up and running. The screenshot below shows a sample response for a standalone non-HA deployment.
# docker ps
Once the containers are up and running, run the following script in the wallboard database:

ALTER TABLE queue ADD skill_group_id int;
ALTER TABLE queue ADD agent_group_id int;
If queue and agent attributes are not loading while creating gadgets, restart the NGINX (docker_proxy_1) container using the following command:

# docker restart <container_id>
Virtual IP configuration
Repeat the following steps for all the machines in the HA cluster.
- Download the keepalived.sh script and place it in the /root directory. Give it execute permission and execute it:

# chmod +x keepalived.sh
# ./keepalived.sh
- Configure the keep.env file inside the /root/keep-alived folder.

| Name | Description |
|------|-------------|
| KEEPALIVED_UNICAST_PEERS | IPs of the machines in the cluster. On each machine, this variable should hold the IP of the other machine in the HA pair. |
| KEEPALIVED_VIRTUAL_IPS | Virtual IP of the cluster. It should be available in the LAN. For example: 192.168.1.245 |
| KEEPALIVED_PRIORITY | Priority of the node. The instance with the lower number will have the higher priority. It can take any value from 1-255. |
| KEEPALIVED_INTERFACE | Name of the network interface with which your machine is connected to the network. On CentOS, `ifconfig` or `ip addr sh` will show all the network interfaces and assigned addresses. |
| CLEARANCE_TIMEOUT | Corresponds to the initial startup time of the monitored application, in seconds. A nominal value of 60-120 is good enough. |
| KEEPALIVED_ROUTER_ID | Do not change this value. |
| SCRIPT_VAR | This script is polled every 2 seconds; keepalived relinquishes control if the shell script returns a non-zero response. For wallboard, it should be: `pidof dockerd && wget -O index.html https://localhost:443` |
Give the execute permission and execute the script:
# chmod +x keep-command.sh
# ./keep-command.sh
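The SCRIPT_VAR health check above relies on shell `&&` chaining: the compound command returns non-zero as soon as either part (the Docker daemon check or the HTTPS fetch) fails, which is what signals keepalived to relinquish the virtual IP. Illustrated with stand-in commands:

```shell
# `a && b` exits 0 only if both a and b succeed; any failure yields a
# non-zero status, which keepalived treats as "unhealthy, fail over".
if true && true; then
    echo "both checks pass: node keeps the VIP"
fi
if ! { true && false; }; then
    echo "a check fails: keepalived fails over"
fi
```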
Adding License
- Browse to https://FQDN/umm in your browser (FQDN will be the domain name assigned to the IP/VIP).
- Click the red warning icon on the right, paste the license into the field, and click Save.
- Go to https://FQDN to access wallboard front-end. (FQDN will be the domain name assigned to the VIP).
- Go to Data Service in the left-side menu to define data services. There will be two data services, named Agent Service and Queue Service, with the following URLs: https://FQDN/agent-service and https://FQDN/queue-service
Uninstalling
To uninstall the application completely, execute the following commands.
Troubleshooting
To update any changes in the environment variables and/or compose file, simply execute the install script again:
/root/wallboard/install.sh
To restart the whole stack completely, execute the following commands:
# docker-compose -f /root/wallboard/docker/docker-compose.yml down
# /root/wallboard/install.sh