Solution Prerequisites
The following are the solution setup prerequisites.
Hardware Requirements
For HA deployment, each machine in the cluster should have the following hardware specifications.
|  | Minimum requirement |
|---|---|
| CPU | 2 vCPU cores (4 cores for non-HA deployment) |
| RAM | 4 GB (8 GB for non-HA deployment) |
| Disk | 100 GB mounted on / |
| NICs | 1 NIC |
Software Requirements
Installation Steps
Internet access must be available on the machine where the application is being installed, and the network firewall must allow connections on port 9242, in order to carry out the installation steps. The leading # shown with each command denotes a root shell prompt and is not part of the command.
Allow ports in the firewall
For internal communication of the Docker swarm, you'll need to allow traffic (both inbound and outbound) on the following ports: 7575/tcp, 7676/tcp, 8090/tcp, 8091/tcp, and 7077/tcp.
To start the firewall on CentOS (if it isn't started already), execute the following commands on all the cluster machines:

```
# systemctl enable firewalld
# systemctl start firewalld
```
To allow the ports on the CentOS firewall, execute the following commands on all the cluster machines:

```
# firewall-cmd --add-port=7575/tcp --permanent
# firewall-cmd --add-port=7676/tcp --permanent
# firewall-cmd --add-port=8090/tcp --permanent
# firewall-cmd --add-port=8091/tcp --permanent
# firewall-cmd --add-port=7077/tcp --permanent
# firewall-cmd --reload
```
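The same ports can be opened in a loop instead of one command per port; a minimal sketch, assuming firewalld is the active firewall (the helper name `open_ecm_ports` is illustrative, not part of the product):

```shell
# Ports required by this guide for Docker swarm / ECM internal communication
ECM_PORTS="7575 7676 8090 8091 7077"

# Illustrative helper: opens every port persistently, then reloads firewalld.
# Requires root and a running firewalld; it is defined here, not executed.
open_ecm_ports() {
  for p in $ECM_PORTS; do
    firewall-cmd --add-port="${p}/tcp" --permanent || return 1
  done
  firewall-cmd --reload
}
```

After running it (or the manual commands above), `firewall-cmd --list-ports` should list all five ports.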
Configure Log Rotation
Add the following lines to the /etc/docker/daemon.json file (create the file if it does not exist already) and restart the Docker daemon using systemctl restart docker. Perform this step on all the machines in the cluster in case of HA deployment.

```
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "50m",
    "max-file": "3"
  }
}
```
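The same JSON can be staged and reviewed from the shell before copying it into place; a sketch (the target path and restart command are as stated above):

```shell
# Write the log-rotation config to a temp file first; review it, then copy it
# to /etc/docker/daemon.json as root and run `systemctl restart docker`.
tmpfile=$(mktemp)
cat > "$tmpfile" <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "50m",
    "max-file": "3"
  }
}
EOF
```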
Creating Databases
Create two databases, one for UMM and one for ECM, in the MSSQL/MySQL server with suitable names, then follow the application installation steps.
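As a sketch, the two databases can be created from the shell; the names ECM_DEV and UMM_DEV are only examples, not required values:

```shell
# Hypothetical database names; pick names that suit your environment.
ECM_DB="ECM_DEV"
UMM_DB="UMM_DEV"
SQL="CREATE DATABASE ${ECM_DB}; CREATE DATABASE ${UMM_DB};"

# For MySQL:  mysql -u root -p -e "$SQL"
# For MSSQL:  sqlcmd -S <server> -U sa -Q "$SQL"
```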
Installing Application
- Download the deployment script ecm-deployment.sh and place it in any directory. This script will:
  - delete the ecm-deployment directory in the present working directory if it exists.
  - clone the ecm-deployment repository from GitLab into the present working directory.

To execute the script, give it execute permissions and run it (on RHEL, run the commands with sudo):

```
# chmod +x ecm-deployment.sh
# ./ecm-deployment.sh
```
Update the environment variables in the environment-variables.env file inside the ecm-deployment/docker/environment_variables folder.
Do not change the default values for non-HA deployment. For HA, use SQL server cluster settings instead of the defaults.

| Name | Description |
|---|---|
| DB_URL | Database connection URL. For example: `jdbc:jtds:sqlserver://<MACHINE-IP or FQDN>:PORT/db_name`, `jdbc:jtds:sqlserver://<MACHINE-IP or FQDN>:PORT/db_name;instanceName=SomeInstance`, or `jdbc:jtds:sqlserver://192.168.1.92:1433/ECM_DEV` |
| DB_USER | Database user |
| DB_PASS | Database password |
| DB_DRIVER | JDBC driver, e.g., `net.sourceforge.jtds.jdbc.Driver` |
| DB_DIALECT | Database dialect, e.g., `org.hibernate.dialect.SQLServer2008Dialect` |

Change the following variables for ECM services.

| Name | Description |
|---|---|
| DATABASE_ENGINE | sqlServer |
| UCCX_HRDB_PASSWORD | UCCX reporting user database password; only used for UCCX deployments |
| SYNC_ENABLED | YES |
| SYNC_INTERVAL | Interval in minutes between every sync job; default is 5 |
| FEED_INTERVAL | Interval in minutes between every sync job; default is 1 |
| FETCH_DELAY | 2; do not change |
| OUTDATED_INTERVAL | 24; this is the number of hours after which a contact is closed in ECM with the Dangling status if it is not synced |
| REDUNDANT_DEPLOYMENT | Decides whether the synchronizer deployment is redundant. Holds the strings "true" or "false"; default is "false". Set it to "true". |
| INSTANCE_NAME | Used to differentiate instances when deployed redundantly; could be any string, but it should be different on the two machines |
| UCCE_SHARED_PATH_DOMAIN | Dialer import folder machine domain |
| UCCE_SHARED_PATH_USER | Dialer import folder machine username |
| UCCE_SHARED_PATH_PASSWORD | Dialer import folder machine password |
| UCCE_SHARED_PATH_IP | Dialer import folder machine IP |

Change the following variables as per your environment for UMM.

| Name | Description |
|---|---|
| PRIM_FINESSE_IP | Primary Finesse URL, including the port (if not 80 or 443) |
| SEC_FINESSE_IP | Secondary Finesse URL, including the port (if not 80 or 443) |
| FINESSE_USER | Finesse administrator user |
| FINESSE_PASS | Finesse administrator password |
| UMM_DB_URL | UMM database URL |
| UMM_DB_DRIVER | UMM database driver |
| UMM_DB_DIALECT | UMM database dialect |
| UMM_DB_PASS | UMM database password |
| UMM_DB_USER | UMM database username |
| ADMIN_PASS | The password of the admin user |
| SSO_ENABLED | Should be set to 'false' by default; no need to change this |
| SSL_TRUST_STORE_PATH | Path of the SSL truststore, including the file name. This truststore should include the UCCX SSL certificates if the Finesse APIs need to be accessed via HTTPS. |
| SSL_TRUST_STORE_PASSWORD | Truststore password |

Change the following variables for the Frontend (ECM and UMM).

| Name | Description |
|---|---|
| SERVER_URL | The URL of UMM: http://<machine_IP>:<PORT>. In the case of HA, machine_IP should be the virtual IP of the HA cluster. |
| CISCO_TYPE | UCCE/UCCX (capital case, without quotes) |
| SOO_AUTO_LOGIN | Should be set to 'false' by default; no need to change this |
| SUP_VERSION | No need to change its default value |
| PCS_URL | http://<MACHINE-IP or FQDN>:PORT |
| PCS_USERNAME | Database username |
| PCS_PASSWORD | Database password |
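Pulling the pieces together, a filled-in excerpt of environment-variables.env might look like the following. Every value here is illustrative, not a default; replace each with your environment's settings:

```shell
# Illustrative excerpt of environment-variables.env; all values are examples.
DB_URL="jdbc:jtds:sqlserver://192.168.1.92:1433/ECM_DEV"
DB_USER="ecm_user"
DB_PASS="changeme"
DB_DRIVER="net.sourceforge.jtds.jdbc.Driver"
DB_DIALECT="org.hibernate.dialect.SQLServer2008Dialect"
DATABASE_ENGINE="sqlServer"
SYNC_ENABLED="YES"
REDUNDANT_DEPLOYMENT="true"
INSTANCE_NAME="ecm-node-1"
```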
- Get domain/CA signed SSL certificates for the ECM FQDN/CN and place the files in the ecm-deployment/docker/certificates folder. The file names should be server.crt and server.key.
- For HA, copy the ecm-deployment directory to the second machine by executing the command below:

```
# scp -r ecm-deployment root@machine-ip:~/
```
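Before copying the directory, it is worth confirming that server.crt and server.key actually belong together; a sketch using openssl, assuming an RSA key pair (the helper names are illustrative):

```shell
# Compare the public-modulus digest of the certificate and the private key;
# the two printed MD5 digests must be identical. Run inside the
# ecm-deployment/docker/certificates folder.
cert_digest() { openssl x509 -noout -modulus -in server.crt | openssl md5; }
key_digest()  { openssl rsa  -noout -modulus -in server.key | openssl md5; }
```

If the two digests differ, the certificate was not issued for this key.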
- Go to the second machine and update the environment variables where necessary, such as INSTANCE_NAME and SERVER_URL.
Execute the following commands inside the ecm-deployment directory on both machines.
```
# chmod 755 install.sh
# ./install.sh
```
Run the following command to ensure that all the components are up and running. The screenshot below shows a sample response for a standalone non-HA deployment.
```
# docker ps
```
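For a quicker health read-out than the full `docker ps` table, the output can be narrowed to names and statuses; a sketch (the container names themselves vary per release, so none are assumed here):

```shell
# Print one "name: status" line per running container; every ECM component
# should report an "Up ..." status. Defined here, not executed.
container_status() {
  docker ps --format '{{.Names}}: {{.Status}}'
}
```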
Virtual IP configuration
Repeat the following steps for all the machines in the HA cluster.
- Download the keepalived.sh script and place it in any directory.
- Give it execute permission and execute it. This will create a keep-alived directory.

```
# chmod +x keepalived.sh
# ./keepalived.sh
```
Configure the keep.env file inside the keep-alived directory.

| Name | Description |
|---|---|
| KEEPALIVED_UNICAST_PEERS | IPs of the machines in the cluster. On each machine, this variable should hold the list of IPs of all the other machines in the cluster, in the following format: 192.168.1.80 |
| KEEPALIVED_VIRTUAL_IPS | Virtual IP of the cluster. It should be available in the LAN. For example: 192.168.1.245 |
| KEEPALIVED_PRIORITY | Priority of the node. The instance with the lower number will have the higher priority. It can take any value from 1-255. |
| KEEPALIVED_INTERFACE | Name of the network interface with which your machine is connected to the network. On CentOS, `ifconfig` or `ip addr sh` will show all the network interfaces and assigned addresses. |
| CLEARANCE_TIMEOUT | Corresponds to the initial startup time of the application in seconds, which is monitored by keepalived. A nominal value of 60-120 is good enough. |
| KEEPALIVED_ROUTER_ID | Do not change this value. |
| SCRIPT_VAR | A shell command that is polled every 2 seconds; keepalived relinquishes control if it returns a non-zero exit status. It could poll either the UMM or the ECM backend API, e.g., `pidof dockerd && wget -O index.html http://localhost:7575/` |
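The SCRIPT_VAR example above amounts to the following health probe; a sketch, with the port taken from this guide's UMM endpoint and the function name purely illustrative:

```shell
# keepalived health probe: healthy only if the Docker daemon is running AND
# the application answers on port 7575. A non-zero exit status causes
# keepalived to release the virtual IP to the peer node.
ecm_health() {
  pidof dockerd >/dev/null && wget -q -O /dev/null http://localhost:7575/
}
```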
Give the keep-command.sh script execute permission and execute it:

```
# chmod +x keep-command.sh
# ./keep-command.sh
```
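Once keep-command.sh is running on all nodes, only the active node should hold the virtual IP; a sketch for checking, using the example VIP from the keep.env table (substitute your own KEEPALIVED_VIRTUAL_IPS value):

```shell
# Example VIP from this guide; replace with your KEEPALIVED_VIRTUAL_IPS value.
VIP="192.168.1.245"

# Succeeds only on the node currently holding the virtual IP.
holds_vip() { ip addr show | grep -qF "$VIP"; }
```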
Adding License and Application Settings
- Browse to http://<MACHINE_IP or FQDN>:<UMM_Port>/umm in your browser (the FQDN will be the domain name assigned to the IP/VIP).
- Click on the red warning icon on the right, paste the license in the field, and click Save.
- Once the license is added, go to http://<MACHINE_IP or FQDN>/#/applicationSetting and define the application settings.
After the application settings are defined, restart the services using the commands below inside the ecm-deployment directory:

```
docker-compose -f docker/docker-compose.yml down
docker-compose -f docker/docker-compose.yml up -d
```