Installation Files Location
Place all build files provided with the release in /root on every machine where the application is being deployed.
System Access Requirements
- Administrative privileges (root) are required to follow the deployment steps.
Installation
Loading all images
Load the images into the Docker engine on all the machines by running the following commands (execute each command separately; the command starts after the $ sign):
$ docker load < ecm.tar
$ docker load < umm.tar
$ docker load < ecm-services.tar
$ docker load < ecm-frontend.tar
Configuration
Application Configurations
- Create new databases in the MSSQL server (one for umm and one for ecm, with any names) and edit the corresponding settings in the “docker-compose.yml” file. Details are given in the comments.
- Update the DB name, the required URLs, and the Cisco-related configuration (IP, username, and password) in the “ecm_variables.env” file. DB configuration for umm can be updated directly in “docker-compose.yml”.
HTTPS Configuration
Replace the content of the “https” folder with any HTTPS certificate and key. The certificate and the key must be named “server.crt” and “server.key” respectively.
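For a test environment that has no CA-signed certificate yet, a self-signed pair can be generated with openssl. This is a sketch only (the CN ecm.example.com is a placeholder); production deployments should use a proper CA-signed certificate:

```shell
# Generate a throwaway self-signed certificate/key pair, named exactly as
# the application expects. For testing only; use a CA-signed pair in production.
mkdir -p https
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout https/server.key -out https/server.crt \
  -days 365 -subj "/CN=ecm.example.com"
```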
Execution
- Start the compose service on all machines (in an HA deployment there will be multiple machines) using the following command:
$ docker-compose up -d
For fetching UCCX campaigns on the interface
Once the application settings have been defined from the web interface, remove the ecm backend container using the command $ docker container rm -f <container_id>. Once the container is removed, run $ docker-compose up -d again, then go to the Campaigns menu and verify that the Cisco campaigns appear in the dropdown menu.
Log Rotation
...
Solution Prerequisites
The following are the solution setup prerequisites.
Hardware requirements
For HA deployment, each machine in the cluster should have the following hardware specifications.
| | Minimum requirement |
|---|---|
| CPU | 2 cores vCPU (4 cores for non-HA deployment) |
| RAM | 4 GB (8 GB for non-HA deployment) |
| Disk | 100 GB mounted on / |
| NICs | 1 NIC |
Software Requirements
Installation Steps
Internet access should be available on the machine where the application is being installed, and connections on port 9242 should be allowed in the network firewall to carry out the installation steps. All the commands start with a #, indicating that root privileges are required to execute them. The leading # is not part of the command.
Allow ports in the firewall
For internal communication of Docker Swarm, you'll need to allow communication (both inbound and outbound) on the following ports: 7575/tcp, 7676/tcp, 8090/tcp, 8091/tcp, and 7077/tcp.
To start the firewall on CentOS (if it isn't started already), execute the following commands. You'll have to execute these commands on all the cluster machines:
# systemctl enable firewalld
# systemctl start firewalld
To allow the ports on the CentOS firewall, execute the following commands. You'll have to execute these commands on all the cluster machines:
# firewall-cmd --add-port=7575/tcp --permanent
# firewall-cmd --add-port=7676/tcp --permanent
# firewall-cmd --add-port=8090/tcp --permanent
# firewall-cmd --add-port=8091/tcp --permanent
# firewall-cmd --add-port=7077/tcp --permanent
# firewall-cmd --reload
Configure Log Rotation
Add the following lines to the /etc/docker/daemon.json file (create the file if it does not exist already) and restart the Docker daemon using systemctl restart docker.
Perform this step on all the machines in the cluster in case of HA deployment.
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "20m",
    "max-file": "3"
  }
}
Here "max-size" is the maximum size of each log file and "max-file" is the maximum number of log files kept. Note that daemon.json must be valid JSON, so comments are not allowed in the file.
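Because dockerd refuses to start if daemon.json contains a syntax error, it can be worth validating the file before restarting the daemon. This sketch writes the settings to a temporary file and runs them through a JSON parser (python3 is assumed to be available):

```shell
# Write the log-rotation settings to a temp file and validate the syntax.
# dockerd will not start if /etc/docker/daemon.json is not valid JSON.
cat > /tmp/daemon.json <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "20m",
    "max-file": "3"
  }
}
EOF
# Exits non-zero (and prints an error) if the file is not valid JSON.
python3 -m json.tool /tmp/daemon.json
```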
Troubleshooting
...
Creating Databases
Create two databases (one for UMM and one for ECM) in the MSSQL/MySQL server with suitable names, then follow the application installation steps.
Installing Application
- Download the deployment script ecm-deployment.sh and place it in the /root directory. This script will:
  - delete the ecm-deployment directory in the present working directory if it exists.
  - clone the ecm-deployment repository from GitLab in the present working directory.
- To execute the script, give it execute permissions and run it:
# chmod +x ecm-deployment.sh
# ./ecm-deployment.sh
Update the environment variables in the following files inside the /root/ecm-deployment/docker/environment_variables folder.

environment-variables.env

Do not change the default values for a non-HA deployment. For HA, use SQL Server cluster settings instead of the defaults.

| Name | Description |
|---|---|
| DB_URL | Database connection URL. For example: jdbc:jtds:sqlserver://<MACHINE-IP or FQDN>:PORT/db_name, jdbc:jtds:sqlserver://<MACHINE-IP or FQDN>:PORT/db_name;instanceName=SomeInstance, or jdbc:jtds:sqlserver://192.168.1.92:1433/ECM_DEV |
| DB_USER | Database user |
| DB_PASS | Database password |
| DB_DRIVER | JDBC driver, e.g. net.sourceforge.jtds.jdbc.Driver |
| DB_DIALECT | Database dialect, e.g. org.hibernate.dialect.SQLServer2008Dialect |

Change the following variables for ECM services:

| Name | Description |
|---|---|
| DATABASE_ENGINE | sqlServer |
| UCCX_HRDB_PASSWORD | UCCX reporting user database password; only used for UCCX deployments |
| SYNC_ENABLED | YES |
| SYNC_SCHEDULE | 0 0/3 * 1/1 * ? * (cron expression for the sync process scheduler) |
| FEED_SCHEDULE | 0 0/1 * 1/1 * ? * (cron expression for the feed process scheduler) |
| FETCH_DELAY | 2; do not change |
| OUTDATED_INTERVAL | 24; the number of hours after which a contact is closed in ECM with Dangling status if it is not synced |
| PCS_URL | http://<MACHINE-IP or FQDN>:PORT |
| PCS_USERNAME | Database username |
| PCS_PASSWORD | Database password |
| REDUNDANT_DEPLOYMENT | Decides whether the synchronizer deployment is redundant. Holds the strings "true" or "false"; the default is "false". Set it to "true". |
| INSTANCE_NAME | Used to differentiate instances when deployed redundantly. It can be any string, but must be different on the two machines. |
| UCCE_SHARED_PATH_DOMAIN | Dialer import folder machine domain |
| UCCE_SHARED_PATH_USER | Dialer import folder machine username |
| UCCE_SHARED_PATH_PASSWORD | Dialer import folder machine password |
| UCCE_SHARED_PATH_IP | Dialer import folder machine IP |

Change the following variables as per your environment for UMM:

| Name | Description |
|---|---|
| PRIM_FINESSE_IP | Primary Finesse URL, including the port (if not 80 or 443) |
| SEC_FINESSE_IP | Secondary Finesse URL, including the port (if not 80 or 443) |
| FINESSE_USER | Finesse administrator user |
| FINESSE_PASS | Finesse administrator password |
| UMM_DB_URL | UMM database URL |
| UMM_DB_DRIVER | UMM database driver |
| UMM_DB_DIALECT | UMM database dialect |
| UMM_DB_PASS | UMM database password |
| UMM_DB_USER | UMM database username |
| ADMIN_PASS | The password of the admin user |

Change the following variables for the frontend:

| Name | Description |
|---|---|
| GAT_URL | URL of UMM: http://<machine_IP>:<PORT>/<microservice> (microservice = umm). In the case of HA, machine_IP should be the virtual IP of the cluster. |
| CISCO_TYPE | UCCE/UCCX (capital case) |
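As an illustration, a minimal environment-variables.env for a non-HA lab setup might look like the excerpt below. Every value is a placeholder built from the examples above, not a recommended default:

```shell
# Hypothetical excerpt of environment-variables.env; all values are placeholders.
DB_URL="jdbc:jtds:sqlserver://192.168.1.92:1433/ECM_DEV"
DB_USER="ecm_user"
DB_PASS="changeme"
DB_DRIVER="net.sourceforge.jtds.jdbc.Driver"
DB_DIALECT="org.hibernate.dialect.SQLServer2008Dialect"
DATABASE_ENGINE="sqlServer"
SYNC_ENABLED="YES"
SYNC_SCHEDULE="0 0/3 * 1/1 * ? *"
FEED_SCHEDULE="0 0/1 * 1/1 * ? *"
FETCH_DELAY="2"
OUTDATED_INTERVAL="24"
REDUNDANT_DEPLOYMENT="false"
```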
- Get domain/CA-signed SSL certificates for the ecm FQDN/CN and place the files in the /root/ecm-deployment/docker/certificates folder. The file names should be server.crt and server.key.
- For HA, copy the ecm-deployment directory to the second machine by executing the command below:
# scp -r ecm-deployment root@machine-ip:~/
- Go to the second machine and update the environment variables where necessary, such as INSTANCE_NAME and GAT_URL.
- Execute the following commands inside the /root/ecm-deployment directory on both machines:
# chmod 755 install.sh
# ./install.sh
Run the following command to ensure that all the components are up and running. The screenshot below shows a sample response for a standalone non-HA deployment.
# docker ps
Virtual IP configuration
Repeat the following steps for all the machines in the HA cluster.
- Download the keepalived.sh script and place it in the /root directory. Give it execute permission and execute the script:
# chmod +x keepalived.sh
# ./keepalived.sh
- Configure the keep.env file inside the /root/keep-alived folder.

| Name | Description |
|---|---|
| KEEPALIVED_UNICAST_PEERS | IPs of the machines in the cluster. On each machine, this variable should hold the list of IPs of all the other machines in the cluster, for example: 192.168.1.80 |
| KEEPALIVED_VIRTUAL_IPS | Virtual IP of the cluster. It should be available in the LAN. For example: 192.168.1.245 |
| KEEPALIVED_PRIORITY | Priority of the node. The instance with the lower number will have a higher priority. It can take any value from 1-255. |
| KEEPALIVED_INTERFACE | Name of the network interface with which your machine is connected to the network. On CentOS, ifconfig or ip addr sh will show all the network interfaces and assigned addresses. |
| CLEARANCE_TIMEOUT | Corresponds to the initial startup time of the application in seconds, which is monitored by keepalived. A nominal value of 60-120 is good enough. |
| KEEPALIVED_ROUTER_ID | Do not change this value. |
| SCRIPT_VAR | The health-check script, polled every 2 seconds. keepalived relinquishes control if this shell script returns a non-zero response. It could check either umm or the ECM backend API, for example: pidof dockerd && wget -O index.html http://localhost:7575/ |
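A filled-in keep.env for the first node of a two-node cluster could look like the sketch below. The IPs, interface name, priority, and timeout are placeholders for your environment (KEEPALIVED_ROUTER_ID is omitted since it should keep its default):

```shell
# Illustrative keep.env; all values are environment-specific placeholders.
KEEPALIVED_UNICAST_PEERS="192.168.1.80"
KEEPALIVED_VIRTUAL_IPS="192.168.1.245"
KEEPALIVED_PRIORITY="100"
KEEPALIVED_INTERFACE="eth0"
CLEARANCE_TIMEOUT="60"
SCRIPT_VAR="pidof dockerd && wget -O index.html http://localhost:7575/"
```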
Give the execute permission and execute the script:
# chmod +x keep-command.sh
# ./keep-command.sh
Adding License and Application Settings
- Browse to http://<MACHINE_IP or FQDN>/umm in your browser (FQDN will be the domain name assigned to the IP/VIP).
- Click on the red warning icon on the right, paste the license in the field, and click Save.
- Once the license is added, go to http://<MACHINE_IP or FQDN>/#/applicationSetting and define the application settings.
- After the application settings are defined, restart the services by executing the following commands inside the /root/ecm-deployment/docker directory:
# docker-compose down
# docker-compose up -d