
Deployment Guide

Solution Prerequisites

The following are the solution setup prerequisites.

Hardware requirements

For an HA deployment, each machine in the cluster should have the following minimum hardware specifications:

CPU: 2 cores vCPU (4 cores for non-HA deployment)
RAM: 4 GB (8 GB for non-HA deployment)
Disk: 100 GB mounted on /
NICs: 1 NIC
Software Requirements

Installation Steps

Internet access must be available on the machine where the application is being installed, and connections on port 9242 must be allowed in the network firewall to carry out the installation steps. All commands shown with a leading # require root privileges; the # is the shell prompt and is not part of the command.
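As a quick pre-flight check, port reachability can be tested from the target machine with bash's built-in /dev/tcp redirection. This is a sketch only; the host you need to test against depends on your environment and is not specified in this guide:

```shell
#!/bin/bash
# Print "reachable" or "unreachable" for a host:port pair using bash's
# built-in /dev/tcp pseudo-device (no extra packages required).
check_port() {
    local host=$1 port=$2
    if timeout 5 bash -c "echo > /dev/tcp/${host}/${port}" 2>/dev/null; then
        echo "reachable"
    else
        echo "unreachable"
    fi
}

# Usage (replace the host with the endpoint your installation pulls from):
# check_port my.repo.host 9242
```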

Allow ports in the firewall

For the internal communication of Docker Swarm, allow communication (both inbound and outbound) on the following ports: 7575/tcp, 7676/tcp, 8090/tcp, 8091/tcp, and 7077/tcp.

To start the firewall on CentOS (if it is not already running), execute the following commands on all the cluster machines:

# systemctl enable firewalld
# systemctl start firewalld

To allow the ports in the CentOS firewall, execute the following commands on all the cluster machines:

# firewall-cmd --add-port=7575/tcp --permanent
# firewall-cmd --add-port=7676/tcp --permanent
# firewall-cmd --add-port=8090/tcp --permanent
# firewall-cmd --add-port=8091/tcp --permanent
# firewall-cmd --add-port=7077/tcp --permanent
# firewall-cmd --reload
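The same five ports can be opened with a loop, which is easier to keep in sync if the port list ever changes. A sketch, to be run as root on every cluster machine:

```shell
# Ports used for internal docker swarm / ECM communication (from the list above)
ECM_PORTS="7575 7676 8090 8091 7077"

open_ecm_ports() {
    local p
    for p in $ECM_PORTS; do
        # --permanent persists the rule across firewalld restarts
        firewall-cmd --add-port="${p}/tcp" --permanent || return 1
    done
    firewall-cmd --reload
}

# Usage (as root): open_ecm_ports
# Verify afterwards with: firewall-cmd --list-ports
```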

Configure Log Rotation

Add the following lines to the /etc/docker/daemon.json file (create the file if it does not already exist) and restart the Docker daemon using systemctl restart docker. In the case of an HA deployment, perform this step on all the machines in the cluster.

{   
    "log-driver": "json-file", 
    "log-opts": {
        "max-size": "50m",
        "max-file": "3"
    } 
}
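A malformed daemon.json prevents the Docker daemon from starting, so it is worth validating the file before restarting. A minimal sketch using python3's standard json.tool module (assuming python3 is present on the machine):

```shell
# Validate that a file contains well-formed JSON; exit 0 if valid, 1 if not.
validate_json() {
    python3 -m json.tool "$1" >/dev/null 2>&1
}

# Usage (as root):
# validate_json /etc/docker/daemon.json && systemctl restart docker \
#     || echo "daemon.json is not valid JSON -- fix it before restarting"
```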


Creating Databases

Create two databases, one for UMM and one for ECM, in the MSSQL/MySQL server with suitable names, then follow the application installation steps.
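As a hypothetical illustration, the two databases could be created from the command line as follows. The host, credentials, and the names UMM and ECM are all placeholders; the names must match whatever you later put in the connection URLs:

```shell
# MSSQL (placeholders in angle brackets -- substitute real values):
sqlcmd -S <DB_HOST> -U sa -P '<SA_PASSWORD>' -Q "CREATE DATABASE UMM; CREATE DATABASE ECM;"

# MySQL:
mysql -h <DB_HOST> -u root -p -e "CREATE DATABASE UMM; CREATE DATABASE ECM;"
```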

Installing Application

  1. Download the deployment script ecm-deployment.sh and place it in the /root directory. This script will:
    1. delete the ecm-deployment directory in the present working directory if it exists.
    2. clone the ecm-deployment repository from gitlab in the present working directory.
  2. To execute the script, give it the execute permissions and execute it. 

    # chmod +x ecm-deployment.sh
    # ./ecm-deployment.sh
  3. Update the environment variables in the following files inside the /root/ecm-deployment/docker/environment_variables folder.

    1. environment-variables.env

      Note: Do not change the default values for a non-HA deployment. For HA, use the SQL Server cluster settings instead of the defaults.

      DB_URL: Database connection URL
      DB_USER: Database user
      DB_PASS: Database password
      DB_DRIVER: JDBC driver, e.g., net.sourceforge.jtds.jdbc.Driver
      DB_DIALECT: Database dialect, e.g., org.hibernate.dialect.SQLServer2008Dialect

      Change the following variables for the ECM services:

      DATABASE_ENGINE: sqlServer
      UCCX_HRDB_PASSWORD: UCCX reporting user database password; only used for UCCX deployments
      SYNC_ENABLED: YES
      SYNC_SCHEDULE: 0 0/3 * 1/1 * ? * (cron expression for the sync process scheduler)
      FEED_SCHEDULE: 0 0/1 * 1/1 * ? * (cron expression for the feed process scheduler)
      FETCH_DELAY: 2; do not change
      OUTDATED_INTERVAL: 24; the number of hours after which a contact that has not been synced is closed in ECM with the Dangling status
      PCS_URL: http://<MACHINE-IP or FQDN>:PORT
      PCS_USERNAME: Database username
      PCS_PASSWORD: Database password
      REDUNDANT_DEPLOYMENT: Decides whether the synchronizer deployment is redundant. Holds the strings "true" or "false"; the default is "false". Set it to "true" for HA.
      INSTANCE_NAME: Used to differentiate instances in a redundant deployment; can be any string, but must be different on the two machines
      UCCE_SHARED_PATH_DOMAIN: Dialer import folder machine domain
      UCCE_SHARED_PATH_USER: Dialer import folder machine username
      UCCE_SHARED_PATH_PASSWORD: Dialer import folder machine password
      UCCE_SHARED_PATH_IP: Dialer import folder machine IP

      Change the following variables as per your environment for UMM:

      PRIM_FINESSE_IP: Primary Finesse URL, including the port if it is not 80 or 443
      SEC_FINESSE_IP: Secondary Finesse URL, including the port if it is not 80 or 443
      FINESSE_USER: Finesse administrator user
      FINESSE_PASS: Finesse administrator password
      UMM_DB_URL: UMM database connection URL
      UMM_DB_DRIVER: UMM database driver
      UMM_DB_DIALECT: UMM database dialect
      UMM_DB_PASS: UMM database password
      UMM_DB_USER: UMM database username
      ADMIN_PASS: Password of the admin user

      Change the following variables for the frontend:

      GAT_URL: URL of UMM, in the form http://<machine_IP>:<PORT>/<microservice>, where microservice = umm. In the case of HA, machine_IP should be the virtual IP of the cluster.
      CISCO_TYPE: UCCE or UCCX (upper case)
  4. Get domain/CA-signed SSL certificates for the ECM FQDN/CN and place the files in the /root/ecm-deployment/docker/certificates folder. The file names should be server.crt and server.key.
  5. Copy the ecm-deployment directory to the second machine for HA by executing the command below.

    # scp -r ecm-deployment root@machine-ip:~/
  6. Go to the second machine and update the environment variables where necessary, such as INSTANCE_NAME and GAT_URL.
  7. Execute the following commands inside /root/ecm-deployment directory on both machines.

    # chmod 755 install.sh
    # ./install.sh
  8. Run the following command to ensure that all the components are up and running. The screenshot below shows a sample response for a standalone non-HA deployment. 

    # docker ps
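As an illustration of step 3, a filled-in environment-variables.env for a standalone UCCX deployment might look like the following sketch. Every value here is an example, not a default, and must be replaced with the settings of your environment:

```shell
# Hypothetical environment-variables.env fragment -- example values only
DB_URL=jdbc:jtds:sqlserver://192.168.1.50:1433/ECM
DB_USER=ecm_user
DB_PASS=changeme
DB_DRIVER=net.sourceforge.jtds.jdbc.Driver
DB_DIALECT=org.hibernate.dialect.SQLServer2008Dialect
DATABASE_ENGINE=sqlServer
SYNC_ENABLED=YES
CISCO_TYPE=UCCX
GAT_URL=http://192.168.1.50/umm
```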


Virtual IP configuration

Repeat the following steps for all the machines in the HA cluster.

  1. Download keepalived.sh script and place it in /root directory.
  2. Give execute permission and execute the script: 

    # chmod +x keepalived.sh
    # ./keepalived.sh

  3. Configure keep.env file inside /root/keep-alived folder

    Name

    Description

    KEEPALIVED_UNICAST_PEERS: IPs of the machines in the cluster. On each machine, this variable should hold the list of IPs of all the other machines in the cluster, for example:

    192.168.1.80

    KEEPALIVED_VIRTUAL_IPS: Virtual IP of the cluster. It should be available in the LAN. For example: 192.168.1.245
    KEEPALIVED_PRIORITY: Priority of the node. The instance with the higher number has the higher priority. It can take any value from 1 to 255.
    KEEPALIVED_INTERFACE: Name of the network interface with which your machine is connected to the network. On CentOS, ifconfig or ip addr sh will show all the network interfaces and their assigned addresses.
    CLEARANCE_TIMEOUT: Corresponds to the initial startup time of the application in seconds, which is monitored by keepalived. A nominal value of 60-120 is good enough.
    KEEPALIVED_ROUTER_ID: Do not change this value.
    SCRIPT_VAR: Health-check script that keepalived polls every 2 seconds; keepalived relinquishes control if the script returns a non-zero exit code. It can check either the UMM or the ECM backend API, for example:

    pidof dockerd && wget -O index.html http://localhost:7575/

  4. Give the execute permission and execute the script: 

    # chmod +x keep-command.sh
    # ./keep-command.sh
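Putting the table above together, a keep.env for the first node of a two-node cluster might look like the following sketch. All addresses and values are examples, not defaults:

```shell
# Hypothetical keep.env for node 1 of a two-node cluster
KEEPALIVED_UNICAST_PEERS=192.168.1.81    # IP(s) of the other machine(s)
KEEPALIVED_VIRTUAL_IPS=192.168.1.245     # free IP in the LAN, same on both nodes
KEEPALIVED_PRIORITY=200                  # use a different value on node 2
KEEPALIVED_INTERFACE=eth0                # check with: ip addr sh
CLEARANCE_TIMEOUT=60
SCRIPT_VAR="pidof dockerd && wget -O index.html http://localhost:7575/"
```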


Adding License and Application Settings

  1. Browse to http://<MACHINE_IP or FQDN>/umm in your browser (FQDN will be the domain name assigned to the IP/VIP). 
  2. Click the red warning icon on the right, paste the license into the field, and click Save.




  3. Once the license is added, go to http://<MACHINE_IP or FQDN>/#/applicationSetting and define the application settings.
  4. After the application settings are defined, restart the services using the following commands inside the /root/ecm-deployment/docker directory.

    # docker-compose down
    # docker-compose up -d