
HA deployment v13.0

Solution Prerequisites

Following are the solution setup prerequisites.

For HA deployment, we will use two VMs; each machine in the cluster should have the following hardware specifications. The two VMs will be referred to as VM1 and VM2 in this guide.


Hardware requirements

Component    Minimum requirement

CPU          4 cores on each VM
RAM          4 GB on each VM
Disk         300 GB on each VM
NICs         1 NIC per VM

Software requirements


Component         Minimum requirement

OS (2)            CentOS 7
MySQL (2)         5.5+
Docker CE         18+
Docker compose    1.21
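
To confirm that the installed versions meet these minimums, you can check each tool with its standard version flag:

# cat /etc/centos-release
# mysql --version
# docker --version
# docker-compose --version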



Pre-Installation Steps

Internet access must be available on the machine where the application is being installed, and connections on port 9242 must be allowed in the network firewall to carry out the installation steps. All the commands start with a #, indicating that root user privileges are required to execute them. The leading # is not part of the command.

Allow ports in the firewall

For the internal communication of docker swarm, you'll need to allow both inbound and outbound communication on the following ports: 2376/tcp, 2377/tcp, 7946/tcp, 7946/udp, 4789/udp, 80/tcp, and 443/tcp.

To start the firewall on CentOS (if it isn't started already), execute the following commands:  

# systemctl enable firewalld
# systemctl start firewalld

To allow the ports on the CentOS firewall, execute the following commands. You'll have to execute these commands on all the cluster machines.

# firewall-cmd --add-port=2376/tcp --permanent
# firewall-cmd --add-port=2377/tcp --permanent
# firewall-cmd --add-port=7946/tcp --permanent
# firewall-cmd --add-port=7946/udp --permanent
# firewall-cmd --add-port=4789/udp --permanent
# firewall-cmd --add-port=80/tcp --permanent
# firewall-cmd --add-port=443/tcp --permanent

# firewall-cmd --reload

On VM1 and VM2, execute the following additional commands:

# firewall-cmd --add-port=5060/tcp --permanent 
# firewall-cmd --add-port=16386-32768/udp --permanent 
# firewall-cmd --add-port=9092/tcp --permanent 
# firewall-cmd --reload
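
To verify that the ports have been opened, list the active firewall rules:

# firewall-cmd --list-ports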



Configure Log Rotation


Add the following lines to the /etc/docker/daemon.json file (create the file if it does not already exist) and restart the docker daemon using systemctl restart docker. Perform this step on all the machines in the cluster.

{  
    "log-driver": "json-file", 
    "log-opts": {
        "max-size": "50m",
        "max-file": "3"
    } 
}
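
After saving the file, restart docker and confirm that the active logging driver is reported as json-file:

# systemctl restart docker
# docker info --format '{{.LoggingDriver}}'
json-file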

Installation Steps

  1. Run the following command to log in to the Expertflow docker registry:

    docker login gitlab.expertflow.com:9242 --username deployment --password xWb8WafM8ZvdwBHNxLm3
  2. Download the deployment script deployment.sh and place it in the user's home directory or any other desired directory. This script will:
    1. delete the recording-solution directory if it exists.
  3. To execute the script, give it execute permissions and run it:

    $ chmod 755 deployment.sh
    $ ./deployment.sh
  4. Change to the newly created directory named recording-solution. This directory contains all the required files.

  5. Run the SQL script (recording-solution/db_schema.sql) in MySQL to create the database and tables.
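
    A typical invocation, assuming the MySQL root user and a locally reachable MySQL server (adjust the user, host, and credentials to your environment):

    # mysql -u root -p < recording-solution/db_schema.sql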
  6. Update the environment variables in the following files inside the /root/recording-solution/docker/environment_variables folder.

    1. general-environment.env

      Name                  Description
      DB_URL                VRS database connection URL (jdbc:mysql://192.168.1.225:3306/vrs)
      DB_USER               VRS database user
      DB_PASSWORD           VRS database password
      DB_DRIVER             JDBC driver (com.mysql.jdbc.Driver)
      DB_DIALECT            org.hibernate.dialect.MySQL5Dialect
      AMQ_PRIMARY           tcp://VM1-IP:61616
      AMQ_SECONDARY         tcp://VM2-IP:61616
      AMQ_TIMEOUT           3000
      AMQ_PRIORITY_BACKUP   true
      AMQ_RANDOMIZE         false
    2. recorder-environment.env

      Name               Description
      LOCAL_MACHINE_IP   Local VM IP
      ZOMBIE_TIMER       Time in minutes to clear dangling calls. Set it to 1
      CALL_TIMEOUT       Time in milliseconds to clear a call after the BYE invite is received. Set it to 20000
      THREAD_TIME        Interval in milliseconds between two jobs that clear dangling records. Set it to 15000
    3. apis-environment.env

      Name                    Description
      TIME_CUSHION            Number of seconds added to the start and end time of a call when the API is called from CIM. There is a few seconds' difference between a CIM interaction's start and end time and the recording solution's, since CIM fetches interactions from Finesse while the recording solution gets the time from CUCM
      ENCRYPTION_ENABLED      If set to "true", the APIs will first decrypt recording files and then return them (true = enabled, false = disabled)
      ARCHIVE_PATH            Shared path on the archival server where the archival process archives recordings
      ARCHIVE_PATH_USER       Archival path user
      ARCHIVE_PATH_PASSWORD   Archival path password
    4. mixer-environment.env

      Name                     Description
      CC_TYPE                  Cisco contact center type. Set it to UCCE or UCCX
      UCCE_AWDB_URL            Only if CC_TYPE is UCCE. UCCE AWDB database connection URL: jdbc:jtds:sqlserver://IP:PORT/ucce_awdb_name;user=sa;password=db-password
      ENCRYPTION_ENABLED       Enables/disables recorded file encryption (true = enabled, false = disabled)
      ENCODED_FILE_EXTENSION   Extension to use for encrypted files. It should be set to "enc"
      UCCX_URL                 Only if CC_TYPE is UCCX. UCCX URL (https://192.168.1.101)
      UCCX_USER                Only if CC_TYPE is UCCX. UCCX admin APIs user
      UCCX_PASSWORD            Only if CC_TYPE is UCCX. UCCX admin APIs password
    5. archival-environment.env

      Name                             Description
      ENCODED_FILE_EXTENSION           Extension to use for encrypted files. It should be set to "enc"
      DIRECTORY_PATH_TO_MONITOR        This and the following 9 variables are used for the archival process. This variable holds the recordings path
      ARCHIVED_MEDIA_FILES_EXTENSION   The archival process will archive recordings with this extension. Set it to "wav"
      NO_OF_DAYS                       Number of days to keep recordings on the primary server. Recordings older than this many days will be archived
      SFTP_HOST                        SFTP host name or IP
      SFTP_PORT                        SFTP port
      SFTP_USERNAME                    SFTP username
      SFTP_PASSWORD                    SFTP password
      ARCHIVAL_JOB_INTERVAL            The archival process will run every this many seconds and archive any pending recordings
      RETRY_LIMIT                      Number of retries on pending archival recording folders
      ARCHIVE_PATH                     Shared path on the archival server where the archival process archives recordings
      ARCHIVE_PATH_USER                Archive path's machine user
      ARCHIVE_PATH_PASS                Archive path's machine password
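
      As an illustration, a filled-in general-environment.env might look like the following; the IP addresses, user, and password below are placeholders to replace with your environment's own values:

      DB_URL=jdbc:mysql://192.168.1.225:3306/vrs
      DB_USER=vrs_user
      DB_PASSWORD=<db-password>
      DB_DRIVER=com.mysql.jdbc.Driver
      DB_DIALECT=org.hibernate.dialect.MySQL5Dialect
      AMQ_PRIMARY=tcp://<VM1-IP>:61616
      AMQ_SECONDARY=tcp://<VM2-IP>:61616
      AMQ_TIMEOUT=3000
      AMQ_PRIORITY_BACKUP=true
      AMQ_RANDOMIZE=false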

       

  7. With the environment configuration done, copy the recording-solution directory to the /root directory on VM2 using the following command.

    # scp -r /root/recording-solution root@<vm-ip>:/root/
  8. Navigate to /root/recording-solution/activemq/conf and open the activemq.xml file. Uncomment the "networkConnectors" tag by removing the START and END comment markers. This should be uncommented on VM1 only. There are two child "networkConnector" tags inside networkConnectors; put the VM1 IP in the "uri" property of the first child tag and the VM2 IP in the "uri" property of the second tag, as sketched below.
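
    A minimal sketch of the uncommented section, assuming ActiveMQ's standard static transport syntax and the default broker port 61616 (the exact attributes in the shipped activemq.xml may differ slightly):

    <networkConnectors>
        <!-- first child tag: uri points at the broker on VM1 -->
        <networkConnector uri="static:(tcp://<VM1-IP>:61616)"/>
        <!-- second child tag: uri points at the broker on VM2 -->
        <networkConnector uri="static:(tcp://<VM2-IP>:61616)"/>
    </networkConnectors>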

  9. Execute the following commands inside the /root/recording-solution directory.

    # chmod 755 install.sh
    # ./install.sh
  10. Run the following command to ensure that all the components are up and running. 

    # docker ps

    This will list the running containers; each component's service should appear with an "Up" status.
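
    For a quicker check, the standard docker CLI formatting flags can reduce the output to container names and status only:

    # docker ps --format "table {{.Names}}\t{{.Status}}"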

  11. (HA Only) Now go to VM2, update the LOCAL_MACHINE_IP variable to the VM2 IP in the /root/recording-solution/docker/environment_variables/recorder-environment.env file, and run the following commands inside /root/recording-solution to start the recorder and activemq services. The two activemq services on VM1 and VM2 will now act as master/slave to provide HA. The two recorder services on VM1 and VM2 will be configured in Cisco Call Manager (CUCM) to provide HA.

    # chmod 755 install.sh
    # ./install.sh
  12. The "/root/recording-solution/recordings" directory mounted in the /root/recording-solution/docker/docker-compose.yml file should also be mounted on a network shared file system on both VMs. This way, all services on the two VMs share one directory for reading and writing recording files. One way to set this up is sketched below.
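
    As one possible approach, an NFS share can back this directory; the export path <nfs-server-ip>:/exports/recordings below is a hypothetical example. Run these commands on both VMs, adding the /etc/fstab entry so the mount survives reboots:

    # yum install -y nfs-utils
    # mount -t nfs <nfs-server-ip>:/exports/recordings /root/recording-solution/recordings
    # echo "<nfs-server-ip>:/exports/recordings /root/recording-solution/recordings nfs defaults 0 0" >> /etc/fstab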

Troubleshooting