Info
If VRS 13.3 is already installed, you will need to: create a group in Keycloak that will be assigned to agents upon first-time login to VRS, update three docker images in the docker/docker-compose.yml file, and add an environment variable in the docker/environment-variables/general-environment.env file.
Installation Steps
Warning
Internet access should be available on the machine where the application is being installed, and connections on port 9242 should be allowed in the network firewall to carry out the installation steps.
Note
All the commands start with a #, indicating that root user privileges are required to execute them. The leading # is not part of the command.
Allow ports in the firewall
To start the firewall on CentOS (if it isn't started already), execute the following commands:
Code Block
# systemctl enable firewalld
# systemctl start firewalld
To allow the ports on the CentOS firewall, execute the following commands. (Run them on both machines in case of HA.)
Code Block
# firewall-cmd --add-port=443/tcp --permanent
# firewall-cmd --add-port=8088/tcp --permanent
# firewall-cmd --add-port=5060/tcp --permanent
# firewall-cmd --add-port=16386-32768/udp --permanent
# firewall-cmd --add-port=9092/tcp --permanent
# firewall-cmd --reload
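Optionally, verify that the ports have been added:
Code Block
# firewall-cmd --list-ports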
Installation Steps
- Please make sure that Solution Prerequisites are met for the desired deployment type.
- Download the deployment script deployment.sh and place it in the user's home directory or any desired directory. This script will:
- delete the recording-solution directory if it exists.
- clone the required files for deployment
To execute the script, give it execute permissions and run it.
Code Block language bash
$ chmod 755 deployment.sh
$ ./deployment.sh
Change to the newly created directory named recording-solution. This directory contains all the required files.
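For example, from the directory where deployment.sh was executed:
Code Block
$ cd recording-solution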
- Run the SQL script in SQL Server to create the database and tables (recording-solution/db_schema.sql).
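One way to run the script is with the sqlcmd utility from a machine that can reach the SQL Server; sqlcmd is only a suggestion here, and the server address and credentials are placeholders to replace with your own:
Code Block
$ sqlcmd -S <sql-server-ip> -U sa -P <password> -i recording-solution/db_schema.sql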
- Create a database for Keycloak on the same machine as the VRS database.
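A minimal sketch of creating the Keycloak database, again using sqlcmd; the database name keycloak is only an example and must match the value you later set in DB_DATABASE:
Code Block
$ sqlcmd -S <sql-server-ip> -U sa -P <password> -Q "CREATE DATABASE keycloak"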
Update environment variables in the following files inside the /root/recording-solution/docker/environment_variables folder.

general-environment.env

Name | Description
---|---
DB_URL | VRS database connection URL, e.g. jdbc:jtds:sqlserver://192.168.1.92:1433/vrs;user=sa;password=Expertflow464
DB_DRIVER | JDBC driver: net.sourceforge.jtds.jdbcx.JtdsDataSource
CC_TYPE | Cisco contact center type (UCCX or UCCE)
TZ | Timezone (e.g. Asia/Karachi)
ENCRYPTION_ENABLED | Enables/disables recorded file encryption: true = enabled, false = disabled
AMQ_PRIMARY | Primary ActiveMQ URL (VRS machine IP), e.g. tcp://192.168.1.242:61616
AMQ_SECONDARY | Secondary ActiveMQ URL. Keep it the same as the primary if ActiveMQ is not available in HA
AMQ_TIMEOUT | 3000, keep it the same
AMQ_RANDOMIZE | false, keep it the same
AMQ_PRIORITY_BACKUP | true, keep it the same
LOCAL_MACHINE_IP | VRS machine IP
CUCM_IP | Cisco Call Manager IP
CUCM_APPLICATION_USER_NAME | CUCM application user's username created in step 6
CUCM_APPLICATION_USER_PASSWORD | CUCM application user's password created in step 6
TIME_CUSHION | The number of seconds to add to the start and end time of a call when calling the API from CIM. There is a difference of a few seconds between a CIM interaction's start/end time and the recording solution's start/end time, since CIM fetches interactions from Finesse while the recording solution gets its times from CUCM
MAX_RING_TIME | Maximum call ring time on the agent desktop, default is 30 seconds
CALL_TIMEOUT | Socket timeout for recording RTP packets, set it to 10
THREAD_TIME | Interval in seconds between two jobs that clear completed calls, set it to 10
FILE_EXTENSION | Extension that the archival process will look for when selecting files to archive. Set it to "wav"
DIRECTORY_PATH_TO_MONITOR | This and the following 9 variables are used for the archival process. This variable holds the path of the recordings
ARCHIVED_MEDIA_FILES_EXTENSION | The archival process will archive recordings with this extension, set it to "wav"
NO_OF_DAYS | The number of days to keep recordings on the primary server. Recordings older than this number of days will be archived
SFTP_HOST | SFTP hostname or IP
SFTP_PORT | SFTP port
SFTP_USERNAME | SFTP username
SFTP_PASSWORD | SFTP password
ARCHIVAL_JOB_INTERVAL | The archival process runs every this many seconds and archives any pending recordings
RETRY_LIMIT | Number of retries on pending archival recording folders
ARCHIVE_PATH | The shared path on the archival server where the archival process stores recordings
ARCHIVE_PATH_USER | Archive path machine's user
ARCHIVE_PATH_PASS | Archive path machine's password
ARCHIVAL_PROCESS_NODE | (HA Only) This value should be "active" on one machine and "passive" on the other. The "active" machine's archival process sends files to the SFTP server and then deletes them; the "passive" machine's process only deletes the local files
UCCE_DB_URL | UCCE awdb database connection URL, used for UCCE deployment only, e.g. jdbc:jtds:sqlserver://192.168.1.87:1433/ucce_awdb;user=sa;password=Expertflow464
UCCX_URL | Used for UCCX deployment only. UCCX URL, used for fetching agent details, e.g. https://192.168.1.101
UCCX_USERNAME | UCCX user, should have privileges to fetch agents
UCCX_PASSWORD | UCCX user password
KEYCLOAK_USER | Master username for Keycloak
KEYCLOAK_PASSWORD | Password for the Keycloak user
DB_VENDOR | Keycloak database engine, keep it default
DB_USER | Keycloak database username
DB_PASSWORD | Keycloak database password
DB_ADDR | Keycloak database machine IP
DB_DATABASE | Database name created in step 7

Update the below environment variables once Keycloak is set up in step 11:

Name | Description
---|---
KEYCLOAK_REALM_NAME | Realm name created in step 11
KEYCLOAK_CLIENT_ID | Keycloak client id from step 11
KEYCLOAK_CLIENT_SECRET | Keycloak client secret from step 11
KEYCLOAK_URL | Keep default
FINESSE_URL | HTTP address of the Finesse server used to validate the credentials given by the agent
KEYCLOAK_ADMIN_USERNAME | Keycloak admin username from step 11
KEYCLOAK_ADMIN_PASSWORD | Keycloak admin password from step 11
KEYCLOAK_PERMISSION_GROUP | Keycloak group from step 11
KEYCLOAK_USER_PASSWORD | Hard-coded private password that remains the same for every new agent created in Keycloak (example: 12345)
TRUST_STORE_PATH | /app/ssl/truststore.jks (the path of the trust store; this certificate is required for SSO)
TRUST_STORE_PASSWORD | Truststore password

Note: If Finesse is deployed with HTTPS, its certificate must be provided to access it. We use the 'truststore.jks' file in the ssl directory as the certificate store. To place a certificate in the truststore, simply run the following command in the directory where the certificate is present, and later export the truststore to your deployment's ssl directory.
Command to add a certificate to the certificate store:
Code Block
keytool -import -alias ccx -file ccx.cer -keystore truststore.jks -storepass Expertflow464
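To confirm the certificate was imported, the truststore contents can be listed with the same file and password used above:
Code Block
keytool -list -keystore truststore.jks -storepass Expertflow464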
(HA Only) Once the environment configuration is done, copy the recording-solution directory to the /root directory on VM2 using the following command.
Code Block
# scp -r /root/recording-solution root@<vm-ip>:/root/
Execute the following commands inside the /root/recording-solution directory.
Code Block
# chmod 755 install.sh
# ./install.sh
- Set up Keycloak
- Once Keycloak is set up, update these environment variables (KEYCLOAK_REALM_NAME, KEYCLOAK_CLIENT_ID, KEYCLOAK_CLIENT_SECRET, KEYCLOAK_ADMIN_USERNAME, KEYCLOAK_ADMIN_PASSWORD, KEYCLOAK_PERMISSION_GROUP) and run ./install.sh again
Run the following command to ensure that all the components are up and running.
Code Block
# docker ps
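If any container is not listed or keeps restarting, its logs are the first place to look (container names vary by deployment):
Code Block
# docker logs --tail 100 <container-name>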
Note: We need to run the below query in the VRS DB:
Code Block
alter table sessions add is_archived int default 0
- Go to https://VRS-IP/#/login to access the application
- Follow this guide to deploy VRS Finesse gadget
(HA Only) Now go to VM2, update the LOCAL_MACHINE_IP variable to the VM2 IP in the /root/recording-solution/docker/environment_variables/recorder-environment.env file, and run the below command inside /root/recording-solution to start the recorder and activemq services. The two activemq services on VM1 and VM2 will now act as master/slave to provide HA. The two recorder services on VM1 and VM2 will be configured in Cisco Call Manager (CUCM) to provide HA.
Code Block
# chmod 755 install.sh
# ./install.sh
- (HA Only) The directory "/root/recording-solution/recordings/wav" should also be mounted on a network shared file system on both VMs, or the directories should be synchronized with each other. In this way, all services on the two VMs will have a shared directory for reading and writing recording files. Follow the next step if a network-shared, synchronized folder is not provided.
- (HA Only) For recording folder synchronization, follow the steps below:
Install the lsyncd utility on one machine by running the below commands.
Code Block
root@host # yum -y install epel-release
root@host # yum -y install lsyncd
Generate SSH keys on the same machine. Run the below command to generate a key; accept the defaults by pressing Enter at every prompt.
Code Block
root@host # ssh-keygen -t rsa
Transfer the SSH key to the other machine by running the below commands; enter the other machine's root password when prompted.
Code Block
ssh-copy-id root@other-machine-ip
Code Block
vi ~/.ssh/config
Enter the below text in the config file, replacing the Hostname with the other machine's IP.
Code Block
Host dest_host
    Hostname 172.16.144.32
    User root
    IdentityFile ~/.ssh/id_rsa
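Optionally, verify that key-based login through the new alias works before configuring lsyncd:
Code Block
ssh dest_host 'echo connection ok'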
Add the below lsyncd configuration, replacing the target IP with the other machine's IP.
Code Block
settings {
    logfile = "/var/log/lsyncd/lsyncd.log",
    statusFile = "/var/log/lsyncd/lsyncd-status.log",
    statusInterval = 1
}
sync {
    default.rsync,
    source = "/root/recording-solution/recordings",
    target = "192.168.1.125:/root/recording-solution/recordings",
    delete = false,
    rsync = {
        compress = true,
        acls = true,
        verbose = true,
        owner = true,
        group = true,
        perms = true,
        rsh = "/usr/bin/ssh -p 22 -o StrictHostKeyChecking=no"
    }
}
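The configuration is typically saved to /etc/lsyncd.conf when lsyncd is installed from EPEL (verify the path your package uses), after which the service can be enabled and started:
Code Block
# systemctl enable lsyncd
# systemctl start lsyncd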
- Follow the above steps for the other machine.
(HA Only) Repeat the following steps on both machines.
- Download the keepalived.sh script and place it in any directory.
Give it execute permission and execute it. This will create a keep-alived directory.
# chmod +x keepalived.sh
# ./keepalived.sh
Configure the keep.env file inside the keep-alived directory.

Name | Description
---|---
KEEPALIVED_UNICAST_PEERS | IPs of the machines in the cluster. On each machine, this variable should have a list of the IPs of all the other machines in the cluster. The format of the list is as below: 192.168.1.80
KEEPALIVED_VIRTUAL_IPS | Virtual IP of the cluster. It should be available in the LAN. For example: 192.168.1.245
KEEPALIVED_PRIORITY | Priority of the node. An instance with a lower number has a higher priority. It can take any value from 1-255.
KEEPALIVED_INTERFACE | Name of the network interface with which your machine is connected to the network. On CentOS, ifconfig or ip addr sh will show all the network interfaces and their assigned addresses.
CLEARANCE_TIMEOUT | Corresponds to the initial startup time of the application being monitored by keepalived, in seconds. A nominal value of 60-120 is good enough.
KEEPALIVED_ROUTER_ID | Do not change this value.
SCRIPT_VAR | This script is continuously polled after 2 seconds. Keepalived relinquishes control if this shell script returns a non-zero response. It could be either umm or the ECM backend API. For example: pidof dockerd && wget -O index.html https://localhost:443/
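For illustration, a keep.env on one node might look like the fragment below; all values are examples only, the interface name depends on your machine, and KEEPALIVED_ROUTER_ID is left out because it should stay at its shipped default:
Code Block
KEEPALIVED_UNICAST_PEERS=192.168.1.80
KEEPALIVED_VIRTUAL_IPS=192.168.1.245
KEEPALIVED_PRIORITY=100
KEEPALIVED_INTERFACE=ens192
CLEARANCE_TIMEOUT=60
SCRIPT_VAR="pidof dockerd && wget -O index.html https://localhost:443/"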
Give execute permission to the keep-command.sh script and execute it:
# chmod +x keep-command.sh
# ./keep-command.sh
Troubleshooting
Configure Log Rotation
Add the following lines to the /etc/docker/daemon.json file (create the file if it does not already exist) and restart the docker daemon using systemctl restart docker.
Perform this step on all the machines in the cluster.
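The exact contents are not specified here; a typical Docker log-rotation configuration for daemon.json looks like the sketch below, with max-size and max-file as illustrative values to adjust for your retention needs:
Code Block
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "50m",
    "max-file": "5"
  }
}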
...