...
Note |
---|
All the commands start with a # indicating that root user privileges are required to execute these commands. The leading # is not a part of the command. |
Requirements
Replay Server for HA
SQL Server 2019
Two SIP Trunks for HA
VRS solution on two separate machines for HA
EFCX Server (for KeyCloak)
Docker and Docker Compose
Git
...
There are two types of installation: 1. EFCX and 2. Cisco (UCCX & UCCE). For EFCX, most of the steps are not required, as Keycloak, JtapiConnector, ActiveMQ, and Mixer are not needed.
Allow ports in the firewall
...
Code Block |
---|
443/tcp
444/tcp
8088/tcp
5060/tcp (only for Cisco)
16386-32768/udp (only for Cisco)
8021/tcp
1433/tcp
5432/tcp
# Additional ports to open in case of High Availability (HA)
8500
8300
8301/tcp/udp
8302/tcp/udp
8303/tcp/udp
8600/tcp/udp |
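As a minimal sketch, assuming a RHEL-style host running firewalld (adjust for ufw or other firewalls), the ports above can be opened like this:
Code Block |
---|
# run as root; the Cisco-only ports can be skipped for EFCX
firewall-cmd --permanent --add-port=443/tcp --add-port=444/tcp --add-port=8088/tcp
firewall-cmd --permanent --add-port=5060/tcp --add-port=16386-32768/udp
firewall-cmd --permanent --add-port=8021/tcp --add-port=1433/tcp --add-port=5432/tcp
# HA only
firewall-cmd --permanent --add-port=8500/tcp --add-port=8300/tcp
firewall-cmd --permanent --add-port=8301-8303/tcp --add-port=8301-8303/udp
firewall-cmd --permanent --add-port=8600/tcp --add-port=8600/udp
firewall-cmd --reload |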
Installation Steps
Please make sure that Solution Prerequisites are met for the desired deployment type.
Download the deployment script
and place it in the user’s home directory or any other desired directory. This script (deployment.sh) will:
delete the recording-solution directory if it exists
clone the required files for deployment
To execute the script, give it execute permissions and run it as follows.
Code Block |
---|
$ chmod 755 deployment.sh
$ ./deployment.sh |
This command will clone the skeleton project
...
; recording-solution.
...
This recording-solution directory contains all the required files for deployment.
...
...
It will be cloned in the same location where the deployment script is placed.
Cloning is now complete, and the VRS deployment files and directories have been downloaded. We can proceed to configure them.
For non-HA or HA (High Availability) deployment, follow these steps.
Follow this guide to install and configure FreeSWITCH. The recording path in FreeSWITCH and in the Docker Compose volume must be the same.
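As an illustration only (the service name and host path below are assumptions; use the actual recording path configured in FreeSWITCH), the matching volume entry in the compose file could look like this:
Code Block |
---|
services:
  recorder:                                      # hypothetical service name
    volumes:
      - /var/vrs/recordings:/var/vrs/recordings  # host path must match the FreeSWITCH recording path |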
Follow this guide to configure ESL (for pause and resume recording).
Follow this guide to create an application user on CUCM for jtapi-connector.
...
Create a database in SQL Server for VRS with the name vrs and run the SQL script (sqlserver.sql) located in
recording-solution/data/scripts
. This script will generate the required database tables (a minimal sqlcmd sketch is shown after the next step).
Navigate to the
recording-solution/docker
...
directory.
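As referenced above, a sketch of the database-creation step using sqlcmd; the server address, user, and password below are placeholders, and any SQL Server client can be used instead:
Code Block |
---|
# placeholders: adjust the server, user, and password to your SQL Server instance
sqlcmd -S 192.168.1.102 -U sa -P '<password>' -Q "CREATE DATABASE vrs"
sqlcmd -S 192.168.1.102 -U sa -P '<password>' -d vrs -i recording-solution/data/scripts/sqlserver.sql |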
Only in case of non-HA, open
docker-compose-cisco.yml
and uncomment the archival-process container. For HA, keep it commented. Save and exit.
Open
config.env
in the same directory and update the environment variables given below.
Name | Description | |
---|---|---|
1 | VRS_URL | CISCO → URL of the local machine, e.g. https://192.168.1.101. EFCX → URL of the VRS machine with port 444, e.g. IP/URL:444 or https://192.168.1.101:444 |
2 | LOCAL_MACHINE_IP | CISCO → Local machine IP, e.g. 192.168.1.101. EFCX → IP of the local machine with port 444 |
3 | KC_HOSTNAME | Hostname where Keycloak is hosted, e.g. 192.168.1.101 |
4 | TZ | Time zone, e.g. Asia/Karachi |
5 | FINESSE_URL | URL of Finesse, e.g. https://uccx12-5p.ucce.ipcc:8445 |
 | DEPLOYMENT_PROFILE | Set it to either CISCO or EFCX as per the deployment profile |
6 | PEER_ADDRESS | IP or FQDN of the second Recorder VM (for HA) |
7 | JTAPI_HA_MODE | Set it to true for HA and false for non-HA |
8 | SCREEN_RECORDING | Set it to true if screen recording is enabled, otherwise false |
9 | CONSUL_URL | (Only for CISCO HA) IP address of your local machine with port 8500, e.g. http://192.168.1.101:8500 |
10 | CISCO_TYPE | Type of CISCO, either UCCX or UCCE |
11 | DIRECTORY_PATH_TO_MONITOR | The path for the archival process to monitor; it should be the same path where sessions are kept, e.g. /var/vrs/recordings/cucmRecording/sessions/ |
12 | FINESSE_URL | URL or FQDN of Finesse, e.g. https://uccx12-5p.ucce.ipcc:8445 |
13 | ARCHIVED_MEDIA_FILES_EXTENSION | mp4 [keep it the same] |
14 | FILE_EXTENSION | wav [keep it the same] |
15 | NO_OF_DAYS | Number of days of files to keep before archiving. If set to 2, all files except those from the last 2 days (counted from the date and time the service started or was triggered) will be archived. |
16 | SFTP_HOST | SFTP host IP for archival, e.g. 192.168.1.106 |
17 | SFTP_PORT | 22 |
18 | SFTP_USERNAME | Username of the SFTP server, e.g. expertflow |
19 | SFTP_PASSWORD | SFTP password, e.g. Expertflow464 |
20 | ARCHIVAL_JOB_INTERVAL | Interval, in hours, after which the archival service is triggered again. For example, if set to 24, the service is triggered every 24 hours to get the job done. |
21 | STEAM_DELETION_JOB_INTERVAL_HRS | Time in hours before which all the streams are to be deleted, e.g. 24 |
22 | RETRY_LIMIT | Number of retries in case the connection fails, e.g. 2 |
23 | ARCHIVAL_PROCESS_NODE | active |
24 | NO_OF_DEL_DAYS | Number of days of streams to keep before deletion. If set to 2, all files except those from the last 2 days (counted from the date and time the service started or was triggered) will be deleted. |
25 | CISCO_TYPE | Either UCCE or UCCX |
 | ACTIVEMQ_BROKER_URL | Connection URL to the Consumer as a Broker, e.g. tcp://192.168.1.101:61616 |
 | ACTIVEMQ_URL | Connection URL for ActiveMQ, e.g. tcp://192.168.1.101:61616?broker.persistent=true&broker.schedulerSupport=true |
 | ACTIVEMQ_USER | Username for the ActiveMQ service, e.g. admin |
 | ACTIVEMQ_PASSWORD | Password for the ActiveMQ service, e.g. admin |
26 | CUCM_APPLICATION_USER_NAME | CUCM application username that has been created in step 8 |
27 | CUCM_APPLICATION_USER_PASSWORD | Password for the CUCM application user |
28 | CUCM_IP | IP address of Call Manager |
29 | DEPLOYMENT_PROFILE | Profile to use for the backend: “CISCO” or “EFCX” |
The below env variables are only for UCCX. However, these env variables must also be provided in case of UCCE (do not comment them out in any case).
Name | Description | |
---|---|---|
1 | CCX_PRIMARY_IP | Primary UCCX IP address. e.g 192.168.1.33 |
2 | CCX_SECONDARY_IP | Secondary UCCX IP e.g 192.168.1.33 |
3 | CCX_ADMIN_USERNAME | CCX Admin username |
4 | CCX_ADMIN_PASSWORD | CCX Admin password |
The below env variables are only for UCCE.
Name | Description | |
---|---|---|
1 | UCCE_IP | UCCE IP |
2 | UCCE_DATABASE | UCCE awdb database name |
3 | UCCE_USERNAME | UCCE awdb database user’s username |
4 | UCCE_PASSWORD | UCCE awdb database user’s password |
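For orientation, a sketch of how a few of these entries might look in config.env for a Cisco UCCX deployment, assuming the usual KEY=VALUE env-file format; all addresses below are placeholder values:
Code Block |
---|
DEPLOYMENT_PROFILE=CISCO
CISCO_TYPE=UCCX
VRS_URL=https://192.168.1.101
LOCAL_MACHINE_IP=192.168.1.101
KC_HOSTNAME=192.168.1.101
TZ=Asia/Karachi
FINESSE_URL=https://uccx12-5p.ucce.ipcc:8445
CCX_PRIMARY_IP=192.168.1.33
CCX_SECONDARY_IP=192.168.1.33 |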
Navigate to the recording-solution directory and execute the following commands:
Code Block |
---|
chmod 755 install-cisco.sh
chmod 755 install-efcx.sh
# for UCCX and UCCE run
./install-cisco.sh |
Verify that all the containers are up and healthy.
Verify that the keycloak container is healthy (docker ps). If it keeps restarting, kill (docker kill keycloak) and remove (docker rm keycloak) the keycloak container, then run ./install.sh again. Wait for the keycloak container to become healthy.
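For convenience, the recovery sequence above collected into one block:
Code Block |
---|
docker ps              # check container status
docker kill keycloak   # only if the keycloak container keeps restarting
docker rm keycloak
./install.sh |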
Once Keycloak is set up for Cisco (for EFCX it is already set up), add the below environment variables accordingly in the
recording-solution/docker/config.env
file.
...
The following environment variables are for Keycloak.
 | Names | For EFCX | For Cisco |
---|---|---|---|
1 | KEYCLOAK_REALM_NAME | Realm name from EFCX Keycloak | Realm name created in step 4 of the Keycloak setup |
2 | KEYCLOAK_CLIENT_ID | Keycloak client id from EFCX Keycloak | Keycloak client id from step 6 of the Keycloak setup |
3 | KEYCLOAK_CLIENT_SECRET | Client secret from EFCX Keycloak | Keycloak client secret from step 8 of the Keycloak setup |
4 | KEYCLOAK_PERMISSION_GROUP | AGENT_GROUP | AGENT_GROUP |
5 | KEYCLOAK_URL | FQDN of CX or URL of Keycloak | - |
6 | EFCX_FQDN (Only for EFCX) | FQDN of CX | - |
7 | DEPLOYMENT_PROFILE | “EFCX“ | “CISCO” |
8 | VRS_URL | URL of the VRS machine with port 444, e.g. IP/URL:444 | URL of the VRS machine with port 443 |
9 | LOCAL_MACHINE_IP | IP of the local machine with port 444 | In case of CISCO, comment this out |
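A sketch of these entries in config.env for a Cisco deployment (the realm, client id, and secret are placeholders from your Keycloak setup; for EFCX use the values from the EFCX column instead):
Code Block |
---|
KEYCLOAK_REALM_NAME=<realm-from-step-4>
KEYCLOAK_CLIENT_ID=<client-id-from-step-6>
KEYCLOAK_CLIENT_SECRET=<client-secret-from-step-8>
KEYCLOAK_PERMISSION_GROUP=AGENT_GROUP
DEPLOYMENT_PROFILE=CISCO
VRS_URL=https://192.168.1.101 |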
Add the following environment variables for pause and resume recording.
Names | Description | |
---|---|---|
1 | ESL_HOST | IP address of the Recorder Machine |
2 | ESL_PORT | Port on the recorder where ESL listens; commonly 8021 |
3 | ESL_PASSWORD | Password of ESL |
4 | REC_PATH_STREAMS | Path where streams are saved e.g vrs/var/recordings/cucmRecording/streams |
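A sketch of these entries in config.env (the host, password, and path are placeholders; the path must match where FreeSWITCH writes the streams):
Code Block |
---|
ESL_HOST=192.168.1.101
ESL_PORT=8021
ESL_PASSWORD=<esl-password>
REC_PATH_STREAMS=vrs/var/recordings/cucmRecording/streams |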
Update the database environment variables in config.env.
...
Name | Description | |
---|---|---|
1 | DB_DRIVER | Driver for the database, i.e. postgres, mysql, or sqlserver driver |
2 | DB_ENGINE | Engine on which the database is running, i.e. postgres, mysql, or sqlserver |
3 | DB_HOST | Name or IP of the host on which the database is active |
4 | DB_NAME | Name of the database (in case of EFCX it can be fetched from config.conf at /etc/fusionpbx/) |
5 | DB_USER | Username for the database (in case of EFCX it can be fetched from config.conf at /etc/fusionpbx/) |
6 | DB_PASSWORD | Password for the database (in case of EFCX it can be fetched from config.conf at /etc/fusionpbx/) |
7 | DB_TABLE | Table name where all the call records are stored (Mixed_Session for Cisco and v_xml_cdr for EFCX) |
8 | DB_PORT | Port of the Database |
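A sketch of the database entries in config.env for a Cisco deployment on SQL Server (the host and credentials are placeholders; the database name vrs and table Mixed_Session follow from the earlier steps):
Code Block |
---|
DB_DRIVER=sqlserver
DB_ENGINE=sqlserver
DB_HOST=192.168.1.102
DB_PORT=1433
DB_NAME=vrs
DB_USER=<db-username>
DB_PASSWORD=<db-password>
DB_TABLE=Mixed_Session |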
To replace the self-signed certificates for VRS, obtain the public-authority- or domain-signed certificate .crt and .key files, rename them to server.crt and server.key, and replace the files in /recording-solution/config/certificates with these two new files. The names must be exactly the same.
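For example, assuming the signed files were obtained as mycert.crt and mycert.key (hypothetical names), they can be dropped in place like this:
Code Block |
---|
# adjust the path to where recording-solution was cloned
cp mycert.crt /recording-solution/config/certificates/server.crt
cp mycert.key /recording-solution/config/certificates/server.key |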
Navigate to the recording-solution directory and assign execute permissions to the install scripts.
Code Block |
---|
chmod 755 install-cisco.sh   # in case of CISCO
chmod 755 install-efcx.sh    # in case of EFCX
chmod 755 install-replay.sh  # in case of HA CISCO |
Run ./install-efcx.sh for EFCX, or run ./install-cisco.sh for Cisco UCCX and UCCE.
Run the following command to ensure all the components are running.
Code Block |
---|
# docker ps |
In case of Cisco, go to https://VRS-IP/#/login to access the application, whereas for EFCX go to https://VRS-IP:444/#/login.
Configure the SIP trunk to enable CUCM to send SIP events to VRS for call recordings. Two SIP trunks should be configured in case of HA. (Not for EFCX)
...
For HA-specific deployment, proceed with the following steps.
Copy the config.env file to the Recorder 2 VM and the Replay Server, as most of the environment variables are the same.
Follow this guide to create the rsync job on all VMs: Recorder 1, Recorder 2, and the Replay Server.
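If the guide is not at hand, a purely illustrative sketch of such an rsync job as a cron entry; the schedule, source path, and destination host are assumptions, so use the values from the guide:
Code Block |
---|
# illustrative crontab entry: push local recordings to the replay server every 5 minutes
*/5 * * * * rsync -az /var/vrs/recordings/ replay-server:/var/vrs/recordings/ |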
Go to the replay server, add execute permission to the install-replay.sh script, and then run
./install-replay.sh
.
Configure two SIP trunks on Cisco Call Manager in HA mode and set priorities for both machines.
Perform the following steps on both Recorder 1 and Recorder 2.
Open
recording-solution/docker/docker-compose-cisco.yml
Inside the docker-compose-cisco file, uncomment the Consul container.
In
container_name
variable, set the name as consul1 for Recorder 1 and consul2 for Recorder 2.
Add your network interface card. It can be found using the ifconfig or ip address command.
Code Block |
---|
- CONSUL_BIND_INTERFACE=ens192 #ens192 or end32 |
In the command section set the name of the consul in the
-node=<any-name>
as shown in the code snippet below. This name must be different from the one used on the second recorder.
Set the
-advertise=<Local-Machine-IP>
and set -retry-join=<IP-of-second-recorder>
. Keep the other values as they are.
Code Block |
---|
# command: "agent -node=consul-106 -server -ui -bind=0.0.0.0 -client=0.0.0.0 -advertise=192.168.1.106 -bootstrap-expect=2 -retry-join=192.168.1.101" |
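For illustration, the corresponding command on Recorder 2 mirrors this one: its own node name, its own IP in -advertise, and the first recorder's IP in -retry-join. The addresses below simply reuse the example values above:
Code Block |
---|
# command: "agent -node=consul-101 -server -ui -bind=0.0.0.0 -client=0.0.0.0 -advertise=192.168.1.101 -bootstrap-expect=2 -retry-join=192.168.1.106" |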
Save changes and exit.
Run ./install-cisco.sh
Check containers on both recorders using
docker ps
command.
Deploy Recorder 2 in the same way.
Deploy the replay server. For the replay server, just add the config.env file and run the
./install-replay.sh
command.
...