...

Code Block
443/tcp
444/tcp
8088/tcp
5060/tcp (only for Cisco)
16386-32768/udp (only for Cisco)

# Additional ports to open in case of High Availability (HA)
8500
8300
8301
8302
8303
8600/udp
1433
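
If the host firewall is managed with firewalld (an assumption; use the equivalent commands for whichever firewall is actually in place), the ports above can be opened as sketched below. The protocols for the Consul and SQL Server ports (8500, 8300-8303, 1433) are assumed to be TCP.

Code Block
languagebash
# Core VRS ports
$ sudo firewall-cmd --permanent --add-port=443/tcp --add-port=444/tcp --add-port=8088/tcp
# Cisco-only ports (SIP signalling and media range)
$ sudo firewall-cmd --permanent --add-port=5060/tcp --add-port=16386-32768/udp
# HA-only ports (Consul, Consul DNS and SQL Server)
$ sudo firewall-cmd --permanent --add-port=8500/tcp --add-port=8300-8303/tcp --add-port=8600/udp --add-port=1433/tcp
$ sudo firewall-cmd --reload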

Installation Steps

  1. Please make sure that Solution Prerequisites are met for the desired deployment type. 

  2. Download the deployment script deployment.sh and place it in the user’s home or any desired directory. This script will:

    1. Delete the recording-solution directory if it exists.

    2. Clone the required files for deployment.

  3. To run the script, give it execute permissions and then execute it. This will clone the skeleton project for the recording solution; the recording-solution directory contains all the files required for deployment.

    Code Block
    languagebash
    $ chmod 755 deployment.sh
    $ ./deployment.sh
  4. Follow Section 2 at the end of this document for the deployment of VRS in HA mode.

  5. Follow steps 5 to 8 for deployment with Cisco UCCX or UCCE in non-HA (non-High-Availability) mode.

  6. Follow this guide to install and configure FreeSWITCH. The recording path should be /usr/share/freeswitch/cucmRecording (a sketch of creating this directory follows this list).

  7. Follow this guide to create an application user on CUCM for jtapi-connector.

  8. Open recording-solution/docker/config.env and update the environment variables given below.
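
A minimal sketch of preparing the FreeSWITCH recording path from step 6. The freeswitch user and group in the chown command are an assumption and should match the account the FreeSWITCH service actually runs as:

Code Block
languagebash
# Create the recording directory used by FreeSWITCH (step 6)
$ sudo mkdir -p /usr/share/freeswitch/cucmRecording
# Assumption: FreeSWITCH runs as the freeswitch user; adjust to your installation
$ sudo chown -R freeswitch:freeswitch /usr/share/freeswitch/cucmRecording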

...

Section 2

For HA (High Availability)

Requirements

NFS Server

Database on SQL Server

Ask the IT team to provide and configure an NFS server for recording storage.
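
The recordings are stored on an NFS share mounted at /var/vrs/recordings on both VRS servers (see steps 2 and 3 below). A minimal sketch of mounting such a share, assuming a hypothetical export nfs-server.example.com:/export/vrs-recordings:

Code Block
languagebash
# Assumption: the NFS export is nfs-server.example.com:/export/vrs-recordings
$ sudo mkdir -p /var/vrs/recordings
$ sudo mount -t nfs nfs-server.example.com:/export/vrs-recordings /var/vrs/recordings
# Make the mount persistent across reboots
$ echo "nfs-server.example.com:/export/vrs-recordings /var/vrs/recordings nfs defaults 0 0" | sudo tee -a /etc/fstab
# Grant full permission to the directory and everything created within it (step 3)
$ sudo chmod -R 777 /var/vrs/recordings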

Follow the guide for HA

  1. Ask the IPCC team for the creation of the NFS server.

  2. The NFS mount point on both servers is /var/vrs/recordings.

  3. Grant full permission to this directory and any files or subdirectories created within it.

  4. Ask the IPCC team for the creation of the database.

  5. Follow this guide to create an application user on CUCM for jtapi-connector.

  6. Open recording-solution/docker/docker-compose-cisco

  7. Uncomment the Consul container and save the changes.

  8. Open recording-solution/docker/config.env

  9. Add the URL for Consul and set the HA variables shown below.

Code Block
PEER_ADDRESS=<IP of second VRS>
HA_MODE=true
  1. Change the recording path and other variables as per your configuration.

  2. In the config.env

...

  1. and set the environment variables listed in the table below (a sample config.env excerpt follows the table).

Name

Description

1

VRS_URL

URL of the local machine, e.g. https://192.168.1.101 *

2

LOCAL_MACHINE_IP

IP address of the local machine, e.g. 192.168.1.101 *

3

KC_HOSTNAME

Hostname or IP where Keycloak is hosted, e.g. 192.168.1.101

4

TZ

Time zone, e.g. Asia/Karachi

5

DEPLOYMENT_PROFILE

"CISCO", as HA is only available on Cisco

6

PEER_ADDRESS

Address of the second VM where VRS is deployed

7

HA_MODE

Keep it true, as we are deploying high availability

8

SCREEN_RECORDING

true if screen recording is enabled, false if disabled

9

KEYCLOAK_URL

URL of the deployed Keycloak, used for fetching user details

10

CISCO_TYPE

Either UCCE or UCCX

11

FINESSE_URL

https://uccx12-5p.ucce.ipcc:8445

12

DIRECTORY_PATH_TO_MONITOR

...

The path for the archival process to monitor; it should be the same path where sessions are kept, e.g. /var/vrs/recordings/cucmRecording/sessions/

13

ARCHIVED_MEDIA_FILES_EXTENSION

mp4 [keep it the same]

14

FILE_EXTENSION

wav [keep it the same]

15

NO_OF_DAYS

Number of days of files to keep unarchived. If set to 2, all files older than the last 2 days (counted from the date and time the service started or was triggered) will be archived.

16

SFTP_HOST

SFTP host IP for archival, e.g. 192.168.1.106

17

SFTP_PORT

22

18

SFTP_USERNAME

Username of the SFTP server, e.g. expertflow

19

SFTP_PASSWORD

SFTP password, e.g. Expertflow464

20

ARCHIVAL_JOB_INTERVAL

The interval, in hours, after which the archival service is triggered again. For example, if set to 24, the service runs every 24 hours to get the desired job done.

21

STEAM_DELETION_JOB_INTERVAL_HRS

Time, in hours, after which streams are to be deleted, e.g. 24

22

RETRY_LIMIT

Number of retries in case the connection fails, e.g. 2

23

ARCHIVAL_PROCESS_NODE

active

24

NO_OF_DEL_DAYS

Number of days of streams to keep. If set to 2, all streams older than the last 2 days (counted from the date and time the service started or was triggered) will be deleted.

25

CUCM_APPLICATION_USER_NAME

...

Username of the CUCM application user that was created in step 3.

26

CUCM_APPLICATION_USER_PASSWORD

...

Password for the CUCM Application User.

27

CUCM_IP

IP address where CUCM has been deployed
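
A sample excerpt of these variables as they might appear in config.env. The values shown are taken from the examples in the table above or are placeholders in angle brackets; every value must be adjusted to your environment:

Code Block
# Common VRS / Cisco / HA variables (example and placeholder values)
VRS_URL=https://192.168.1.101
LOCAL_MACHINE_IP=192.168.1.101
KC_HOSTNAME=192.168.1.101
TZ=Asia/Karachi
DEPLOYMENT_PROFILE=CISCO
PEER_ADDRESS=<IP of second VRS>
HA_MODE=true
SCREEN_RECORDING=false
KEYCLOAK_URL=<keycloak-url>
CISCO_TYPE=UCCX
FINESSE_URL=https://uccx12-5p.ucce.ipcc:8445
DIRECTORY_PATH_TO_MONITOR=/var/vrs/recordings/cucmRecording/sessions/
ARCHIVED_MEDIA_FILES_EXTENSION=mp4
FILE_EXTENSION=wav
NO_OF_DAYS=2
SFTP_HOST=192.168.1.106
SFTP_PORT=22
SFTP_USERNAME=expertflow
SFTP_PASSWORD=<sftp-password>
ARCHIVAL_JOB_INTERVAL=24
STEAM_DELETION_JOB_INTERVAL_HRS=24
RETRY_LIMIT=2
ARCHIVAL_PROCESS_NODE=active
NO_OF_DEL_DAYS=2
CUCM_APPLICATION_USER_NAME=<cucm-application-user>
CUCM_APPLICATION_USER_PASSWORD=<cucm-application-user-password>
CUCM_IP=<cucm-ip>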

Below are the environment variables for UCCX (if CISCO_TYPE = UCCX).

Name

Description

1

CCX_PRIMARY_IP

Primary UCCX IP address, e.g. 192.168.1.33

2

CCX_SECONDARY_IP

Secondary UCCX IP address, e.g. 192.168.1.33

3

CCX_ADMIN_USERNAME

CCX Admin username

4

CCX_ADMIN_PASSWORD

CCX Admin password

Below are the environment variables for UCCE (if CISCO_TYPE is UCCE).

Name

Description

1

UCCE_IP

UCCE IP

2

UCCE_DATABASE

UCCE awdb database name

3

UCCE_USERNAME

UCCE awdb database user’s username

4

UCCE_PASSWORD

UCCE awdb database user’s password
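
A sample of the Cisco-type-specific block in config.env; only the block matching your CISCO_TYPE applies, and all values below are examples or placeholders:

Code Block
# If CISCO_TYPE=UCCX
CCX_PRIMARY_IP=192.168.1.33
CCX_SECONDARY_IP=<secondary-uccx-ip>
CCX_ADMIN_USERNAME=<ccx-admin-username>
CCX_ADMIN_PASSWORD=<ccx-admin-password>

# If CISCO_TYPE=UCCE
UCCE_IP=<ucce-ip>
UCCE_DATABASE=<awdb-database-name>
UCCE_USERNAME=<awdb-database-username>
UCCE_PASSWORD=<awdb-database-password>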

  1. Continue updating config.env with the database environment variables (a sample excerpt follows the table below).

Name

Description

1

DB_DRIVER

Driver on which the database is running, i.e. the postgres or mysql driver

2

DB_ENGINE

Engine on which the database is running, i.e. postgres or mysql

3

DB_HOST

Name or IP of the host on which the database is active

4

DB_NAME

Name of the database

5

DB_USER

Username for database

6

DB_PASSWORD

Password for the database

7

DB_PORT

Port of the database
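
A minimal sample of the database block in config.env; all values are placeholders to be replaced with the details of the database created for VRS:

Code Block
DB_DRIVER=<database-driver>
DB_ENGINE=<postgres-or-mysql>
DB_HOST=<database-host>
DB_NAME=<database-name>
DB_USER=<database-username>
DB_PASSWORD=<database-password>
DB_PORT=<database-port>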

  1. Change the recording path and other variables as per your configuration.

Open docker-compose-cisco at docker/docker-compose-cisco

  • Archival Container add the volumes a

...