.Deployment Guide v13.6.1

Solution Prerequisites

Installation Steps

Internet access must be available on the machine where the application is being installed, and connections on port 9242 must be allowed in the network firewall to carry out the installation steps. All the commands start with a #, indicating that root user privileges are required to execute them. The leading # is not a part of the command.
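As a quick pre-check, you can verify outbound Internet access and the reachability of port 9242 from another machine in the network (a sketch; it assumes curl and nc are installed, and <machine-ip> is a placeholder for the target machine):

    # curl -fsI https://www.docker.com >/dev/null && echo "Internet reachable"
    # nc -zv <machine-ip> 9242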

Configure Log Rotation

Add the following lines to the /etc/docker/daemon.json file (create the file if it does not exist already) and restart the Docker daemon using systemctl restart docker. In case of an HA deployment, perform this step on all the machines in the cluster.

{
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "50m",
        "max-file": "3"
    }
}
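Before restarting Docker, you can confirm that the edited file parses as valid JSON (assuming python3 is available; any parse error is reported with its position):

    # python3 -m json.tool /etc/docker/daemon.json
    # systemctl restart docker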



Installing Application

  1. Download the deployment script deploy-wallboard.sh and place it in the /root directory. This script will:

    1. delete the wallboard directory in the present working directory if it exists.

    2. clone the wallboard repository from GitLab in the present working directory.

  2. Give the script execute permissions and then execute it.
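
    Assuming the script was placed at /root/deploy-wallboard.sh, this amounts to:

    # chmod +x /root/deploy-wallboard.sh
    # /root/deploy-wallboard.sh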



  3. Create two databases in your DB server, one for wallboard and one for umm. This step is only required if you are using your own DB server/cluster. If you are using the built-in MSSQL database server, you do not need to do anything.
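
    If you are using your own MSSQL server, the two databases can be created with T-SQL along these lines (the database names shown are illustrative; use the names configured in your environment files):

      CREATE DATABASE wallboard;
      CREATE DATABASE umm;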

  4. Update the environment variables in the following files inside the /root/wallboard/docker/environment_variables folder.

    1. common-variables.env



    2. umm-variables.env

      Refer to this document for configuration details

    3. synchronizer-variables.env



  5. The following configuration must be done on UCCX. It allows the UCCX system to write real-time data to the tables from which Synchronizer fetches data for the gadgets. This step is not needed for UCCE deployments.

    1. Go to Tools > “Real-Time Snapshot Config” on UCCX Administration UI.

    2. Enable all three checkboxes

    3. Select “5” from the dropdown against “Data Writing Interval”

    4. Provide the IP addresses of the machines where Synchronizer will run (comma-separated IPs in case of HA, otherwise only one IP) in the field against “Server Name” under the "Wallboard System".

    5. Click the Update button.

  6. Get domain/CA-signed SSL certificates for the wallboard FQDN/CN and place the files in the /root/wallboard/docker/certificates folder. The file names should be server.crt and server.key.
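
    For RSA certificates, a quick way to confirm that the certificate and key belong together is to compare their modulus digests using standard openssl commands; the two outputs must be identical:

    # openssl x509 -noout -modulus -in server.crt | openssl md5
    # openssl rsa -noout -modulus -in server.key | openssl md5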

  7. Update the translation files inside /root/wallboard/docker/translations folder for multilingual UI. The file names in the translation folder should remain unchanged.

  8. Having done the environment configuration, copy the wallboard directory into the /root directory on the second machine using the following command. (Only for HA deployment)

    # scp -r /root/wallboard root@<machine-ip>:/root/



  9. Having copied the wallboard directory to the second machine, go to /root/wallboard/docker/environment-variables and update the INSTANCE_NAME variable in synchronizer-variables.env. Set it to any value other than the one set on the first machine. This differentiates the two VMs, as only one synchronizer service will be active at a time; if one goes down, the other starts synchronizing data from UCCE/UCCX. (Only for HA deployment)
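
    For example, if synchronizer-variables.env on the first machine contains INSTANCE_NAME=synchronizer-1, the second machine could use the following (the values shown are illustrative):

    INSTANCE_NAME=synchronizer-2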

  10. Execute the following commands inside the /root/wallboard directory on both machines (in case of HA, otherwise on the only machine in the cluster).



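    The exact commands are not reproduced above. For a Docker Compose based stack such as this one, bringing the services up typically looks like the following (an assumption; use the commands supplied with your release if they differ):

    # cd /root/wallboard
    # docker-compose up -d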
  11. Stop all services by running the following command.



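    The stop command is likewise not reproduced above. With Docker Compose, stopping the running services typically looks like this (an assumption; use the command supplied with your release if it differs):

    # cd /root/wallboard
    # docker-compose stop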
  12. Now open the wallboard database and execute the following script.

    ALTER TABLE queue_stats_historical_ccx ADD
        session_seq_num smallint NOT NULL,
        profile_id int NOT NULL,
        node_id smallint NOT NULL,
        target_id int NOT NULL,
        target_type smallint NOT NULL,
        q_index smallint NOT NULL;

    ALTER TABLE queue_stats_historical_ccx ADD CONSTRAINT ccx_unique_rec UNIQUE (session_id, session_seq_num, profile_id, node_id, target_id, target_type, q_index);





  13. Once the script has executed successfully, run



  14. Run the following command to ensure that all the components are up and running. The screenshot below shows a sample response for a standalone non-HA deployment.
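
    The verification command is not reproduced above. Listing the container states with the standard Docker CLI is one way to check; every component should report an "Up" status:

    # docker ps --format "table {{.Names}}\t{{.Status}}"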