

Solution Prerequisites

The following are the solution setup prerequisites.

Hardware Requirements

For HA deployment, two VMs with the following hardware specifications are required.

  • CPU: 2 vCPU cores
  • RAM: 4 GB
  • Disk: 100 GB mounted on /
  • NICs: 1 NIC

Software Requirements

OS Compatibility

The following OS software is required on the server:

  • CentOS

Database Requirements

...

Docker Engine Requirements

  • Docker CE 18.06.0+

Browser Compatibility

  • Firefox: 68.0.2 (64-bit), 69.0.3 (64-bit)
  • Chrome: Not tested; not supported

Cisco Unified CCX Compatibility

Recently tested against CCX versions 11.6 and 12.0 (Premium edition). On-demand testing can be performed on other CCX versions, both prior to and later than v12.0, as well as on CCX Enhanced editions.

Cisco Unified CCE Compatibility

Planned for an upcoming release of Dashboards & Wallboards.

Installation Steps


Internet access should be available on the machine where the application is being installed, and connections on port 9242 should be allowed in the network firewall to carry out the installation steps. All the commands start with a #, indicating that root user privileges are required to execute them; the # itself is not part of the command.

Configure Log Rotation

Add the following lines to the /etc/docker/daemon.json file (create the file if it does not already exist) and restart the Docker daemon using systemctl restart docker. In case of HA deployment, perform this step on all the machines in the cluster.

{
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "50m",
        "max-file": "3"
    }
}
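
After restarting the Docker daemon, you can optionally confirm that the new logging driver is in effect. This is just a sanity check using the standard Docker CLI, not a required step:

# systemctl restart docker
# docker info --format '{{.LoggingDriver}}'
json-file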


Installing Application

  1. Download the deployment script deploy-wallboard.sh and place it in the /root directory. This script will:
    1. delete the wallboard directory in the present working directory if it exists.
    2. clone the wallboard repository from GitLab in the present working directory.
  2. To execute the script, give it execute permission and run it:

    # chmod +x deploy-wallboard.sh
    # ./deploy-wallboard.sh


  3. Create two databases in your DB server, one for wallboard and one for umm. This step is only required if you're using your own DB server/cluster. If you're using the built-in MSSQL database server, you don't need to do anything.
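
    If you do create the databases yourself, a minimal sketch using the sqlcmd utility (assuming the MSSQL command-line tools are installed; database names, server address, and credentials are placeholders that must match the DB_URL values configured in the next step) could be:

    # /opt/mssql-tools/bin/sqlcmd -S <DB-SERVER-IP> -U sa -P '<password>' -Q "CREATE DATABASE wallboard; CREATE DATABASE umm"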
  4. Update the environment variables in the following files inside the /root/wallboard/docker/environment_variables folder.

    1. common-variables.env

      Do not change the default values for a non-HA deployment OR if you want to use the built-in database. For HA, use SQL Server cluster settings instead of the defaults. If you want to use your own MSSQL instance anyway, update the following variables accordingly.

      DB_URL: Wallboard database connection URL. For example:
        • jdbc:jtds:sqlserver://<MACHINE-IP or FQDN>:PORT/db_name
        • jdbc:jtds:sqlserver://<MACHINE-IP or FQDN>:PORT/db_name;instanceName=SomeInstance
      DB_USER: Database user
      DB_PASSWORD: Database password
      DB_DRIVER: JDBC driver, e.g., net.sourceforge.jtds.jdbc.Driver
      DB_DIALECT: Database dialect, e.g., org.hibernate.dialect.SQLServer2008Dialect
      CISCO_TYPE: The Cisco solution type; either "uccx" or "ucce"
      SSO_ENABLED: Single sign-on facility; its value is either "true" or "false"
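
      For illustration, a filled-in common-variables.env for a standalone UCCX deployment against an external MSSQL instance might look like this (all values are placeholders, not shipped defaults):

      DB_URL=jdbc:jtds:sqlserver://192.168.1.50:1433/wallboard
      DB_USER=wallboard_user
      DB_PASSWORD=<password>
      DB_DRIVER=net.sourceforge.jtds.jdbc.Driver
      DB_DIALECT=org.hibernate.dialect.SQLServer2008Dialect
      CISCO_TYPE=uccx
      SSO_ENABLED=false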


    2. umm-variables.env

      Do not change the default values for a non-HA deployment OR if you want to use the built-in database. For HA, use SQL Server cluster settings instead of the defaults. If you want to use your own MSSQL instance anyway, update the following variables accordingly.

      DB_URL: UMM database connection URL. For example:
        • jdbc:jtds:sqlserver://<MACHINE-IP or FQDN>:PORT/db_name
        • jdbc:jtds:sqlserver://<MACHINE-IP or FQDN>:PORT/db_name;instanceName=SomeInstance
      DB_USER: Database user
      DB_PASS: Database password
      DB_DRIVER: JDBC driver, e.g., net.sourceforge.jtds.jdbc.Driver
      DB_DIALECT: Database dialect, e.g., org.hibernate.dialect.SQLServer2008Dialect
      REDIRECT_BASE_URL: Callback URL for Wallboard. The format is: https://wallboard-ip/callback
      IDS1_URL: The base URI of the UCCE/UCCX node. The format is: https://<fully qualified hostname of UCCX publisher node>:8553
      IDS2_URL: If UCCE/UCCX is deployed in High Availability mode, the base URI of the second node. The format is: https://<fully qualified hostname of UCCX subscriber node>:8553
      IDS_CLIENT_ID: Register the supervisor tools application in IDS to get a client ID by following the steps here. Example: 973a8f41be45426510c971ce41b6feae8d71bc22
      UMM_BASE_URL: UMM base URL. It should be: https://IP:umm-port

      Change the following variables as per your environment:

      PRIM_FINESSE_IP: Primary Finesse URL, including port (if not 80 or 443)
      SEC_FINESSE_IP: Secondary Finesse URL, including port (if not 80 or 443)
      FINESSE_USER: Finesse administrator user
      FINESSE_PASS: Finesse administrator password
      SSO_ENABLED: The single sign-on facility; its value is either "true" or "false"
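
      For example, the SSO- and callback-related entries in umm-variables.env might look like this (hostnames, the VIP, and the client ID are placeholders for your environment):

      REDIRECT_BASE_URL=https://192.168.1.245/callback
      IDS1_URL=https://uccx-pub.example.com:8553
      IDS2_URL=https://uccx-sub.example.com:8553
      IDS_CLIENT_ID=973a8f41be45426510c971ce41b6feae8d71bc22
      UMM_BASE_URL=https://192.168.1.245:<umm-port>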


    3. synchronizer-variables.env

      CC_TYPE: Decides whether the Cisco contact center is UCCX or UCCE. Holds the string "UCCX" for UCCX and "UCCE" for UCCE. Default is UCCX.
      REDUNDANT_DEPLOYMENT: Decides if the synchronizer deployment is redundant. Holds the strings "true" or "false"; set it to "true" for HA deployment. Default is false.
      INSTANCE_NAME: Used to differentiate instances when deployed redundantly; it can be any string but must be different on the two machines.
      SYNC_AGENTS: To enable agent sync, set it to "true". Default is true.
      SYNC_AGENTS_STATS: Enables/disables agent stats sync. Default is true.
      SYNC_AGENTS_EMAIL_STATS: Enables/disables agent email stats. Default is false.
      SYNC_QUEUES_STATS: Enables/disables queue stats. Default is true.
      SYNC_QUEUES: Enables/disables queues (skill groups in the case of UCCE). Default is true.

      The following variables are used when CC_TYPE = "UCCX":

      UCCX_PUB_IP: Primary UCCX IP
      UCCX_PUB_USERNAME: Primary UCCX admin username
      UCCX_PUB_PASSWORD: Primary UCCX admin password
      UCCX_SUB_USERNAME: Secondary UCCX admin username
      UCCX_SUB_PASSWORD: Secondary UCCX admin password
      UCCX_PUB_DB_PASSWORD: Primary UCCX database reporting user (hruser) password
      UCCX_SUB_DB_PASSWORD: Secondary UCCX database reporting user (hruser) password
      UCCX_REAL_TIME_PORT: UCCX real-time APIs port. Default is 9080.
      SYNC_SUPERVISORS: Enables/disables UCCX supervisors sync. Default is false.
      SYNC_SKILLS: Enables/disables UCCX skills sync. Default is false.

      The following variables are used when CC_TYPE = "UCCE":

      CCE_DB_URL: UCCE awdb database URL, e.g., jdbc:jtds:sqlserver://192.168.1.87:1433/ucce_awdb
      CCE_DB_USER: CCE database user
      CCE_DB_PASSWORD: CCE database password
      TZ: The timezone of UCCE, e.g., Asia/Karachi
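
      As a sketch, a redundant UCCX deployment might set the synchronizer variables as follows (IPs, credentials, and the instance name are placeholders; the second machine gets a different INSTANCE_NAME, as described in step 9 below):

      CC_TYPE=UCCX
      REDUNDANT_DEPLOYMENT=true
      INSTANCE_NAME=sync-node-1
      UCCX_PUB_IP=192.168.1.10
      UCCX_PUB_USERNAME=admin
      UCCX_PUB_PASSWORD=<password>
      UCCX_PUB_DB_PASSWORD=<hruser-password>
      UCCX_REAL_TIME_PORT=9080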


  5. The following configuration must be done on UCCX. It allows the UCCX system to write real-time data to the tables that Synchronizer reads to feed the gadgets. This step is not needed for UCCE deployments.
    1. Go to Tools > "Real-Time Snapshot Config" on the UCCX Administration UI.
    2. Enable all three checkboxes.
    3. Select "5" from the dropdown against "Data Writing Interval".
    4. Provide the IP address(es) of the machine(s) where Synchronizer will run (comma-separated IPs in case of HA, otherwise a single IP) in the field against "Server Name" under the "Wallboard System" section.
    5. Click on the update button.
  6. Get domain/CA-signed SSL certificates for the wallboard FQDN/CN and place the files in the /root/wallboard/docker/certificates folder. The file names should be server.crt and server.key.
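
    For a lab or proof-of-concept environment only, a self-signed pair can be generated with openssl (replace <wallboard-FQDN> with your actual FQDN; production deployments should use domain/CA-signed certificates as stated above):

    # openssl req -x509 -newkey rsa:2048 -nodes -days 365 -subj "/CN=<wallboard-FQDN>" -keyout /root/wallboard/docker/certificates/server.key -out /root/wallboard/docker/certificates/server.crt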
  7. Update the translation files inside /root/wallboard/docker/translations folder for multilingual UI. The file names in the translation folder should remain unchanged.
  8. Once the environment configuration is done, copy the wallboard directory into the /root directory of the second machine using the following command. (Only for HA deployment)

    # scp -r /root/wallboard root@<machine-ip>:/root/


  9. Having copied the wallboard directory to the second machine, go to /root/wallboard/docker/environment_variables and update the INSTANCE_NAME variable in synchronizer-variables.env. Set it to any value other than the one set on the first machine. This differentiates the two VMs, since only one synchronizer service is active at a time; if one goes down, the other starts synchronizing data from UCCE/UCCX. (Only for HA deployment)
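
    For example, the variable can be updated in place with sed (the value sync-node-2 is only an illustration; any string that differs from the first machine's value works):

    # sed -i 's/^INSTANCE_NAME=.*/INSTANCE_NAME=sync-node-2/' /root/wallboard/docker/environment_variables/synchronizer-variables.env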
  10. Execute the following commands inside the /root/wallboard directory on both machines (if in HA, otherwise on the only machine in the cluster).

    # chmod 755 install.sh
    # ./install.sh


  11. Run the following command to ensure that all the components are up and running.

    # docker ps
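
    For a more compact view, the standard --format flag of docker ps can be used to list just the container names and their status:

    # docker ps --format "table {{.Names}}\t{{.Status}}"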



Virtual IP Configuration (Only for HA deployment)

Repeat the following steps for all the machines in the HA cluster.

  1. Download the keepalived.sh script and place it in the /root directory.
  2. Give execute permission and execute the script: 

    # chmod +x keepalived.sh
    # ./keepalived.sh


  3. Configure the keep.env file inside the /root/keep-alived folder.

    KEEPALIVED_UNICAST_PEERS: IPs of the machines in the cluster. On each machine, this variable should hold a list of the IPs of all the other machines in the cluster, in the following format:
    "#PYTHON2BASH:['192.168.1.76','192.168.1.80']"

    KEEPALIVED_VIRTUAL_IPS: Virtual IP of the cluster. It should be available in the LAN. For example: 192.168.1.245
    KEEPALIVED_PRIORITY: The priority of the node. Instances with lower numbers have a higher priority. It can take any value from 1-255.
    KEEPALIVED_INTERFACE: Name of the network interface through which your machine is connected to the network. On CentOS, ifconfig or ip addr show will list all the network interfaces and their assigned addresses.
    CLEARANCE_TIMEOUT: Corresponds to the initial startup time, in seconds, of the application monitored by keepalived. A nominal value of 60-120 is good enough.
    KEEPALIVED_ROUTER_ID: Do not change this value.
    SCRIPT_VAR: This script is polled every 2 seconds. Keepalived relinquishes control if the shell script returns a non-zero response. For wallboard, it should be:

    pidof dockerd && wget -O index.html http://localhost/umm/base/index.html
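
    For reference, a complete keep.env for the first node might look like the following (IPs, interface name, and priority are placeholders for your environment; KEEPALIVED_ROUTER_ID is left at its shipped value):

    KEEPALIVED_UNICAST_PEERS="#PYTHON2BASH:['192.168.1.76','192.168.1.80']"
    KEEPALIVED_VIRTUAL_IPS=192.168.1.245
    KEEPALIVED_PRIORITY=100
    KEEPALIVED_INTERFACE=eth0
    CLEARANCE_TIMEOUT=60
    SCRIPT_VAR=pidof dockerd && wget -O index.html http://localhost/umm/base/index.html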


  4. Give execute permission and execute the script:

    # chmod +x keep-command.sh
    # ./keep-command.sh


Adding License

  1. Browse to https://FQDN/umm in your browser (FQDN is the domain name assigned to the IP/VIP).
  2. Click the red warning icon on the right, paste the license in the field, and click save.
  3. Go to https://FQDN to access the wallboard front-end (FQDN is the domain name assigned to the VIP).
  4. Go to Data Service in the left-side menu to define data services. There will be two data services, named Agent Service and Queue Service, with the following URLs.

        https://FQDN/agent-service and https://FQDN/queue-service 

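        Once the data services are defined, you can optionally verify from a shell that both endpoints are reachable (replace FQDN with your actual domain; the -k flag skips certificate validation, which is only appropriate for self-signed certificates):

        # curl -kI https://FQDN/agent-service
        # curl -kI https://FQDN/queue-service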


Uninstalling

To uninstall the application completely, execute the following commands.

# docker-compose -f /root/wallboard/docker/docker-compose.yml down
# docker rm -f keep-alived
# docker system prune
# docker network prune


Troubleshooting

  • To update any changes in the environment variables and/or compose file, simply execute the install script again:

    # /root/wallboard/install.sh


  • To restart the whole stack completely, execute the following commands: 

    # docker-compose -f /root/wallboard/docker/docker-compose.yml down
    # /root/wallboard/install.sh