OpenVidu High Availability: On-premises configuration and administration#

Warning

While in BETA, this section is subject to change. We are working to simplify the configuration and administration of OpenVidu High Availability.

The OpenVidu installer offers an easy way to deploy OpenVidu High Availability on-premises. However, once the deployment is complete, you may need to perform administrative tasks based on your specific requirements, such as changing passwords, specifying custom configurations, and starting or stopping services.

This section provides details on configuration parameters and common administrative tasks for on-premises OpenVidu High Availability deployments.

OpenVidu configuration#

Directory structure#

OpenVidu High Availability is composed of two types of nodes, Master Nodes and Media Nodes. The directory structure is as follows for each type of node:

In each Master Node, the services are installed at /opt/openvidu/ and have a systemd service located at /etc/systemd/system/openvidu.service.

The directory structure of each Master Node is as follows:

|-- /opt/openvidu
    |-- config/
    |-- data/
    |-- deployment-info.yaml
    |-- docker-compose.override.yml
    |-- docker-compose.yml
    |-- .env
    `-- owncert/
  • config/: Contains the configuration files for the services deployed in the Master Node.
  • data/: Contains the data generated by the services deployed with the Master Node.
  • deployment-info.yaml: Contains the deployment information of the Master Node.
  • docker-compose.override.yml: Contains the service with the Default App (OpenVidu Call) deployed with OpenVidu.
  • docker-compose.yml: Contains the main services deployed for the Master Node.
  • .env: Contains parameters managed by multiple services.
  • owncert/: Contains the custom certificates for the Caddy server if you are using your own certificates.

In each Media Node, the services are installed at /opt/openvidu/ and have a systemd service located at /etc/systemd/system/openvidu.service.

The directory structure of each Media Node is as follows:

|-- /opt/openvidu
    |-- config/
    |-- data/
    |-- deployment-info.yaml
    |-- docker-compose.yml
    `-- .env
  • config/: Contains the configuration files for the services deployed with the Media Node.
  • data/: Contains the data generated by the services deployed with the Media Node.
  • deployment-info.yaml: Contains the deployment information of the Media Node.
  • docker-compose.yml: Contains the main services deployed for the Media Node.

Services Configuration#

Some services deployed with OpenVidu have their own configuration files located in the /opt/openvidu/config/ directory, while others are configured in the .env file. Below are the services and their respective configuration files and parameters:

Info

The installer provides default configurations that work out of the box. However, you can modify these configurations to suit your specific requirements.

Configuration Files#

Master Node services:

| Service | Description | Configuration File Location | Reference documentation |
|---|---|---|---|
| Caddy Server | Serves OpenVidu services and handles HTTPS. | /opt/openvidu/config/caddy.yaml | Caddy JSON Structure |
| Loki Service | Used for log aggregation. | /opt/openvidu/config/loki.yaml | Loki Config |
| Promtail Service | Collects logs and sends them to Loki. | /opt/openvidu/config/promtail.yaml | Promtail Config |
| Mimir Service | Used for long-term Prometheus storage. | /opt/openvidu/config/mimir.yaml | Mimir Config |
| Grafana Service | Used for visualizing monitoring data. | /opt/openvidu/config/grafana_config/ | Grafana Config |

Media Node services:

| Service | Description | Configuration File Location | Reference documentation |
|---|---|---|---|
| OpenVidu Server | Manages video rooms. Compatible with the LiveKit configuration and includes its own OpenVidu configuration parameters. | /opt/openvidu/config/livekit.yaml | LiveKit Config |
| Ingress Service | Imports video from other sources into OpenVidu rooms. | /opt/openvidu/config/ingress.yaml | LiveKit Ingress Config |
| Egress Service | Exports video from OpenVidu rooms for recording or streaming. | /opt/openvidu/config/egress.yaml | LiveKit Egress Config |
| Prometheus Service | Used for monitoring. | /opt/openvidu/config/prometheus.yaml | Prometheus Config |
| Promtail Service | Collects logs and sends them to Loki. | /opt/openvidu/config/promtail.yaml | Promtail Config |
| Caddy Service | Lets Media Nodes reach Master Node services in a load-balanced, highly available way. | /opt/openvidu/config/caddy.yaml | Caddy JSON Structure |

Environment variables#

Warning

  • All services internally use the values of MASTER_NODE_X_PRIVATE_IP, where X is the Master Node number (1, 2, 3, and 4) defined in the .env file. These values are the private IP addresses of the Master Nodes. Ensure that these values are static IP addresses and are the same in all the Master Nodes and Media Nodes.
Master Node services and their environment variables:

Grafana Service: Used for visualizing monitoring data.
  • GRAFANA_ADMIN_USERNAME: The username to access the Grafana dashboard.
  • GRAFANA_ADMIN_PASSWORD: The password to access the Grafana dashboard.

OpenVidu Dashboard: Used to visualize OpenVidu Server Rooms, Ingress, and Egress services.
  • DASHBOARD_ADMIN_USERNAME: The username to access the OpenVidu Dashboard.
  • DASHBOARD_ADMIN_PASSWORD: The password to access the OpenVidu Dashboard.

Default App (OpenVidu Call): Default ready-to-use video conferencing app.
  • CALL_PRIVATE_ACCESS: If set to true, the app is private and requires authentication. If set to false, the app is public and accessible without authentication. The user is configured with the CALL_USER and CALL_SECRET parameters.
  • CALL_USER: The username to access the app. Only used if CALL_PRIVATE_ACCESS is set to true.
  • CALL_SECRET: The password to access the app. Only used if CALL_PRIVATE_ACCESS is set to true.
  • CALL_ADMIN_USERNAME: The username to access the OpenVidu Call Admin Panel.
  • CALL_ADMIN_SECRET: The password to access the OpenVidu Call Admin Panel.
  • LIVEKIT_API_KEY: The API key to access the LiveKit service.
  • LIVEKIT_API_SECRET: The API secret to access the LiveKit service.

Redis Service: Used as a shared memory database for OpenVidu and the Ingress/Egress services.
  • REDIS_PASSWORD: The password for the Redis service.

If you need to change the Redis password after the installation, check the advanced configuration section.

MinIO Service: Used for storing recordings.
  • MINIO_ACCESS_KEY: The access key for the MinIO service.
  • MINIO_SECRET_KEY: The secret key for the MinIO service.

If you need to change the MinIO access key and secret key after the installation, check the advanced configuration section.

MongoDB Service: Used for storing analytics and monitoring data.
  • MONGO_ADMIN_USERNAME: The username to access the MongoDB database.
  • MONGO_ADMIN_PASSWORD: The password to access the MongoDB database.
  • MONGO_REPLICA_SET_KEY: The replica set key for the MongoDB database.

If you need to change the MongoDB username and password after the installation, check the advanced configuration section.

OpenVidu v2 compatibility Service: Used to enable compatibility with OpenVidu v2. Check the OpenVidu v2 Compatibility Configuration Parameters to see all the available parameters.

Warning

  • All services internally use the values of MASTER_NODE_X_PRIVATE_IP, where X is the Master Node number (1, 2, 3, and 4) defined in the .env file. These values are the private IP addresses of the Master Nodes. Ensure that these values are static IP addresses and are the same in all the Master Nodes and Media Nodes.
  • All services internally use the MEDIA_NODE_PRIVATE_IP value defined in the .env file. This value is the private IP address of the Media Node. Ensure that this value is a static IP address; it must be different from Media Node to Media Node.
Media Node services and their environment variables:

OpenVidu Server: Manages video rooms.
  • MASTER_NODE_X_PRIVATE_IP: The private IP addresses of all Master Nodes. Used to connect to Master Node services.
  • MEDIA_NODE_PRIVATE_IP: The private IP address of the Media Node. On startup, the OpenVidu Server registers itself with this IP address so that the Caddy server on the Master Nodes can route requests to the Media Node.

Ingress Service: Imports video from other sources into OpenVidu rooms.
  • MASTER_NODE_X_PRIVATE_IP: The private IP addresses of all Master Nodes. Used to connect to the Master Node services.

Egress Service: Exports video from OpenVidu rooms for recording or streaming.
  • MASTER_NODE_X_PRIVATE_IP: The private IP addresses of all Master Nodes. Used to connect to the Master Node services.

Prometheus Service: Used for monitoring.
  • MASTER_NODE_X_PRIVATE_IP: The private IP addresses of all Master Nodes. Used to connect to the Master Node services.

Promtail Service: Collects logs and sends them to Loki.
  • MASTER_NODE_X_PRIVATE_IP: The private IP addresses of all Master Nodes. Used to connect to the Master Node services.
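Since every node must agree on these private IP addresses, a quick way to review them is to grep the .env file on each node and compare the outputs. A minimal sketch; the ENV_FILE override is an assumption added for convenience:

```shell
#!/bin/bash
# Sketch: list the Master Node private IPs defined in an OpenVidu .env file.
# ENV_FILE is an assumed convenience override; it defaults to the deployment's .env.
ENV_FILE="${ENV_FILE:-/opt/openvidu/.env}"
grep '^MASTER_NODE_[0-9]*_PRIVATE_IP' "$ENV_FILE" 2>/dev/null \
    || echo "No MASTER_NODE_X_PRIVATE_IP entries found in $ENV_FILE"
```

Running this on every Master Node and Media Node and comparing the outputs quickly reveals any mismatched addresses.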

OpenVidu Configuration Parameters#

OpenVidu Server is built on top of LiveKit and offers extra configuration options. You can find the configuration file at /opt/openvidu/config/livekit.yaml. Additional parameters for configuring OpenVidu Server are:

openvidu:
    license: <YOUR_OPENVIDU_PRO_LICENSE> # (1)
    cluster_id: <YOUR_DOMAIN_NAME> # (2)
    analytics: # (3)
        enabled: true # (4)
        interval: 10s # (5)
        expiration: 768h # (6)
        mongo_url: <MONGO_URL> # (7)
    rtc:
        engine: pion # (8)
    mediasoup:
        debug: "" # (9)
        log_level: error # (10)
        log_tags: [info, ice, rtp, rtcp, message] # (11)
  1. Specify your OpenVidu Pro license key. If you don't have one, you can request one here.
  2. The cluster ID for the OpenVidu deployment. It is configured by default by OpenVidu Installer with the domain name of the deployment.
  3. The analytics configuration should be defined at the openvidu level in the livekit.yaml file.
  4. This must be set to true to send analytics data to MongoDB. If set to false, no analytics data will be sent.
  5. Time interval to send analytics data to MongoDB.
  6. Time to keep the analytics data in MongoDB. In this example, it is set to 32 days.
  7. MongoDB URL. This is the connection string to the MongoDB database where the analytics data will be stored.
  8. The rtc.engine parameter is set to pion by default. This is the WebRTC engine used by OpenVidu. Depending on your requirements, you can use:
    • pion
    • mediasoup
  9. Global toggle to enable debugging logs from MediaSoup. In most debugging cases, using just an asterisk ("*") here is enough, but this can be fine-tuned for specific log levels. More info.
    • Default is an empty string.
  10. Logging level for logs generated by MediaSoup. More info.
    • Valid values are: debug, warn, error, none.
    • Default is error.
  11. Comma-separated list of log tag names, for debugging. More info.
    • Valid values are: info, ice, dtls, rtp, srtp, rtcp, rtx, bwe, score, simulcast, svc, sctp, message.
    • Default is [info, ice, rtp, rtcp, message].

OpenVidu v2 Compatibility Configuration Parameters#

If the v2compatibility profile is included in COMPOSE_PROFILES in the .env file, you will need to set the following parameters in the .env file for the OpenVidu V2 Compatibility service:

  • V2COMPAT_OPENVIDU_SECRET: OpenVidu secret used to authenticate the OpenVidu V2 Compatibility service. In the .env file, this value is defined with LIVEKIT_API_SECRET. Default: the value of LIVEKIT_API_SECRET in the .env file.
  • V2COMPAT_OPENVIDU_WEBHOOK: true to enable the OpenVidu Webhook service, false otherwise. Valid values are true or false. Default: false.
  • V2COMPAT_OPENVIDU_WEBHOOK_ENDPOINT: HTTP(S) endpoint to send OpenVidu V2 Webhook events. Must be a valid URL. Example: V2COMPAT_OPENVIDU_WEBHOOK_ENDPOINT=http://myserver.com/webhook. No default.
  • V2COMPAT_OPENVIDU_WEBHOOK_HEADERS: JSON array of headers to send in the OpenVidu V2 Webhook events. Example: V2COMPAT_OPENVIDU_WEBHOOK_HEADERS=["Content-Type: application/json"]. Default: [].
  • V2COMPAT_OPENVIDU_WEBHOOK_EVENTS: Comma-separated list of OpenVidu V2 Webhook events to send. Example: V2COMPAT_OPENVIDU_WEBHOOK_EVENTS=sessionCreated,sessionDestroyed. Default: all available events (sessionCreated, sessionDestroyed, participantJoined, participantLeft, webrtcConnectionCreated, webrtcConnectionDestroyed, recordingStatusChanged, signalSent).
  • V2COMPAT_OPENVIDU_PRO_AWS_S3_BUCKET: S3 bucket where recording files are stored. Default: openvidu.
  • V2COMPAT_OPENVIDU_PRO_AWS_S3_SERVICE_ENDPOINT: S3 endpoint where recording files are stored. Default: http://localhost:9100.
  • V2COMPAT_OPENVIDU_PRO_AWS_ACCESS_KEY: Access key of the S3 bucket where recording files are stored. No default.
  • V2COMPAT_OPENVIDU_PRO_AWS_SECRET_KEY: Secret key of the S3 bucket where recording files are stored. No default.
  • V2COMPAT_OPENVIDU_PRO_AWS_REGION: Region of the S3 bucket where recording files are stored. Default: us-east-1.
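Putting the parameters above together, a .env fragment enabling OpenVidu v2 webhooks might look like this (the endpoint, headers, and event list are the illustrative values from the examples above, not required defaults):

```shell
# Illustrative .env fragment for OpenVidu v2 webhooks (values are examples)
V2COMPAT_OPENVIDU_WEBHOOK=true
V2COMPAT_OPENVIDU_WEBHOOK_ENDPOINT="http://myserver.com/webhook"
V2COMPAT_OPENVIDU_WEBHOOK_HEADERS=["Content-Type: application/json"]
V2COMPAT_OPENVIDU_WEBHOOK_EVENTS=sessionCreated,sessionDestroyed
```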

Starting, stopping, and restarting OpenVidu#

For every OpenVidu node, a systemd service is created during the installation process. This service allows you to start, stop, and restart the OpenVidu services easily.

Start OpenVidu

sudo systemctl start openvidu

Stop OpenVidu

sudo systemctl stop openvidu

Restart OpenVidu

sudo systemctl restart openvidu

Checking the status of services#

You can check the status of the OpenVidu services using the following command:

cd /opt/openvidu/
docker compose ps

Depending on the node type, you will see different services running.

On a Master Node, the services are operating correctly if you see an output similar to the following and there are no restarts from any of the services:

NAME                       IMAGE                                              COMMAND                  SERVICE                    CREATED          STATUS
app                        docker.io/openvidu/openvidu-call                   "docker-entrypoint.s…"   app                        12 seconds ago   Up 10 seconds
caddy                      docker.io/openvidu/openvidu-pro-caddy              "/bin/caddy run --co…"   caddy                      12 seconds ago   Up 10 seconds
dashboard                  docker.io/openvidu/openvidu-pro-dashboard          "./openvidu-dashboard"   dashboard                  12 seconds ago   Up 10 seconds
grafana                    docker.io/grafana/grafana                          "/run.sh"                grafana                    11 seconds ago   Up 8 seconds
loki                       docker.io/grafana/loki                             "/usr/bin/loki -conf…"   loki                       11 seconds ago   Up 9 seconds
mimir                      docker.io/grafana/mimir                            "/bin/mimir -config.…"   mimir                      11 seconds ago   Up 9 seconds
minio                      docker.io/bitnami/minio                            "/opt/bitnami/script…"   minio                      11 seconds ago   Up 9 seconds
mongo                      docker.io/mongo                                    "docker-entrypoint.s…"   mongo                      11 seconds ago   Up 9 seconds
openvidu-v2compatibility   docker.io/openvidu/openvidu-v2compatibility        "/bin/server"            openvidu-v2compatibility   12 seconds ago   Up 10 seconds
operator                   docker.io/openvidu/openvidu-operator               "/bin/operator"          operator                   12 seconds ago   Up 10 seconds
promtail                   docker.io/grafana/promtail                         "/usr/bin/promtail -…"   promtail                   11 seconds ago   Up 9 seconds
redis-sentinel             docker.io/redis                                    "docker-entrypoint.s…"   redis-sentinel             10 seconds ago   Up 10 seconds
redis-server               docker.io/redis                                    "docker-entrypoint.s…"   redis-server               10 seconds ago   Up 10 seconds

On a Media Node, the services are operating correctly if you see an output similar to the following and there are no restarts from any of the services:

NAME         IMAGE                                          COMMAND                  SERVICE      CREATED          STATUS
caddy        docker.io/openvidu/openvidu-caddy:main         "/bin/caddy run --co…"   caddy        53 seconds ago   Up 53 seconds
egress       docker.io/livekit/egress                       "/entrypoint.sh"         egress       53 seconds ago   Up 51 seconds
ingress      docker.io/livekit/ingress                      "ingress"                ingress      53 seconds ago   Up 52 seconds
openvidu     docker.io/openvidu/openvidu-server-pro         "/livekit-server --c…"   openvidu     53 seconds ago   Up 52 seconds
prometheus   docker.io/prom/prometheus                      "/bin/prometheus --c…"   prometheus   53 seconds ago   Up 51 seconds
promtail     docker.io/grafana/promtail                     "/usr/bin/promtail -…"   promtail     53 seconds ago   Up 52 seconds

Checking logs#

If any of the services are not working as expected, you can check the logs of the services using the following command:

cd /opt/openvidu/
docker compose logs <service-name>

Replace <service-name> with the name of the service you want to check. For example, to check the logs of the OpenVidu Server, use the following command:

cd /opt/openvidu/
docker compose logs openvidu

To check the logs of all services, use the following command:

cd /opt/openvidu/
docker compose logs

You can also review your logs using the Grafana dashboard provided with OpenVidu. To access it, go to https://<your-domain.com>/grafana and use the credentials located in /opt/openvidu/.env to log in. Once inside, navigate to the "Home" section, select "Dashboard", and then click on:

  • "OpenVidu > OpenVidu Cluster Nodes Logs": To check the logs of the OpenVidu services organized per node.
  • "OpenVidu > OpenVidu Cluster Services Logs": To check the logs of the OpenVidu services organized per service.
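The Grafana credentials mentioned above can be read straight from the .env file. A minimal sketch; the ENV_FILE override is an assumption added for convenience:

```shell
#!/bin/bash
# Sketch: print the Grafana admin credentials used to log in to the dashboard.
# ENV_FILE is an assumed convenience override; it defaults to the deployment's .env.
ENV_FILE="${ENV_FILE:-/opt/openvidu/.env}"
grep -E '^GRAFANA_ADMIN_(USERNAME|PASSWORD)=' "$ENV_FILE" 2>/dev/null || true
```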

Adding and Removing Media Nodes#

Adding and removing Media Nodes is straightforward. You can add new Media Nodes to the cluster to increase the capacity of your OpenVidu deployment. Similarly, you can remove Media Nodes to reduce the capacity of your deployment.

Adding Media Nodes#

To add a new Media Node, simply spin up a new VM and run the OpenVidu installer script to integrate it into the existing cluster. Run the installation command on the new Media Node.

Warning

This installation command should be the same as the one you used to install the first Media Node, with the same parameters and values. If you have changed the .env file on the Master Nodes since then, update the new Media Node's .env file or the installation command with the new values.

To automate the configuration of new nodes, check this section.

Removing Media Nodes Gracefully#

To stop a Media Node gracefully, you need to stop the containers openvidu, ingress, and egress with a SIGINT signal. Here is a simple script that you can use to stop all these containers gracefully:

#!/bin/bash
# Stop OpenVidu, Ingress, and Egress containers gracefully (1)
docker container kill --signal=SIGINT openvidu || true
docker container kill --signal=SIGINT ingress || true
docker container kill --signal=SIGINT egress || true

# Wait for the containers to stop (2)
while [ "$(docker inspect -f '{{.State.Running}}' openvidu 2>/dev/null)" = "true" ] || \
    [ "$(docker inspect -f '{{.State.Running}}' ingress 2>/dev/null)" = "true" ] || \
    [ "$(docker inspect -f '{{.State.Running}}' egress 2>/dev/null)" = "true" ]; do
    echo "Waiting for containers to stop..."
    sleep 5
done
  1. This script stops the OpenVidu, Ingress, and Egress containers gracefully. The || true at the end of each command prevents the script from failing if the container is not running.
  2. This script waits for the containers to stop before exiting.

When all the containers are stopped, you can then stop the systemd service and remove the VM:

sudo systemctl stop openvidu

Removing Media Nodes Forcefully#

To remove a Media Node forcefully, without considering the rooms, ingress, and egress processes running in the node, you can simply stop the OpenVidu service in the Media Node and delete the VM.

sudo systemctl stop openvidu

Advanced Configuration#

This section addresses advanced configuration scenarios for customizing your OpenVidu High Availability deployment. It includes automating the installation with personalized settings, enabling or disabling OpenVidu modules, and modifying global parameters such as the domain name, passwords, and API keys.

Automatic installation and configuration of nodes#

For environments like the cloud, where instances are frequently spun up and down, automating the application of custom configurations to Master Nodes and Media Nodes may be useful for you.

If you need to apply custom configurations to your Master Nodes, you can use the following script template:

# 1. First install the Master Node (1)
sh <(curl -fsSL http://get.openvidu.io/pro/ha/latest/install_ov_master_node.sh) \
    --node-role='master-node' \
    ... # Add the rest of the arguments (2)

# 2. Add custom configurations (3)
######### APPLY CUSTOM CONFIGURATIONS #########
# If you want to apply any modification to the configuration files
# of the OpenVidu services at /opt/openvidu/config, you can do it here.

# Example 1: Change Minio public port
yq eval '.apps.http.servers.minio.listen[0] = ":9001"' -i /opt/openvidu/config/caddy.yaml

# Example 2: Disable the /dashboard route in Caddy
yq eval 'del(.apps.http.servers.public.routes[] |
  select(.handle[]?.handler == "subroute" and
  .handle[].routes[].handle[].strip_path_prefix == "/dashboard"))' \
  -i /opt/openvidu/config/caddy.yaml

# Example 3: Enable webhooks for OpenVidu V2 compatibility
sed -i \
    's|V2COMPAT_OPENVIDU_WEBHOOK_ENDPOINT=.*|V2COMPAT_OPENVIDU_WEBHOOK_ENDPOINT="http://new-endpoint.example.com/webhook"|' \
    /opt/openvidu/.env

######### END CUSTOM CONFIGURATIONS #########

# 3. Start OpenVidu (4)
systemctl start openvidu
  1. First, install the Master Node using the OpenVidu installer. Check the installation guide for more information.
  2. Add the parameters you need to install the Master Node. You can find all the available parameters in the installation guide.
  3. Add the custom configurations you need to apply to the Master Node services. You can use yq or other tools to modify the configuration files. You can find more information about yq here.
  4. Start the Master Node.

Note

In case you want to deploy a specific version, just replace latest with the desired version. For example: 3.0.0.

Just install the Master Node first with the installer and then run some extra commands to apply the custom configurations. This way, you can automate the process of installing the Master Node and applying custom configurations.

If you need to apply custom configurations to the Media Node, you can use the following script template:

# 1. First install the Media Node (1)
sh <(curl -fsSL http://get.openvidu.io/pro/ha/latest/install_ov_media_node.sh) \
    --node-role='media-node' \
    ... # Add the rest of the arguments (2)

# 2. Add custom configurations (3)
######### APPLY CUSTOM CONFIGURATIONS #########
# If you want to apply any modification to the configuration files
# of the OpenVidu services at /opt/openvidu, you can do it in this section.

# Example 1: Change public IP address announced by OpenVidu for WebRTC connections
yq eval '.rtc.node_ip = "1.2.3.4"' \
    -i /opt/openvidu/config/livekit.yaml

# Example 2: Add a webhook to LiveKit
yq eval '.webhook.urls += ["http://new-endpoint.example.com/webhook"]' \
    -i /opt/openvidu/config/livekit.yaml

######### END CUSTOM CONFIGURATIONS #########

# 3. Start OpenVidu (4)
systemctl start openvidu
  1. First, install the Media Node using the OpenVidu installer. Check the installation guide for more information.
  2. Add the parameters you need to install the Media Node. You can find all the available parameters in the installation guide.
  3. Add the custom configurations you need to apply to the Media Node services. You can use yq or other tools to modify the configuration files. You can find more information about yq here.
  4. Start the Media Node.

Note

In case you want to deploy a specific version, just replace latest with the desired version. For example: 3.0.0.

Just install the Media Node first with the installer and then run some extra commands to apply the custom configurations. This way, you can automate the process of installing the Media Node and applying custom configurations.

Enabling webhooks#

A common use case for custom configurations is enabling webhooks in OpenVidu. In every Media Node, add the following parameter to the config/livekit.yaml file:

webhook:
    api_key: <LIVEKIT_API_KEY>
    urls:
        ... # Other possible URLs
        - <YOUR_WEBHOOK_URL>

In case you want to automate the installation and configuration of OpenVidu with webhooks, you can use the script template provided in the automatic installation and configuration of nodes section, specifically this command:

yq eval '.webhook.urls += ["<YOUR_WEBHOOK_URL>"]' \
    -i /opt/openvidu/config/livekit.yaml

Replace <YOUR_WEBHOOK_URL> with the URL where the webhook will send the data.

For this to work, you need to have yq installed in your system. You can find more information about yq here. Also, remember to restart every Media Node after applying the configuration changes.

Enabling OpenVidu v2 webhooks (v2compatibility)#

In case you are using the OpenVidu V2 Compatibility service, the procedure is different to have OpenVidu v2 webhooks working.

In every Master Node, add the following parameter to the .env file:

V2COMPAT_OPENVIDU_WEBHOOK_ENDPOINT="<YOUR_WEBHOOK_URL>"

In case you want to automate the installation and configuration of OpenVidu with webhooks, you can use the script template provided in the automatic installation and configuration of nodes section, specifically this command:

sed -i 's|V2COMPAT_OPENVIDU_WEBHOOK_ENDPOINT=.*|V2COMPAT_OPENVIDU_WEBHOOK_ENDPOINT="<YOUR_WEBHOOK_URL>"|' /opt/openvidu/.env

Replace <YOUR_WEBHOOK_URL> with the URL where the webhook will send the data.

For this to work, you need to restart every Master Node after applying the configuration changes.

Enabling and Disabling OpenVidu Modules#

The COMPOSE_PROFILES parameter in the .env file in Master and Media Nodes allows you to enable or disable specific modules in OpenVidu. The following modules can be enabled or disabled: observability, v2compatibility, and app.

In case you have installed OpenVidu with the observability module, you just need to enable the observability module in the .env file in all nodes.

Otherwise, you can follow these steps to enable the observability module:

  1. Stop all Master Nodes and all Media Nodes, and backup the deployment

    sudo systemctl stop openvidu
    sudo cp -r /opt/openvidu/ /opt/openvidu_backup/
    
  2. In the Master Nodes, update the .env with the following changes:

    Add to the COMPOSE_PROFILES the observability module. Also, make sure to set up the GRAFANA_ADMIN_USERNAME and GRAFANA_ADMIN_PASSWORD parameters.

    If you have only the observability module enabled, your .env file should have the following environment variables:

    GRAFANA_ADMIN_USERNAME="<GRAFANA_ADMIN_USERNAME>"
    GRAFANA_ADMIN_PASSWORD="<GRAFANA_ADMIN_PASSWORD>"
    
    COMPOSE_PROFILES="observability"
    
  3. In the Media Nodes, enable the observability module:

    Add to the COMPOSE_PROFILES the observability module in the .env file. If you have only the observability module enabled, your .env file should have the following environment variable:

    COMPOSE_PROFILES="observability"
    

    Then, add the following parameter in the config/livekit.yaml file:

    prometheus_port: 6789
    
  4. Start all Master Nodes and Media Nodes

    sudo systemctl start openvidu
    
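The .env edit in steps 2 and 3 can be scripted when you manage many nodes. A minimal sketch, assuming the COMPOSE_PROFILES line always exists and is double-quoted; add_profile is a made-up helper name:

```shell
#!/bin/bash
# Sketch: add a profile (e.g. "observability") to COMPOSE_PROFILES in a .env file,
# guarding against adding it twice. add_profile is a hypothetical helper name.
add_profile() {
    env_file="$1"; profile="$2"
    grep -q "$profile" "$env_file" && return 0  # already enabled
    if grep -q '^COMPOSE_PROFILES=""' "$env_file"; then
        sed -i "s/^COMPOSE_PROFILES=\"\"/COMPOSE_PROFILES=\"$profile\"/" "$env_file"
    else
        sed -i "s/^COMPOSE_PROFILES=\"/COMPOSE_PROFILES=\"$profile,/" "$env_file"
    fi
}

[ -f /opt/openvidu/.env ] && add_profile /opt/openvidu/.env observability || true
```

Remember to restart OpenVidu on each node after the change.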

Disabling the observability module

If you have the observability module enabled, and you want to disable it, just remove the observability module from the COMPOSE_PROFILES parameter in the .env file of all nodes.

In case you have installed OpenVidu with the v2compatibility module, you just need to enable the v2compatibility module in the .env file in all nodes.

Otherwise, you can follow these steps to enable the v2compatibility module:

  1. Stop all Master Nodes, all Media Nodes, and backup the deployment

    sudo systemctl stop openvidu
    sudo cp -r /opt/openvidu/ /opt/openvidu_backup/
    
  2. In the Master Nodes, update the .env with the following changes:

    Add to the COMPOSE_PROFILES the v2compatibility module.

    If you have only the v2compatibility module enabled, your .env file should have the following environment variable:

    COMPOSE_PROFILES="v2compatibility"
    
  3. In the Media Nodes, update the LiveKit configuration to send webhooks to the V2 Compatibility service

    Just add the following parameter in the config/livekit.yaml file:

    webhook:
        api_key: "<LIVEKIT_API_KEY>"
        urls:
            - http://localhost:4443/livekit/webhook
    

    Where <LIVEKIT_API_KEY> is the LIVEKIT_API_KEY parameter in the .env file.

    Note that the URL is http://localhost:4443 because an internal caddy proxy will balance the requests to all the V2 Compatibility services running in the Master Nodes.

  4. Start all Master Nodes and Media Nodes

    sudo systemctl start openvidu
    

Disabling the v2compatibility module

If you have the v2compatibility module enabled, and you want to disable it, just remove the v2compatibility module from the COMPOSE_PROFILES parameter in the .env file of all nodes.

In case you have installed OpenVidu without the app module, you just need to enable the app module in the .env file in all nodes.

Otherwise, you can follow these steps to enable the app module:

  1. Stop all Master Nodes, all Media Nodes, and backup the deployment

    sudo systemctl stop openvidu
    sudo cp -r /opt/openvidu/ /opt/openvidu_backup/
    
  2. In all Master Nodes, update the .env with the following changes:

    Add to the COMPOSE_PROFILES the app module.

    If you have only the app module enabled, your .env file should have the following environment variable:

    COMPOSE_PROFILES="app"
    
  3. In the Media Nodes, update the LiveKit configuration to send webhooks to the Default App

    Just add the following parameter in the config/livekit.yaml file:

    webhook:
        api_key: "<LIVEKIT_API_KEY>"
        urls:
            - http://localhost:6080/api/webhook
    

    Where <LIVEKIT_API_KEY> is the LIVEKIT_API_KEY parameter in the .env file.

    Note that the URL is http://localhost:6080 because an internal caddy proxy will balance the requests to all the Default App services running in the Master Nodes.

  4. Start all Master Nodes and Media Nodes

    sudo systemctl start openvidu
    

Disabling the app module

If you have the app module enabled, and you want to disable it, just remove the app module from the COMPOSE_PROFILES parameter in the .env file of all nodes.
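Removing a module from COMPOSE_PROFILES can likewise be scripted across nodes. A minimal sketch, assuming a double-quoted, comma-separated COMPOSE_PROFILES line; remove_profile is a made-up helper name:

```shell
#!/bin/bash
# Sketch: remove a profile (e.g. "app") from COMPOSE_PROFILES in a .env file.
# remove_profile is a hypothetical helper; it handles leading/trailing commas.
remove_profile() {
    env_file="$1"; profile="$2"
    sed -i -E "/^COMPOSE_PROFILES=/ s/(,$profile|$profile,|$profile)//" "$env_file"
}

[ -f /opt/openvidu/.env ] && remove_profile /opt/openvidu/.env app || true
```

Remember to restart OpenVidu on each node after the change.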

Global configuration changes#

Some configuration parameters may require modifying multiple configuration files. Below are some examples of advanced configurations and how to apply them:

Info

Usually, this is not needed because the installer takes care of generating all of these parameters. However, it is necessary if any password, credential, or domain change is needed.

Danger

Advanced configurations should be performed with caution. Incorrect configurations can lead to service failures or unexpected behavior.

Before making any changes, make sure to back up your installation by creating a snapshot of your server or by copying the /opt/openvidu/ directory to a safe location. For example:

sudo cp -r /opt/openvidu/ /opt/openvidu_backup/

To change all occurrences of the domain or public IP address in the configuration files, follow these steps:

  1. Stop OpenVidu in all Master Nodes and all Media Nodes and back up the deployment

    sudo systemctl stop openvidu
    sudo cp -r /opt/openvidu/ /opt/openvidu_backup/
    
  2. Find the current locations of DOMAIN_OR_PUBLIC_IP in your Master Nodes

    With the following commands, you can find all occurrences of the current domain or public IP address in the configuration files:

    sudo su
    cd /opt/openvidu/
    CURRENT_DOMAIN_OR_PUBLIC_IP="$(grep '^DOMAIN_OR_PUBLIC_IP' /opt/openvidu/.env | cut -d '=' -f 2)"
    grep --exclude-dir=data -IHnr "$CURRENT_DOMAIN_OR_PUBLIC_IP" .
    

    Warning

    Keep the value of CURRENT_DOMAIN_OR_PUBLIC_IP as you will need it to update the configuration files in the Media Nodes.

    The output should look similar to the following:

    ./.env:DOMAIN_OR_PUBLIC_IP=<CURRENT_DOMAIN_OR_PUBLIC_IP>
    ./config/caddy.yaml:        - certificate: /owncert/<CURRENT_DOMAIN_OR_PUBLIC_IP>.cert
    ./config/caddy.yaml:          key: /owncert/<CURRENT_DOMAIN_OR_PUBLIC_IP>.key
    ./config/caddy.yaml:            - <CURRENT_DOMAIN_OR_PUBLIC_IP>
    ./config/caddy.yaml:                    - <CURRENT_DOMAIN_OR_PUBLIC_IP>
    ./config/caddy.yaml:                        - <CURRENT_DOMAIN_OR_PUBLIC_IP>
    ./config/caddy.yaml:                    - <CURRENT_DOMAIN_OR_PUBLIC_IP>
    ./config/caddy.yaml:                        - <CURRENT_DOMAIN_OR_PUBLIC_IP>
    ./config/caddy.yaml:                  - <CURRENT_DOMAIN_OR_PUBLIC_IP>
    ./config/promtail.yaml:          cluster_id: <CURRENT_DOMAIN_OR_PUBLIC_IP>
    

    Note

    Don't worry if some values are different in your output. It varies depending on the parameters you've used to install OpenVidu.

  3. Update the Following Files in all your Master Nodes

    Based on the output from the previous step, update the following files with the new domain or public IP address:

    • .env
    • config/caddy.yaml
    • config/promtail.yaml
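    The edits above can be applied in one pass with sed. A sketch over temporary copies, to be adapted to the real files under /opt/openvidu/ once verified (file contents and domains are illustrative):

    ```shell
    # Hypothetical sketch: replace the old domain with the new one in the files
    # found in step 2. Note: sed treats "." as "any character"; for typical
    # domains this is harmless, but escape the dots if you need an exact match.
    WORKDIR="$(mktemp -d)"
    echo 'DOMAIN_OR_PUBLIC_IP=old.example.com' > "$WORKDIR/.env"
    echo '          cluster_id: old.example.com' > "$WORKDIR/promtail.yaml"

    CURRENT_DOMAIN_OR_PUBLIC_IP="old.example.com"
    NEW_DOMAIN_OR_PUBLIC_IP="new.example.com"
    for f in "$WORKDIR/.env" "$WORKDIR/promtail.yaml"; do
        sed -i "s/$CURRENT_DOMAIN_OR_PUBLIC_IP/$NEW_DOMAIN_OR_PUBLIC_IP/g" "$f"
    done

    RESULT="$(cat "$WORKDIR/.env")"
    rm -rf "$WORKDIR"
    echo "$RESULT"
    ```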
  4. Verify the changes in all your Master Nodes

    These commands will list all occurrences of the new DOMAIN_OR_PUBLIC_IP in the configuration files. The output should match the locations found in the initial search but with the new domain or public IP address.

    sudo su
    cd /opt/openvidu/
    NEW_DOMAIN_OR_PUBLIC_IP="<NEW_DOMAIN_OR_PUBLIC_IP>"
    grep --exclude-dir=data -IHnr "$NEW_DOMAIN_OR_PUBLIC_IP" .
    
  5. Make sure to repeat steps 2 to 4 on all your Master Nodes

  6. Find the current locations of CURRENT_DOMAIN_OR_PUBLIC_IP in your Media Nodes

    With the CURRENT_DOMAIN_OR_PUBLIC_IP value obtained in step 2, you can find all occurrences of the current domain or public IP address in the configuration files of the Media Nodes. To do this, connect to each Media Node and run the following command:

    sudo su
    cd /opt/openvidu/
    CURRENT_DOMAIN_OR_PUBLIC_IP="<CURRENT_DOMAIN_OR_PUBLIC_IP>"
    grep --exclude-dir=data -IHnr "$CURRENT_DOMAIN_OR_PUBLIC_IP" .
    

    The output should look similar to the following:

    ./config/promtail.yaml:          cluster_id: <CURRENT_DOMAIN_OR_PUBLIC_IP>
    ./config/livekit.yaml:    cluster_id: <CURRENT_DOMAIN_OR_PUBLIC_IP>
    ./config/livekit.yaml:    rtmp_base_url: rtmps://<CURRENT_DOMAIN_OR_PUBLIC_IP>:1935/rtmp
    ./config/livekit.yaml:    whip_base_url: https://<CURRENT_DOMAIN_OR_PUBLIC_IP>/whip
    
  7. Update the Following Files in your Media Nodes

    Based on the output from the previous step, update the following files with the new domain or public IP address:

    • config/promtail.yaml
    • config/livekit.yaml
  8. Verify the changes in your Media Nodes

    These commands will list all occurrences of the new DOMAIN_OR_PUBLIC_IP in the configuration files of the Media Nodes. The output should match the locations found in the initial search but with the new domain or public IP address.

    sudo su
    cd /opt/openvidu/
    NEW_DOMAIN_OR_PUBLIC_IP="<NEW_DOMAIN_OR_PUBLIC_IP>"
    grep --exclude-dir=data -IHnr "$NEW_DOMAIN_OR_PUBLIC_IP" .
    
  9. Make sure to repeat steps 6 to 8 on all your Media Nodes

  10. Start all Master Nodes and Media Nodes

    sudo systemctl start openvidu
    

Some notes on changing the DOMAIN_OR_PUBLIC_IP parameter:

  • If you are using your own certificates, you need to place the new ones at /opt/openvidu/owncert/<NEW_DOMAIN_OR_PUBLIC_IP>.cert and /opt/openvidu/owncert/<NEW_DOMAIN_OR_PUBLIC_IP>.key.
  • Make sure your new domain is pointing correctly to the machine where OpenVidu is installed.

To change the REDIS_PASSWORD parameter, follow these steps:

  1. Stop OpenVidu in all the Master Nodes and all Media Nodes and back up the deployment

    sudo systemctl stop openvidu
    sudo cp -r /opt/openvidu/ /opt/openvidu_backup/
    
  2. Replace in all Master Nodes the REDIS_PASSWORD in the .env file with your new value

    Warning

    Keep the previous value of REDIS_PASSWORD as you will need it to update the configuration files in the Media Nodes. We will refer to this value as <CURRENT_REDIS_PASSWORD>.

  3. Make sure to replace the REDIS_PASSWORD in the .env file of all your Master Nodes
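    Replacing the value in .env can be scripted if you prefer. A sketch on a temporary file (passwords are placeholders; point sed at /opt/openvidu/.env once verified):

    ```shell
    # Hypothetical sketch: swap the REDIS_PASSWORD line in .env with sed.
    ENV_FILE="$(mktemp)"
    echo 'REDIS_PASSWORD=old-secret' > "$ENV_FILE"

    # Replace the whole line so the old value cannot linger
    sed -i 's/^REDIS_PASSWORD=.*/REDIS_PASSWORD=new-secret/' "$ENV_FILE"

    RESULT="$(cat "$ENV_FILE")"
    rm -f "$ENV_FILE"
    echo "$RESULT"
    ```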

  4. Find the current locations of REDIS_PASSWORD in your Media Nodes

    With the CURRENT_REDIS_PASSWORD value obtained in step 2, you can find all occurrences of the current Redis password in the configuration files of the Media Nodes. To do this, connect to each Media Node and run the following command:

    sudo su
    cd /opt/openvidu/
    CURRENT_REDIS_PASSWORD="<CURRENT_REDIS_PASSWORD>"
    grep --exclude-dir=data -IHnr "$CURRENT_REDIS_PASSWORD" .
    

    The output should look similar to the following:

    ./config/egress.yaml:    password: <CURRENT_REDIS_PASSWORD>
    ./config/ingress.yaml:    password: <CURRENT_REDIS_PASSWORD>
    ./config/livekit.yaml:    password: <CURRENT_REDIS_PASSWORD>
    
  5. Update the Following Files in your Media Nodes

    Based on the output from the previous step, update the following files with the new Redis password:

    • config/egress.yaml
    • config/ingress.yaml
    • config/livekit.yaml
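    The three files listed above can be updated in one pass. A sketch on temp copies (passwords are placeholders; adapt the paths to /opt/openvidu/config/ once verified):

    ```shell
    # Hypothetical sketch: update the Redis password in egress.yaml,
    # ingress.yaml and livekit.yaml in a single sed invocation.
    WORKDIR="$(mktemp -d)"
    for f in egress.yaml ingress.yaml livekit.yaml; do
        echo '    password: old-secret' > "$WORKDIR/$f"
    done

    CURRENT_REDIS_PASSWORD="old-secret"
    NEW_REDIS_PASSWORD="new-secret"
    sed -i "s/$CURRENT_REDIS_PASSWORD/$NEW_REDIS_PASSWORD/g" \
        "$WORKDIR/egress.yaml" "$WORKDIR/ingress.yaml" "$WORKDIR/livekit.yaml"

    RESULT="$(cat "$WORKDIR/livekit.yaml")"
    rm -rf "$WORKDIR"
    echo "$RESULT"
    ```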
  6. Verify the changes in your Media Nodes

    These commands will list all occurrences of the new REDIS_PASSWORD in the configuration files of the Media Nodes. The output should match the locations found in the initial search but with the new Redis password.

    sudo su
    cd /opt/openvidu/
    NEW_REDIS_PASSWORD="<NEW_REDIS_PASSWORD>"
    grep --exclude-dir=data -IHnr "$NEW_REDIS_PASSWORD" .
    
  7. Make sure to repeat steps 4 to 6 on all your Media Nodes

  8. Start all Master Nodes and Media Nodes

    sudo systemctl start openvidu
    

To change the LIVEKIT_API_KEY and LIVEKIT_API_SECRET parameters, follow these steps:

  1. Stop OpenVidu in all Master Nodes and all Media Nodes and back up the deployment

    sudo systemctl stop openvidu
    sudo cp -r /opt/openvidu/ /opt/openvidu_backup/
    
  2. In the Master Nodes, replace the LIVEKIT_API_KEY and LIVEKIT_API_SECRET in the .env file with your new values

    Warning

    Keep the previous values of LIVEKIT_API_KEY and LIVEKIT_API_SECRET as you will need them to update the configuration files in the Media Nodes. We will refer to these values as <CURRENT_LIVEKIT_API_KEY> and <CURRENT_LIVEKIT_API_SECRET>.

  3. Make sure to replace the LIVEKIT_API_KEY and LIVEKIT_API_SECRET in the .env file of all your Master Nodes

  4. Find the current locations of LIVEKIT_API_KEY and LIVEKIT_API_SECRET in your Media Nodes

    With the CURRENT_LIVEKIT_API_KEY and CURRENT_LIVEKIT_API_SECRET values obtained in step 2, you can find all occurrences of the current LiveKit API key and secret in the configuration files of the Media Nodes. To do this, connect to each Media Node and run the following command:

    sudo su
    cd /opt/openvidu/
    CURRENT_LIVEKIT_API_KEY="<CURRENT_LIVEKIT_API_KEY>"
    CURRENT_LIVEKIT_API_SECRET="<CURRENT_LIVEKIT_API_SECRET>"
    grep --exclude-dir=data -IHnr "$CURRENT_LIVEKIT_API_KEY" .
    grep --exclude-dir=data -IHnr "$CURRENT_LIVEKIT_API_SECRET" .
    

    The output should look similar to the following for LIVEKIT_API_KEY:

    ./config/egress.yaml:api_key: <CURRENT_LIVEKIT_API_KEY>
    ./config/ingress.yaml:api_key: <CURRENT_LIVEKIT_API_KEY>
    ./config/livekit.yaml:    <CURRENT_LIVEKIT_API_KEY>: <CURRENT_LIVEKIT_API_SECRET>
    ./config/livekit.yaml:    api_key: <CURRENT_LIVEKIT_API_KEY>
    

    And for LIVEKIT_API_SECRET:

    ./config/egress.yaml:api_secret: <CURRENT_LIVEKIT_API_SECRET>
    ./config/ingress.yaml:api_secret: <CURRENT_LIVEKIT_API_SECRET>
    ./config/livekit.yaml:    <CURRENT_LIVEKIT_API_KEY>: <CURRENT_LIVEKIT_API_SECRET>
    
  5. Update the Following Files in your Media Nodes

    Based on the output from the previous step, update the following files with the new values:

    • config/egress.yaml
    • config/ingress.yaml
    • config/livekit.yaml
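    Note that in config/livekit.yaml the API key appears both as a value (api_key: ...) and as a map key paired with the secret, as the step 4 output shows, so both strings must be replaced everywhere. A sketch on a temp file (keys and secrets are placeholders):

    ```shell
    # Hypothetical sketch: replace the LiveKit API key and secret wherever they
    # appear, including the "<key>: <secret>" map entry in livekit.yaml.
    CONF="$(mktemp)"
    printf 'keys:\n    old-key: old-secret\napi_key: old-key\n' > "$CONF"

    # Two substitutions in one pass: key first, then secret
    sed -i -e 's/old-key/new-key/g' -e 's/old-secret/new-secret/g' "$CONF"

    RESULT="$(cat "$CONF")"
    rm -f "$CONF"
    echo "$RESULT"
    ```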
  6. Verify the changes in your Media Nodes

    These commands will list all occurrences of the new LIVEKIT_API_KEY and LIVEKIT_API_SECRET in the configuration files of the Media Nodes. The output should match the locations found in the initial search but with the new values.

    sudo su
    cd /opt/openvidu/
    NEW_LIVEKIT_API_KEY="<NEW_LIVEKIT_API_KEY>"
    NEW_LIVEKIT_API_SECRET="<NEW_LIVEKIT_API_SECRET>"
    grep --exclude-dir=data -IHnr "$NEW_LIVEKIT_API_KEY" .
    grep --exclude-dir=data -IHnr "$NEW_LIVEKIT_API_SECRET" .
    
  7. Make sure to repeat steps 4 to 6 on all your Media Nodes

  8. Start all Master Nodes and Media Nodes

    sudo systemctl start openvidu
    

To change the MINIO_ACCESS_KEY and MINIO_SECRET_KEY parameters, follow these steps:

  1. Stop OpenVidu in all the Master Nodes and all Media Nodes and back up the deployment

    sudo systemctl stop openvidu
    sudo cp -r /opt/openvidu/ /opt/openvidu_backup/
    
  2. Replace in all the Master Nodes the MINIO_ACCESS_KEY and MINIO_SECRET_KEY in the .env file with your new values

    Take into account that if you are using the v2compatibility module in COMPOSE_PROFILES, you will also need to update the V2COMPAT_OPENVIDU_PRO_AWS_ACCESS_KEY and V2COMPAT_OPENVIDU_PRO_AWS_SECRET_KEY parameters in the .env file.

    Warning

    Keep the previous values of MINIO_ACCESS_KEY and MINIO_SECRET_KEY as you will need them to update the configuration files in the Media Nodes. We will refer to these values as <CURRENT_MINIO_ACCESS_KEY> and <CURRENT_MINIO_SECRET_KEY>.

  3. Make sure to apply the changes in the .env file of all your Master Nodes

  4. Find the current locations of MINIO_ACCESS_KEY and MINIO_SECRET_KEY in your Media Nodes

    With the CURRENT_MINIO_ACCESS_KEY and CURRENT_MINIO_SECRET_KEY values obtained in step 2, you can find all occurrences of the current MinIO access key and secret in the configuration files of the Media Nodes. To do this, connect to each Media Node and run the following command:

    sudo su
    cd /opt/openvidu/
    CURRENT_MINIO_ACCESS_KEY="<CURRENT_MINIO_ACCESS_KEY>"
    CURRENT_MINIO_SECRET_KEY="<CURRENT_MINIO_SECRET_KEY>"
    grep --exclude-dir=data -IHnr "$CURRENT_MINIO_ACCESS_KEY" .
    grep --exclude-dir=data -IHnr "$CURRENT_MINIO_SECRET_KEY" .
    

    The output should look similar to the following for MINIO_ACCESS_KEY:

    ./config/egress.yaml:access_key: <CURRENT_MINIO_ACCESS_KEY>
    

    And for MINIO_SECRET_KEY:

    ./config/egress.yaml:secret: <CURRENT_MINIO_SECRET_KEY>
    
  5. Update the Following Files in your Media Nodes

    Based on the output from the previous step, update the following files with the new values:

    • config/egress.yaml
  6. Verify the changes in your Media Nodes

    These commands will list all occurrences of the new MINIO_ACCESS_KEY and MINIO_SECRET_KEY in the configuration files of the Media Nodes. The output should match the locations found in the initial search but with the new values.

    sudo su
    cd /opt/openvidu/
    NEW_MINIO_ACCESS_KEY="<NEW_MINIO_ACCESS_KEY>"
    NEW_MINIO_SECRET_KEY="<NEW_MINIO_SECRET_KEY>"
    grep --exclude-dir=data -IHnr "$NEW_MINIO_ACCESS_KEY" .
    grep --exclude-dir=data -IHnr "$NEW_MINIO_SECRET_KEY" .
    
  7. Make sure to repeat steps 4 to 6 on all your Media Nodes

  8. Start all the Master Nodes and all the Media Nodes

    sudo systemctl start openvidu
    

To change the MONGO_ADMIN_USERNAME and MONGO_ADMIN_PASSWORD parameters, follow these steps:

  1. Stop OpenVidu in all the Master Nodes and all the Media Nodes and back up the deployment

    sudo systemctl stop openvidu
    sudo cp -r /opt/openvidu/ /opt/openvidu_backup/
    
  2. Replace in all the Master Nodes the MONGO_ADMIN_USERNAME and MONGO_ADMIN_PASSWORD in the .env file with your new values

    Warning

    Keep the previous values of MONGO_ADMIN_USERNAME and MONGO_ADMIN_PASSWORD as you will need them to update the configuration files in the Media Nodes. We will refer to these values as <CURRENT_MONGO_ADMIN_USERNAME> and <CURRENT_MONGO_ADMIN_PASSWORD>.

  3. Make sure to replace the MONGO_ADMIN_USERNAME and MONGO_ADMIN_PASSWORD in the .env file of all your Master Nodes

  4. Find the current locations of MONGO_ADMIN_USERNAME and MONGO_ADMIN_PASSWORD in your Media Nodes

    With the CURRENT_MONGO_ADMIN_USERNAME and CURRENT_MONGO_ADMIN_PASSWORD values obtained in step 2, you can find all occurrences of the current MongoDB admin username and password in the configuration files of the Media Nodes. To do this, connect to each Media Node and run the following command:

    sudo su
    cd /opt/openvidu/
    CURRENT_MONGO_ADMIN_USERNAME="<CURRENT_MONGO_ADMIN_USERNAME>"
    CURRENT_MONGO_ADMIN_PASSWORD="<CURRENT_MONGO_ADMIN_PASSWORD>"
    grep --exclude-dir=data -IHnr "$CURRENT_MONGO_ADMIN_USERNAME" .
    grep --exclude-dir=data -IHnr "$CURRENT_MONGO_ADMIN_PASSWORD" .
    

    The output should look similar to the following for MONGO_ADMIN_USERNAME:

    ./config/livekit.yaml:mongo_url: <MONGO_URL>
    

    And for MONGO_ADMIN_PASSWORD:

    ./config/livekit.yaml:mongo_url: <MONGO_URL>
    
  5. Update the Following Files in your Media Nodes

    Based on the output from the previous step, update the following files with the new values:

    • config/livekit.yaml
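    As the step 4 output shows, the MongoDB credentials are not separate keys in config/livekit.yaml but are embedded inside the mongo_url value. A sketch of rewriting the user:password portion of such a URL with sed, on a temp file (the URL layout, hostnames and replica-set name are illustrative assumptions; check your actual mongo_url before editing):

    ```shell
    # Hypothetical sketch: rewrite the "<user>:<password>@" part of a MongoDB
    # connection URL. The example URL shape is an assumption.
    CONF="$(mktemp)"
    echo 'mongo_url: mongodb://olduser:oldpass@mongo1:27017/?replicaSet=rs0' > "$CONF"

    # Match "mongodb://<anything-but-colon>:<anything-but-@>@" and replace it
    sed -i 's#mongodb://[^:]*:[^@]*@#mongodb://newuser:newpass@#' "$CONF"

    RESULT="$(cat "$CONF")"
    rm -f "$CONF"
    echo "$RESULT"
    ```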
  6. Verify the changes in your Media Nodes

    These commands will list all occurrences of the new MONGO_ADMIN_USERNAME and MONGO_ADMIN_PASSWORD in the configuration files of the Media Nodes. The output should match the locations found in the initial search but with the new values.

    sudo su
    cd /opt/openvidu/
    NEW_MONGO_ADMIN_USERNAME="<NEW_MONGO_ADMIN_USERNAME>"
    NEW_MONGO_ADMIN_PASSWORD="<NEW_MONGO_ADMIN_PASSWORD>"
    grep --exclude-dir=data -IHnr "$NEW_MONGO_ADMIN_USERNAME" .
    grep --exclude-dir=data -IHnr "$NEW_MONGO_ADMIN_PASSWORD" .
    
  7. Make sure to repeat steps 4 to 6 on all your Media Nodes

  8. Start all the Master Nodes and all the Media Nodes

    sudo systemctl start openvidu
    

Uninstalling OpenVidu#

To uninstall any OpenVidu Node, execute the following commands:

sudo su
systemctl stop openvidu
systemctl disable openvidu
rm -rf /opt/openvidu/
rm /etc/systemd/system/openvidu.service
systemctl daemon-reload