Installing Multiple Jibri Instances as Docker Images - A More Scalable and Manageable Way of Jitsi Conference Recording

A complete guide to installing and configuring multiple Dockerized Jibri instances on Ubuntu 18.04 and 20.04 LTS.

Why Jibri on Docker?

Jibri is the Jitsi component that records conferences. It is possible to install Jibri on a single instance and connect it to Jitsi, but for multiple concurrent recordings you need multiple Jibri instances running and connected to your Jitsi environment. Since Jibri is a resource-hungry component in terms of CPU and memory usage, running these instances inside Docker containers is a far more feasible and manageable option when you need concurrent recordings. Managing many separate Jibri machines is not easy; a Dockerized installation makes your Jibri instances much easier to manage, and the resources that would otherwise go to separate operating systems become available to the Jibris themselves.
I will guide you through installing 6 Jibri instances as Docker containers and explain how to configure your Jitsi environment. I assume you have a basic understanding of Docker and Jitsi. If you don't, don't worry: by following the instructions below you will still be able to set up and configure your Dockerized Jibri instances for multiple concurrent recordings.
 


Configuring a Jitsi Meet Environment for Jibri

 

Before we install Docker and the Dockerized Jibri instances, we need to prepare our Jitsi environment, which means configuration changes in three components: Prosody, Jicofo and the Jitsi Meet web application.

Update the internal MUC component in the /etc/prosody/conf.d/YOUR_JITSI_DOMAIN.cfg.lua file as follows:

-- internal muc component
Component "internal.auth.YOUR_JITSI_DOMAIN" "muc"
    storage = "memory"
    modules_enabled = {
        "ping";
    }
    admins = { "focus@auth.YOUR_JITSI_DOMAIN", "jvb@auth.YOUR_JITSI_DOMAIN", "jibri@auth.YOUR_JITSI_DOMAIN" }
    muc_room_locking = false
    muc_room_default_public_jids = true
    muc_room_cache_size = 1000
    c2s_require_encryption = false

In the same file, add the recorder virtual host as follows:

VirtualHost "recorder.YOUR_JITSI_DOMAIN"
    modules_enabled = {
        "ping";
    }
    authentication = "internal_plain"
    c2s_require_encryption = false
    allow_empty_token = true

Create two Prosody users for Jibri to connect with (the username and password values will be used later in the Jibri setup):

 

prosodyctl register jibri auth.YOUR_JITSI_DOMAIN YOUR_JIBRI_USER_PASSWORD

prosodyctl register recorder recorder.YOUR_JITSI_DOMAIN YOUR_RECORDER_USER_PASSWORD

Note: The first account is the one Jibri will use to log into the control MUC (where Jibri sends its status and awaits commands). The second account is the one Jibri will use as a client in Selenium when it joins the call, so that the Jitsi Meet web UI can treat it in a special way.

 

Edit the /etc/jitsi/jicofo/sip-communicator.properties file and add the following properties:

org.jitsi.jicofo.jibri.BREWERY=JibriBrewery@internal.auth.YOUR_JITSI_DOMAIN
org.jitsi.jicofo.jibri.PENDING_TIMEOUT=90

 

Edit the /etc/jitsi/meet/YOUR_JITSI_DOMAIN-config.js file and add/set the following properties:

fileRecordingsEnabled: true, // If you want to enable file recording

liveStreamingEnabled: true, // If you want to enable live streaming

hiddenDomain: 'recorder.YOUR_JITSI_DOMAIN',

Edit the /usr/share/jitsi-meet/interface_config.js file: add recording to the TOOLBAR_BUTTONS array if you want to show the recording button, and add livestreaming if you want to show the live streaming button, as sketched below.
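A minimal sketch of the relevant part of interface_config.js (your array will contain more buttons; only the recording and livestreaming entries matter here):

TOOLBAR_BUTTONS: [
    'microphone', 'camera', 'desktop', 'chat', 'hangup',
    'recording',      // shows the recording button
    'livestreaming',  // shows the live streaming button
],

After all of the changes above, restart the affected Jitsi services so they take effect (service names assume the standard Debian/Ubuntu packages):

sudo systemctl restart prosody jicofo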

Setting Up the FQDN of Your Jibri Instance

Log in to your Jibri VM.

 

Run:

sudo hostnamectl set-hostname YOUR_JIBRI_DOMAIN

 

Edit the /etc/hostname file as follows:

YOUR_JIBRI_DOMAIN

Edit the /etc/hosts file as follows:

127.0.0.1       localhost
#YOUR_LOCAL_IP_IF_BEHIND_NAT  YOUR_JIBRI_DOMAIN YOUR_HOST_NICK
YOUR_PUBLIC_IP  YOUR_JIBRI_DOMAIN YOUR_HOST_NICK
127.0.0.1       localhost         YOUR_JIBRI_DOMAIN

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

 

To restart the VM, run:

reboot

 

After the restart, test your FQDN setup by running:

ping "$(hostname)"

 

It should ping 127.0.0.1, and the command output will be similar to:

PING YOUR_JIBRI_DOMAIN (127.0.0.1) 56(84) bytes of data.

64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.026 ms

64 bytes from localhost (127.0.0.1): icmp_seq=2 ttl=64 time=0.041 ms

64 bytes from localhost (127.0.0.1): icmp_seq=3 ttl=64 time=0.045 ms

 

Installing the ALSA Loopback Kernel Module

 

Install the extra virtual Linux kernel modules:

apt update

apt install linux-image-extra-virtual

 

To load the ALSA loopback module into the kernel, perform the following tasks as the root user.

Configure 12 capture/playback interfaces (enable is repeated 12 times and index runs from 0 to 11, one per loopback card):

echo "options snd-aloop enable=1,1,1,1,1,1,1,1,1,1,1,1 index=0,1,2,3,4,5,6,7,8,9,10,11" > /etc/modprobe.d/alsa-loopback.conf

Set up the module to be loaded on boot:

echo "snd-aloop">>/etc/modules

 

Load the module into the running kernel:

modprobe snd-aloop

 

Check that the module is loaded:

lsmod | grep snd_aloop

 

The output should be similar to:

snd_aloop              24576  0

snd_pcm                98304  1 snd_aloop

snd                    81920  3 snd_timer,snd_aloop,snd_pcm

 

If the output shows the snd-aloop module loaded, then the ALSA loopback configuration step is complete.

To have the right kernel (and thus the module) loaded automatically when the system restarts, edit the /etc/default/grub file as follows:

Modify the value of GRUB_DEFAULT from "0" to "1>2" (this selects an entry from the GRUB "Advanced options" submenu).
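Note: changes in /etc/default/grub take effect only after the GRUB configuration is regenerated, so run:

sudo update-grub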

 

Now reboot;

reboot

 

Check your loopback devices:

ls -alh /proc/asound

 

The output should be similar to:

dr-xr-xr-x  16 root root 0 Nov 24 13:54 .

dr-xr-xr-x 192 root root 0 Nov 24 13:54 ..

dr-xr-xr-x   6 root root 0 Nov 24 13:55 card0

dr-xr-xr-x   6 root root 0 Nov 24 13:55 card1

dr-xr-xr-x   6 root root 0 Nov 24 13:55 card10

dr-xr-xr-x   6 root root 0 Nov 24 13:55 card11

dr-xr-xr-x   6 root root 0 Nov 24 13:55 card2

dr-xr-xr-x   6 root root 0 Nov 24 13:55 card3

dr-xr-xr-x   6 root root 0 Nov 24 13:55 card4

dr-xr-xr-x   6 root root 0 Nov 24 13:55 card5

dr-xr-xr-x   6 root root 0 Nov 24 13:55 card6

dr-xr-xr-x   6 root root 0 Nov 24 13:55 card7

dr-xr-xr-x   6 root root 0 Nov 24 13:55 card8

dr-xr-xr-x   6 root root 0 Nov 24 13:55 card9

-r--r--r--   1 root root 0 Nov 24 13:55 cards

-r--r--r--   1 root root 0 Nov 24 13:55 devices

lrwxrwxrwx   1 root root 5 Nov 24 13:55 Loopback -> card0

lrwxrwxrwx   1 root root 5 Nov 24 13:55 Loopback_1 -> card1

lrwxrwxrwx   1 root root 5 Nov 24 13:55 Loopback_2 -> card2

lrwxrwxrwx   1 root root 5 Nov 24 13:55 Loopback_3 -> card3

lrwxrwxrwx   1 root root 5 Nov 24 13:55 Loopback_4 -> card4

lrwxrwxrwx   1 root root 5 Nov 24 13:55 Loopback_5 -> card5

lrwxrwxrwx   1 root root 5 Nov 24 13:55 Loopback_6 -> card6

lrwxrwxrwx   1 root root 5 Nov 24 13:55 Loopback_7 -> card7

lrwxrwxrwx   1 root root 5 Nov 24 13:55 Loopback_8 -> card8

lrwxrwxrwx   1 root root 5 Nov 24 13:55 Loopback_9 -> card9

lrwxrwxrwx   1 root root 6 Nov 24 13:55 Loopback_A -> card10

lrwxrwxrwx   1 root root 6 Nov 24 13:55 Loopback_B -> card11

-r--r--r--   1 root root 0 Nov 24 13:55 modules
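If the alsa-utils package is installed, you can also cross-check with aplay, which should list all 12 Loopback cards as playback devices:

aplay -l | grep Loopback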

Installing Docker and Docker Compose

The system needs to be updated to make the Docker installation safer and more reliable. Run:

 

sudo apt update

sudo apt upgrade

 

Note: To be safe, keep your current configuration files if you are asked during the system update.

 

Once we have updated the system, we need to install a few prerequisite packages before we are ready to install Docker:

sudo apt-get install curl apt-transport-https ca-certificates software-properties-common

 

Note: The packages installed above are:

 

apt-transport-https: lets the package manager transfer files and data over HTTPS

ca-certificates: lets the web browser and system check security certificates

curl: transfers data

software-properties-common: adds scripts to manage the software

 

 

Adding the Docker repository:

 

Add the Docker GPG key:

 

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

 

Add the repository:

 

sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

 

Update the repository info:

 

sudo apt update

 

Make sure you are installing from the Docker repository rather than the default Ubuntu repository with this command:

apt-cache policy docker-ce

 

Correct output will look like the following, with different version numbers (look at the first lines of the output):

 

docker-ce:

  Installed: (none)

  Candidate: 5:19.03.13~3-0~ubuntu-bionic

  Version table:

     5:19.03.13~3-0~ubuntu-bionic 500

        500 https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages

 

As you can see, docker-ce is not installed yet, so we can move on to the next step.

 

 

Install Docker:

sudo apt install docker-ce

 

Check the Docker status:

sudo systemctl status docker

 

 

Install Docker Compose:

apt install docker-compose
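Verify the installation:

docker-compose --version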

Installing the Jibri Docker Containers

Pull the Jibri Docker image:

 

docker pull jitsi/jibri
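To confirm the image is now available locally:

docker images jitsi/jibri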

 

 

Create the Jibri Docker configuration files and directories:

 

cd &&
mkdir jibri-docker &&
cd jibri-docker &&
touch .env &&
touch jibri.yml &&
mkdir config &&
cd config &&
touch .asoundrc1 &&
touch .asoundrc2 &&
touch .asoundrc3 &&
touch .asoundrc4 &&
touch .asoundrc5 &&
touch .asoundrc6 &&
cd ..
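The resulting layout under /root should look like this (the recordings directory referenced later in jibri.yml does not exist yet; Docker will create it on the host at first run):

jibri-docker/
├── .env
├── jibri.yml
└── config/
    ├── .asoundrc1
    ├── .asoundrc2
    ├── .asoundrc3
    ├── .asoundrc4
    ├── .asoundrc5
    └── .asoundrc6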

 

Edit the /root/jibri-docker/.env file. Its content will be as follows (replace the placeholder values with your own):

 

# JIBRI CONFIG
# Internal XMPP domain for authenticated services
XMPP_AUTH_DOMAIN=auth.YOUR_JITSI_DOMAIN

# XMPP domain for the internal MUC used for jibri, jigasi and jvb pools
XMPP_INTERNAL_MUC_DOMAIN=internal.auth.YOUR_JITSI_DOMAIN

# XMPP domain for the jibri recorder
XMPP_RECORDER_DOMAIN=recorder.YOUR_JITSI_DOMAIN

# Internal XMPP server
XMPP_SERVER=YOUR_JITSI_DOMAIN

# Internal XMPP domain
XMPP_DOMAIN=YOUR_JITSI_DOMAIN

# XMPP user for Jibri client connections
JIBRI_XMPP_USER=jibri

# XMPP password for Jibri client connections
JIBRI_XMPP_PASSWORD=YOUR_JIBRI_USER_PASSWORD

# MUC name for the Jibri pool
JIBRI_BREWERY_MUC=jibribrewery

# XMPP recorder user for Jibri client connections
JIBRI_RECORDER_USER=recorder

# XMPP recorder password for Jibri client connections
JIBRI_RECORDER_PASSWORD=YOUR_RECORDER_USER_PASSWORD

# Directory for recordings inside Jibri container
JIBRI_RECORDING_DIR=/config/recordings

# The finalizing script. Will run after recording is complete
JIBRI_FINALIZE_RECORDING_SCRIPT_PATH=/config/finalize.sh


# When jibri gets a request to start a service for a room, the room
# jid will look like: roomName@optional.prefixes.subdomain.xmpp_domain
# We'll build the url for the call by transforming that into:
# https://xmpp_domain/subdomain/roomName
# So if there are any prefixes in the jid (like jitsi meet, which
# has its participants join a muc at conference.xmpp_domain) then
# list that prefix here so it can be stripped out to generate
# the call url correctly

JIBRI_STRIP_DOMAIN_JID=conference

# Directory for logs inside Jibri container
JIBRI_LOGS_DIR=/config/logs

DISPLAY=:0
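Note: the jibri.yml file below references the ${RESTART_POLICY} and ${CONFIG} variables, which docker-compose resolves from this .env file but which are not defined above. Append them as well; the values here are assumptions that match the directory layout used in this guide:

# Restart policy for the Jibri containers (assumed value)
RESTART_POLICY=unless-stopped

# Host directory holding per-container Jibri configuration (assumed value)
CONFIG=/root/jibri-docker/config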

Edit the /root/jibri-docker/jibri.yml file (the configuration file for the Jibri containers):

 

nano /root/jibri-docker/jibri.yml

 

The file content will be as follows for 2 Jibri containers. To add more Jibri containers, copy and paste one of the service blocks (jibri1 or jibri2) and increment the numbers that appear in the service name, the ${CONFIG} volume path and the .asoundrc volume path.

 

 

version: '3'

services:
    jibri1:
        image: jitsi/jibri
        restart: ${RESTART_POLICY}
        volumes:
            - ${CONFIG}/jibri1:/config:Z
            - /dev/shm:/dev/shm
            - /root/jibri-docker/config/.asoundrc1:/home/jibri/.asoundrc
            - /root/jibri-docker/recordings:/config/recordings
        cap_add:
            - SYS_ADMIN
            - NET_BIND_SERVICE
        devices:
            - /dev/snd:/dev/snd
        environment:
            - XMPP_AUTH_DOMAIN
            - XMPP_INTERNAL_MUC_DOMAIN
            - XMPP_RECORDER_DOMAIN
            - XMPP_SERVER
            - XMPP_DOMAIN
            - JIBRI_XMPP_USER
            - JIBRI_XMPP_PASSWORD
            - JIBRI_BREWERY_MUC
            - JIBRI_RECORDER_USER
            - JIBRI_RECORDER_PASSWORD
            - JIBRI_RECORDING_DIR
            - JIBRI_FINALIZE_RECORDING_SCRIPT_PATH
            - JIBRI_STRIP_DOMAIN_JID
            - JIBRI_LOGS_DIR
            - DISPLAY=:0
            - TZ

    jibri2:
        image: jitsi/jibri
        restart: ${RESTART_POLICY}
        volumes:
            - ${CONFIG}/jibri2:/config:Z
            - /dev/shm:/dev/shm
            - /root/jibri-docker/config/.asoundrc2:/home/jibri/.asoundrc
            - /root/jibri-docker/recordings:/config/recordings
        cap_add:
            - SYS_ADMIN
            - NET_BIND_SERVICE
        devices:
            - /dev/snd:/dev/snd
        environment:
            - XMPP_AUTH_DOMAIN
            - XMPP_INTERNAL_MUC_DOMAIN
            - XMPP_RECORDER_DOMAIN
            - XMPP_SERVER
            - XMPP_DOMAIN
            - JIBRI_XMPP_USER
            - JIBRI_XMPP_PASSWORD
            - JIBRI_BREWERY_MUC
            - JIBRI_RECORDER_USER
            - JIBRI_RECORDER_PASSWORD
            - JIBRI_RECORDING_DIR
            - JIBRI_FINALIZE_RECORDING_SCRIPT_PATH
            - JIBRI_STRIP_DOMAIN_JID
            - JIBRI_LOGS_DIR
            - DISPLAY=:0
            - TZ
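Before starting the containers, you can let docker-compose validate the file; it will report YAML and indentation errors and print the configuration with the .env variables resolved:

cd /root/jibri-docker

docker-compose -f jibri.yml config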

 

 

Edit the /root/jibri-docker/config/.asoundrcX files (X is the number of each container; e.g. the config file for Jibri container 1 is .asoundrc1).

 

 

Note: Each Jibri Docker container uses two ALSA loopbacks for recording; this appears to be a recent change in the Jibri Docker image configuration. Accordingly, each .asoundrc file references a pair of loopbacks (Loopback and Loopback_1, Loopback_2 and Loopback_3, Loopback_4 and Loopback_5, and so on). The ALSA naming convention starts with Loopback and continues as Loopback_1, Loopback_2 … Loopback_9, Loopback_A, Loopback_B, Loopback_C, etc.

So far we have defined 12 loopbacks in the system, which supports 6 Jibri containers running concurrently (ALSA allows a maximum of 32 loopback cards). The first two .asoundrc files are shown below; the others should be configured following the same pattern, as illustrated after the second file.

 

The content of the /root/jibri-docker/config/.asoundrc1 file (used by Jibri Docker instance 1) is as follows.


pcm.amix {
 type dmix
 ipc_key 219345
 slave.pcm "hw:Loopback,0,0"
}

pcm.asnoop {
 type dsnoop
 ipc_key 219346
 slave.pcm "hw:Loopback_1,1,0"
}

pcm.aduplex {
 type asym
 playback.pcm "amix"
 capture.pcm "asnoop"
}

pcm.bmix {
 type dmix
 ipc_key 219347
 slave.pcm "hw:Loopback_1,0,0"
}

pcm.bsnoop {
 type dsnoop
 ipc_key 219348
 slave.pcm "hw:Loopback,1,0"
}

pcm.bduplex {
 type asym
 playback.pcm "bmix"
 capture.pcm "bsnoop"
}

pcm.pjsua {
 type plug
 slave.pcm "bduplex"
}

pcm.!default {
 type plug
 slave.pcm "aduplex"
}

 

The content of the /root/jibri-docker/config/.asoundrc2 file (used by Jibri Docker instance 2) is as follows.


pcm.amix {
 type dmix
 ipc_key 219345
 slave.pcm "hw:Loopback_2,0,0"
}

pcm.asnoop {
 type dsnoop
 ipc_key 219346
 slave.pcm "hw:Loopback_3,1,0"
}

pcm.aduplex {
 type asym
 playback.pcm "amix"
 capture.pcm "asnoop"
}

pcm.bmix {
 type dmix
 ipc_key 219347
 slave.pcm "hw:Loopback_3,0,0"
}

pcm.bsnoop {
 type dsnoop
 ipc_key 219348
 slave.pcm "hw:Loopback_2,1,0"
}

pcm.bduplex {
 type asym
 playback.pcm "bmix"
 capture.pcm "bsnoop"
}

pcm.pjsua {
 type plug
 slave.pcm "bduplex"
}

pcm.!default {
 type plug
 slave.pcm "aduplex"
}
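Following the same pattern, container N uses the loopback card pair 2(N-1) and 2(N-1)+1. For example, in .asoundrc3 only the four slave.pcm lines change:

 slave.pcm "hw:Loopback_4,0,0"   # in pcm.amix
 slave.pcm "hw:Loopback_5,1,0"   # in pcm.asnoop
 slave.pcm "hw:Loopback_5,0,0"   # in pcm.bmix
 slave.pcm "hw:Loopback_4,1,0"   # in pcm.bsnoop

Remember that cards 10 and 11 are named Loopback_A and Loopback_B, so .asoundrc6 references those.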

 

 

To bring up the Jibri Docker containers:

 

cd /root/jibri-docker

docker-compose -f jibri.yml up -d

 

Note: If you’d like a container to restart automatically on reboots or crashes, find its container ID with docker ps -a and run docker update --restart unless-stopped CONTAINER_ID (or set RESTART_POLICY=unless-stopped in the .env file, as suggested above).

 

To list the running containers:

docker ps
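To verify that a particular Jibri came up and registered with the brewery MUC, follow its container logs (the service names are the ones defined in jibri.yml):

cd /root/jibri-docker

docker-compose -f jibri.yml logs -f jibri1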

 

To bring your Jibri containers down:

cd /root/jibri-docker

docker-compose -f jibri.yml down

Testing

Start a new meeting as a room owner. Open the “More” menu with the three dots in the bottom-right corner and click Start recording. You can start up to 6 concurrent recordings, as far as your system resources allow.

 

Open the “More” menu again with the three dots in the bottom-right corner and click Stop recording.

Log in to your Jibri server as root. You will find the recorded MP4 video files inside the /root/jibri-docker/recordings directory. For each recording session, Jibri creates a directory named after the session ID; the recorded MP4 videos are inside these directories.

 

 

Now you have your new Dockerized Jibri instances ready for recording!
And if you need support for Jitsi, do not hesitate to WhatsApp us. We provide professional-grade Jitsi consultation services, including installation, integration, customisation and maintenance support.
For your questions and comments, please contribute below.
