
Automatize it, or Docker container delivery for WebRTC


The vast majority of IT specialists across all fields strive to perform as few actions as possible manually. I'm not afraid of the big words: whatever can be automated must be automated!

Let's imagine a situation: you need to deploy many servers of the same type, and do it quickly. Quickly deploy, quickly undeploy. For example, to set up test rigs for developers. When development runs in parallel, you may need to separate the developers so they don't get in each other's way and one developer's mistakes don't block the work of the others.

There may be several ways to solve this problem:

  1. Use virtual machines. A somewhat heavyweight solution. A virtual machine image includes the operating system, the hardware configuration and all the additional software and utilities, and all of that needs to be stored somewhere. Moreover, the boot time may not be lightning fast, since it depends on the host's workload. In this case, each developer creates their own virtual machine with a set of all the necessary software. This option is optimal if the developers need different operating systems.

  2. Use scripts. At first glance the simplest solution, but in practice probably the most difficult. In this case, we carry neither an operating system nor additional software with us, and that can play a cruel joke on us, for example, when some dependency on the surrounding software is not satisfied. Say the repository suddenly turns out to contain the wrong version of Python: that's it!

  3. Run the main product in containers. This is the most modern solution to date. A container is a kind of environment isolated from external factors. In some ways it resembles a virtual machine, but it does not require including a hardware configuration in the image. Like a virtual machine, it uses the resources of the host. Docker containers can easily be moved between hosts, which is made possible by their small size (compared to a virtual machine) and the lack of binding to an OS. The contents of containers, just as in cargo shipping, don't interact with each other in any way, so even conflicting applications can be run on the same host in different containers, as long as there are enough resources.

What's more, containers let you easily deploy much more than just test landscapes and rigs for developers. Let's see how containers can be used in video streaming, where their key property, isolation, can be put to active use.

[Diagram: streaming without containers]

[Diagram: streaming with containers]

Containers can be used wherever you need to isolate a process and protect its operation from its neighbors. In this simple way, you can significantly improve the quality of service for unrelated customers: for example, a blogger gets their own container, a house under video surveillance gets its own. Scripts can be used to automate the creation, deletion and modification of such per-client streaming containers.
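To make the per-client idea concrete, here is a minimal sketch of such a script. The container naming scheme and the PASSWORD/LICENSE placeholders are my own assumptions, not part of the article's setup; the functions only print the docker commands (a dry run), so remove the "echo" to actually execute them.

```shell
#!/bin/sh
# Sketch: one isolated WCS container per client. All names and placeholder
# values are illustrative. The functions PRINT the docker commands (dry run);
# remove "echo" to execute them for real.

start_client() {
  client="$1"                     # e.g. "blogger" or "cctv-house-7"
  echo docker run -e PASSWORD=password -e LICENSE=license_number \
       --name "wcs-$client" --rm -d flashphoner/webcallserver:latest
}

stop_client() {
  client="$1"
  echo docker stop "wcs-$client"
}

start_client blogger
stop_client blogger
```

A scheduler or a simple web hook could call these helpers whenever a client is added or removed.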

So why containers, after all, and not virtual machines?

The hypervisor always emulates hardware, down to processor instructions, so full-fledged virtualization takes up more host resources than Docker containers do. WebRTC streaming itself consumes quite a lot of resources due to traffic encryption; add to this the resources needed by the virtual machine's OS. As a result, a media server on virtual machines can be expected to run slower than a media server in Docker containers on the same physical host.

The main question is: "How do you run a media server in a Docker container?"

Let's look at the example of Web Call Server.

Easy as pie!

The Flashphoner Web Call Server image is already available on Docker Hub.

Deploying WCS comes down to two commands:

  1. Downloading the current build from Docker Hub:

    docker pull flashphoner/webcallserver
  2. Running a Docker container with a trial or commercial license number:

    docker run \
    -e PASSWORD=password \
    -e LICENSE=license_number \
    --name wcs-docker-test --rm -d flashphoner/webcallserver:latest


    PASSWORD - password for access to the container via SSH. If this variable is not defined, it will not be possible to get into the container via SSH;

    LICENSE - WCS license number. If this variable is not defined, the license can be activated via the web interface.
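A quick way to confirm the container actually came up, sketched below. This assumes Docker is installed; the container name comes from the run command above.

```shell
#!/bin/sh
# Print the status of the WCS container, if Docker is available on this
# machine; otherwise just say so.
wcs_status() {
  name="$1"
  if command -v docker >/dev/null 2>&1; then
    docker ps --filter "name=$name" --format '{{.Names}}: {{.Status}}'
  else
    echo "docker not found on this machine"
  fi
}

wcs_status wcs-docker-test
```

`docker logs wcs-docker-test` is another useful first stop if the container is listed but misbehaving.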

But if everything were that simple, I wouldn't have written this article.

The first difficulties

I install Docker on my local machine running on Ubuntu Desktop 20.04 LTS:

sudo apt install docker.io

Then I create a new internal Docker network called "testnet":

sudo docker network create \
 --subnet=<subnet> \
 --gateway=<gateway> \
 --driver=bridge \
 --opt com.docker.network.bridge.name=br-testnet testnet
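To make sure the network was created as intended, it can be inspected; a sketch, assuming Docker is installed and the network name "testnet" from above:

```shell
#!/bin/sh
# Show the driver and subnet of the freshly created network, if Docker is
# present on this machine; otherwise just say so.
if command -v docker >/dev/null 2>&1; then
  docker network inspect testnet \
    --format 'driver={{.Driver}} subnet={{(index .IPAM.Config 0).Subnet}}' \
    || echo "network testnet not found"
else
  echo "docker not found on this machine"
fi
```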

After that, I download an up-to-date WCS build from Docker Hub:

sudo docker pull flashphoner/webcallserver

And launch a WCS container:

sudo docker run \
-e PASSWORD=password \
-e LICENSE=license_number \
-e LOCAL_IP=<container_ip> \
--net testnet --ip <container_ip> \
--name wcs-docker-test --rm -d flashphoner/webcallserver:latest

Here, the variables are as follows:

PASSWORD - password for access to the container via SSH. If this variable is not defined, it will not be possible to get into the container via SSH;

LICENSE - WCS license number. If this variable is not defined, the license can be activated via the web interface;

LOCAL_IP - the IP address of the container on the Docker network, which is written to the ip_local parameter in the settings file flashphoner.properties;

The --net key specifies the network in which the launched container will run; here, the container is launched on the testnet network.

Then I check the container's availability with ping:
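The check can be scripted; a small sketch, where the address below is just a stand-in for whatever you passed to --ip:

```shell
#!/bin/sh
# Report whether a container answers ping (one packet, 2-second timeout).
check_container() {
  ip="$1"
  if ping -c 1 -W 2 "$ip" >/dev/null 2>&1; then
    echo "container at $ip is reachable"
  else
    echo "container at $ip is NOT reachable"
  fi
}

check_container 127.0.0.1   # stand-in address; use your container's --ip value
```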


I open the WCS web interface in a local browser and test publishing a WebRTC stream using the "Two Way Streaming" example. It works.

Locally, I now have access to the WCS server from the computer where Docker is installed. Next, I need to give access to my colleagues.

Closed network

Docker's internal network is isolated: there is access "to the world" from inside the Docker network, but the Docker network itself is not accessible "from the world".

It turns out that in order to give colleagues access to the test bench in Docker on my machine, I would have to give them console access to my machine. For testing within a development group, that might do at a pinch. But I really wanted to take all of this to production. Do the billions of containers all over the world really work only locally?

Of course they don't. The answer was found by digging through the manuals: you need to forward ports. And the port forwarding is needed not on the network router, but in Docker itself.

Great! The list of ports is known. Let's forward them:

docker run \
-e PASSWORD=password \
-e LICENSE=license_number \
-e LOCAL_IP=<container_ip> \
-e EXTERNAL_IP=<external_ip> \
-d -p 8444:8444 -p 8443:8443 -p 1935:1935 -p 30000-33000:30000-33000 \
--net testnet --ip <container_ip> \
--name wcs-docker-test --rm flashphoner/webcallserver:latest

We use the following variables in this command:

PASSWORD, LICENSE and LOCAL_IP have all been covered above;

EXTERNAL_IP — external network interface IP address. It is written to the ip parameter in the settings file flashphoner.properties;

In addition, -p keys appear in the command: these forward ports. In this iteration, I use the same "testnet" network that I created earlier.

In a browser on another computer, I open the IP address of my Docker machine and launch the "Two Way Streaming" example.

The WCS web interface works, and there is even WebRTC traffic.

And everything would be amazing, if not for one thing!

Well, it takes forever!

It took about 10 minutes to start the container with port forwarding. In that time, I could have manually installed a couple of copies of WCS. The delay comes from Docker generating a binding for every single port in the range.
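The scale of the problem is easy to see with a bit of arithmetic: the media-port range alone means thousands of individual bindings.

```shell
#!/bin/sh
# Count how many separate port bindings Docker has to create for the
# forwarded ranges from the command above.
RANGE_START=30000
RANGE_END=33000
RANGE_BINDINGS=$((RANGE_END - RANGE_START + 1))
TOTAL=$((RANGE_BINDINGS + 3))    # plus 8444, 8443 and 1935
echo "range bindings: $RANGE_BINDINGS, total: $TOTAL"
```

Each of those bindings is set up one by one at container start, which is where the minutes go.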

When trying to start a second container with the same list of ports, I predictably got an error saying the port range was already taken.

So the port-forwarding option does not suit me, because of the container's slow start and the need to change ports for the second and subsequent containers.

After some googling, I found a thread on GitHub discussing a similar problem. There, it was recommended to use the host network to run a container that handles WebRTC traffic.

I launch the container on the host network (indicated by the key --net host):

docker run \
-e PASSWORD=password \
-e LICENSE=license_number \
-e LOCAL_IP=<local_ip> \
--net host \
--name wcs-docker-test --rm -d flashphoner/webcallserver:latest

Great! The container starts up quickly. Everything works from an external machine: the web interface, and WebRTC traffic is both published and played.

Then I launch a couple more containers. Fortunately, there are several network cards on my computer.

I could have drawn a line under the whole thing here. But I was bothered by the fact that the number of containers on a host would be limited by the number of its network interfaces.

The working version

Since version 1.12, Docker has provided two network drivers: Macvlan and IPvlan. They allow you to assign containers static IPs from the LAN.

  • Macvlan allows one physical network interface (of the host machine) to serve an arbitrary number of containers, each with its own MAC address.

    Requires a Linux kernel v3.9–3.19 or 4.0+.

  • IPvlan allows creating an arbitrary number of containers on your host machine that share the same MAC address.

    Requires a Linux kernel v4.2+ (there is support for earlier kernels, but it is buggy).
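The kernel requirement is easy to check before committing to IPvlan; a quick sketch:

```shell
#!/bin/sh
# Compare the running kernel version against the ipvlan minimum (4.2).
KERNEL="$(uname -r)"            # e.g. "5.15.0-92-generic"
MAJOR="${KERNEL%%.*}"
REST="${KERNEL#*.}"
MINOR="${REST%%.*}"
if [ "$MAJOR" -gt 4 ] || { [ "$MAJOR" -eq 4 ] && [ "$MINOR" -ge 2 ]; }; then
  echo "kernel $KERNEL: ipvlan supported"
else
  echo "kernel $KERNEL: ipvlan may be unsupported"
fi
```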

I used the IPvlan driver in my installation. Partly this happened historically, partly because I expected to move the infrastructure to VMware ESXi. The thing is, VMware ESXi makes only one MAC address available per port, so Macvlan technology is not suitable in that case.

So: I have an enp0s3 network interface that gets its IP address from a DHCP server.

On my network, addresses are issued by a DHCP server, while Docker chooses and assigns addresses on its own. This can lead to conflicts if Docker picks an address that has already been given to another host on the network.

To avoid this, part of the subnet's range needs to be reserved for Docker. This solution has two parts:

  1. Configuring the DHCP service on the network so that it does not assign addresses in a specific range.

  2. Telling Docker about this reserved address range.

In this article, I won't describe how to configure a DHCP server; every IT specialist has surely dealt with that more than once, and there are plenty of manuals online.

But we will analyze in detail how to tell Docker what range is allocated for it.

I have limited the DHCP server's address range so that it does not issue addresses above 192.168.23.99. Let's give Docker 32 addresses, starting right after that boundary.

Then we create a new Docker network called "new-testnet":

docker network create -d ipvlan -o parent=enp0s3 \
--subnet <subnet> \
--gateway <gateway> \
--ip-range <ip-range> \
new-testnet

ipvlan is a network driver type;

parent=enp0s3 is a physical network interface (enp0s3) through which container traffic will go;

--subnet is the subnet;

--gateway is the default gateway for the subnet;

--ip-range is the range of subnet addresses that Docker may assign to containers.

And then we launch a container with WCS on this network:

docker run \
-e PASSWORD=password \
-e LICENSE=license_number \
-e LOCAL_IP=<container_ip> \
--net new-testnet --ip <container_ip> \
--name wcs-docker-test --rm -d flashphoner/webcallserver:latest

Then we check the operation of the web interface and publishing/playing WebRTC traffic using the "Two Way Streaming" example:

This approach has one small drawback: when using IPvlan or Macvlan technologies, Docker isolates the container from the host. If, for example, you try to ping a container from the host, all the packets will be lost.

But for my current task of running WCS in a container, this is not critical. We can always ping or SSH from another machine.
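If host-to-container access is ever needed, one known workaround is to give the host its own ipvlan sub-interface on the same parent NIC. A sketch follows; the interface names and the example address are assumptions for illustration, and the commands require root, so the function below only defines the steps without running them.

```shell
#!/bin/sh
# Workaround sketch: attach an ipvlan sub-interface to the host so that it
# shares the parent NIC with the containers and can reach them directly.
# All names and addresses here are illustrative; run as root on your host.
add_host_ipvlan() {
  parent="$1"   # e.g. enp0s3
  addr="$2"     # a free address on the same subnet, outside the Docker range
  ip link add ipvl0 link "$parent" type ipvlan mode l2
  ip addr add "$addr" dev ipvl0
  ip link set ipvl0 up
}

# Example invocation (NOT run automatically):
#   add_host_ipvlan enp0s3 192.0.2.99/24
```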

Using IPvlan technology on a single Docker host, we can bring up as many containers as needed. This number is limited only by host resources and, in part, by the network addressing of the particular network.
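With a reserved range, bringing up several servers becomes a loop over addresses. A dry-run sketch, using addresses from the 192.0.2.0/24 documentation range as stand-ins and an assumed naming scheme:

```shell
#!/bin/sh
# Print a docker run command for each container address (dry run; remove
# "echo" to actually launch). Addresses and names are illustrative stand-ins.
launch_wcs_fleet() {
  n=0
  for ip in "$@"; do
    n=$((n + 1))
    echo docker run -e PASSWORD=password -e LICENSE=license_number \
         -e LOCAL_IP="$ip" --net new-testnet --ip "$ip" \
         --name "wcs-$n" --rm -d flashphoner/webcallserver:latest
  done
}

launch_wcs_fleet 192.0.2.10 192.0.2.11 192.0.2.12
```

The matching teardown is symmetrical: loop over the names and `docker stop` each one.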

Running containers in Docker can be challenging only for beginners. Once you understand the technology a little, you will appreciate how simple and convenient it is. I really hope that my experience will help some of you appreciate containerization.


Useful links:

WCS in Docker

Deploying WCS in Docker

WCS on DockerHub
