
DevOps

Software Development Methodology


Using a headless browser for WebRTC load tests

Reading time: 6 min
Views: 3.9K

In the previous article we went over a load test whose data could be used to choose a load-appropriate server. In the course of the testing, we published a stream on one WCS and picked that stream up several times using a second WCS. The results could then serve as a basis for decisions on the server's operability.

Some would (justly) have concerns about possible bias in such a test — after all, one of our servers was used to test another one of our servers. Could it be that we were using specially optimized code that skewed the results in our favor?
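Purely as an illustration of the headless-browser idea from the title (the player URL, subscriber count and timings below are assumptions, not the article's actual test bench), such a load generator might be sketched like this:

```python
# Hypothetical sketch: N headless Chrome instances each open a WebRTC player page,
# so the streams are pulled by a real browser engine rather than by a second WCS.
import time
from concurrent.futures import ThreadPoolExecutor
from selenium import webdriver

PLAYER_URL = "https://wcs.example.com/player?streamName=test"  # placeholder
SUBSCRIBERS = 50                                               # placeholder
PLAY_SECONDS = 300                                             # placeholder

def run_subscriber(_: int) -> None:
    options = webdriver.ChromeOptions()
    options.add_argument("--headless")
    options.add_argument("--use-fake-ui-for-media-stream")  # auto-accept media prompts
    driver = webdriver.Chrome(options=options)
    try:
        driver.get(PLAYER_URL)
        time.sleep(PLAY_SECONDS)  # keep the session alive while the stream plays
    finally:
        driver.quit()

with ThreadPoolExecutor(max_workers=SUBSCRIBERS) as pool:
    list(pool.map(run_subscriber, range(SUBSCRIBERS)))
```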

Read more
Total votes 1: ↑1 and ↓0 (+1)
Comments: 0

Choosing a server for 1000 WebRTC streams

Reading time: 9 min
Views: 2K

In any project, a great deal of importance is placed on the selection of server hardware, and WebRTC streaming is no exception. One of the key principles of such a selection is balance: the hardware should be powerful enough to handle the streams with no drop in quality, but not so powerful that resources go to waste. So, how does one choose the right server?

Read more
Total votes 3: ↑3 and ↓0 (+3)
Comments: 0

Google Cloud Platform for WebRTC CDN with Balancing and Autoscaling

Reading time: 9 min
Views: 1.5K

In the previous article we refreshed our memory of WebRTC CDN and the ways this technology helps to minimize latency for WebRTC streams. We also discussed why load balancing and autoscaling wouldn't be amiss in CDNs. Here are the main points from the article:

Read more
Total votes 1: ↑1 and ↓0 (+1)
Comments: 0

AWS, ELB, CDN, Autoscaling and other abbreviations and terms related to low-latency WebRTC

Reading time: 11 min
Views: 1.2K

Modern browsers do not give users a choice between using WebRTC and not using it. And while you can play back streams using HLS or MSE, WebRTC remains the only tool for capturing camera feeds and publishing streams from a browser. The browser developers have accepted this "format" and integrated it into their products, just as they used to support the Flash Player as a plugin. The only difference is that WebRTC comes natively integrated into the browser — as code, not a plugin. If, in a few years, a new and better library for video streaming is introduced, they will undoubtedly make the switch. But these days Chrome maintains its dominance, so no contenders to WebRTC are in sight.

Read more
Total votes 1: ↑1 and ↓0 (+1)
Comments: 0

Automate it, or Docker container delivery for WebRTC

Reading time: 8 min
Views: 4.1K

The vast majority of IT specialists in various fields strive to perform as few actions manually as possible. I'm not afraid of the loud words: whatever can be automated must be automated!

Let's imagine a situation: you need to deploy a lot of servers of the same type, and do it quickly. Quickly deploy, quickly undeploy. For example, test rigs for developers. When development is carried out in parallel, you may need to separate the developers so that they don't get in each other's way and one developer's mistakes don't block the others' work.

There may be several ways to solve this problem:
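One such way, sketched below purely as an assumption (the image name and ports are placeholders, and this is not necessarily the approach the article settles on), is to stamp out one disposable container per developer with the Docker SDK for Python:

```python
# Hypothetical sketch: one disposable, identically configured container per developer.
import docker  # Docker SDK for Python (pip install docker)

client = docker.from_env()

def deploy_rig(developer: str, host_port: int):
    # Image name and port mapping are placeholders, not the article's actual setup.
    return client.containers.run(
        "my-webrtc-server:latest",
        name=f"rig-{developer}",
        ports={"8443/tcp": host_port},
        detach=True,
    )

def undeploy_rig(developer: str) -> None:
    container = client.containers.get(f"rig-{developer}")
    container.remove(force=True)  # stop and delete in one step

rig = deploy_rig("alice", 18443)
print(rig.status)
undeploy_rig("alice")
```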

Read more
Total votes 1: ↑1 and ↓0 (+1)
Comments: 5

«No Windows no problems» What?

Reading time: 5 min
Views: 4.3K

Hi, my name is Alex and I am a DevOps engineer at Altenar. "No Windows, no problems" was the answer I got when I asked an Ansible guru "How do you manage Windows?" at one of the local Ansible meetups. Although we have been running a modern stack (k8s, helm, .NET Core, etc.) in production for about two years, that's not how it has always been.

Read more
Total votes 5: ↑5 and ↓0 (+5)
Comments: 3

PVS-Studio New Features for Notifying Developers About Errors Found

Reading time: 6 min
Views: 548

PVS-Studio user support often receives clients' suggestions on product improvement, and we are happy to implement many of them. Recently one of the users suggested refining the automatic developer notification utility (Blame Notifier). They asked us to make Blame Notifier extract the date and the code revision for which the analyzer issued a message, using blame information from the version control system. This feature allowed us to expand the utility's capabilities, which we'll discuss in this article.
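As a rough illustration of the underlying idea (not PVS-Studio's actual implementation), Git's blame output already carries the revision and commit date for every line, so a utility could pick them up along these lines; the file path and line number below are placeholders:

```python
# Hypothetical sketch: pull the revision and commit date for one line via git blame.
import subprocess
from datetime import datetime, timezone

def blame_line(path: str, line: int) -> tuple[str, datetime]:
    out = subprocess.run(
        ["git", "blame", "--porcelain", "-L", f"{line},{line}", path],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    revision = out[0].split()[0]  # first token of the porcelain header is the commit hash
    author_time = next(l for l in out if l.startswith("author-time "))
    date = datetime.fromtimestamp(int(author_time.split()[1]), tz=timezone.utc)
    return revision, date

rev, date = blame_line("src/main.cpp", 42)  # placeholder file and line
print(rev, date.isoformat())
```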

Read more
Total votes 2: ↑1 and ↓1 (+2)
Comments: 0

IncrediBuild: How to Speed up Your Project's Build and Analysis

Reading time: 7 min
Views: 1.4K

"How much longer are you going to build it?" - a phrase that every developer has uttered at least once in the middle of the night. Yes, a build can be long and there is no escaping it. One does not simply redistribute the whole thing among 100+ cores, instead of some pathetic 8-12 ones. Or is it possible?

Read more
Rating: 0
Comments: 0

11 Kubernetes implementation mistakes – and how to avoid them

Reading time: 13 min
Views: 4.5K

I manage a team that designs and introduces an in-house Kubernetes aaS at Mail.ru Cloud Solutions. We often see a lack of understanding of this technology, so I'd like to talk about common strategic mistakes made when implementing Kubernetes in major projects.

Most of the problems arise because the technology is quite sophisticated. There are unobvious implementation and operation challenges, as well as advantages that go underused, all of which result in wasted money. Another issue is the global lack of knowledge of and experience with Kubernetes: learning it by the book can be tricky, and hiring qualified staff can be challenging. All the hype complicates Kubernetes-related decision making. Curiously enough, Kubernetes is often implemented rather formally: just so it's there and somehow makes life better.

Hopefully, this post will help you to make a decision you will feel proud of later (and won’t regret or feel like building a time machine to undo it).
Read more →
Total votes 18: ↑18 and ↓0 (+18)
Comments: 1

Building projects (CI/CD), instruments

Reading time: 7 min
Views: 1.6K

In some projects, the build script plays the role of Cinderella. The team focuses its main effort on code development, and the build process itself may be handled by people who are far from development (for example, those responsible for operation or deployment). If the build script somehow works, everyone prefers not to touch it, and no one ever thinks about optimizing it. However, in large heterogeneous projects the build process can be quite complex, and it is worth approaching it as a project in its own right. If you treat the build script as a secondary, unimportant project, the result will be an indigestible imperative script that is rather difficult to maintain.


In this note we will take a look at the criteria by which we chose the toolkit, and in the next one, at how we use this toolkit. (There is also a Russian version.)


CI/CD (opensource.com)

Read more →
Rating: 0
Comments: 0

How to Get Nice Error Reports Using SARIF in GitHub

Reading time: 7 min
Views: 1.6K

Let's say you use GitHub, write code, and do other fun stuff. You also use a static analyzer to enhance your work quality and save time. Then an idea comes to you: why not view the errors the analyzer found right in GitHub? And it would be great if they looked nice, too. So, what should you do? The answer is very simple: SARIF is right for you. This article will cover what SARIF is and how to set it up. Enjoy the reading!
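To give a feel for the format before diving in (a hand-rolled minimal example, not the output of any particular analyzer; the tool name, rule id and location are made up), a SARIF report is just a JSON document:

```python
# Hypothetical sketch: the smallest useful SARIF 2.1.0 report, written from Python.
import json

report = {
    "version": "2.1.0",
    "runs": [{
        "tool": {"driver": {"name": "MyAnalyzer"}},  # placeholder tool name
        "results": [{
            "ruleId": "V001",                        # placeholder rule id
            "level": "warning",
            "message": {"text": "Possible null dereference."},
            "locations": [{
                "physicalLocation": {
                    "artifactLocation": {"uri": "src/main.c"},
                    "region": {"startLine": 42},
                },
            }],
        }],
    }],
}

with open("analysis.sarif", "w") as f:
    json.dump(report, f, indent=2)
```

Once such a file is uploaded to GitHub (for instance, with the upload-sarif action), the results are shown as code scanning alerts.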

Read more
Total votes 3: ↑3 and ↓0 (+3)
Comments: 0

Prometheus in Action: from default counters to SLO-related queries

Reading time: 8 min
Views: 7.6K

All Prometheus metrics are based on time series: streams of timestamped values belonging to the same metric. Each time series is uniquely identified by its metric name and optional key-value pairs called labels. The metric name specifies some characteristic of the measured system, such as http_requests_total, the total number of received HTTP requests. In practice, you will often be interested in some subset of the values of a metric, for example, in the number of requests received by a particular endpoint; this is where labels come in handy. We can partition a metric by adding an endpoint label and see the statistics for a particular endpoint: http_requests_total{endpoint="api/status"}. Every scraped metric also gets two automatically attached labels, job and instance; we will see their roles in the next section.

Prometheus provides a functional query language called PromQL. The result of a query evaluates to one of four types:

Scalar (aka float)

String (currently unused)

Instant Vector - a set of time series that have exactly one value per timestamp.

Range Vector - a set of time series that have a range of values between two timestamps.

At first glance, an Instant Vector might look like an array, and a Range Vector like a matrix.

If that were the case, then a Range Vector for a single time series would "downgrade" to an Instant Vector. However, that's not the case:
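As an aside, the difference between the two result types is easy to observe directly through Prometheus's HTTP API. In this small sketch the server address is an assumption, and the two queries reuse the metric from above:

```python
# Hypothetical sketch: ask Prometheus's HTTP API for an instant vector and a range vector.
import requests

PROM = "http://localhost:9090"  # placeholder Prometheus address

# Instant vector: one sample per matching time series at a single evaluation timestamp.
instant = requests.get(f"{PROM}/api/v1/query",
                       params={"query": 'http_requests_total{endpoint="api/status"}'}).json()
print(instant["data"]["resultType"])  # "vector"

# Range vector: a range of samples per matching time series, selected with [5m].
rng = requests.get(f"{PROM}/api/v1/query",
                   params={"query": "http_requests_total[5m]"}).json()
print(rng["data"]["resultType"])      # "matrix"
```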

Read more
Rating: 0
Comments: 2

Distributed Tracing for Microservice Architecture

Reading time: 8 min
Views: 5.6K

What is distributed tracing? Distributed tracing is a method used to profile and monitor applications, especially those built using a microservices architecture. Distributed tracing helps pinpoint where failures occur and what causes poor performance.

Let's have a look at a simple prototype. A user fetches information about a shipment from the `logistic` service. The `logistic` service does some computation and fetches the data from a database. It doesn't know the actual status of the shipment, so it has to fetch the updated status from another service, `tracking`. The `tracking` service also needs to fetch data from a database and do some computation.
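For a sense of what instrumenting such a prototype looks like, here is a minimal sketch using the OpenTelemetry Python SDK; the span names echo the example above, while everything else (the in-line stand-ins for the database and HTTP calls) is assumed:

```python
# Hypothetical sketch: the `logistic` service wraps its own work and the call to
# `tracking` in spans, so the whole request shows up as one distributed trace.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("logistic")

def get_shipment(shipment_id: str) -> dict:
    with tracer.start_as_current_span("logistic: get_shipment"):
        with tracer.start_as_current_span("logistic: db query"):
            shipment = {"id": shipment_id}     # stand-in for the database call
        with tracer.start_as_current_span("call tracking service"):
            shipment["status"] = "in transit"  # stand-in for the HTTP call to `tracking`
        return shipment

get_shipment("42")
```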

In the screenshot below, we see the whole life cycle of a request issued to the `logistic` service:

Read more
Rating: 0
Comments: 0

The Rules for Data Processing Pipeline Builders

Reading time: 5 min
Views: 3.8K


"Come, let us make bricks, and burn them thoroughly."
– legendary builders

You may have noticed by 2020 that data is eating the world. And whenever any reasonable amount of data needs processing, a complicated multi-stage data processing pipeline will be involved.


At Bumble — the parent company operating Badoo and Bumble apps — we apply hundreds of data transforming steps while processing our data sources: a high volume of user-generated events, production databases and external systems. This all adds up to quite a complex system! And just as with any other engineering system, unless carefully maintained, pipelines tend to turn into a house of cards — failing daily, requiring manual data fixes and constant monitoring.


For this reason, I want to share certain good engineering practices with you, ones that make it possible to build scalable data processing pipelines from composable steps. While some engineers understand such rules intuitively, I had to learn them by doing, making mistakes, fixing, sweating and fixing things again…


So behold! I bring you my favourite Rules for Data Processing Pipeline Builders.
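And to give a flavour of what "composable steps" can mean in practice, here is a toy sketch of my own (not Bumble's stack): small, independently testable steps chained into one pipeline.

```python
# Hypothetical sketch: a pipeline assembled from small, independently testable steps.
from functools import reduce
from typing import Callable, Iterable

Step = Callable[[Iterable[dict]], Iterable[dict]]

def drop_bots(events: Iterable[dict]) -> Iterable[dict]:
    return (e for e in events if not e.get("is_bot"))

def enrich_country(events: Iterable[dict]) -> Iterable[dict]:
    return ({**e, "country": e.get("country", "unknown")} for e in events)

def pipeline(*steps: Step) -> Step:
    # Compose the steps left to right into a single callable.
    return lambda events: reduce(lambda acc, step: step(acc), steps, events)

process = pipeline(drop_bots, enrich_country)
print(list(process([{"user": 1, "is_bot": True}, {"user": 2}])))
```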

Read more →
Total votes 9: ↑9 and ↓0 (+9)
Comments: 2
