
Modern Micro-Service Architecture: Key Challenges for System Analysts

Original author: Alexander Solyar

Our previous article covered the key principles behind micro-service infrastructure. Now it's time to discuss the transition from a monolith to micro-services, a transition that is typically driven by system analysts.

When designing micro-service infrastructure, system analysts are mainly responsible for precisely, compactly and formally defining the scope of each micro-service for an existing or newly identified business need.

Modern corporations have dozens, sometimes hundreds, of micro-services and integrations built on various technologies and architectural approaches. Although these processes work differently and pursue different goals, they overlap. In this context, teams have a hard time figuring out what is transmitted where and why, how the entire process operates, what can impact the process and what it can impact in turn. Normally, these questions are tackled by the system analysts. They are the heroes who help:

  • bring order to chaos;

  • identify the needs;

  • design an integration;

  • understand the key and potentially dangerous parts of the processes;

  • manage team knowledge about the system;

  • inform the business of the product's capabilities and limitations.

Why Micro-Services Are Good for System Analysts

First and foremost, they are interesting. It's like a big set of blocks you can use to piece together various processes. If you like creating integrations and digging into the technical aspects, you're on the right track.

Second, it's the simplicity of adding new products and features. To do this, new micro-services are designed, and you don't need to worry about the new code disrupting the existing processes. Furthermore, tracking dependencies and impacts is a lot easier than in a monolith.

Third, it's the simplicity of identifying the faulty micro-service when you encounter an error. Since micro-services are relatively simple, fixing the bugs is also quite easy. When updating, you can roll out your micro-service without bothering your neighbours.

Fourth, you'll work with professionals since all team members need to have a certain background and skills to implement this approach.

However, one must admit that all these advantages, and the benefits of MSA as a whole, can only be leveraged if you complete a huge (I'd even say herculean) task: efficiently managing the cascades of, normally, hundreds of micro-services currently in use.

Key Challenges for a System Analyst

Typically, when transitioning to micro-service architecture, system analysts have to resolve a whole series of bottlenecks, or challenges, in analysing and justifying the path that will determine the efficiency of the system.

Let's consider them one by one in more detail.

Breaking up the Monolith

Breaking up the monolith into micro-services, i.e. refactoring it, is not always feasible and typically creates a heap of problems. When transitioning legacy applications to micro-services, refactoring some subsystems may take a very long time or prove impossible. Yet you still need to interact with those obsolete subsystems, even though they may rely on outdated technology in their API design, data schemas, etc. That interaction is usually quite a handful too: at the very least, you need to translate between SOAP and REST.

In these cases, you might want to use the Anti-Corruption Layer approach. It isolates the subsystems by adding an extra layer between them, implemented either as an application component or as an independent service. This layer connects the two subsystems while keeping them as independent as possible. It contains all the logic needed to transmit data both ways, using each subsystem's own data model when interacting with it. This way, you can nibble away at the monolith's functionality piece by piece and transition it to micro-services while maintaining interaction with the monolith.
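
To make this concrete, here is a minimal sketch in Python of such a translation layer. All names (LegacyCustomerRecord, Customer, the field formats) are hypothetical, invented purely for illustration:

```python
from dataclasses import dataclass

# Hypothetical record format used by the legacy (monolithic) subsystem.
@dataclass
class LegacyCustomerRecord:
    CUST_ID: str      # zero-padded string, e.g. "000042"
    FULL_NM: str      # "Ivanov, Ivan"
    STATUS_CD: int    # 1 = active, 2 = closed

# Clean model used inside the new micro-service.
@dataclass
class Customer:
    id: int
    last_name: str
    first_name: str
    active: bool

class CustomerAntiCorruptionLayer:
    """Translates both ways, so neither side leaks its model into the other."""

    def to_new_model(self, rec: LegacyCustomerRecord) -> Customer:
        last, first = (part.strip() for part in rec.FULL_NM.split(",", 1))
        return Customer(id=int(rec.CUST_ID), last_name=last,
                        first_name=first, active=(rec.STATUS_CD == 1))

    def to_legacy_model(self, c: Customer) -> LegacyCustomerRecord:
        return LegacyCustomerRecord(CUST_ID=f"{c.id:06d}",
                                    FULL_NM=f"{c.last_name}, {c.first_name}",
                                    STATUS_CD=1 if c.active else 2)

acl = CustomerAntiCorruptionLayer()
legacy = LegacyCustomerRecord(CUST_ID="000042", FULL_NM="Ivanov, Ivan", STATUS_CD=1)
print(acl.to_new_model(legacy))  # Customer(id=42, last_name='Ivanov', ...)
```

The point is that both sides keep their own vocabulary: the legacy format never appears inside the new service, and vice versa.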

Creating the Data Management System

The key recommendation when transitioning to micro-services is to allocate a separate data repository to each service to prevent strong dependencies on the data level. Otherwise, you get a whole range of issues and make the micro-services dependent on each other, which contradicts the very principle of micro-service design.

What I mean here is the logical division of data: micro-services can use the same physical database but interact with a separate schema, collection or table within it.

The Database Per Service approach, which embodies these principles, boosts micro-service autonomy and loosens the coupling between the teams designing individual services.

This approach has disadvantages, too: it complicates the exchange of data between services and the provision of ACID transactional guarantees (Atomicity, Consistency, Isolation, Durability). It is not recommended for small applications; it is designed for large-scale projects with a lot of micro-services, where each team needs full use of its resources to develop faster, scale better and deploy independently.
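
As a rough illustration, here is a minimal Python sketch of the logical division described above: two services share one physical database, but each is confined to its own schema. The DSN, schema and table names are hypothetical:

```python
# One physical database, logically divided per service (illustrative DSN).
SHARED_DSN = "postgresql://db.internal:5432/corp"

class ServiceDatabase:
    """Wraps data access so a service only ever touches its own schema."""

    def __init__(self, schema: str):
        self.schema = schema

    def table(self, name: str) -> str:
        # Every table reference is qualified with the owning service's schema;
        # in PostgreSQL this could also be enforced with per-role grants.
        return f"{self.schema}.{name}"

orders_db = ServiceDatabase(schema="orders")
billing_db = ServiceDatabase(schema="billing")

print(orders_db.table("order_items"))  # orders.order_items
print(billing_db.table("invoices"))    # billing.invoices
```

Since the orders service never references billing.invoices directly, any data it needs from billing has to travel through an API or an event, which is exactly the dependency discipline this approach enforces.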

In big projects, where we need to manage data across a huge cascade of micro-services, we use the Saga approach to manage distributed transactions; it is commonly applied in financial services. It ensures the consistency of clients' financial transaction data, which is critical in the financial sector and a factor system analysts need to consider.

By the way, the conventional approach to ACID properties no longer works here, because transaction data are stored in isolated databases. The Saga approach resolves this issue by coordinating the process through a sequence of local, message-driven transactions that together ensure data consistency.

If this approach is used, each local transaction updates data in the repository of a single micro-service and publishes an event or message that launches the next local transaction, and so on. If a local transaction fails, a sequence of compensating transactions is carried out to cancel the changes made by the previous ones.

There are two main ways to coordinate transactions:

  • Choreography. Decentralised coordination wherein each micro-service listens to the events/messages of another micro-service and decides whether to take action or not.

  • Orchestration. Centralised coordination wherein a dedicated component (the orchestrator) tells each micro-service which action to take.

Using this approach resolves the issue of coordinating transactions in loosely coupled distributed systems, even though it makes debugging more complicated. Saga works well for event-driven systems and/or those using NoSQL databases. However, it's not recommended for systems using SQL databases or with circular dependencies between services.
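
Here is a minimal orchestration-style sketch in Python of the compensation mechanics described above. The step names, the business actions and the simulated failure are all hypothetical:

```python
class SagaStep:
    def __init__(self, name, action, compensation):
        self.name, self.action, self.compensation = name, action, compensation

def run_saga(steps) -> bool:
    completed = []
    try:
        for step in steps:
            step.action()          # local transaction in one micro-service
            completed.append(step)
    except Exception as exc:
        print(f"saga aborted at '{step.name}': {exc}; compensating...")
        for done in reversed(completed):
            done.compensation()    # undo earlier steps in reverse order
        return False
    return True

def credit_account():
    raise RuntimeError("insufficient funds")  # simulated local failure

transfer_saga = [
    SagaStep("reserve_amount",
             action=lambda: print("reserve 100 on account A"),
             compensation=lambda: print("release reservation on A")),
    SagaStep("credit_account",
             action=credit_account,
             compensation=lambda: print("reverse credit on B")),
]
run_saga(transfer_saga)  # prints the reservation, then the compensation
```

In a choreography variant there would be no run_saga coordinator: each service would react to the previous service's event and publish its own.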

Supporting the Communication Centralisation Process

It's important to mention the challenge of supporting centralised communication across a cascade of micro-services, a challenge you will inevitably have to resolve.

Why is it so important? It's simple: MSA is a Babylon of sorts where everything integrates with everything else in all conceivable ways.

Apart from its database, each micro-service also has an API that may follow any protocol and convention. The format of integration between micro-services is not regulated or limited in any way: within the same system, various protocols and approaches are used, both synchronous and asynchronous. The best practices for assembling a single system out of all this have been developed by trial and error.

The most obvious way to address micro-services is a direct call from the client to the service. It works well in small projects. However, we recommend the API Gateway approach for corporate-scale applications with a lot of micro-services.

This approach places a gateway between the client application and the micro-services, creating a single entry point for the client.

Depending on the particular goal of using this approach, the following variations are sometimes singled out:

  • Gateway Routing. The gateway is used as a reverse proxy rerouting client requests to the relevant service.

  • Gateway Aggregation. The gateway is used to route the client request to multiple micro-services and to return aggregated responses to the client.

  • Gateway Offloading. The gateway takes over the cross-cutting tasks common to the services: authentication, authorisation, SSL, logging, etc.

Using this approach:

  • first, reduces the number of requests;

  • second, makes the client independent of the protocols used by the services (REST and others);

  • third, ensures centralised management of cross-cutting functionality.

However, the gateway might become a single point of failure, so it requires careful monitoring; without scaling, it may also turn into a system bottleneck.
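
To illustrate the routing and aggregation variations, here is a minimal Python sketch. The internal service URLs and paths are hypothetical, and a real gateway would also handle the offloaded cross-cutting concerns:

```python
import json
from urllib.request import urlopen

# Gateway Routing: map a path prefix to the internal service behind it.
ROUTES = {
    "/orders": "http://orders.internal:8080",
    "/billing": "http://billing.internal:8080",
}

def route(path: str) -> str:
    for prefix, backend in ROUTES.items():
        if path.startswith(prefix):
            return backend + path   # reverse-proxy the request downstream
    raise KeyError(f"no route for {path}")

def order_summary(order_id: str) -> dict:
    # Gateway Aggregation: one client call fans out to several services
    # and comes back as a single combined response.
    order = json.load(urlopen(route(f"/orders/{order_id}")))
    invoice = json.load(urlopen(route(f"/billing/invoice/{order_id}")))
    return {"order": order, "invoice": invoice}
```

From the client's point of view there is one entry point and one response, regardless of how many services were involved.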

Big corporations also have external partners. To ensure effective communication with them, the Double API Gateway, or Mirror API Gateway, approach is used. Without a simple, single entry point, partners would get bogged down in a variety of services, authorisation methods, access certificates and integrations, whereas the goal is to streamline their access. When access is simple, the number of partners grows a lot faster.

This approach lets you add APIs tailored to the needs of each partner without having to store a lot of redundant settings in a single location, and streamlines ensuring the right level of protection against external attacks.

Deployment

It's also important to look at the micro-service deployment process. It typically raises a serious issue: rolling out new versions without users noticing, while minimising downtime.

The Blue-Green Deployment approach allows you to deploy new service versions without users having the slightest idea, while also minimising downtime. This is achieved by running two identical working environments denoted 'blue' and 'green'. For example, the blue environment is the existing active instance, while the green environment hosts the new application version deployed in parallel.

At any particular time, only one environment is active and serves all the working traffic. After the new version is successfully deployed and has passed all the tests, the traffic is switched to it. If errors occur, you can always roll back to the previous version.

If this approach is utilised, the user feels no discomfort when the system is updated, while system analysts can roll out new releases quietly, without going through hell.
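
The whole mechanism boils down to a single traffic pointer, as in this minimal Python sketch (the environment URLs are hypothetical):

```python
# Two identical environments run in parallel; one pointer decides which
# of them receives live traffic. Rollback is just flipping the pointer.
ENVIRONMENTS = {
    "blue": "http://app-blue.internal:8080",    # current live version
    "green": "http://app-green.internal:8080",  # new version, in parallel
}
active = "blue"

def switch_traffic(target: str) -> None:
    global active
    assert target in ENVIRONMENTS, f"unknown environment: {target}"
    active = target
    print(f"all traffic now goes to {target}: {ENVIRONMENTS[active]}")

switch_traffic("green")  # cut over once the green version passes its tests
switch_traffic("blue")   # instant rollback if errors show up
```

In practice the pointer lives in a load balancer or router configuration rather than in application code, but the switch-and-rollback logic is the same.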

Testing Micro-Services

Testing micro-services is a separate issue. The thing is, you need a lot of unit tests, and they need to be technical rather than business-oriented. Each service also needs behaviour tests; these are less numerous and are driven by business requirements. E2E tests are very important, but they are complicated and expensive. What's the way out of this complicated situation?

In this case, the Consumer-Driven Contract Testing approach may prove useful. It's recommended for large-scale projects with multiple teams working on various services. Under this approach, the set of automated tests for each service (the provider micro-service) is written by the developers of the other services that call it (the consumer micro-services). Each set of tests is a contract that verifies whether the provider meets the consumers' expectations; the tests include the request and the expected response.

The Consumer-Driven Contract Testing approach makes teams more autonomous and allows you to detect changes in services written by other teams in good time. However, it may require additional work on test integration, since teams may use different testing tools.
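
Here is a hand-rolled, minimal sketch of the idea in Python (dedicated tools such as Pact formalise it). The contract format, endpoint and field names are hypothetical:

```python
import json
from urllib.request import urlopen

# Written by the consumer team: the request it makes and the response
# shape it relies on.
CONTRACT = {
    "request": {"method": "GET", "path": "/customers/42"},
    "expected": {"status": 200,
                 "required_fields": ["id", "last_name", "active"]},
}

def verify(provider_base_url: str, contract: dict) -> None:
    """Replay the consumer's contract against the live provider."""
    resp = urlopen(provider_base_url + contract["request"]["path"])
    assert resp.status == contract["expected"]["status"]
    body = json.load(resp)
    for field in contract["expected"]["required_fields"]:
        assert field in body, f"contract broken: missing field '{field}'"

# Typically run in the provider's CI pipeline, once per consumer contract:
# verify("http://customers.internal:8080", CONTRACT)
```

If the provider team renames or drops a field some consumer depends on, this check fails in the provider's own pipeline, before the change reaches production.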

Monitoring

Effective monitoring of a cascade of micro-services is one of the cornerstones of MSA; as mentioned before, its quality affects a great deal.

When developing micro-services, it pays to keep logs in each service instance. The logs may contain errors, warnings, informational or debugging messages. However, as the number of services grows, analysing logs scattered across hosts becomes harder; it's really difficult, sometimes even impossible, to make sense of this variety of logs.

The Log Aggregation approach uses a centralised logging service that collects logs from each service instance. For users, this creates a single point for searching and analysing logs and for setting up alerts triggered by certain messages, which makes monitoring a lot easier.
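
A minimal sketch of the idea using Python's standard logging library: each service instance ships its records to a central collector instead of writing only to local files. The collector's host and endpoint are hypothetical:

```python
import logging
import logging.handlers

logger = logging.getLogger("orders-service")
logger.setLevel(logging.INFO)

# Every record is POSTed to the central logging service, where logs from
# all instances can be searched and used to trigger alerts.
central = logging.handlers.HTTPHandler(
    host="logs.internal:9200",
    url="/ingest",
    method="POST",
)
logger.addHandler(central)

logger.info("order %s accepted", "A-42")
logger.error("payment gateway timeout for order %s", "A-42")
```

In production setups the shipping is usually done by an agent or sidecar rather than the application itself, but the effect is the same: one searchable stream instead of logs scattered across hosts.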

In general, micro-services existing within a single platform or corporation often need common features for monitoring, logging, security settings, network services, etc. However, this is complicated by the fact that individual services in MSA can be built with various languages and technologies, meaning they may have their own dependencies and require specific language libraries, which may also lead to a number of challenges during integration.

The Ambassador approach places the client frameworks and libraries used to resolve these peripheral issues inside an auxiliary service that acts as a proxy between a client application, or the main service, and the other parts of the system.

Using the Ambassador approach enables you to do the following:

  • Standardise calls of client applications to common tasks regardless of the language and framework they use. Streamline integrations.

  • Resolve peripheral tasks without affecting the key functionality, for example by transferring their development to separate specialised teams. This is useful if you need centralised management of network calls and security features and want to avoid duplicating complex code in each component.

  • Add new functionality into legacy applications that are hard to refactor.

Since adding a proxy increases network latency, even if not significantly, the Ambassador template is not recommended when latency is critical. You also don't want this approach when a standard client library does the trick (for example, when a single language is used) or when common peripheral tasks cannot be singled out.
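
As a rough sketch of the pattern in Python, here is an ambassador that adds retries with backoff and an auth header on behalf of the main service; the token handling, endpoints and retry policy are all hypothetical:

```python
import time
from urllib.request import Request, urlopen

class Ambassador:
    """Proxies outbound calls, handling the peripheral concerns centrally."""

    def __init__(self, token: str, retries: int = 3):
        self.token, self.retries = token, retries

    def get(self, url: str) -> bytes:
        last_error = None
        for attempt in range(self.retries):
            try:
                req = Request(url, headers={
                    "Authorization": f"Bearer {self.token}",  # auth added here
                })
                return urlopen(req, timeout=2).read()
            except OSError as exc:        # network failure: retry with backoff
                last_error = exc
                time.sleep(2 ** attempt)
        raise last_error

# The main service stays free of retry and auth logic:
# ambassador = Ambassador(token="...")
# data = ambassador.get("http://billing.internal:8080/invoices/42")
```

In container platforms the ambassador is typically deployed as a sidecar process next to the main service, so a service in any language can use it over plain localhost calls.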

Managing Settings

Nearly all applications use various configuration parameters during operation: service addresses, database connection strings, account credentials, certificate paths, etc. These parameters vary depending on the runtime environment: Dev, Prod, etc. Storing configuration locally, in files deployed with the application, is seen as a particularly bad practice, especially when transitioning to micro-services: it poses serious security risks and requires a new deployment after each change of configuration parameters, which is very labour-intensive.

This is why it's best to use the External Configuration approach in corporate-level applications, where all configurations are stored in an external repository: a cloud storage service, a database or another system.

If this approach is used, the build process becomes independent of the runtime environment, and security risks are minimised because configurations for the working environment cease to be part of the code base.
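
A minimal sketch of this in Python: the service reads its settings from the environment (populated from an external store such as a secrets vault or a configuration service) instead of from files shipped with the code. The variable names are hypothetical:

```python
import os

class Config:
    """All settings come from outside the deployed artefact."""

    def __init__(self):
        self.env = os.environ.get("APP_ENV", "dev")   # Dev, Prod, ...
        self.db_dsn = self._required("APP_DB_DSN")
        self.cert_path = self._required("APP_CERT_PATH")

    @staticmethod
    def _required(name: str) -> str:
        value = os.environ.get(name)
        if value is None:
            # Fail fast at startup rather than mid-request.
            raise RuntimeError(f"missing configuration: {name}")
        return value

# The same build runs unchanged in every environment;
# only the injected values differ:
# config = Config()
```

The same artefact is promoted from Dev to Prod untouched; only the externally injected values change, and credentials never enter the repository.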

Does MSA hold the keys to the future?

Micro-service architecture helps accelerate the implementation and improvement of services. The speed of rolling out new functionality is what helps new users choose the product. Therefore, studying and implementing MSA today is an extra boost to project success in the future.
