=Paper=
{{Paper
|id=Vol-2872/paper09
|storemode=property
|title=In-Memory Database Testing Performance Measurements in Azure
|pdfUrl=https://ceur-ws.org/Vol-2872/paper09.pdf
|volume=Vol-2872
|authors=Ledia Bozo,Olta Dedej
|dblpUrl=https://dblp.org/rec/conf/rtacsit/BozoD21
}}
==In-Memory Database Testing Performance Measurements in Azure==
Ledia Bozo, Olta Dedej

University of Tirana, Faculty of Natural Sciences, Tirana, Albania

'''Abstract.''' Testing is the process in which different pieces of a system and its subsystems are made to interact in defined situations, taking into account factors such as performance, scalability, memory allocation and system reliability, in order to provide assurance that the software has been developed in accordance with its functional requirements. In this paper we first present different types of testing for a web application, namely integration and unit testing, and then examine a specific case study of the performance improvement obtained when in-memory database testing is implemented and of how the execution time is reduced. Furthermore, the study focuses on how automated tests can be executed in an online Azure repository every time a new pull request is published.

'''Keywords:''' System Testing, Integration Testing, Unit Testing, Web Host, In-Memory Database, Pipeline, Azure, Pull Request.

===1. Introduction===

The rapid pace of technology evolution shows the need in the market for dynamic systems that successfully fulfill the main goals of businesses and users. Testing of such applications goes hand in hand with their development techniques: as soon as we have an advance in the construction methodology of a system, we also have improvements in the ways that the system can be tested.

To test a web application during development, two main test methodologies are applied: unit testing and integration testing. For each part of the system such tests are written with the sole purpose of identifying errors, but building them is not easy and they often depend on other factors that affect their performance and scalability.

In this paper we discuss how the implementation of an in-memory database for unit testing and integration testing increases system performance, both by reducing execution time and by improving future scalability. Furthermore, the integration testing technique is presented by implementing in-host logic, which enables more flexible communication with other subsystems, whether or not they run in a distributed environment. The last section of this paper covers the logic of executing automated unit tests in an Azure DevOps repository whenever a pull request from a development team member is published.

===2. How can a software-based system be tested?===

There are many different types of tests that you can use to make sure that changes to your code are working and that the system is still error free. Not all tests are equal [1]. There are two categories of testing techniques: manual tests and automated tests. Manual tests are performed by a person or a group of people interacting with the software or its APIs using the appropriate tooling. Automated tests, on the other hand, are performed by a machine which executes a test script that has been written beforehand; they are also less expensive than manual tests [2]. Not all automated tests are the same: they differ from their complexity to the quality of their test results. Using automated tests is a way to provide continuous integration and continuous delivery. Test case designs can be automated for a set of test goals with the help of evolutionary algorithms [3].
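To make the distinction concrete, the following is a minimal sketch of the kind of automated test that a machine executes on every run. It is not taken from the paper: it assumes an xUnit test project, and the PriceCalculator class is a hypothetical unit defined inside the snippet only so that the example is self-contained.

<pre>
using Xunit;

// Hypothetical unit under test, defined here only to keep the example self-contained.
public class PriceCalculator
{
    public decimal ApplyDiscount(decimal price, decimal percentage)
        => price - price * percentage / 100m;
}

public class PriceCalculatorTests
{
    [Fact]
    public void ApplyDiscount_ReducesPriceByTheGivenPercentage()
    {
        var calculator = new PriceCalculator();                // Arrange
        decimal result = calculator.ApplyDiscount(100m, 10m);  // Act
        Assert.Equal(90m, result);                             // Assert
    }
}
</pre>

Once such a test exists, a test runner (for example dotnet test) can execute it automatically after every change, which is what makes automated tests cheaper to repeat than manual ones.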
====2.1. Automatic test types====

Manual testing of the complete system is costly and time consuming, because every test case comprises building up a scenario with real data. In contrast, automated tests can perform a great number of test cases with less effort. Therefore, automated functional tests performed in a controlled simulation environment, in addition to manual tests, can form an important quality assurance measure. The types of automated tests most commonly used in a built system are as follows:

* Unit tests, which are created to test each functionality of a unit independently from the other parts of the system.
* Integration tests, which show how different individual parts of a system behave when tested together. They are executed after unit tests, and the units serve as their input. Integration tests are used to verify the interaction with the database or the communication between microservices, in order to verify that they work as expected.
* Functional tests, which represent the most important test procedure: they are used to check the correct functioning of a system without analyzing its internal structures.
* Acceptance tests (according to the ISO 10360 standard), whose main principle is to perform an overall performance test of the entire coordinate measuring system (CMS). The test should therefore be performed on the integrated system (i.e. as a black-box testing technique) and should assess the system using the complete measurement chain.

A systematic test is divided into the core activities of test case design, test execution, monitoring and test evaluation, as well as the activities of test planning, test organization and test documentation, which prepare for the test and accompany it [4].

Unit tests and integration tests can be built using different methods to improve their performance. Two of these techniques are the in-memory database and in-host testing. In the sections below we demonstrate how an in-memory database and an in-host server are used with a web application in order to test the application faster and more efficiently.

====2.2. In-memory database technique of Entity Framework Core====

Entity Framework Core (EF Core) provides a persistence layer for .NET applications that allows developers to work at a higher level of abstraction when interacting with data and data-access interfaces [5]. EF Core offers an in-memory database provider which allows it to be used with an in-memory database [5]. This provider is designed for testing purposes only. The in-memory database can be used to test your code without the need to have a real database server configured on your local machine; it will also allow you to save data that would violate referential integrity constraints in a relational database.

Using an in-memory database with unit tests or integration tests shows how the general performance of executing those tests improves.
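The snippet below is a minimal sketch of how the EF Core in-memory provider can back a test. It assumes the Microsoft.EntityFrameworkCore.InMemory package and an xUnit test project; the Order entity and AppDbContext are hypothetical and are not taken from the application described later in this paper.

<pre>
using System.Linq;
using Microsoft.EntityFrameworkCore;
using Xunit;

// Hypothetical entity and DbContext, included so that the example is self-contained.
public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

public class AppDbContext : DbContext
{
    public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) { }
    public DbSet<Order> Orders => Set<Order>();
}

public class OrderPersistenceTests
{
    [Fact]
    public void SavedOrder_CanBeReadBack()
    {
        // The named in-memory store replaces a real database server for this test.
        var options = new DbContextOptionsBuilder<AppDbContext>()
            .UseInMemoryDatabase(databaseName: "orders-test-db")
            .Options;

        using (var context = new AppDbContext(options))
        {
            context.Orders.Add(new Order { Total = 42m });
            context.SaveChanges();
        }

        using (var context = new AppDbContext(options))
        {
            Assert.Equal(42m, context.Orders.Single().Total);
        }
    }
}
</pre>

Because nothing is written to disk or sent over a network connection, such a test typically runs faster than the same test against a physical database, which is the effect measured in Section 4.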
====2.3. In-Host Test Server====

Modern times require modern software techniques to handle all the requests that an application must serve in order to accomplish its goals. To build an application we can choose from different architectural styles, such as the Microservices Architecture, Model-View-Controller (MVC) / Model-View-ViewModel (MVVM), n-tier architectures, etc. The execution of tests written for an application that uses the Microservices Architecture requires the endpoint (which includes the logic of communication between microservices) to be running at all times, otherwise these tests cannot be executed. Using the in-host technique removes the need to have the endpoint running every time that integration tests are executed. In-host or Software Under Test (SUT) environments [6] can be configured using environment variables in .NET 5. Once a SUT is configured, executing the integration tests no longer requires the endpoint to be running, so they execute independently. Designing tests using this approach improves the execution time of integration tests and also verifies whether the different components of the application are working as expected.
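Reference [6] describes this in-host approach for ASP.NET Core using the Microsoft.AspNetCore.Mvc.Testing package. The sketch below assumes that package, an xUnit test project referencing the web project, a Startup class exposed by the application (the usual .NET 5 entry point), and a hypothetical /health endpoint; none of these details are specified in the paper.

<pre>
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Testing;
using Xunit;

// WebApplicationFactory<Startup> bootstraps the web application in-host (in memory),
// so no separately running endpoint is required while the integration tests execute.
public class HealthEndpointTests : IClassFixture<WebApplicationFactory<Startup>>
{
    private readonly WebApplicationFactory<Startup> _factory;

    public HealthEndpointTests(WebApplicationFactory<Startup> factory)
    {
        _factory = factory;
    }

    [Fact]
    public async Task HealthEndpoint_ReturnsSuccess()
    {
        // The client talks to the in-host server directly, without real network sockets.
        HttpClient client = _factory.CreateClient();

        HttpResponseMessage response = await client.GetAsync("/health");

        Assert.Equal(HttpStatusCode.OK, response.StatusCode);
    }
}
</pre>

Because the SUT is hosted inside the test process, such tests start quickly and run independently of any deployed endpoint, which is the property exploited in Section 4.1.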
===3. Monitoring Data Tool Application===

Monitoring Data Tool is an application specifically designed for a worldwide automobile company to handle the daily work of the company's subsidiaries. The core concept is to create different input forms for specific daily routines, such as entry reports for spare parts, vehicles, tires, etc. Once a form has been created by a super-user it can be shared with other users. The form is created using drag-and-drop functionality in the user interface and is then translated into machine-readable code. The application uses a Microservice Architecture, consisting of five main microservices that communicate with each other and with the Web App project according to the schema in Figure 1.

Figure 1: Communication between the Monitoring Data Tool projects

====3.1. Communication between projects and Azure Service Bus Queue====

Monitoring Data Tool (MTD) is designed to handle requests from users all over the world. It offers functionalities according to the role of the user: a system administrator has the right to manage all parts of the system regardless of area, while an area administrator can manage only requests inside their own area. An area can be a country in which the system is published, for example Germany, Italy or Spain. Making this work requires that all six projects communicate with each other in the order specified in Figure 1, but what is each project responsible for? Figure 2 explains how requests are processed in the different phases of their life cycle.

Figure 2: Request life-cycle

The primary purpose of Windows Azure Service Bus is to relay messages through the cloud in order to support application connectivity. The Service Bus gets its name from the Enterprise Service Bus (ESB) pattern. The ESB pattern defines a standards-based integration model that combines web services, data transformation, messaging and intelligent routing; the ESB platform is used to coordinate the interaction of diverse applications [7]. Azure Service Bus exposes an application's services through an endpoint. Each endpoint is assigned a URI (Universal Resource Identifier), which is published using the service register, and endpoints can then be discovered by clients that use the service register. Each endpoint provides a rendezvous address that can be used for communication. Some of the available communication types are [8]:

* One-Way Messaging
* Publish/Subscribe Messaging
* Direct Connectivity

Monitoring Data Tool uses publish/subscribe messaging, where different services, in particular Core and Common, are registered to the same Service Bus rendezvous address. When Core or Common submits a message to this address, the relay distributes the message to all services that have registered. In this way Core and Common can be both publishers and subscribers, sharing data with each other through queue messages.
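The paper does not show the messaging code, but a publish/subscribe exchange of this kind could look roughly like the sketch below. It assumes the Azure.Messaging.ServiceBus client library; the connection string, topic name and subscription name are placeholders rather than values taken from MTD.

<pre>
using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

public static class PubSubSketch
{
    public static async Task Main()
    {
        // Placeholder connection details; in MTD these would point to the shared
        // Service Bus namespace that Core and Common are both registered against.
        const string connectionString = "<service-bus-connection-string>";
        const string topicName = "mtd-events";
        const string subscriptionName = "common-subscription";

        await using var client = new ServiceBusClient(connectionString);

        // Publisher side: Core (or Common) sends a message to the shared address.
        ServiceBusSender sender = client.CreateSender(topicName);
        await sender.SendMessageAsync(new ServiceBusMessage("spare-parts report created"));

        // Subscriber side: every service with a subscription receives a copy.
        ServiceBusProcessor processor = client.CreateProcessor(topicName, subscriptionName);
        processor.ProcessMessageAsync += async args =>
        {
            Console.WriteLine($"Received: {args.Message.Body}");
            await args.CompleteMessageAsync(args.Message);
        };
        processor.ProcessErrorAsync += args =>
        {
            Console.WriteLine(args.Exception.Message);
            return Task.CompletedTask;
        };

        await processor.StartProcessingAsync();
        await Task.Delay(TimeSpan.FromSeconds(5)); // Let the handler run briefly.
        await processor.StopProcessingAsync();
    }
}
</pre>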
===4. Unit test performance measures in the Monitoring Data Tool application===

In the MTD application every piece of core logic has unit tests written with a unit test boilerplate generator, a tool that generates unit test boilerplate from a given C# class, setting up mocks for all of its dependencies [9]. Using this tool, 2900 unit tests have been generated for MTD, and they are executed every time the core logic changes.

The unit tests are designed to use two different database approaches (in-memory and physical). We compare the execution time of all 2900 unit tests in two test cases:

# In-memory database execution
# Physical database execution

The tests have been executed exactly 30 times for each database. The execution time for the in-memory database ranges from 4 minutes and 30 seconds, which is the best execution time, to 9 minutes and 45 seconds, which is the slowest. The following diagram shows the time in seconds needed to execute the 2900 unit tests in both approaches.

Figure 3: Execution time in seconds for the unit tests

While investigating the data we come to the conclusion that the in-memory database approach is less time consuming than the physical database approach. If the tests are repeated 50, 100 or 200 times, all the data show that the in-memory approach is the better choice for complex systems with a large number of unit tests. If we compare the best execution times (t1 for the physical database, t2 for the in-memory database), the results show that the in-memory database approach is 1.629 times faster than the physical database approach:

t1 ÷ t2 = 440 s ÷ 270 s = 1.629

The two approaches are also applied to the integration tests in order to show the difference in seconds.

====4.1. Integration test performance measures in the Monitoring Data Tool application====

The MTD application has in total 1760 integration tests written to test the communication between different parts of the system. Those tests are executed using the two approaches explained in Section 4. The integration tests have been executed using an in-host testing environment, which increases the general performance of executing these kinds of tests. The tests have been performed 20 times with each approach, and the best execution time in seconds is shown in Figure 4.

Figure 4: Execution time in seconds for the integration tests

If the best execution times for the physical and the in-memory database are compared, we come to the conclusion that the in-memory database approach is 1.604 times faster than the physical database approach:

t1 ÷ t2 = 337 s ÷ 210 s = 1.604

===5. Execution of unit tests in Azure DevOps===

The MTD application is configured in an Azure DevOps repository. Using Azure DevOps as an online server for the code has considerably improved the quality of the code that the MTD developers write. When developers commit changes to their branch, they are required to create a pull request to the main branch. Once the pull request is created, and before it can be completed, the Azure pipeline has to execute all the unit tests configured for the repository.

Azure DevOps CI/CD (continuous integration / continuous delivery) pipelines are used to manage building software. Continuous integration means that new code is frequently integrated with the existing code, for instance through pull requests [10]. During this integration the code is compiled to make sure nothing is broken, and sometimes running automated (unit) tests is also part of CI. Continuous delivery means that there is always a tested and working product ready to deploy; there are CD pipelines that build and deploy the application to test servers automatically. Unit tests have to be written for specific parts of the code, and their correctness and completeness are mostly checked in the review process [11].

Figure 5: MDT pipeline configuration

If the output of the steps shown in Figure 5 contains errors, the process starts again from step 1 only after the developer responsible for the pull request has completed the needed changes. Using Azure Pipelines helps considerably in improving the quality of software deployment.

===6. Conclusions===

In this paper we discussed how an application's testing performance can be improved using the in-memory database testing approach. We implemented this approach in unit tests and integration tests and compared it with the physical database approach; the results show that for both kinds of tests the better choice is the in-memory database, since it is designed specifically for testing purposes and it offers shorter execution times. In addition, configuring an Azure pipeline keeps the software cleaner of unneeded code and improves the quality of pull requests.

===7. References===

[1] Nadia Alshahwan and Mark Harman, Automated Web Application Testing Using Search Based Software Engineering, in: 2011 26th IEEE/ACM International Conference on Automated Software Engineering (ASE 2011), pp. 2-4.

[2] Wasif Afzal, Richard Torkar and Robert Feldt, A systematic review of search-based testing for non-functional system properties, Inf. Softw. Technol., 51:957-976, June 2009.

[3] H. H. AlBreiki and Q. H. Mahmoud, Evaluation of static analysis tools for software security, in: 2014 10th International Conference on Innovations in Information Technology (IIT), pp. 93-98, 2014.

[4] Mohd. Ehmer Khan, Different Forms of Software Testing Techniques for Finding Errors, IJCSI International Journal of Computer Science Issues, Vol. 7, Issue 3, No 1, 2010.

[5] Y.W., G.Zh., L.K., L.W., H.K., F.G., Ch.L., X.D., The Performance Survey of In Memory Database, in: 2015 21st IEEE International Conference on Parallel and Distributed Systems (ICPADS), doi:10.1109/ICPADS.2015.109.

[6] Integration tests in ASP.NET Core, Microsoft Docs, 2020. URL: https://docs.microsoft.com/en-us/aspnet/core/test/integration-tests?view=aspnetcore-5.0

[7] Don Champers, Windows Azure: Using Windows Azure's Service Bus to Solve Data Security Issues, MSc thesis, Columbus State University, 2010.

[8] Julia Lerman, Programming Entity Framework: Building Data-Centric Apps with the ADO.NET Entity Framework, 2nd ed., O'Reilly Media, United States of America, 2010.

[9] Unit Test Boilerplate Generator, Visual Studio Marketplace. URL: https://marketplace.visualstudio.com/items?itemName=RandomEngy.UnitTestBoilerplateGenerator

[10] Configure and pay for parallel jobs, Microsoft Docs, 2021. URL: https://docs.microsoft.com/en-us/azure/devops/pipelines/licensing/concurrent-jobs?view=azure-devops&tabs=ms-hosted

[11] Add Continuous Security Validation to your CI/CD Pipeline, Microsoft Docs, 2018. URL: https://docs.microsoft.com/en-us/azure/devops/migrate/security-validation-cicd-pipeline?view=azure-devops