Fortunately, decoupled backend systems are more or less the norm now. Whatever client you have, a desktop, mobile, or web application, there is a high probability that it talks to a backend to post or retrieve the data it needs.
RESTful web services and microservice-based architectures are common now, and these web services power most mobile and desktop apps. This decoupled architecture has many benefits: the backend systems (web services / microservices) can have their own release cycle, code base, deployment infrastructure, and automated tests.
Web services and microservices are ideal for testing because they can be tested in isolation. They usually have well-defined endpoints and expected responses for various conditions. They are often more testable, and since their implementation is (usually) client agnostic (platform / device / version etc.), the tests associated with them tend to be more robust.
But how do you test the clients that consume these web services? A service could have many kinds of clients: mobile apps, mobile web, web applications, desktop applications, or connected devices. How do you test that these clients are working as expected?
Often these clients have decent unit test coverage, and they are tested against a shared test environment for integration with the backend services. This approach works, but it is not very efficient.
With this setup, the test environment becomes a critical piece for many teams. Managing uptime, versions, and data on these test environments to satisfy the needs of the services teams as well as all the clients is a tricky task. The services team may want to deploy continuously to the integration environment, while the clients may want a stable environment, because changes in the test environment can severely impact all of them.
A shared test environment affects testers as well as developers. Many teams try to adhere to a policy of running acceptance tests against every branch before merging. If the test environment is shared, this policy becomes tricky to follow because:
- Test execution takes longer because the test environment is a shared resource.
- Changes in the test environment might result in false test failures.
- Tests executed from one client can have an unintentional side effect on the tests executed by other clients.
- If an execution queue is maintained, time to get feedback from the test execution will increase with the number of tests in the test suites.
All of this is detrimental to the productivity of the development team. The team can end up spending a lot of time fighting the wrong battles, and it can even create boundaries between teams.
So what could be the solution?
A mock server that mimics the backend can be of great help in such situations. I have been using MockServer for a while now, and these are some of its key benefits:
1. Reliable and robust tests
Data in the MockServer is more or less static. It is not affected by changes made by other teams, and it is not affected by the network, because it usually runs on your own machine. Unintentional or uninformed changes to data, and variable network speed, are often the main causes of intermittent test failures. With a MockServer, we can eliminate both causes and increase the reliability and robustness of our automated tests.
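To make the idea concrete, here is a minimal sketch of such a stub, built with Python's standard library rather than the MockServer tool itself (which is configured through its Java or REST API). The `/users/1` endpoint and its payload are invented for this illustration: the point is that the canned data is fixed in the test code, so no other team or network condition can change it underneath you.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Canned, static responses: nothing another team can change underneath us.
STUBS = {"/users/1": {"id": 1, "name": "Alice"}}

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = STUBS.get(self.path)
        if body is None:
            self.send_response(404)
            self.end_headers()
            return
        payload = json.dumps(body).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # keep test output quiet
        pass

# Port 0 asks the OS for any free port, so tests never collide.
server = HTTPServer(("127.0.0.1", 0), StubHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/users/1"
data = json.loads(urlopen(url).read())
print(data["name"])  # -> Alice
server.shutdown()
```

The client under test is simply pointed at `http://127.0.0.1:<port>` instead of the real backend; everything else about it stays unchanged.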
2. Faster execution
If the test environment is shared, we often need to make network calls to set up test data or to execute tests, and these network calls add delay to the test run. Moreover, because the response time of a network call is not predictable, we often add waits (fluent or static) to our automation projects to increase reliability. Lastly, in a shared test environment, response time also depends on the load on the environment.
With a MockServer running on the same machine, there are no remote network calls, so network latency is non-existent. Also, the MockServer is not a shared resource; it has to serve only one client. As a result, the automation code has to wait less (fluently or otherwise), and test execution can be much quicker.
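A quick way to see this is to time calls against a loopback stub. The sketch below uses Python's standard library (the `/health` path is invented for the example); a hundred round trips to a local mock complete in well under a second, which is why fluent waits and retries become unnecessary.

```python
import json
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class FastHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        payload = json.dumps({"status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # keep test output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), FastHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/health"

# Loopback round trips are fast and, more importantly, predictable:
# no load from other teams, no variable network latency.
start = time.perf_counter()
for _ in range(100):
    urlopen(url).read()
elapsed = time.perf_counter() - start
print(f"100 calls in {elapsed:.3f}s")
server.shutdown()
```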
3. Increased test coverage
It is very difficult to test error conditions in a shared test environment. How would your client handle an error response from the server? How would you test it? How would you test different variations of the error response? If you configure a shared test environment to return these error responses, it might affect other teams and their tests. And what if someone unintentionally changes it back? Worse, it may not even be possible to get a real test environment to return the data you need for edge cases.
MockServer gives you complete control over the response. You can not only specify the response you need, but also change it on the fly in the context of a single test. This flexibility makes it much easier to test various error conditions.
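Sketched with a stdlib stub (not MockServer's own expectation API; the `/orders/42` path and payloads are made up for the example), "changing the response on the fly" can be as simple as rewriting an in-memory expectation table between test steps:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.error import HTTPError
from urllib.request import urlopen

# Per-path expectations that a test can rewrite on the fly.
expectations = {"/orders/42": (200, {"id": 42, "state": "shipped"})}

class MockHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        status, body = expectations.get(self.path, (404, {}))
        payload = json.dumps(body).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # keep test output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), MockHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/orders/42"

# Happy path first...
assert urlopen(url).status == 200

# ...then flip the same endpoint to an error, just for this test.
expectations["/orders/42"] = (503, {"error": "backend unavailable"})
try:
    urlopen(url)
    status = 200
except HTTPError as e:
    status = e.code
print(status)  # -> 503
server.shutdown()
```

No other team or test is affected, and the error condition is set up and torn down entirely within the test's own scope.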
MockServer is an excellent tool: it can help us increase the reliability and robustness of our tests, reduce test execution time, and increase test coverage by exercising edge cases.
It is not difficult to implement, but you do need to write a little code to set up the mocks. It may take a while to convince the team to use a MockServer and to get it up and running, but believe me, robust and reliable test automation is an excellent reward for the investment.
I hope you found this post informative. Please share your thoughts; it would be nice to discuss any challenges you have faced or overcome with MockServer, or with mocking in general.