Hi,
in a side project I'm working on right now, we ran into a problem: after making some changes, we broke some other functionality on production. It's always a pain in the heart when you discover that you broke something on production and have to fix it immediately. The fix arrived on production in less than a minute, but I don't want to have such situations in the future, especially since I wasn't the person who discovered the error. It's also a little disappointing that, despite having a nice base of unit and integration tests, something like this happened.
Going into details: the problem was that we passed invalid data from the front end to the backend app when changing a value in one of the dropdowns, which resulted in a 500 status code from the backend.
To catch errors like this in the future, I decided to cover the application with automated tests. But I had to keep in mind that on production those tests shouldn't add anything to the system, because any action related to adding products, etc. triggers an event that sends an email/notification to every interested person (it's a finance-related project). So on production those tests should be smoke tests. Thanks to them, we get fast feedback when some core functionality doesn't work correctly.
Because the business logic is written in C# and F#, the automated tests were written in F# using the Canopy library. I started by configuring a project to run tests in a Chrome browser, so I created a project and added the NuGet packages below to it:
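For a Canopy project targeting Chrome, the set typically looks something like this (the exact list here is illustrative; canopy itself pulls in Selenium.WebDriver as a dependency, and the chromedriver binary is handled separately, as described later):

- canopy
- Selenium.WebDriver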
The first test is responsible for logging in to the application. It should enter the page, fill in the login and password boxes, and then click sign in. Then we should be redirected to the main application view. The test looks like this:
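Here is a minimal sketch of such a test using Canopy's classic API; the URL and selectors below are placeholders, not the real application's:

```fsharp
open canopy.classic

"login to the application" &&& fun _ ->
    // open the login page (placeholder URL)
    url "https://app.example.com/login"

    // fill in credentials (placeholder selectors)
    "#login" << "smoke-test-user"
    "#password" << "secret"

    click "Sign in"

    // after a successful login we should land on the main view
    on "https://app.example.com/dashboard"
```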
After that, we wrote some tests that simply open a subpage and check that it loads correctly. Those tests looked like this:
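A sketch of one of them, again with placeholder routes and selectors:

```fsharp
"products page loads correctly" &&& fun _ ->
    // open the subpage and wait until its main content shows up
    url "https://app.example.com/products"
    waitForElement "#products-grid"
    displayed "#products-grid"
```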
As we can see, the tests read very smoothly and don't need any additional description. The only interesting things are the operators and functions used by Canopy (the whole list can be found here):
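Briefly, per the Canopy documentation, the ones used in the sketches above are:

- url navigates the browser to the given address
- << writes text into the element matched by the selector on its left
- click clicks an element found by selector or by its text
- on asserts that the browser ended up on the given URL
- waitForElement / displayed wait for an element and assert that it is visible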
So this is what the first tests look like. How can we run them? We can use a normal browser or headless mode. When running locally on my computer, I use a normal browser so I can see that everything works. The code responsible for running the test(s) looks like this:
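A sketch of that main function, assuming Canopy 2.x with the classic API; the report path is a placeholder:

```fsharp
open canopy.classic
open canopy.runner.classic
open canopy.types
open canopy.reporters

[<EntryPoint>]
let main _ =
    // choose a browser and set its window size
    start chrome
    resize (1280, 1024)

    // produce an HTML report of the run (placeholder path)
    let htmlReporter = LiveHtmlReporter(Chrome, canopy.configuration.chromeDir)
    htmlReporter.reportPath <- Some "reports/smoke-tests"
    reporter <- htmlReporter :> IReporter

    // run all registered tests, close the browser,
    // and use the number of failures as the exit code
    run ()
    quit ()
    failedCount
```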
First, we choose a browser, then set some browser options like width, height, etc. We set a reporter so we get a nice report after the test run. Then we invoke the action which contains our test(s). At the end, we close the browser and return information about the failed tests.
So we don’t have anything to do right now, we could run tests! But when I run them at first I get an error that chrome driver doesn’t exist in a particular directory.
Now we have two options: we can ensure that the driver is always available on the PATH, or we can do what I did and link the driver into the project. This solution has its ups and downs. I have full control over the driver used to run the tests, but on the other hand, the driver may not be in line with the browser installed on the machine or agent, especially if it is a predefined Azure agent.
But for me it was good enough, so I had one more thing to do: set the path to the driver in the main method, like this:
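For example, assuming the chromedriver binary is copied to the build output next to the test executable:

```fsharp
// tell canopy where to find the chromedriver binary linked into the project
canopy.configuration.chromeDir <- System.AppContext.BaseDirectory
```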
When we run the tests now, everything should be fine.
The next step is to configure them on CI. The expected result is to run them after every deploy to the test/prod environment in the build pipeline. Every predefined agent on Azure DevOps should have a Chrome/Firefox browser preinstalled. But when we configure a step in our pipeline like this:
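Roughly like the following YAML sketch (DotNetCoreCLI@2 is the standard .NET Core task; the project path is a placeholder):

```yaml
# run the smoke tests right after the deployment steps
- task: DotNetCoreCLI@2
  displayName: Run smoke tests
  inputs:
    command: run
    projects: tests/SmokeTests/SmokeTests.fsproj
```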
And when we run this pipeline, we get an error.
This is because the agent is not able to run Chrome in normal (headed) mode. The solution to this problem is to use Chrome in headless mode. We adjust the code:
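Assuming Canopy's ChromeWithOptions start mode (Canopy also ships a ready-made chromeHeadless mode), the start-up part becomes something like:

```fsharp
open OpenQA.Selenium.Chrome
open canopy.classic
open canopy.types

// the build agent has no display, so run Chrome headless
let options = ChromeOptions()
options.AddArgument "--headless"
options.AddArgument "--window-size=1280,1024"
start (ChromeWithOptions options)
```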
We run the pipeline one more time, and the tests are green/red depending on the current status of the environment ;).
To sum up, thanks to those small things we can easily write and run some basic automated tests on Azure DevOps after each deployment, including a report of the results. This helps us keep the integrity and reliability of key functionality in our app across the front-end and backend layers.
Thanks for reading :)