Today we continue to share our expertise with you and answer the question of how we develop our Magento extensions.
In this article, we’ll talk about testing:
- How we check our extensions before release;
- What testing techniques we use;
- How we automate the testing process.
Where does testing begin?
The testing process starts as soon as the development team receives an epic. As QA engineers, we are responsible for product quality: we make sure extensions are bug-free and user-friendly. Once we receive a task for testing, we do a preliminary review, and at this early stage we consider what can go wrong and where the bottlenecks are, sketching out specific cases. Then we check whether the task meets the entry criteria and can be taken into testing right after development.
During estimation, we discuss our concerns with the developers to make sure they take them into account during implementation. The same works the other way around: developers warn us about glitches we may come across while testing the product or feature. This interaction scheme allows us to foresee risks and lays the groundwork for future test cases.
How do we estimate feature testing and address risks?
When developing a new module, QAs usually get several decomposed tasks, each representing a separate, fully-fledged feature of the new extension. Once all the features are developed, they add up to a complete module. This system lets us estimate each feature separately, taking into account all the possible risks, and then the extension as a whole. Thus, we estimate each feature to predict the volume of testing work required before the extension can be released.
How do we test features and write checklists?
When a task is already in development but not yet passed to testing, QAs can draw up a list of checks for the new feature. To avoid test-blocking bugs, they send the list to the developers so the implemented feature meets the requirements.
Once the feature is implemented and passed to the responsible testing specialist, they run through the test cases based on the previously created list. In this way, we check both the feature itself and its influence on vanilla Magento functionality.
First and foremost, we write test cases that cover the feature's own functionality. Say, for the feature that speeds up the pop-up window in the Free Gift extension, we check that the window opening time was reduced and that all the elements are displayed correctly. After that, we write more in-depth test cases covering the influence of different Magento settings: conditions, cart price rules, taxes, discounts, etc.
We also add general Magento test cases, such as behavior under different store and website settings, work with different customer groups and tax calculation configurations, and the possibility to create an order from both the frontend and the backend.
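These general checks are essentially a combinatorial matrix over store configurations. A minimal sketch of how such a matrix can be expanded into individual test cases (the store views, customer groups, and tax settings below are illustrative, not Amasty's actual test data):

```python
from itertools import product

# Illustrative dimensions for general Magento checks; real values
# would come from the store under test.
store_views = ["default", "second_store"]
customer_groups = ["NOT LOGGED IN", "General", "Wholesale"]
tax_configs = ["tax_incl", "tax_excl"]
order_sources = ["frontend", "backend"]

def build_general_cases():
    """Expand the dimensions into one test case per combination."""
    return [
        {"store": s, "group": g, "tax": t, "order_from": o}
        for s, g, t, o in product(store_views, customer_groups,
                                  tax_configs, order_sources)
    ]

cases = build_general_cases()
print(len(cases))  # 2 * 3 * 2 * 2 = 24 combinations
```

In practice the full cross product grows quickly, so teams usually prune it with pairwise selection or priority levels, as described below.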
To manage test documentation we use the TestRail test cases management software.
The test cases cover different types of verification and are grouped into four levels:
- Critical - the extension’s main features;
- High - module settings, positive cases;
- Medium - combinations of our and standard Magento settings, positive and negative cases;
- Low - additional verification or rare cases.
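Selecting a subset of cases by level is straightforward to express in code. A hypothetical sketch (the case names are invented for illustration):

```python
# Hypothetical test cases tagged with the priority levels above.
CASES = [
    ("gift is added when cart total exceeds threshold", "critical"),
    ("module can be enabled/disabled per store view",   "high"),
    ("gift rule combines with a cart price rule",       "medium"),
    ("pop-up handles a very long gift list",            "low"),
]

def select(cases, levels):
    """Return the names of all cases whose level is in `levels`."""
    return [name for name, level in cases if level in levels]

# Small feature change: run only critical and high cases.
smoke = select(CASES, {"critical", "high"})

# Full regression: run everything.
regression = select(CASES, {"critical", "high", "medium", "low"})
```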
This helps us test the module when it’s later extended with new features. When testing a small feature, we can run only the critical and high tests; when we need regression testing, we run the whole list of test cases.
What testing techniques & approaches do we use?
We use functional, security, access control, GUI, installation, usability, and other types of testing.
When talking about the techniques, we go for equivalence partitioning testing, boundary value analysis, cause/effect, error guessing, and others.
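Boundary value analysis and equivalence partitioning can be illustrated with a hypothetical "free gift above $100" promotion rule (the threshold and rule are invented for this sketch):

```python
# Sketch of boundary value analysis for a hypothetical promotion:
# a free gift is granted when the cart total reaches $100.
THRESHOLD = 100.00

def gift_applies(cart_total):
    """Assumed rule: the gift is granted at or above the threshold."""
    return cart_total >= THRESHOLD

# Boundary values: just below, exactly at, and just above the threshold.
boundary_inputs = [99.99, 100.00, 100.01]

# Equivalence partitions: one representative well inside each class
# ("no gift" and "gift"), since all members should behave alike.
partition_inputs = [10.00, 500.00]

results = {x: gift_applies(x) for x in boundary_inputs + partition_inputs}
```

The boundary checks catch off-by-one mistakes such as using `>` instead of `>=`, while the partition representatives keep the case count small.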
When a feature implies the display of any new elements, like the pop-up window in Free Gift or the banner in Promo Countdown, we check that they display correctly on mobile devices with different screen resolutions, as well as in different browsers (Chrome, IE11, Mozilla Firefox, Opera, Safari).
If a new feature implies API and GraphQL support, we write test cases for each request, for both frontend and backend (in the case of GraphQL, backend only). For testing the API and GraphQL, we use Postman, Swagger, ChromeiQL, and other tools.
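A GraphQL check is ultimately a POST request with a JSON body, which tools like Postman assemble for you. A minimal sketch of building such a request (the endpoint URL, SKU, and query are hypothetical; Magento exposes GraphQL at `/graphql`):

```python
import json
import urllib.request

# Hypothetical GraphQL query against a Magento store; the SKU
# and base URL are illustrative.
QUERY = """
{
  products(filter: {sku: {eq: "free-gift-demo"}}) {
    items { sku name }
  }
}
"""

def build_request(base_url, query):
    """Assemble the POST request a tool like Postman would send."""
    payload = json.dumps({"query": query}).encode("utf-8")
    return urllib.request.Request(
        url=base_url + "/graphql",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request("https://example.test", QUERY)
```

A real test would send the request with `urllib.request.urlopen(req)` and assert on the JSON response, for example that `data.products.items` contains the expected SKU.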
We also resort to exploratory testing and methodologies such as Whittaker’s testing tours. This helps us find issues in the extension that scripted test cases might miss.
Besides, we always test a new extension or feature on the most popular Magento versions. In-house research revealed the versions most popular among our customers, and these insights drive our testing strategy. In any case, we always test our extensions for compatibility with the latest Magento version available at the time of testing.
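Deriving a version matrix from such research can be sketched as follows (the version numbers and counts are invented; the rule "top N most reported plus the latest release" is an assumption, not Amasty's published policy):

```python
from collections import Counter

# Hypothetical survey data: which Magento version each customer
# store reported running.
reported_versions = (
    ["2.4.3"] * 120 + ["2.4.2"] * 80 + ["2.3.7"] * 45 + ["2.3.5"] * 10
)
LATEST = "2.4.4"  # always tested, regardless of popularity

def pick_versions(reports, latest, top_n=3):
    """Top N most-reported versions, plus the latest release."""
    popular = [v for v, _ in Counter(reports).most_common(top_n)]
    if latest not in popular:
        popular.append(latest)
    return popular

matrix = pick_versions(reported_versions, LATEST)
```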
When developing a new feature or extension, we always check it for compatibility with other Amasty extensions. We choose the testing order based on an analysis of functional dependencies between extensions and of the extensions clients typically buy together.
When a new Magento version is released, each QA team at Amasty receives a task to test its extensions for compatibility. QA specialists familiarize themselves with the new version’s features, run regression tests, and launch all the MFTF tests covering our extensions.
Moreover, we test our extensions for compatibility with pre-release Magento versions before they go to production, making sure our modules will work stably at release time. After the official release of a new Magento version, we run smoke tests once again to confirm our extensions work well and keep our clients happy.
How do we do pre-release testing?
After the features are fully tested and all glitches eliminated so that the features meet all the release criteria, we confirm the task can be passed to the pre-release testing stage.
At the pre-release testing stage, we create a release task with all the feature subtasks and a release branch into which all the new extension features are merged. This is the moment we test the extension in the form in which it will be shipped to customers. We verify that the features don’t conflict with each other and perform as described. All the checklists written for each new feature are now merged into a single set of test documentation for the extension.
After the feature compatibility check, we run all the test cases from the checklist. (For new features of an already released extension, we run smoke or regression tests, depending on the feature’s complexity.) We also run the default or custom MFTF tests, if any exist.
Extension release and transferring to the marketing department
After a successful testing session, we add a comment to the release task with the information about what and how we tested. Then our developers release the extension. Once a QA Team Lead renews the Demo for our website, we transfer the task to the Marketing Dept.
Why do we choose automation? How we came to MFTF
We strive to automate our testing processes, and one approach we actively apply is writing tests with the Magento Functional Testing Framework (MFTF). Each of our development teams now has a leading QA specialist responsible for writing these tests.
At the moment, we have a separate MFTF process in which our QA specialists spend a certain percentage of their work time covering extensions with MFTF tests, reviewing tests, and training other QAs. Thanks to MFTF, we can write more stable, extendable, and flexible tests while keeping a continuous in-team and cross-team learning process going.
Thanks to automation, we’ve reduced the time spent on regression runs while staying confident that our extensions work smoothly. At the moment, we are covering already-released extensions with tests, and some new functionality is covered with MFTF tests already at the development stage.
This is how we test new extensions at Amasty. We care about the solutions we release and about our clients, and we do our best to provide you with bug-free extensions.