Over the past three years (and through COVID), the Readiness team has worked on evolving the assessment algorithms behind our online compatibility, suitability, security and quality reporting. We undertook a major overhaul of our systems and added runtime assessments into the testing “mix”.
This was a game changer for us. As part of our heritage, we have access to a very large application testing library. Maybe the largest single source of applications in the world. I mean, “who else collects applications across countries/regions, partners and customers?” The applications have been anonymized (cleaned of any client data and license information), but they provide a very rich testing platform indeed.
So much testing data was generated that we had to rethink our approach to reporting. The data flood included capturing:
- millions of registry keys (funny how they add up so quickly)
- hundreds of thousands of files (and their properties)
- millions of rows of data relating to COM/DCOM, ODBC, Printing, Firewalls, Services, Scheduled tasks, and Defender settings
- millions of data images (screenshots of smoke testing results)
We ended up with (approximately) 80 billion possible permutations. Which seems like a lot.
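To give a rough sense of how one capture hangs together, here is a minimal sketch of the kind of per-application record those categories imply. This is an illustration only; the class and field names below are hypothetical, not our actual capture schema.

```python
# Hypothetical shape of one application's captured test data, based on the
# categories listed above. Names are illustrative, not the real schema.
from dataclasses import dataclass, field

@dataclass
class ApplicationCapture:
    app_name: str
    registry_keys: dict[str, str] = field(default_factory=dict)   # key path -> value
    files: dict[str, dict] = field(default_factory=dict)          # file path -> properties
    runtime_data: dict[str, str] = field(default_factory=dict)    # COM/DCOM, ODBC, printing, firewall, services, scheduled tasks, Defender
    screenshots: list[str] = field(default_factory=list)          # references to smoke-test images
```

Multiply records like that across hundreds of thousands of applications, platforms and builds, and the raw numbers above stop being surprising.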
Rather than report on all of these issues, we decided what we needed was an “exception report”: “Just show me the differences between two (or more) tests”. If the results across multiple tests, platforms and builds are the same, then we can safely move on. This helps a lot with customer-developed or custom applications, as quite often what is considered an error can be safely ignored. As long as you get the same error across two environments, the application is “behaving as expected”.
This type of difference or “Delta” reporting is incredibly useful. Say you have 500 applications that need to be tested against an existing platform and possibly two new platforms (maybe desktop and server). You don’t want all of the data back (believe me). You just want to see the differences: for example, the 10 applications that did not start as expected and the three others that generated errors, for a total of 13 applications out of the 500.
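In code, the exception report boils down to comparing two captured result sets and keeping only the checks that disagree. The sketch below is a simplified illustration of that idea; the function and variable names (delta_report, existing, new_build) are made up for this post, not our product’s API.

```python
# Minimal sketch of "Delta" (exception) reporting: compare two test runs and
# surface only the checks whose results differ. Names here are illustrative.
from typing import Any, Dict, Tuple

def delta_report(baseline: Dict[str, Any], candidate: Dict[str, Any]) -> Dict[str, Tuple[Any, Any]]:
    """Return only the checks that differ between two captured test runs."""
    differences = {}
    for check in baseline.keys() | candidate.keys():
        before = baseline.get(check, "<missing>")
        after = candidate.get(check, "<missing>")
        if before != after:
            differences[check] = (before, after)
    return differences

# The same error on both platforms is filtered out ("behaving as expected");
# only genuine deltas make it into the report.
existing = {"app_started": True, "odbc_driver": "v17", "smoke_test": "error 0x80004005"}
new_build = {"app_started": False, "odbc_driver": "v17", "smoke_test": "error 0x80004005"}
print(delta_report(existing, new_build))   # {'app_started': (True, False)}
```

Run across 500 applications and multiple platforms, that filter is what reduces the flood of raw results to the handful of applications (13 out of 500 in the example above) that actually need attention.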
Automating this process means repeatability: the platforms can change, and testing results are still delivered rapidly. This scenario applies very well to Patch Tuesday updates from Microsoft. Testing on the new build (with the latest patches added) needs to be done in a timely manner, ideally completed the same day.
This automated “Delta” testing technology is now sufficiently mature that we have embarked on the patent process. Given that we have offices in Canada, the US, Australia and the United Kingdom, there was a question about which country would be the best place to file a software patent.
After much deliberation, we have decided on the UK. The process can take a few years, but I will post on our progress as things develop.