Editor’s Note: ASTQB has established an “observatory” function in which we actively search for testing trends. See our trend analysis below written by Taz Daughtrey, ASTQB Director and ASTQB News Editor. Then join in on the conversation by sharing your ideas on Twitter @astqb #astqbhorizon.

“We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten. Don’t let yourself be lulled into inaction.”

When Bill Gates offered that insight in his book The Road Ahead (published in 1996), I wonder which changes he overestimated for 1998 and which he underestimated for 2006…much less for 2018.

The “hype” and “buzzwords” in today’s technology talk may well not pan out to be all that world-changing by 2020 (a convenient short-term target for foresighted pundits), but can we gain a better perspective on 2028, or even 2023?

Consensus-based standards, including those embodied in the ISTQB, ASTQB, and IQBBA certifications, respond methodically and often painfully slowly when confronted with major changes in technology or society’s expectations. Smaller excursions can prompt smaller changes with shorter lag times. For instance, as agile approaches unfold into variations such as DevOps with continuous integration and continuous testing, the underlying certification can be reinterpreted or tweaked as necessary.

It is the paradigm-changers that require much more thought and time, and that could well leave the certification space vacant for far too long. To “stay ahead of the curve” it would help to anticipate the curve, and even to try to see beyond it. If we call that curve of visibility the horizon, then we need to be scanning the horizon, perhaps even devising something like over-the-horizon radar, to get early indication of changes that will warrant significant organizational response.

ASTQB now has such an “observatory” function, in which we actively search for trends maturing more on the ten-year than the two-year scale. Here are some topics that seem worth exploring for their potential impact on the practice and profession of testing over the next-but-one revision cycle. Some may fade or morph in surprising ways, but I’ve included a few ideas about what prompted their inclusion in our viewfinder.


Cyber-physical Systems.

Computers do not simply compute anymore. Cyber-physical systems manipulate physical objects based on computational logic. Applications range from home appliances and medical devices to building automation, industrial machinery, and critical infrastructure components such as power, water, transportation, and emergency management. Computing failures thus affect the operation of corresponding physical systems, which can threaten safety or trigger loss of life, cause enormous economic damage, and impair vital public services.

Computing devices also don’t keep to themselves anymore. The Internet of Things refers to the interconnection of physical objects embedded with electronics, sensors, and software that enable them to collect and exchange data. Because many of these objects were never originally designed for computing or for remote access, the resulting devices often lack basic quality assurance or security controls. Misuse of these systems raises serious privacy concerns, and they have already been abused for ideological and criminal purposes.

The IEEE 1012 standard (whose first version I worked on before it came out over 30 years ago) has evolved from a standard on software-only V&V into the “IEEE Standard for System, Software, and Hardware Verification and Validation.” Undoubtedly there is testing certification material to be gleaned there and elsewhere.


Machine Learning.

A branch of Artificial Intelligence, this approach does not so much program the computer as teach it. Building and testing reliable, robust software is hard. It is even harder as we move from deterministic domains, such as balancing a checkbook, to uncertain domains, such as recognizing speech or objects in an image.

Traditionally, computer code is written to meet requirements and then tested to check that it meets them. But with machine learning the process is not so straightforward. It is difficult to write the requirements, much less specify the steps by which the program is to solve the problem at hand. Moreover, there is a moving target: these systems’ responses adapt to what they have learned from previous transactions, so they don’t always return the same answer to the same inputs.

One key might be to focus on the set of training data. Perhaps we can employ “data fuzzing” (random, incremental changes to the data) to analyze how sensitive the system’s behavior is to variations in the data used. And when there is no test oracle to verify the correctness of individual outputs, testing can instead check relations that should hold between related inputs and outputs, an approach that has come to be known as metamorphic testing. Despite some academic literature on the subject, there appear to be major challenges to widespread practical adoption, including the absence of tool support.
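To make these ideas concrete, here is a minimal sketch in Python. It is a toy illustration with made-up data, not a production tool: a trivial nearest-neighbor classifier, one metamorphic relation (shifting every point by the same offset must not change the prediction, since Euclidean distances are translation-invariant), and a simple “data fuzzing” loop that perturbs the training data to see how often the prediction flips.

    import random

    def nearest_neighbor_predict(train, query):
        """Toy 1-nearest-neighbor classifier: return the label of the
        training point closest (by Euclidean distance) to the query."""
        def dist2(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        _, label = min(train, key=lambda t: dist2(t[0], query))
        return label

    # Hypothetical training set: (feature vector, label) pairs.
    train = [((0.0, 0.0), "A"), ((0.2, 0.1), "A"),
             ((1.0, 1.0), "B"), ((0.9, 1.2), "B")]
    query = (0.8, 0.9)
    baseline = nearest_neighbor_predict(train, query)

    # Metamorphic relation: translating all training points and the
    # query by the same offset must not change the prediction. No
    # oracle for the "right" label is needed; we only check that the
    # relation holds.
    offset = (5.0, -3.0)
    shifted_train = [(tuple(x + o for x, o in zip(vec, offset)), lab)
                     for vec, lab in train]
    shifted_query = tuple(x + o for x, o in zip(query, offset))
    assert nearest_neighbor_predict(shifted_train, shifted_query) == baseline

    # "Data fuzzing" for sensitivity analysis: perturb the training
    # data slightly and count how often the prediction changes.
    random.seed(42)
    flips = 0
    for _ in range(100):
        fuzzed = [(tuple(x + random.gauss(0, 0.01) for x in vec), lab)
                  for vec, lab in train]
        if nearest_neighbor_predict(fuzzed, query) != baseline:
            flips += 1
    print(f"prediction changed in {flips}/100 fuzzed runs")

Note that neither check requires knowing the “correct” label for the query; we only verify properties that any correct implementation should preserve.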


Quantum Computing.

As the old saying goes, “If you think you understand quantum physics you don’t.” But — as the qubit vies to transcend the bit — this seemingly mystical abstraction is moving to empower real-world practice. Breakthroughs are promised in almost every realm explored. Several high-level programming languages for quantum computers have been developed, including QCL, Quipper, and most recently (from Microsoft) Q#. Recent months have also seen the introduction of a software development kit and a classical-computer-based analysis tool to simulate quantum software of up to 40 qubits.
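To make the qubit slightly less mystical, here is a minimal single-qubit state-vector simulation in plain Python, independent of any of the toolkits named above: prepare |0>, apply a Hadamard gate to create a superposition, and measure repeatedly.

    import math
    import random

    # State vector of one qubit: amplitudes for |0> and |1>.
    # A classical bit would always be exactly [1, 0] or [0, 1].
    state = [1.0, 0.0]  # start in |0>

    # Hadamard gate: puts the qubit into an equal superposition.
    h = 1 / math.sqrt(2)
    H = [[h, h],
         [h, -h]]

    def apply(gate, s):
        """Multiply a 2x2 gate matrix into the state vector."""
        return [gate[0][0] * s[0] + gate[0][1] * s[1],
                gate[1][0] * s[0] + gate[1][1] * s[1]]

    state = apply(H, state)  # now about [0.707, 0.707]

    # Measurement collapses the superposition: the squared amplitude
    # gives each outcome's probability, so we expect roughly 50/50.
    def measure(s):
        return 0 if random.random() < s[0] ** 2 else 1

    random.seed(1)
    counts = [0, 0]
    for _ in range(1000):
        counts[measure(state)] += 1
    print(counts)  # roughly [500, 500]

This also hints at why classical simulation hits a wall: an n-qubit state vector holds 2^n amplitudes, so 40 qubits already means roughly a trillion numbers to track.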

Small-scale quantum computing has already been employed to generate test cases for classical computer applications. A so-called quantum-inspired genetic algorithm has generated test data with greater structural coverage and improved efficiency. Yet how are we to test this or any other quantum computer application? If the oracle problem is regarded as a major difficulty for Machine Learning, it seems infinitely more intractable for applications using quantum computing.
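For a flavor of the quantum-inspired idea just mentioned, here is a heavily simplified sketch, with angles standing in for qubit amplitude pairs and an invented toy “program under test” in place of real coverage instrumentation. Each candidate test input is represented probabilistically, “observed” into concrete bits, scored by the branches it covers, and the probabilities are then rotated toward the best input found so far.

    import math
    import random

    def branches_covered(x):
        """Invented program under test: count how many of its branches
        the 8-bit input x exercises (a stand-in for real coverage)."""
        covered = 0
        if x % 2 == 0:
            covered += 1
        if x > 128:
            covered += 1
        if 40 <= x <= 60:
            covered += 1
        if x % 7 == 0:
            covered += 1
        return covered

    N_BITS, POP, GENERATIONS = 8, 10, 50
    DELTA = 0.05 * math.pi  # rotation step for the update below

    # Each gene is an angle theta; the chance of observing a 1 bit is
    # sin(theta)^2. Starting at pi/4 makes every bit a 50/50 coin flip.
    population = [[math.pi / 4] * N_BITS for _ in range(POP)]

    def observe(chrom):
        """Collapse a probabilistic chromosome into concrete bits."""
        return [1 if random.random() < math.sin(t) ** 2 else 0
                for t in chrom]

    def to_int(bits):
        return sum(b << i for i, b in enumerate(bits))

    random.seed(7)
    best_bits, best_fit = None, -1
    for _ in range(GENERATIONS):
        for chrom in population:
            bits = observe(chrom)
            fit = branches_covered(to_int(bits))
            if fit > best_fit:
                best_fit, best_bits = fit, bits
        # Rotate each gene's angle toward the best bits seen so far,
        # biasing future observations toward that solution.
        for chrom in population:
            for i, t in enumerate(chrom):
                target = math.pi / 2 if best_bits[i] else 0.0
                step = DELTA if target > t else (-DELTA if target < t else 0.0)
                chrom[i] = min(max(t + step, 0.0), math.pi / 2)

    print("best input:", to_int(best_bits), "covers", best_fit, "branches")

Everything here runs on a classical machine; only the probabilistic representation and the “observation” step borrow from quantum mechanics.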

Given the fragility of quantum states, users now need reassurance that accidental faults (or even malicious tampering) did not intrude on their computations. One recent research report suggested a retroactive verification approach. However, the proposed testing scheme requires networked access to five other quantum computers! Needless to say, the scalability of such an approach is suspect. Confirming the correctness of the results remains an even more distant goal.


What’s Next? Share Your Ideas.

This brief survey is meant to introduce the observatory concept as it may contribute to the development of improved and new ISTQB certifications. What else may be looming? What unanticipated paths may be opening to the testing profession? I can’t predict, but I look forward to the journey. Share your ideas with us on Twitter @astqb #astqbhorizon.


About the Author: Taz Daughtrey has been a Director on the American Software Testing Qualifications Board since 2005 and is currently serving as its Treasurer. He leads the cybersecurity education offerings at Central Virginia Community College.