By Patrick J. Quilter Jr.

Over the past three years we have heard many astonishing claims about the impact of machine learning (ML) and artificial intelligence (AI) on software testing. At times it is difficult to separate what is genuine from what is myth. Without a clear picture of what AI/ML means in software testing, our perceptions conflict with today's reality.

For example, many ML/AI solutions in software testing are source code analysis tools. These tools legitimately leverage algorithms to identify bugs, vulnerabilities, and violations of development standards at the source code level without being explicitly programmed by humans. They have provided valuable information and have matured over the years.

Solution providers are also trying to apply this same success to traditional quality assurance (QA) and software testing roles. Unlike source code analysis, these solutions generally sit outside the coding process and interact at the user interface (UI) level. Vendors claim that algorithmic approaches analyze various aspects of the UI and derive test cases. Are these claims real, new, and do they meet the definition of ML/AI, or have they been embellished for marketing purposes?

It's easy to let the imagination run wild with the looming vision of an autonomous, robot-like solution capable of taking over the role of a software tester, but rest assured, we are not quite there yet. Here are some things to keep in mind:

- The most impactful ML/AI solutions are happening at the coding level, and even these approaches have limitations.
- Algorithms are "trained" on development patterns, meaning they are only as effective as the training allows.
- Applying ML/AI at the UI level to find bugs is more difficult than finding them at the coding level, especially when the application is connected to other systems (as in integration or system testing).
Consider how many problems testers uncover because of usability issues, vague requirements, or constantly changing business rules. These are just some aspects of a proficient software tester's work, and at least for now, no ML/AI solution encompasses all of them:

- Critical thinking (especially about software ambiguities that arise)
- Analysis, precise documentation, and reporting
- Rapid test case authoring
- Complex automation programming and maintenance
- Organizing and communicating with multiple teams that have different concerns

In conclusion, we should look at these solutions as aids, not replacements, and continue to focus on the great work professional software testers have always done. Over time, ML/AI's place in software development will be handling the grunt work, so that testers can spend less effort on tedious bugs resulting from poor coding and more effort on impactful tasks like edge cases. We should embrace these new technologies and evaluate them based on their true return on investment. Ultimately, learning how to properly leverage these solutions will enable software testers to grow in other areas of software development.

Patrick J. Quilter Jr. is the CEO of Prometheus8, providing leadership at both the business and technical levels. He has over 20 years of software development experience leading FinTech teams within several enterprise organizations, and has also worked as a DevOps consultant providing technical guidance to the Department of Defense (DoD) and other federal government agencies.