Artificial Intelligence in Quality Assurance



  • Lately I keep seeing articles about how artificial intelligence will revolutionize how quality assurance is done (one simple example here). There are even online courses for AI in software testing.

    Is AI in software testing a real thing - in terms of creating test cases, creating the appropriate tests and executing them?



  • It is probably best defined in such a way that the workflows build on each other and thereby become more and more learnable. The aim is test automation.


    Bringing AI into Quality Assurance

    AI-led cognitive automation solutions (Intelligent Automation) combine the best of automation approaches with AI and help bring superior results. The focus is three dimensional – to eliminate test coverage overlaps, optimize efforts with more predictable testing and lastly to move from defect detection to defect prevention. Today, organizations have better machine learning algorithms for pattern analysis and processing huge volumes of data that result in better run-time decisions. For instance, during a software upgrade, machine learning algorithms can traverse the code to detect key changes in functionality and link them to the requirements in order to identify test cases. This helps optimize testing and prevents the making of decisions on hot spots that could lead to failure. Infosys PANDIT is one such AI-based testing platform that is helping our clients improve agility and predictability while optimizing efforts in testing by integrating AI in testing.

    Further information: https://www.infosys.com/insights/ai-automation/Pages/quality-assurance.aspx
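The idea described above, detecting changed code and linking it via requirements to test cases, can be sketched as a simple mapping exercise. This is a hypothetical illustration only; the module names, requirement IDs, and test names are invented, and a real platform would derive these mappings with machine learning rather than hand-written tables.

```python
# Hypothetical sketch: select test cases by linking changed modules to
# requirements, and requirements to test cases. All names are invented.

# Which source modules implement which requirements (assumed mapping).
MODULE_TO_REQUIREMENTS = {
    "checkout.py": {"REQ-101", "REQ-102"},
    "payments.py": {"REQ-102"},
    "search.py": {"REQ-200"},
}

# Which test cases verify which requirements (assumed mapping).
REQUIREMENT_TO_TESTS = {
    "REQ-101": {"test_cart_totals"},
    "REQ-102": {"test_payment_flow", "test_refunds"},
    "REQ-200": {"test_search_ranking"},
}

def select_tests(changed_modules):
    """Return the test cases linked, via requirements, to the changed modules."""
    tests = set()
    for module in changed_modules:
        for req in MODULE_TO_REQUIREMENTS.get(module, set()):
            tests |= REQUIREMENT_TO_TESTS.get(req, set())
    return tests

print(sorted(select_tests({"payments.py"})))
# A change in payments.py maps to REQ-102 and so selects the two payment tests:
# ['test_payment_flow', 'test_refunds']
```

The point of the sketch is only the shape of the optimization: instead of running every test on every change, the change set is narrowed to the tests that can actually be affected.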

    Today, Quality Assurance is applied in many processes, of course including CI, TDD, and BDD, as an AI that is capable of learning but draws its experience from the networked process flows.

    eBay is shown here as an example.


    eBay describes it this way in their article:

    Deep Learning Technology

    DL simulates the human way of finding errors or anomalies. Humans are driven by past experience and conditioning to make decisions. Machines with the proper application of training or conditioning can detect errors that surpass human precision.

    We begin our understanding of DL as a subset of a broader class called supervised machine learning. Supervised learning algorithms take a set of training examples called the training data. The learning algorithm is provided with the training data to learn a desired function. We then validate the learning algorithm against a separate set of test data. This process of learning from training data and validating against test data is called modeling.
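The modeling process described above can be illustrated with a deliberately tiny, dependency-free sketch: a 1-nearest-neighbour classifier fit on training data and validated against held-out test data. The data and labels are made up for illustration and have nothing to do with eBay's actual models.

```python
# Minimal illustration of supervised modeling: learn from training data,
# then validate against test data. Toy 1-nearest-neighbour classifier;
# the (feature, label) pairs are invented for this sketch.

def nearest_neighbour_predict(training_data, x):
    """Predict the label of the training example whose feature is closest to x."""
    _, label = min(training_data, key=lambda pair: abs(pair[0] - x))
    return label

# Training data: small feature values labelled "pass", large ones "fail".
training_data = [(1.0, "pass"), (1.5, "pass"), (8.0, "fail"), (9.0, "fail")]
# Held-out test data used only for validation.
test_data = [(1.2, "pass"), (8.5, "fail")]

# Validation step: fraction of test examples labelled correctly.
correct = sum(
    nearest_neighbour_predict(training_data, x) == y for x, y in test_data
)
accuracy = correct / len(test_data)
print(accuracy)  # 1.0 on this toy data
```

Real deep learning replaces the nearest-neighbour rule with a trained neural network, but the train/validate split shown here is the same.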

    Later in the article, eBay also explains, among other things, what an AI-operated GUI test looks like.


    Facebook describes their approach as follows:

    Why using build dependencies is inefficient

    A common approach to regression testing is to use information extracted from build metadata to determine which tests to run on a particular code change. By analyzing build dependencies between units of code, one can determine all tests that transitively depend on sources modified in that code change. For example, in the diagram below, circles represent tests; squares represent intermediate units of code, such as libraries; and diamonds represent individual source files in the repository. An arrow connects entities A → B if and only if B directly depends on A, which we interpret as A impacting B. The blue diamonds represent two files modified in a sample code change. All entities transitively dependent upon them are also shown in blue. In this scenario, a test selection strategy based on build dependencies would exercise tests 1, 2, 3, and 4. But tests 5 and 6 would not be exercised, as they do not depend on modified files.
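The dependency-based test selection in the quoted passage amounts to a graph reachability problem. The following is a sketch of that general technique, not Facebook's implementation; the node names mirror the scenario in the passage (two changed files, two libraries, six tests) but are otherwise invented.

```python
# Sketch of build-dependency test selection: an edge A -> B means B
# directly depends on A, so a change to A impacts B. Node names are
# invented to match the scenario in the quoted passage.
from collections import deque

# dependents[a] = entities that directly depend on a.
DEPENDENTS = {
    "file1": ["libA"],
    "file2": ["libA", "libB"],
    "libA": ["test1", "test2"],
    "libB": ["test3", "test4"],
    "libC": ["test5", "test6"],
}

TESTS = {"test1", "test2", "test3", "test4", "test5", "test6"}

def select_tests(changed_files):
    """Breadth-first walk to every entity transitively dependent on the change,
    then keep only the tests."""
    seen = set(changed_files)
    queue = deque(changed_files)
    while queue:
        node = queue.popleft()
        for dependent in DEPENDENTS.get(node, []):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return sorted(seen & TESTS)

print(select_tests(["file1", "file2"]))
# ['test1', 'test2', 'test3', 'test4'] -- tests 5 and 6 are not selected,
# matching the scenario in the quoted passage.
```

Facebook's point is that this selection is safe but coarse: build dependencies over-approximate impact, so many of the selected tests still cannot actually fail for a given change.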



