Exhaustive Resources on Concrete, Advanced test automation practices



  • Answer posted at the end of this post

    Disclaimer

    This topic might violate the general rule about how to ask questions, but I'm confident its answers will add great value, so I hope the question will not be closed.

    Question

    What resources (ideally books, because they're consistent) are there that explain advanced test automation practices? Something like a follow-up guide on test automation after one is familiar with the basic concepts (like having read a programming book and now needing to learn about software architecture).

    Example topics that should be covered

    • How do you express the "functional dependency order" of tests?
      I.e. test2 makes use of functionality that test1 tests exhaustively. How do I express that in a test? Background: if test1 and test2 both fail, it should be clear that test1 is the one where I should look for an error first.

    • How do I express priorities for tests?
      E.g. there are test_show_error_message and test_software_starts_up. test_software_starts_up is the test with the higher priority and, if anything happens, should be looked at first. How do I express that?

    • How do I express that tests take only a short time or a long time to run?

    • How do I express that tests have external dependencies?

    • Unit tests, Integration Tests, System Tests, ... Do I have a completely separated test system for each of these? If yes, how do I make sure to not forget to start up each of them after a change?

    • How should I implement the same test with many different inputs?
      E.g. I want to test an add(a,b) function: do I write many different tests with rather silly names like test_test_lower_bound, making them hard to keep an overview of, or do I use some kind of CSV table to feed in data? Pros? Cons?

    • How do I handle database tests?
      Do I build up and tear down a database server every time a test that needs the database starts? Do I use one huge test database containing all test data for all tests and use transactions and rollbacks? What if I test code that finishes a transaction? What if I'm working with a MySQL database?

    • How do I maintain all of the tests?

    What I've found so far

    The following resources explain about these advanced topics:

    • Robert Nystrom touches on some of these topics in his book http://gameprogrammingpatterns.com/ .

    • Justin Searls https://www.youtube.com/watch?v=VD51AkG8EZw

    • https://sqa.stackexchange.com/a/45609/52466

    • https://www.youtube.com/watch?v=VL-_pnICmGY

    • https://sqa.stackexchange.com/a/45608/52466

    • https://www.tutorialspoint.com/end-to-end-testing-tutorial-what-is-e2e-testing-with-example

    Background

    I see many, many books explaining the basics and abstract concepts of (auto) testing (e.g. why should we automate?). However, I cannot find resources explaining how to handle/manage/structure tests.

    Out of scope topics

    The following topics should not be covered, as they should already be familiar to the reader.

    • What is testing?
    • Why is testing necessary?
    • Psychology of testing
    • Unit tests are not integration tests
    • The Software development lifecycle
    • White box vs. black box testing
    • Test categories

    Answer

    (Still, some people here feel that answering your own question is a bad idea, so I'll post my answer here:)

    It seems like there really isn't any resource like this.

    For the sake of reference: I'm trying to gather information at http://gameprogrammingpatterns.com/ .



  • While I'm on the fence about closing vs. keeping this question open, I'll give a general answer a go.

    First, writing test automation is writing software. No one should tell you otherwise. So any book, course, etc that has a focus on writing software, architecting software, design patterns, etc, is valid and valuable when writing software testing frameworks! You can take those same concepts developers use and apply them to the testing side.

    For example, when writing Selenium frameworks, we often use POM - Page Object Model - which is a popular software design pattern. There is nothing inherent about this design pattern that says it's "for testing only"; in fact, it's used by developers in the applications they create. You could also use MVC or MVVM design patterns when creating testing frameworks.
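
    As a rough illustration (a minimal TypeScript sketch with selenium-webdriver; the LoginPage class, its selectors, and the URL are all hypothetical), a Page Object keeps locators and interactions out of the tests themselves:

    import { By, WebDriver } from "selenium-webdriver";

    // Hypothetical page object: it encapsulates the selectors and interactions
    // of a login page so individual tests never touch locators directly.
    class LoginPage {
      constructor(private driver: WebDriver) {}

      async open(): Promise<void> {
        await this.driver.get("https://example.com/login"); // assumed URL
      }

      async logIn(user: string, password: string): Promise<void> {
        await this.driver.findElement(By.id("username")).sendKeys(user);
        await this.driver.findElement(By.id("password")).sendKeys(password);
        await this.driver.findElement(By.css("button[type='submit']")).click();
      }

      async errorText(): Promise<string> {
        return this.driver.findElement(By.css(".error")).getText();
      }
    }

    // In a spec file, the test talks about behaviour, not page mechanics:
    //   const page = new LoginPage(driver);
    //   await page.open();
    //   await page.logIn("alice", "wrong-password");
    //   expect(await page.errorText()).toContain("Invalid credentials");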

    I cannot find resources explaining how to handle/manage/structure tests.

    This is just code organization, which different design patterns attempt to explain. A clear rule of thumb is to "keep like things together." You can also just have a basic directory structure. Note, not all of these are needed; use what you need when you need it; this is also non-exhaustive.

    • /lib
    • /logs
    • /config
    • /reports
    • /helper
    • /fixtures
    • /factories
    • /specs (or tests)

    Under /specs, you can have:

    • /unit
    • /integration
    • /e2e
    • folders by feature; an ecommerce example could be: login, header, footer, checkout, search, pdp, cart (a very POM-oriented structure)

    Don't put all your tests in one file. Aim to write clean code that is also readable. I've seen spec files 1000+ lines long due to poor organization. You can also separate out tests by tags in the spec file or even into separate files: positive, negative, sub-feature.

    Examples here can be:

    • featureName.positive.spec
    • featureName.negative.spec
    • featureName.subfeature.positive.spec
    • featureName.subfeature.negative.spec

    When it comes to the spec file, make sure that it's just about the test (assertion) and the code that supports getting to that assertion. Any code you keep repeating belongs in a helper file, in a fixture, or in a Page Object (if you're doing UI tests). Keep your code DRY (don't repeat yourself), not WET (we enjoy typing)!
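
    As a sketch (assuming a Jest-style runner; the createLoggedInSession helper and file names are hypothetical), the repeated setup moves into a helper module while the spec keeps only the steps and the assertion:

    // helper/session.ts - hypothetical helper module holding the repeated setup
    export async function createLoggedInSession(user: string = "test-user") {
      // ...the login/setup code you would otherwise repeat in every spec...
      return { user, token: "fake-token" };
    }

    // specs/checkout.positive.spec.ts
    import { createLoggedInSession } from "../helper/session";

    test("checkout succeeds for a logged-in user", async () => {
      const session = await createLoggedInSession();
      // the spec body stays focused on the steps and the assertion
      expect(session.token).toBeDefined();
    });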

    Unit tests, Integration Tests, System Tests, ... Do I have a completely separated test system for each of these? If yes, how do I make sure to not forget to start up each of them after a change?

    Other ways to phrase it:

    • Do you keep all your tests in the same repository as the main application?
    • Do you only keep unit tests in the same repository as the main application?
    • Do you keep all your tests in a separate repository?

    Yes, to all. No, to all. It really depends on team structure. Running them via separate repos can be handled via a CI/CD system.

    How do you express the "functional dependency order" of tests?

    You don't. Tests should follow the FIRST principle. Plenty has been written about this, including by me at https://sqa.stackexchange.com/questions/49833/how-to-automatically-test-mobile-device-and-desktop-website-connections/49842 . In general, tests should not rely on an order; they should be independent of it.
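
    To illustrate the "Independent" part of FIRST (a Jest-style sketch; createUser is a hypothetical setup helper), each test arranges its own preconditions instead of relying on an earlier test having run:

    // Hypothetical setup helper; in practice this might hit a test API or a factory.
    async function createUser(name: string) {
      return { id: Math.random().toString(36).slice(2), name };
    }

    test("a user can be created", async () => {
      const user = await createUser("alice");
      expect(user.id).toBeTruthy();
    });

    test("a user can be renamed", async () => {
      const user = await createUser("bob"); // own setup, no dependency on the test above
      const renamed = { ...user, name: "robert" };
      expect(renamed.name).toBe("robert");
    });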

    How do I express priorities for tests?

    A lot of test runners and test libraries have a tagging system. You can use that to target tests by tag, e.g. smoke tests, a feature name, security tests, etc.
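
    One common convention (sketched here Jest-style; the tag names and helper functions are made up) is to embed tags in the test names and filter on them when running:

    // Hypothetical helpers standing in for the real checks
    const appStarted = (): boolean => true;
    const errorShown = (_input: string): boolean => true;

    describe("startup @smoke @p1", () => {
      test("the application starts up", () => {
        expect(appStarted()).toBe(true);
      });
    });

    describe("error messages @p3", () => {
      test("an error message is shown for invalid input", () => {
        expect(errorShown("invalid")).toBe(true);
      });
    });

    // Run only the high-priority smoke tests by filtering on the tag in the name:
    //   npx jest -t "@smoke"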

    How do I express that tests take only a short time or a long time to run?

    Not sure there is an answer here. Is this question about writing tests? Running the tests? There is a different answer depending on which task.

    How do I express that tests have external dependencies?

    You don't. It's inherent to the type of testing. Unit tests usually don't have external dependencies. Integration tests usually do. Sometimes you can mock those external dependencies.
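
    A minimal sketch of mocking an external dependency by injecting a fake (the PaymentGateway interface, the checkout function, and all names are hypothetical):

    // The code under test talks to an external service through an interface.
    interface PaymentGateway {
      charge(amountCents: number): Promise<boolean>;
    }

    async function checkout(gateway: PaymentGateway, amountCents: number): Promise<string> {
      return (await gateway.charge(amountCents)) ? "paid" : "declined";
    }

    test("checkout reports 'paid' when the gateway accepts the charge", async () => {
      // Fake implementation: no network call, no real external dependency.
      const fakeGateway: PaymentGateway = { charge: async () => true };
      expect(await checkout(fakeGateway, 999)).toBe("paid");
    });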

    How should I implement the same test with many different inputs?

    You can write one test that takes different values of inputs. Here, add() is the method you call in your spec file that should return a value that you assert against.

    expect(add(1, 2)).toBe(3)
    expect(add(-1, -2)).toBe(-3)
    expect(add(0, 0)).toBe(0)
    

    Or, you create separate tests in your test library

    it "adds 2 positive numbers"
    it "adds 2 negative numbers"
    it "adds 2 floating point numbers"
    

    Or, you can store the input values in a JSON object, an array, a CSV file, etc., and loop through those values in a single test.
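
    For example, a Jest-style test.each loops a single test over a table of inputs (add() here is just the function from the question):

    function add(a: number, b: number): number {
      return a + b;
    }

    // One test definition, many input rows: [a, b, expected]
    test.each([
      [1, 2, 3],
      [-1, -2, -3],
      [0, 0, 0],
    ])("add(%d, %d) returns %d", (a, b, expected) => {
      expect(add(a, b)).toBe(expected);
    });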

    Many options. It's dependent on context.

    How do I handle database tests? Do I build up and tear down a database server every time a test that needs the database starts? Do I use one huge test database containing all test data for all tests and use transactions and rollbacks? What if I test code that finishes a transaction? What if I'm working with a MySQL database?

    You should ideally have a separate testing environment with a test database ready for use. The testing environment should mirror what is in production. On a team, this is usually already set up for you, often by a DevOps team. For proper e2e testing or CRUD-like tests, yes, do clean up after your tests. All testing libraries have beforeAll, beforeEach, afterAll, and afterEach hooks to set up and clean up tests.
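
    A rough sketch of those hooks (Jest-style; the in-memory connectToTestDb stand-in is hypothetical, in practice it would be your MySQL driver or ORM pointed at the dedicated test database):

    // Hypothetical in-memory stand-in for a real test-database client.
    function connectToTestDb() {
      let committed: string[] = []; // rows that survive across tests
      let working: string[] = [];   // rows visible inside the current transaction
      return {
        beginTransaction: async () => { working = [...committed]; },
        rollback: async () => { working = [...committed]; },
        insertUser: async (name: string) => { working.push(name); },
        countUsers: async () => working.length,
        close: async () => {},
      };
    }

    let db: ReturnType<typeof connectToTestDb>;

    beforeAll(async () => {
      db = connectToTestDb();      // connect once to the dedicated test database
    });

    beforeEach(async () => {
      await db.beginTransaction(); // each test runs inside its own transaction
    });

    afterEach(async () => {
      await db.rollback();         // discard the test's changes, the data stays clean
    });

    afterAll(async () => {
      await db.close();
    });

    test("inserting a user increases the user count", async () => {
      const before = await db.countUsers();
      await db.insertUser("test-user");
      expect(await db.countUsers()).toBe(before + 1);
    });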

    How do I maintain all of the tests?

    Like any other software. You will change the tests over time. You will delete things. You will add things. It gets maintained just like your main application.

    Just to reiterate, most of what I'm describing here is NOT testing-related. It's standard software architecture, design patterns, and organizational principles, so any books and resources you find on those can be used when creating software tests, frameworks, infrastructure, etc.

    Would it be useful for someone to write about all these things from a testing/QA perspective? 100%, but not entirely necessary.

