While I'm on the fence about closing vs. keeping this question open, I'll give a general answer a go.
First, writing test automation is writing software. No one should tell you otherwise. So any book, course, etc. that focuses on writing software, architecting software, design patterns, and so on is valid and valuable when writing software testing frameworks! You can take the same concepts developers use and apply them to the testing side.
For example, when writing Selenium frameworks, we often use POM (the Page Object Model), which is a popular software design pattern. There is nothing inherent about this design pattern that says it's "for testing only"; in fact, developers use it in the applications they create. You could also use the MVC or MVVM design patterns when creating testing frameworks.
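As a minimal sketch of the idea (assuming TypeScript with selenium-webdriver; the URL, locators, and `LoginPage` name are just illustrative, not from any particular codebase):

```typescript
import { By, WebDriver } from "selenium-webdriver";

// Hypothetical page object for a login screen; locators and URL are illustrative.
export class LoginPage {
  constructor(private driver: WebDriver) {}

  async open(): Promise<void> {
    await this.driver.get("https://example.com/login");
  }

  async logIn(username: string, password: string): Promise<void> {
    await this.driver.findElement(By.id("username")).sendKeys(username);
    await this.driver.findElement(By.id("password")).sendKeys(password);
    await this.driver.findElement(By.css("button[type='submit']")).click();
  }
}
```

A spec then talks only to `LoginPage`, so when the markup changes you update one class instead of every test.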
I cannot find resources explaining how to handle/manage/structure tests.
This is just code organization, which different design patterns attempt to address. A good rule of thumb is to "keep like things together." You can also just use a basic directory structure. Note that not all of these are needed; use what you need when you need it, and the list is not exhaustive.
- /lib
- /logs
- /config
- /reports
- /helper
- /fixtures
- /factories
- /specs (or tests)
Under /specs, you can have:
- /unit
- /integration
- /e2e
- folders by feature; for an e-commerce site this could be: login, header, footer, checkout, search, pdp, cart (a very POM-oriented structure)
Don't put all your tests in one file. Aim to write clean code that is also readable. I've seen spec files 1000+ lines long due to poor organization. You can also separate tests by tags within the spec file, or even into separate files: positive, negative, sub-feature.
Examples here can be:
- featureName.positive.spec
- featureName.negative.spec
- featureName.subfeature.positive.spec
- featureName.subfeature.negative.spec
When it comes to the spec file, make sure it contains only the test (the assertion) and the code that supports getting to that assertion. Any code you keep repeating belongs in a helper file, a fixture, or a Page Object (if you're doing UI tests). Keep your code DRY (don't repeat yourself), not WET (we enjoy typing)!
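As a rough illustration (assuming a Jest-style runner; `loginAs` and `CheckoutPage` are hypothetical helpers, not anything from your codebase), the spec stays focused on the assertion and delegates everything repetitive:

```typescript
import { loginAs } from "../helpers/auth";        // hypothetical shared helper
import { CheckoutPage } from "../pages/checkout"; // hypothetical page object

describe("checkout", () => {
  it("shows the order total after adding an item", async () => {
    // Setup is delegated to helpers/page objects...
    const page = new CheckoutPage(await loginAs("standard-user"));
    await page.addItem("sku-123");
    // ...so the spec is really just this assertion.
    expect(await page.orderTotal()).toBe("$10.00");
  });
});
```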
Unit tests, Integration Tests, System Tests, ... Do I have a completely separated test system for each of these? If yes, how do I make sure to not forget to start up each of them after a change?
Other ways to phrase it:
- Do you keep all your tests in the same repository as the main application?
- Do you only keep unit tests in the same repository as the main application?
- Do you keep all your tests in a separate repository?
Yes to all; no to all. It really depends on team structure. Tests kept in separate repos can be run and orchestrated via a CI/CD system.
How do you express the "functional dependency order" of tests?
You don't. Tests should follow the FIRST principles (Fast, Independent, Repeatable, Self-validating, Timely). Plenty has been written about this, including by me at https://sqa.stackexchange.com/questions/49833/how-to-automatically-test-mobile-device-and-desktop-website-connections/49842 . Even if tests happen to run in some order, they should be independent of it.
How do I express priorities for tests?
A lot of test runners and test libraries have a tagging system. You can use it to target tests by tag, e.g. smoke tests, a feature name, security tests, etc.
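One common convention (an assumption here, not the only way) is to embed tags in the test names and filter on them when running. With Jest, for example, `-t` selects tests whose names match a pattern:

```typescript
// Tags are just substrings in the test name that the runner can filter on.
describe("login @smoke", () => {
  it("accepts valid credentials @smoke @security", async () => {
    // ...
  });

  it("locks the account after 5 failed attempts @security", async () => {
    // ...
  });
});

// Then run only the smoke subset, e.g.:
//   npx jest -t "@smoke"
```

Mocha, Playwright, and most other runners have an equivalent grep/tag mechanism.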
How do I express that tests take only short time or a long time to test?
Not sure there is a single answer here. Is this question about writing the tests or running them? The answer differs depending on which task you mean.
How do I express that tests have external dependencies?
You don't. It's inherent to the type of testing: unit tests usually don't have external dependencies, integration tests usually do. Sometimes you can mock those external dependencies.
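When you do want to cut the dependency, one option is to inject it and substitute a fake in the test. A minimal sketch (TypeScript with a Jest-style runner; the `PaymentGateway` interface and `CheckoutService` are hypothetical):

```typescript
// The code under test depends on an interface, not a concrete external service.
interface PaymentGateway {
  charge(amountCents: number): Promise<boolean>;
}

class CheckoutService {
  constructor(private gateway: PaymentGateway) {}
  async pay(amountCents: number): Promise<string> {
    return (await this.gateway.charge(amountCents)) ? "paid" : "declined";
  }
}

it("marks the order paid when the gateway accepts the charge", async () => {
  // Fake gateway: no network calls, no real external dependency.
  const fakeGateway: PaymentGateway = { charge: async () => true };
  const service = new CheckoutService(fakeGateway);
  expect(await service.pay(500)).toBe("paid");
});
```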
How should I implement the same test with many different inputs?
You can write one test that takes different input values. Here, add() is the method you call in your spec file, which should return a value that you assert against:
add(1, 2)
add(-1, -2)
add(0, 0)
Or you can create separate tests in your test library:
it "adds 2 positive numbers"
it "adds 2 negative numbers"
it "adds 2 floating point numbers"
Or you can store the input values in a JSON object, array, CSV file, etc., and loop through those values in a single test.
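For instance, with Jest's `test.each` (one possible runner; `add` is the same hypothetical method as above) the same test body runs once per row of input data:

```typescript
import { add } from "../lib/math"; // hypothetical module under test

test.each([
  [1, 2, 3],    // two positive numbers
  [-1, -2, -3], // two negative numbers
  [0, 0, 0],    // zeros
])("add(%i, %i) returns %i", (a, b, expected) => {
  expect(add(a, b)).toBe(expected);
});
```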
Many options. It's dependent on context.
How do I handle database tests? Do I build up and destroy a database server every time a test that needs the database starts? Do I use one huge test database containing all test data for all tests and use transactions and rollbacks? What if I'm testing code that finishes transactions? What if I'm working with a MySQL database?
You should ideally have a separate testing environment with a test database ready for use. The testing environment should mirror what is in production. On a team, this is usually already set up for you, often by a DevOps team. For proper e2e testing or CRUD-like tests, yes, do clean up after your tests. Most testing libraries have beforeAll, beforeEach, afterAll, and afterEach hooks to set up and clean up the tests.
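As a sketch of that cleanup pattern (Jest-style hooks; the `db` client and `seedUser` factory are hypothetical placeholders for whatever your stack provides):

```typescript
import { db } from "../helpers/db";           // hypothetical test-database client
import { seedUser } from "../factories/user"; // hypothetical data factory

let userId: string;

beforeEach(async () => {
  // Create fresh data for every test so tests stay independent.
  userId = await seedUser(db, { name: "Test User" });
});

afterEach(async () => {
  // Clean up what this test created.
  await db.delete("users", userId);
});

it("updates the user's name", async () => {
  await db.update("users", userId, { name: "Renamed" });
  expect((await db.get("users", userId)).name).toBe("Renamed");
});
```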
How do I maintain all of the tests?
Like any other software. You will change the tests over time. You will delete things. You will add things. It gets maintained just like your main application.
Just to reiterate, most of what I'm describing here is NOT testing related. It's standard software architecture, design patterns, and organizational principles, so any books and resources you find on those can be used when creating software tests, frameworks, infrastructure, etc.
Would it be useful for someone to write about all of this from a testing/QA perspective? 100%, but it's not strictly necessary.