Thoughts on the roles of Testing Engineers and Software Engineers in Agile development?



    • I am a software developer and have worked in Agile development for some years. In theory, team members should support each other, but the reality is different: I constantly struggle with the Testing Engineers over the boundary between our roles.
      • Testing Engineers usually push developers to write unit tests and insist on 80%-90% test coverage. Why 80%-90%? And why don't they write the unit tests themselves?
      • Should they provide code-level solutions to improve testability or quality? If so, how?
      • They seem happy to complain about the bugs they find, yet if the software has no bugs, they seem unhappy.
    • Please feel free to share your experience.


  • Why don't test engineers write unit tests?

    There are several reasons developers are usually tasked with creating unit tests:

    • Developers are intimately familiar with the code they are writing. They understand what that code is intended to do and more importantly why that specific implementation was chosen. Ideally they are writing the unit tests while or before the code is written.
    • Test specialists (whether they are called testers, QAs, test engineers or something else) are broadly familiar with the application as a whole. They may not have as much experience with code as developers, nor are they necessarily aware of the underlying logic that's needed.
    • While testers can write unit tests, it is usually a less effective use of their time, since the ideal time to create unit tests is before or during application development - a time when testers often aren't involved with whatever part of the system is being built.
    • The proper time to run unit tests is whenever and wherever the application is compiled. Testers often don't have the authority to check in code to the main application, and they often don't have the ability to add to tests being run in whatever continuous integration system is being used.
    • There are usually more developers than testers, so tester time is scarce: functional testing is often already a bottleneck, and adding unit-test duties to testers would only make it worse.
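
    To illustrate the kind of unit test developers are best placed to write, here is a minimal sketch using Python's built-in `unittest`. The function under test and its rounding/validation rules are illustrative assumptions - the point is that the developer who chose those rules can test them cheaply, at the time the code is written.

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical business-logic function under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class ApplyDiscountTest(unittest.TestCase):
    # The developer knows the rounding and validation rules,
    # so these cases are cheap to write alongside the code.
    def test_basic_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(10.0, 150)

# Run the suite programmatically so it can be embedded anywhere.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ApplyDiscountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

    A tester coming to this code later would have to reverse-engineer the validation and rounding decisions before writing the same tests.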

    Should testers provide code to improve testability and quality?

    Ideally, testers should provide suggestions to improve testability and quality. Those suggestions may not be the best way to achieve either, because, as noted above, testers are less familiar with the application code and architecture than developers.

    Testers can certainly work with developers to:

    • Suggest tests a developer may not have considered
    • Request specific information be surfaced in the application to allow for easier integration and functional automated tests (for instance, if the application architecture dynamically generates HTML Ids, a tester can request developers use a different attribute to provide a unique reference to each element on a web page)
    • Use information on unit tests to effectively improve coverage by minimizing overlap (for instance, when business logic is thoroughly unit-tested, integration tests do not need to cover all possible business logic paths, but can focus on ensuring that the correct information is passed from module to module)
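
    The stable-attribute request above can be sketched concretely. The markup and attribute name (`data-testid`) here are illustrative assumptions, using only Python's standard-library HTML parser: the framework regenerates element ids on every build, so tests locate elements by a tester-requested stable attribute instead.

```python
from html.parser import HTMLParser

# Hypothetical markup: the framework generates unstable ids, so
# testers asked developers to add a stable data-testid attribute.
PAGE = """
<form>
  <input id="input-8f3a2c" data-testid="username" />
  <button id="btn-91d0e4" data-testid="submit">Log in</button>
</form>
"""

class TestIdFinder(HTMLParser):
    """Collects tag names keyed by their data-testid attribute."""
    def __init__(self):
        super().__init__()
        self.by_testid = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "data-testid" in attrs:
            self.by_testid[attrs["data-testid"]] = tag

finder = TestIdFinder()
finder.feed(PAGE)
# Automated tests select by the stable attribute, so they survive
# regenerated ids between builds.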

    Should testers be unhappy if they don't find bugs?

    Yes.

    All non-trivial software has bugs. Software is complex, and the number of paths through a piece of non-trivial software is, if not actually infinite, then so large that testing every possible path is impossible. Even with something very simple, like a basic text editor, users can add text and delete text they added as often as they please. It's feasible that, because of some internal limit, something goes wrong the 513th time text is deleted after the last save - but how many testers are going to find that?

    Every tester with any experience knows that if they haven't found any bugs, they've missed something. If they've missed something, they don't know how bad it is.

    Testing generally focuses on these areas (more or less in this order):

    • Does the software do what it should do?
    • Does the software prevent itself from doing what it should not do?
    • Can the software be made to not do something it should do?
    • Can the software be made to do something it should not do?
    • Can the software handle unexpected events with grace? (For example, if you pull the network connection mid-action, do you get a generic error message and no partial transactions, or do you get a stack dump and bad data?)
    • Can the software handle catastrophic events without generating bad data?
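
    The "graceful failure" categories above can be sketched with a toy example. The schema, the `transfer` function, and the simulated network failure are all illustrative assumptions, using Python's standard-library `sqlite3`: a failure mid-transaction should produce a generic message and no partial data.

```python
import sqlite3

class NetworkError(Exception):
    pass

def transfer(conn, debit_from, credit_to, amount, fail_midway=False):
    """Move funds atomically; roll back on any failure."""
    try:
        with conn:  # commits on success, rolls back on exception
            conn.execute(
                "UPDATE accounts SET balance = balance - ? WHERE name = ?",
                (amount, debit_from))
            if fail_midway:
                raise NetworkError("connection lost")
            conn.execute(
                "UPDATE accounts SET balance = balance + ? WHERE name = ?",
                (amount, credit_to))
    except NetworkError:
        # Generic error message, no stack dump, no partial transaction.
        return "transfer failed, please retry"
    return "ok"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 0)])
conn.commit()

# Simulate pulling the network connection mid-action.
msg = transfer(conn, "alice", "bob", 40, fail_midway=True)
balances = dict(conn.execute("SELECT name, balance FROM accounts"))
# Both balances should be unchanged after the simulated failure.
```

    A tester exercising this category is checking exactly the property asserted at the end: the failure left the data in a consistent state.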

    A happy tester has gone through to the last of these types of tests, is reasonably confident they have found most of the serious problems, and reasonably confident that none of the bugs they have missed will cause customers to lose valuable data.

    I'd add from my experience that testers don't complain about bugs unless they fear that bugs they have found, and which they believe pose a serious risk to the application, are not being fixed.

