There are a couple of aspects worth mentioning here: developers' ability to write tests, and testers' ability to write test code. I've worked with many different ways of handling project QA in various teams over the years.

Back to basics - systems need proof that they do, and continue to do, what they are supposed to. That is testing. It can take many forms for different purposes. There are unit, component and integration tests for the developers to prove to themselves and others that they've correctly created the functionality they were asked for. There are regression and performance tests to prove that the new development didn't break something already there, and that the system is still running optimally (or tolerably, to be more honest). There are smoke tests designed to run in different environments (including production) to give a quick estimate of the health of a system. The list goes on.
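To make the first of those concrete: a unit test is a small, automated proof that one piece of functionality behaves as specified. A minimal sketch in Python - the business rule, function name and figures here are all invented for illustration:

```python
# Hypothetical business rule: order totals over 100 get a 10% discount.
def discounted_total(total: float) -> float:
    """Return the payable amount after any discount."""
    return total * 0.9 if total > 100 else total

# Unit tests prove the rule to the developer (and to anyone reading later).
def test_no_discount_at_or_below_threshold():
    assert discounted_total(100.0) == 100.0

def test_discount_above_threshold():
    assert discounted_total(200.0) == 180.0

if __name__ == "__main__":
    test_no_discount_at_or_below_threshold()
    test_discount_above_threshold()
    print("all tests passed")
```

The point is not the trivial arithmetic but the artefact: an executable statement of what "correct" means, which can be re-run on every change.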

Having a good, separate QA/test team can be extremely useful: a team of people skilled at finding all the things the developers missed. That takes skill. However, a poor-quality QA team is worse than useless - at best they slow down the route to live for minimal benefit, at worst they give confidence in a system that shouldn't exist.

In my view, a QA team should be as skilled at interpreting specifications and business requirements as developers are. Logically, they should be generating their test cases directly from the same specifications the developers are using to develop their code. They may ask for refinement of the specifications for particular scenarios in exactly the same way as developers. The end result for a particular business feature should be a test suite created independently of the coding the developers are simultaneously doing. In the ideal case, the teams then run this new test suite against the new code, it passes, and everyone goes home early.

In reality, the results of this test run are the start of a conversation: the tests are incorrect, the production code is incorrect, or both are. This can be due to simple coding errors (although the developers should have caught those before QA) or to a difference in interpretation of the business requirements. The latter is the important case - it highlights where the business requirements were ambiguous and open to interpretation. Finding these cases is a win, and part of the reason QA exists in the first place.

I’ve worked in environments where QA would ask the development teams for example scenarios and expected results for their tests. This is just wrong. It reduces the QA phase to a box-ticking exercise, and wastes much of the value they can provide. QA should be breaking down the requirements and generating inputs and expectations themselves - otherwise, what exactly are they really testing?

I’ve also worked in environments where QA testing is largely manual. This too is wrong, in my view. For any system of reasonable size, a full manual run of the regression test suite takes too long. Suddenly terms like ‘risk-based testing’ start being used - running only the subsets of tests that people hope are most likely to catch issues. Automating the tests instead means that QA can spend their time coming up with test scenarios and provide continuing creative benefit to their organisation, rather than being grunts spending their days going through a scripted sequence of mouse-clicks.
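One shape this automation can take is QA expressing scenarios as data while a simple harness runs them on every build, so a full regression pass costs nothing. A sketch of that idea - the tax rule, function and figures are hypothetical:

```python
# Hypothetical function under test: VAT owed on a net amount.
def vat_due(net: float, rate: float) -> float:
    return round(net * rate, 2)

# Each tuple is one QA-authored scenario: (description, net, rate, expected).
# QA's creative work lives in this table; the harness below is mechanical.
SCENARIOS = [
    ("standard rate", 100.00, 0.20, 20.00),
    ("reduced rate", 100.00, 0.05, 5.00),
    ("zero rated", 100.00, 0.00, 0.00),
]

def run_regression_suite():
    """Run every scenario and collect failures rather than stopping early."""
    failures = []
    for name, net, rate, expected in SCENARIOS:
        actual = vat_due(net, rate)
        if actual != expected:
            failures.append(f"{name}: expected {expected}, got {actual}")
    return failures

if __name__ == "__main__":
    failures = run_regression_suite()
    assert not failures, failures
    print(f"{len(SCENARIOS)} scenarios passed")
```

Adding a new regression case is then a one-line change to the scenario table, not another scripted sequence of mouse-clicks.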

That latter behaviour has, I think, led to the perception that testers are less valuable and less skilled than developers. It’s a false and dangerous view. Hiring people who are perceived to be good enough only for testing, because they’re not good enough for development work, is a bad idea. Testers need the skills to code well in their testing platform; otherwise the test base ends up a horrific, flaky, unmaintainable mess. We’ve all seen it happen - it’s not good.

If necessary, pair developers with testers to transfer the appropriate coding skills and engineering practices. I’ve done that on a few projects and it has worked well. Testers still come up with the scenarios that need testing, and the developers teach them how to code those up cleanly and maintainably. Over time, the testers pick up the skills to do it themselves, and in passing the developers gain more of an appreciation of what QA generally look for when testing. Everybody wins.
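A small example of the kind of habit a developer might pass on in such pairing: keep the test's intent readable by pushing setup detail into named helpers, so the test reads like the scenario it encodes. Everything here is invented for illustration:

```python
# Test helper: build an order total from (item name, price) pairs.
# Hiding the construction detail here keeps each test focused on intent.
def make_order(items):
    return sum(price for _, price in items)

# The test now reads as the scenario a tester would describe in words.
def test_order_total_sums_item_prices():
    total = make_order([("widget", 2.50), ("gadget", 7.50)])
    assert total == 10.00

if __name__ == "__main__":
    test_order_total_sums_item_prices()
    print("ok")
```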

Shared at https://www.linkedin.com/pulse/testing-skills-donal-stewart