At the beginning of my career I was assigned to test part of a very complex automotive system. The project was behind schedule and the fixed release date was approaching fast. In the automotive industry release dates are fixed and patching is very expensive, since it requires bringing the car to a workshop. The system under development was much more complex than those already in production; unfortunately, it was in terrible shape. It crashed regularly, features were missing or not working, and there was a constant stream of redesigns and requirement updates.
My approach was to divide the system by its different GUI views. I listed these in an Excel spreadsheet and then added the subparts of each view. By reading the various specifications and talking to the system owners I came up with plenty of things to test and added them under the items. I reported status by color-marking the items based on my judgment.
I reported issues with the system or the specifications in a bug-tracking tool. The severity of each issue was discussed with the system owners and the project manager. If needed, I updated the colors of my Excel items to reflect any new information about severity. I also added references to the bug reports next to the colored items. When issues were retested OK I crossed the items out but kept them in the spreadsheet for regression testing. Both the system owners and the project manager were very satisfied with my work.
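If you are curious what that spreadsheet boiled down to, here is a rough, hypothetical sketch in Python; the names (View, TestItem, Status) and the example data are mine, not anything from the actual project. The idea is simply views broken into subparts, each holding test ideas whose status is tracked by color and linked to bug reports.

```python
# A minimal sketch (assumed structure, not the original spreadsheet):
# GUI views -> subparts -> test ideas, each with a color-coded status
# and optional references into the bug tracker.
from dataclasses import dataclass, field
from enum import Enum


class Status(Enum):
    UNTESTED = "white"
    OK = "green"
    ISSUE = "red"
    RETESTED_OK = "crossed out"   # kept in the list for regression testing


@dataclass
class TestItem:
    idea: str                     # something to test, gathered from specs or system owners
    status: Status = Status.UNTESTED
    bug_refs: list[str] = field(default_factory=list)   # references to bug reports


@dataclass
class View:
    name: str
    subparts: dict[str, list[TestItem]] = field(default_factory=dict)


# Hypothetical example: one GUI view with a subpart and two test ideas.
climate = View("Climate control")
climate.subparts["Temperature panel"] = [
    TestItem("Setting temperature below minimum clamps to minimum"),
    TestItem("Display survives rapid repeated taps", Status.ISSUE, ["BUG-1234"]),
]
```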
Eager to learn more about testing, I signed up for a course. At that point an internationally known certification like ISTQB Foundation sounded like a good idea. It wasn't... at least not for me.
The course was packed with text-heavy PowerPoint slides. There was almost no discussion of the concepts and no actual practice. Since the goal of the training was to pass an exam, there was a big focus on memorizing answers to questions. When I challenged some of the "facts" in the training material, the teacher told me to just remember it and take it with a grain of salt.
Unfortunately, as a rookie in software testing I did not bring enough salt. There was very little discussion about which situations would call for which approach; context was not considered at all. When I got back to my assignment I tried to apply some of what we had learned in the training.
I started writing detailed test case specifications and mapping them to requirements. I also spent time thinking about how to measure requirement coverage. After a while I realized that even if I did test all the requirements, they would have changed by the time I was done. Updating the test specifications would eat up time I could have spent actually testing, and the requirements were nowhere near a full description of the system. So I stopped and went back to my former way of testing and reporting. All good? Unfortunately not...
Although the project manager, system owners and suppliers thought I was doing a great job, and there was really good progress, I couldn't shake the feeling that I was cheating. I was not doing it the "right" way... How could they be happy with my work? Was it because they didn't know anything about testing? For a couple of months I got nothing but praise for my excellent work, yet I still felt bad.

Then I attended a seminar called "Beyond Test Cases" by James Bach. He made me realize that there is no single right way of doing testing, and that testing is not about presenting a number of passed or failed test cases, nor about producing requirement coverage metrics. It is about questioning the product to reveal information that is useful to the stakeholders, and the way you approach that challenging task is different in every situation. When the assignment ended and the customer did a performance review of me, I got 4.7/5 with the justification "I don't want to give you all 5's because you might get cocky". That was great, but the best part was that I finally felt that I had done a great job and that I deserved the praise.
Thank you, James, for showing me a better path!