Checkpoint Omicron

13th of June 2016

To check the quality of a deliverable and its compliance with the requirements, the use of acceptance criteria is common practice. We check the delivered solution against a number of criteria, usually detailed as part of the master test plan. Compliance with the functional and non-functional requirements is verified; if the solution matches them, or if the differences can be tolerated after negotiation, the deliverable is validated.

But what should we do when the deliverable is an intermediate stage on the way to that solution? Envision the situation where technical designs must be validated. Here the notion of acceptance criteria becomes more elusive. Having given the subject some thought and scoured the internet for an approach that makes sense to me, I can only conclude that I am left to my own devices on this topic.

The starting point should be the solution architecture. This presents a frame in which to place the technical designs and gives them the proper context. Already, this provides a match against the non-functional requirements to some extent. The broad strokes, as we say. But the technical designs should go into more detail on which requirements they cover, and how they tackle those requirements. In some cases, they could conceivably even cover functional requirements (such as off-the-shelf packages with, for example, standard CRM processes).

Since the decisions in the technical designs might have an impact on how components other than the ones described by the design should behave, it is imperative that these technical designs specify this expected behavior as well as the contracts by which they will communicate with these other components. The boundaries of the designs should be clearly defined and detailed.
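To make the idea of a boundary contract concrete, here is a minimal sketch of what such a contract could look like when written down as code. The names (`AuthenticationService`, `AuthResult`) are purely illustrative assumptions, not taken from any real design: the point is that a design can state both the interface and the expected behavior that other components may rely on.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

# Hypothetical result type: part of the contract, so callers know
# exactly what shape of answer to expect.
@dataclass
class AuthResult:
    success: bool
    failed_attempts: int

class AuthenticationService(ABC):
    """Boundary contract: other components may rely only on what is
    specified here, never on implementation details behind it."""

    @abstractmethod
    def authenticate(self, user: str, password: str) -> AuthResult:
        """Expected behavior: always returns a result; bad credentials
        are reported via the result object, never raised as errors."""
        ...
```

Any implementation that honors this contract can then be swapped in behind the boundary without the neighboring components noticing.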

Then there are best practices and design patterns (ideally referenced in the architecture design) on several levels:

  • Technological: These can be either general standards (such as the SOLID principles, or the OWASP standards for security) or standards based on the chosen technology (for example, Java coding standards).
  • Industry: There can be standards applicable to the industry in which the solution operates, such as Sarbanes-Oxley for financial reporting or HIPAA for the healthcare industry.
  • Company: Then there are policies determined at the enterprise level, such as an information security policy.

I did come across several articles online suggesting that technical user stories be formulated as “As <persona>, I want <what?>, so that <why?>”. This is sometimes difficult for technical issues, so it is suggested to veer away from the grammatical constraint and specify them as "Verify that <what> <when>". For example: "Verify that 2-phase questions are applied after three failed password entries" or "Verify that all web-based requests get through the service layers and receive a reply within 2 seconds". This could be an interesting way of dealing with these requirements, but my gut feeling tells me this might be more effort than the benefit that could be reaped from it. User stories are used to structure functional requirements, but since non-functional requirements are usually much more concrete, consisting of measurable topics, they need less structuring. The categories in which to group these technical user stories would still come from the ISO 25010 standard.
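One appeal of the "Verify that <what> <when>" form is that it translates almost directly into an executable check. The sketch below, assuming a hypothetical `handle_request` stand-in for the real service layer, shows the response-time example from above expressed that way:

```python
import time

# Hypothetical stand-in for the real service layer; in practice this
# would be a call through the actual request-handling stack.
def handle_request(payload: str) -> str:
    time.sleep(0.01)  # placeholder for real processing
    return f"reply:{payload}"

def verify_reply_within(budget_seconds: float, payload: str) -> bool:
    """Verify that a request gets through and receives a reply
    within the given time budget."""
    start = time.perf_counter()
    reply = handle_request(payload)
    elapsed = time.perf_counter() - start
    return reply.startswith("reply:") and elapsed < budget_seconds
```

Because the criterion is measurable, the verification needs no user-story scaffolding around it: the check itself is the requirement.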

Finally, any validation of deliverables should always happen in multiple iterations. After the first validation, a number of comments will present themselves, and these need to be addressed (or tolerated) before passing to the formal validation. It is difficult to say how many iterations will be needed, but two is the bare minimum.
