Managing Non-Functional Requirements in SAFe
Managing non-functional requirements (NFRs) in software development has always been a challenge. These “system capabilities”, such as how fast a page loads, how many concurrent users the system can sustain, or how vulnerable we are to denial-of-service attacks, have traditionally been ascribed to quadrant four of Brian Marick’s agile testing quadrants. That is, they are tests that are technology-facing and that critique the product. That said, it has never been clear *why* this is so, as this information can be critical for the business to clearly understand.
In the Scaled Agile Framework (SAFe), NFRs are represented as a symbol bolted to the bottom of the various backlogs in the system, indicating that they apply to all of the other stories in the backlog. One of the challenges of managing them lies in at least one aspect of our testing strategies: when do we accept them if they represent a constant, persistent constraint on all the rest of the requirements?
This paper advances an approach to handling NFRs in SAFe which promotes the concept that NFRs are more valuable when treated as first-class objects in our business-facing testing and dialogs. It suggests that the business would be highly interested in knowing, for example, how many concurrent users the system can sustain online. If you’re not sure about this, just ask the business people around the healthcare.gov project! One outcome of this approach is that a process emerges that reduces our need to treat NFRs as a special class of requirements at all.
If we expose NFRs to the business, in a language and manner that creates a shared understanding of them, we can avoid surprises while solving a major challenge.
Please consider the following Gherkin example:
Feature: Online performance
  In order to ensure a positive customer experience while on our website
  I’d like acceptable performance and reliability
  So that the site visitor will not lose interest or valuable time

  Scenario: Maximum concurrent signed-in user page response times
    Given there are 1,000 people logged on
    When they navigate to random pages on the site
    Then no response should take longer than 4 seconds

  Scenario: Maximum concurrent signed-in user error responses
    Given there are 1,000 people logged on
    When they navigate to random pages on the site for 15 minutes
    Then all pages are viewed without any errors
These are pretty straightforward and easy-to-understand test scenarios. If they were managed like any other feature in the system, their creation, elaboration, and implementation would serve as a ‘forcing function’, yielding value in the form of shared understanding between the business and the development team. These directly executable specifications could also be automated so that they run against every build of the software. This fast feedback is very important to development flow: if we check in a change, perhaps a configuration parameter or a new library, that breaks any NFR, we know immediately what changed (and where to go look!). Something that is also very valuable (and often overlooked!) is that each build serves as a critical, ongoing baseline for comparison of performance and other system capabilities.
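To make the “directly executable” part concrete, here is a minimal sketch of step definitions for the first scenario, written for Cucumber-JVM (the Java flavor of Cucumber). Everything project-specific in it is an assumption for illustration only: the OnlinePerformanceSteps class, the staging BASE_URL, the page paths, and the simplistic one-request-per-signed-in-user load model. A real implementation would more likely delegate to a dedicated load tool such as JMeter or Gatling.

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.ThreadLocalRandom;

public class OnlinePerformanceSteps {

    private static final String BASE_URL = "https://staging.example.com"; // assumed test environment
    private final HttpClient http = HttpClient.newHttpClient();
    private int users;
    private long slowestMillis;

    @Given("^there are ([\\d,]+) people logged on$")
    public void thereArePeopleLoggedOn(String count) {
        // Record the target concurrency; the comma in "1,000" is stripped here.
        users = Integer.parseInt(count.replace(",", ""));
    }

    @When("^they navigate to random pages on the site$")
    public void theyNavigateToRandomPages() throws Exception {
        List<String> pages = List.of("/", "/plans", "/account", "/help"); // illustrative paths
        ExecutorService pool = Executors.newFixedThreadPool(users);
        List<Future<Long>> timings = new ArrayList<>();
        for (int i = 0; i < users; i++) {
            // Each simulated user requests one randomly chosen page.
            String path = pages.get(ThreadLocalRandom.current().nextInt(pages.size()));
            Callable<Long> sample = () -> timeRequestMillis(path);
            timings.add(pool.submit(sample));
        }
        pool.shutdown();
        for (Future<Long> timing : timings) {
            slowestMillis = Math.max(slowestMillis, timing.get());
        }
    }

    @Then("^no response should take longer than (\\d+) seconds$")
    public void noResponseShouldTakeLongerThan(int seconds) {
        if (slowestMillis > seconds * 1000L) {
            throw new AssertionError("Slowest page response was " + slowestMillis + " ms");
        }
    }

    private long timeRequestMillis(String path) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(BASE_URL + path)).GET().build();
        long start = System.nanoTime();
        http.send(request, HttpResponse.BodyHandlers.discarding());
        return (System.nanoTime() - start) / 1_000_000;
    }
}

Even a sketch at this level is enough to let the feature file fail the build when the 4-second threshold is broken, which is exactly the fast feedback described above.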
Any NFR expressed in this fashion becomes a form of negotiation. It makes visible economic trade-offs that might not otherwise be well understood by the business. For example, if push came to shove, would there still be business value if, under sustained load, page responses sometimes stretched to 5 seconds?
Another benefit of writing the test first is that it increases the dialog about *how* we will implement the NFR scenario, which helps to ensure, by definition, that a “testable design” emerges.
This approach to requirements/test management is known as “Behavior-Driven Development” (BDD) and “Specification by Example”. The question of how and when to implement these stories in the flow sequence remains a challenge, and the remainder of this article addresses it directly, describing one solution in SAFe.
The recommendation is to implement the NFR as an executable requirement, using natural-language tools like Cucumber or SpecFlow (which support Gherkin) or Fit/FitNesse (which uses natural language and tables), as soon as it is accepted as an NFR in an iteration as part of the architectural flow. Create a Feature in the Program backlog that describes implementation of the actual NFR (load, capacity, security, etc.) and treat it like any other feature from that point on. Have the system team discuss, describe, and build the architectural runway to drive the construction of the systems that will support testing them. Use the stories as acceptance criteria against the architectural runway, if that is appropriate. If you do not implement the actual test itself right away (not recommended), at least wire it up to produce a “Pending” test result (not really recommended either, but I’ll describe that more in a moment). When the scenarios are running in your continuous integration (CI) environment, the story can be accepted. With regard to your CI, keep in mind that some of these tests, with large data sets or long up-time requirements, will take a while to complete, so it is very important to separate them from your fast-failing unit tests.
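On that last point, one way to keep these long-running scenarios out of the fast unit-test loop, sketched here for Cucumber-JVM with the JUnit runner, is to tag them and give them their own runner that CI schedules separately. The @nfr tag, the feature path, the glue package, and the NfrTestSuite name are all assumptions about project layout, not anything prescribed by SAFe or Cucumber.

import io.cucumber.junit.Cucumber;
import io.cucumber.junit.CucumberOptions;
import org.junit.runner.RunWith;

// Runs only the scenarios tagged @nfr, so the slow load and capacity checks
// can be scheduled on their own cadence, apart from the fast unit-test job.
@RunWith(Cucumber.class)
@CucumberOptions(
        features = "classpath:features/nfr",   // assumed location of the NFR .feature files
        glue = "com.example.nfr.steps",        // assumed package holding the step definitions
        tags = "@nfr"
)
public class NfrTestSuite {
}

As for the “Pending” option, a step definition whose body simply throws io.cucumber.java.PendingException will report the scenario as pending rather than silently passing, so the unknown status stays visible in every CI run.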
The next important step is to make these tests visible, both to the business and to the development team. One way to make them visible to the business, along with your other customer-facing acceptance tests, is to use a tool like Relish, which can publish them along with markup and images as well as navigation and search.
Another recommendation in this approach is to build a “quality” dashboard using the testing quadrants described earlier. That is, each quadrant would report a pass/fail/pending status that could be used for governance and management of the system. When all quadrants are green, you can release. You can get quite creative with this approach and use external data sources, such as Sonar and Cast (coverage and code-quality tools, respectively), and even integrate quadrant-three exploratory testing results, for example. There is work to be done in this area; hopefully someone will write a Jenkins plugin or add this to a process management tool.
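There is no off-the-shelf dashboard for this yet, but the aggregation itself is simple. The following purely illustrative Java sketch, with a hard-coded Quadrant enum and sample results standing in for whatever your CI, Sonar, Cast, and exploratory-testing sources would actually feed in, shows the roll-up idea:

import java.util.EnumMap;
import java.util.List;
import java.util.Map;

public class QuadrantDashboard {

    enum Quadrant { Q1_UNIT, Q2_FUNCTIONAL, Q3_EXPLORATORY, Q4_NFR }
    enum Status { PASS, FAIL, PENDING }

    public static void main(String[] args) {
        // Sample data only; in practice these lists would be built from tool exports.
        Map<Quadrant, List<Status>> results = new EnumMap<>(Quadrant.class);
        results.put(Quadrant.Q1_UNIT,        List.of(Status.PASS, Status.PASS));
        results.put(Quadrant.Q2_FUNCTIONAL,  List.of(Status.PASS));
        results.put(Quadrant.Q3_EXPLORATORY, List.of(Status.PASS, Status.PENDING));
        results.put(Quadrant.Q4_NFR,         List.of(Status.PASS, Status.FAIL));

        boolean releasable = true;
        for (Map.Entry<Quadrant, List<Status>> entry : results.entrySet()) {
            // A quadrant is green only when every result in it passed.
            boolean green = entry.getValue().stream().allMatch(s -> s == Status.PASS);
            System.out.printf("%-15s %s%n", entry.getKey(), green ? "GREEN" : "NOT GREEN");
            releasable &= green;
        }
        System.out.println(releasable ? "All quadrants green: releasable" : "Hold the release");
    }
}

The point is only that each quadrant rolls up to a single green/not-green signal, and the release decision is the conjunction of all four.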
Using this approach you will always know what the status of your NFRs is and get the information you need in a timely fashion, while there is still time to react. This helps to eliminate surprises and removes the need for a major (unknown-cost) effort at the end of your development cycle. In the case above, even if these tests had been marked “Pending”, you’d know that the status of these NFRs was unknown, which would increase trust and share the responsibility across the entire value stream.
Learn more about the Scaled Agile Framework: download SAFe Foundations.
Learn more about our Scaled Agile Framework Training and Certification Classes.