Definition of Done for Accessibility Tasks.
If you have ever worked on an agile team, you have probably come across the term “Definition of Done”.
The DoD, as it is known, is the checklist which standardises the requirements that must be met before work is considered finished. You can read more about the Definition of Done on scrumalliance.org.
A DoD for a11y.
I’m currently building a small virtual team which will focus on accessibility-related tasks. The tasks are pretty wide-ranging: conducting design reviews, resolving bugs, or helping to prototype a complex UI widget. That sort of thing.
Part of building this team has been designing the process, and today I have been thinking about what makes a great DoD for accessibility tasks.
A prototype DoD for a11y.
This is my prototype DoD so far:
- Peer Review
- Testing
- Documentation
- Labels & Component Check
1: Peer Review
I think all good DoDs should start with a “peer review”: that is, a code review, or the task having been paired on. The purpose of a peer review or pairing is to introduce a second set of eyes to the code. This helps to reduce future bugs, and also allows for knowledge transfer.
2: Testing
The BBC Accessibility team has a recommended test matrix. In an ideal world, we would suggest that all tickets are tested against the entire matrix. However, as the team working on these tickets is doing so on a voluntary basis, I’m not convinced testing everything is going to be productive.
The aim of the testing is to provide confidence; knowing how much confidence is required is a complex topic… So my ideal accessibility DoD includes testing, but I am not sure how much yet. If you have a suggestion, or a model for this, please let me know in the comments.
3: Documentation
This will not apply to all types of tickets, but I think documentation is a requirement for many of them. In short, if no documentation is being produced, there needs to be a good reason why.
Like testing, documentation is a bit of a slippery slope. You can easily do too little or too much. At this stage I am thinking that each ticket type should have the following documentation requirements before it is considered “done”:
- Bugs – a description of the cause of the bug, and a description of the fix and why it is believed to work, possibly with a label added for later stats purposes. This documentation should be a comment on the JIRA ticket.
- Reviews – a review (design or code) should have at least a comment describing the issues which were discussed and, if possible, any recommended actions (e.g., notes on tab order or ARIA roles).
- UI Support Requests – if a ticket relates to advice on how to approach building some kind of UI pattern, then the documentation requirement is an overview of the methods considered and the reasoning behind which approach was chosen (see the sketch after this list for the sort of thing I mean).
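To make that concrete, here’s the sort of approach such a write-up might describe. This is a purely illustrative sketch of a disclosure (“show more”) widget, not an actual BBC pattern; the element IDs and function name are invented for the example:

```typescript
// Illustrative only: one approach to a "show more" disclosure widget.
// A native <button> is used so keyboard and focus handling come for free.
// The element IDs here are made up for the example.
function initDisclosure(buttonId: string, panelId: string): void {
  const button = document.getElementById(buttonId) as HTMLButtonElement | null;
  const panel = document.getElementById(panelId);
  if (!button || !panel) return;

  // aria-expanded tells assistive technology whether the panel is open;
  // aria-controls links the button to the panel it operates.
  button.setAttribute("aria-expanded", "false");
  button.setAttribute("aria-controls", panelId);
  panel.hidden = true;

  button.addEventListener("click", () => {
    const expanded = button.getAttribute("aria-expanded") === "true";
    button.setAttribute("aria-expanded", String(!expanded));
    panel.hidden = expanded;
  });
}

initDisclosure("more-info-button", "more-info-panel");
```

The documentation on the ticket would then capture the reasoning: for example, why a native button (with its free keyboard and focus behaviour) was chosen over a click handler on a div.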
4: Labels & Component Check
One of the outcomes required from the process is a better understanding of the type and quantity of accessibility tasks coming through the team. So an explicit step in the DoD to check that labels and component attributes have been set correctly is a cheap investment to aid future statistical analysis.
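Below is a rough sketch of the kind of analysis that correct labels make possible. Everything in it is an assumption for illustration: the JIRA base URL, the “a11y” label convention, and the token auth are placeholders, not part of the process described above.

```typescript
// Hypothetical sketch: counting labelled tickets via JIRA's REST search API.
// The base URL, label name, and auth token are placeholders.
const JIRA_BASE = "https://jira.example.org";
const JQL = encodeURIComponent('labels = "a11y" AND status = Done');

async function countDoneA11yTickets(): Promise<number> {
  const response = await fetch(
    `${JIRA_BASE}/rest/api/2/search?jql=${JQL}&maxResults=0`,
    { headers: { Authorization: "Bearer <api-token>" } }
  );
  if (!response.ok) throw new Error(`JIRA returned ${response.status}`);
  // The search response reports a "total" count even when maxResults is 0.
  const body = (await response.json()) as { total: number };
  return body.total;
}

countDoneA11yTickets().then((n) => console.log(`${n} a11y tickets done`));
```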
What am I missing?
That’s some of my thinking so far; it’s early days and the proposal above is very rough. What would you suggest? Do you have an a11y element in your DoD? Let me know in the comments, or via Twitter (@jamieknight) or email.