When it comes to designing and developing successful products, quality plays a pivotal part in the process. Effective testing ensures the product achieves the ambitions that stakeholders desire.
Your approach to development often determines how testing is conducted, whether phased, concurrent, or otherwise, but the role of the tester is clear: to test every possible scenario and outcome to ensure quality is embedded in the product.
“Quality is never an accident; it is always the result of intelligent effort.” – John Ruskin.
But what does this mean in practice?
Let me set the scene for you: rain is sheeting down like wet bricks, I’m soaked, slipping about and sinking with each step into thick mud. In one hand, a bell-shaped sensor. In the other, a mobile device running an app. Through thick gloves, I’m trying to plug the sensor into the phone while keeping it out of the wind and rain.
I manage to jam the cable into the socket! I try to tap the app’s connect button with my off-hand. It’s a struggle as the button is tiny and my thumb is slipping all over the wet screen. The phone wriggles and slides like an eel in my hand. I can’t grip it and press the button.
I balance the sensor precariously on my boot, trying to keep the phone out of the rain, and jab at the buttons. ‘No connection’ appears on the screen. I unplug it, plug it back in. Another stab at the button. Nothing. ‘No connection’, the device smugly returns.
The sensor in one hand, phone in the other, I stomp back across the field and into the warm, mercifully dry office.
I sit beside my team and report back: ‘The button is too small. The colours have poor contrast and are difficult to read outside. We need to explain what “No connection” means and give the user support to get it working.’
Testing, literally, in a field
That’s what testing really is. Some might say standing in a thunderstorm in a muddy field is bonkers. I call it acceptance testing. Testers have to embody the customer, (literally) in the field. By testing in the customer’s environment we expose problems with the software that you just don’t see outside of it. It’s part of our QA manifesto to meet and exceed customer requirements. If the connect button is only 5 pixels wide and too small for me, it’s too small for our customer. If the button is grey on a slightly less grey background, it’s hard to see, so we must make the UI clearer.
By testing the product’s performance in the field, we discover how fragile the connection process is. We know to add more resilience: partial connections? Loose connections? Unplugged mid-connect? Unplugged and re-plugged before the app is ready? All these scenarios should be tried under the conditions the user will operate in.
We learn we need to give the user feedback: what if the device is mid-connection and, because it was unplugged too early, a ‘retry’ timer resets? How can we inform the user of this? Can we handle the behaviour without endless alerts and notices?
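Scenarios like these are easy to pin down as automated checks once they have been seen in the field. The sketch below is purely illustrative, not the real app’s code: a hypothetical connection state machine (the `ConnectionManager` class, its method names and messages are all assumptions) that models unplug-mid-connect, a retry budget, and the kind of status messages that replace a bare ‘No connection’.

```python
# Hypothetical sketch of a sensor-connection state machine, used to
# exercise the field-testing scenarios above. All names are illustrative.

class ConnectionManager:
    """Tracks the sensor link and a retry budget for flaky connections."""

    MAX_RETRIES = 3

    def __init__(self):
        self.state = "disconnected"   # disconnected | connecting | connected
        self.retries_left = self.MAX_RETRIES

    def plug_in(self):
        # A fresh physical connection restarts the handshake.
        self.state = "connecting"

    def handshake_complete(self):
        if self.state == "connecting":
            self.state = "connected"
            self.retries_left = self.MAX_RETRIES  # reset budget on success

    def unplug(self):
        # Unplugging mid-connect burns a retry; the UI should say why.
        if self.state == "connecting":
            self.retries_left -= 1
        self.state = "disconnected"

    def status_message(self):
        # Feedback the user actually needs, not just 'No connection'.
        if self.state == "connected":
            return "Connected"
        if self.state == "connecting":
            return "Connecting: keep the cable plugged in"
        if self.retries_left <= 0:
            return "No connection: check the cable and restart the app"
        return "No connection: plug the sensor in to retry"


# Replaying the muddy-field scenarios as assertions:
mgr = ConnectionManager()
mgr.plug_in()
mgr.unplug()                      # unplugged mid-connect
assert mgr.status_message().startswith("No connection")
mgr.plug_in()
mgr.handshake_complete()          # clean connect
assert mgr.status_message() == "Connected"
```

Each question raised in the field ("unplugged mid-connect?", "how do we tell the user?") becomes one explicit transition or message in the model, which is exactly the point of carrying field observations back to the desk.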
What we learn by putting ourselves in the field improves the usability, reliability, resilience and quality of our products, and lets us ask that all-important question: does this app achieve the ambition it was created for?
Testing is not about writing test cases in a warm, dry office. It’s about getting your feet wet for the customer.
If you’d like to find out more about our processes and delivery methods at Dootrix, you can always drop in to see us on the Farm or at the Mill. Let us know! firstname.lastname@example.org