Oct 2013
Testing is critical to success!
Long couple of nights, but then a relatively early finish last night by the Red Sox. Hopefully they will wrap it up with a victory in Game 6 on Wednesday. It is also the Halloween season. We vividly remember the days when our boys were little. At the last minute, they would come up with ideas for costumes that simply couldn't be bought in any store. My wife used to work her magic to make them happen. The happiness on their faces for the few hours they wore those custom costumes was priceless! It is a lot like the customization we are asked to do at work on a regular basis. No, I am not saying that our collaborators are like children!
At the entrance to Clapp Library you will see the “Halloween Desk” inviting you to check out one of the thrilling books. I saw them setting this up yesterday evening. The work we do every day is “thrilling” in some sense. The excitement as well as the anxiety associated with any project rollout is amazing. The thrill comes from the fact that we have collaborated and contributed to something that is typically exciting. But the anxiety is real: “How well is this going to go?” If you have not heard about a couple of recent technical debacles, such as the rollout of the sign-up site for the Affordable Care Act, you must be living on some other planet. You can view the Saturday Night Live version of this here. In some sense, all of us in this business worry about issues like the performance of the Obamacare website, just on a much smaller scale. This is why comprehensive testing is critical. You have heard me say this several times before: it is impossible to predict all the variations on the theme up front, even with comprehensive testing, but that does not mean you shouldn't test. In other words, no matter how comprehensive your testing, you will always encounter issues. You want to make sure that these issues are the outliers rather than the norm, and that you have a plan and strategy for handling them.
Whether it is a software project or a hardware implementation, we go through a process of justification based on needs. Then the implementation begins, and we make it abundantly clear that we need to keep the scope of the project contained (which never happens) and that testing is key. In almost all cases, these two key components of the project are overlooked, partly because they are the most boring aspects of an exciting project. It is up to the project lead and the leadership team to insist on both. If you have been in this field long enough, you will have heard how impossible it is to test certain things. This is where deep discussions to understand why, and offering alternatives, come into play.
For example, network hardware implementations are notorious for being hard to test when key components require reconfiguration or replacement hardware. Without going into details, imagine a cascading structure in which individual devices connect either to network switches through wired connections or to wireless access points. Traffic from these switches and access points is then consolidated into what are called distribution switches, based on proximity, and traffic from the distribution switches is routed to core switches. Obviously, any change to a core switch will generally affect everything, a change to a distribution switch will affect everything connected to it, and so on. And it is simply not possible to lab-test this environment when a distribution or core switch needs to be replaced. This is where simulating the environment as closely as possible, spending a lot of time thinking through all eventualities, and being ready when a problem strikes are the best we can do.
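Since this cascading structure can be hard to picture, here is a minimal sketch, in Python with entirely made-up device names, that models the topology and estimates the "blast radius" of replacing a given switch. It is only an illustration of the reasoning above, not a tool we actually use.

```python
# Sketch of the cascading topology described above: access switches and
# wireless access points feed distribution switches, which feed core
# switches. Device names are hypothetical.
from collections import defaultdict

# uplinks: each device points to the device its traffic is consolidated into
uplinks = {
    "access-sw-1": "dist-sw-A",
    "access-sw-2": "dist-sw-A",
    "wap-lib-3":   "dist-sw-B",
    "access-sw-4": "dist-sw-B",
    "dist-sw-A":   "core-1",
    "dist-sw-B":   "core-1",
}

def blast_radius(device: str) -> set[str]:
    """Return every device downstream of `device`, i.e. affected by its replacement."""
    downlinks = defaultdict(set)
    for child, parent in uplinks.items():
        downlinks[parent].add(child)

    affected, stack = set(), [device]
    while stack:
        for child in downlinks[stack.pop()]:
            if child not in affected:
                affected.add(child)
                stack.append(child)
    return affected

if __name__ == "__main__":
    # Replacing a distribution switch affects only the devices hanging off it;
    # replacing a core switch affects everything below it.
    print("dist-sw-A:", sorted(blast_radius("dist-sw-A")))
    print("core-1:   ", sorted(blast_radius("core-1")))
```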
On the other hand, when we develop software, it is different. We need the office for which we are developing it to understand that, as the subject matter experts, they need to have a detailed test plan and devote enough staff time to testing as we implement the system. Systems will simply fail if we cannot get this buy-in from our collaborators. Often, being a good technologist is equated with also being a good subject matter expert. While this may be true to some extent (it depends a lot on the technologist), the assumption is dangerous, and each side should understand its pitfalls.
A technologist implements a system as defined by the functional office. During the course of the implementation, they need to develop a good understanding of the quirks of the subject matter, but by no means can they be expected to be subject matter experts in every area in which they collaborate. We are developing a benefits portal for HR, and as we moved along we discovered many variations in the benefits depending on the employee and other factors. These were picked up by the HR staff, because they committed a tremendous amount of time and resources to testing every aspect of the software. Without this testing, we would have a system that potentially shows wrong information. Worse, users would lose faith in the system, and that faith is very hard to regain. A small sketch of the kind of scenario-driven test table this involves follows below.
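To make that concrete, here is a small sketch of what such a test table might look like. Every detail in it, including the premium rules, the numbers, and the function name, is invented for illustration; the point is that the scenarios and expected values come from HR as the subject matter experts, and IT only wires them into a harness.

```python
# Hypothetical benefits calculation plus an SME-supplied scenario table.
def monthly_premium(salary: float, plan: str, family: bool) -> float:
    """Invented premium rules with tiers that vary by plan, coverage, and salary."""
    base = {"basic": 120.0, "plus": 180.0}[plan]
    if family:
        base *= 2.5
    if salary < 50_000:          # subsidized tier -- exactly the kind of
        base *= 0.8              # variation only the subject matter experts know
    return round(base, 2)

# Each row is a scenario HR would hand us: inputs plus the expected result.
SCENARIOS = [
    # (salary, plan,    family, expected)
    (45_000, "basic", False,  96.00),
    (45_000, "plus",  True,  360.00),
    (90_000, "basic", True,  300.00),
    (90_000, "plus",  False, 180.00),
]

if __name__ == "__main__":
    for salary, plan, family, expected in SCENARIOS:
        actual = monthly_premium(salary, plan, family)
        status = "OK  " if actual == expected else "FAIL"
        print(f"{status} salary={salary} plan={plan} family={family} -> {actual} (expected {expected})")
```

The design point is simply that the table of scenarios, not the code, is where the subject matter expertise lives, which is why SME time for testing is non-negotiable.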
So, obvious as it is, let us remember how critical it is to test all technologies as thoroughly as we can. Use the Affordable Care Act fiasco as a prime example when you talk to your collaborators; they will be able to relate to it right away. A well-tested system, even if it takes a bit longer than you wish, will be a much better system in the end!
Go Sox!