Cloud QA Testing

Lady Gaga Killed The Cloud

Yes, she really did kill the Cloud. Amazon’s servers were so overwhelmed that some users were complaining that, even 10 hours later, they still had only some of their songs downloaded or, worse, could not even connect to Amazon’s servers.

So, aside from the marketing and sales aspects of this promotion – whether Amazon is doing this out of concern over Apple’s upcoming cloud service or to entice customers over to its Amazon Cloud Storage by offering 20GB as part of the promotion – one thing remains clear: Amazon is having technical issues supporting the very architecture that it is offering to its customers. The premise of the Cloud is to support scalability and to offer the ability to automatically scale up based on demand. With the Lady Gaga debacle and the problems last April that brought down major websites that were customers of Amazon’s EC2 Cloud Service, it brings into question the ability of companies such as Amazon to deliver the goods.

Yes, the Cloud is still a relatively new and emerging technology, and it has hit – and will continue to hit – bumps along the way to stability. How will this news affect your decisions on implementing Cloud technologies for your infrastructure? Will you wait to deploy to the Cloud? Is the idea of handing over the keys to your infrastructure still appealing?

I, for one, will probably take a more blended, hybrid approach for now: not putting all of our apples in the Cloud cart, but moving some of our non-critical infrastructure into the Cloud while continuing to use more traditional, non-cloud approaches for the more critical day-to-day operations. As the Cloud continues to mature, we’ll consider moving more services over, but only with a proven standard offering sitting in the wings as a backup, at least until a good amount of time has gone by and things are more proven.

Top 27 Things for QA Testing New Website or Software

Launching A New Website Or Application? Top 27 Things To Be Aware Of

This list of questions covers the types of testing that must be considered to ensure a quality product when launching a new website or application:

1. Design Validation – Statements regarding coverage of the feature design, including both specification and development documents. Will testing review the design? Is design an issue on this release? How much concern does testing have regarding the design?

2. Data Validation – What types of data will require validation? What parts of the feature will use what types of data? What are the data types that test cases will address? Etc.

3. API Testing – What level of API testing will be performed? If none, what is the justification for taking that approach?

4. Content Testing – Is your area/feature/product content based? What is the nature of the content? What strategies will be employed in your feature/area to address content related issues?

5. Low-Resource Testing – What resources does your feature use? Which are used most, and are most likely to cause problems? What tools/methods will be used in testing to cover low resource (memory, disk, etc.) issues?

6. Setup Testing – How is your feature affected by setup? What are the necessary requirements for a successful setup of your feature? What is the testing approach that will be employed to confirm valid setup of the feature?

7. Modes and Runtime Operations – What are the different run time modes the tool can be in? Are there views that can be turned off and on? Controls that toggle visibility states? Are there options a user can set which will affect the run of the tool? List here the different run time states and options the tool has available. It may be worthwhile to indicate here which ones demonstrate a need for more testing focus.

8. Interoperability – How will this product interact with other products? What level of knowledge does it need to have about other tools/software — “good neighbor”, software cognizant, software interaction, fundamental system changes? What methods will be used to verify these capabilities?

9. Integration Testing – Go through each area in the product and determine how it might interact with other aspects of the project. Start with the ones that are obviously connected, but try every area to some degree. There may be subtle connections you do not think about until you start using the features together. The test cases created with this approach may duplicate the modes and objects approaches, but there are some areas which do not fit in those categories and might be missed if you do not check each area.

10. Compatibility: Browsers – Is your feature a server based component that interacts properly with browsers? Is there a standard protocol that many browsers are expected to use? How many and which browsers are expected to use your feature? How will you approach testing browser compatibility? Is your server suited to handle ill-behaved browsers? Are there subtleties in the interpretation of standard protocols that might cause incompatibilities? Are there non-standard, but widely practiced use of your protocols that might cause incompatibilities?

11. Compatibility: Operating Systems – Is your feature a client based component that interacts with the operating system? Is there a standard protocol supported by many servers that your client speaks? How many different operating systems will your client need to support? How will you approach testing server compatibility? Is your client suited to handle ill-behaved or non-standard operating systems? Are there subtleties in the interpretation of standard protocols that might cause incompatibilities? Are there non-standard, but widely practiced, uses of protocols that might cause incompatibilities?

12. Beta Testing – What is the beta schedule? What is the distribution scale of the beta? What are the entry criteria for beta? How is testing planning to utilize the beta for feedback on this feature? What problems do you anticipate discovering in the beta? Who is coordinating the beta, and how?

13. Environment/System: General – Are there issues regarding the environment, system, or platform that should get special attention in the test plan? What are the run time modes and options in the environment that may cause differences in the feature? List the components of critical concern here. Are there platform or system specific compliance issues that must be maintained?

14. Configuration – Are there configuration issues regarding hardware and software in the environment that may need special attention in the test plan? Some of the classical issues are machine and BIOS types, printers, modems, video cards and drivers, special or popular TSRs, memory managers, networks, etc. List those types of configurations that will need special attention.

15. User Interface – List the items in the feature that explicitly require a user interface. Is the user interface designed such that a user will be able to use the feature satisfactorily? Which part of the user interface is most likely to have bugs? How will the interface testing be approached?

16. Performance and Capacity Testing – How fast and how much can the feature do? Does it do enough, fast enough? What testing methodology will be used to determine this information? What criteria will be used to indicate acceptable performance? If this is a modification of an existing product, what are the current metrics? What are the expected major bottlenecks and performance problem areas in this feature?

17. Scalability – Is the ability to scale and expand this feature a major requirement? What parts of the feature are most likely to have scalability problems? What approach will testing use to define the scalability issues in the feature?

18. Stress Testing – How does the feature do when pushed beyond its performance and capacity limits? How is its recovery? What is its breakpoint? What is the user experience when this occurs? What is the expected behavior when the client reaches stress levels? What testing methodology will be used to determine this information? What area is expected to have the most stress related problems?

19. Volume Testing – Volume testing differs from performance and stress testing in that it focuses on doing volumes of work in realistic environments, durations, and configurations. Run the software as an expected user will – with certain other components running, for so many hours, with data sets of a certain size, or with a certain expected number of repetitions.

20. International Issues – Confirm localized functionality, that strings are localized, and that code pages are mapped properly. Ensure the tool works properly on localized builds, and that international settings in the tool and environment do not break functionality. How are localization and internationalization being done on this project? List those parts of the feature that are most likely to be affected by localization. State the methodology used to verify international sufficiency and localization.

21. Robustness – How stable is the code base? Does it break easily? Are there memory leaks? Are there portions of code prone to crash, save failure, or data corruption? How good is the tool’s recovery when these problems occur? How is the user affected when the tool behaves incorrectly? What is the testing approach to find these problem areas? What is the overall robustness goal and criteria?

22. Error Testing – How does the tool handle error conditions? List the possible error conditions. What testing methodology will be used to provoke error conditions and determine proper behavior? What feedback mechanism is being given to the user, and is it sufficient? What criteria will be used to define sufficient error recovery?

23. Usability – What are the major usability issues on the feature? What is testing’s approach to discover more problems? What sorts of usability tests and studies have been performed, or will be performed? What is the usability goal and criteria for this feature?

24. Accessibility – Is the feature designed in compliance with accessibility guidelines? Would a user with special accessibility requirements still be able to utilize this feature? What are the criteria for acceptance on accessibility issues for this feature? What is the testing approach to discover problems and issues? Are there particular parts of the feature that are more problematic than others?

25. User Scenarios – What real world user activities are you going to try to mimic? What classes of users (e.g. secretaries, artists, writers, animators, construction workers, airline pilots, shoemakers, etc.) are expected to use this tool, and for which activities? How will you attempt to mimic these key scenarios? Are there special niche markets that your product is aimed at (intentionally or unintentionally) where mimicking real user scenarios is critical?

26. Boundaries & Limits – Are there particular boundaries and limits inherent in the feature or area that deserve special mention here? What is the testing methodology to discover problems handling these boundaries and limits?

27. Operational Issues – If the software is deployed in a data center, or as part of a customer’s operational facility, then testing must, at the very least, mimic the user scenario of performing basic operational tasks with the software.
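Several of the items above, such as data validation (item 2) and boundaries and limits (item 26), lend themselves to simple automated checks. The sketch below is a hypothetical boundary-value test: the `validate_username` function and its 3-to-20-character limit are invented for illustration, standing in for whatever field rules your own specification defines.

```python
# Hypothetical example: boundary & limits checks for a sign-up form field
# with an assumed 3..20 visible-character username limit. validate_username
# is a stand-in for the real application code under test.

def validate_username(name: str) -> bool:
    """Stand-in validator: accept 3 to 20 visible characters."""
    return 3 <= len(name.strip()) <= 20

# Classic boundary-value cases: just below, at, and just above each limit.
cases = [
    ("ab",     False),  # one below the minimum
    ("abc",    True),   # at the minimum
    ("a" * 20, True),   # at the maximum
    ("a" * 21, False),  # one above the maximum
    ("   ",    False),  # whitespace-only edge case
]

for value, expected in cases:
    result = validate_username(value)
    assert result == expected, f"{value!r}: expected {expected}, got {result}"
print("all boundary cases passed")
```

The same shape – a table of inputs paired with expected results, driven through the code under test – works for most data-validation and limits checks, and it makes it cheap to add the new edge cases that ad hoc testing turns up.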

Observations in Effective Software Development

Observations In Effective Software Development – Part 1

It seems like only yesterday that my involvement in software testing/quality assurance adventures began, but in reality it has been well over 20 years! It has been a slow, winding journey through many different types of software, including video colorization tools, entertainment CD-ROMs, video-on-demand applications, animated 3D shopping apps, chat rooms, educational CD-ROMs, disk utilities, web sites and visualization/prototyping tools.

The testing approach for each of these products has been challenging, to say the least. However, the QA methods are usually very similar, even though the look, feel, functionality and purpose of these products are strikingly different.

There are several items that can make the development process more painful than it needs to be. The first and simplest common issue that many software projects fail to address in the very early stages of the development process is the lack of precise, final product design/functional or technical specifications. Or, to consider the worst case scenario, the complete lack of any specification documents… period! Yes, it sounds absurd in today’s software development world to not have prepared any documented specs at all, but it does happen, and it happens more often than one would imagine.

In this unimaginable scenario of zero specs, the problems created down the line, especially for QA personnel, make for a far from comfortable situation. Whenever this occurs, QA will often perform ad hoc testing at first and hopefully find a few software defects. Having a seasoned team of QA folks on board is vital, as ad hoc testing requires experience and imagination to make reasonable progress in finding defects without any docs to refer to. Good luck to any QA team facing this! Usually, having good relations with the product design team and the programmers is essential, because many questions will be raised during early testing cycles, such as “Is this a feature or a bug?”, “Should navigation really be this slow?”, “Is this the correct graphic?” and so on.

In any case, attempting to apply quality assurance methods to a product with non-existent documentation is never a good approach to successful software development and should be avoided. It slows the process, sometimes to a standstill, and causes confusion, frustration and wasted resources.

Read Part 2 of this three-part article

Observations in Effective Software Development

Observations In Effective Software Development – Part 2

The concept has been summarized and all the pieces are in place… right down to the finest details that will make this product the best thing since sliced bread. The specification document is given final approval by the directors, managers, designers, etc. and is distributed among the various development groups that will be responsible for making the project come alive and function as expected. As the programmers and quality assurance people read the document and get a feel for what is going to be a large part of their work for at least the next 6 months, furrowed brows and puzzled expressions appear on the faces of these brave souls. What has caused this dark cloud looming over their collective heads? The functional specifications document is one of the worst things they’ve ever seen!

There are several undesirable specification scenarios that can be listed:

1. Long-winded explanations of functionality that lead nowhere. Sometimes less is more when writing details about how the product will function. When writing the specifications, the authors should take into account that the document is going to be read and used by many people in the development process. Make the details of the functionality readable by all by not needlessly going on and on about how and why an element behaves the way it does. The items should be linear and not read like a rambling novel or a letter home to mom and dad from summer camp. Clear, precise explanations are much better in the long run, and development personnel will suffer fewer headaches, less sleepiness and less inertia in getting the project moving in a solid, tangible direction.

2. Screen mockups do not match required functionality, or updates have not been distributed to all groups. This is very common in the beginning, middle and end of product development, and it usually affects the quality assurance group more than any other. While changes are being made by designers and programmers to make the product more usable, visual mockups and functionality details are often changed to the point of being unrecognizable when compared to the earlier concepts. This can be avoided somewhat by applying an agile process to the project, where changes are noted along the way and the document is updated accordingly in a timely fashion. When using an agile process, it is helpful to reference changes to the specifications by each individual line item (example: “Homepage Navigation”) in the agile document to ensure that everyone is on the same page. Screen mockups should also be updated in the agile documentation and/or within the functional specification document itself. Please! We all want to be included in all specification changes on an ongoing basis!

3. Specification items include personal notes on functional changes/opinions embedded within the lines. This may be a useful practice when the changes are understood within a small group of designers who all grasp the scope of their decisions, but the rest of development would benefit from not having to wade through all of that extra verbal complication. Notes should be kept out of the main functional specification document and tracked/referenced in a separate document or an individual spreadsheet tab to avoid confusion and to let the specs be comprehended easily by all involved groups.

Everyone involved in an effective software development project wants to do their very best work. Providing clear, linear functional specifications that can be absorbed and understood by all on an ongoing basis will help the development process go more smoothly in the long run. Even in the very best scenarios, there are always major challenges and issues to be addressed, so at least let’s start out on the right foot with some incredibly amazing specifications before the “you-know-what” hits the fan in the development cycle!

Read Part 3 or Part 1 of this three-part article

Observations in Effective Software Development

Observations In Effective Software Development – Part 3

This approach may sound like something straight out of a science fiction movie, and it very well may be what we can only dream about… however it would really be nice if this could be a reality when designing a software product. Since this has never been experienced by anyone that I know, including myself, I can only envision what this might entail.

Here are several “dreamlike” design and technical/functional specification scenarios that will help towards effective software development:

1. Precise technical specifications. The bottom line on machine operating systems (PC and Mac), processor speeds, minimum memory, supported browsers and browser versions (for web), programming tools and language (not changing this in mid-development), preferred database, networks, etc. upon which the new software will work optimally would be most welcome. This would be a great thing for everyone involved in the development process, except that it is very difficult to predict what level of these most important technical aspects an end user has in his/her computer setup. Even if these specs are decided at the beginning, testing the actual software on multiple platform combinations will reveal that system requirements do not always mesh well with what is going on out there in the real world. Oh, well.

2. Functional specs are clear as to what the product will do, plus screen mockups contain all final UI elements and graphics. Wow. Just reading that sentence was somewhat exciting. This scenario would take an awful lot of careful thought and planning, well beyond the normal linear design process. Understanding what an end user will like about the way the software works (how it looks, functions and feels) and its general overall user-friendliness is most important in the development and ultimate deployment of successful software. Unfortunately, many designers and programmers have no clue as to what real people are able to deal with in the midst of using software. Development folks, from the concept phase through the design phase, through programming and even quality assurance, often miss the mark when it comes to “ease of use” because they are too self-absorbed in their own world to clearly see or understand that the software they’ve placed so highly on a pedestal is really just a train wreck in slow motion. Realizing that business and user requirements may change many, many times throughout the development process is something that all dev teams should learn to accept to keep themselves from burning out or ending up in the mental ward.

3. Notes involving changes to the technical/functional specifications are clear to everyone involved. When making changes to specification documents, once again (this seems to be an ongoing, frustrating experience), any notes concerning changes to the product should state precisely, within the spec documents, what the changes are, how the changes will be applied, and by whom. When change notes are scattered throughout a spec document and read like a cheap novel or sound like a conversation with a little kid at a playground, how can everyone really understand what is being changed and why? Keep the notes very clear, in the right spots, and realize that everyone in the development process benefits from solid terminology and communication.

4. Be realistic about product delivery deadlines. Yes, the business folks want it on that date… then development managers make promises that the software will be finished by that date. It is never finished on that date! I don’t know of any software that has been “ready to go” on the date promised. Usually, when it gets down to the wire, there is a lot of running around and panic, which trickles down to the QA group as pressure to hurry up and test so the product can go out the door. More often than not, some massive defect is discovered the day before release and everyone goes nuts. There can be lots of finger-pointing when this occurs. Take a deep breath, fix the problem, get the release to the customers either on time or a few days later… relax. It is not the end of the world (maybe your job, but not the world).

Finally, we want to do great work because all of us in development take pride in what we do. Nothing is perfect and we are only human… and good software is simply a reflection of the good stuff in our human nature. Enjoy!

Read Part 1 or Part 2 of this three-part article