5 Common Performance Testing Mistakes to Avoid
Many good organizations fall victim to performance issues because they do not test their applications under real-world conditions. Find out whether you're making one of the most common performance testing mistakes.
In this 3-minute video Robert Schneider, from WiseClouds, guides us through five common performance testing mistakes and how to avoid them. Make sure your testing is up to standard, and that you're not falling into any of the common performance testing pitfalls that exist. The entire transcript of the video is also provided below the video for your convenience.
1. Ignore it During Design
The first problem we see is one of planning, or a lack of planning. We see a lot of organizations omit performance considerations during the design phase, and this can lead to all sorts of problems later. Instead, it's a much better idea to include quality assurance, performance, and other such considerations at the very beginning of the design phase. You want to get Service Level Agreements (SLAs) in place during design so you know what you're aiming at as you build your solution.
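A design-time SLA is most useful when it can be checked automatically later on. As a rough sketch (the SLA value and function names here are illustrative, not from the video), a percentile-based latency target agreed during design might be encoded like this in Python:

```python
# Hypothetical SLA agreed during design: 95% of requests complete in under 800 ms.
SLA_P95_MS = 800

def p95(latencies_ms):
    """Return the 95th-percentile latency from a list of samples (in ms)."""
    ordered = sorted(latencies_ms)
    index = max(0, int(len(ordered) * 0.95) - 1)
    return ordered[index]

def check_sla(latencies_ms):
    """True if the observed p95 latency meets the design-time SLA."""
    return p95(latencies_ms) <= SLA_P95_MS
```

A check like this can run against every performance test you build later, so the SLA stays a living target rather than a forgotten design note.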
2. Wait Until Software is Finished
Lots of enterprises delay vital performance testing until the very end of the application development lifecycle. That usually leads to some unpleasant surprises. Instead, it is much better to run performance tests throughout the development lifecycle, even if they are only unit tests, or tests against the infrastructure or database; at least that will give you some idea of the performance you can expect once the entire solution is complete.
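A unit-level performance check can be as small as timing one function against a budget, long before the full application exists. As a minimal sketch (the helper and the 50 ms budget are illustrative assumptions, not from the video):

```python
import time

def time_call(fn, *args, repeats=5):
    """Run fn several times and return the fastest wall-clock time in ms."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        best = min(best, (time.perf_counter() - start) * 1000)
    return best

def within_budget(fn, args, budget_ms=50):
    """A unit-style performance assertion: does fn stay under its budget?"""
    return time_call(fn, *args) <= budget_ms
```

Running a handful of these early in the lifecycle turns "we'll find out at the end" into a trend you can watch from the first sprint.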
3. Use a Small Amount of Hardcoded Data
It's very easy to test your applications using a relatively small set of hardcoded, static data. Unfortunately, that does not give you a true measure of the real performance you can expect once the application goes live. There are many inexpensive data generation tools on the marketplace that you can use to create massive amounts of realistic information. You can then feed this information to your tests, which will give you a much better indication of the performance you can expect once your solution goes live.
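Even without a commercial tool, a few lines of code can replace a hardcoded fixture with thousands of varied records. A minimal sketch, assuming a hypothetical customer record shape (the field names are illustrative):

```python
import random
import string

def generate_customers(n, seed=None):
    """Generate n synthetic customer records instead of a tiny hardcoded set."""
    rng = random.Random(seed)  # seedable, so test runs are reproducible
    records = []
    for i in range(n):
        name = "".join(rng.choices(string.ascii_lowercase, k=8))
        records.append({
            "id": i,
            "name": name.title(),
            "balance": round(rng.uniform(0, 10_000), 2),
            "active": rng.random() < 0.9,
        })
    return records
```

Feeding `generate_customers(100_000)` into a test exercises indexes, caches, and serialization in ways that five hand-typed rows never will.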
4. Focus on a Single Use Case
We encounter lots of software development teams that only test a single use case scenario when it comes to performance testing. That can be very problematic, because you're not truly getting a measure of the performance scenarios your application is likely to encounter in the real world. Testing tools such as soapUI and loadUI allow you to use a variety of different statistical approaches, from burst to random to steady state to many other kinds of variation, that will give you a much more realistic indication of the real-world scenarios your solution is likely to encounter. And they are easy to use.
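To make the difference between those load shapes concrete, here is a rough sketch of how the three patterns mentioned above might be generated as request start times (this is an illustrative model, not how soapUI or loadUI implement them):

```python
import random

def steady(total_requests, duration_s):
    """Steady state: evenly spaced request start times."""
    gap = duration_s / total_requests
    return [i * gap for i in range(total_requests)]

def burst(total_requests, duration_s, bursts=4):
    """Burst: requests clustered into short spikes separated by idle gaps."""
    per_burst = total_requests // bursts
    times = []
    for b in range(bursts):
        start = b * (duration_s / bursts)
        times.extend(start + i * 0.01 for i in range(per_burst))
    return times

def random_arrivals(total_requests, duration_s, seed=None):
    """Random: uniformly random start times, a crude stand-in for real traffic."""
    rng = random.Random(seed)
    return sorted(rng.uniform(0, duration_s) for _ in range(total_requests))
```

The same total load produces very different peak concurrency under each pattern, which is exactly why a single-scenario test can be misleading.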
5. Run Tests from One Location
We see lots of teams that run their performance tests from inside the firewall, whether for budgetary or technical reasons. But that doesn't truly measure the performance you're going to get when your application or service is out there being used in the real world.
There is a collection of technologies, such as those provided by SmartBear, that allow us to distribute our tests into the cloud and truly measure the network latency we're going to experience once our solution goes live.
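The core idea of multi-location testing is simply to run the same probe from several vantage points and compare the timings. As a hedged sketch (the `probe` callables, region names, and summary shape are all illustrative assumptions):

```python
import time
import statistics

def measure_latency(probe, samples=10):
    """Time repeated calls to `probe` (e.g. one HTTP request) in milliseconds."""
    timings_ms = []
    for _ in range(samples):
        start = time.perf_counter()
        probe()
        timings_ms.append((time.perf_counter() - start) * 1000)
    return {
        "min": min(timings_ms),
        "median": statistics.median(timings_ms),
        "max": max(timings_ms),
    }

def compare_regions(probes_by_region, samples=10):
    """Run the same probe from several vantage points and compare the results."""
    return {region: measure_latency(p, samples)
            for region, p in probes_by_region.items()}
```

A test run only from inside the firewall is effectively a single entry in that comparison, which is why its numbers look better than what real users will see.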
Robert Schneider is a senior consultant for WiseClouds, a SmartBear partner who offers training in using soapUI. He has provided distributed computing, database optimization, and other technical expertise to a wide variety of enterprises in the financial, technology, and government sectors.