Software testing: Stress testing your production systems

By Mario Matthee, Head of SQA Architecture and R&D, Global Testing Solutions, DVT

Wednesday, 18 January 2017


For all the benefits of performance testing your software, most (rigorous) performance testing is not done on your actual production systems. This somewhat diminishes those benefits, as if you were running ‘what-if’ scenarios on a simulator rather than the real thing.


While limited performance testing can safely be attempted on production systems (and sometimes is), there are good reasons for not running extreme performance testing (stress tests) on your production systems. Where basic performance testing might put a small load on your systems, stress testing is just that – stressful – on you and your systems. It necessarily pushes your systems to the limit, hitting them from different angles and pushing obscene amounts of traffic through narrow bandwidth pipes in a concerted effort to break them.


For most organisations, it is therefore impractical at best – and negligent at worst – to run anything even resembling a stress test on a live system. Except you can, and you should.


Don’t get me wrong; I’m not advocating the risky scenarios you’re probably imagining. In fact, while most commercial performance testing tools could conceivably run on production systems, they generally can’t work through the highly encrypted and secure tunnels that organisations like banks use to protect their systems from any and all forms of malicious intrusion.


But there is a way to performance test your production systems. Not only that, there is a way to automate the performance tests on your production systems, so that as soon as performance drops below a certain level, you will know about it. I call it Automated Functional Performance Testing, and it overcomes the limitations of traditional performance testing by using mainstream automated test scripts that were not necessarily designed to test performance.


Let me explain, using the aforementioned banking system as an example. As a bank, you use expensive proprietary software and secure protocols to guarantee your customers’ safety and verify the integrity of your transactions. The same software prevents performance testing tools from seeing – and therefore diagnosing – any problems that may occur inside the secure code.


In other words, if something is broken while a secure transaction is taking place, and that something is slowing down your system, a performance testing tool is not going to help you troubleshoot the problem because it can’t see it, so to speak.


However, install an automated script that keeps tabs on the time it takes the system to process a transaction request – the time between the customer issuing the request (before it goes through the security tunnel) and getting a result (on the other side of the tunnel) – and suddenly you have eyes on a critical part of your system’s performance without compromising security or adding any overhead to the process.
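
To make that concrete, here is a minimal sketch of what such a script might look like in Python. Everything in it is a hypothetical stand-in: the submit_transfer function simulates your real production client call, and the five-second threshold is an illustrative figure, not one from any actual SLA.

```python
import time

THRESHOLD_SECONDS = 5.0  # illustrative threshold; substitute your own SLA figure

def submit_transfer(amount: float, account: str) -> str:
    """Stand-in for the real client call that travels through the secure
    tunnel. The script never looks inside the tunnel; it only times the
    round trip. Here we simply simulate a short processing delay."""
    time.sleep(0.1)
    return "REF-0001"  # fake transaction reference

def timed_transfer(amount: float, account: str) -> float:
    start = time.perf_counter()
    submit_transfer(amount, account)       # request goes in before the tunnel...
    elapsed = time.perf_counter() - start  # ...and the result comes out the other side
    if elapsed > THRESHOLD_SECONDS:
        print(f"WARNING: transfer took {elapsed:.2f}s "
              f"(threshold {THRESHOLD_SECONDS:.1f}s)")
    return elapsed

print(f"transfer completed in {timed_transfer(100.0, 'ACC-123'):.2f}s")
```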


Not only that, but the automated script you would use to ‘test’ the performance of this aspect of your system will also cost significantly less than a typical performance testing tool which, as you probably know, is not cheap.


The same technique can be used in different ways on different systems to the same effect. For example, run an automated script on your web server to test the response times on your newsletter signup form, or run a test script to let you know when a page click takes longer to return a result than your SLA demands.
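
As a rough sketch of the web variant, the following times a POST to a signup form and compares the round trip against an SLA figure. The URL, the form field and the two-second limit are illustrative assumptions, not values from a real system.

```python
import requests

SLA_SECONDS = 2.0  # assumed SLA figure for illustration
SIGNUP_URL = "https://example.com/newsletter/signup"  # hypothetical endpoint

def check_signup_form() -> None:
    resp = requests.post(SIGNUP_URL,
                         data={"email": "smoke-test@example.com"},
                         timeout=10)
    # resp.elapsed measures the time from sending the request until the
    # response headers arrived
    elapsed = resp.elapsed.total_seconds()
    if resp.status_code != 200:
        print(f"FAIL: signup returned HTTP {resp.status_code}")
    elif elapsed > SLA_SECONDS:
        print(f"SLOW: signup took {elapsed:.2f}s (SLA {SLA_SECONDS:.1f}s)")
    else:
        print(f"OK: signup responded in {elapsed:.2f}s")

if __name__ == "__main__":
    check_signup_form()
```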


Performance testers establish SLAs to make sure that apps reach certain performance benchmarks. For example, a typical SLA might include a clause that requires the first page of search results to be returned within three seconds when performing a product search.
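
That three-second clause translates naturally into an automated check. The sketch below expresses it as a pytest-style test; the search URL and query parameter are hypothetical.

```python
import requests

def test_search_meets_sla():
    # Hypothetical product-search endpoint; the three-second limit comes
    # from the example SLA clause above.
    resp = requests.get("https://example.com/search",
                        params={"q": "widgets"}, timeout=10)
    assert resp.status_code == 200
    elapsed = resp.elapsed.total_seconds()
    assert elapsed < 3.0, f"search took {elapsed:.2f}s, SLA is 3s"
```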


You can even extend the script to send alerts to any number of people responsible for maintaining your business systems, giving you an early warning system to proactively rectify any faults before they start impacting your customers’ experience.
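
One way to bolt alerting onto the checks above is a small helper that emails the people on call whenever a threshold is breached. The SMTP host, sender and recipient addresses below are placeholders; assume whatever alerting channel your team already uses.

```python
import smtplib
from email.message import EmailMessage

RECIPIENTS = ["ops@example.com", "dev-leads@example.com"]  # hypothetical list

def send_alert(subject: str, body: str) -> None:
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = "monitoring@example.com"   # placeholder sender
    msg["To"] = ", ".join(RECIPIENTS)
    msg.set_content(body)
    with smtplib.SMTP("mail.example.com") as smtp:  # your relay host here
        smtp.send_message(msg)

# Called from the earlier check, for example:
#   if elapsed > SLA_SECONDS:
#       send_alert("Signup form slow",
#                  f"Response took {elapsed:.2f}s against a {SLA_SECONDS}s SLA")
```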


If all of this sounds very much like the results you expect to get from performance testing, that’s because it is. True, you’re not using testing tools that have been specifically developed to isolate and remedy serious performance issues, but you’re going one step further by working with production rather than development systems.


That said, I’m not advocating that one is necessarily better than the other. Dedicated performance testing and performance testing tools play a vital role in generating quality software from the very start of the development process right through its lifecycle. Performance testing tools are also becoming increasingly mainstream, and therefore more cost-effective to run.


This approach is almost like production monitoring from an infrastructure perspective, but instead of focusing on detailed memory and CPU usage, it focuses on the customer experience from a speed/response perspective. It’s also handy for monitoring the uptime and response times of the third-party systems the application integrates with. In most cases, if these third-party systems are not available, the business transaction cannot be processed.
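
A sketch of that third-party angle: periodically hit each dependency’s health endpoint and record whether it is up and how quickly it answers. The endpoints and the one-minute interval are assumptions for illustration.

```python
import time
import requests

DEPENDENCIES = {  # hypothetical third-party health endpoints
    "payment-gateway": "https://payments.example.com/health",
    "credit-bureau": "https://bureau.example.com/ping",
}

def check_dependencies() -> None:
    for name, url in DEPENDENCIES.items():
        try:
            resp = requests.get(url, timeout=5)
            status = "UP" if resp.ok else f"DEGRADED (HTTP {resp.status_code})"
            print(f"{name}: {status}, {resp.elapsed.total_seconds():.2f}s")
        except requests.RequestException as exc:
            # if a dependency is down, the business transaction can't complete
            print(f"{name}: DOWN ({exc})")

if __name__ == "__main__":
    while True:
        check_dependencies()
        time.sleep(60)  # re-check every minute (assumed interval)
```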


Automated Functional Performance Testing is not performance testing in the traditional sense, but it delivers many of the advantages of performance testing, with the added benefit of live information from your live systems at a lower cost.
