The argument for testing performance in production

08.21.2014

These days I encounter fewer teams doing formal performance testing than I used to. For many teams, this is simply the result of QA being spread too thin and not having the resources to do performance testing well. In some more agile shops, however, there is an active argument against performance testing, especially load testing.

For many new applications, the usage patterns and number of concurrent users are unknown. This makes anticipating the performance “requirements” an exercise in guesswork. Development effort to expand the capacity of the system from 100 concurrent users to 1000 might be completely wasted if no more than 100 users ever log on at the same time.

Agile, continuous delivery, and lean startup thinking all nudge teams towards going to production first to discover what's truly needed. The danger is that if the application is wildly successful, it may fall down under the load. Generally, that's a good problem to have, although there are notable exceptions. What enables this experimentation is a hedge against overwhelming success: being able to scale your infrastructure quickly. If you can add servers quickly, the time needed to fix scalability problems can be bought by renting them – and bought while the application is delivering economic value to your firm.
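
As a purely illustrative sketch of what "renting servers quickly" can look like, the snippet below uses the AWS SDK for Python (boto3) to raise the desired capacity of an existing Auto Scaling group. The group name, region, and capacity figures are hypothetical, and in practice this kind of scale-out is usually driven by monitoring alarms rather than a hand-run script.

```python
# Illustrative sketch only: scale out an existing AWS Auto Scaling group
# when demand outstrips current capacity. The group name and capacity
# figures are hypothetical; real setups usually tie this to CloudWatch alarms.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

ASG_NAME = "webapp-asg"      # hypothetical Auto Scaling group
TARGET_CAPACITY = 10         # servers to run while you fix the real bottleneck

def scale_out(group_name: str, desired: int) -> None:
    """Raise the desired instance count, buying time while the app keeps earning."""
    autoscaling.set_desired_capacity(
        AutoScalingGroupName=group_name,
        DesiredCapacity=desired,
        HonorCooldown=False,  # scale immediately rather than waiting out the cooldown
    )

if __name__ == "__main__":
    scale_out(ASG_NAME, TARGET_CAPACITY)
```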

The risk is that your application architecture itself doesn't scale – that there are nasty bottlenecks additional servers can't easily compensate for. Typically, the culprit is the database.

This approach is actually fairly common for applications built on known architectures whose scalability characteristics are relatively predictable. The application can be built and deployed initially to a public cloud to verify demand. Once the team has a sense of the application's performance characteristics, they can allocate server resources in their own data center and bring the application in house.

Performance testing still has its place. It's awesome for catching changes that degrade the performance of existing applications, and for major application launches where scalability absolutely has to be right the first time. However, much as an application's feature "requirements" are often speculative and need to be tested against actual users, when infrastructure needs are unknown, dynamic infrastructure offers a more effective option: discovering those requirements in the field.
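
If you do keep performance tests around for regression detection, they don't have to be heavyweight. Below is a minimal sketch, using only the Python standard library, that hits an endpoint with concurrent requests and fails if the 95th-percentile latency exceeds a budget. The URL, request count, concurrency, and latency threshold are placeholder assumptions, not recommendations.

```python
# Minimal sketch of a regression-oriented load check using only the standard library.
# The URL, concurrency, and latency budget below are illustrative placeholders.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/health"  # hypothetical endpoint under test
REQUESTS = 200
CONCURRENCY = 20
P95_BUDGET_SECONDS = 0.250            # fail the check if p95 latency regresses past this

def timed_request(url: str) -> float:
    """Issue one GET and return its wall-clock latency in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as response:
        response.read()
    return time.perf_counter() - start

def p95(samples: list) -> float:
    """Return the 95th-percentile value from a list of latency samples."""
    ordered = sorted(samples)
    return ordered[int(0.95 * (len(ordered) - 1))]

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        latencies = list(pool.map(timed_request, [URL] * REQUESTS))
    observed = p95(latencies)
    print(f"p95 latency: {observed * 1000:.1f} ms over {REQUESTS} requests")
    if observed > P95_BUDGET_SECONDS:
        raise SystemExit("Performance regression: p95 latency exceeded the budget")
```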

Published at DZone with permission of Eric Minick, author and DZone MVB. (source)
