This is part three in a four-part series that will walk you through everything you need to think about when performance testing a new site.
While previous posts have focused on why performance tests are important, what to think about before testing, and which tools are available, in this post we’ll get into the details and cover everything you need to organize in the lead-up to your performance test.
To stage a performance test, you will need to gather the following information.
Expected User Behavior. For any performance test to be accurate, you will need to mimic credible user interactions with the system. Some of these interactions can be adapted from your functional tests, which are usually broken down by specific business functionality (user logs in, user creates content, user browses the catalog). Generally, however, you will want to broaden the test to mimic an actual user's trip through the site. I have found a few guidelines that help here:
- Determine the average number of page views per session. If the site already exists, derive this from existing traffic; otherwise, base it on observed user interaction with the system during testing. This number gives the test script developer a complexity target for the scenarios they will be creating.
- Determine a set of paths that users are likely to take through the system, and vary these paths dynamically to simulate real usage. Again, traffic data from existing systems is a good point of reference.
- Include a mixture of anonymous and authenticated traffic in the tests. This, of course, only applies if the site supports both authenticated and anonymous browsing.
- Define a set of users, with known passwords, to use as part of the test bed. The pool should be large enough that the same user is unlikely to appear in multiple concurrent sessions. These users will need to be seeded into the authentication data store before the test.
- Create a realistic mix of browsing and content creation (if allowed). A general rule of thumb is 10% content creation to 90% content browsing. In the case of a WCMS, if the virtual users are not creating content, you will still want to add content from the authoring nodes during the test cycle (if that is a realistic scenario).
- Determine a way to reset the cluster to at least one known starting state. If content creation is a use case under test, the content of the system will change, and that change could affect system performance. Generally, though, you will want to be able to create multiple starting points as you begin testing specific scenarios.
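The user-behavior guidelines above can be sketched as a simple session generator. This is a minimal, tool-agnostic illustration, not a real load driver; the path names, the 40% authenticated ratio, the page-view target, and the pool of 1,000 seeded test users are all assumptions made up for the example:

```python
import random

# Hypothetical candidate paths through the site; on a real project these
# would come from analytics data on the existing system.
PATHS = {
    "browse_catalog": ["/", "/catalog", "/catalog/item"],
    "read_article":   ["/", "/news", "/news/article"],
    "search":         ["/", "/search", "/search/results", "/catalog/item"],
}

# Seeded test users with known passwords (assumed to already exist in the
# authentication data store, per the guideline above).
TEST_USERS = [(f"testuser{i:04d}", "KnownPassword!") for i in range(1000)]

AVG_PAGE_VIEWS = 6    # target derived from existing traffic (assumed)
AUTH_RATIO = 0.4      # fraction of sessions that log in (assumed)
CREATE_RATIO = 0.10   # the 10% creation / 90% browsing rule of thumb

def generate_session(rng: random.Random) -> dict:
    """Build one virtual-user session plan mixing the guidelines above."""
    authenticated = rng.random() < AUTH_RATIO
    user = rng.choice(TEST_USERS) if authenticated else None
    pages = []
    # String randomly chosen paths together until we hit the page-view target.
    while len(pages) < AVG_PAGE_VIEWS:
        pages.extend(rng.choice(list(PATHS.values())))
    # Only authenticated users may create content, at roughly the 10% rate.
    creates = authenticated and rng.random() < CREATE_RATIO
    return {"user": user, "pages": pages[:AVG_PAGE_VIEWS], "creates_content": creates}

if __name__ == "__main__":
    rng = random.Random(42)  # fixed seed so dry runs are repeatable
    sessions = [generate_session(rng) for _ in range(10_000)]
    authed = sum(1 for s in sessions if s["user"] is not None)
    print(f"{authed / len(sessions):.0%} of sessions authenticated")
```

In a real test the session plans would be fed to the load tool's virtual users; the point here is only that the mix (page views, paths, authentication, creation rate) is driven by explicit, measurable numbers rather than guesswork.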
- Make sure the network team is aware the performance test will be happening. Performance tests look like DDoS attacks on the servers: the network will be flooded with traffic from the virtual users, which will adversely affect anyone else on it.
- Conduct the performance test on a cluster that closely mimics the production system, including server sizing and the endpoints for external vendors.
- Create sandbox instances for external vendors. In a greenfield deployment, these can be the production instances (as long as the vendors are aware of and allow the testing). For upgraded systems, or systems where the production endpoints are already in use, the vendor will have to provide instances that the performance cluster can use for the test.
- Enable system monitoring on the systems under test. Accurately monitoring resource use on the testing cluster is essential for judging the health of the system. The primary resources to monitor are CPU and I/O usage.
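As a rough illustration of that last point, here is a minimal sketch that samples overall CPU utilization and I/O wait, assuming a Linux host where `/proc/stat` is available. A real test would rely on a proper monitoring stack (sar, collectd, a metrics agent) rather than hand-rolled sampling:

```python
import time

def cpu_sample(interval: float = 1.0) -> dict:
    """Sample /proc/stat twice and return busy and I/O-wait fractions.

    Minimal sketch, Linux-only: the first line of /proc/stat holds
    cumulative CPU tick counts (user, nice, system, idle, iowait, ...).
    """
    def snapshot():
        with open("/proc/stat") as f:
            return [int(x) for x in f.readline().split()[1:]]

    before = snapshot()
    time.sleep(interval)
    after = snapshot()
    delta = [b - a for a, b in zip(before, after)]
    total = sum(delta) or 1  # guard against a zero-length interval
    # Fields 3 and 4 are idle and iowait; everything else counts as busy.
    return {
        "cpu_busy": 1.0 - (delta[3] + delta[4]) / total,
        "iowait": delta[4] / total,
    }

if __name__ == "__main__":
    sample = cpu_sample(0.5)
    print(f"CPU busy: {sample['cpu_busy']:.1%}, I/O wait: {sample['iowait']:.1%}")
```

During a test run you would log samples like this (or their equivalent from your monitoring tool) at a fixed interval, so spikes in CPU or I/O wait can be lined up against the load profile afterwards.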
Check out the rest of the series: