One challenge in designing a SharePoint environment is providing an architecture that meets the performance needs of the organisation, but without throwing more resources than required at the design.
This, in practice, is a very difficult thing to balance. Even when you know the performance aspirations of a deployment – how do you translate that into something tangible that you can design your architecture around?
It’s not uncommon to receive what I would call a non-functional/subjective success criterion such as “the system must perform well for all users”. From a technical perspective, “well” will be interpreted differently depending on the expectations of each and every end user.
A more realistic point to work towards is a test case phrased along the lines of “a user should be able to log in to the site, navigate to three pages, and download a document – with a total load time of 5 seconds”. This, albeit perhaps slightly unrealistic, is at least something we can work with.
Armed with this knowledge, you can typically then start to decide how you are going to compose your server farm. If we know that there are 500 end users, we may be persuaded by Microsoft guidelines to recommend a medium, three-tiered server farm consisting of:
- A couple of load balanced “front-end” servers to handle end user requests
- A couple of “application” servers to run internal SharePoint processes such as search and any other service applications
- A single or clustered SQL environment
This still leaves us with unanswered questions, such as: what is the concurrency of use across our user base (i.e. how many users are likely to be using the system at the same time)? To add to this, if our users are geo-distributed, we may find that due to time zone differences we have a much smaller potential concurrent user base at any one time, which simplifies the system requirements needed to provide the desired end user experience.
What is hopefully becoming apparent is the number of variables to consider when designing a SharePoint environment. Whilst we have recommendations from Microsoft that suggest a user range that a particular configuration could support – no SharePoint architect can truly guarantee a performance outcome.
Why? Because performance criteria are based on assumptions of how a user will interact with the system. Browsing a site is a simplified metric, but what if we have an intranet that has heavy use for document management? What if we have implemented remote blob storage for a product design company to store their large CAD files which will be updated on a regular basis? All of these different use cases result in a different user profile that will interact with the system.
The simplest and safest way to proceed is to agree a number of representative user journeys and an anticipated concurrency profile. For an organisation of 500 individuals, it might be realistic to assume 20% of all users are concurrent at any one time (100 people). Of that subset, if we introduce time zones into the mix, your actual concurrent users may be much lower.
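The arithmetic above can be sketched as a quick estimate. The 20% concurrency figure and the time zone split are assumptions – substitute your own organisation’s data:

```python
# Rough concurrent-user estimate for farm sizing.
# All inputs are assumptions -- substitute your own organisation's figures.

def estimate_concurrent_users(total_users, concurrency_rate, active_tz_share=1.0):
    """Return the expected number of simultaneous users.

    concurrency_rate: fraction of the user base active at once (e.g. 0.2)
    active_tz_share:  fraction of users in the currently-active time zones
    """
    return int(total_users * concurrency_rate * active_tz_share)

print(estimate_concurrent_users(500, 0.20))        # whole org in one time zone -> 100
print(estimate_concurrent_users(500, 0.20, 0.5))   # half the org asleep -> 50
```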
A key concept at this stage of design is to make educated assumptions. This is OK, as the next key stage is to validate those assumptions and react accordingly. Your concurrent user information may already be available from other key systems (e.g. Exchange users), or you can quite simply analyse IIS logs retrospectively to refine your estimated concurrency.
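As a sketch of that retrospective approach, the following counts distinct client IPs per minute from W3C-format IIS log lines as a rough concurrency proxy. The field positions used here are an assumption – check them against the `#Fields:` header of your own log files:

```python
from collections import defaultdict

# Count distinct client IPs per minute from W3C IIS log lines.
# The field positions (date, time, ..., c-ip) are an assumed layout;
# match them against the #Fields: header in your own log files.

def concurrency_per_minute(lines, date_idx=0, time_idx=1, ip_idx=8):
    minutes = defaultdict(set)
    for line in lines:
        if line.startswith('#') or not line.strip():
            continue  # skip W3C header/comment lines
        fields = line.split()
        key = (fields[date_idx], fields[time_idx][:5])  # truncate seconds -> per-minute bucket
        minutes[key].add(fields[ip_idx])
    return {k: len(v) for k, v in minutes.items()}

sample = [
    '#Fields: date time s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip',
    '2013-05-01 09:00:05 10.0.0.1 GET /Pages/default.aspx - 80 - 192.168.1.10',
    '2013-05-01 09:00:40 10.0.0.1 GET /Pages/default.aspx - 80 - 192.168.1.11',
    '2013-05-01 09:01:02 10.0.0.1 GET /Docs/spec.docx - 80 - 192.168.1.10',
]
print(concurrency_per_minute(sample))
# {('2013-05-01', '09:00'): 2, ('2013-05-01', '09:01'): 1}
```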
Load testing is one of those things that really needs to be thought about before you start designing a SharePoint solution – most notably from the perspective of expectation management with key project stakeholders. Load testing exists to validate a design, and the design must be allowed to “fail” those tests so that the architecture can be modified and the performance criteria revalidated. Obviously you will be aiming to get things fairly close the first time around; alternatively, you can treat load testing as stress testing to see how many user journeys a configuration could hypothetically handle.
In this scenario I am going to focus on the former – how to perform a simple measure of a prescribed user journey and look at the results. These results can then be discussed with key stakeholders. If the response times fall within desired parameters then you are in theory all done.
There are many different approaches to load testing, and many different tools to help achieve this. The first thing you need to ask yourself is whether you need to see what is happening on the server, performance-wise, in parallel to the actual request test results.
If this is important (as it will be for stress testing), then you either need a tool that will do this for you (typically via agents installed on the server) – or you need to do it manually, e.g. set up a set of perfmon counters to monitor the server at the same time you are initiating a test. If you are testing a virtual server (e.g. VMware or Hyper-V) then you have additional monitoring options via the management tools provided by those platforms.
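For the manual route, the Windows `typeperf` utility can capture counters to CSV for the duration of a test run. The sketch below only assembles the command; the counter names are typical examples, so verify them on your own server with `typeperf -q` before use:

```python
# Sketch: assemble a typeperf command to capture server-side counters
# for the duration of a load test (run on the SharePoint server itself).
# Counter names are typical examples -- verify them with `typeperf -q`.

counters = [
    r'\Processor(_Total)\% Processor Time',
    r'\Memory\Available MBytes',
    r'\Web Service(_Total)\Current Connections',
]
interval_s, samples = 5, 60  # 5-second samples for 5 minutes

cmd = ['typeperf'] + counters + ['-si', str(interval_s), '-sc', str(samples),
                                 '-o', 'farm_counters.csv']
print(' '.join(f'"{c}"' if c.startswith('\\') else c for c in cmd))
# Pass `cmd` to subprocess.run(cmd) if you want to launch it from Python.
```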
Another factor to consider is where the load will be generated from. There are two sides to this: geography – you should run the tests from where end users reside to identify any latency influence – and load generation capacity. A single test rig is only capable of pushing a server to a certain point; after all, the rig has constraints of its own, such as the processing power available to generate tests and the network throughput to deliver them.
The answer to the above questions will steer you towards the most appropriate tools.
To start things off, there are a few tools (this is by no means an exhaustive list) that you could consider:
- JMeter
- Visual Studio
- LoadUI
JMeter is free and is a powerful option that enables you to both record scenarios and integrate them into complex testing scripts. From my perspective, it has a reasonable learning curve and is not a clean fit for an administrator or power user who has either been charged with performance testing, or who is curious to validate the performance of an environment. On this second point, it’s appropriate to point out the obvious disclaimer that you should not run performance tests against an environment which you do not own or otherwise have permission to test against. Load testing can generate serious load on an environment and potentially be seen as a denial of service attack.
Visual Studio is another product which you will probably come across recommended on a regular basis. The issue I have with this approach is the necessity to have the Ultimate edition of Visual Studio (which does not come cheap), plus the limitations on the number of virtual users you can simulate (although you can of course purchase more “users”).
The third option, LoadUI, comes in a couple of flavours – a free and a pro version. The pro version introduces rich functionality similar to what you would expect in JMeter or Visual Studio, such as browser recording and server load monitoring.
That said, I’m going to introduce a simple scenario with the free version, alongside its free sibling, SoapUI.
These two products work well together. One feature of SoapUI (in addition to many others) is the ability to record a browser session and save the steps to a configuration file. LoadUI then has the ability to read in this configuration file and wrap this within a defined load test.
To start the process, open up SoapUI and create a new Generic Project. Be sure to click the bottom check box “Creates a TestCase with a Web Recording session for functional web testing”.
Next, create a test case which points to your desired site. In this case a shameless plug to my site.
You are then presented with your site.
Note that if you have authentication in place, e.g. this is your corporate intranet – then you can insert authentication information in the properties window.
As you click around and navigate your site as per your defined user journey, you will note that each step is recorded within your test case. This can be as simple as clicking between pages, or even downloading a document from a document library.
When you have completed your user journey, stop recording and save your SoapUI project. We have finished using SoapUI in this example; its main purpose was to automate the recording of your user journey.
Now open LoadUI and create a new project. You are presented with a blank canvas within which you can add a number of controls. For this example, I am going to keep things simple and utilise a generator component to simulate my user load (Usage), a SoapUI Runner component to import the user journey we created above, and a Table Log component to record the outputs from the test.
Once you have added the components to your project, there are a number of configuration steps:
Project – set a limit for the amount of time you want to run the test for, e.g. 30 seconds
Usage – this simulates end user requests, e.g. 20 concurrent users with a request every 10 seconds (2 requests per second)
SoapUI Runner – this is where you simply browse to the SoapUI project you saved above
Table Log – where basic results will be stored
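The Usage figures above translate into an aggregate request rate like so – a simple model assuming each virtual user fires one request per fixed interval, independently of the others:

```python
# Aggregate request rate from LoadUI-style generator settings.
# Simple model: each virtual user issues one request every `interval_s` seconds.

def requests_per_second(concurrent_users, interval_s):
    return concurrent_users / interval_s

rate = requests_per_second(20, 10)
print(rate)       # 2.0 requests per second
print(rate * 30)  # ~60 requests over a 30-second test run
```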
You can then run the test and wait for it to complete. Once completed, your Table Log will contain a summary similar to the below:
From the Table Log you can simply export the results to CSV, or alternatively look at the other reporting options available, such as the graphs in the statistics tab (you can modify these) and the report generation button in the top right corner, which will generate a report (saveable as PDF) summarising the key results.
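Once you have the CSV export, a few lines of scripting will pull out the headline numbers to compare against your agreed criteria. The column name `Time Taken` (in milliseconds) is an assumption – adjust it to whatever header your own export actually uses:

```python
import csv
import io
import statistics

# Summarise response times from an exported results CSV.
# Column name 'Time Taken' (milliseconds) is assumed -- check your export's header.

def summarise(csv_text, column='Time Taken'):
    times = sorted(float(row[column]) for row in csv.DictReader(io.StringIO(csv_text)))
    return {
        'count': len(times),
        'avg_ms': statistics.mean(times),
        'p95_ms': times[int(0.95 * (len(times) - 1))],  # simple nearest-rank percentile
        'max_ms': times[-1],
    }

sample = "Time Taken\n120\n340\n95\n410\n230\n"
print(summarise(sample))
# {'count': 5, 'avg_ms': 239.0, 'p95_ms': 340.0, 'max_ms': 410.0}
```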
With a relatively simple (and free) process, you can implement a basic set of user journey load tests against your SharePoint environment. If the performance results are within the expected/design range then you are good to go! If not, then you have some homework to do to troubleshoot where the performance bottleneck may be.