To provide an estimate of Web and SQL server requirements, a regime of server load testing is carried out as part of the Intelligent Agent release process. Because exact server requirements depend heavily on Workflow composition and throughput, a representative Workflow and throughput are used to generate the load test results that feed the generic estimation process.
Test Environment
Server Infrastructure
The web and SQL servers run in Amazon EC2 on General Purpose (M4) VM instances, running Windows Server 2012 R2 and Microsoft SQL Server 2014.
Client Emulation
An internal tool is used to open and manage a separate Chrome browser session for each test user, and to move the users through the test Workflow. As each Workflow run is performed by an independent Intelligent Agent user with their own credentials and browser session, this ensures that all network, internal Intelligent Agent process, and supporting activities are put under realistic load.
The internal tool is run on one or more Amazon EC2 Compute Optimized (C4) VM instances to provide the CPU resources for the large number of concurrent browser sessions required.
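The internal tool itself is not public, but the one-browser-per-user pattern it follows can be sketched with a worker thread per test user. Everything here is illustrative: `run_user_session` stands in for opening a dedicated Chrome session with that user's credentials and driving it through the test Workflow.

```python
import threading

def run_user_session(user_id, results):
    # Hypothetical stand-in for one test user: in the real internal tool
    # this would open a dedicated Chrome session with the user's own
    # Intelligent Agent credentials and drive it through the test Workflow.
    results[user_id] = "complete"

def emulate_users(user_count):
    # One independent worker per test user, so sessions run concurrently
    # rather than sharing state, mirroring the one-browser-per-user model.
    results = {}
    threads = [
        threading.Thread(target=run_user_session, args=(uid, results))
        for uid in range(user_count)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Keeping each user's session fully independent is what makes the load realistic: every request carries its own authentication and browser state rather than being multiplexed through a shared connection.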
Intelligent Agent Version
The last test was concluded on 22nd April 2017, using Intelligent Agent version 4.6.20.
Test Workflow
The test Workflow is designed to be a representative example of Intelligent Agent Workflow functionality. It includes multiple page navigation events, data capture, and read/write operations to custom database tables. Full details of the test Workflow are available in the Test Workflow Overview.
Test Run Scenario
To ensure a reasonable representation of real-world usage conditions, there are two different initiation scenarios used for each user count:
Workflow runs are initiated evenly across a 1 minute time period. This gives a steady initial baseline of server usage, and represents a steady-state usage case.
Workflow runs are initiated in four groups split across a 1 minute time period. This produces four usage spikes during the first minute as each group commences, and represents a severe scenario for user activity clustering. This scenario can show an increase in server response times due to the large number of concurrent requests being received by the server and the dynamic nature of Intelligent Agent.
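The two initiation scenarios above amount to simple start-offset schedules. The sketch below is not the test tooling's code; the function names and the assumption of equal-sized, evenly spaced groups are illustrative.

```python
def even_schedule(user_count, window=60.0):
    # Steady-state scenario: spread run initiations evenly
    # across the window (in seconds).
    return [i * window / user_count for i in range(user_count)]

def grouped_schedule(user_count, groups=4, window=60.0):
    # Clustered scenario: assign users to equal-sized groups whose
    # start times are split evenly across the window, producing
    # distinct usage spikes as each group commences.
    return [(i * groups // user_count) * (window / groups)
            for i in range(user_count)]
```

For example, `even_schedule(4)` staggers four users at 0, 15, 30, and 45 seconds, while `grouped_schedule(8)` starts the eight users in pairs at those same four instants, concentrating the load.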
In both scenarios the test Workflow is run 5 times in a row per user, with each new run starting once the previous run is complete. This further replicates real-world usage, as it introduces natural variation in server loading through the test as individual users' activities drift in and out of phase.
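The phase drift from back-to-back runs can be seen in a small simulation. The mean run duration and jitter values below are invented for illustration; only the back-to-back structure (each run starts when the previous one completes) comes from the scenario description.

```python
import random

def run_start_times(initial_offset, runs=5, mean=120.0, jitter=20.0):
    # Back-to-back runs for one user: each run starts only when the
    # previous one completes, so variation in run duration accumulates
    # and users gradually drift in and out of phase with each other.
    # The ~120 s mean duration and +/-20 s jitter are illustrative only.
    starts, t = [], initial_offset
    for _ in range(runs):
        starts.append(t)
        t += random.uniform(mean - jitter, mean + jitter)
    return starts
```

Because each user accumulates their own duration jitter over five runs, two users who started 12 seconds apart may end up hundreds of seconds out of step by the final run, which is exactly the load-variation effect the scenario is designed to produce.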