Planning and Managing Cloud Capacity

By CIOReview | Wednesday, June 21, 2017

Experts opine that capacity and performance management is far easier to accomplish when it is made part of the application development process. The underlying mechanism is to set goals and then establish the resources, and the relationships among them, that meet those goals. The cloud changes capacity and performance planning as well, often profoundly. Application demand is described by a curve that shows performance (quality of experience, or QoE) at various load levels, with demand measured in "transactions" such as an application update or an email. To plan against that curve, planners need to understand constraints and demand, assess performance in the cloud, and determine how resources and configurations affect it.
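
A minimal sketch of how such a curve can be gathered is shown below; the transaction endpoint and the load levels are hypothetical, and only the Python standard library is assumed.

```python
# Sketch: build a QoE-versus-load curve by timing batches of "transactions"
# at increasing concurrency. The endpoint and load levels are hypothetical.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

ENDPOINT = "https://app.example.com/api/update"  # hypothetical transaction

def one_transaction() -> float:
    """Time a single request/response cycle in seconds."""
    start = time.perf_counter()
    try:
        urlopen(ENDPOINT, timeout=10).read()
    except Exception:
        pass  # a real test would also record failures at each load level
    return time.perf_counter() - start

def qoe_at_load(concurrent_users: int) -> float:
    """Average response time when `concurrent_users` transactions run at once."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        times = list(pool.map(lambda _: one_transaction(), range(concurrent_users)))
    return sum(times) / len(times)

if __name__ == "__main__":
    for load in (1, 10, 50, 100, 200):          # hypothetical load levels
        print(f"{load:>4} users -> {qoe_at_load(load):.3f} s average response")
```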

Focusing on the Users

Typically, QoE is measured by tracking the time a user takes to complete a work cycle, presented to the application as a request and its response. Measuring through an entire business cycle is necessary for reliable data; when that is not possible, a full month's data can be collected and correlated with annual activity rates using application logs.
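
A rough sketch of that measurement follows, assuming a hypothetical log format in which each request and its matching response share a transaction ID and carry ISO timestamps.

```python
# Sketch: derive per-transaction work-cycle times (QoE) from an application log.
# Assumes hypothetical lines like: "2017-06-01T09:15:02.120 REQUEST txn=1234"
#                                  "2017-06-01T09:15:02.870 RESPONSE txn=1234"
from datetime import datetime

def work_cycle_times(log_lines):
    """Return a dict of transaction ID -> completion time in seconds."""
    starts, durations = {}, {}
    for line in log_lines:
        stamp, kind, txn = line.split()
        t = datetime.fromisoformat(stamp)
        txn_id = txn.split("=")[1]
        if kind == "REQUEST":
            starts[txn_id] = t
        elif kind == "RESPONSE" and txn_id in starts:
            durations[txn_id] = (t - starts.pop(txn_id)).total_seconds()
    return durations

sample = [
    "2017-06-01T09:15:02.120 REQUEST txn=1234",
    "2017-06-01T09:15:02.870 RESPONSE txn=1234",
]
print(work_cycle_times(sample))   # {'1234': 0.75}
```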

The cloud is designed primarily to scale resources under load. Database updates and accesses often form a bottleneck, so an organization needs to measure not only the number of database updates and accesses but also the associated delay. Network delay is greater in the cloud and harder to predict as well. It is often difficult to break out network delay in an application because it is hard to time-stamp all the steps, for example when the Internet is used for cloud access or when the cloud provider distributes application copies geographically. To overcome this obstacle, organizations use an "echo" transaction that sends a time-stamped message and receives an immediate response. By testing this across the range of cloud options and locations available, firms can measure the variability in network delay.
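
One way to implement such an echo probe is sketched below; the per-region endpoints and the sample count are hypothetical, and a real test would record failed probes rather than skip them.

```python
# Sketch: an "echo" probe that time-stamps a request and measures round-trip
# delay and its variability across cloud regions. Endpoints are hypothetical.
import statistics
import time
from urllib.request import urlopen

ECHO_ENDPOINTS = {                       # hypothetical per-region echo URLs
    "us-east": "https://us-east.example.com/echo",
    "eu-west": "https://eu-west.example.com/echo",
}

def probe(url: str, samples: int = 20):
    """Return (mean, stdev) of round-trip time in milliseconds."""
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        try:
            urlopen(url, timeout=5).read()
        except Exception:
            continue                     # skip failed probes in this sketch
        rtts.append((time.perf_counter() - start) * 1000.0)
    if not rtts:
        return None, None
    return statistics.mean(rtts), statistics.pstdev(rtts)

for region, url in ECHO_ENDPOINTS.items():
    mean, spread = probe(url)
    print(f"{region}: mean={mean} ms, stdev={spread} ms")
```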

Establishing 'Performance Zones'

The goal is to establish "performance zones" around the network, front-end processing, and database activities. An application's performance is the sum of the delays experienced in these zones, and performance and storage capacity plans augment zone performance to pull overall performance within the QoE boundaries set by business operations. Using zones not only helps in identifying delay choke points but also divides application performance according to the type of "capacity augmentation" needed to remedy issues. The "front-end" zone, which is responsible for structuring information for the user, is the easiest place to improve performance through capacity augmentation. A given zone can be optimized and its response time reduced, and that will change the overall application performance level, recognizing that most capacity changes affect only a limited part of application performance.
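
As a simple illustration of how zone delays roll up into overall performance, the sketch below uses hypothetical delay values and a hypothetical QoE target.

```python
# Sketch: roll up per-zone delays into total response time and flag the
# choke point. Delay values and the QoE target are hypothetical.
zone_delay_ms = {"network": 120, "front-end": 80, "back-end database": 450}
qoe_target_ms = 500

total_ms = sum(zone_delay_ms.values())
choke_point = max(zone_delay_ms, key=zone_delay_ms.get)

print(f"total response time: {total_ms} ms (target {qoe_target_ms} ms)")
print(f"choke point: {choke_point} ({zone_delay_ms[choke_point]} ms)")
if total_ms > qoe_target_ms:
    print(f"augment capacity in the '{choke_point}' zone first")
```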

Making Gradual Changes

Adding capacity in a single zone may have little or no effect on overall performance. Because elastic scaling of cloud components can add overhead through load balancing, overall performance may even slow down. Before relying on added capacity to improve performance, firms should first test its effect, and capacity upgrades should be made gradually so that actions can be correlated with results. When scaling an application, it is important to test performance first with a single instance but with load balancing enabled, then add instances and observe the performance curve; this identifies the point at which added capacity stops paying off. The primary choke point is usually the application's "back end," where database activity is concentrated. SSDs, for example, can significantly improve access and update performance, but given their higher cost it is important to measure the response-time improvement and work through the cost-benefit trade-off.
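
That incremental scaling test can be sketched as follows; measure_response_time is a hypothetical placeholder for a real load test run against the application with the given number of instances behind the load balancer.

```python
# Sketch: add instances behind a load balancer one at a time and stop when the
# response-time improvement flattens out. `measure_response_time(n)` stands in
# for a real load test against n instances; it is a hypothetical placeholder.
def measure_response_time(instances: int) -> float:
    # Placeholder curve: diminishing returns plus a small load-balancing overhead.
    return 800.0 / instances + 15.0 * instances

MIN_IMPROVEMENT = 0.05            # stop below a 5% gain per added instance

baseline = measure_response_time(1)       # single instance, balancer enabled
print(f"1 instance: {baseline:.0f} ms")
previous = baseline
for n in range(2, 11):
    current = measure_response_time(n)
    gain = (previous - current) / previous
    print(f"{n} instances: {current:.0f} ms (gain {gain:.1%})")
    if gain < MIN_IMPROVEMENT:
        print(f"scaling past {n - 1} instances no longer pays off")
        break
    previous = current
```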

Ensuring Accuracy of Data

The sum of the network, front-end, and back-end database zone response times is the total response time and thus the application's QoE. Ensuring accurate data at the zone boundary points, and understanding how zone-specific capacity augmentation changes overall performance, is critical for performance management. Performance management is best addressed in parallel with development rather than deferred until problems emerge; making it part of the application development process makes it far easier to accomplish.