ALL YOU NEED TO KNOW ABOUT APPLICATION PERFORMANCE MODELING

By CIOReview | Monday, July 4, 2016

Overview

Performance modeling is a methodology for assessing the performance-limiting factors of a given set of hardware resources, and it plays a pivotal role in capacity planning and resource management. The practice of estimating performance with approximation methods, in the absence of a load-testing environment, is termed Application Performance Modeling (APM). It examines user transactions at a lower level of detail and gives developers and architects a better understanding of system requirements.

Because APM implementations differ widely, the operational details of simulation vary greatly, from application code down to deployable components. One notion that holds across the board, however, is that newcomers to the performance-modeling realm are prone to technical missteps, such as choosing an unsustainable modeling approach.

Today's dynamic and widely distributed IT sector requires application performance monitoring that fosters agility. Poor performance can directly affect a business: it tears down the brand image or cuts straight into the revenue pipeline as customers struggle to complete transactions. To keep your modeling strategy on solid ground, know the principal performance modeling approaches, collect accurate data to feed your model, and benchmark the model against known reference measurements to ensure its results correspond to actual values.

Tightly Coupled Modeling and Component-level Modeling

Tightly integrated performance modeling is similar in spirit to DevOps: it is bound closely to development from the earliest phase and is revised whenever the code changes. This technique gives architects and planners precise information on how changes in code, demand, or deployment model will affect performance. It demands a significant amount of work from users, however, which many of them resist, and it caters only to internally developed code, not to third-party applications.

Tightly coupled modeling tools are useful when you control the code or have performance-modeling hooks embedded by the developer. To implement this model, look to your application development tool provider or to reputable third-party solutions.

A component-level performance model is an ideal fit for users who depend on packaged software and other third-party applications. It suits software that is divided into deployable components, including those coupled through Service-Oriented Architecture (SOA) or the REST (Representational State Transfer) architectural style.

Knowing the different approaches to APM

The entire premise of application performance modeling rests on three approaches: discrete-event, analytical, and statistical modeling. Discrete-event simulation comes in handy for coupled modeling applications; an analytical model employs an expression or code structure that corresponds to the actual performance of a component; and a statistical model, unlike the other two, draws conclusions from a series of graphs representing the range of observed performance. Analytical tooling includes packages such as Java Modelling Tools and Pretty Damn Quick (PDQ), while statistical tools range from IBM's SPSS to Spotfire and Minitab.
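For illustration, here is a minimal Python sketch of what an analytical model can look like, assuming a component can be approximated as a simple M/M/1 queue; the function name and the single-queue assumption are ours, not drawn from any specific tool:

# A minimal analytical-model sketch: treat one component as an M/M/1 queue
# (an assumption for illustration; real components may need richer models).
def mm1_response_time(service_time: float, arrival_rate: float) -> float:
    """Predicted mean response time (seconds) of an M/M/1 queue."""
    utilization = arrival_rate * service_time
    if utilization >= 1.0:
        raise ValueError("offered load saturates the component (utilization >= 1)")
    return service_time / (1.0 - utilization)

# Example: a 20 ms mean service time under 30 requests/second.
print(mm1_response_time(service_time=0.020, arrival_rate=30.0))  # ~0.05 s

The point of such an expression is that it predicts behavior at loads you have not measured, which is exactly what plotting raw data cannot do.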

Applying these statistical or analytical models, however, requires a set of measurement conditions. The most common measurement is 'offered load,' or demand; it can serve as the basic unit for counting transactions when there are no notable performance differences between them. Where performance differences do exist, measurements must be broken down by transaction type for accuracy. An analytical model then represents component behavior as a function of these inputs, rather than simply plotting the data.
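As a hedged illustration of that per-transaction breakdown, the sketch below computes offered load by transaction type from a request log; the log format, names, and function are assumptions, not part of any standard tool:

# Compute offered load (transactions/second) per transaction type.
from collections import Counter

def offered_load_by_type(requests, window_seconds):
    """requests: iterable of (timestamp, transaction_type) pairs."""
    counts = Counter(txn_type for _, txn_type in requests)
    return {txn: n / window_seconds for txn, n in counts.items()}

log = [(0.1, "login"), (0.4, "search"), (0.9, "search"), (1.2, "checkout")]
print(offered_load_by_type(log, window_seconds=2.0))
# {'login': 0.5, 'search': 1.0, 'checkout': 0.5}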

In the long run, some users may find that a successive-approximation method is the ideal way to set up a performance model. In that case, build a component map that assumes each component has a constant performance range, then test the model's results against the real application under a specified range of conditions. The model should let you read off projected performance at different points and compare it with observed performance. When the results fall within acceptable limits, no further refinement is required. If there is a mismatch, you must examine the component's logic, possibly modeling discrete logic paths as subcomponents. Seasoned professionals with hands-on experience in building models may skip this incremental approach and instead start from a model that already has some subcomponent structure based on application workflows and transaction handling.
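The following Python sketch illustrates that validation loop under stated assumptions: each component has a constant service time, end-to-end latency is their sum, and a 10 percent tolerance marks acceptable agreement. All component names and numbers are hypothetical:

# A sketch of the successive-approximation validation loop described above.
def projected_latency(component_times):
    """Project end-to-end latency as the sum of per-component times."""
    return sum(component_times.values())

def validate(component_times, observed_latency, tolerance=0.10):
    """Compare projection with observation; name a component to refine."""
    projected = projected_latency(component_times)
    error = abs(projected - observed_latency) / observed_latency
    if error <= tolerance:
        return "model acceptable"
    # Mismatch: the slowest component is the first candidate for
    # splitting into subcomponents (discrete logic paths).
    suspect = max(component_times, key=component_times.get)
    return f"refine component '{suspect}' (error {error:.0%})"

model = {"web": 0.015, "app": 0.040, "db": 0.025}
print(validate(model, observed_latency=0.120))  # refine 'app' (error 33%)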

The main advantage of tightly integrated models is that they prepare an organization to keep pace with application changes by changing the model in step. Component-level modeling, on the other hand, folds validation of the application performance model into the application lifecycle management (ALM) process.

The Cloud Application Space awaits better monitoring tools

The lifecycle progression in ALM will include a load test, and that is the right time to validate the application performance model. Check the test data suites for changes and update the model accordingly. Then, before the load test, run the performance model against the test data suite to obtain its expected performance, and match those predictions against the measured results.
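A hedged sketch of that comparison step, with hypothetical transaction names, latencies, and a 15 percent tolerance chosen purely for illustration:

# Flag transactions whose load-test results deviate from model predictions.
def mismatches(predicted, measured, tolerance=0.15):
    """Return {transaction: (predicted, measured)} for deviations."""
    return {
        txn: (predicted[txn], measured[txn])
        for txn in predicted
        if abs(measured[txn] - predicted[txn]) / predicted[txn] > tolerance
    }

predicted = {"login": 0.050, "search": 0.120, "checkout": 0.200}
measured = {"login": 0.052, "search": 0.180, "checkout": 0.210}
print(mismatches(predicted, measured))
# {'search': (0.12, 0.18)} -> update the model or investigate the regression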

To validate performance models for cloud applications, users should set up both application performance monitoring and network monitoring tools. Where third-party applications are employed, give preference to monitoring tools built for them. For better results, extend monitoring to cover additional sources such as network traffic probes.

APM tools for cloud applications still require a significant amount of skill to use, which users often find unappealing. As the technology evolves, however, more capable tools are expected to make inroads in the near future, so keep up with your vendor's latest updates.