Sizing

Sizing post #5: How to map our existing processing power to new hardware.

In our last blog post, “Sizing post #4: How much pain are we willing to tolerate?”, we discussed how much pain we can tolerate so we can decide which percentile to use when sizing our database workload. It is not one size fits all, because it depends on the database workload profile; some…


Sizing post #4: How much pain are we willing to tolerate?

We have gone through 3 posts already and have learned how to standardize ASH data for sizing, reviewed some basic statistics like the mean, median, maximum, and minimum, and used percentiles to calculate the CPU requirement for a single-instance database. In this post we want to show a way to figure out what…
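The percentile-versus-pain tradeoff the post describes can be sketched roughly as follows. This is a minimal illustration with made-up CPU-session counts and a hypothetical nearest-rank `percentile` helper, not the author's actual method: size to a given percentile, then count how many samples would still exceed that capacity (the "pain" we are willing to tolerate).

```python
# Hypothetical standardized series: sessions on CPU per 10-second sample.
cpu = [0, 0, 2, 4, 4, 6, 2, 0, 16, 2]

def percentile(data, pct):
    """Nearest-rank percentile of a list of sample values (illustrative)."""
    s = sorted(data)
    k = max(0, int(round(pct / 100 * len(s))) - 1)
    return s[k]

# Sizing to the 80th percentile instead of the maximum accepts some pain:
p80 = percentile(cpu, 80)                 # sized CPU capacity
over = sum(1 for v in cpu if v > p80)     # samples that would exceed it
```

With a real workload (thousands of samples), a higher percentile such as the 95th or 99th is the more typical choice; the count of samples above it quantifies how often the workload would be CPU-constrained.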


Sizing post #2: The not so good mean, the bad median, and the ugly minimum and maximum

This is the second blog entry in a series of posts on the topic of “sizing.” This time we are going to chart the CPU usage and calculate the average (mean), median, minimum, and maximum from the ASH data we standardized in the previous post, “Sizing post #1: How to standardize ASH data for sizing.”…
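The statistics the excerpt names can be computed directly on a standardized series. A minimal sketch with made-up sample values (not the post's actual data), using only the Python standard library:

```python
import statistics

# Hypothetical standardized series: sessions on CPU per 10-second sample.
cpu = [0, 0, 2, 4, 4, 6, 2, 0, 16, 2]

mean = statistics.mean(cpu)      # pulled down by idle periods, up by spikes
median = statistics.median(cpu)  # ignores how large the spike actually is
lo, hi = min(cpu), max(cpu)      # the extremes: under- and over-sizing risks
```

The title's point is visible even in this toy series: the mean (3.6) and median (2) both understate the 16-session spike, the minimum (0) is useless for sizing, and the maximum (16) would size for a moment that may never repeat.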


Sizing post #1: How to standardize ASH data for sizing

This is the first blog entry in a series of posts on the topic of “sizing.” The standardization of ASH data for sizing consists of aggregating the number of sessions on CPU, adding the 0’s (zeroes) to this aggregated data, and filling in sample times so we have equidistant 10-second sample times. The…
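The standardization steps the excerpt describes (aggregate sessions on CPU, fill gaps with zeroes, produce an equidistant 10-second grid) can be sketched as follows. The `standardize` helper and the sample data are hypothetical illustrations, not the post's actual queries:

```python
from datetime import datetime, timedelta

# Hypothetical aggregated ASH data: sample_time -> sessions on CPU.
# Times with no sessions on CPU are simply absent and must become 0.
ash = {
    datetime(2024, 1, 1, 10, 0, 0): 4,
    datetime(2024, 1, 1, 10, 0, 20): 2,   # 10:00:10 is missing
}

def standardize(samples, start, end, step_seconds=10):
    """Return (sample_time, cpu_sessions) pairs on an equidistant
    10-second grid, filling missing sample times with 0."""
    out = []
    t = start
    step = timedelta(seconds=step_seconds)
    while t <= end:
        out.append((t, samples.get(t, 0)))
        t += step
    return out

grid = standardize(ash,
                   datetime(2024, 1, 1, 10, 0, 0),
                   datetime(2024, 1, 1, 10, 0, 30))
```

Filling the gaps with zeroes matters because every later statistic (mean, median, percentiles) would be biased upward if idle sample times were silently dropped.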