Hello,

I'm trying to figure out how to calculate the average read/write latency experienced by a SQL Server instance during a specific time window, so that I can monitor this across multiple instances. From this MSDN blog post, I know that you have to take multiple samples and do some calculations to get the correct latency:

[url]http://blogs.msdn.com/b/psssql/archive/2013/09/23/interpreting-the-counter-values-from-sys-dm-os-performance-counters.aspx[/url]

However, the SQLServer:Resource Pool Stats object tracks these numbers per resource pool, and we want a single number for the whole server. Since there can be a different base value for each resource pool, you can't simply sum the numerator values together. Here's some sample data from a server that illustrates the problem:

[code="plain"]
object_name                    counter_name                 instance_name  cntr_value  cntr_type
SQLServer:Resource Pool Stats  Avg Disk Read IO (ms)        default        307318919   1073874176
SQLServer:Resource Pool Stats  Avg Disk Read IO (ms) Base   default        25546724    1073939712
SQLServer:Resource Pool Stats  Avg Disk Read IO (ms)        internal       2045730     1073874176
SQLServer:Resource Pool Stats  Avg Disk Read IO (ms) Base   internal       208270      1073939712
[/code]

I'm thinking I would need to do some sort of weighted average, but I'm not sure that will produce the correct value. Here's the formula I'm currently considering, where each pool name stands for that pool's calculated average (cntr_value / base):

((default * default[base]) + (internal * internal[base])) / (default[base] + internal[base])

Then, to do the calculation over time, I'd use the changes in the calculated numerator and denominator between samples to get the average for the window.

Does this sound like the correct way to get this value? Is there a good way to verify?
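As a quick sanity check of the arithmetic (not the SQL Server internals), here is a small Python sketch using the sample values from the post. It assumes, per the linked blog post, that for these counter types cntr_value is a cumulative latency total and the Base row is the cumulative I/O count, so the weighted average of the per-pool averages should algebraically equal the summed raw values divided by the summed bases:

```python
# Sample counter values from the post (cumulative since instance start).
pools = {
    "default":  {"value": 307318919, "base": 25546724},
    "internal": {"value": 2045730,   "base": 208270},
}

# Per-pool average read latency (ms) = cntr_value / base.
per_pool_avg = {name: p["value"] / p["base"] for name, p in pools.items()}

total_base = sum(p["base"] for p in pools.values())

# The weighted-average formula from the post: each pool's average,
# weighted by its base (its I/O count), divided by the summed bases.
weighted = sum(per_pool_avg[n] * pools[n]["base"] for n in pools) / total_base

# Algebraically equivalent form: sum the raw cntr_values, divide by the
# summed bases. If the two disagree, the formula is wrong.
direct = sum(p["value"] for p in pools.values()) / total_base

print(per_pool_avg)          # default ~12.0 ms, internal ~9.8 ms
print(weighted, direct)      # both ~12.0 ms, and equal to each other
```

Since `avg * base` just reconstructs the raw `cntr_value`, the two expressions collapse to the same number, which suggests the weighted average is at least internally consistent; repeating the same calculation on deltas between two samples would give the average for the window.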