I am writing a performance baseline test. The first test writes 5,000,000 rows into one table. I realise this is not representative OLTP behaviour, but it helped me start interpreting performance counters and to test several setups to be discussed with our server, storage and network administrators. This way we have been able to compare the results of different hard disks, LUN vs VMDK, 1Gb vs 10Gb network, AMD vs Intel, etc. It also lets me compare several SQL Server setups (recovery model, max memory configuration, ...).

The screenshot shows the results of 2 runs on the same server: Win2012R2, SQL2014, 16GB RAM.
- In test 1, min/max server memory was set to 9215MB/10751MB.
- In test 2, min/max server memory was set to 13311MB/14847MB.

[img]http://www.hoveseliga.be/images/perfstats.png[/img]

The script ensures the number of bytes inserted into the nvarchar columns is always the same. This explains why the number of pages and the number of MB in the table are the same at the end of the 2 tests (columns 5 and 6).

Since roughly 13GB has to be written, the results of test 1 show the lead time increasing once more than 10GB has been inserted (columns 8 and 9). In addition, you can see that at that moment:
- buffer cache hit ratio is decreasing
- page life expectancy becomes "terrible"
- free list stalls/sec increases
- lazy writes/sec increases
- read latency increases (write latency does not)

In test 2 (id 3 in column 1 of the screenshot) those counters are not really affected, since all 5,000,000 rows can be kept in memory.

Now what I do not understand is: why do the number of pages read (instance level), as well as the number of reads and the number of bytes read (database level), increase so dramatically during run 1? I expected to see a serious impact on write behaviour, since SQL Server is forced to start flushing dirty pages once memory fills up. You can indeed see the number of writes (not the number of bytes written) starting to increase faster in test 1 after 4,000,000 rows, but there is no real impact on write latency.

Finally I want to note:
- I'm the only user on this machine
- the table has a clustered index on an identity column
- there are no foreign key constraints
- inserts are executed in a loop, not as one big transaction
- to monitor progress and behaviour/impact, the counters are stored using DMV queries every 10,000 iterations

(Simplified sketches of the memory configuration, the insert loop and the counter snapshot follow below.)
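For reference, switching between the two memory configurations amounts to something like this (test 1 values shown; both settings are advanced options):

[code]
-- Test 1 configuration; test 2 used 13311 / 14847.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'min server memory (MB)', 9215;
EXEC sp_configure 'max server memory (MB)', 10751;
RECONFIGURE;
[/code]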
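The insert loop is conceptually along these lines. This is a simplified sketch, not my actual script: the table and proc names are placeholders, and I'm assuming a single payload column of about 1,300 nvarchar characters (≈2,600 bytes per row, which matches roughly 13GB over 5,000,000 rows):

[code]
-- Simplified sketch of the load script (names are placeholders).
-- Clustered index on the identity column, fixed-size nvarchar payload.
CREATE TABLE dbo.PerfBaseline
(
    id      INT IDENTITY(1,1) NOT NULL,
    payload NVARCHAR(1300)    NOT NULL,
    CONSTRAINT PK_PerfBaseline PRIMARY KEY CLUSTERED (id)
);

DECLARE @i INT = 1,
        @payload NVARCHAR(1300) = REPLICATE(N'x', 1300); -- constant byte count

WHILE @i <= 5000000
BEGIN
    -- Each insert runs in its own autocommit transaction,
    -- i.e. a loop rather than one big transaction.
    INSERT INTO dbo.PerfBaseline (payload) VALUES (@payload);

    IF @i % 10000 = 0
        EXEC dbo.CaptureCounters @rows = @i; -- hypothetical proc that
                                             -- snapshots the DMVs below
    SET @i += 1;
END;
[/code]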
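The counter snapshot itself is essentially two DMV queries like these (simplified; in the real script the results are stored in a logging table together with the current row count, and the per-second counters are cumulative in the DMV, so they have to be diffed between snapshots):

[code]
-- Instance-level buffer manager counters. Note: 'Buffer cache hit ratio'
-- must be divided by its 'base' counter to get the actual percentage.
SELECT RTRIM(counter_name) AS counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name IN (N'Buffer cache hit ratio', N'Buffer cache hit ratio base',
                       N'Page life expectancy', N'Free list stalls/sec',
                       N'Lazy writes/sec', N'Page reads/sec', N'Page writes/sec');

-- Database-level I/O: number of reads/writes, bytes, and the stalls
-- from which read/write latency is derived (stall ms / operations).
SELECT SUM(num_of_reads)         AS reads,
       SUM(num_of_bytes_read)    AS bytes_read,
       SUM(io_stall_read_ms)     AS read_stall_ms,
       SUM(num_of_writes)        AS writes,
       SUM(num_of_bytes_written) AS bytes_written,
       SUM(io_stall_write_ms)    AS write_stall_ms
FROM sys.dm_io_virtual_file_stats(DB_ID(), NULL);
[/code]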
So I wonder why SQL Server starts to execute so many reads in test 1. Any ideas?

Thanks in advance,
Kind regards,
Peter