In this topic http://www.sqlservercentral.com/Forums/Topic1576549-3411-1.aspx you can see I wrote some custom baseline performance tests. The first test just inserts 5,000,000 rows into one table using a loop, and every 10,000 rows I'm tracking elapsed time, page life expectancy, buffer cache hit ratio, number of IOs on the data and log files, etc.

Now I have added a transaction around the loop. The consequence is that the time to insert the 5,000,000 rows decreases by a factor of 5. I guess that's because writing to the data file is postponed until all 5,000,000 rows are written to the transaction log file. I also see the log file grows much larger than during the previous tests, in which I did not use one big transaction (actually I used 5,000,000 implicit transactions).

Now I wonder what happens internally. I insert about 13GB of data, but my OS has only 6GB of RAM and max memory is set to 5GB. I presume all rows are written to the log file and stored in memory as dirty pages. At a certain moment the buffer cache is full of dirty pages, but I guess SQL Server cannot write those pages to the data files since they are all part of a transaction that has not been committed yet. So my question is: what happens with the records still to be inserted once the buffer cache is filled with uncommitted dirty pages?

- Will SQL Server stop storing new records in RAM?
- Will it remove dirty pages from RAM to store the new dirty pages?
- Will it write the new records to tempdb?
- Will SQL Server use the OS paging file to store the data?
- Will SQL Server store the data in the data file with a flag that the pages are not committed yet?

I'm sure there is no real problem, since all 5,000,000 rows are in the transaction log file, which can be used to flush all data to the data file once the transaction is committed. But I wonder what SQL Server will be doing when the buffer cache is filled with dirty pages that have not been committed yet and therefore cannot be subject to the regular checkpoint/lazy writer mechanism.

Thx
Kind regards
Peter
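For reference, the two test variants look roughly like this (a sketch only; the table and column names are made up here, and the real test script with the counter tracking is in the linked topic):

```sql
-- Variant 1: 5,000,000 implicit transactions (one auto-commit per INSERT).
DECLARE @i INT = 1;
WHILE @i <= 5000000
BEGIN
    INSERT INTO dbo.TestTable (Id, Payload)
    VALUES (@i, REPLICATE('x', 2000));  -- padding to get a sizeable row
    SET @i += 1;
END;

-- Variant 2: one big explicit transaction around the same loop,
-- so no log record can be marked committed until the final COMMIT.
BEGIN TRANSACTION;
DECLARE @j INT = 1;
WHILE @j <= 5000000
BEGIN
    INSERT INTO dbo.TestTable (Id, Payload)
    VALUES (@j, REPLICATE('x', 2000));
    SET @j += 1;
END;
COMMIT TRANSACTION;
```

The counters mentioned (page life expectancy, buffer cache hit ratio) can be sampled between batches from `sys.dm_os_performance_counters`.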