Not too long ago we migrated some SQL Server 2008 databases to 2014, but the compatibility level remained set at 2008 (100). I recently changed those to 2014 (120), hoping to take advantage of the improved query optimizer, and since we were no longer going to use any deprecated features.

After the change, some processes randomly take much longer than they did prior to changing the compatibility level. It's not always the same process, and the large majority of processes seem unaffected. I should also point out that the runtimes of these processes were very consistent before the change. I wouldn't think anything of it if only the first run after the change took longer, but some ran normally for multiple days, took much longer one day, and then returned to normal the next.

My question is: were statistics wiped out upon changing the compatibility level? What else could the change of compatibility level have done that I'm not thinking of?

We currently check for index fragmentation every night and reorganize/rebuild accordingly, but we do not do anything with updating statistics (this might lead to another discussion) other than setting them to update automatically.
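For reference, this is roughly the sketch I've been using to check whether statistics were actually invalidated or are simply stale. It assumes `sys.dm_db_stats_properties` (available in 2014), and the column names are from that DMF:

```sql
-- Sketch: list last-updated time and modification count for each statistic
-- on user tables, oldest first. If the compatibility level change had wiped
-- statistics, last_updated would reset; a high modification_counter instead
-- suggests ordinary staleness.
SELECT
    OBJECT_NAME(s.object_id) AS table_name,
    s.name                   AS stats_name,
    sp.last_updated,
    sp.rows,
    sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
ORDER BY sp.last_updated;
```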