
Colleagues, based on your experience, what is the maximum page count of a table below which statistics on that table don't matter? We have several relatively small tables, around 5,000 8 KB pages each.

But! Statistics really matter on them. If the statistics are outdated, queries that hit these tables may run for hours (edit: queries with joins to these tables).

Please, share your experience.

George K
  • Are you trying to figure out the threshold size that a table needs to be where the optimizer decides it's no longer easier to simply scan it all into memory, or are you looking for other details? – John Eisbrener Oct 04 '17 at 14:25
  • Are you talking about when the optimiser will or will not perform full table scans (FTS), ignoring any indexes because it would be quicker to scan the entire table than to use an index? I'd be interested to learn if there is a general answer to this question; I suspect (as with so many things! :-) ) that the answer is: it depends! – Vérace Oct 04 '17 at 14:26
  • Actually, I am trying to figure out the threshold in the IndexOptimize stored procedure by Ola Hallengren. I already have it configured, but perhaps I should reconsider the value of @PageCountLevel – George K Oct 04 '17 at 14:28
  • Agree with all of the above and am also concerned/curious about your queries themselves if they take hours on tables that small. – LowlyDBA - John M Oct 04 '17 at 14:29
  • @LowlyDBA, in fact these small tables are in joins – George K Oct 04 '17 at 14:30
  • But is it a statistics problem or a bad execution plan problem? When you update stats on an index, the plans that reference that index get removed from the cache. Then the queries for those plans have to recompile, which gives them the opportunity to get a better plan. People often think outdated stats are the problem since updating stats fixed it, but it's often bad parameter sniffing. – Tara Kizer Oct 04 '17 at 14:47
  • @TaraKizer, it is a bad execution plan problem due to incorrect estimates. I am aware of parameter sniffing, and that is not the case here, since I have tried flushing the plan from the cache several times and it did not help. – George K Oct 05 '17 at 06:42

1 Answer


You can update statistics separately from index maintenance.

Pass 1: all statistics

@FragmentationLow = NULL, @FragmentationMedium = NULL, @FragmentationHigh = NULL,
@UpdateStatistics = 'ALL',
@OnlyModifiedStatistics = 'Y',

Pass 2: indexes

@UpdateStatistics = NULL, 
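
The two passes above can be sketched as full IndexOptimize calls. This is a minimal sketch: the @Databases value and the fragmentation-action lists in pass 2 are assumptions for illustration, not values taken from the answer; adjust them for your environment.

```sql
-- Pass 1: update statistics only, skipping all index maintenance.
-- Only statistics with modifications since the last update are touched.
EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',      -- assumption: all user databases
    @FragmentationLow = NULL,           -- no action at any fragmentation level
    @FragmentationMedium = NULL,
    @FragmentationHigh = NULL,
    @UpdateStatistics = 'ALL',          -- both index and column statistics
    @OnlyModifiedStatistics = 'Y';

-- Pass 2: index maintenance only, leaving statistics alone.
EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @UpdateStatistics = NULL;           -- do not touch statistics in this pass
```

Running statistics first means the rebuilds in pass 2 can only improve index statistics (a rebuild samples the whole table), never degrade them.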

This covers points addressed in this answer: Does a re-index update statistics?

However, this separation is a foible of mine, based on this article (and older ones):

Of course, the worst case scenario would be if you first rebuild your indexes, which also updates the index statistics by scanning the entire table, and later you run UPDATE STATISTICS using the default values, which again updates the index statistics but this time only with a default and smaller sample.
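
The anti-pattern described in that quote looks like this (table and index names are hypothetical, for illustration only):

```sql
-- Step 1: rebuild updates the index statistics with a full scan of the table.
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REBUILD;

-- Step 2: a later default UPDATE STATISTICS overwrites those fullscan
-- statistics with a smaller, sampled estimate -- a net loss in quality.
UPDATE STATISTICS dbo.Orders IX_Orders_CustomerID;
```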

According to Brent Ozar, Ola Hallengren's script is clever enough not to do this. But...

gbn
  • This is not really an answer to the first question, and your examples actually come from a script that does not come from Microsoft... – clifton_h Oct 05 '17 at 00:37
  • @clifton_h: the page limit affects whether indexes are examined and rebuilt etc. This affects index statistics. This affects the query plan. So, I demo how to manage statistics independently of indexes/page count. Where does the script come from? Does it matter? – gbn Oct 05 '17 at 06:53