SQL Server Fragmentation | Internal and External Fragmentation, Part 2

Neerajprasadsharma

This is the second part of the series on fragmentation in SQL Server,
and it focuses mainly on the read problems caused by fragmentation.

Introduction

Welcome to the second part of the series on fragmentation in SQL Server. In the first part we learned what fragmentation is, what a page split is and the problems it causes, the differences between internal and external fragmentation, and so on.
If you are new to fragmentation, we recommend reading the first article in the series. In this article we will continue to learn about fragmentation and its problems. Once again this article is demo oriented, so let's get started.
In the first article we covered the write problems caused by fragmentation, so let us start this one with the read-related problems.

Read Problems with External Fragmentation

Before we start the demo we need a fairly large fragmented table to see the performance difference. To do that we will create a new table with approximately 900 MB of data (if you have a much more powerful disk subsystem, create a table with more data). To fragment the table we will update its primary key column (the unique clustered index) so that fragmentation occurs, and because we need to compare the performance impact, we will also create an exact copy of the table and measure the difference.



Generate the sample tables:

BEGIN TRAN

-- Build a helper table holding the numbers 1 to 1,000,000
Select  top 1000000
row_number () over ( order by (Select null)) n into Numbers from
sys.all_columns  a cross  join  sys.all_columns b

CREATE TABLE [DBO].ReadProblemsWithFragmentation (
Primarykey int NOT NULL ,
SomeData3 char(1000) NOT NULL )

ALTER TABLE DBO.ReadProblemsWithFragmentation
ADD CONSTRAINT PK_ReadProblemsWithFragmentation PRIMARY KEY (Primarykey)

-- Insert only the even numbers (N/2 = N/2.0 is true only when N is even)
INSERT INTO [DBO].ReadProblemsWithFragmentation
SELECT n , 'Some text..'
FROM Numbers
Where N/2 = N/2.0

-- Move the upper half of the keys into the gaps between the existing keys;
-- updating the clustered key forces page splits and fragments the index
Update DBO.ReadProblemsWithFragmentation SET Primarykey =  Primarykey-500001
Where  Primarykey>500001




CREATE TABLE [DBO].CopyofReadProblemsWithFragmentation (
Primarykey int NOT NULL , 
SomeData3 char(1000) NOT NULL )



ALTER TABLE DBO.CopyofReadProblemsWithFragmentation
ADD CONSTRAINT PK_CopyofReadProblemsWithFragmentation PRIMARY KEY (Primarykey)

INSERT INTO DBO.CopyofReadProblemsWithFragmentation
SELECT * FROM DBO.ReadProblemsWithFragmentation

COMMIT


Now we have two tables: one is fragmented and the other is not. We can see how fragmented each table is using sys.dm_db_index_physical_stats.
See below:

SELECT OBJECT_NAME(OBJECT_ID) Table_Name, index_id,index_type_desc,index_level,
avg_fragmentation_in_percent fragmentation ,avg_page_space_used_in_percent Page_Density ,page_count,Record_count
FROM sys.dm_db_index_physical_stats 
(db_id('TutorialSQLServer'), object_id ('ReadProblemsWithFragmentation' ), null ,null,'DETAILED')
SELECT OBJECT_NAME(OBJECT_ID) Table_Name, index_id,index_type_desc,index_level,
avg_fragmentation_in_percent fragmentation ,avg_page_space_used_in_percent Page_Density ,page_count,Record_count
FROM sys.dm_db_index_physical_stats 
(db_id('TutorialSQLServer'), object_id ('copyOfReadProblemsWithFragmentation' ), null ,null,'DETAILED')





So yes, ReadProblemsWithFragmentation is heavily fragmented and CopyofReadProblemsWithFragmentation is not. To test the read latency caused by fragmentation we use simple queries, designed so that they use an index order scan and request the data from the storage engine in index order; you can verify this in the execution plan, where the data access strategy is an index seek with the Ordered property set to True.
Please note that this test uses the DROPCLEANBUFFERS command, which clears the entire buffer cache, so do not run it on a production server.


CHECKPOINT;--Write dirty pages to disk
DBCC DROPCLEANBUFFERS WITH NO_INFOMSGS; --Clear the buffer cache
GO
SET STATISTICS TIME ON;
Select COUNT_BIG (*) FROM ReadProblemsWithFragmentation  WHERE Primarykey>0 OPTION (MAXDOP 1)
SET STATISTICS TIME OFF;
GO
CHECKPOINT;--Write dirty pages to disk
DBCC DROPCLEANBUFFERS WITH NO_INFOMSGS; --Clear the buffer cache
GO
SET STATISTICS TIME ON;
Select COUNT_BIG (*) FROM CopyofReadProblemsWithFragmentation  WHERE Primarykey>0  OPTION (MAXDOP 1)
SET STATISTICS TIME OFF;
GO




You can see in the above GIF that both tables contain the same amount of data, but the query against CopyofReadProblemsWithFragmentation completed in 5,980 ms whereas ReadProblemsWithFragmentation took 13,051 ms. Keep in mind that we conducted this test on a fairly small table.
In the real world we can have very large fragmented tables, and in some scenarios we are required to issue large data scans, which can have a dramatic performance impact if the table is heavily fragmented.


Read Mileage of External Fragmentation with Storage Devices

Performance depends heavily on many factors, so when you run the demo you might see different results depending on the hardware you have. For example, I am conducting these tests on a traditional rotating hard disk subsystem, which is infamous for its random-read latency, whereas SSD, NVMe SSD, or Fusion-io devices are much improved over a traditional disk subsystem and perform far faster; on some media, random reads even perform better than sequential reads.
Your read mileage also depends on timing: even on an otherwise idle machine the results might not be as expected, because the operating system or some other background process may be running and affecting the overall performance metrics.
The laptop I am currently using has a traditional rotating hard disk. I captured its performance benchmark using CrystalDiskMark, and the metrics for my laptop are:



If you are interested in learning more about CrystalDiskMark, ask in the comments section, or read the quite nice article by Brent Ozar here.
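Back inside SQL Server, if you want a rough look at how your own storage is behaving for this instance rather than relying only on an external benchmark, the cumulative file-level I/O statistics in sys.dm_io_virtual_file_stats are a reasonable starting point. The sketch below is not part of the original demo and the column choices are mine:

-- Average read latency per database file, from cumulative file-level I/O statistics
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_bytes_read,
       CASE WHEN vfs.num_of_reads = 0 THEN 0
            ELSE vfs.io_stall_read_ms / vfs.num_of_reads END AS avg_read_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
     ON mf.database_id = vfs.database_id
    AND mf.file_id = vfs.file_id
ORDER BY avg_read_latency_ms DESC;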



The Query Optimizer Doesn't Consider External Fragmentation


In this section we will look at whether the Query Optimizer considers fragmentation when generating an execution plan. We will use both of the tables above, ReadProblemsWithFragmentation and CopyofReadProblemsWithFragmentation, and compare their execution plans. To see the execution plan you can use the shortcut Ctrl+M, right-click in the SSMS query window and choose "Include Actual Execution Plan", or select it from the toolbar and then run the query (the same behaviour can be seen in the estimated execution plan as well; a T-SQL alternative is sketched after the queries).
Look at the queries and execution plans below:

Select COUNT_BIG (*) FROM  ReadProblemsWithFragmentation   WHERE Primarykey >0

Select COUNT_BIG (*) FROM  CopyofReadProblemsWithFragmentation   WHERE Primarykey >0 
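If you prefer to capture the plan from T-SQL instead of the SSMS toolbar, SET STATISTICS XML returns the actual execution plan as an XML result alongside the query output. This is a general option, sketched here only as an alternative to the shortcut mentioned above:

SET STATISTICS XML ON;
Select COUNT_BIG (*) FROM ReadProblemsWithFragmentation WHERE Primarykey >0
SET STATISTICS XML OFF;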





You can see in the above execution plan that the first query costs 60 percent of the batch and the bottom query costs 40 percent (these costs are relative estimates and do not reflect the actual cost of the queries), purely because of the number of pages in each index. If you look at the execution plans more closely you will see that the top plan costs more mainly because of the Estimated I/O Cost of its Index Seek; otherwise, everything in the two plans is almost identical.
But that is because of the number of pages, not the fragmentation. To prove it we will fake the page count of the top table so that the Query Optimizer generates the execution plan based on those fake numbers.
Again, this is for testing purposes only and is not at all recommended on a production system.
Let us match the page counts of the two tables so we can see whether the Query Optimizer takes fragmentation into account in its plan costing.
The bottom table has 71,429 pages at the leaf level, so we will fake the page count of the top table to 71,429 using the undocumented UPDATE STATISTICS option (again, this is an undocumented command and not recommended on a production server; it is presented for educational purposes only).


UPDATE STATISTICS ReadProblemsWithFragmentation WITH PAGECOUNT = 71429 -- number of leaf/data-level pages in the copy table's index



Now check the execution plan again:

Select COUNT_BIG (*) FROM  ReadProblemsWithFragmentation   WHERE Primarykey>0

Select COUNT_BIG (*) FROM  CopyofReadProblemsWithFragmentation   WHERE Primarykey>0 




Look at the costs of the two queries: they are now identical, only because the Estimated I/O Cost of the index seek in the top query has dropped. But we should not forget that the top table is still highly fragmented and the Query Optimizer is not considering that fact.
So the question arises: shouldn't it consider fragmentation during plan generation?
Well, it doesn't really matter. Even if the Query Optimizer started considering fragmentation during plan generation, only the estimated query cost would increase; the plan quality would not improve, because once the data has been pulled from the storage subsystem, fragmentation has no further effect on query execution. Page count and row count, on the other hand, matter a lot.
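Before moving on you may want to put the real page count back. Rebuilding the index would do that, but it would also remove the external fragmentation we still need for the next demos; DBCC UPDATEUSAGE, which recounts page and row metadata from the actual allocation, should restore the values without touching the physical index. This is a sketch for test systems only and is not part of the original article:

-- Recount page and row metadata from the actual allocation (test systems only;
-- this should correct the values we faked above)
DBCC UPDATEUSAGE ('TutorialSQLServer', 'dbo.ReadProblemsWithFragmentation') WITH COUNT_ROWS;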


External Fragmentation Doesn't Matter

External fragmentation has no effect at all if all the data pages are in the buffer cache and the relational engine is pulling all the requested data from there, because that is how RAM (random access memory) works. If you are working in a pure OLAP environment where reads are the only concern, your organization has given you plenty of buffer cache, and most of the time your requested data stays in memory, then external fragmentation is not something to worry about much. Brent Ozar has written a full-length article on that; you can read it here.
Yes, it's true, but keep in mind that here we are only considering reads with external fragmentation.
So here we go. Let us reuse the last example, where the ReadProblemsWithFragmentation table took more than 13 seconds and the CopyofReadProblemsWithFragmentation table took approximately 6 seconds. This time we will pull all the data into the buffer cache by running the query a couple of times, and then measure how long each query takes:

SET STATISTICS TIME ON;
Select COUNT_BIG (*) FROM  CopyofReadProblemsWithFragmentation   WHERE Primarykey>0 option (maxdop 1)
GO
Select COUNT_BIG (*) FROM  ReadProblemsWithFragmentation   WHERE Primarykey>0  option (maxdop 1)
SET STATISTICS TIME OFF;
GO





You can see in the above image that the query which took 13 seconds when reading from the disk subsystem now takes only 177 ms when reading from the buffer pool, while the other table, which took about 6 seconds, now takes only 77 ms.
Once the requested data is in the cache, it doesn't matter whether it is fragmented or not.
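If you want to confirm that the pages of both tables really are sitting in the buffer pool before timing the queries, you can count the cached pages per table with sys.dm_os_buffer_descriptors. This check is not part of the original test (and it can be slow on servers with a large amount of memory):

-- Count buffer pool pages cached for each of the two test tables
SELECT OBJECT_NAME(p.object_id) AS table_name,
       COUNT(*) AS cached_pages
FROM sys.dm_os_buffer_descriptors AS bd
JOIN sys.allocation_units AS au
     ON au.allocation_unit_id = bd.allocation_unit_id
JOIN sys.partitions AS p
     ON p.hobt_id = au.container_id
WHERE bd.database_id = DB_ID()
  AND p.object_id IN (OBJECT_ID('dbo.ReadProblemsWithFragmentation'),
                      OBJECT_ID('dbo.CopyofReadProblemsWithFragmentation'))
GROUP BY p.object_id;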



Variable-Length Data Types and External Fragmentation

So far we have seen only one way to fragment a table, but you might be wondering: your production server has a static, ever-increasing clustered index/primary key, and yet you still find your tables fragmented, right?
The reason is that the Storage Engine does not allocate the full declared size for variable-length data types (which makes sense, since they are variable by nature). So if a data page is already full and has no free space, and a variable-length column is updated to a longer value, a page split is bound to happen and fragmentation occurs in the table.
Let us try to understand this with an example. We will use the table from the previous part of this series and check its external fragmentation level.
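If you no longer have the Fragmented table from the first article, the stand-in below reproduces the same situation: an ever-increasing clustered primary key plus a varchar(50) column that initially holds short values. This is an assumed schema for illustration, not the exact definition from Part 1:

-- Assumed stand-in for the "Fragmented" table from Part 1 (not the original definition)
CREATE TABLE dbo.Fragmented (
Primarykey int IDENTITY(1,1) NOT NULL CONSTRAINT PK_Fragmented PRIMARY KEY,
KeyCol varchar(50) NOT NULL,  -- variable-length column we will grow later
SomeData char(500) NOT NULL )

-- Fill the pages while KeyCol is still short
INSERT INTO dbo.Fragmented (KeyCol, SomeData)
SELECT TOP 100000 'A', 'Some text..'
FROM sys.all_columns a CROSS JOIN sys.all_columns b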

--BEFORE UPDATE
SELECT OBJECT_NAME(OBJECT_ID) Table_Name, index_id,index_type_desc,index_level,
avg_fragmentation_in_percent fragmentation ,avg_page_space_used_in_percent Page_Density ,page_count,Record_count
FROM sys.dm_db_index_physical_stats
(db_id('TutorialSQLServer'), object_id ('Fragmented' ), null ,null,'DETAILED')
GO





OK, so we can see in the above image that the clustered index is not externally fragmented and the page density is close to 100 percent, so this table is ideal for testing the scenario we discussed above. Let us update the table's KeyCol column, which is a varchar of size 50.

UPDATE Fragmented SET KeyCol = replicate ('A',50)
GO
The table has been updated, so according to the theory above it should now be externally fragmented. Let us again use the sys.dm_db_index_physical_stats DMV to find out the level of fragmentation.


--AFTER UPDATE
SELECT OBJECT_NAME(OBJECT_ID) Table_Name, index_id,index_type_desc,index_level,
avg_fragmentation_in_percent fragmentation ,avg_page_space_used_in_percent Page_Density ,page_count,Record_count
FROM sys.dm_db_index_physical_stats
(db_id('TutorialSQLServer'), object_id ('Fragmented' ), null ,null,'DETAILED')
GO




Whoa, look at the above image: the clustered index is now highly fragmented. So yes, variable-length columns play a major part in fragmenting an index, and usually you cannot avoid this situation, because you don't know in advance what length to provide and you cannot stop users from updating the column :)


Internal Fragmentation Matters a Lot


Whenever we talk about fragmentation we usually talk only about external fragmentation, and internal fragmentation is often overlooked. In this section we will show that internal fragmentation matters even more than external fragmentation in some situations. As we have already explained in this series, internal fragmentation is nothing more than page density, or page fullness.
If you think that 100 percent, or close to it, is always a good number for page density, think again, because it depends on your workload. If you are running a pure OLAP system then of course a density close to 100 percent is a good idea, since there is no point in keeping extra space in the pages of a read-intensive environment. In an OLTP environment it depends on the nature of the data and the transaction type: for some systems 95 percent page density is fine, while other environments accept page densities as low as 50 percent just to avoid page splits, and we have already seen in Part 1 how bad a page split can be.
We will again use our ReadProblemsWithFragmentation and CopyofReadProblemsWithFragmentation tables. We will reorganize ReadProblemsWithFragmentation, then free some page space by deleting data and see what happens. First, reorganize the index:

--BEGIN REORGANIZE the Index
ALTER INDEX ALL ON [DBO].ReadProblemsWithFragmentation REORGANIZE ;
--END REORGANIZE the Index


The index should no longer be externally fragmented, so let us look into sys.dm_db_index_physical_stats:

--Before Delete
SELECT OBJECT_NAME(OBJECT_ID) Table_Name, index_id,index_type_desc,index_level,
avg_fragmentation_in_percent fragmentation ,avg_page_space_used_in_percent Page_Density ,page_count,Record_count
FROM sys.dm_db_index_physical_stats
(db_id('TutorialSQLServer'), object_id ('ReadProblemsWithFragmentation' ), null ,null,'DETAILED')



We can see in the above result that the page density is 87.58 percent, so let us free some space by deleting every second key, half the rows in the table.

-- Primarykey/2.0 = Primarykey/2 is true only for even keys, so this deletes half the rows
DELETE ReadProblemsWithFragmentation
WHERE Primarykey/2.0 = Primarykey/2

--After Delete
SELECT OBJECT_NAME(OBJECT_ID) Table_Name, index_id,index_type_desc,index_level,
avg_fragmentation_in_percent fragmentation ,avg_page_space_used_in_percent Page_Density ,page_count,Record_count
FROM sys.dm_db_index_physical_stats
(db_id('tutorialsqlserver'), object_id ('ReadProblemsWithFragmentation' ), null ,null,'DETAILED')



Look at the result and compare the before and after: the page density has dropped a lot, the page count and fragmentation are the same as before, but the row count in the table is half of what it was.
Now let us quickly reuse the CopyofReadProblemsWithFragmentation table: truncate it, copy all the data from ReadProblemsWithFragmentation into it, and look at its fragmentation and page density.


BEGIN TRAN

TRUNCATE TABLE CopyofReadProblemsWithFragmentation

INSERT INTO CopyofReadProblemsWithFragmentation
SELECT * FROM ReadProblemsWithFragmentation

COMMIT


SELECT OBJECT_NAME(OBJECT_ID) Table_Name, index_id,index_type_desc,index_level,
avg_fragmentation_in_percent fragmentation ,avg_page_space_used_in_percent Page_Density ,page_count,Record_count
FROM sys.dm_db_index_physical_stats
(db_id('tutorialsqlserver'), object_id ('CopyofReadProblemsWithFragmentation' ), null ,null,'DETAILED')




Look at the above image: the CopyofReadProblemsWithFragmentation index now needs only 35,715 pages and its page density is approximately 87.58 percent, so it will obviously take roughly half the time to pull the data from this table, whether you are reading from disk or from the buffer cache.



A quick test is below:


CHECKPOINT;
DBCC DROPCLEANBUFFERS WITH NO_INFOMSGS;
GO
SET STATISTICS TIME ON;
SELECT COUNT_BIG (*) FROM  ReadProblemsWithFragmentation   WHERE Primarykey>0 OPTION (MAXDOP 1)
SET STATISTICS TIME OFF;
GO
CHECKPOINT;
DBCC DROPCLEANBUFFERS WITH NO_INFOMSGS;
GO
SET STATISTICS TIME ON;
SELECT  COUNT_BIG (*) FROM  CopyofReadProblemsWithFragmentation   WHERE Primarykey>0 OPTION (MAXDOP 1)
SET STATISTICS TIME OFF;
GO

PRINT 'BEGIN: Load into the Buffer Cache..'
SELECT COUNT_BIG (*) FROM  ReadProblemsWithFragmentation   WHERE Primarykey>0 OPTION (MAXDOP 1)
GO
SELECT  COUNT_BIG (*) FROM  CopyofReadProblemsWithFragmentation   WHERE Primarykey>0 OPTION (MAXDOP 1)
GO
PRINT 'END: Load into the Buffer Cache..'

print 'BEGIN: Pull Data from the buffer Cache'
GO
SET STATISTICS TIME ON;
SELECT COUNT_BIG (*) FROM  ReadProblemsWithFragmentation   WHERE Primarykey>0 OPTION (MAXDOP 1)
GO
SELECT  COUNT_BIG (*) FROM  CopyofReadProblemsWithFragmentation   WHERE Primarykey>0 OPTION (MAXDOP 1)
GO
print 'END: Pull Data from the buffer Cache'
SET STATISTICS TIME OFF






As expected, the ReadProblemsWithFragmentation table took almost double the time compared to CopyofReadProblemsWithFragmentation, simply because it is internally fragmented.
This is a really important factor and should be considered even in a pure OLAP environment: no matter what type of hardware you are using or how powerful your server's CPUs are, internal fragmentation will cost you, and you should account for it in your performance strategy.
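As a starting point for that kind of monitoring, you can sweep sys.dm_db_index_physical_stats in SAMPLED mode and look for large indexes with low page density. The query below is a sketch, and the thresholds are arbitrary examples, not recommendations from this article:

-- Indexes with low page density (internal fragmentation) in the current database
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.index_type_desc,
       ips.page_count,
       ips.avg_page_space_used_in_percent AS page_density,
       ips.avg_fragmentation_in_percent AS fragmentation
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ips
JOIN sys.indexes AS i
     ON i.object_id = ips.object_id
    AND i.index_id = ips.index_id
WHERE ips.page_count > 1000                   -- ignore tiny indexes
  AND ips.avg_page_space_used_in_percent < 75 -- low page density
ORDER BY ips.page_count DESC;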


SUMMARY

In this article we have seen how fragmentation (internal and external) can wreck performance. When people think about fragmentation they usually think only about external fragmentation, and newer, faster media types reduce, and sometimes even eliminate, the penalty of random I/O; but that does not mean you shouldn't worry about fragmentation at all. Page splits still happen and internal fragmentation still matters. To close this article, I would say: test and monitor your environment and then decide what is best for your database.
So far we have seen the problems caused by fragmentation; in the next part we will look at the possible solutions.


"Ever tried. Ever failed. No matter. Try Again. Fail again. Fail better" Samuel Beckett

About the Author

Full Name: Neeraj Prasad Sharma
Contact for Free SQL Server Performance Consulting and Training for you or your Organization.

Neeraj Prasad Sharma is a SQL Server developer who started his career as a .NET programmer. He loves the SQL Server query optimizer's ability to process queries optimally. For the last six years he has been experimenting with and testing the Query Optimizer's default behaviour, and if something goes wrong his goal is to identify the reason behind it and fix it. He writes technical articles here: https://www.sqlshack.com/author/neeraj/ https://www.codeproject.com/script/Articles/MemberArticles.aspx?amid=12524731 https://www.mssqltips.com/sqlserverauthor/243/neeraj-prasad-sharma-/
