Sunday, March 11, 2012

Defrag large production table

I currently have one table, highly utilized, and it is 80% fragmented. I
need to defrag the index, but due to log shipping, I am unable to defrag the
index without creating a 2 gig log. A log this size will hose the pipe
between our production and offsite sql server. Any ideas on how to defrag
without creating such a huge log?|||Hi
You are probably going to take this hit regardless. What you may want to try
is to stop the log shipping and re-synchronise afterwards. Alternatively, drop
all the indexes and re-create them one at a time, allowing the log to be
shipped in between (I would also make sure that all indexes are necessary
and test the system with alternative indexes).
What you would need to ascertain is why the index is so fragmented, and try
to rectify this.
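The drop-and-recreate approach might look something like this; a minimal sketch only, with the database, table, index, and backup path all illustrative:

```sql
-- Drop one index, let a log backup ship, then re-create it.
-- All names here are placeholders.
DROP INDEX Orders.IX_Orders_CustomerID
GO
-- Ship the (small) log generated so far before the next index.
BACKUP LOG MyDatabase TO DISK = 'E:\LogShip\MyDatabase_log.trn'
GO
-- Re-create with a fill factor that leaves room for new rows,
-- which should slow down future fragmentation.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON Orders (CustomerID)
    WITH FILLFACTOR = 80
GO
```

Repeating this per index keeps any single log backup close to the size of one index build.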
John
"Patrick" wrote:
> I currently have one table, highly utilized, and it is 80% fragmented. I
> need to defrag the index, but due to log shipping, I am unable to defrag the
> index without creating a 2 gig log. A log this size will hose the pipe
> between our production and offsite sql server. Any ideas on how to defrag
> without creating such a huge log?
>|||I forgot to add...
Have you considered partitioning this table?
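On SQL Server 2000 that would typically mean a partitioned view over member tables split on a range column; a very rough, illustrative sketch (all names and ranges hypothetical):

```sql
-- Member tables each hold one year of data; the CHECK constraints
-- let the optimizer skip partitions that cannot match a query.
CREATE TABLE Orders2004 (
    OrderID   int      NOT NULL,
    OrderDate datetime NOT NULL
        CHECK (OrderDate >= '20040101' AND OrderDate < '20050101'),
    PRIMARY KEY (OrderID, OrderDate)
)
CREATE TABLE Orders2005 (
    OrderID   int      NOT NULL,
    OrderDate datetime NOT NULL
        CHECK (OrderDate >= '20050101' AND OrderDate < '20060101'),
    PRIMARY KEY (OrderID, OrderDate)
)
GO
CREATE VIEW OrdersAll AS
    SELECT OrderID, OrderDate FROM Orders2004
    UNION ALL
    SELECT OrderID, OrderDate FROM Orders2005
GO
```

Each member table can then be reindexed on its own, so no single operation logs the whole table.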
John
"Patrick" wrote:
> I currently have one table, highly utilized, and it is 80% fragmented. I
> need to defrag the index, but due to log shipping, I am unable to defrag the
> index without creating a 2 gig log. A log this size will hose the pipe
> between our production and offsite sql server. Any ideas on how to defrag
> without creating such a huge log?
>|||Just started a new job and came across tables fragmented 80% or more;
however, they also have log shipping. Users are complaining about
performance, and I can see why, but resolving the fragmentation issue is more
of a task than first anticipated.
I will look into partitioning the table, but I want to see whether defragmenting some
of the tables helps before I start the partitioning project.
"John Bell" wrote:
> I forgot to add...
> Have you considered partitioning this table?
> John
> "Patrick" wrote:
> > I currently have one table, highly utilized, and it is 80% fragmented. I
> > need to defrag the index, but due to log shipping, I am unable to defrag the
> > index without creating a 2 gig log. A log this size will hose the pipe
> > between our production and offsite sql server. Any ideas on how to defrag
> > without creating such a huge log?
> >
> >|||Hi Patrick
It sounds like you certainly need to look at what the indexes are and if
they are really needed or if a better indexing scheme could be employed.
John
"Patrick" wrote:
> Just started a new job and came across tables fragmented 80% or more;
> however, they also have log shipping. Users are complaining about
> performance and I can see why, but resolving the fragmentation issue is more
> of a task than first anticipated.
> I will look into partitioning the table, but I want to see if I defrag some
> of the tables before I start the partitioning project.
> "John Bell" wrote:
> > I forgot to add...
> >
> > Have you considered partitioning this table?
> >
> > John
> >
> > "Patrick" wrote:
> >
> > > I currently have one table, highly utilized, and it is 80% fragmented. I
> > > need to defrag the index, but due to log shipping, I am unable to defrag the
> > > index without creating a 2 gig log. A log this size will hose the pipe
> > > between our production and offsite sql server. Any ideas on how to defrag
> > > without creating such a huge log?
> > >
> > >|||Will switching from Full to Bulk recovery during DEFRAG and then switching
back from Bulk to Full recovery break log shipping?
"John Bell" wrote:
> Hi Patrick
> It sounds like you certainly need to look at what the indexes are and if
> they are really needed or if a better indexing scheme could be employed.
> John
> "Patrick" wrote:
> > Just started a new job and came accross tables fragmented 80% or more;
> > however, they also have log shipping. Users are complaining about
> > performance and I can see why, but resolving the fragmentation issue is more
> > of a task than first anticipated.
> >
> > I will look into partitioning the table, but I want to see if I defrag some
> > of the tables before I start the partitioning project.
> >
> > "John Bell" wrote:
> >
> > > I forgot to add...
> > >
> > > Have you considered partitioning this table?
> > >
> > > John
> > >
> > > "Patrick" wrote:
> > >
> > > > I currently have one table, highly utilized, and it is 80% fragmented. I
> > > > need to defrag the index, but due to log shipping, I am unable to defrag the
> > > > index without creating a 2 gig log. A log this size will hose the pipe
> > > > between our production and offsite sql server. Any ideas on how to defrag
> > > > without creating such a huge log?
> > > >
> > > >|||Hi Patrick
You can use log shipping on a BULK_LOGGED database, but I am not sure if you
can switch half way through.
Have you tried decreasing the log shipping interval (making it more frequent)?
How easy would it be to switch off log shipping during this interval and
restore it afterwards?
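For reference, the recovery-model switch itself is straightforward (database name and backup path are illustrative; test the effect on your log shipping jobs on a non-production copy first):

```sql
-- Switch to Bulk-Logged before the index maintenance...
ALTER DATABASE MyDatabase SET RECOVERY BULK_LOGGED
GO
-- ... run DBCC DBREINDEX / index rebuilds here ...
-- ... then switch back, and back up the log straight away so the
-- log chain remains intact.
ALTER DATABASE MyDatabase SET RECOVERY FULL
GO
BACKUP LOG MyDatabase TO DISK = 'E:\LogShip\MyDatabase_log.trn'
GO
```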
John
"Patrick" wrote:
> Will switching from Full to Bulk recovery during DEFRAG and then switching
> back from Bulk to Full recovery break log shipping?
> "John Bell" wrote:
> > Hi Patrick
> >
> > It sounds like you certainly need to look at what the indexes are and if
> > they are really needed or if a better indexing scheme could be employed.
> >
> > John
> >
> > "Patrick" wrote:
> >
> > > Just started a new job and came accross tables fragmented 80% or more;
> > > however, they also have log shipping. Users are complaining about
> > > performance and I can see why, but resolving the fragmentation issue is more
> > > of a task than first anticipated.
> > >
> > > I will look into partitioning the table, but I want to see if I defrag some
> > > of the tables before I start the partitioning project.
> > >
> > > "John Bell" wrote:
> > >
> > > > I forgot to add...
> > > >
> > > > Have you considered partitioning this table?
> > > >
> > > > John
> > > >
> > > > "Patrick" wrote:
> > > >
> > > > > I currently have one table, highly utilized, and it is 80% fragmented. I
> > > > > need to defrag the index, but due to log shipping, I am unable to defrag the
> > > > > index without creating a 2 gig log. A log this size will hose the pipe
> > > > > between our production and offsite sql server. Any ideas on how to defrag
> > > > > without creating such a huge log?
> > > > >
> > > > >|||That won't help you at all. Creating an index in Bulk Logged mode will send
less data to the log. But the log backup will be just as large as before
since it copies the extents that were modified in a Bulk Logged operation to
the log backup file. IndexDefrag has the potential to send even more data
to the log file than a DBREINDEX. This is especially true if the file is
heavily fragmented. The one advantage is that you can better control when
this happens: you can start INDEXDEFRAG and stop it shortly afterwards,
issue a log backup, then restart it, stop it, and so on until it is complete. You
can't get away from logging the data, but you do have some options to deal
with it. Ultimately, if this reindexing is a problem, then John's suggestions
of partitioning and proper fill factors are something you may want to look
into.
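The start/stop pattern could be scripted roughly like this (names and paths are placeholders; in practice you cancel the INDEXDEFRAG batch manually, and the work it has already completed is retained):

```sql
-- One bounded pass: defragment a single index, then immediately
-- back up the log so only a small amount ships at a time.
DBCC INDEXDEFRAG (MyDatabase, 'Orders', 'IX_Orders_CustomerID')
GO
BACKUP LOG MyDatabase TO DISK = 'E:\LogShip\MyDatabase_log.trn'
GO
-- Repeat for the next index (or re-run on the same index after
-- cancelling it part-way), taking a log backup between passes.
```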
--
Andrew J. Kelly SQL MVP
"Patrick" <Patrick@.discussions.microsoft.com> wrote in message
news:D5B901A7-135E-40D3-AAFF-B9CC30D533E6@.microsoft.com...
> Will switching from Full to Bulk recovery during DEFRAG and then switching
> back from Bulk to Full recovery break log shipping?
> "John Bell" wrote:
>> Hi Patrick
>> It sounds like you certainly need to look at what the indexes are and if
>> they are really needed or if a better indexing scheme could be employed.
>> John
>> "Patrick" wrote:
>> > Just started a new job and came accross tables fragmented 80% or more;
>> > however, they also have log shipping. Users are complaining about
>> > performance and I can see why, but resolving the fragmentation issue is
>> > more
>> > of a task than first anticipated.
>> >
>> > I will look into partitioning the table, but I want to see if I defrag
>> > some
>> > of the tables before I start the partitioning project.
>> >
>> > "John Bell" wrote:
>> >
>> > > I forgot to add...
>> > >
>> > > Have you considered partitioning this table?
>> > >
>> > > John
>> > >
>> > > "Patrick" wrote:
>> > >
>> > > > I currently have one table, highly utilized, and it is 80%
>> > > > fragmented. I
>> > > > need to defrag the index, but due to log shipping, I am unable to
>> > > > defrag the
>> > > > index without creating a 2 gig log. A log this size will hose the
>> > > > pipe
>> > > > between our production and offsite sql server. Any ideas on how to
>> > > > defrag
>> > > > without creating such a huge log?
>> > > >
>> > > >|||Hi
A different approach to the stop/backup/restart method might be to use DBCC
SHOWCONTIG and DBCC INDEXDEFRAG with decreasing fragmentation thresholds,
thereby limiting the number of indexes defragmented per run. Once they are all
completed, do a final DBCC DBREINDEX.
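The fragmentation figures can be captured with the temp-table pattern from the SQL Server 2000 Books Online DBCC SHOWCONTIG example (column list per that example; the threshold is illustrative):

```sql
-- Collect fragmentation data for every index, then pick the worst.
CREATE TABLE #frag (
    ObjectName sysname, ObjectId int, IndexName sysname NULL,
    IndexId int, Lvl int, CountPages int, CountRows int,
    MinRecSize int, MaxRecSize int, AvgRecSize int,
    ForRecCount int, Extents int, ExtentSwitches int,
    AvgFreeBytes int, AvgPageDensity int,
    ScanDensity decimal, BestCount int, ActualCount int,
    LogicalFrag decimal, ExtentFrag decimal
)
INSERT #frag
    EXEC ('DBCC SHOWCONTIG WITH TABLERESULTS, ALL_INDEXES')
-- First pass: anything over 60% logically fragmented; lower the
-- threshold on each subsequent pass.
SELECT ObjectName, IndexName, LogicalFrag
FROM   #frag
WHERE  LogicalFrag > 60
ORDER BY LogicalFrag DESC
```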
John
"Patrick" wrote:
> Will switching from Full to Bulk recovery during DEFRAG and then switching
> back from Bulk to Full recovery break log shipping?
> "John Bell" wrote:
> > Hi Patrick
> >
> > It sounds like you certainly need to look at what the indexes are and if
> > they are really needed or if a better indexing scheme could be employed.
> >
> > John
> >
> > "Patrick" wrote:
> >
> > > Just started a new job and came accross tables fragmented 80% or more;
> > > however, they also have log shipping. Users are complaining about
> > > performance and I can see why, but resolving the fragmentation issue is more
> > > of a task than first anticipated.
> > >
> > > I will look into partitioning the table, but I want to see if I defrag some
> > > of the tables before I start the partitioning project.
> > >
> > > "John Bell" wrote:
> > >
> > > > I forgot to add...
> > > >
> > > > Have you considered partitioning this table?
> > > >
> > > > John
> > > >
> > > > "Patrick" wrote:
> > > >
> > > > > I currently have one table, highly utilized, and it is 80% fragmented. I
> > > > > need to defrag the index, but due to log shipping, I am unable to defrag the
> > > > > index without creating a 2 gig log. A log this size will hose the pipe
> > > > > between our production and offsite sql server. Any ideas on how to defrag
> > > > > without creating such a huge log?
> > > > >
> > > > >|||John,
Can you explain what you mean by (or want to accomplish with) a final
DBREINDEX? I am confused by the statement.
--
Andrew J. Kelly SQL MVP
"John Bell" <jbellnewsposts@.hotmail.com> wrote in message
news:9E899144-8414-435B-B6E9-374365B6015A@.microsoft.com...
> Hi
> A different approach to the stop/backup/restart method might be to use
> DBCC
> SHOWCONTIG and DBCC INDEXDEFRAG with decreasing levels of fragmentation,
> therefore limiting the number of indexes defragmented per go. Once all
> completed then do a final DBCC DBREINDEX.
> John
> "Patrick" wrote:
>> Will switching from Full to Bulk recovery during DEFRAG and then
>> switching
>> back from Bulk to Full recovery break log shipping?
>> "John Bell" wrote:
>> > Hi Patrick
>> >
>> > It sounds like you certainly need to look at what the indexes are and
>> > if
>> > they are really needed or if a better indexing scheme could be
>> > employed.
>> >
>> > John
>> >
>> > "Patrick" wrote:
>> >
>> > > Just started a new job and came accross tables fragmented 80% or
>> > > more;
>> > > however, they also have log shipping. Users are complaining about
>> > > performance and I can see why, but resolving the fragmentation issue
>> > > is more
>> > > of a task than first anticipated.
>> > >
>> > > I will look into partitioning the table, but I want to see if I
>> > > defrag some
>> > > of the tables before I start the partitioning project.
>> > >
>> > > "John Bell" wrote:
>> > >
>> > > > I forgot to add...
>> > > >
>> > > > Have you considered partitioning this table?
>> > > >
>> > > > John
>> > > >
>> > > > "Patrick" wrote:
>> > > >
>> > > > > I currently have one table, highly utilized, and it is 80%
>> > > > > fragmented. I
>> > > > > need to defrag the index, but due to log shipping, I am unable to
>> > > > > defrag the
>> > > > > index without creating a 2 gig log. A log this size will hose
>> > > > > the pipe
>> > > > > between our production and offsite sql server. Any ideas on how
>> > > > > to defrag
>> > > > > without creating such a huge log?
>> > > > >
>> > > > >|||Hi Andrew
The final DBREINDEX would be to defragment the root and intermediate-level
pages; hopefully, if the leaf-level pages were already defragmented, less
work would be required to rebuild them. It is a theory that would need to
be tested. Certainly, if the root/intermediate nodes were very fragmented
there may be less gain from the INDEXDEFRAGs; there may be a point where you
don't do the INDEXDEFRAGs at all, if DBCC SHOWCONTIG shows the non-leaf levels
to be very fragmented.
John
"Andrew J. Kelly" wrote:
> John,
> Can you explain what you mean by (or want to accomplish) by doing a final
> DBREINDEX? I am confused by the statement.
> --
> Andrew J. Kelly SQL MVP
|||OK I see what you were after now. But unfortunately it would not matter too
much in his case if the table was heavily or lightly fragmented. When you
issue a DBREINDEX it will completely rebuild the indexes in a new section of
the data files. This will always result in basically the same number of
pages or extents being logged, or needing to be backed up in the case of the
Bulk-Logged recovery model. It is true that the work itself may be a little
easier, since there is less fragmentation to sort through for the rebuild.
But as far as log shipping goes it would not matter either way. Actually
that way would be worse, since you would log it twice: once for INDEXDEFRAG
and once for DBREINDEX.
--
Andrew J. Kelly SQL MVP
"John Bell" <jbellnewsposts@.hotmail.com> wrote in message
news:082B1590-3F81-4D82-8AD3-F008C974EC4B@.microsoft.com...
> Hi Andrew
> The final dbreindex would be to defragment root and intermediate level
> pages, and hopefully if the leaf level pages were already defragmented,
> less
> work would be required to defragment them. It is a theory that would need
> to
> be tested, certainly if the root/intermediate nodes were very fragmented
> there may be less gain from the indexdefrags, there may be a point where
> you
> don't do the indexdefrags if DBCC SHOWCONTIG shows non-leaf levels to be
> very
> fragmented.
> John
>
> "Andrew J. Kelly" wrote:
>> John,
>> Can you explain what you mean by (or want to accomplish) by doing a final
>> DBREINDEX? I am confused by the statement.
>> --
>> Andrew J. Kelly SQL MVP
>>
>> "John Bell" <jbellnewsposts@.hotmail.com> wrote in message
>> news:9E899144-8414-435B-B6E9-374365B6015A@.microsoft.com...
>> > Hi
>> >
>> > A different approach to the stop/backup/restart method might be to use
>> > DBCC
>> > SHOWCONTIG and DBCC INDEXDEFRAG with decreasing levels of
>> > fragmentation,
>> > therefore limiting the number of indexes defragmented per go. Once all
>> > completed then do a final DBCC DBREINDEX.
>> >
>> > John
>> >
>> > "Patrick" wrote:
>> >
>> >> Will switching from Full to Bulk recovery during DEFRAG and then
>> >> switching
>> >> back from Bulk to Full recovery break log shipping?
>> >>
>> >> "John Bell" wrote:
>> >>
>> >> > Hi Patrick
>> >> >
>> >> > It sounds like you certainly need to look at what the indexes are
>> >> > and
>> >> > if
>> >> > they are really needed or if a better indexing scheme could be
>> >> > employed.
>> >> >
>> >> > John
>> >> >
>> >> > "Patrick" wrote:
>> >> >
>> >> > > Just started a new job and came accross tables fragmented 80% or
>> >> > > more;
>> >> > > however, they also have log shipping. Users are complaining about
>> >> > > performance and I can see why, but resolving the fragmentation
>> >> > > issue
>> >> > > is more
>> >> > > of a task than first anticipated.
>> >> > >
>> >> > > I will look into partitioning the table, but I want to see if I
>> >> > > defrag some
>> >> > > of the tables before I start the partitioning project.
>> >> > >
>> >> > > "John Bell" wrote:
>> >> > >
>> >> > > > I forgot to add...
>> >> > > >
>> >> > > > Have you considered partitioning this table?
>> >> > > >
>> >> > > > John
>> >> > > >
>> >> > > > "Patrick" wrote:
>> >> > > >
>> >> > > > > I currently have one table, highly utilized, and it is 80%
>> >> > > > > fragmented. I
>> >> > > > > need to defrag the index, but due to log shipping, I am unable
>> >> > > > > to
>> >> > > > > defrag the
>> >> > > > > index without creating a 2 gig log. A log this size will hose
>> >> > > > > the pipe
>> >> > > > > between our production and offsite sql server. Any ideas on
>> >> > > > > how
>> >> > > > > to defrag
>> >> > > > > without creating such a huge log?
>> >> > > > >
>> >> > > > >
>>|||Hi Andrew
Thinking of it that way, you are right, the last DBREINDEX would increase
the logging. I would still favour a more controlled defragmentation over one
that entailed killing off the reindexing!
John
"Andrew J. Kelly" wrote:
> OK I see what you were after now. But unfortunately it would not matter too
> much in his case if the table was heavily or lightly fragmented. When you
> issue a DBREINDEX it will completely rebuild the indexes in a new section of
> the data files. This will always result in basically the same amount of
> pages or extents being logged or needed to be backed up in the case of
> BulkLogged recovery model. It is true that the work itself may be a little
> easier to sort the data for the rebuild when there is less fragmentation.
> But as far as log shipping goes it would not matter either way. Actually
> that way would be worse since you would log it twice. Once for Indexdefrag
> and once for DBreindex.
> --
> Andrew J. Kelly SQL MVP
|||"John Bell" <jbellnewsposts@.hotmail.com> wrote in message
news:8B4E0C43-B5BE-4631-AE23-0AF894DCC3F6@.microsoft.com...
> Hi Andrew
> Thinking of it that way, you are right, the last DBREINDEX would increase
> the logging. I would still favour a more controlled defragmentation over one
> that entailed killing off the reindexing!
> John
Can you explain what you mean by "controlled defragmentation?" I know you
can use dbcc showcontig results in logic to determine whether or not to
defrag (i.e., "if more than 20% fragmented, then defrag"). And I know you
can run dbcc indexdefrag for a while and stop it at any point. But I was
unaware that you could control the amount of defragmentation dbcc
indexdefrag does.
Thanks
Karen|||Hi Karen
You have described what I would call a controlled defragmentation: only
defrag the tables/indexes that need it, choosing whether to do a DBCC
DBREINDEX or a DBCC INDEXDEFRAG as appropriate, rather than having (say) a
timer that kills the process part way through.
Without partitioning, you may have to resort to killing off the reindex
process.
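That decision logic might be sketched as follows, assuming fragmentation figures have already been collected into a #frag temp table (columns ObjectName, IndexName, LogicalFrag, as in the Books Online DBCC SHOWCONTIG example); the thresholds and names are illustrative:

```sql
-- Moderately fragmented -> online DBCC INDEXDEFRAG;
-- heavily fragmented    -> full DBCC DBREINDEX.
DECLARE @table sysname, @index sysname, @frag decimal, @cmd varchar(500)
DECLARE c CURSOR FOR
    SELECT ObjectName, IndexName, LogicalFrag
    FROM   #frag
    WHERE  LogicalFrag > 20 AND IndexName IS NOT NULL
OPEN c
FETCH NEXT FROM c INTO @table, @index, @frag
WHILE @@FETCH_STATUS = 0
BEGIN
    IF @frag < 50
        SELECT @cmd = 'DBCC INDEXDEFRAG (0, ''' + @table + ''', ''' + @index + ''')'
    ELSE
        SELECT @cmd = 'DBCC DBREINDEX (''' + @table + ''', ''' + @index + ''')'
    EXEC (@cmd)
    -- Keep each shipped log small.
    BACKUP LOG MyDatabase TO DISK = 'E:\LogShip\MyDatabase_log.trn'
    FETCH NEXT FROM c INTO @table, @index, @frag
END
CLOSE c
DEALLOCATE c
```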
John
"Karen Collins" wrote:
> "John Bell" <jbellnewsposts@.hotmail.com> wrote in message
> news:8B4E0C43-B5BE-4631-AE23-0AF894DCC3F6@.microsoft.com...
> > Hi Andrew
> >
> > Thinking of it that way, you are right, the last DBREINDEX would increase
> > the logging. I would still favour a more controlled defragmentation than
> > one
> > that would entailed killed of the reindexing!
> >
> > John
> Can you explain what you mean by "controlled defragmentation?" I know you
> can use dbcc showcontig results in logic to determine whether or not to
> defrag (i.e., "if more than 20% fragmented, then defrag"). And I know you
> can run dbcc indexdefrag for a while and stop it at any point. But I was
> unaware that you could control the amount of defragmentation dbcc
> indexdefrag does.
> Thanks
> Karen