I currently have one table, highly utilized, and it is 80% fragmented. I
need to defrag the index, but due to log shipping, I am unable to defrag the
index without creating a 2 gig log. A log this size will hose the pipe
between our production and offsite sql server. Any ideas on how to defrag
without creating such a huge log?|||Hi
You are probably going to take this hit regardless. What you may want to try
is to stop the log shipping and re-synchronise afterwards. Alternatively, drop
all the indexes and re-create them one at a time, allowing the log to be
shipped in between (I would also make sure that all the indexes are necessary
and test the system with alternative indexes).
What you would also need to ascertain is why the index is so fragmented and
try to rectify that.
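As a rough sketch of the drop-and-recreate approach (the table, index and
column names below are only placeholders, and the fill factor is just an
illustrative value):

DROP INDEX MyTable.IX_MyTable_Col1
CREATE INDEX IX_MyTable_Col1 ON MyTable (Col1) WITH FILLFACTOR = 90
-- let the next log backup / copy / restore cycle run before the next index
DROP INDEX MyTable.IX_MyTable_Col2
CREATE INDEX IX_MyTable_Col2 ON MyTable (Col2) WITH FILLFACTOR = 90

Each CREATE INDEX is still fully logged, but doing one index per log shipping
cycle keeps any single shipped log down to the size of one index rather than
the whole lot.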
John
"Patrick" wrote:
> I currently have one table, highly utilized, and it is 80% fragmented. I
> need to defrag the index, but due to log shipping, I am unable to defrag the
> index without creating a 2 gig log. A log this size will hose the pipe
> between our production and offsite sql server. Any ideas on how to defrag
> without creating such a huge log?
>|||I forgot to add...
Have you considered partitioning this table?
John
"Patrick" wrote:
> I currently have one table, highly utilized, and it is 80% fragmented. I
> need to defrag the index, but due to log shipping, I am unable to defrag the
> index without creating a 2 gig log. A log this size will hose the pipe
> between our production and offsite sql server. Any ideas on how to defrag
> without creating such a huge log?
Just started a new job and came across tables fragmented 80% or more;
however, log shipping is also in place. Users are complaining about
performance and I can see why, but resolving the fragmentation issue is more
of a task than first anticipated.
I will look into partitioning the table, but I want to see what defragging
some of the tables achieves before I start the partitioning project.
"John Bell" wrote:
> I forgot to add...
> Have you considered partitioning this table?
> John
> "Patrick" wrote:
>|||Hi Patrick
It sounds like you certainly need to look at what the indexes are and if
they are really needed or if a better indexing scheme could be employed.
John
"Patrick" wrote:
> Just started a new job and came across tables fragmented 80% or more;
> however, log shipping is also in place. Users are complaining about
> performance and I can see why, but resolving the fragmentation issue is more
> of a task than first anticipated.
> I will look into partitioning the table, but I want to see what defragging
> some of the tables achieves before I start the partitioning project.
> "John Bell" wrote:
>|||Will switching from Full to Bulk recovery during DEFRAG and then switching
back from Bulk to Full recovery break log shipping?
"John Bell" wrote:
> Hi Patrick
> It sounds like you certainly need to look at what the indexes are and if
> they are really needed or if a better indexing scheme could be employed.
> John
> "Patrick" wrote:
>|||Hi Patrick
You can use log shipping on a BULK_LOGGED database, but I am not sure if you
can switch halfway through.
Have you tried decreasing the log shipping interval (making it more frequent)?
How easy would it be to switch off log shipping during this period and
restore afterwards?
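Purely for reference, the switch itself is straightforward (the database name
and backup path are placeholders); whether the log shipping jobs tolerate the
round trip is what you would need to test:

ALTER DATABASE MyDatabase SET RECOVERY BULK_LOGGED
-- run the DBCC DBREINDEX / INDEXDEFRAG here
ALTER DATABASE MyDatabase SET RECOVERY FULL
BACKUP LOG MyDatabase TO DISK = 'E:\LogShip\MyDatabase_log.trn'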
John
"Patrick" wrote:
> Will switching from Full to Bulk recovery during DEFRAG and then switching
> back from Bulk to Full recovery break log shipping?
> "John Bell" wrote:
>|||That won't help you at all. Creating an index in Bulk-Logged mode will send
less data to the log, but the log backup will be just as large as before,
since it copies the extents that were modified by a Bulk-Logged operation into
the log backup file. INDEXDEFRAG has the potential to send even more data to
the log file than a DBREINDEX, especially if the index is heavily fragmented.
The advantage is that you can better control when this happens: you can start
INDEXDEFRAG and stop it shortly afterwards, issue a log backup, then restart
it, stop it again, and so on until it is complete. You can't get away from
logging the data, but you do have some options for dealing with it.
Ultimately, if this reindexing is a problem, then John's suggestions of
partitioning and proper fill factors are something you may want to look into.
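A rough sketch of that stop/backup/restart cycle (the database, table and
index names are placeholders; the log backup should be whichever one already
feeds your log shipping chain):

DBCC INDEXDEFRAG (MyDatabase, MyTable, IX_MyTable_Big)
-- cancel it after a while; the pages it has already moved stay defragmented
BACKUP LOG MyDatabase TO DISK = 'E:\LogShip\MyDatabase_log.trn'
-- (or simply let the scheduled log shipping backup job run)
DBCC INDEXDEFRAG (MyDatabase, MyTable, IX_MyTable_Big)
-- repeat until DBCC SHOWCONTIG reports acceptable fragmentation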
Andrew J. Kelly SQL MVP
"Patrick" <Patrick@.discussions.microsoft.com> wrote in message
news:D5B901A7-135E-40D3-AAFF-B9CC30D533E6@.microsoft.com...
> Will switching from Full to Bulk recovery during DEFRAG and then switching
> back from Bulk to Full recovery break log shipping?
> "John Bell" wrote:
>|||Hi
A different approach to the stop/backup/restart method might be to use DBCC
SHOWCONTIG and DBCC INDEXDEFRAG with decreasing fragmentation thresholds,
thereby limiting the number of indexes defragmented per pass. Once they are
all done, finish with a final DBCC DBREINDEX.
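For example, something along these lines (the database, table and index names
are placeholders, and the 80% threshold is just a starting point):

DBCC SHOWCONTIG ('MyTable') WITH ALL_INDEXES, TABLERESULTS
-- note the Logical Scan Fragmentation for each index, defrag only the worst
-- ones (say above 80%), let the log ship, then repeat with a lower threshold
DBCC INDEXDEFRAG (MyDatabase, MyTable, IX_MyTable_Worst)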
John
"Patrick" wrote:
> Will switching from Full to Bulk recovery during DEFRAG and then switching
> back from Bulk to Full recovery break log shipping?
> "John Bell" wrote:
>|||John,
Can you explain what you mean by (or want to accomplish with) a final
DBREINDEX? I am confused by the statement.
Andrew J. Kelly SQL MVP
"John Bell" <jbellnewsposts@.hotmail.com> wrote in message
news:9E899144-8414-435B-B6E9-374365B6015A@.microsoft.com...
> Hi
> A different approach to the stop/backup/restart method might be to use DBCC
> SHOWCONTIG and DBCC INDEXDEFRAG with decreasing fragmentation thresholds,
> thereby limiting the number of indexes defragmented per pass. Once they are
> all done, finish with a final DBCC DBREINDEX.
> John
> "Patrick" wrote:
>