Is there any easy way to defrag a table with no clustered idx other than bcp out then back in? (SQL 2000)
Actually I have a 100GB db from which I deleted about 10GB of data in other tables, but access to the table in question (38GB) seems to have slowed down dramatically. Or does the entire disk need to be defragmented, making a bcp out/in of this table a waste of time?
I run an index defrag every night, reindex weekly.
Thanks.|||Do you run
DBCC INDEXDEFRAG
??
If so what messages do you get?|||I run it every night on all indexes, via a step in an agent job, with no reported errors, so I assumed it defragged all indexes successfully.
I have sprocs that run the defrag and reindex jobs.
they are the isp_DBCC_DBREINDEX and isp_DBCC_INDEXDEFRAG sprocs that are available pretty much everywhere.|||The Disk Defragmenter shows a lot of fragmented files on the drive that contains the db mdf files, so I was assuming an O/S-level defragment must be performed to resolve this issue. Am I correct in assuming so?|||Totally correct dude. You need to defrag your disk drive FIRST (very important), THEN defrag your databases. If you don't do both, AND do them in that order, you can suffer badly degraded performance: the indexes get rebuilt in order, then scattered across the disk by the disk defrag, effectively randomising their physical order on the disk surface, which can massively increase track-to-track seek times, the enemy of every DBA.
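(For reference, those isp_ wrapper sprocs mostly just loop over the two stock DBCC commands. Stripped right down, the nightly/weekly pair looks something like this - the database, table and index names here are made up for illustration:)

```sql
-- Nightly (stays online, low impact): defrag one index at a time.
-- SQL Server 2000 syntax: DBCC INDEXDEFRAG (database, table, index).
DBCC INDEXDEFRAG (MyDatabase, 'dbo.MyBigTable', 'IX_MyBigTable_TradeDate');

-- Weekly (takes locks, needs a maintenance window): full rebuild.
-- '' = rebuild ALL indexes on the table; 90 = fill factor.
DBCC DBREINDEX ('dbo.MyBigTable', '', 90);
```

INDEXDEFRAG shuffles pages in place and can be interrupted safely; DBREINDEX rebuilds from scratch, which is why it gets the weekly slot rather than the nightly one.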
You can do all this manually, but it gets to be a real chore with dozens of servers - fine if you're a developer with a single box, but for the rest of us it's a no-no. If you have the cash, I recommend you save yourself a heap of time by using the server version of Diskeeper (http://www.diskeeper.com/) to run the file-level disk defrag once a month (you can reduce the need for this by sizing/growing your databases so that they're not continually growing, as that's what causes file fragmentation). Then run Visual Defrag (http://www.visualdefrag.com/) against all your indexes once a week to check them and keep them ordered and running efficiently - this tool's cool as it only defrags the indexes that need doing instead of all of them, which can take an age if you have loads of big indexes. Both are decent tools with good scheduling automation (did I mention you probably want to do disk and database defrags at night, when no one's using the system and no jobs are running), neither is too expensive, and both companies are solid on technical support. Obviously you'll need to adjust how often you run the jobs on a per-server/database basis according to usage patterns, but these tools will at least analyse your server and tell you whether it needs doing, without you having to plough through loads of info.|||There's no point running a SQL Server defrag on a heap. Logical fragmentation is when the pages are out of logical order, and there is no logical order to a heap. There are other sorts of fragmentation (such as page density), but heaps don't page split either, although you do get off-page pointers.
There is no good reason I know for not having a clustered index. How come you don't have one?|||"There's no point running a SQL Server defrag on a heap. Logical fragmentation is when the pages are out of logical order"
Just to clarify: when we're talking about logical fragmentation, we're referring to the index pages being out of logical order, which in the case of a clustered index also happens to mean the data pages, as the clustered index is formed by re-ordering the pages that contain the data (which is why you can only have one per table).
As for running a defrag on a heap (a table with no clustered index), totally correct - it won't take very long as there's nothing to defrag! As pootle flump said, there's rarely a good reason not to have a clustered index - the PK is often a good candidate if it's used in queries and increments sequentially; go for that and you won't suffer page splits, as nothing will ever be inserted between records in a page that's already ordered.|||A free alternative for reorganising/rebuilding indexes is to code it yourself, e.g. http://weblogs.sqlteam.com/tarad/archive/2007/04/17/60176.aspx
This is how we do it in our shop.
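For the curious, the core of the roll-your-own approach on SQL 2000 looks something like this (a sketch adapted from the old Books Online pattern - the 10% fragmentation threshold is just a common rule of thumb, not gospel):

```sql
-- Capture DBCC SHOWCONTIG output for every index in the current database.
CREATE TABLE #fraglist (
    ObjectName CHAR(255), ObjectId INT, IndexName CHAR(255), IndexId INT,
    Lvl INT, CountPages INT, CountRows INT, MinRecSize INT, MaxRecSize INT,
    AvgRecSize INT, ForRecCount INT, Extents INT, ExtentSwitches INT,
    AvgFreeBytes INT, AvgPageDensity INT, ScanDensity DECIMAL,
    BestCount INT, ActualCount INT, LogicalFrag DECIMAL, ExtentFrag DECIMAL);

INSERT INTO #fraglist
EXEC ('DBCC SHOWCONTIG WITH TABLERESULTS, ALL_INDEXES, NO_INFOMSGS');

-- Defrag only the indexes that actually need it.
DECLARE @tbl CHAR(255), @ix CHAR(255);
DECLARE frag_cur CURSOR FOR
    SELECT ObjectName, IndexName
    FROM #fraglist
    WHERE LogicalFrag >= 10            -- arbitrary threshold
      AND IndexId NOT IN (0, 255);     -- skip heaps and text/image pages

OPEN frag_cur;
FETCH NEXT FROM frag_cur INTO @tbl, @ix;
WHILE @@FETCH_STATUS = 0
BEGIN
    DBCC INDEXDEFRAG (0, @tbl, @ix);   -- 0 = current database
    FETCH NEXT FROM frag_cur INTO @tbl, @ix;
END
CLOSE frag_cur;
DEALLOCATE frag_cur;
DROP TABLE #fraglist;
```

The point of the WHERE clause is exactly what the third-party tools sell you: only touching the indexes that need doing.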
Indexer - we do have some issues with physical fragmentation - is Diskeeper specifically for SQL boxes? For example, will it handle defragging the mdf & ldf files automagically (for example with an option to stop & start the service)?|||No, it's not specifically for SQL boxes, but as you're fundamentally just defragging another Windows file, which is what Diskeeper does, it does the job. It doesn't control service start/stop events. However, although SQL Server locks the database files against other applications, Diskeeper will defrag them in situ without stopping and restarting SQL Server. Personally, I prefer to stop and start the db engine; you can do this with a Windows scheduled task (or an AT task) using Service Control Manager - just type sc /? at a command prompt and take it from there.
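(To save anyone the trip through sc /?, a minimal stop-defrag-start batch looks roughly like this. MYSERVER is a placeholder, and MSSQLSERVER is only the service name for a default instance - named instances use MSSQL$InstanceName:)

```bat
rem Example only: stop SQL Server, run the file-level defrag, start it again.
sc \\MYSERVER stop MSSQLSERVER
rem ...run the disk defrag against the mdf/ldf drive here...
sc \\MYSERVER start MSSQLSERVER
rem Check the service came back up:
sc \\MYSERVER query MSSQLSERVER
```

In practice you'd want to wait for the service to report STOPPED before kicking off the defrag rather than firing the commands back to back.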
The only thing you have to be aware of with Diskeeper is that it'll obviously impact disk I/O big time (this is unavoidable), so avoid busy SQL Server periods when you do it.
The info at sqlteam is useful, but the only problem is it doesn't work on SQL Server 2000, which is still what the majority of boxes out there are running :( That's where Visual Defrag (or Idera's SQL Defrag Manager) comes into its own, as they'll handle both SQL Server versions, as well as SQL Server 2008, I believe.|||Ah - then you just want a version using the old syntax. This is supported, though deprecated, on 2005. The same principle - just different syntax. I have not checked 2008 out yet.
http://weblogs.sqlteam.com/tarad/archive/2005/01/04/3933.aspx
Ta for the info re Diskeeper.|||No problem :) Yeah, your problem there is that although you can still do the defrag, there is no old syntax in 2000 for getting the info to check whether the indexes need defragmenting in the first place, as there are no DMVs in 2000 - you're back to horrible old console commands, which are a pain to trawl through if you've got more than a dozen tables. That's the main reason I use third-party tools, as the SQL Server output is slow to go through each time you decide to defrag.
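To make the two flavours concrete (table name is just an example):

```sql
-- SQL Server 2000: the "horrible old console command". TABLERESULTS at least
-- gives you rows instead of a wall of text; eyeball LogicalFrag / ScanDensity.
DBCC SHOWCONTIG ('dbo.MyBigTable') WITH TABLERESULTS, ALL_INDEXES;

-- SQL Server 2005+: the queryable replacement (strictly a DMF, not a DMV).
-- The last parameter is the scan mode: LIMITED, SAMPLED or DETAILED.
SELECT object_id, index_id, avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('dbo.MyBigTable'), NULL, NULL, 'SAMPLED');
```

The 2005 version being queryable is the whole win: you can filter, join to sysobjects, and drive a defrag loop off it directly.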
You also need to be aware that this info doesn't come for free: the accuracy depends on the scan mode you pass in the function's parameters, and if you want thorough, up-to-date figures rather than a quick estimate, the more detailed modes will slow down the server - it's effectively scanning the leaf levels of the index structures to gather the stats, just as SHOWCONTIG used to.|||Yeah, your problem there is that although you can still do the defrag, there is no old syntax in 2000 for getting the info to check whether the indexes need defragmenting in the first place, as there are no DMVs in 2000 - you're back to horrible old console commands, which are a pain to trawl through if you've got more than a dozen tables. That's the main reason I use third-party tools, as the SQL Server output is slow to go through each time you decide to defrag. What do the third party tools use?|||There's no point running a SQL Server defrag on a heap. Logical fragmentation is when the pages are out of logical order. There is no logical order to a heap. There are other sorts of fragmentation (such as page density) but heaps don't page split either, although you do get off-page pointers.
There is no good reason I know for not having a clustered index. How come you don't have one?
This is a 38GB table with a 4K record size and 9+ million rows. I was thinking of adding an identity column as a PK. This "heap" (the perfect term for this table) existed before I started at this godforsaken place.
Performance is much better today for some reason. I need to stage this table, add the ID/PK and see if it does anything to degrade performance. Outside of an ID PK, the natural PK may be a ridiculously sized composite index.
I've bcp'd out/in this table before and performance has improved 2-3 fold; it's just a long operation and I really have no window to do so, except on a Sunday, and you know how we all love to work on Sundays.|||Just in case you thought I made it up - a Heap is the proper name for a table with no clustered index. It is an excellent name though :)
4K record sizes huh? You defo need a clustered index. Remember - a clustered index is not the same as a PK. Is this a heavy insert and/or update table? In particular, do records tend to get inserted quite small and then have lots of data updated into them to swell them up to 4K? Heavy reads? You know what - fancy posting the DDL?
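If it helps, the usual pattern for that is below (all names invented - and on a 38GB heap both steps rewrite a lot of data, so test on a copy first and budget for log space):

```sql
-- Add a narrow, ever-increasing surrogate key; IDENTITY backfills the
-- existing 9M+ rows automatically.
ALTER TABLE dbo.BigHeap ADD RowID INT IDENTITY (1, 1) NOT NULL;

-- Cluster on it. Building the clustered index physically reorders
-- (i.e. rewrites) the entire table - this is the long, Sunday-sized step.
ALTER TABLE dbo.BigHeap
    ADD CONSTRAINT PK_BigHeap PRIMARY KEY CLUSTERED (RowID);
```

Because the key is ever-increasing, new rows always land at the logical end of the table, so you avoid the page splits an arbitrary key would cause.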
4K record sizes huh? You defo need a clustered index. Remember - a clustered index is not the same as a PK. Is this a heavy insert and/or update table? In particular, do records tend to get inserted quite small and then have lots of data updated into them to swell them up to 4K? Heavy reads? You know what - fancy posting the DDL?
36K rows a day inserted in the wee hours, then many updates. It is used for DSS, but with 2-3 batch loads a day from our securitymaster there is virtually no window for maintenance, except during our backup processes. I know about the heap definition; it just fits this table in another way too.