
Sunday, March 25, 2012

Delete all Indexes

I have several databases that are supposed to have the same structure. Over
time some things (mostly indexes) have been changed or added differently
on each database. I would like to write a script that strips all user
tables of their indexes, and then run a script that adds the indexes I
want on them. I have no problem writing the script to create the indexes.
What I need is a script that will read the system tables and delete
all of the indexes in a particular database (including primary keys).
If anybody has such a thing and wouldn't mind sharing it, I would be most
grateful.
"Dinesh.T.K" wrote:
> John,
> Execute the output of the query below.
> SELECT 'DROP INDEX ' + OBJECT_NAME([id]) + '.' + [name] + CHAR(13) + 'GO'
> FROM sysindexes
> WHERE indid BETWEEN 1 AND 250
> AND OBJECTPROPERTY([id], 'IsMSShipped') = 0
> AND INDEXPROPERTY([id], [name], 'IsStatistics') = 0
> --
> Dinesh.
> SQL Server FAQ at
> http://www.tkdinesh.com
"John Hamilton" wrote:
Here is the actual error:
An explicit DROP INDEX is not allowed on index
'tblTableName.PrimaryKeyName_PK'. It is being used for PRIMARY KEY
constraint enforcement.
|||You need to use DROP CONSTRAINT on constraints.
--
Andrew J. Kelly
SQL Server MVP
|||What is the best way to separate which are regular indexes and which are
primary keys?
|||You can loop through the sysconstraints table first to remove all the
constraints and then do the indexes.
--
Andrew J. Kelly
SQL Server MVP
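A sketch of that two-step approach against the SQL Server 2000 system tables, in the same generate-then-execute style as Dinesh's query. This is an illustration, not a turnkey tool: review the generated script before running it, and note that any foreign keys referencing a primary key must be dropped before the primary key itself.

```sql
-- Step 1: generate DROP CONSTRAINT statements for PRIMARY KEY and UNIQUE
-- constraints (these back the indexes that DROP INDEX refuses to touch).
SELECT 'ALTER TABLE ' + OBJECT_NAME(parent_obj) +
       ' DROP CONSTRAINT ' + [name] + CHAR(13) + 'GO'
FROM sysobjects
WHERE xtype IN ('PK', 'UQ')

-- Step 2: generate DROP INDEX statements for the remaining user indexes.
SELECT 'DROP INDEX ' + OBJECT_NAME(i.[id]) + '.' + i.[name] + CHAR(13) + 'GO'
FROM sysindexes i
WHERE i.indid BETWEEN 1 AND 250
  AND OBJECTPROPERTY(i.[id], 'IsMSShipped') = 0
  AND INDEXPROPERTY(i.[id], i.[name], 'IsStatistics') = 0
  -- skip indexes that exist only to enforce a PK/UNIQUE constraint
  AND NOT EXISTS (SELECT * FROM sysobjects c
                  WHERE c.[name] = i.[name] AND c.xtype IN ('PK', 'UQ'))
```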
|||Thanks...that did it.

Delete a record that exists in 6 tables at the same time

Hi! Is there a way to delete a record from multiple tables at the same time? Thanks for the help!

Hmm, not totally sure what you mean.

Are you referring to doing it atomically, so that nothing happens in between the deletes? If that's the case, you have to do it as a transaction:

http://msdn2.microsoft.com/en-us/library/system.data.sqlclient.sqltransaction.aspx

is how you'd do it in code.|||
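In T-SQL itself, the transactional approach suggested above might look like the following sketch. The table names, key column, and the @RecordID value are all placeholders; TRY/CATCH requires SQL Server 2005 or later.

```sql
-- All six deletes succeed together or none take effect.
DECLARE @RecordID int
SET @RecordID = 42  -- placeholder key value

BEGIN TRANSACTION
BEGIN TRY
    DELETE FROM Table1 WHERE RecordID = @RecordID
    DELETE FROM Table2 WHERE RecordID = @RecordID
    DELETE FROM Table3 WHERE RecordID = @RecordID
    DELETE FROM Table4 WHERE RecordID = @RecordID
    DELETE FROM Table5 WHERE RecordID = @RecordID
    DELETE FROM Table6 WHERE RecordID = @RecordID
    COMMIT TRANSACTION
END TRY
BEGIN CATCH
    -- Any failure undoes every delete above.
    ROLLBACK TRANSACTION
END CATCH
```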

Hi James25,

Is there a way to delete a record from multiple tables at the same time?

I would suggest using triggers. You can create a trigger on one table and, within that trigger definition, delete the corresponding records from the other five tables.

For detailed information, please refer to this MSDN document: http://msdn2.microsoft.com/en-us/library/ms189799.aspx

Hope my suggestion helps
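A minimal sketch of the trigger idea; the table and column names here are hypothetical. The `deleted` pseudo-table inside the trigger holds the rows just removed from the parent table.

```sql
CREATE TRIGGER trg_Parent_Delete ON ParentTable
AFTER DELETE
AS
BEGIN
    SET NOCOUNT ON
    -- Remove the matching rows from the other tables.
    DELETE c FROM Child1 c JOIN deleted d ON c.RecordID = d.RecordID
    DELETE c FROM Child2 c JOIN deleted d ON c.RecordID = d.RecordID
    -- ...repeat for the remaining child tables...
END
```

One design note: a trigger runs inside the same transaction as the triggering DELETE, so the parent delete and the child deletes already commit or roll back together.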

|||

Thanks! That's what I am gonna implement.

Cheers!

|||

Thank you for the reply! Sorry it took me a while to get back. I am gonna use a trigger for this problem.

|||

k

but be really careful about using triggers too extensively

they typically violate the separation of layers

since they cause logic to happen unbeknownst to your business logic layers

|||

I'll keep that in mind. Thank you!

Thursday, March 22, 2012

Delete & create a partition

Hello All,
I am working on a huge table partitioned in 20.
Working on one partition at a time, I need to drop and re-create a
partition before inserting the treated data.
Does anyone know if I can drop and then re-create a partition? If yes, how? If not, is there any
alternative, like truncating a partition maybe?
Thanks !!
Arnold.
To effectively truncate a partition, SWITCH the desired partition into a
staging table. The staging table needs to be on the same filegroup(s) with
like schema and indexes. You can then drop or truncate the staging table to
permanently remove the data.
Hope this helps.
Dan Guzman
SQL Server MVP
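A sketch of the SWITCH technique Dan describes. The table names and the partition number are placeholders; as he notes, the staging table must match the partitioned table's schema and indexes and sit on that partition's filegroup.

```sql
-- Move partition 5's rows out of the big table. SWITCH is a
-- metadata-only operation, so it is fast regardless of row count.
ALTER TABLE BigTable SWITCH PARTITION 5 TO StagingTable

-- Partition 5 of BigTable is now empty; discard the old rows.
TRUNCATE TABLE StagingTable

-- Load the re-treated data into StagingTable (it needs a CHECK
-- constraint matching partition 5's boundaries), then switch it back:
-- ALTER TABLE StagingTable SWITCH TO BigTable PARTITION 5
```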
|||But what do I do after doing the truncate on the staging table? How do I go
back to the main table? Do I need some kind of "switch back" to the original table
partition?
Arnold
|||After the switch out, the source partition will still exist with the same
boundaries but will be empty. No need to switch anything back.
Hope this helps.
Dan Guzman
SQL Server MVP


Delete

I am trying a delete statement as follows, which is taking a long time
although the tables concerned are not large:

delete from table where value not in ('a','b','c', ..., 's')

The delete also fires a trigger on the table that deletes from further child tables.
Is there any way to speed up this query?

NOT IN cannot be index optimized. Every time you perform a delete, the
table has to be scanned and the value column checked against each
element in the IN clause. I'm not sure what you mean by "table as a
trigger operation to delete from further child tables". It would help if
you could provide DDL and full explanation of what you need to do.
--
David Gugick
Quest Software
www.imceda.com
www.quest.com
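Building on David's point that NOT IN forces a scan, one common mitigation is to delete in batches so the trigger fires on smaller row sets and locks are held for shorter periods. A sketch, with placeholder table and column names:

```sql
-- SQL Server 2000 style batching with SET ROWCOUNT
-- (on 2005+ you can use DELETE TOP (5000) instead).
SET ROWCOUNT 5000
WHILE 1 = 1
BEGIN
    DELETE FROM MyTable
    WHERE value NOT IN ('a', 'b', 'c')   -- the full value list goes here
    IF @@ROWCOUNT = 0 BREAK
END
SET ROWCOUNT 0   -- reset so later statements are unaffected
```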

delaying compilation real time

Hi,
I would like to find out about SSIS compilation. Can you mention anything regarding this issue or can you point me out to a website for this topic please?

Thanks


Compilation, to me, suggests that a binary object file is created. There is nothing like that in SSIS; no compilation occurs.

-Jamie

delay when running with DTexec.exe

Hi,
when I try to run my package with dtexec.exe, it starts fine, but during processing the package calls another subpackage, and at that point there is a big delay before the subpackage starts processing.
The subpackage has been set up with the ExecuteOutOfProcess property set to true.
Any idea what might be the problem?
I should mention that even when I run this within Visual Studio I still get a big delay.

Cheers
|||What is a "big" delay? SSIS has to validate the package and its connections/metadata before executing.
|||It normally takes between 30 seconds and one minute for the sub-package to start. Is there any way to reduce this delay by pre-validation?
Thanks
|||Try setting DelayValidation = True on your connection managers, sources and destinations.|||

From another thread on this forum:

http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=739674&SiteID=1

Usually, executing using DTEXEC /F File.DTSX is the fastest way, but gives the minimum logging. Various logging options for DTEXEC add functionality but may affect performance, although I've never seen a 10x slowdown because of it.

I have tested this myself, and I can tell the speed improves, but just too little.

|||Is there any way to do the validation once and then cache it, instead of SSIS validating it each time I run? It has a big overhead.
Cheers|||

Kolf wrote:

Is there any way to do the validation once and then cache it. instead of SSIS vaidating it each time I run as it has got a big overhead.
cheers

Validation HAS to happen. And unless you set DelayValidation=TRUE, a task will get validated when the package spins up and again immediately prior to the task executing. There isn't a lot of point in caching the results of the first validation, because that would negate the point of doing the second validation.

Does that answer the question?

-Jamie

Wednesday, March 21, 2012

Delay before publishing

Hello
Is it possible that the Publisher could wait some time (about 15 seconds)
before it starts publishing?
I want it to start publishing when it gets new data, but there can be about
1000 new records in a few seconds, and I think it would be better to
wait a few seconds than to start publishing after each single record is
inserted.
Best Regards
Wojciech Znaniecki
Wojciech,
there is a difference between creating a publication and synchronizing data
for an existing publication. The publication can be created and as long as
the logreader and distribution agents don't run (no synchronization) there
is no effect on the publisher. Typically the log reader runs continuously,
but you can schedule the distribution agent to run whenever you want. If
this also runs continuously and you want to enforce a delay, you could
increase the POLLINGINTERVAL parameter's value. Other parameters you might
be interested in are -CommitBatchThreshold, CommitBatchSize,
MaxDeliveredTransactions.
HTH,
Paul Ibison
|||Thanks for the fast answer.
I've tried to do it, but I can't create a publication with the
Distribution Agent and Log Reader Agent stopped.
Can I do it with a SQL script?
For example, my script that creates the publication looks like this:
"(...)
exec sp_addpublication @publication = N'RozkazyReplikacja', @restricted =
N'false', @sync_method = N'native', @repl_freq = N'continuous', @description
= N'Transactional publication of replikacyjna database from Publisher
WOJTEK-Z\W1.', @status = N'active', @allow_push = N'true', @allow_pull =
N'true', @allow_anonymous = N'false', @enabled_for_internet = N'false',
@independent_agent = N'false', @immediate_sync = N'false', @allow_sync_tran
= N'false', @autogen_sync_procs = N'false', @retention = 336,
@allow_queued_tran = N'false', @snapshot_in_defaultfolder = N'true',
@compress_snapshot = N'false', @ftp_port = 21, @ftp_login = N'anonymous',
@allow_dts = N'false', @allow_subscription_copy = N'false',
@add_to_active_directory = N'false', @logreader_job_name =
N'WOJTEK-Z\W1-replikacyjna-5'
exec sp_addpublication_snapshot @publication = N'RozkazyReplikacja',
@frequency_type = 4, @frequency_interval = 1, @frequency_relative_interval =
0, @frequency_recurrence_factor = 1, @frequency_subday = 4,
@frequency_subday_interval = 1, @active_start_date = 0, @active_end_date =
0, @active_start_time_of_day = 0, @active_end_time_of_day = 235959,
@snapshot_job_name = N'WOJTEK-Z\W1-replikacyjna-RozkazyReplikacja-7'
GO
(...) "
What should i change to create publication without starting Distribution and
Log Reader Agent ?
Best Regards
Wojciech Znaniecki
|||Wojtek,
using sp_addsubscriber will allow you to set the frequency for the
distribution agent. However, the easiest way to do what you require is to
create the publication without starting the snapshot agent. Then edit the
logreader and distribution agent jobs to run on a schedule.
HTH,
Paul Ibison
|||Can I set the log reader and distribution agents to run more than once per minute?
Or could I create a trigger to start them, let the log reader and distribution agent
do their job, and then stop them?
Best Regards
Wojciech Znaniecki
|||Wojciech,
you can use sp_start_job to start the agents if you want to do it
programmatically. If you want to schedule it, AFAIR the maximum frequency is
once per minute, or you can run continuously. If the latter, the polling interval
will potentially increase it in a more granular way. However, the log reader
agent normally runs continuously, and the distribution agent may be
controlled in such a way. The log reader agent marks the log as having been
read, and if it is not run very frequently, the log will increase in size
and not be fully truncated during a backup.
HTH,
Paul Ibison
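For the programmatic route Paul mentions, a sketch; the job name below is a placeholder, so look up the real agent job name in msdb.dbo.sysjobs first.

```sql
-- Start the distribution agent job on demand, e.g. after a batch of
-- inserts completes, instead of leaving it running continuously.
EXEC msdb.dbo.sp_start_job
    @job_name = N'SERVER-ReplDB-Publication-DistAgent-1'
```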

Degradation of performance with time

I have an OLTP database, 24x5. The database is used by an application. The
application gets a new version about every half a year. During deployment of a
new version, the old database is renamed, a new database with the same
structure is created, and the data is moved from the old database to the new
one with DTS. After deployment of a new version I see performance
improvements for about 2-3 weeks. Then, after 2-3 months, it
returns to a not-so-good state. I guess it may be caused by fragmentation?
DBCC DBREINDEX runs every week to defragment the database. The database file and
log file are on separate LUNs, each RAID10. Storage device: EMC CX300. No
other files are on the same drives where the database files are located. Most of the
tables have clustered indexes, but not all of them; our benchmark
tests showed that some tables do better without a clustered index. SQL2k
SP3.
Why is there such performance degradation? What can be done to fix it?
What about statistics, are those kept up to date?
Denis the SQL Menace
http://sqlservercode.blogspot.com/
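If statistics turn out to be stale, a quick refresh looks like this (the table name is a placeholder):

```sql
-- Refresh out-of-date statistics across the whole database...
EXEC sp_updatestats

-- ...or for one table, with a full scan for maximum accuracy.
UPDATE STATISTICS MyTable WITH FULLSCAN
```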
|||Hi
Your description seems to follow the sort of pattern associated with
fragmented indexes, but as you rebuild them this should not happen and you
should see an improvement after the rebuild that decays over time. Try using
DBCC SHOWCONTIG to monitor the index fragmentation, you may also want to
update the statistics more often. Check the query plans to see if they differ
when the performance is slower.
You will need to know how your LUNs map to physical discs to determine if
there is any contention, but this should not follow the performance patterns
you have described. Check out physical disc fragmentation and defragment if
necessary.
You may want to also look at the blocker script
http://support.microsoft.com/kb/271509 and monitor performance with perfmon
and compare the results when it is working well against when it isn't.
You don't say if your system databases or tempdb are located on the same
discs; if they are, you may want to consider moving them. That may not solve
your problem, but it would be good practice.
John
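To monitor fragmentation as suggested above, something along these lines works on SQL 2000 (the table name is a placeholder):

```sql
-- Report fragmentation for every index on one table.
DBCC SHOWCONTIG ('MyTable') WITH TABLERESULTS, ALL_INDEXES
-- Watch 'Logical Scan Fragmentation' (want it low) and 'Scan Density'
-- (want it near 100%); rebuild with DBCC DBREINDEX when they drift.
```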
|||I'd look into specifically what is going slow. Are there specific
transactions that you run frequently that have become slow?
Can you reduce the number of indexes on the biggest tables?

Monday, March 19, 2012

Degradation of performance with time

I have an OLTP database, 24x5, used by an application. The application gets a
new version about every half year. During deployment of a new version, the
old database is renamed, a new database with the same structure is created,
and data is moved from the old database to the new one with DTS. After
deployment of a new version I see performance improvements for about 2-3
weeks. Then, after 2-3 months, it returns to a not-so-good state. I guess it
may be caused by fragmentation? DBCC DBREINDEX runs every week to defragment
the database. The database file and log file are on separate LUNs, each RAID
10. Storage device: EMC CX300. No other files are on the drives where the
database files are located. Most tables have clustered indexes, but not all
- our benchmark tests showed that some tables are better off without a
clustered index. SQL2k SP3.
Why is there such performance degradation? What can be done to fix it?|||What about statistics, are those kept up to date?
Denis the SQL Menace
http://sqlservercode.blogspot.com/|||Hi
Your description seems to follow the sort of pattern associated with
fragmented indexes, but as you rebuild them this should not happen and you
should see an improvement after the rebuild that decays over time. Try using
DBCC SHOWCONTIG to monitor the index fragmentation, you may also want to
update the statistics more often. Check the query plans to see if they differ
when the performance is slower.
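As a hedged sketch of that monitoring (the table name dbo.Orders is hypothetical; SQL 2000 syntax to match the post):

```sql
-- Report fragmentation for one table; a Scan Density near 100% and low
-- Logical Scan Fragmentation are what you want to see.
DBCC SHOWCONTIG ('dbo.Orders') WITH TABLERESULTS, ALL_INDEXES;

-- Refresh statistics more often than the weekly DBCC DBREINDEX,
-- e.g. from a nightly job.
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;

-- Or, database-wide (sampled), with the supplied procedure:
EXEC sp_updatestats;
```

Comparing the SHOWCONTIG output from a fast week against a slow week should show whether fragmentation is really the culprit.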
You will need to know how your LUNs map to physical discs to determine if
there is any contention, but this should not follow the performance patterns
you have described. Check out physical disc fragmentation and defragment if
necessary.
You may want to also look at the blocker script
http://support.microsoft.com/kb/271509 and monitor performance with perfmon
and compare the results when it is working well against when it isn't.
You don't say if your system databases or tempdb are located on the same
discs, if they are you may want to consider moving them. That may not solve
your problem but it would be good practice.
John
"andsm" wrote:

> I have an OLTP database, 24x5, used by an application. The application gets
> a new version about every half year. During deployment of a new version,
> the old database is renamed, a new database with the same structure is
> created, and data is moved from the old database to the new one with DTS.
> After deployment of a new version I see performance improvements for about
> 2-3 weeks. Then, after 2-3 months, it returns to a not-so-good state. I
> guess it may be caused by fragmentation? DBCC DBREINDEX runs every week to
> defragment the database. The database file and log file are on separate
> LUNs, each RAID 10. Storage device: EMC CX300. No other files are on the
> drives where the database files are located. Most tables have clustered
> indexes, but not all - our benchmark tests showed that some tables are
> better off without a clustered index. SQL2k SP3.
> Why is there such performance degradation? What can be done to fix it?|||I'd look into specifically what is going slow. Are there specific
transactions that you run frequently that become slow?
Can you reduce the number of indexes on the biggest tables?


Friday, March 9, 2012

Definition of Object Has Changed Since it was Last Compiled

All, I'm having a weird problem. I have 2 stored procedures that run 98% of the time without any issue but inconsistently throw the following error.

'The definition of object 'proc name goes here' has changed since it was compiled'

We have added 'with recompile' to the proc but we still get this error - but not consistently. The stored proc is not changing, nor is the table structure of any of the objects used in the sp. Any ideas on how to trace down why this is happening or what object it thinks is changing? Let me know your thoughts.

Ken

We've started experiencing this, except:
- It's occurring 100% of the time on SQL 2K5, for a particular data set, but not for another data set on the same schema.
- It never occurred in SQL 2K.
- It only occurs on SQL 2K5 (w/ DB in 2K compatibility mode)

My first reaction, for our case, is that it's a broken 2K5/2K compatibility issue. We're doing something pretty shady - disabling a trigger on table B from within a trigger firing on table A. IOW:

Trigger A, Table A:
- Disable trigger B on table B
- UPDATE table B
- Re-enable trigger B

So I'm guessing SQL 2K5 is finally calling us out on this. But I'd still prefer a quick fix to rewriting the triggers. Have you had any luck with your issue?
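A minimal sketch of the pattern described above (all table, column, and trigger names are hypothetical; SQL 2000-style ALTER TABLE syntax, which 2K5 still accepts):

```sql
-- Trigger on table A that suppresses table B's trigger while it updates B
CREATE TRIGGER trgA ON dbo.TableA
AFTER UPDATE
AS
BEGIN
    -- Disable trigger B on table B
    ALTER TABLE dbo.TableB DISABLE TRIGGER trgB;

    -- UPDATE table B without firing trgB
    UPDATE b
    SET    b.SomeCol = i.SomeCol
    FROM   dbo.TableB AS b
           JOIN inserted AS i ON i.KeyCol = b.KeyCol;

    -- Re-enable trigger B
    ALTER TABLE dbo.TableB ENABLE TRIGGER trgB;
END
```

Note that the disable/enable is not scoped to the current session - any concurrent update of TableB in that window also skips trgB, which is part of why the pattern is shady.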

Defining Analysis Services Named Calculation

Hi,

I am creating a new Named Calculation for the time dimension in the DSV, and adding it to the time dimension structure in the cube from the data source view.

While processing, it gives the error message:

"Memory error: The operation cannot be completed because the memory quota estimate exceeds the available system memory"

Can you please help me.

Thanks

Dinesh

What does the named calculation look like? And does it process if you take it out?

Cheers

Matt

|||

Show us the named calculation statment...

Works ok without the Named Calculation?!

Regards

|||

Hi

I have used the named calculation "ReportDate + ' ' + Session" in the time dimension.

Can you let me know how to add this named calculation to the cube after creating it in the DSV?

Thanks

Dinesh

|||

Hi,

ReportDate and Session are both strings? If not, you will probably get problems; you will need to cast them to a string/varchar for your named calculation to work.
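For example, if ReportDate were actually a datetime, a hedged version of that named calculation expression (style 120 assumed) would cast it first:

```sql
-- DSV named calculation: cast the non-string column before concatenating
CONVERT(varchar(10), ReportDate, 120) + ' ' + Session
```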

Adding to a dimension, just edit your dimension and drag it onto the attributes list. If it is a new measure, edit your cube and right click, add new measure, select the new named calculation.

Hope that helps

Matt

|||

You already created the NC in the DSV? Correct?

The cube processing generate errors only after your created the NC, correct?

Regards

|||

Hi,

ReportDate and Session are both strings. I added it to the attribute list and processed.

It is giving the error message like

"Memory error: The operation cannot be completed because the memory quota estimate (1976MB) exceeds the available system memory (659MB). "

So, what should I do about this type of error?

Thanks

Dinesh

|||

Hi,

Looks like this is a known bug with SSAS, I haven't seen it before, but it looks like there is a hotfix for it.

http://support.microsoft.com/kb/914595

Hope that helps

Matt

|||

Hi,

Yes, the problem appeared only after I created the NC; before that, when I tested the cube, there was no problem.

Can you please let me know what the problem is?

Thanks

Dinesh

|||

Check Matt's link and apply the hotfix; if you still need some help, tell us!

regards!

Saturday, February 25, 2012

Defeat SQLDUMPER?

Every time DTEXEC.EXE crashes, the SQLDUMPER.EXE process pops up and quickly exits. But I would like my JIT debugger to attach, as happens with other processes. Is this possible to do? Please let me know how.
Thanks.|||There is no way to disable SQLDUMPER except to attach your debugger to the process before the crash. However, you can use your debugger (at least VS and WinDBG) to open the generated crash dump and look at it.

DefaultMember Slows Down Time Intelligence

I searched for this and found no web hits so I thought I'd post a warning here.

As of SP2, the function Hierarchy.DefaultMember is still not optimized very well in MDX expressions and queries. By replacing it with the literal member equivalent from the hierarchy (e.g. Hierarchy.[Current Period]), I've been able to improve the performance of my time intelligence queries by an order of magnitude or more.

As a fair warning about the output from the time intelligence wizard, you need to fix the generated calculations in the following ways:

Get rid of DefaultMember references

Replace * operator for the CrossJoin() function

Hope these hints save others as much time as I've wasted discovering them. Please let me know if anybody else has general hints to improve the performance of time intelligence calculations.
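As an illustration of the first fix (the hierarchy and member names here are hypothetical, not from the wizard output):

```mdx
-- Before: DefaultMember is resolved at query time and optimizes poorly
([Date].[Calendar].DefaultMember, [Measures].[Sales Amount])

-- After: reference the equivalent literal member directly
([Date].[Calendar].[Current Period], [Measures].[Sales Amount])
```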

Chris mentioned your post and gave some more guidelines at:

http://cwebbbi.spaces.live.com/Blog/cns!7B84B0F2C239489A!1076.entry


DefaultMember of Time Dimension (Analysis Services 2005)

I have a very small time dimension. This time dimension has only Year, Quarter, and Month (plus DateID which is just an identity column). I have one hierarchy for Y-Q-M. The DateID attribute is hidden.

I have the default member of the Date ID attribute set to:

Tail(NonEmptyCrossjoin([Date].[Date ID].Members, 1), 1).Item(0).Item(0)

This works if I do not choose any dates, meaning if my last date is December of 2006, I see the values for December 2006 by default. However, if I add the Month attribute into the browser as a row or a column, I only see October, November, and December (though every month has data)

If I remove the DefaultMember property, I can see all months data when I add the months to the columns or rows, but the data doesn't default to the month (without manually selecting a date) like I want it to. I tried removing the Y-Q-M hierarchy and just making a Y-M hierarchy, but still the same results.

I swear this was working once upon a time (maybe before the latest SP install?) ... I cannot recall exactly.

has anyone else experienced this? am I missing something?

Thanks,

Jason

Hi jsaido,

The behaviour of your solution is correct, but unfortunately it is not what you want.

It's called autoexists.

Why don't you use Named sets?

For example:

CREATE SET CURRENTCUBE.[Current Date]

AS Tail(NonEmptyCrossJoin([Date].[Date ID].[Date ID].Members, 1), 1);

CREATE SET CURRENTCUBE.[Current Month]

AS EXISTS([Date].[Month].[Month].members, [Current Date]);

CREATE SET CURRENTCUBE.[Current Quarter]

AS EXISTS([Date].[Quarter].[Quarter].members, [Current Date]);

CREATE SET CURRENTCUBE.[Current Year]

AS EXISTS([Date].[Year].[Year].members, [Current Date]);

|||I did try named sets - my delivery vehicle for this cube is Excel 2003 pivot tables which do not support named sets.


Defaulting Parameters based on time of day?

I need to set up some parameters based on the time of day. For instance, night shift vs. day shift. If I run the report during the day, default to the day shift criteria (5 am to 5 pm), and likewise for the night shift (5 pm to 5 am). On the night shift, I also need to accommodate the change in dates. Is this even possible in a single report? HELP!!

Thanks!

Does it really need to be parameters, or could you just check the time in your SQL or report code?
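Checking the time in SQL is one hedged way to do it - a sketch using the shift boundaries from the post (5 AM / 5 PM) and SQL 2000-era datetime arithmetic; the EventTime column in the usage note is hypothetical:

```sql
-- Derive the current shift's window at run time.
DECLARE @now DATETIME, @shiftStart DATETIME, @shiftEnd DATETIME;
SET @now = GETDATE();

IF DATEPART(hour, @now) >= 5 AND DATEPART(hour, @now) < 17
BEGIN
    -- Day shift: 5 AM to 5 PM today
    SET @shiftStart = DATEADD(hour, 5,  DATEADD(day, DATEDIFF(day, 0, @now), 0));
    SET @shiftEnd   = DATEADD(hour, 17, DATEADD(day, DATEDIFF(day, 0, @now), 0));
END
ELSE IF DATEPART(hour, @now) >= 17
BEGIN
    -- Night shift, evening half: 5 PM today to 5 AM tomorrow
    SET @shiftStart = DATEADD(hour, 17, DATEADD(day, DATEDIFF(day, 0, @now), 0));
    SET @shiftEnd   = DATEADD(hour, 29, DATEADD(day, DATEDIFF(day, 0, @now), 0));
END
ELSE
BEGIN
    -- Night shift, morning half: 5 PM yesterday to 5 AM today
    SET @shiftStart = DATEADD(hour, -7, DATEADD(day, DATEDIFF(day, 0, @now), 0));
    SET @shiftEnd   = DATEADD(hour, 5,  DATEADD(day, DATEDIFF(day, 0, @now), 0));
END
```

These values can then feed the report's default parameter values, or be used directly in the WHERE clause (EventTime >= @shiftStart AND EventTime < @shiftEnd).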