Saturday, October 31, 2009

Deprecated features of SQL Server 2008


Today we will discuss deprecated features of SQL Server 2008. Let's list the features, their usage, and their alternatives in SQL Server 2008.


sp_attach_db

Description: sp_attach_db is a system stored procedure used to attach a database. It was one option for upgrading a database from SQL Server 2000 to 2005, and is also used to move databases from one SQL Server instance to another.


EXEC sp_attach_db @dbname = 'YourDatabase',
@filename1 = 'c:\Program Files\Microsoft SQL Server\MSSQL\Data\YourDatabase.mdf',
@filename2 = 'c:\Program Files\Microsoft SQL Server\MSSQL\Data\YourDatabase_log.ldf'


The replacement is CREATE DATABASE ... FOR ATTACH:

CREATE DATABASE database_name
ON ( FILENAME = 'file_path' ) [ ,...n ]
FOR ATTACH

According to SQL Server 2008 Books Online, the system stored procedure sp_detach_db remains the command to detach a database, whereas sp_attach_db is marked as deprecated.


sp_renamedb

Description: sp_renamedb renames a database; it is commonly used when servers are migrated or databases are moved.

EXEC sp_renamedb 'OldDatabaseName', 'NewDatabaseName'

The replacement is ALTER DATABASE ... MODIFY NAME:

ALTER DATABASE OldDatabaseName
MODIFY NAME = NewDatabaseName


sp_dboption

Description: sp_dboption is commonly used for setting a database to read-only or offline.

USE master
EXEC sp_dboption 'YourDatabase', 'read only', 'TRUE'
USE master
EXEC sp_dboption 'YourDatabase', 'offline', 'TRUE'
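In SQL Server 2008 the replacement for sp_dboption is ALTER DATABASE. Assuming the same database name as above, the equivalent commands are:

```sql
-- set the database to read-only
ALTER DATABASE YourDatabase SET READ_ONLY

-- take the database offline
ALTER DATABASE YourDatabase SET OFFLINE
```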



Friday, October 30, 2009

What's coming in SQL Server 2008 R2

There are a lot of things which are a part of SQL Server 2008 R2. These are:
  • Kilimanjaro is now officially SQL Server 2008 R2
  • Support for 256 logical processors
  • Self-service Business Intelligence (BI) in-memory analysis update
  • Utility Data Platform
  • Master Data Services
  • Low Latency Complex Event Processing announcement
  • Geospatial visualizations in Reporting Services
  • Cool SQL Server 2008 stats
SQL Server 2008 R2

Microsoft has decided on a product name for the next release, SQL Server 2008 R2, so it’s time to say goodbye to “Kilimanjaro”. SQL Server 2008 R2 will ship in sync with the release of Microsoft Office 2010. Community Technology Previews (CTPs) for SQL Server 2008 R2 will be available in the second half of 2009. SQL Server 2008 R2 “is an incremental release, so it's an opportunity to deliver incremental technology on top of SQL Server 2008. On the business intelligence side, the release is aligned with Office 2010.”

Support for 256 logical processors

Organizations pushing the high end of scalability will want to take advantage of SQL Server 2008 R2 support for up to 256 logical processors (which requires Windows Server 2008 R2). This is a big improvement over the previous maximum of 64 processors.

Self-Service BI In-Memory Analysis

One of the big things coming in SQL Server 2008 R2 is Self-Service BI, a joint solution offered through SQL Server, SharePoint, and Excel. “Customers want solutions, not just individual products. Solutions enable employees to seek new business solutions and get better results out of what they have. Ultimately the goal is to allow all the employees in an organization to get the benefits of BI.”

So what effect would self-service business intelligence (BI) have on the relationship between the traditional BI professional and business knowledge workers or decision-makers?

Today we have a consultant or IT expert build a BI solution for employees. That is costly and takes time. Today many users don’t get access to BI tools and data sources. Self-service BI allows them to analyze all that data and create reports, charts, and dashboards and then share solutions and models with colleagues by publishing them through SharePoint.

The biggest innovation in self-service BI is in-memory analysis. With the advanced compression technology incorporated into the release, you can hold tens or hundreds of millions of data records in a laptop's memory, available to slice and dice instantly for better insights. IT professionals provide the environment, know that the data is secure, and know which data sets are used most. They can refresh the data and apply the right policies. So it's IT controlled, but end users are empowered.

Utility Data Platform

Another major feature of SQL Server 2008 R2 that tackles the control issue is application and multiserver management. When a developer builds an application, he or she can specify policies for that application. As the application is deployed, the DBA knows about the policies, can enforce them, and can then manage that application as it moves from server to server or as data volumes grow. This is part of the vision of what is called the Utility Data Platform, where we can provide incremental capacity to users, move databases around servers, and provide varying SLAs to customers.

SQL Server Management Studio (SSMS) has always been great for managing SQL Server. However, SSMS was really for single server management and wasn’t strong in the area of multi-server management. New wizards, dashboards, and enhancements to Policy Based Management will boost SQL Server 2008 R2’s multi-server management capabilities.

Master Data Services

Another TechEd announcement is the inclusion of Master Data Services, a new capability, in SQL Server 2008 R2. Think about the different entities that IT needs to deal with, such as customers, partners, or products. The same information lives in multiple databases: a single entity can exist in multiple databases, with multiple IDs. A business challenge has been reconciling all that data so that the right decisions can be made about customers or products. The goal of master data management is to bring all that data together so that we have a single version of the truth: a single authoritative source of data. We do that by synchronizing data across multiple data sources, understanding where there are exceptions, and managing those exceptions. What we're doing with Master Data Services is building that capability into the data platform, so when customers buy SQL Server 2008 R2 they get that functionality.

Almost all large organizations face the problem of multiple data definitions where the same data is defined and used differently in various locations. SQL Server 2008 R2’s new Master Data Services (MDS) feature provides a central portal from which administrators can create and update master data members and hierarchies, with the goal of ensuring data consistency across the enterprise.

Low Latency Complex Event Processing

Also announced at TechEd: Microsoft is working on Low Latency Complex Event Processing. Microsoft is developing this technology with the idea of building a more comprehensive data platform; there are business opportunities in being able to analyze streaming transient data, identify patterns, and then act on the findings. The goal is a platform that enables customers and partners to build complex event processing solutions. This will be released in 2010; packaging and pricing have not been announced. There will be a CTP in 2009, and for now it is called Low Latency Complex Event Processing. It is not clear yet whether these CTPs will align with the other CTPs in the pipeline.

Geospatial visualizations in Reporting Services

Building on the geospatial data types in the base SQL Server 2008 release, SQL Server 2008 R2's Reporting Services will support reports with visual geographic mapping. New geospatial features include visualizations for mapping routine and custom shapes and support for Microsoft Virtual Earth tiles.

Cool SQL Server Facts

Some interesting SQL Server 2008 facts: there have been over three and a half million downloads of SQL Server 2008; companies in every country and every industry are looking at SQL Server 2008 today; and there is a 160% ROI from deploying it, just from savings associated with features like Resource Governor, Data Protection, Encryption, and Policy-Based Management.

Tuesday, October 27, 2009

Policy-Based Management in SQL Server 2008

As we discussed earlier, we can use DDL triggers to force developers to follow certain guidelines, but that approach is restricted to DDL commands. What if we need policies that check other things, or that take corrective/preventive actions?

Policy-Based Management is a new feature in SQL Server 2008. It allows us to define and enforce policies for configuring and managing SQL Server across the enterprise. Initially this feature was called the Declarative Management Framework, but it was renamed.

Let's understand Policy-Based Management and its components:

Target - an entity that is managed by Policy-Based management; e.g. a database, a table, an index, etc.
Facet - a predefined set of properties that can be managed
Condition - a property expression that evaluates to True or False; i.e. the state of a Facet
Policy - a condition to be checked and/or enforced

Policy-Based Management is configured in SQL Server Management Studio (SSMS). Navigate to the Object Explorer and expand the Management node and the Policy Management node; you will see the Policies, Conditions, and Facets nodes:

Click on Facets to see available facets:

There is a comprehensive collection of facets in SQL Server 2008, allowing us to manage just about every aspect of SQL Server. Let's look at the Database facet:

These facet properties are used to specify a condition. A policy specifies an expression that evaluates to True or False; the expression can be made up of one or more conditions logically joined by And/Or operators.


The DBA team has released a checklist requiring that no stored procedure name start with 'sp_'. We need to create a policy to enforce this rule.

Let's Create a Condition

The starting point in Policy-Based Management is to create a Condition. Right click on Conditions in the SSMS Object Explorer (under the Management | Policy Management node) then select New Condition from the menu. Fill in the dialog as follows:

Select a single Facet for a Condition, then enter an Expression. The Expression evaluates to either True or False; this is the essence of Policy-Based Management, which tests whether the Condition is True or False.
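As a sketch (the condition name is hypothetical, and the field names come from the Stored Procedure facet), the condition for the sp_ rule could be filled in like this:

```
Name:       ProcNameCheck
Facet:      Stored Procedure
Expression: @Name NOT LIKE 'sp_%'
```

Here @Name is the name property exposed by the Stored Procedure facet, so the condition is True for every procedure whose name does not start with sp_.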

Let's create a policy now:

Right click Policies in the SSMS Object Explorer (under the Management | Policy Management node) then select New Policy from the menu. Fill in the dialog as follows:

The Check Condition drop down will include the list of conditions that are defined. We can check Every Database in the Against targets list and define a condition. Execution Mode can have one of the following values:

  • On Demand (this is the default)

  • On Schedule

  • On Change - Log Only

  • On Change - Prevent

The On Demand option only evaluates the policy when a user right clicks on the policy in the SSMS Object Explorer and selects Evaluate from the menu.

    The On Schedule option takes advantage of SQL Agent to execute a job on a particular schedule to check the policy. After selecting On Schedule from the Execution Mode drop down list, you can click either the Pick or New button.

    To pick an existing schedule, make a selection from the available options:

    To create a new schedule, click New:

    When policy evaluation is scheduled, any violations are logged to the Windows Event Log.

    The On Change - Log Only option evaluates the policy whenever the property in the facet is changed and any violation is logged to the Windows Event Log. The On Change - Prevent option evaluates the policy whenever the property in the facet is changed and actually prevents the change; this option uses DDL triggers to enforce the policy. Not all changes can be detected and rolled back by DDL triggers; the Execution Mode drop down list will include the On Change - Prevent option only when it is available.

    What is the Enabled check box? When the Execution Mode is On Demand, the Enabled check box must be unchecked; for all other options you must check the Enabled check box in order for the policy to be evaluated.

    How to evaluate a Policy:

    To evaluate a policy on demand, right click on the policy in the SSMS Object Explorer and select Evaluate from the menu. The following is a partial screen shot of the output from evaluating a policy on demand:

    The green check icon signifies that the policy evaluated to True; red indicates False. To view the details of an error, click the VIEW link, and a window will open with complete details:

    Edit the policy and change the Execution Mode to On Change - Log Only. Now try to create a procedure whose name starts with sp_. The procedure will be created, but the violation will be logged in the Event Viewer. Open Windows Event Viewer and click Application; you will see an event that was written when the policy evaluation detected the violation:

    To test the On Change - Prevent Execution Mode for a policy, create a new condition and a new policy, following the same steps as before. The trick is that the policy must be ENABLED; only then can it stop a user from violating the condition. There are two ways to enable a policy: when you create a policy and select the On Change - Prevent Execution Mode, a check box appears at the top, just below the policy name; check it. The other way is from SSMS: right click the policy and select Enable.

    Now whenever a user tries to create a procedure whose name starts with sp_, the change is rolled back and a policy violation error is shown, as follows:

    This way we can have many policies running against many databases, and they can be used to control and monitor violations.

    Do let me know if there is any feedback.

    Happy SQL Coding.

    Monday, October 26, 2009

    Using the EventData() Function with DDL triggers

    Let's discuss how we can capture the events raised by DDL triggers and store the data in a table for reporting/auditing purposes.

    In SQL Server 2005, we can get data regarding the event that initiated a DDL trigger by calling the EventData() function. This function returns information about server or database events as a value of the XML data type. What we need to do is capture the data returned by EventData() and store it in a database table for reporting purposes. Since the returned type is XML, we need to apply a bit of XQuery to the data generated by the function. The DDL_TABLE_EVENTS type describes what data can be captured for a CREATE, ALTER, or DROP TABLE event.

    We will create a table that will store the data returned by the EventData() function. We will only choose a few items from the schema generated by the ALTER_TABLE event; namely, the T-SQL Command, ExecuteTime, LoginName, and EventType which is included in every event.

    Create the Audit log table

    USE Northwind
    GO

    CREATE TABLE AuditLog (
    ExecuteTime DATETIME,
    LoginName NVARCHAR(100),
    EventType NVARCHAR(100),
    Command NVARCHAR(2000)
    )
    GO

    Now Create the DDL trigger

    CREATE TRIGGER trgAlterTblChange
    ON DATABASE
    FOR ALTER_TABLE
    AS
    DECLARE @Data XML
    SET @Data = EventData()
    INSERT AuditLog (ExecuteTime, LoginName, EventType, Command)
    VALUES (GETDATE(),
    @Data.value('(/EVENT_INSTANCE/LoginName)[1]', 'nvarchar(100)'),
    @Data.value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(100)'),
    @Data.value('(/EVENT_INSTANCE/TSQLCommand)[1]', 'nvarchar(2000)') ) ;
    GO

    Let's understand the code.

    Declare a variable named @Data of XML data type. Assign the value returned by the EventData() function to the variable.

    In the INSERT statement, retrieve the element values from the @Data variable using XQuery and the value() method. The value() method takes two parameters: an XQuery expression (the [1] predicate guarantees that a single node is returned) and the name of a SQL Server built-in data type; the value is returned as an instance of that type. The value() method also gives you better control over the formatting of the XML data.

    Let's test it.

    Add a new column "ShipAdd" to the Shippers table in the database

    USE Northwind

    ALTER TABLE Shippers
    ADD ShipAdd Varchar(100) NULL

    Querying the AuditLog table will give you the data returned by the EventData() function, stored in tabular format, since we already extracted the element values from the XML data inside our trigger definition.
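For example, after running the ALTER TABLE above, the log can be inspected with a simple query (column names as defined in the AuditLog table earlier in this post):

```sql
SELECT ExecuteTime, LoginName, EventType, Command
FROM AuditLog
ORDER BY ExecuteTime DESC
```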

    We can define DDL triggers, and capture event details using the EventData() function, at either the database or the server level. This approach audits DDL activities and stores the event details in a log table; we could even have a central database that stores all events from both the database and server level. It is neither efficient nor effective to audit every event, due to the immense amount of audit record logging involved, so make sure you plan which events you need to audit.

    Let's take a look at another example. As we already know, it's not good practice to have stored procedures starting with 'sp_', so let's use a DDL trigger to stop people from creating procedures whose names start with 'sp_'.

    Let's create a trigger:

    CREATE TRIGGER [ddltrg_CheckCreateProc] ON DATABASE
    FOR CREATE_PROCEDURE
    AS
    DECLARE @xmlEventData XML,
    @procName VARCHAR(50)

    SET @xmlEventData = EVENTDATA()

    SET @procName = CONVERT(VARCHAR(50), @xmlEventData.query('data(/EVENT_INSTANCE/ObjectName)'))

    IF LEFT(@procName, 3) = 'sp_'
    BEGIN
    RAISERROR ( 'You cannot create procedures starting with sp_', 16, -1 )
    ROLLBACK
    END


    Let's Test it:

    CREATE PROC sp_testDDLTrigger
    AS
    SELECT GETDATE()

    When we execute it, we get the following error:

    It throws the error defined in the trigger. This way we can force developers to follow guidelines. SQL Server 2008 introduced Policy-Based Management, which helps achieve the same thing more easily.

    Auditing of DDL (Create, Alter, Drop) Commands

    With the growing need to audit activities in our database servers, we need to figure out the best approach to collecting changes on a system. With SQL Server 2000 we were limited to auditing login information and capturing data changes using DML (Data Manipulation Language) triggers; there was no easy way to track DDL (Data Definition Language) changes in our databases.

    With SQL Server 2005, DDL (Data Definition Language) triggers were introduced. This type of trigger is different from INSERT, UPDATE, and DELETE triggers; it fires when a change is made using commands such as ALTER, CREATE, or DROP. The trigger fires after the event occurs, so there is no INSTEAD OF trigger option as there is with DML triggers.

    Creating a DDL trigger is just as simple as creating a DML trigger. Here is an example of a trigger that would fire whenever there is a DROP_TABLE or ALTER_TABLE event.

    CREATE TRIGGER tr_tableChange
    ON DATABASE
    FOR DROP_TABLE, ALTER_TABLE
    AS
    PRINT 'your statement is logged for auditing purpose'

    DDL events can also be collected by using event groups. The example below fires for any event in the DDL_LOGIN_EVENTS group, which includes DROP_LOGIN, CREATE_LOGIN, and ALTER_LOGIN.

    CREATE TRIGGER tr_LoginEvents
    ON ALL SERVER
    FOR DDL_LOGIN_EVENTS
    AS
    PRINT 'your statement is logged for auditing purpose'

    Here is a listing of the event groups; it can be found in SQL Server 2005 Books Online. The way this works is that each group consists of individual events, and some event groups contain other event groups. So, for example, DDL_SERVER_LEVEL_EVENTS captures all events that occur on the server.
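As a sketch of that nesting, a trigger on the top-level group would capture every server-scoped DDL event (the trigger name here is hypothetical):

```sql
CREATE TRIGGER tr_AllServerEvents
ON ALL SERVER
FOR DDL_SERVER_LEVEL_EVENTS
AS
PRINT 'your statement is logged for auditing purpose'
```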

    Next we will see how we can extract information from this DDL Trigger using EVENTDATA() function.

    Sunday, October 25, 2009

    Use of a Function in the WHERE Clause


    Let's look at what happens when we use a function in the WHERE clause.

    There are two places where we can use a function: first, to fetch a value in the SELECT clause, and second, to compare a value in the WHERE clause.

    Both usages affect the performance of the query.

    Let's see a simple example of function.

    Create a function that returns an employee's LastName, FirstName for a given EmployeeID.

    CREATE FUNCTION dbo.GetEmpName
    (
    @EmployeeId INT -- Add the parameters for the function here
    )
    RETURNS NVARCHAR(100)
    AS
    BEGIN
    -- Declare the return variable here
    DECLARE @empname NVARCHAR(100)
    SELECT @empname = LastName + ', ' + FirstName FROM Employees WHERE EmployeeID = @EmployeeId
    RETURN @empname
    END

    Let's see how this works:

    SELECT dbo.GetEmpName(9) AS Employee_Name

    Dodsworth, Anne

    SELECT DISTINCT dbo.GetEmpName(a.EmployeeID) AS Employee, b.TerritoryDescription
    FROM dbo.EmployeeTerritories a
    INNER JOIN dbo.Territories b
    ON a.TerritoryID=b.TerritoryID
    WHERE dbo.GetEmpName(a.EmployeeID)='Davolio, Nancy'


    Employee TerritoryDescription
    Davolio, Nancy Neward
    Davolio, Nancy Wilton

    Let's look at the profiler to see how the query is executing:

    As we can see in the profiler, the function was executed many times, which slows down query performance.

    Let's change our function a little:

    CREATE FUNCTION dbo.GetEmpNameCrossJoin
    ( @EmpId INT ) -- Add the parameters for the function here
    RETURNS TABLE
    AS
    RETURN
    ( SELECT LastName + ', ' + FirstName AS EName
      FROM dbo.Employees
      WHERE EmployeeID = @EmpId )
    --Now execute this using Cross APPLY
    SELECT distinct I.EName, b.TerritoryDescription
    FROM dbo.EmployeeTerritories a
    INNER JOIN dbo.Territories b
    ON a.TerritoryID=b.TerritoryID
    CROSS APPLY dbo.GetEmpNameCrossJoin(a.EmployeeID) I

    We have changed the function to return table instead of a scalar value and the function call is done using CROSS APPLY.

    Now there is only one row in the profiler trace.

    This way we can improve the performance of the query.

    Happy SQL Coding.

    Wednesday, October 21, 2009



    Synonyms in SQL Server 2005

    In SQL Server 2005, a new feature called Synonyms was introduced. Let's discuss it today.

    SQL Server 2005 introduced synonyms, which enable a reference to another object (view, table, stored procedure, or function), potentially on a different server, database, or schema. This means the object referenced in all of our code can really be a completely different underlying object, with no coding changes necessary. Think of a synonym as an alias: a means to simplify migrations and application testing without making any dependent coding changes.

    Create a synonym via SQL Server 2005 Management Studio

    To access the synonym interface navigate to SQL Server 2005 Management Studio => Server Name => Databases => Database Name => Synonyms => Right click on the Synonyms folder => New Synonym

    In the General tab - Key in the synonym name (which should be a new name in the schema) and the information for the aliased object

    In Permissions Tab - Click the 'Add' button to grant rights to the synonym for users

    In Extended Properties Tab - Add any extended properties for documentation purposes.

    How to add Synonyms using T-SQL

    USE [NorthWind]

    CREATE SYNONYM [dbo].[Empls]
    FOR [WSNG1110065709\SQLEXPRESS].[Northwind].[dbo].[Employees]
    EXEC sys.sp_addextendedproperty @name=N'Description',
    @value=N'This is a synonym referencing the Northwind database ',
    @level0type=N'SCHEMA',@level0name=N'dbo', @level1type=N'SYNONYM',@level1name=N'Empls'

    Let's validate this:

    select * from [dbo].[Empls]

    EmployeeID LastName FirstName Title
    1 Davolio Nancy Sales Representative
    2 Fuller Andrew Vice President, Sales
    3 Leverling Janet Sales Representative

    Let's Insert data using Synonym:

    Insert into [dbo].[Empls]
    ( LastName, FirstName, Title)
    values ('Thru Synonym', 'Synonym','Synonym')

    Let's see data through Synonym:

    select * from [dbo].[Empls]

    Let's see data directly from table:

    select * from Employees

    EmployeeID LastName FirstName Title TitleOfCourtesy
    7 King Robert Sales Representative Mr.
    8 Callahan Laura Inside Sales Coordinator Ms.
    9 Dodsworth Anne Sales Representative Ms.
    11 Thru Synonym Synonym Synonym NULL
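To see why synonyms simplify migrations: if the Employees table later moves to another server, only the synonym needs to be repointed; none of the code that references [dbo].[Empls] changes. A sketch (the new server name is hypothetical):

```sql
DROP SYNONYM [dbo].[Empls]

CREATE SYNONYM [dbo].[Empls]
FOR [NEWSERVER\SQLEXPRESS].[Northwind].[dbo].[Employees]
```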

    Do let me know if there is any concern.
    Happy SQL Coding..

    Monday, October 19, 2009

    SQL Server database recovery model


    Today we will discuss the various recovery models of SQL Server. A recovery plan is as important as water to a person searching for land in a desert: it lets us restore the database whenever there is a database failure.

    Each database on a server can be set up differently, and the recovery model can be changed as needed.

    Let's discuss the three models in detail:


    Simple

    The simple recovery model gives a simple backup that can be used to replace our entire database in the event of a failure, or if we need to restore our database to another server. With this recovery model we have the ability to do complete backups (an entire copy) or differential backups (any changes since the last complete backup). With this recovery model we are exposed to any failures since the last backup completed.

    Why we may choose this recovery model:

  • Your data is not critical and can easily be recreated

  • The database is only used for test or development

  • Data is static and does not change

  • Losing any or all transactions since the last backup is not a problem

  • Data is derived and can easily be recreated

    Types of backups you can run:

  • Complete backups

  • Differential backups

  • File and/or Filegroup backups

  • Partial backups

  • Copy-Only backups

    Bulk_Logged

    With the Bulk_Logged recovery model, bulk operations such as BULK INSERT, CREATE INDEX, SELECT INTO, etc. are minimally logged and therefore do not take as much space in the transaction log. The advantage of this recovery model is that the transaction log will not grow as large when you are doing bulk operations, and you retain the ability to do point-in-time recovery as long as the last transaction log backup does not include a bulk operation as mentioned above. If no bulk operations are run, this recovery model works the same as the Full recovery model. One thing to note: with this recovery model you also need to issue transaction log backups, otherwise the database transaction log will continue to grow.

    Here are some reasons why we may choose this recovery model:

  • Data is critical, but you do not want to log large bulk operations

  • Bulk operations are done at different times versus normal processing.

  • You still want to be able to recover to a point in time

    Types of backups you can run:

  • Complete backups

  • Differential backups

  • File and/or Filegroup backups

  • Partial backups

  • Copy-Only backups

  • Transaction log backups

    Full

    The full recovery model is the most complete recovery model and allows you to recover all data to any point in time, as long as all backup files are usable. With this model all operations are fully logged, which means you can recover the database to any point. In addition, if the database is set to the full recovery model you need to also issue transaction log backups, otherwise the database transaction log will continue to grow forever.

    Here are some reasons why you may choose this recovery model:

  • Data is critical and data can not be lost.

  • You always need the ability to do a point-in-time recovery.

  • You are using database mirroring

    Types of backups you can run:

  • Complete backups

  • Differential backups

  • File and/or Filegroup backups

  • Partial backups

  • Copy-Only backups

  • Transaction log backups

    How to update / select Recovery Models

    The recovery model can be changed as needed. If your database is in the Full recovery model and you want to issue some bulk operations that you want minimally logged, you can change the recovery model to Bulk_Logged, complete your operations, and then change the recovery model back. One thing to note: since there will be a bulk operation in your transaction log, you cannot do a point-in-time recovery using the transaction log backup file that contains the bulk operation, but any subsequent transaction log backup can be used for point-in-time recovery.

    Also, if your database is in the Simple recovery model and you change to the Full recovery model you will want to issue a full backup immediately, so you can then begin to also do transaction log backups. Until you issue a full backup you will not be able to take transaction log backups.
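The sequence when moving from Simple to Full might look like this (the backup paths are hypothetical):

```sql
ALTER DATABASE YourDatabase SET RECOVERY FULL

-- a full backup must be taken before log backups are possible
BACKUP DATABASE YourDatabase TO DISK = 'c:\Backups\YourDatabase.bak'

-- from this point on, transaction log backups can be taken
BACKUP LOG YourDatabase TO DISK = 'c:\Backups\YourDatabase.trn'
```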

    To change the recovery model you can use either SQL Server Management Studio or T-SQL as follows:

    Management Studio

    Right click on the database name, select Properties, select the Options tab and select recovery model from the drop-down list and select OK to save.


    T-SQL

    ALTER DATABASE YourDatabase SET RECOVERY FULL        -- set to Full recovery
    ALTER DATABASE YourDatabase SET RECOVERY BULK_LOGGED -- set to Bulk Logged recovery
    ALTER DATABASE YourDatabase SET RECOVERY SIMPLE      -- set to Simple recovery

    What is TempDB database?

    In SQL Server 2005, TempDB plays a very important role, and some of the best practices have changed, as has the need to follow them on a wider scale. In many installations TempDB has been left at its default configuration. Unfortunately, that configuration is not ideal in many environments.

    Let's see how TempDB can be optimized to improve the overall SQL Server performance.

    Responsibilities of TempDB

  • Global (##temp) or local (#temp) temporary tables, temporary table indexes, temporary stored procedures, table variables, tables returned in table-valued functions or cursors.

  • Database Engine objects to complete a query such as work tables to store intermediate results for spools or sorting from particular GROUP BY, ORDER BY, or UNION queries.

  • Row versioning values for online index processes, Multiple Active Result Sets (MARS) sessions, AFTER triggers and index operations (SORT_IN_TEMPDB).

  • DBCC CHECKDB work tables.

  • Large object (varchar(max), nvarchar(max), varbinary(max), text, ntext, image, xml) data type variables and parameters.

    Best practices for TempDB are:

  • Do not change collation from the SQL Server instance collation.

  • Do not change the database owner from sa.

  • Do not drop the TempDB database.

  • Do not drop the guest user from the database.

  • Do not change the recovery model from SIMPLE.

  • Ensure the disk drives TempDB resides on have RAID protection i.e. 1, 1 + 0 or 5 in order to prevent a single disk failure from shutting down SQL Server. Keep in mind that if TempDB is not available then SQL Server cannot operate.

  • If SQL Server system databases are installed on the system partition, at a minimum move the TempDB database from the system partition to another set of disks.

  • Size the TempDB database appropriately. For example, if you use the SORT_IN_TEMPDB option when you rebuild indexes, be sure to have sufficient free space in TempDB to store sorting operations. In addition, if you are running into insufficient space errors in TempDB, be sure to determine the culprit and either expand TempDB or re-code the offending process.

    Hope it will help develop some love for TempDB also.
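As a sketch of the recommendation above to move TempDB off the system partition, the data and log files can be relocated with ALTER DATABASE; the new paths here are hypothetical, and the change takes effect after a restart of SQL Server:

```sql
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'E:\TempDB\tempdb.mdf')

ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'E:\TempDB\templog.ldf')
```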

    Saturday, October 17, 2009

    Derived Tables

    Sometimes querying data is not that simple and there may be the need to create temporary tables or views to predefine how the data should look prior to its final output. Unfortunately there are problems with both of these approaches if you are trying to query data on the fly.

    Temporary table approach
    With the temporary table approach you need multiple steps in your process: first create the temporary table, then populate it, then select data from it, and lastly clean it up.
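A sketch of those four steps against the Northwind data used later in this post (the temp table name is hypothetical):

```sql
-- 1. create the temporary table
CREATE TABLE #CustomerSales (CustomerID NCHAR(5), TotalSales MONEY)

-- 2. populate it
INSERT INTO #CustomerSales (CustomerID, TotalSales)
SELECT o.CustomerID, SUM(UnitPrice * Quantity)
FROM [Order Details] AS od
INNER JOIN Orders AS o ON od.OrderID = o.OrderID
GROUP BY o.CustomerID

-- 3. select from it
SELECT * FROM #CustomerSales

-- 4. clean up
DROP TABLE #CustomerSales
```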

    View approach
    With the view approach you need to predefine how this data will look, create the view and then use the view in your query. Granted if this is something that you would be doing over and over again this might make sense to just create a view, but let's look at a totally different approach.

    SQL Server allows you to create derived tables on the fly and then use them within your query. This is similar to creating a temporary table and then using the temporary table in your query.

    Let's take a look at an example where we query Northwind database to find out how many customers fall into various categories based on sales. The categories that we have predefined are as follows:

    Total Sales between 0 and 5,000 = Micro
    Total Sales between 5,001 and 10,000 = Small
    Total Sales between 10,001 and 15,000 = Medium
    Total Sales between 15,001 and 20,000 = Large
    Total Sales > 20,000 = Very Large

    There are several ways to get this data but we will use derived tables approach.

    The first step is to find out the total sales by each customer, which can be done with the following statement.

    SELECT o.CustomerID,
    SUM(UnitPrice * Quantity) AS TotalSales
    FROM [Order Details] AS od
    INNER JOIN Orders AS o
    ON od.OrderID = o.OrderID
    GROUP BY o.CustomerID

    This is a partial list of the output:

    CustomerID TotalSales
    ALFKI 4596.2000
    ANATR 1402.9500
    ANTON 7515.3500
    WOLZA 3531.9500

    Now classify the TotalSales value into the OrderGroups that were specified above:

    SELECT o.CustomerID,
    SUM(UnitPrice * Quantity) AS TotalSales,
    CASE
    WHEN SUM(UnitPrice * Quantity)
    BETWEEN 0 AND 5000 THEN 'Micro'
    WHEN SUM(UnitPrice * Quantity)
    BETWEEN 5001 AND 10000 THEN 'Small'
    WHEN SUM(UnitPrice * Quantity)
    BETWEEN 10001 AND 15000 THEN 'Medium'
    WHEN SUM(UnitPrice * Quantity)
    BETWEEN 15001 AND 20000 THEN 'Large'
    WHEN SUM(UnitPrice * Quantity)
    > 20000 THEN 'Very Large'
    END AS OrderGroup
    FROM [Order Details] AS od
    INNER JOIN Orders AS o
    ON od.OrderID = o.OrderID
    GROUP BY o.CustomerID

    This is a partial list of the output:

    CustomerID TotalSales OrderGroup

    ALFKI 4596.2000 Micro
    ANATR 1402.9500 Micro
    ANTON 7515.3500 Small
    WOLZA 3531.9500 Micro

    There can be many customers who fit into each of these groups and this is where the derived table comes into play. What we are doing here is using the same query from the step above, but using it as derived table OG. Then we are selecting data from this derived table for our final output just like we would with any other query. All of the columns that are created in the derived table are now available in our final query.

    SELECT OG.OrderGroup,
    COUNT(OG.OrderGroup) AS OrderGroupCount
    FROM (SELECT o.CustomerID,
    SUM(UnitPrice * Quantity) AS TotalSales,
    CASE
    WHEN SUM(UnitPrice * Quantity)
    BETWEEN 0 AND 5000 THEN 'Micro'
    WHEN SUM(UnitPrice * Quantity)
    BETWEEN 5001 AND 10000 THEN 'Small'
    WHEN SUM(UnitPrice * Quantity)
    BETWEEN 10001 AND 15000 THEN 'Medium'
    WHEN SUM(UnitPrice * Quantity)
    BETWEEN 15001 AND 20000 THEN 'Large'
    WHEN SUM(UnitPrice * Quantity)
    > 20000 THEN 'Very Large'
    END AS OrderGroup
    FROM [Order Details] AS od
    INNER JOIN Orders AS o
    ON od.OrderID = o.OrderID
    GROUP BY o.CustomerID) AS OG
    GROUP BY OG.OrderGroup

    This is the complete list of the output from the above query.

    OrderGroup OrderGroupCount
    Large 10
    Medium 11
    Micro 33
    Small 15
    Very Large 20
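On SQL Server 2005 and later, the same derived-table query can also be written as a common table expression (CTE). This is a rough equivalent of the final query above, shown only for comparison:

```sql
-- Same logic as the derived table, expressed as a CTE named OG
WITH OG AS
(
    SELECT o.CustomerID,
           CASE WHEN SUM(UnitPrice * Quantity) BETWEEN 0 AND 5000 THEN 'Micro'
                WHEN SUM(UnitPrice * Quantity) BETWEEN 5001 AND 10000 THEN 'Small'
                WHEN SUM(UnitPrice * Quantity) BETWEEN 10001 AND 15000 THEN 'Medium'
                WHEN SUM(UnitPrice * Quantity) BETWEEN 15001 AND 20000 THEN 'Large'
                WHEN SUM(UnitPrice * Quantity) > 20000 THEN 'Very Large'
           END AS OrderGroup
    FROM [Order Details] AS od
    INNER JOIN Orders AS o ON od.OrderID = o.OrderID
    GROUP BY o.CustomerID
)
SELECT OG.OrderGroup, COUNT(OG.OrderGroup) AS OrderGroupCount
FROM OG
GROUP BY OG.OrderGroup
```

The CTE form reads top-down, which some find easier to follow than a nested derived table; the execution plan is the same.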

    Dropping multiple objects using a single DROP statement

    Today, let's see how we can drop multiple objects with single statement.

    Almost every SQL Server object that is created may need to be dropped at some time, especially during development, when you create a bunch of temporary objects that you do not want to keep in the database for long. Most SQL Server users drop one object at a time using either SSMS or a DROP statement. In many scenarios we may need to drop several objects of the same type. Is there a way to drop several objects with fewer lines of code?

    With T-SQL we can drop multiple objects of the same type through a single drop statement. Almost any object that can be dropped in a single drop statement can also be dropped simultaneously with other objects of the same type through one drop statement.

    Some of these include objects like databases, tables, functions, stored procedures, rules, synonyms etc.

    Let's look at an example.

    First we create a few stored procedures, so we can test single and multiple drops.

    Script # 1: Create 4 stored procedures

    USE AdventureWorks
    GO
    CREATE PROCEDURE USP1 AS SELECT TOP 10 * FROM Person.Address
    GO
    CREATE PROCEDURE USP2 AS SELECT TOP 10 * FROM Person.Address
    GO
    CREATE PROCEDURE USP3 AS SELECT TOP 10 * FROM Person.Address
    GO
    CREATE PROCEDURE USP4 AS SELECT TOP 10 * FROM Person.Address
    GO

    Now we have 4 stored procedures to work with.

    Let's drop the first three using a single drop statement as shown below.

    Script # 2: Drop USP1, USP2, USP3 through three drop statements

    USE AdventureWorks
    GO
    DROP PROCEDURE USP1
    DROP PROCEDURE USP2
    DROP PROCEDURE USP3

    Let's create USP1, USP2 and USP3 again.

    USE AdventureWorks
    GO
    CREATE PROCEDURE USP1 AS SELECT TOP 10 * FROM Person.Address
    GO
    CREATE PROCEDURE USP2 AS SELECT TOP 10 * FROM Person.Address
    GO
    CREATE PROCEDURE USP3 AS SELECT TOP 10 * FROM Person.Address
    GO

    The following script will drop multiple stored procedures through one drop statement. We just need to list the objects to drop and separate them with commas, as shown below. The rest of the syntax is the same.

    Script # 3: Drop USP1, USP2, USP3 through single drop statement
    USE AdventureWorks
    GO
    DROP PROCEDURE USP1, USP2, USP3

    Through Script # 3 USP1, USP2 and USP3 have been dropped in a single drop statement.

    Following are some of the pros and cons of multiple object drops:


  • The multiple objects drop approach is applicable to all versions of SQL Server.

  • If some objects in the list do not exist or can not be dropped due to insufficient privileges, the remaining objects will still be dropped successfully without any negative impact.

  • Although no query plan is generated for drop statements, dropping multiple objects in one statement sends fewer bytes over the network when the request is submitted. This can be verified from the network statistics when client statistics are enabled in SQL Server Management Studio (SSMS).

  • You can get more done with fewer lines of code.

    Shortcomings

  • It is not possible to apply a pre-existence check, such as IF EXISTS, for the objects you want to drop.

  • It should be obvious, but worth mentioning, that you can not drop objects of different types together in a single statement. For example, you can not drop tables and stored procedures at the same time.

    Hopefully this approach will help us write fewer statements and reduce network traffic.
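One possible workaround for the missing IF EXISTS support is to check for the objects yourself before issuing the combined drop. This is a sketch using the USP1-USP3 procedures from the scripts above:

```sql
-- Only issue the combined drop when all three procedures exist;
-- OBJECT_ID(..., 'P') returns NULL when the procedure is missing
IF OBJECT_ID('dbo.USP1', 'P') IS NOT NULL
   AND OBJECT_ID('dbo.USP2', 'P') IS NOT NULL
   AND OBJECT_ID('dbo.USP3', 'P') IS NOT NULL
    DROP PROCEDURE USP1, USP2, USP3
```

This keeps the single-statement drop while avoiding the error messages that would otherwise be raised for objects that are not there.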

    Friday, October 16, 2009

    Table variables & how they differ from temp tables

    Hello Friends,

    I am sure most of you have heard of table variables, but you may not be sure how to use them in a stored procedure. What purpose do they serve, and why not just use temporary tables instead?

    Let's discuss this today.

    If you already know how to create and use a temporary table then you're going to have no problem understanding how to use a table variable. The usage is just about identical.

    Temporary Tables

    Temporary tables are created in tempdb. The name "temporary" is slightly misleading, for even though the tables are instantiated in tempdb, they are backed by physical disk and their changes are even logged in the transaction log. They act like regular tables in that you can query their data via SELECT queries and modify their data via UPDATE, INSERT, and DELETE statements. If created inside a stored procedure, they are destroyed upon completion of the stored procedure. Furthermore, the scope of any particular temporary table is the session in which it is created, meaning it is only visible to the current user. Multiple users could create a temp table named #TableX, and any queries run simultaneously would not affect one another - they would remain autonomous transactions and the tables would remain autonomous objects. You may notice that my sample temporary table name starts with a "#" sign. This prefix tells SQL Server that it is dealing with a temporary table.

    The syntax for creating a temporary table is identical to creating a physical table in Microsoft SQL Server with the exception of the aforementioned pound sign (#):

    CREATE TABLE dbo.#Cars
    (
    Car_id int NOT NULL,
    ColorCode varchar(10),
    ModelName varchar(20),
    Code int,
    DateEntered datetime
    )

    Temporary tables act like physical tables in many ways. You can create indexes and statistics on temporary tables. You can also apply Data Definition Language (DDL) statements against temporary tables to add constraints, defaults, and referential integrity such as primary and foreign keys. You can also add and drop columns from temporary tables. For example, if I wanted to add a default value to the DateEntered column and create a primary key using the Car_id field I would use the following syntax:

    ALTER TABLE dbo.#Cars
    ADD CONSTRAINT [DF_DateEntered] DEFAULT (GETDATE()) FOR [DateEntered],
    PRIMARY KEY CLUSTERED
    ( [Car_id] ) ON [PRIMARY]

    Table Variables

    The syntax for creating table variables is quite similar to creating either regular or temporary tables. The only differences involve a naming convention unique to variables in general, and the need to declare the table variable as you would any other local variable in Transact SQL:

    DECLARE @Cars table (
    Car_id int NOT NULL,
    ColorCode varchar(10),
    ModelName varchar(20),
    Code int,
    DateEntered datetime )

    As you can see the syntax bridges local variable declaration (DECLARE @variable_name variable_data_type) and table creation (column_name, data_type, nullability). As with any other local variable in T-SQL, the table variable must be prefixed with an "@" sign. Unlike temporary or regular table objects, table variables have certain clear limitations.

  • Table variables can not have non-clustered indexes

  • You can not add constraints to a table variable after it is declared (though PRIMARY KEY and UNIQUE constraints can be declared inline)

  • You can not add default values to table variable columns after declaration (a DEFAULT can be declared inline)

  • Statistics can not be created against table variables

    Similarities with temporary tables include:

  • Instantiated in tempdb

  • Clustered indexes can be created on table variables and temporary tables

  • Both are logged in the transaction log

  • Just as with temp and regular tables, users can perform all Data Modification Language (DML) queries against a table variable: SELECT, INSERT, UPDATE, and DELETE.

    Temporary tables are usually preferred over table variables for a few important reasons: they behave more like physical tables with respect to indexing and statistics creation, and they have a longer lifespan. An interesting limitation of table variables comes into play when executing code that involves a table variable. The following two blocks of code create a temporary table called #Cars and a table variable called @Cars. A row is then inserted into each table, and the table is finally queried for its values.

    --Temp Table:
    CREATE TABLE dbo.#Cars
    (
    Car_id int NOT NULL,
    ColorCode varchar(10),
    ModelName varchar(20),
    Code int,
    DateEntered datetime
    )

    INSERT INTO dbo.#Cars (Car_id, ColorCode, ModelName, Code, DateEntered)
    VALUES (1,'BlueGreen', 'Austen', 200801, GETDATE())

    SELECT Car_id, ColorCode, ModelName, Code, DateEntered FROM dbo.#Cars

    DROP TABLE dbo.[#Cars]

    This returns the single row that was inserted.

    --Table Variable:

    DECLARE @Cars table
    ( Car_id int NOT NULL,
    ColorCode varchar(10),
    ModelName varchar(20),
    Code int ,
    DateEntered datetime )

    INSERT INTO @Cars (Car_id, ColorCode, ModelName, Code, DateEntered)
    VALUES (1,'BlueGreen', 'Austen', 200801, GETDATE())

    SELECT Car_id, ColorCode, ModelName, Code, DateEntered FROM @Cars
    The results differ, depending upon how you run the code. If you run the entire block of code the following results are returned:


    However, you receive an error if you don't execute all the code simultaneously:

    Msg 1087, Level 15, State 2, Line 1
    Must declare the table variable "@Cars"

    What is the reason for this behavior? It is quite simple. A table variable's lifespan is only the duration of the batch in which it is declared. If we execute the DECLARE statement first and then attempt to insert records into the @Cars table variable in a separate batch, we receive the error because the table variable has passed out of scope. The results are the same if we declare and insert records into @Cars in one batch and then attempt to query the table in another. Notice that we need to execute a DROP TABLE statement against #Cars. This is because the temporary table persists until the session ends or until the table is dropped.

    Table variables serve a very useful purpose in returning results from table-valued functions. Take for example the following code, which creates a user-defined function that returns values from the Customers table in the Northwind database for any customers in a given PostalCode:

    CREATE FUNCTION dbo.usp_customersbyPostalCode ( @PostalCode VARCHAR(15) )
    RETURNS @CustomerHitsTab TABLE (
    [CustomerID] [nchar] (5),
    [ContactName] [nvarchar] (30),
    [Phone] [nvarchar] (24),
    [Fax] [nvarchar] (24) )
    AS
    BEGIN
    DECLARE @HitCount INT

    INSERT INTO @CustomerHitsTab
    SELECT [CustomerID],
    [ContactName],
    [Phone],
    [Fax]
    FROM [Northwind].[dbo].[Customers]
    WHERE PostalCode = @PostalCode

    SELECT @HitCount = COUNT(*) FROM @CustomerHitsTab

    IF @HitCount = 0
    --No Records Match Criteria
    INSERT INTO @CustomerHitsTab (
    [CustomerID],
    [ContactName],
    [Phone],
    [Fax] )
    VALUES ('','No Companies In Area','','')

    RETURN
    END


    The @CustomerHitsTab table variable is created for the purpose of collecting and returning results of a function to the end user calling the dbo.usp_customersbyPostalCode function.

    SELECT * FROM dbo.usp_customersbyPostalCode('1010')

    SELECT * FROM dbo.usp_customersbyPostalCode('05033')

    An unofficial rule-of-thumb for usage is to use table variables for returning results from user-defined functions that return table values and to use temporary tables for storage and manipulation of temporary data; particularly when dealing with large amounts of data. However, when lesser row counts are involved, and when indexing is not a factor, both table variables and temporary tables perform comparably. It then comes down to preference of the individual responsible for the coding process.
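One more difference worth knowing, not covered above but easy to verify yourself: table variables are not affected by transaction rollback, while temporary tables are. A small sketch:

```sql
-- Table variables ignore ROLLBACK; temp tables honor it
DECLARE @T table (i int)
CREATE TABLE #T (i int)

BEGIN TRANSACTION
    INSERT INTO @T VALUES (1)
    INSERT INTO #T VALUES (1)
ROLLBACK TRANSACTION

SELECT COUNT(*) AS TableVariableRows FROM @T  -- 1: the insert survives the rollback
SELECT COUNT(*) AS TempTableRows     FROM #T  -- 0: the insert was rolled back

DROP TABLE #T
```

This makes table variables handy for capturing diagnostic information that must survive a rollback, such as error details collected before the transaction is undone.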

    Happy SQL Coding

    Thursday, October 15, 2009

    How to execute T-SQL code from a file

    You may have many reasons for wanting to execute T-SQL code from a file, but one reason comes to mind for me: To build a loaded database from scratch. I have one master script that creates an empty database and performs several small tasks. In between each task, the master script also executes T-SQL code from several files. The files contain code to create a variety of database objects and copy data. The same files are also used for other database administration chores.

    You may also have many reasons for wanting to avoid osql, but I can think of only one: osql is a command prompt utility and, as such, it must be invoked from T-SQL through the xp_cmdshell extended stored procedure. Many database administrators strictly control which users can execute that stored procedure to avoid a potential security risk. The idea behind writing a custom routine to execute T-SQL code from a file is to avoid a reliance on xp_cmdshell. However, one of the stored procedures mentioned above uses xp_cmdshell to import the file into a table. I might be able to overlook that inexplicable choice if the routine offered features that osql lacks -- but that's not the case. In fact, the routine imposes some significant additional limitations.

    The two stored procedures in question have a number of other shortcomings. One or both routines impose:

  • Maximum file size of 80 KB.

  • Maximum batch size of 8 KB.

  • Single-batch limitation within the file, enforced by removing any batch separators ("GO"). Such an action would render many script files useless.

  • Maximum line length of approximately 250 characters.

  • Both routines use the temporary database in an odd and risky way. One creates a normal user table within tempdb and the other creates a global temporary table, but neither does so by necessity. Both run the risk of having table names conflict if two connections run the stored procedure at the same time, which could have some ugly consequences. One of them fails to ensure that the lines of T-SQL code are executed in the order in which they appear in the file and the other one fails to preserve the format of the T-SQL code. The format of the code matters when database objects (such as stored procedures) are created by that code.

    The system stored procedure presented in this tip addresses those shortcomings: It does not impose a maximum file size. It allows batch sizes up to 80 KB. It allows an effectively unlimited number of batches in the file. It allows thousands of characters in a line of T-SQL code. This routine uses the temporary database in a standard and safe way. It ensures that the lines of T-SQL code are executed in the correct order and it preserves the format of the T-SQL code as it exists in the file. Of course, this stored procedure does not rely on xp_cmdshell or osql, and it offers a feature that osql lacks.

    The SQL code in Listing 1 creates a system stored procedure named sp_ExecuteSQLFromFile. The routine reads a specified file and processes the contents as batches of T-SQL code. Each batch should end with the word "GO" on a line by itself, which is the standard batch separator in T-SQL files. The routine optionally returns a result set containing information about each T-SQL batch in the file. The result set includes batch numbers, locations within the file (first and last lines), line counts, execution periods (start and end times), elapsed times and error codes.

    The SQL code in Listing 1 also creates a format file in the SQL Server program directory ("C:\Program Files\Microsoft SQL Server\File.fmt"). The location can be changed by modifying Listing 1 in two places. The xp_cmdshell extended stored procedure is used to create the format file. The ability to execute xp_cmdshell is required only when the sp_ExecuteSQLFromFile stored procedure is created and only by the user doing the creation. That's a very different operation than actually executing sp_ExecuteSQLFromFile.

    The format file is used when importing the T-SQL code from a file into a temporary table with the BULK INSERT statement. The format file allows an IDENTITY column to exist in the temporary table. The IDENTITY column provides a way to order the rows of the temporary table when assembling the T-SQL code to be executed. Unless an ORDER BY clause is used in the involved SELECT statement there's no guarantee that the rows are returned in the order they appear in the file.
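The import-and-order step described above can be sketched in isolation like this; the paths are placeholders, and the format file is the single-field one created by Listing 1:

```sql
-- Load each line of the script file into a row, numbering rows with an IDENTITY
CREATE TABLE #Script (Line varchar(8000), Work int IDENTITY(1,1))

BULK INSERT #Script FROM 'D:\Scripts\Script.sql'
WITH (FORMATFILE = 'C:\Program Files\Microsoft SQL Server\File.fmt')

-- Without the ORDER BY there is no guarantee the lines come back in file order
SELECT Line FROM #Script ORDER BY Work

DROP TABLE #Script
```

The format file maps the single parsed field to column 1 (Line), so the IDENTITY column Work is populated automatically as each row is inserted.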

    The sp_ExecuteSQLFromFile stored procedure accepts up to three parameters, but only one of them is required.

    The first parameter (@PCFetch) specifies the location for the T-SQL code file. The parameter must provide a complete path, including file name, to the desired T-SQL code file. The SQL Server service account must be allowed to read files in that location.

    The second parameter (@PCAdmin) is optional and it specifies the location of the format file. The parameter must provide a complete path, including file name, to the format file. The default location is a file named "File.fmt" in the SQL Server program directory ("C:\Program Files\Microsoft SQL Server\File.fmt"). The SQL code in Listing 1 creates a format file in that location.

    The third parameter (@PCUltra) is optional and it specifies whether a batch information result set is returned. A value of zero (0) means no result set. A value of one (1) means a result set is returned with a row for each batch in the file.

    The example below is for illustration purposes only. The path must be changed to a location appropriate for your environment.

    Suppose the contents of Script.sql are

    Select Getdate()
    Select Count(1) from sysobjects
    Select top 10 * from sysobjects

    and the contents of the File.fmt format file are

    7.0
    1
    1 SQLCHAR 0 8000 "\r\n" 1 Line SQL_Latin1_General_CP1_CI_AS

    Here 7.0 is the format file version and 1 (on the second line) is the number of fields. On the field line, 1 is the field's serial number, SQLCHAR is the data type, 0 is the prefix length (used only for binary files), 8000 is the maximum length (used for fixed-length fields), "\r\n" is the terminator, 1 is the database column number, and Line is the database column name (which is not actually used). The last column is the collation for the data in the file.

    With the format file and the script to execute both in place, we can now run the procedure:

    EXECUTE sp_ExecuteSQLFromFile 'D:\Scripts\Script.sql',NULL,1

    I hope you find this system stored procedure to be useful.
    --Stored Procedure: sp_ExecuteSQLFromFile

    USE master
    GO

    CREATE PROCEDURE dbo.sp_ExecuteSQLFromFile
    @PCFetch varchar(1000),
    @PCAdmin varchar(1000) = NULL,
    @PCUltra bit = 0
    AS

    DECLARE @Return int
    DECLARE @Retain int
    DECLARE @Status int
    SET @Status = 0
    DECLARE @Task varchar(2000)
    DECLARE @Work varchar(2000)
    DECLARE @Line varchar(8000)
    DECLARE @SQL1 varchar(8000)
    DECLARE @SQL2 varchar(8000)
    DECLARE @SQL3 varchar(8000)
    DECLARE @SQL4 varchar(8000)
    DECLARE @SQL5 varchar(8000)
    DECLARE @SQL6 varchar(8000)
    DECLARE @SQL7 varchar(8000)
    DECLARE @SQL8 varchar(8000)
    DECLARE @SQL9 varchar(8000)
    DECLARE @CRLF char(2)
    DECLARE @Save tinyint
    DECLARE @Have int
    DECLARE @SQLA int
    DECLARE @SQLZ int
    DECLARE @SQLN int
    DECLARE @BOLA datetime
    DECLARE @BOLZ datetime
    DECLARE @BOLN datetime

    CREATE TABLE #DBAT (Line varchar(8000), Work int IDENTITY(1,1))

    CREATE TABLE #DBAZ (Batch int, SQLA int, SQLZ int, SQLN int, BOLA datetime, BOLZ datetime, BOLN datetime, Status int)

    SET @CRLF = CHAR(13) + CHAR(10)

    SET @SQL1 = ''
    SET @SQL2 = ''
    SET @SQL3 = ''
    SET @SQL4 = ''
    SET @SQL5 = ''
    SET @SQL6 = ''
    SET @SQL7 = ''
    SET @SQL8 = ''
    SET @SQL9 = ''

    SET @Save = 1
    SET @Have = 0

    SET @SQLA = 1

    SET @PCAdmin = ISNULL(@PCAdmin,'C:\Program Files\Microsoft SQL Server\File.fmt')

    SET @Task = 'BULK INSERT #DBAT FROM ' + CHAR(39) + @PCFetch + CHAR(39) + ' WITH (FORMATFILE = ' + CHAR(39) + @PCAdmin + CHAR(39) + ')'

    EXECUTE (@Task)

    --To display the scripts that will be executed.
    Select * from #DBAT

    SET @Return = @@ERROR
    IF @Status = 0
    SET @Status = @Return


    DECLARE Lines CURSOR FOR
    SELECT Line, Work FROM #DBAT ORDER BY Work

    OPEN Lines

    FETCH NEXT FROM Lines INTO @Line, @SQLZ

    WHILE @@FETCH_STATUS = 0 AND @Status = 0
    BEGIN

    IF UPPER(LTRIM(RTRIM(@Line))) = 'GO'
    BEGIN

    SET @Have = @Have + 1
    SET @BOLA = GETDATE()

    EXECUTE (@SQL1 + @SQL2 + @SQL3 + @SQL4 + @SQL5 + @SQL6 + @SQL7 + @SQL8 + @SQL9)

    SET @Return = @@ERROR -- IF @Status = 0 SET @Status = @Return

    SET @BOLZ = GETDATE()
    SET @BOLN = @BOLZ - @BOLA

    INSERT INTO #DBAZ VALUES (@Have, @SQLA, @SQLZ - 1, @SQLZ - @SQLA, @BOLA, @BOLZ, @BOLN, @Return)

    SET @SQL1 = ''
    SET @SQL2 = ''
    SET @SQL3 = ''
    SET @SQL4 = ''
    SET @SQL5 = ''
    SET @SQL6 = ''
    SET @SQL7 = ''
    SET @SQL8 = ''
    SET @SQL9 = ''

    SET @Save = 1

    SET @SQLA = @SQLZ + 1

    END
    ELSE
    BEGIN

    IF @Save = 1
    IF DATALENGTH(@SQL1) + DATALENGTH(@Line) < 7998
    SET @SQL1 = @SQL1 + @Line + @CRLF
    ELSE
    SET @Save = 2

    IF @Save = 2
    IF DATALENGTH(@SQL2) + DATALENGTH(@Line) < 7998
    SET @SQL2 = @SQL2 + @Line + @CRLF
    ELSE
    SET @Save = 3

    IF @Save = 3
    IF DATALENGTH(@SQL3) + DATALENGTH(@Line) < 7998
    SET @SQL3 = @SQL3 + @Line + @CRLF
    ELSE
    SET @Save = 4

    IF @Save = 4
    IF DATALENGTH(@SQL4) + DATALENGTH(@Line) < 7998
    SET @SQL4 = @SQL4 + @Line + @CRLF
    ELSE
    SET @Save = 5

    IF @Save = 5
    IF DATALENGTH(@SQL5) + DATALENGTH(@Line) < 7998
    SET @SQL5 = @SQL5 + @Line + @CRLF
    ELSE
    SET @Save = 6

    IF @Save = 6
    IF DATALENGTH(@SQL6) + DATALENGTH(@Line) < 7998
    SET @SQL6 = @SQL6 + @Line + @CRLF
    ELSE
    SET @Save = 7

    IF @Save = 7
    IF DATALENGTH(@SQL7) + DATALENGTH(@Line) < 7998
    SET @SQL7 = @SQL7 + @Line + @CRLF
    ELSE
    SET @Save = 8

    IF @Save = 8
    IF DATALENGTH(@SQL8) + DATALENGTH(@Line) < 7998
    SET @SQL8 = @SQL8 + @Line + @CRLF
    ELSE
    SET @Save = 9

    IF @Save = 9
    IF DATALENGTH(@SQL9) + DATALENGTH(@Line) < 7998
    SET @SQL9 = @SQL9 + @Line + @CRLF
    ELSE
    SET @Save = 0

    END

    FETCH NEXT FROM Lines INTO @Line, @SQLZ

    END

    CLOSE Lines
    DEALLOCATE Lines

    IF DATALENGTH(@SQL1) > 0 AND @Status = 0
    BEGIN

    SET @Have = @Have + 1
    SET @BOLA = GETDATE()

    EXECUTE (@SQL1 + @SQL2 + @SQL3 + @SQL4 + @SQL5 + @SQL6 + @SQL7 + @SQL8 + @SQL9)

    SET @Return = @@ERROR -- IF @Status = 0 SET @Status = @Return

    SET @BOLZ = GETDATE()
    SET @BOLN = @BOLZ - @BOLA

    SET @SQLN = @SQLZ - @SQLA + 1

    INSERT INTO #DBAZ VALUES (@Have, @SQLA, @SQLZ, @SQLN, @BOLA, @BOLZ, @BOLN, @Return)

    END
    IF @PCUltra <> 0
    SELECT Batch, SQLA AS LineFrom, SQLZ AS LineThru, SQLN AS CodeSize, CONVERT(char(12),BOLA,14) AS TimeFrom, CONVERT(char(12),BOLZ,14) AS TimeThru, CONVERT(char(12),BOLN,14) AS Duration, Status FROM #DBAZ
    ORDER BY Batch

    RETURN (@Status)
    GO

    DECLARE @Task varchar(1000)
    DECLARE @Work varchar(2000)

    SET @Task = ' PRINT ' + CHAR(39) + '7.0' + CHAR(39)
    + ' PRINT ' + CHAR(39) + '1' + CHAR(39)
    + ' PRINT ' + CHAR(39) + '1 SQLCHAR 0 8000 ' + CHAR(39) + ' + CHAR(34) + ' + CHAR(39) + '\r\n' + CHAR(39) + ' + CHAR(34) + ' + CHAR(39) + ' 1 Line SQL_Latin1_General_CP1_CI_AS' + CHAR(39)

    SET @Work = 'osql -E -Q "' + @Task + '" -o "C:\Program Files\Microsoft SQL Server\File.fmt" -s "" -w 8000'

    EXECUTE master.dbo.xp_cmdshell @Work, NO_OUTPUT


    NOTE: Make sure xp_cmdshell is enabled in the Surface Area Configuration options.

    Happy SQL Coding.

    Wednesday, October 14, 2009

    Auto generate change scripts in SQL Server Management Studio (SSMS) for tables

    As a part of best practices, we always save the T-SQL scripts used for the creation and modification of objects in SQL Server. When creating and modifying tables using the SQL Server Management Studio designer, it is easy to right click in the designer and select "Generate Change Script...", but is there a way to automatically script the creation and/or modification of tables made through the SQL Server Management Studio (SSMS) designer? This tip shows an option that exists within SSMS to automatically generate these scripts for all table changes when using the table designer.

    Within SQL Server Management Studio (SSMS) there is an option to automatically generate scripts for changes made through the GUI. This option works with both SQL Server 2005 and SQL Server 2008, but it is disabled by default.

    To enable the option
  • From the SSMS menus click on "Tools"  

  • Click on "Options..."

  • Click on "Designers"

  • Check the checkbox "Auto generate change scripts" as highlighted below

  • After checking the specified checkbox, SSMS is now able to generate scripts for the creation and/or modification of any table when you use the designer.

    Testing it out

    Let's confirm the script generation for the creation of a new table.

  • In SSMS select a database and expand the tree

  • Right click on the tables folder inside and click on "New Table".

  • A designer will appear for the creation of new table.

  • Create a single column for this table. I have created a column named TestCol1 as nchar(10)

  • Save the new table and give the table a name and click OK

  • Another window will popup that includes the actual script for creating this new table as shown below

  • Click on "Yes" to save the script and you will be prompted to specify the location. The file name by default will be the name of the table such as "tableName.sql", but this can be changed to anything you like.

    If you do not want to save the script, click "No". Clicking "No" will not roll back the implemented changes, but the script will not be saved.


    It is important to mention here that unchecking the "Automatically generate change script on every save" checkbox in the save dialog will undo the option that we configured at the start.

    Also note that scripts are not automatically generated for the deletion of a table.

    When working on views through the designers, no script is generated; however, the code for view operations is provided inside the designer and may be copied and saved.

    By enabling script auto generation for your SSMS designer you will be able to easily save changes made through the designer, giving you change scripts for all modifications made through SSMS.

    Speed up reports in SQL Server Reporting Services with caching

    Reporting applications tend to be slow, as reports have to process large amounts of data before they can display results. The built-in report caching feature in SQL Server Reporting Services (SSRS) helps speed up report execution and lowers the load on the reporting server.
    When designing a report, it is important to keep in mind how much data might be processed. This is often affected by what parameters the user can select, as well as which ones they are most likely to select when viewing reports.

    When a report is run, SSRS first runs the database queries in the report and then renders the report in the specified format.

    If report caching is on, however, SSRS stores the report layout and datasets returned from the database the first time the report is executed. Therefore, when the report is re-executed, a cached copy exists and the report only needs to be rendered from cache to a device specific format.

    The main benefit is SSRS does not need to make database calls to the data sources if there is a cached report. Query execution is the most time consuming part for many reports and with report caching, report run time may be reduced from several minutes to a second or two.

    Using report caching

    The report caching feature is off by default. To turn it on, go to the Execution page under the Properties tab for the report in Report Manager, as shown below:

    Report caching can be turned on for a specific number of minutes or it can expire based on a schedule.

    The first option is suitable for reports that could benefit from caching, but have data that changes often enough that you may want to refresh with certain frequency. The frequency is report specific and can range from minutes to days.

    The other expiration option is either a report-specific schedule or a shared schedule. For example, if a data warehouse is loaded nightly between midnight and 5 a.m., a shared schedule could be created for 5 a.m. daily, and the cache could be configured to expire on that schedule. Users would benefit from caching refreshed with new data every morning.

    It is important to understand that since reports often have parameters, and the data that comes back from the database differs based on their values, SSRS has to cache a copy of the report for each unique combination of parameters.

    For example, if you have a Year and Month parameter and you execute January 2009, the first report execution will be slow but the subsequent executions will be fast. When you select February 2009, the report has to run database queries again and it might run for a while. But once the report is cached, subsequent runs will be fast again.

    The cache is shared across users, meaning any user who executes a combination of parameters for the first time caches the report for all users. This is one of the reasons why a report can only be cached if the data source uses stored credentials -- either a SQL Server login or a Windows account.

    Report caching is most useful for reports with no parameters, or reports with only a few parameters with a small number of possible combinations. It is less beneficial for reports that have datetime parameters, which allow users to select one or more arbitrary dates. The best option for mitigating this issue is to provide default values to parameters with the most likely values pre-selected. Another option is to write a report rendering application and, using the SOAP API, render all possible parameter combinations to place reports into cache.

    Report snapshot

    In addition to caching when the report is executed for the first time, you can also set up the report to be rendered from a report snapshot (See the bottom section in Figure 1 above).

    Once again, you can set up either a report specific schedule or a shared schedule. You also have the option of generating a snapshot when you click the Apply button, although that is likely practical only for highly-static reports. When the scheduled time comes, SSRS will execute the report and store the results.

    The difference between snapshots and caching is that snapshots are generated by SSRS based on a schedule while cached copies are created the first time a report is rendered by a user or by an external application.

    Snapshots eliminate the "waiting penalty" for the first report user, but they do have a major limitation: A report can only be configured for snapshot execution if it has no parameters, or if all parameters have default values.

    When a snapshot is rendered, the default parameters are used. When the report is viewed, Report Manager will display the parameter values of the rendered snapshot. It also disables the parameters, however, and you cannot change their values. In other words, if the report has parameters and snapshots are enabled, the ability to run the report with more than the default parameter values is lost.

    Therefore, report snapshots only work well if the report has no parameters, or if it's acceptable to show the report with only default values for a period of time.

    Monday, October 12, 2009

    How to use EXEC() at Linked Server

    A special feature added in SQL 2005 is that you can use EXEC() to run pass-through queries on a linked server.

    This could be another instance of SQL Server, but it could also be an Oracle server, an Access database, Active Directory or whatever. The SQL could be a single query or a sequence of statements, and it could be composed dynamically or be entirely static. The syntax is simple, as seen by this example:

    EXEC('SELECT COUNT(*) FROM ' + @db + '.dbo.sysobjects') AT SQL2K

    SQL2K is here a linked server that has been defined with sp_addlinkedserver.

    There is one thing that you can do with EXEC() at a linked server that you cannot do with EXEC() on a local server: you can use parameters, both for input and output. To confuse matters, you don't use parameters with names starting with @; instead you use question marks (?) as parameter markers. Say that you are on an SQL 2005 box, and you are dying to know how many orders VINET had in the Northwind database. Unfortunately, SQL 2005 does not ship with Northwind, but you have a linked server set up to an instance of SQL 2000 with Northwind.

    You can run this:

    DECLARE @cnt int
    EXEC('SELECT ? = COUNT(*) FROM Northwind.dbo.Orders WHERE CustomerID = ?',
         @cnt OUTPUT, N'VINET') AT SQL2K
    SELECT @cnt

    Note here that the parameter values must appear in the order the parameter markers appear in the query. When passing a parameter, you can either specify a constant value or a variable.

    You may ask why the inconsistency of using a different parameter marker than sp_executesql does. Recall that linked servers in SQL Server are always accessed through an OLE DB provider, and OLE DB uses ? as the parameter marker, a convention inherited from ODBC. OLE DB translates that parameter marker as appropriate for the data source on the other end. (Not all RDBMS use @ for variables.)

    As with regular EXEC(), you can specify AS USER/LOGIN to use impersonation:

    EXEC('SELECT COUNT(*) FROM ' + @db + '.dbo.sysobjects')
    AS USER = 'davidson' AT SQL2K

    The login to use on the remote server can be defined with sp_addlinkedsrvlogin.

    Friday, October 9, 2009

    Introducing Dynamic SQL


    Today we will look at some quick examples of dynamic SQL and point out some very important implications of using dynamic SQL. Then we will describe sp_executesql and EXEC() in detail, the two commands you can use to invoke dynamic SQL from T-SQL.

    A First Encounter

    Understanding dynamic SQL itself is not difficult. Au contraire, it's rather too easy to use. Understanding the fine details, though, takes a little longer. If you start out using dynamic SQL casually, you are bound to face accidents when things do not work as you have anticipated.

    One of the problems listed in the introduction was how to write a stored procedure that takes a table name as its input. Here are two examples, based on the two ways to do dynamic SQL in Transact-SQL:

    CREATE PROCEDURE general_select1 @tblname sysname, @key varchar(10) AS
    DECLARE @sql nvarchar(4000)

    SELECT @sql = ' SELECT col1, col2, col3 ' +
                  ' FROM dbo.' + quotename(@tblname) +
                  ' WHERE keycol = @key'

    EXEC sp_executesql @sql, N'@key varchar(10)', @key

    CREATE PROCEDURE general_select2 @tblname nvarchar(127), @key varchar(10) AS
    EXEC('SELECT col1, col2, col3 FROM ' + @tblname + ' WHERE keycol = ''' + @key + '''')

    Before moving forward, allow me to point out that these are examples of bad usage of dynamic SQL. Passing a table name as a parameter is not how you should write stored procedures, and one aim of this article is to explain this in detail. Also, the two examples are not equivalent. While both examples are bad, the second example has several problems that the first does not have. What these problems are will be apparent as you read this text.

    Whereas the above looks very simple and easy, there are some very important things to observe. The first thing is permissions. You may know that when you use stored procedures, users do not need permissions to access the tables accessed by the stored procedure. This does not apply when you use dynamic SQL! For the procedures above to execute successfully, the users must have SELECT permission on the table in @tblname. In SQL 2000 and earlier this is an absolute rule with no way around it. Starting with SQL 2005, there are alternatives.
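    One of the SQL 2005 alternatives is to create the procedure with the EXECUTE AS clause, so that the dynamic SQL runs under someone else's permissions rather than the caller's. Here is a minimal sketch, assuming a hypothetical table dbo.orders owned by the same user as the procedure:

    CREATE PROCEDURE select_from_orders @key varchar(10)
    WITH EXECUTE AS OWNER AS
    DECLARE @sql nvarchar(4000)
    SELECT @sql = N'SELECT col1 FROM dbo.orders WHERE keycol = @key'
    -- SELECT permission on dbo.orders is now checked against the procedure
    -- owner, not against the caller.
    EXEC sp_executesql @sql, N'@key varchar(10)', @key

    Keep in mind that EXECUTE AS has security implications of its own, so don't apply it casually.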

    Next thing to observe is that the dynamic SQL is not part of the stored procedure, but constitutes its own scope. Invoking a block of dynamic SQL is akin to calling a nameless stored procedure created ad-hoc. This has a number of consequences:

    1.    Within the block of dynamic SQL, you cannot access local variables (including table variables) or parameters of the calling stored procedure. But you can pass parameters – in and out – to a block of dynamic SQL if you use sp_executesql.

    2.    Any USE statement in the dynamic SQL will not affect the calling stored procedure.

    3.    Temp tables created in the dynamic SQL will not be accessible from the calling procedure since they are dropped when the dynamic SQL exits. (Compare to how temp tables created in a stored procedure go away when you exit the procedure.) The block of dynamic SQL can however access temp tables created by the calling procedure.

    4.    If you issue a SET command in the dynamic SQL, the effect of the SET command lasts for the duration of the block of dynamic SQL only and does not affect the caller.

    5.    The query plan for the stored procedure does not include the dynamic SQL. The block of dynamic SQL has a query plan of its own.
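    A quick sketch to see points 1 and 3 in action (the names here are arbitrary):

    DECLARE @x int
    SELECT @x = 42
    EXEC('SELECT @x')             -- Fails: @x is not visible in the dynamic SQL.

    CREATE TABLE #callertab (a int)
    EXEC('INSERT #callertab (a) VALUES (1)')  -- Works: the caller's temp table is visible.

    EXEC('CREATE TABLE #dyntab (a int)')
    SELECT * FROM #dyntab         -- Fails: #dyntab was dropped when the block exited.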

    As you've seen, there are two ways to invoke dynamic SQL: sp_executesql and EXEC(). sp_executesql was added in SQL 7, whereas EXEC() has been around since SQL 6.0. In application code, sp_executesql should be your choice 95% of the time, for reasons that will become clear. EXEC() is mainly useful for quick throw-away things and DBA tasks.


    sp_executesql

    sp_executesql is a built-in stored procedure that takes two pre-defined parameters and any number of user-defined parameters.

    The first parameter @stmt is mandatory, and contains a batch of one or more SQL statements. The data type of @stmt is ntext in SQL 7 and SQL 2000, and nvarchar(MAX) in SQL 2005 and later. Beware that you must pass an nvarchar/ntext value (that is, a Unicode value). A varchar value won't do.

    The second parameter @params is optional, but we will use it 90% of the time. @params declares the parameters that you refer to in @stmt. The syntax of @params is exactly the same as for the parameter list of a stored procedure. The parameters can have default values and they can have the OUTPUT marker. Not all parameters you declare must actually appear in the SQL string. (Whereas all variables that appear in the SQL string must be declared, either with a DECLARE inside @stmt, or in @params.) Just like @stmt, the data type of @params is ntext in SQL 2000 and earlier and nvarchar(MAX) since SQL 2005.

    The rest of the parameters are simply the parameters that you declared in @params, and you pass them as you pass parameters to a stored procedure, either positionally or by name. To get a value back from your output parameter, you must specify OUTPUT with the parameter, just like when you call a stored procedure. Note that the first two parameters, @stmt and @params, must be specified positionally. You can provide the parameter names for them, but these names are blissfully ignored.
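    As a trivial sketch of positional versus named passing and the OUTPUT marker (the statement itself is deliberately silly):

    DECLARE @result int
    -- @stmt and @params are passed positionally; the user-defined
    -- parameters are passed by name, and @result receives the value of
    -- @sum because of the OUTPUT marker.
    EXEC sp_executesql N'SELECT @sum = @a + @b',
                       N'@a int, @b int, @sum int OUTPUT',
                       @a = 1, @b = 2, @sum = @result OUTPUT
    SELECT @result    -- 3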

    Let's look at an example. Say that in your database, many tables have a column LastUpdated, which holds the time a row was last updated. You want to be able to find out how many rows in each table were modified at least once during a period. This is not something you run as part of the application, but something you run as a DBA from time to time, so you just keep it as a script that you have around. Here is what it could look like:

    DECLARE @tbl sysname,
            @sql nvarchar(4000),
            @params nvarchar(4000),
            @count int

    DECLARE tblcur CURSOR STATIC LOCAL FOR
       SELECT object_name(id) FROM syscolumns WHERE name = 'LastUpdated' ORDER BY 1
    OPEN tblcur

    WHILE 1 = 1
    BEGIN
       FETCH tblcur INTO @tbl
       IF @@fetch_status <> 0
          BREAK

       SELECT @sql = N' SELECT @cnt = COUNT(*) FROM dbo.' + quotename(@tbl) +
                     N' WHERE LastUpdated BETWEEN @fromdate AND ' +
                     N' coalesce(@todate, ''99991231'')'
       SELECT @params = N'@fromdate datetime, ' +
                        N'@todate datetime = NULL, ' +
                        N'@cnt int OUTPUT'
       EXEC sp_executesql @sql, @params, '20060101', @cnt = @count OUTPUT

       PRINT @tbl + ': ' + convert(varchar(10), @count) + ' modified rows.'
    END

    DEALLOCATE tblcur

    You can see that we have declared the @sql and @params variables to be of the maximum length for nvarchar variables in SQL 2000. In SQL 2005 and later, you may want to make it a routine to declare @sql as nvarchar(MAX).

    When we assign the @sql variable, we are careful to format the statement so that it is easy to read, and we leave spaces so that two concatenated parts are not glued together without a space in between, which could cause a syntax error. We also put the table name in quotename() in case a table name has any special characters in it, and we prefix the table name with "dbo.", which is a good habit, as we will see when we look at dynamic SQL and query plans. Note also the appearance of '' around the date literal: the rule in T-SQL is that to include the string delimiter in a string, you must double it.

    In this example, the dynamic SQL has three parameters: one mandatory input parameter, one optional input parameter, and one output parameter. Assume that this time the DBA wanted to see all changes made after 2006-01-01, which is why I left out @todate in the call to sp_executesql. Since I left out one parameter, I must specify the last one, @cnt, by name – the same rule as when you call a stored procedure. Note also that the variable is called @cnt in the dynamic SQL, but @count in the surrounding script. Normally, you might want to use the same name, but I wanted to stress that the @cnt in the dynamic SQL is only visible within the dynamic SQL, whereas @count is not visible there.

    You may note that I've prefixed the string literals with N to denote that they are Unicode strings. As @sql and @params are declared as nvarchar, technically this is not necessary (as long as you stick to your 8-bit character set). However, when you provide any of the strings directly in the call to sp_executesql, you must specify the N, as in this fairly silly example:

    EXEC sp_executesql N'SELECT @x', N'@x int', @x = 2
    If you remove any of the Ns, you will get an error message. Since sp_executesql is a built-in stored procedure, there is no implicit conversion from varchar.

    You may wonder why I do not pass @tbl as a parameter as well. The answer is that you can't. Dynamic SQL is just like any other SQL. You can't specify a table name through a variable in T-SQL, that's the whole story. Thus, when you need to specify things like table names, column names etc dynamically, you must interpolate them into the string.
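    You can convince yourself of this with a quick sketch (the exact error text may vary between versions):

    -- Fails; SQL Server takes @tblname for an undeclared table variable:
    EXEC sp_executesql N'SELECT COUNT(*) FROM @tblname',
                       N'@tblname sysname', N'sysobjects'

    -- Works; the name is interpolated (and quoted) into the string:
    DECLARE @sql nvarchar(4000)
    SELECT @sql = N'SELECT COUNT(*) FROM dbo.' + quotename(N'sysobjects')
    EXEC sp_executesql @sql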

    If you are on SQL 2000 or SQL 7, there is a limitation with sp_executesql when it comes to the length of the SQL string. While the parameter is ntext, you cannot use this data type for local variables. Thus, you will have to stick to nvarchar(4000). In many cases this will do fine, but it is not uncommon to exceed that limit. In this case, you will need to use EXEC(), described just below.

    Since SQL 2005, this is not an issue. Here you can use the new data type nvarchar(MAX) which can hold as much data as ntext, but without the many restrictions of ntext.


    EXEC()

    EXEC() takes one parameter, which is an SQL statement to execute. The parameter can be a concatenation of string variables and string literals, but it cannot include calls to functions or other operators. For very simple cases, EXEC() is less hassle than sp_executesql. For instance, say that you want to run UPDATE STATISTICS WITH FULLSCAN on some selected tables. It could look like this:

    FETCH tblcur INTO @tbl
    IF @@fetch_status <> 0 BREAK
    EXEC('UPDATE STATISTICS [' + @tbl + '] WITH FULLSCAN')

    In the example with sp_executesql we used quotename(), but here we have settled for adding brackets, in case there is a table named Order Details (which there is in the Northwind database). Since EXEC() only permits string literals and string variables to be concatenated, not arbitrary expressions, this is not legal:

    EXEC('UPDATE STATISTICS ' + quotename(@tbl) + ' WITH FULLSCAN')
    Best practice is to always use a variable to hold the SQL statement, so the example would better read:

    FETCH tblcur INTO @tbl
    IF @@fetch_status <> 0 BREAK
    SELECT @sql = 'UPDATE STATISTICS ' + quotename(@tbl) + ' WITH FULLSCAN'
    EXEC(@sql)

    The fact that you can concatenate strings within EXEC() permits you to do quick things, which can be convenient at times, but it can also lead to poor habits in application code. However, there are situations where this is an enormous blessing. As mentioned, in SQL 7 and SQL 2000 you can in practice only use 4000 characters in your SQL string with sp_executesql. EXEC() does not have this limitation, since you can say:

    EXEC(@sql1 + @sql2 + @sql3)

    Where all of @sql1, @sql2 and @sql3 can be 4000 characters long – or even 8000 characters as EXEC() permits you to use varchar.

    Since you cannot use parameters, you cannot as easily get values out from EXEC() as you can with sp_executesql. You can, however, use INSERT-EXEC to insert the result set from EXEC() into a table.
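    For instance, to capture a scalar from EXEC() you could go through a temp table, along these lines (the table and variable names here are mine):

    DECLARE @sql varchar(8000), @cnt int
    CREATE TABLE #result (cnt int)
    SELECT @sql = 'SELECT COUNT(*) FROM dbo.sysobjects'
    -- INSERT-EXEC routes the result set of the dynamic SQL into the table.
    INSERT #result (cnt) EXEC(@sql)
    SELECT @cnt = cnt FROM #result

    This is clunkier than an OUTPUT parameter with sp_executesql, which is one more reason to prefer sp_executesql when you can.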

    In SQL 2005 and later, EXEC() permits impersonation so that you can say:

    EXEC(@sql) AS USER = 'mitchell'
    EXEC(@sql) AS LOGIN = 'CORDOBA\Miguel'

    This is mainly a syntactical shortcut that saves you from embedding the invocation of dynamic SQL in EXECUTE AS and REVERT.
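    That is, the first impersonation example above is roughly shorthand for:

    EXECUTE AS USER = 'mitchell'
    EXEC(@sql)
    REVERT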

    Hope you will find this information helpful. I will add more on dynamic SQL soon.

    Happy SQL Coding.