Thursday, December 31, 2009

Case Sensitive database and use of Distinct clause

Hi,

In general, our databases are not case sensitive, but whenever we do have a case-sensitive database we need to watch out for the DISTINCT clause.

Let's create a new database with its collation set to case sensitive:

In the New Database dialog, click on Options and change the Collation to SQL_Latin1_General_CP1_CS_AS.

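If you prefer T-SQL over the dialog, an equivalent statement looks like this (the database name DistinctTest is just an example):

create database DistinctTest
collate SQL_Latin1_General_CP1_CS_AS
go
use DistinctTest
go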

Now let's create a table in it and insert some data into that table.
create table dbo.DisTest
(
Id int identity,
name varchar(100)
)

insert into dbo.DisTest
values ('Name1'), 
('Name2'),
('Name3'), 
('Name3')

select * from dbo.DisTest

Now let's insert the name 'NAME3' (upper case) and use DISTINCT in the select statement.
Insert into dbo.DisTest
values('NAME3')

Select distinct name from dbo.DisTest

Because the collation is case sensitive, 'Name3' and 'NAME3' are treated as different values, so DISTINCT returns both. To avoid these duplicates, we can apply a case-insensitive collation in the query:

Select distinct name Collate SQL_LATIN1_GENERAL_CP1_CI_AS from dbo.DisTest

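The same technique works for ordinary comparisons, not just DISTINCT. A quick sketch against the same table:

select * from dbo.DisTest
where name = 'name3' Collate SQL_Latin1_General_CP1_CI_AS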

Finally, we saw that whenever we set up or create a database, we must think about which collation we want.

Do let me know if there is any issue.

Happy SQL Coding

Monday, December 28, 2009

OUTPUT Command

Hi,

Today let's discuss the OUTPUT clause, introduced in SQL Server 2005. We will also see how the MERGE statement uses the OUTPUT clause.

SCENARIO
We need to audit changes made to a table. The catch is that the table is modified from several applications, but we need to audit only specific fields.

SOLUTION
Use triggers: write a trigger on the table, and whenever there is an update, check whether any of the fields required for audit were changed; if so, insert the data into the audit table.

QUESTION
In this case, what happens if a user updates only the comments field, which is not required for auditing, and has to update approximately 1 million records?

SOLUTION
Triggers won't be efficient here: the trigger fires for all 1 million updates even though only the comments field, which is not audited, is being changed, and the result is poor performance.

In this case the OUTPUT clause is the better solution, because we can add it to just the statements that change audited fields.

As per BOL, the OUTPUT clause is not supported in the following statements:
  • DML statements that reference local partitioned views, distributed partitioned views, or remote tables.
  • INSERT statements that contain an EXECUTE statement.
  • Full-text predicates are not allowed in the OUTPUT clause when the database compatibility level is set to 100.
  • The OUTPUT INTO clause cannot be used to insert into a view, or rowset function.
  • A user-defined function cannot be created if it contains an OUTPUT INTO clause that has a table as its target.

Let's take a look at some examples of the OUTPUT clause:
Create table test
(a int identity,
b varchar(100))

insert into test
output inserted.*
select 'Simple Test'
Output:

A simple delete example:
delete test
output deleted.*
Output:

A simple Update example:
update test
Set b='Update Tested'
Output Inserted.*, deleted.*
Output:

Now let's insert data in a permanent table, so that we can audit the changes.
Create table Orders
(OrderId int identity,
CustomerName varchar(100))

Create table Orders_Adt
(OrderId int,
OldCustomerName varchar(100),
NewCustomerName varchar(100),
RecType char(1))


insert into Orders
output inserted.OrderId, null, inserted.CustomerName,'I' into Orders_Adt
values('Output Clause')

select * from Orders
select * from Orders_Adt

Output:

Let's update the customer name and store data in the Audit table:
update Orders
Set CustomerName='Updated Customer'
Output inserted.OrderId, deleted.CustomerName, inserted.CustomerName, 'U' into Orders_Adt
Where OrderId=1

select * from Orders
select * from Orders_Adt
Output:

The same way, we can use DELETE:

Delete Orders
Output deleted.OrderId, deleted.CustomerName,null,'D' into Orders_Adt


select * from Orders
select * from Orders_Adt
Output:

Now let's see how OUTPUT can be used with MERGE command:

MERGE performs insert, update, or delete operations on a target table based on the results of a join with a source table. It helps synchronize two tables by inserting, updating, or deleting rows in one table based on differences found in the other table.


Use the AdventureWorks database for the following example:

TARGET TABLE = Production.ProductInventory
SOURCE TABLES = Sales.SalesOrderHeader, Sales.SalesOrderDetail


IF OBJECT_ID (N'Production.usp_UpdateInventory', N'P') IS NOT NULL DROP PROCEDURE Production.usp_UpdateInventory;
GO
CREATE PROCEDURE Production.usp_UpdateInventory
@OrderDate datetime
AS
MERGE Production.ProductInventory AS target
USING (SELECT ProductID, SUM(OrderQty) FROM Sales.SalesOrderDetail AS sod
JOIN Sales.SalesOrderHeader AS soh
ON sod.SalesOrderID = soh.SalesOrderID
AND soh.OrderDate = @OrderDate
GROUP BY ProductID) AS source (ProductID, OrderQty)
ON (target.ProductID = source.ProductID)
WHEN MATCHED AND target.Quantity - source.OrderQty <= 0
THEN DELETE
WHEN MATCHED 
THEN UPDATE SET target.Quantity = target.Quantity - source.OrderQty, 
target.ModifiedDate = GETDATE()
OUTPUT $action, Inserted.ProductID, Inserted.Quantity, Inserted.ModifiedDate, Deleted.ProductID,
Deleted.Quantity, Deleted.ModifiedDate;
GO
--In this example we are updating data based on date parameter passed to the procedure.

EXECUTE Production.usp_UpdateInventory '20030501'

This example matches the source and target tables on ProductID. When a row matches and the target table (Production.ProductInventory) quantity minus the source order quantity is <= 0, the record is deleted from the target table.

When a row matches otherwise, the target table quantity is updated to the target quantity minus the source order quantity, and ModifiedDate is set to the current date.

At the end we use the OUTPUT clause "OUTPUT $action, Inserted.ProductID, Inserted.Quantity, Inserted.ModifiedDate, Deleted.ProductID, Deleted.Quantity, Deleted.ModifiedDate;" to return the affected rows.

$action indicates the type of operation (INSERT / UPDATE / DELETE).

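If you want to keep these rows instead of just returning them to the client, OUTPUT ... INTO works with MERGE as well. A minimal, self-contained sketch (all table and column names here are just examples, not AdventureWorks objects):

declare @target table (Id int, Qty int);
declare @source table (Id int, Qty int);
declare @MergeLog table (MergeAction nvarchar(10), Id int, OldQty int, NewQty int);

insert into @target values (1, 5), (2, 3);
insert into @source values (1, 5), (2, 1);

merge @target as t
using @source as s
on t.Id = s.Id
when matched and t.Qty - s.Qty <= 0 then delete
when matched then update set t.Qty = t.Qty - s.Qty
output $action, isnull(inserted.Id, deleted.Id), deleted.Qty, inserted.Qty
into @MergeLog (MergeAction, Id, OldQty, NewQty);

select * from @MergeLog;  --one DELETE row (Id 1) and one UPDATE row (Id 2)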
This way we can use the OUTPUT clause and overcome some of the problems with triggers.

Hope this will be helpful. If you have any query/concern, write to me.

HAPPY SQL CODING.

Wednesday, December 23, 2009

Policy Health State for Policy Based Management demo

Hi,

A few days ago, I created a policy to validate the names of procedures: no procedure may start with "sp_".

Today while working, I found something interesting that reflects which procedures don't meet the policy condition. The information is available from the database level down to the object level.

Following is the policy:

Now let's look at Object Explorer to see how we can check whether any object fails this policy (or any other policy). Make sure that the policy is enabled.


In the above image, when we click on the server, it displays all the objects in Object Explorer. Now let's click the highlighted Show Policy Health State for all nodes button.



This changes the Databases node to Critical under the Policy Health State column. Let's drill down to the actual objects that are causing the databases node to be critical. First, we see which database(s) are critical.



This shows that the LearningCT database's health state is critical. Let's drill down again to find which object(s) are in a critical state.


This shows that Stored Procedures have some issues. Let's drill down again to find which stored procedures are not adhering to the policy.



OK, now we see that 2 procedures don't comply with the policy, which says that no procedure name should start with "sp_".

We can select both procedures and choose Policies => Delete Health States; it prompts us to confirm deleting the policy health state of the selected objects. Clicking OK removes the policy health state.

Now refresh the screen.



If we run through the process again, it will again show the health state of policy.

This is very helpful when we have not enforced a policy but want to know which objects fail the policy check. Alternatively, we can enforce the policy to prevent users from creating objects that do not satisfy the policy condition.

Hope this will be helpful.

Happy SQL Coding.

T-SQL Debugger is back in SQL Server 2008 ... debugging now made easy

Let's discuss debugging routines such as procedures, UDFs, and triggers using SQL Server 2008.

In SQL Server 2000, Query Analyzer and Enterprise Manager were separate tools. In SQL Server 2005 they were merged into one tool, SQL Server Management Studio (SSMS), but without any debugging option; to debug a routine we had to use Visual Studio and its debugging abilities.

SQL Server 2008 reintroduces debugging capability that allows us to debug routines such as procedures, UDFs, and triggers. With SQL 2008 we can step through our code line by line, watch the state/values of variables, and observe what's happening through the output and error windows. We can also view call stacks for the execution flow and place breakpoints to stop execution at a given point and review the state.

How to start debugging

We can start the debugger by either clicking the Debug button on the Query toolbar or by clicking Start Debugging on the Debug menu or pressing ALT+F5 on SSMS.

The image shows a simple statement with breakpoints, along with the Locals, Breakpoints & Watch windows. These are very useful when we are doing complicated calculations.

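To give the Locals and Watch windows something to show, step through a small batch like this one (purely illustrative):

declare @i int = 1, @total int = 0
while @i <= 5
begin
    set @total = @total + @i   --watch @total grow on each pass
    set @i = @i + 1
end
select @total as RunningTotal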
An example to demonstrate the Call Stack window:
create proc proc1
as
begin
print getdate()
end
go

create proc prc
as
begin
select GETDATE()
exec proc1
end
go

exec prc
In the above script we created 2 procedures, where one proc calls the other. While stepping into the execution, the Call Stack window shows the execution flow.

SQL Server Management Studio must be running under a Windows/SQL Server account that is a member of the sysadmin fixed server role. If you are running the T-SQL debugger when SQL Server Management Studio is running on a different computer from the instance of the Database Engine, you must enable program and port exceptions by using the Windows Firewall Control Panel application on both computers.

It's recommended that T-SQL debugging be done on a development/test server only, not on a production server. We also can't debug a SQL 2005 database routine from SQL 2008; it throws an exception when we try.

Happy T-SQL coding and debugging. :)

Sunday, December 20, 2009

SQL TRAN - Translation from mySQL to SQL Server

Hi Guys,

Today I came across a product that helps translate MySQL objects to SQL Server. The best part is that it is close to 100% correct when converting the code to SQL Server.

SqlTran is an advanced relational database translator. It analyzes the whole database in the source database format and writes an equivalent database in the target database format, striving for 100% fully automated translation with all programming logic preserved.

Following objects can be translated from mySQL to SQL Server:
  • Tables
  • Indexes
  • Constraints
  • Views
  • Procedures
  • Triggers
  • Functions
Databases currently supported are:

• MySQL
• Microsoft SQL Server

I strongly urge people who want to migrate their current database from MySQL to SQL Server to download SqlTran and start converting code. It is easier to understand the business logic written in procedures, triggers, views, etc. using SqlTran rather than working from assumptions.
 
You can download SQLTran from : http://www.spectralcore.com/sqltran/
 
 
All the best.
 
Do let me know if you need more information.

Monday, December 14, 2009

Community Technology Update 2009, Singapore

Hi Guys,

CTU 2009, Singapore is just around the corner on 19 Dec 2009 from 9am till 5pm at Microsoft office. Check out the Agenda here:

Agenda

To register, click on the following link:

Register here

Do let me know if you have any concern / question.

Regards,

Sudhir

Friday, December 4, 2009

Best of SQLServerCentral.com Vol 7

Guys,

I am sure you must be aware of the books published by SQLServerCentral.com. They have recently released Vol 7. It's a very good book. You can download it from:

http://www.red-gate.com/specials/Ebooks/SSC7.htm

If there is any issue with the download, let me know and I will send a copy to you.

Enjoy SQL Coding.

Wednesday, November 11, 2009

SQL Server 2008 R2 November CTP

Hello,

The November CTP of SQL Server 2008 R2 is out. If you are a TechNet or MSDN subscriber, it's available now. If you don't have a subscription, you can download it starting 11 Nov 2009.

Major changes from August CTP are:
  • PowerPivot for SharePoint to support the hosting and management of PowerPivot applications in SharePoint
  • Application and Multi-Server Management for enrolling, gaining insights and managing up to 25 instances
  • Master Data Services for data consistency across heterogeneous systems
  • Data Compression now enabled with UCS-2 Unicode support
SQL Server 2008 R2 delivers higher mission-critical scale, more efficient IT, and expanded reporting and analytics through self-service business intelligence. SQL Server 2008 R2 introduces two new premium editions to meet the needs of large scale datacenters and data warehouses.

  • SQL Server 2008 R2 Datacenter
  • SQL Server 2008 R2 Parallel Data Warehouse
The following capabilities are new to SQL Server 2008 R2 Standard:
  • Backup Compression to reduce the size of data backups by up to 60% and help reduce time spent on backups *
  • Can be a managed instance for the Application and Multi-Server Management capabilities
The PowerPivot for Excel 2010 component of SQL Server 2008 R2 (formerly known as "Gemini") will be available in the November CTP.

PowerPivot for Excel 2010 (formerly known as "Gemini") is a data analysis tool that delivers unmatched computational power directly within the application users already know and love—Microsoft Excel. It provides users with the ability to analyze mass quantities of data and IT departments with the capability to monitor and manage how users collaborate by integrating seamlessly with Microsoft SharePoint Server 2010 and Microsoft SQL Server 2008 R2.

You can sign up to get Notification on PowerPivot for Excel 2010 here:

https://profile.microsoft.com/RegSysProfileCenter/wizard.aspx?wizid=27d33a54-f9f5-4357-b652-6f03ba8a4d23&lcid=1033

Download it & start using it.

Regards.........

Monday, November 9, 2009

Deleting Data in SQL Server using TRUNCATE vs DELETE commands

Hi,

Today we will discuss various ways to delete rows from table.

There are two main keywords used for deleting data from a table: TRUNCATE and DELETE. Although both achieve the same result, the methods they employ differ vastly. Each has advantages, limitations, and consequences that you should consider when deciding which method to use.

TRUNCATE TABLE quickly deletes all records in a table by deallocating the data pages used by the table. This reduces the resource overhead of logging the deletions as well as the number of locks acquired; the only record of the truncation in the transaction log is the page deallocations. Records removed by TRUNCATE TABLE cannot be restored afterwards, although the truncation can still be rolled back if it runs inside a transaction, as the script below shows.

--Create a simple table with 1 identity column
create table ttt
(id int identity)

insert into ttt --Insert 100 rows into table
default values
go 100

select * from ttt --check number of rows from the table

--Now run this in a transaction, roll it back, and check whether you can still retrieve your data.
begin tran
truncate table ttt
rollback

select * from ttt


You cannot specify a WHERE clause in a TRUNCATE TABLE statement; it is all or nothing. The advantage of TRUNCATE TABLE is that, in addition to removing all rows from the table, it resets the IDENTITY back to the SEED, and the deallocated pages are returned to the system for use in other areas.

In addition, TRUNCATE TABLE cannot be used on tables involved in replication or log shipping, since both depend on the transaction log to keep remote databases consistent.
TRUNCATE TABLE also cannot be used when a foreign key references the table to be truncated. Moreover, TRUNCATE statements do not fire triggers, which could result in inconsistent data because ON DELETE triggers would not fire. If all table rows need to be deleted and there is a foreign key referencing the table, you must drop the foreign key constraint and recreate it after truncating. If a TRUNCATE TABLE statement is issued against a table that has foreign key references, the following error is returned:

Cannot truncate table 'TableName' because it is being referenced by a FOREIGN KEY constraint.

DELETE statements remove rows one at a time, logging each row in the transaction log, as well as maintaining log sequence number (LSN) information. Although this consumes more database resources and locks, these transactions can be rolled back if necessary. You can also specify a WHERE clause to narrow down the rows to be deleted. When you delete a large number of rows using a DELETE FROM statement, the table may hang on to the empty pages, requiring a manual release using DBCC SHRINKDATABASE (db_name).

When large tables require that all records be deleted and TRUNCATE TABLE cannot be used, the following statements can be used to achieve the same result as TRUNCATE TABLE:

DELETE from "table_name"
DBCC CHECKIDENT("table_name", RESEED, "reseed_value")

CHECKIDENT checks the current identity value for the specified table and, if it is needed, changes the identity value. You can also use DBCC CHECKIDENT to manually set a new current identity value for the identity column.

DBCC CHECKIDENT("ttt", NORESEED)

This returns the last identity value currently in the table.
DBCC CHECKIDENT("table_name", RESEED, "reseed_value")
This will reset the identity column so that the next inserted row starts from "reseed_value" plus the increment. Be careful if rows still exist in the table: if the identity column has a unique constraint or primary key, restarting the numbering can fail with duplicate values.

DBCC CHECKIDENT("ttt", RESEED, 0)

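To verify the reseed, a quick sketch using the ttt table from above:

delete from ttt
DBCC CHECKIDENT("ttt", RESEED, 0)
insert into ttt default values
select * from ttt --returns a single row with id = 1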
Happy SQL Coding..................................................

Using the NOWAIT option with the SQL Server RAISERROR statement

Hi,

Today we will discuss a scenario where we execute SQL statements with PRINT statements in between, but we don't see the output of the PRINT statements right away; it always shows up some time later.

Use the WITH NOWAIT clause of the RAISERROR statement. It may be surprising, but using RAISERROR doesn't require an error condition. If the severity level given to RAISERROR is 0 through 10, SQL Server treats the RAISERROR as a plain message and not an error at all. Execution continues with the next statement, even if there is a TRY/CATCH block or SET XACT_ABORT is ON. Use the combination of a RAISERROR severity of 0 to 10 and the WITH NOWAIT clause for a statement that sends output to the Messages window immediately, like this one:

RAISERROR ('Not an error message!', 0, 1) WITH NOWAIT

When query results are being sent to a grid, the results panes are shown as soon as command execution begins and the Messages window is hidden unless there are no results. There are two ways to address this. One option is to send the results to text using either the menu or CTRL+T. The other option is to allow results to go to the grid and click on the Messages window, or use the SSMS menu command Window/Next Pane, which by default is tied to the F6 key (Shift+F6 in Query Analyzer).

Once you've sent Results to Text with CTRL+T try this script:

DECLARE @time char(8)
PRINT '1- Before anything else ' + convert (varchar(30), getdate(), 8)
SET @time= convert (varchar(30), getdate(), 8)
RAISERROR ('2- RAISERROR before W/O NOWAIT %s', 0, 1, @time)
WAITFOR DELAY '00:00:05'
PRINT '3- After the first delay ' + convert (varchar(30), getdate(), 8)
SET @time = convert (varchar(30), getdate(), 8)
RAISERROR ('4- RAISERROR with NOWAIT %s', 0, 1, @time) WITH NOWAIT
WAITFOR DELAY '00:00:10'
PRINT '5- After the second delay ' + convert (varchar(30), getdate(), 8)

What you’ll see is something like this result:

1- Before anything else 15:17:58
2- RAISERROR before W/O NOWAIT 15:17:58
3- After the first delay 15:18:03
4- RAISERROR with NOWAIT 15:18:03
These results show up in 5 seconds. Neither the PRINT statements on lines 1 and 3 nor the RAISERROR on line 2 show up before the RAISERROR WITH NOWAIT on line 4 is executed. Then after another 10 seconds you’ll see:

5- After the second delay 15:18:13

As you can see from the time stamps, lines 1 and 2 were executed at 15:17:58; then there was a 5 second delay until 15:18:03, when lines 3 and 4 were executed and appeared. It's the NOWAIT clause on the RAISERROR producing line 4 that forces lines 1 through 4 to the Messages window.

One way to make the NOWAIT clause convenient is to write it into a simple stored procedure like this.

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE proc [dbo].[ns_log_nowait]
@Msg nvarchar(2047)
/* Send a message to the caller so that it's available  immediately.
exec dbo.ns_log_nowait 'This is the immediate message'
waitfor delay '00:00:10'
print 'This is after a 10 second delay'
compare this to print 'before the call'
raiserror ('this is the msg', 0, 1)
waitfor delay '00:00:10'
print 'This is after a 10 second delay'
***********************************************************/
AS
RAISERROR (@Msg, 0, 1) WITH NOWAIT
go
GRANT EXECUTE on [dbo].[ns_log_nowait] to PUBLIC
go

RAISERROR with severity under 11 isn’t treated as an error in a TRY/CATCH construct and NOWAIT has no effect on this behavior. To see that RAISERROR with severity 0 is treated like a print statement try this script in SSMS:

DECLARE @time char(8)

BEGIN TRY
PRINT '1 In the TRY block ' + convert (varchar(30), getdate(), 8)
SET @time= convert (varchar(30), getdate(), 8)
RAISERROR ('2 RAISERROR with NOWAIT %s', 0 , 1, @time) WITH NOWAIT
WAITFOR DELAY '00:00:05'
END TRY
BEGIN CATCH
PRINT '3 In the CATCH block ' + convert (varchar(30), getdate(), 8)
PRINT '3A Error Number = ' +convert (VARCHAR, ERROR_NUMBER()) + ' Severity = ' + convert (varchar, ERROR_SEVERITY())+ ' Msg = ''' + ERROR_MESSAGE() + ''''
END CATCH
PRINT '4 After the CATCH block ' + convert (varchar(30), getdate(), 8)

The output from this script is shown here:

1 In the TRY block 15:22:49

2 RAISERROR with NOWAIT 15:22:49
4 After the CATCH block 15:22:54

Notice that lines 3 and 3A in the CATCH block aren't reached even though a RAISERROR was executed. However, if you change the severity on the RAISERROR to 11, the statement is treated as an error. Here's the script again with only the severity of the RAISERROR changed:

DECLARE @time char(8)

BEGIN TRY
PRINT '1 In the TRY block ' + convert (varchar(30), getdate(), 8)
SET @time= convert (varchar(30), getdate(), 8)
RAISERROR ('2 RAISERROR with NOWAIT %s', 11, 1, @time) WITH NOWAIT
WAITFOR DELAY '00:00:05'
END TRY
BEGIN CATCH
PRINT '3 In the CATCH block ' + convert (varchar(30), getdate(), 8)
PRINT '3A Error Number = ' +convert (VARCHAR, ERROR_NUMBER())+ ' Severity = ' + convert (varchar, ERROR_SEVERITY())+ ' Msg = ''' + ERROR_MESSAGE() + ''''
END CATCH
PRINT '4 PRINT after the CATCH block ' + convert (varchar(30), getdate(), 8)
 
Which produces this output:

1 In the TRY block 15:24:33

3 In the CATCH block 15:24:33
3A Error Number = 50000 Severity = 11 Msg = '2 RAISERROR with NOWAIT 15:24:33'
4 PRINT after the CATCH block 15:24:33

Note that the RAISERROR statement is handled by the TRY/CATCH and doesn't produce message output on its own. Instead the error is handled by the CATCH block, which has code that prints the message on line 3A along with the severity.

Happy SQL Coding.......

Do let me know if you have any comments/feedback.......

Friday, November 6, 2009

SQL Server Questions

Hi,

Today we will take up some general questions.

Question. What are the SQL Server system databases and can you outline the general functionality of each database?

Ans:
Master - Database responsible for SQL Server instance related data. 
Resource - Database responsible for SQL Server system objects. This database was introduced in SQL Server 2005 and is intended to ease the upgrade and rollback of SQL Server system objects.
Model - Template database for the creation of new user defined databases.
MSDB - Database responsible for SQL Server Agent related data such as jobs, alerts, and operators, plus backup and restore history, database diagrams, etc.
TempDB - Temporary database that stores temporary tables (#temptable or ##temptable), table variables, cursors, work tables, row versioning data, indexes created or rebuilt with sorting in TempDB, etc. Each time the SQL Server instance is restarted, all objects in this database are destroyed, so permanent objects cannot be created in this database.
Distribution - Database responsible for managing replicated data. This database could reside on the publisher or subscriber.

Question. Is it a good idea to create objects in the system databases?

Ans:
Logically objects should not be created in the system databases. It is a best practice to create a separate database for user defined objects that would be used instance wide or create the objects in each of the needed databases. From a DBA perspective, it is a common practice to create a dedicated DBA database in order to perform custom administrative tasks.
If objects are needed for future user defined databases those can be created in the Model database.

Question. Does Reporting Services create system databases during the installation process?

Ans.
Yes. It creates the following databases:
  • ReportServer - Stores the reports related data.
  • ReportServerTempDB - Temporary database storage for Reporting Services.
Question. What are the typical objects that are created in the TempDB database?

Ans.
The following objects use the TempDB database:
  • Temporary tables (#temptable or ##temptable)
  • Table variables
  • Cursors
  • Work tables
  • Row versioning
  • Create or rebuild indexes sorted in TempDB
Question. Does size of the TempDB database really make a difference and how should TempDB be sized?

Ans.
In situations where temporary objects are created in T-SQL code (i.e. temporary tables (#temptable or ##temptable), table variables, cursors, work tables, row versioning, indexes created or rebuilt with sorting in TempDB, etc.), the TempDB database makes a significant difference in overall performance. In these situations the database should be sized appropriately and moved to a separate disk drive in order to support the I/O requests. If not, the default location and size may be appropriate.

Question. Explain relationship between logins and users in the system and user databases.

Ans.
Logins - Server-level principals used for authentication; all logins reside in the master database.
Users - Database-level principals mapped to logins; users reside in the individual system databases and user defined databases.

Question. What are the differences in restoring the Master database versus a user defined database?

Ans.
In order to restore the Master database, SQL Server instance must be in single user mode. After the Master database is restored, the SQL Server instance restarts.

When a user defined database is restored, it has no impact on other databases.

Question. What's the Resource database?

Ans.
The Resource database is responsible for physically storing all of the SQL Server 2005 system objects. This database has been created to improve the upgrade and rollback of SQL Server system objects with the ability to overwrite only this database. The Resource database was introduced in SQL Server 2005.

Question. What are some of the tools that you use to troubleshoot any SQL Server issue?

Ans.
  • SQL Server Error Log
  • SQL Server Agent Error Log
  • Windows Application Event Log
  • SQL Server Integration Services Logs
  • Custom Application\Process Logs
Question. How can you help to prevent a SQL injection attack?

Ans.
  • Review firewall and web server logs for potential threats.
  • Prevent particular strings from passing through the firewall.
  • Prevent IP traffic from particular web sites.
  • Implement technology to scan your public facing web sites for potential issues.
Question. What is an identity? What is the value? How can you capture the last identity value per column?

Ans.
An identity is a property of a column where we specify seed and increment values. The seed is the starting value for the column and increment is the value by which the identity grows. In most cases seed and increment values are 1 wherein the initial value is 1 and the identity grows by 1.

The value of the identity column is managed by relational engine. SQL Server provides three different functions for capturing the last generated identity value on a table that contains an identity column:
@@IDENTITY
SCOPE_IDENTITY()
IDENT_CURRENT('tablename')

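A quick sketch of the difference between these functions (the table and trigger names are just examples):

create table dbo.Parent (id int identity(1,1), val int);
create table dbo.Child (id int identity(100,1), parentid int);
go
create trigger trg_Parent on dbo.Parent after insert as
insert into dbo.Child (parentid) select id from inserted;
go
insert into dbo.Parent (val) values (1);

select @@IDENTITY as LastIdentityInSession,           --100, from the trigger's insert into dbo.Child
  SCOPE_IDENTITY() as LastIdentityInScope,            --1, from our insert into dbo.Parent
  IDENT_CURRENT('dbo.Parent') as LastIdentityForTable; --1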
Question. What are the different indexing options available and what columns do we typically index? What is the value of indexing columns?

Ans.
SQL Server offers two types of indexes, clustered and non-clustered. A clustered index is an index that stores the actual data and a non-clustered index is just a pointer to the data. A table can have only one clustered index and up to 249 non-clustered indexes (999 in SQL Server 2008). If a table does not have a clustered index it is referred to as a heap.

The primary reason indexes are built is to provide faster data access to the specific data your query is trying to retrieve. This could be either a clustered or non-clustered index. Without having an index SQL Server would need to read through all of the data in order to find the rows that satisfy the query. In the query plan the difference would be an Index Seek vs a Table Scan as well as some other operations depending on the data selected.
Indexes are typically created on these columns (see the example after this list):
  • Primary keys
  • Foreign keys
  • Columns in WHERE, GROUP BY, ORDER BY, etc.
  • Single value, range of values, etc.
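For example, creating a simple non-clustered index on a column that appears in WHERE clauses (the names here are illustrative):

create nonclustered index IX_Orders_CustomerName
on dbo.Orders (CustomerName);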
Question. How will you tune a SQL Server query? How do you identify the query causing the issue?

Ans.
Review the query plan by issuing SHOWPLAN_TEXT, SHOWPLAN_ALL, Graphical Query Plan or sys.dm_exec_query_stats.

Review the individual query components to determine which components of the query have the highest cost.
Outline options to improve the query such as moving from cursor based logic to set based logic or vice versa, changing the JOIN order, WHERE clause or ORDER BY clause, adding indexes, removing indexes, creating covering indexes, updating statistics, removing use of functions in where clause etc.
Test the options to determine the associated performance improvement.

Question. What are the options in SQL Server to rebuild indexes?

Ans.
  • CREATE INDEX with DROP_EXISTING
  • DROP INDEX and CREATE INDEX
  • ALTER INDEX
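For example, rebuilding or reorganizing with ALTER INDEX (illustrative table name):

alter index all on dbo.Orders rebuild;
alter index all on dbo.Orders reorganize;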
Question. What happens when we add a column in the middle of a table (Employee) in Management Studio?

Ans.
Management Studio creates a temporary table called dbo.Tmp_Employee with the new structure.
If there is data in the original table dbo.Employee this data is inserted into the new temp table dbo.Tmp_Employee (now you have two sets of the same data).
The original table dbo.Employee is dropped.
The new table dbo.Tmp_Employee is renamed to dbo.Employee.
If the table has indexes all of the indexes are recreated.

Question. How do you back up Analysis Services databases?

Ans.
Create an XMLA statement to back up the Analysis Services databases, then create a SQL Server Agent job to perform the task on a daily basis.

Question. What is a common use case for the OPENROWSET function?

Ans.
Import from Excel to SQL Server or vice-versa.
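For example, importing from an Excel sheet (a sketch: the provider string, file path, and sheet name are examples, and OPENROWSET requires the 'Ad Hoc Distributed Queries' option to be enabled):

SELECT *
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
'Excel 8.0;Database=C:\Data\Customers.xls',
'SELECT * FROM [Sheet1$]');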

Question. What is the difference between a deterministic and a nondeterministic function?

Ans.
Deterministic functions always return the same result any time they are called with a specific set of input values and given the same state of the database.
Nondeterministic functions may return different results each time they are called with a specific set of input values even if the database state that they access remains the same.

Question. What is the Guest user account? What login is it mapped to? Can we drop it?

Ans.
The Guest user account is created by default in all databases and is used when explicit permissions are not granted to access an object. It is not mapped directly to any login, but can be used by any login. Depending on your security needs, it may make sense to drop the Guest user account, in all databases except Master and TempDB, although sufficient testing should be conducted to validate applications will not break with this security restriction.

Question. What do synonyms do and when could you make the case for using them?

Ans.
Synonyms were released with SQL Server 2005. Synonyms enable the reference of another object (view, table, stored procedure or function), potentially on a different server, database or schema in your environment. In short, this means the object referenced in all of your code can really point to a completely different underlying object, with no coding changes necessary. It is an alias: a means to simplify migrations and application testing without the need to make any dependent coding changes.
Synonyms can offer a great deal of value when converting underlying database objects without breaking front end or middle tier code. This could be useful during a re-architecture or upgrade project.

Question. With SQL Server 2005, what is the technique to manage errors in Integration Services?

Ans.
Event handlers can be set up to capture errors and perform operations such as writing the errors to a table. Additionally, custom logging can be set up at the Control Flow level as well.

Question. Explain the difference between the INTERSECT and EXCEPT operators. What is the value of these commands over other techniques?

Ans.
INTERSECT - gives final result set where values in both of the tables match.
EXCEPT - gives final result set where data exists in the first dataset and not in the second dataset.

The advantage of these commands is that they give you a distinct comparison across all of the columns, much as the UNION and UNION ALL operators do, without having to write a GROUP BY or compare every single column.
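For example (the literal values are just for illustration):

select v from (values (1), (2), (3)) as a(v)
intersect
select v from (values (2), (3), (4)) as b(v);  --returns 2 and 3

select v from (values (1), (2), (3)) as a(v)
except
select v from (values (2), (3), (4)) as b(v);  --returns 1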

Question. Explain the service oriented architecture (SOA) technology introduced with SQL Server 2005 and some of the components of the technology.

Ans.
Service Broker is the service oriented architecture (SOA) technology introduced with SQL Server 2005. It includes:
Endpoint - Communication point in the instance for Service Broker, with the ability to specify the authentication, encryption, and message forwarding. A single Service Broker endpoint can be created per instance, and HTTP and Database Mirroring endpoints can be created as well.
Message Type - Format for the message that is being sent and received. The format could be simply well formed XML or be bound to a schema.
Contract - Which message types are sent by either the initiator or the target.
Route - For the specific Service Broker Service which SQL Server instance and database the messages will be routed to during the Conversation.
Queue - The logical container in a Service Broker application; the queue serves as the storage mechanism in Service Broker, and all data is RECEIVEd from the queue for processing. Although queues may be named differently, they all share the same format, with both relational and binary columns that can be converted to an XML format.
Service - Service correlates the queue and the contract.
Remote Service Binding - Defines the database user whose credentials are used when communicating with the remote service.

Question. What is the OUTPUT clause and what sorts of options does it provide?

Ans.
The OUTPUT clause was introduced in SQL Server 2005 and has the ability to access the INSERTED and DELETED virtual tables, as is the case with a trigger. The OUTPUT clause can be added to your stored procedures or T-SQL scripts in order to write the data out to an auditing table or return the data back to the front end client.
The OUTPUT clause has the ability to mimic and/or replace some of the functionality typically addressed by triggers. In SQL Server 2008, it can also be used with the MERGE command to access the inserted and deleted values.

Question. What is database mirroring?

Ans.
Database mirroring was introduced in SQL Server 2005. Database mirroring consists of two mandatory roles (Principal and Mirror) and an optional third role (Witness). The Principal and Mirror roles require the Enterprise or Standard edition of SQL Server 2005, while the Witness role can be installed on any edition of SQL Server, including Express Edition.

In terms of data protection, Database Mirroring has three different options. First is High Availability Operating Mode. High Availability Operating Mode provides durable, synchronous transfer of data between the principal and mirror instances including automatic failure detection and failover. Second is High Performance Operating Mode. With the High Performance Operating Mode the overall architecture acts as a warm standby and does not support automatic failure detection or failover. Third is High Protection Operating Mode. The High Protection Operating Mode operates very similar to the High Availability Mode except the failover and promotion (mirror to principal) process is manual.

Question. How does the database recovery model impact database backups?

Ans.
The database recovery model is responsible for the retention of the transaction log entries. So based on the setting it determines if transaction log backups need to be issued on a regular basis i.e. every minute, five minutes, hourly, etc. in order to keep the transaction log small and/or maintain a log shipping solution.
Simple - Committed transactions are removed from the log when the check point process occurs.
Bulk Logged - Committed transactions are only removed when the transaction log backup process occurs.
Full - Committed transactions are only removed when the transaction log backup process occurs.

Question. What are some common post restore processes?

Ans.
  • Sync the logins and users
  • Validate the data is accurate
  • Notify the team\user community
  • Cleanse the data to remove sensitive data i.e. SSN's, credit card information, customer names, personal information, etc.
  • Change database properties i.e. recovery model, read-only, etc.
Question. For differential backups, how is the data determined for those backups?

Ans.
As data is changed in the extent, the extent is marked as changed and the entire extent is backed up.

Question. How is a point in time recovery performed?

Ans.
It depends on which backup types are issued. In this example, let's assume full, differential, and transaction log backups are all issued (a worked sketch follows the list).
  • Restore the most recent full backup with the NORECOVERY clause
  • Restore the most recent differential backup with the NORECOVERY clause
  • Restore all of the subsequent transaction log backups with the NORECOVERY clause except the last transaction log backup
  • Restore the last transaction log backup with the RECOVERY clause and a STOPAT statement if the entire transaction log does not need to be applied.
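Putting these steps together, a minimal sketch (file names, paths, and the STOPAT time are just examples):

RESTORE DATABASE MyDb FROM DISK = 'C:\Backups\MyDb_full.bak' WITH NORECOVERY;
RESTORE DATABASE MyDb FROM DISK = 'C:\Backups\MyDb_diff.bak' WITH NORECOVERY;
RESTORE LOG MyDb FROM DISK = 'C:\Backups\MyDb_log1.trn' WITH NORECOVERY;
RESTORE LOG MyDb FROM DISK = 'C:\Backups\MyDb_log2.trn'
WITH RECOVERY, STOPAT = '2009-10-31 12:30:00';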

Saturday, October 31, 2009

Deprecated features of SQL Server 2008

Hi,

Today we will discuss deprecated features of SQL Server 2008. Let's list the features, their usage, and their alternatives in SQL Server 2008.

sp_attach_db

Description: sp_attach_db is a command to attach a database. It was one option for upgrading a database from SQL Server 2000 to 2005, or for moving databases from one SQL Server to another.

Syntax

EXEC sp_attach_db @dbname = 'YourDatabase',
@filename1 = 'c:\Program Files\Microsoft SQL Server\MSSQL\Data\YourDatabase.mdf',
@filename2 = 'c:\Program Files\Microsoft SQL Server\MSSQL\Data\YourDatabase_log.ldf'
GO

Change to CREATE DATABASE


Syntax
CREATE DATABASE database_name
ON <filespec> [ ,...n ]
FOR { ATTACH [ WITH <attach_database_option> ]
| ATTACH_REBUILD_LOG } [;]
GO
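For example, attaching the same database with CREATE DATABASE (the paths follow the sp_attach_db example above):

CREATE DATABASE YourDatabase
ON (FILENAME = 'c:\Program Files\Microsoft SQL Server\MSSQL\Data\YourDatabase.mdf'),
(FILENAME = 'c:\Program Files\Microsoft SQL Server\MSSQL\Data\YourDatabase_log.ldf')
FOR ATTACH;
GO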

According to SQL Server 2008 Books Online, the system stored procedure sp_detach_db remains the command to detach a database, whereas sp_attach_db is marked as deprecated.

sp_renamedb


Description: commonly used when servers are migrated or databases are moved.

Syntax
EXEC sp_renamedb 'OldDatabaseName', 'NewDatabaseName'
GO

Change to ALTER DATABASE

ALTER DATABASE OldDatabaseName
MODIFY NAME = NewDatabaseName
GO

sp_dboption

Description: commonly used for setting a database to read only or offline.

Syntax
USE master
EXEC sp_dboption 'YourDatabase', 'read only', 'TRUE'
GO
USE master
EXEC sp_dboption 'YourDatabase', 'offline', 'TRUE'
GO

Change to ALTER DATABASE

Syntax
ALTER DATABASE YourDatabase
SET READ_ONLY
GO
ALTER DATABASE YourDatabase
SET OFFLINE
GO

Friday, October 30, 2009

What's coming in SQL Server 2008 R2

There are a lot of things which are a part of SQL Server 2008 R2. These are:
  • Kilimanjaro is now officially SQL Server 2008 R2
  • Support for 256 logical processers
  • Self-service Business Intelligence (BI) in-memory analysis update
  • Utility Data Platform
  • Master Data Services
  • Low Latency Complex Event Processing announcement
  • Geospatial visualizations in Reporting Services
  • Cool SQL Server 2008 stats
 SQL Server 2008 R2

Microsoft has decided on a product name for the next release, SQL Server 2008 R2, so it's time to say goodbye to "Kilimanjaro". SQL Server 2008 R2 will ship in sync with the release of Microsoft Office 2010, and Community Technology Previews (CTPs) will be available in the second half of 2009. SQL Server 2008 R2 is an incremental release, so it's an opportunity to deliver incremental technology on top of SQL Server 2008. On the business intelligence side, the release is aligned with Office 2010.

Support for 256 logical processors

Organizations pushing the high end of scalability will want to take advantage of SQL Server 2008 R2 support for up to 256 logical processors (which requires Windows Server 2008 R2). This is a big improvement over the previous maximum of 64 processors.

Self-Service BI In-Memory Analysis

One of the big things coming in SQL Server 2008 R2 is Self-Service BI, a joint solution offered through SQL Server, SharePoint, and Excel. "Customers want solutions, not just individual products. Solutions enable employees to seek new business solutions and get better results out of what they have. Ultimately the goal is to allow all the employees in an organization to get the benefits of BI."

So what effect would self-service business intelligence (BI) have on the relationship between the traditional BI professional and business knowledge workers or decision-makers?

Today we have a consultant or IT expert build a BI solution for employees. That is costly and takes time. Today many users don’t get access to BI tools and data sources. Self-service BI allows them to analyze all that data and create reports, charts, and dashboards and then share solutions and models with colleagues by publishing them through SharePoint.

The biggest innovation in self service BI is in-memory analysis. With advanced compression technology that’s incorporated into the release you’ll have 10s or 100s of millions of data records inside of a laptop’s memory, available to slice and dice instantly to get better insights out of the data. IT Professionals provide the environment, know that data is secure, and know which data sets are being used more. They can refresh the data and apply the right policies. So it’s IT controlled but end users are empowered.

Utility Data Platform

Another major feature of SQL Server 2008 R2 that tackles the control issue is application and multi-server management. When developers build an application, they can specify policies for that application. As the application is deployed, the DBA knows about the policies, can enforce them, and can then manage the application as it moves from server to server or as its data volumes grow. This is part of the vision of what is called the Utility Data Platform, where we can provide incremental capacity to users, move databases around servers, and provide varying SLAs to customers.

SQL Server Management Studio (SSMS) has always been great for managing SQL Server. However, SSMS was really for single server management and wasn’t strong in the area of multi-server management. New wizards, dashboards, and enhancements to Policy Based Management will boost SQL Server 2008 R2’s multi-server management capabilities.

Master Data Services

Another TechEd announcement is the inclusion of Master Data Services, a new capability, in SQL Server 2008 R2. Think about the different entities that IT needs to deal with, such as customers, partners, or products. A single entity can live in multiple databases, with multiple IDs, and the business challenge has been reconciling all that data so the right decisions can be made about customers or products. The goal of master data management is to bring all that data together into a single version of the truth: a single authoritative source of data. We do that by synchronizing data across multiple data sources, understanding where there are exceptions, and managing those exceptions. What Microsoft is doing with Master Data Services is building that capability into the data platform, so when customers buy SQL Server 2008 R2 they get that functionality.

Almost all large organizations face the problem of multiple data definitions where the same data is defined and used differently in various locations. SQL Server 2008 R2’s new Master Data Services (MDS) feature provides a central portal from which administrators can create and update master data members and hierarchies, with the goal of ensuring data consistency across the enterprise.

Low Latency Complex Event Processing

Also announced at TechEd is that Microsoft is working on Low Latency Complex Event Processing. Microsoft is developing this technology with the idea of building a more comprehensive data platform; there are business opportunities in being able to analyze streaming transient data, identify patterns, and then act on the findings. The platform will enable customers and partners to build complex event processing solutions. It will be released in 2010; packaging and pricing have not been announced. There will be a CTP in 2009, and for now it is called Low Latency Complex Event Processing. It's not clear yet whether these CTPs will align with the other CTPs in the pipeline.

Geospatial visualizations in Reporting Services

Building on the geospatial data types in the base SQL Server 2008 release, SQL Server 2008 R2's Reporting Services will support reports with visual geographic mapping. New geospatial features include visualizations for mapping routine and custom shapes and support for Microsoft Virtual Earth tiles.

Cool SQL Server Facts

Some interesting SQL Server 2008 facts: there have been over three and a half million downloads of SQL Server 2008; companies in every country and every industry are looking at SQL Server 2008 today; and there's a 160% ROI in deploying it, just from savings associated with features like Resource Governor, data protection, encryption, and Policy-Based Management.

Tuesday, October 27, 2009

Policy-Based Management in SQL Server 2008

As we discussed earlier, we can use DDL triggers to force developers to follow some guidelines, but that approach is restricted to DDL commands. What if we need policies that check other things or take corrective/preventive actions?

Policy-Based Management is a new feature in SQL Server 2008. It allows us to define and enforce policies for configuring and managing SQL Server across the enterprise. It was initially called the Declarative Management Framework but was renamed.

Let's understand Policy-Based Management and its components :

Target - an entity that is managed by Policy-Based management; e.g. a database, a table, an index, etc.
Facet - a predefined set of properties that can be managed
Condition - a property expression that evaluates to True or False; i.e. the state of a Facet
Policy - a condition to be checked and/or enforced

Policy-Based Management is configured in SQL Server Management Studio (SSMS). Navigate to the Object Explorer and expand the Management node and the Policy Management node; you will see the Policies, Conditions, and Facets nodes:


Click on Facets to see available facets:



There is a comprehensive collection of facets in SQL Server 2008, allowing us to manage just about every aspect of SQL Server. Let's look at the Database facet:



These facet properties are used to specify a condition: an expression that evaluates to True or False. The expression can be made up of one or more property tests logically joined by And / Or operators.

SCENARIO:

The DBA team has released a checklist stating that no procedure name may start with 'sp_'. We need to create a policy to enforce this criterion.

Let's Create a Condition

The starting point in Policy-Based Management is to create a Condition. Right click on Conditions in the SSMS Object Explorer (under the Management | Policy Management node) then select New Condition from the menu. Fill in the dialog as follows:



Select a single facet for the condition, then enter an expression. The expression evaluates to either True or False, and this is the essence of Policy-Based Management: testing whether the condition is True or False.
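For this scenario, the condition might use the Stored Procedure facet with an expression along these lines (a sketch of what goes into the condition editor):

Field: @Name    Operator: NOT LIKE    Value: 'sp_%'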

Let's create a policy now:

Right click Policies in the SSMS Object Explorer (under the Management | Policy Management node) then select New Policy from the menu. Fill in the dialog as follows:



The Check Condition drop down includes the list of conditions that are defined. We can check Every Database in the Against targets list. Execution Mode can have one of the following values:



  • On Demand (this is the default)
  • On Schedule
  • On Change - Log Only
  • On Change - Prevent

The On Demand option only evaluates the policy when a user right clicks on the policy in the SSMS Object Explorer and selects Evaluate from the menu.

The On Schedule option takes advantage of SQL Server Agent to execute a job on a particular schedule to check the policy. After selecting On Schedule from the Execution Mode drop down list, you can click either the Pick or New button: pick an existing schedule from the available options, or click New to create a new schedule. When policy evaluation is scheduled, any violations are logged to the Windows Event Log.

The On Change - Log Only option evaluates the policy whenever the property in the facet is changed, and any violation is logged to the Windows Event Log. The On Change - Prevent option evaluates the policy whenever the property in the facet is changed and actually prevents the change; this option uses DDL triggers to enforce the policy. Not all changes can be detected and rolled back by DDL triggers, so the Execution Mode drop down list includes the On Change - Prevent option only when it is available.

What is the Enabled check box for? When the Execution Mode is On Demand, the Enabled check box must be unchecked; for all other options you must check it in order for the policy to be evaluated.

How to evaluate a Policy:

To evaluate a policy on demand, right click on the policy in the SSMS Object Explorer and select Evaluate from the menu. The following is a partial screen shot of the output from evaluating a policy on demand:

The green check icon signifies that the policy evaluated to True; red indicates False. To view the details of a failure, click on the View link and a window will open with complete details:



Edit the policy and change the Execution Mode to On Change - Log Only, then try to create a procedure whose name starts with sp_. The procedure is created, but the violation is logged. Open Windows Event Viewer and click on Application; you will see an event that was written when the policy evaluation detected the violation:



To test the On Change - Prevent Execution Mode, create a new condition and a new policy following the same steps. The trick is that the policy must be ENABLED; only then can it stop users from violating the condition. There are 2 ways to enable a policy: one is, when you create a policy and select the On Change - Prevent Execution Mode, to tick the checkbox that appears at the top just below the policy name; the other is from SSMS, right click on the policy and select Enable.

Now whenever a user tries to create a procedure whose name starts with sp_, the change is rolled back and a policy violation error is shown as follows:



This way we can have many policies running across many databases, and they can be used to control and monitor violations.

Do let me know if there is any feedback.

Happy SQL Coding.

Monday, October 26, 2009

Using the EventData() Function with DDL triggers

Let's discuss how we capture the events that fire DDL triggers and store that data in a table for reporting/auditing purposes.

In SQL Server 2005, we can get data regarding the event that initiated a DDL trigger by accessing the EventData() function. This function returns information about server or database events, and its result is stored in a variable of the XML data type. What we need to do is capture the data returned by the EventData() function and store it in a database table for reporting purposes. Since the returned type is XML, we need to apply a bit of XQuery to the data generated by the function. The DDL_TABLE_EVENTS type describes what data can be captured for a CREATE, ALTER or DROP TABLE event.

We will create a table that will store the data returned by the EventData() function. We will only choose a few items from the schema generated by the ALTER_TABLE event; namely, the T-SQL command, ExecuteTime, LoginName, and EventType, which is included in every event.

Create the audit log table:

USE Northwind
GO
CREATE TABLE AuditLog
(
ExecuteTime DATETIME,
LoginName NVARCHAR(100),
EventType NVARCHAR(100),
Command NVARCHAR(2000)
)
GO

Now create the DDL trigger:

CREATE TRIGGER trgAlterTblChange
ON DATABASE
FOR ALTER_TABLE
AS
DECLARE @Data XML
SET @Data = EventData()
INSERT AuditLog (ExecuteTime, LoginName, EventType, Command)
VALUES
(GETDATE(),
CONVERT(NVARCHAR(100), CURRENT_USER),
@Data.value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(100)'),
@Data.value('(/EVENT_INSTANCE/TSQLCommand)[1]', 'nvarchar(2000)') ) ;
GO

Let's understand the code.

We declare a variable named @Data of the XML data type and assign it the value returned by the EventData() function.

In the INSERT statement, we retrieve the element values from the @Data variable using XQuery and the value() method. The [1] in each path expression guarantees that a single value is returned, and the second parameter of value() is the name of the SQL Server built-in data type the value is converted to and returned as. The value() method also gives you better results with the formatting of the XML data.

Let's test it.

Add a new column "ShipAdd" to the Shippers table in the database:

USE Northwind
GO

ALTER TABLE Shippers
ADD ShipAdd Varchar(100) NULL

Querying the AuditLog table will give you the data returned by the EventData() function, stored in tabular format, since we have already extracted the element values from the XML data inside our trigger definition.
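For example:

select ExecuteTime, LoginName, EventType, Command
from AuditLog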



We can define DDL triggers, and capture event details using the EventData() function, at either the database or the server level. This approach simply audits DDL activities and stores the event details in a log table; we can even have a central database that stores all the events from both the database and server levels. It is neither efficient nor effective to audit every event, due to the immense amount of audit logging involved, so make sure you plan which events you need to audit.

Let's take a look at another example. As we already know, it's not efficient to have stored procedure names starting with 'sp_', so let's use a DDL trigger to stop people from creating procedures starting with 'sp_'.

Let's create a trigger:

CREATE TRIGGER [ddltrg_CheckCreateProc] ON DATABASE
FOR CREATE_PROCEDURE
AS
SET NOCOUNT ON
DECLARE @xmlEventData XML,
@procName VARCHAR(50)

SET @xmlEventData = eventdata()

SET @procName = CONVERT(VARCHAR(50), @xmlEventData.query('data(/EVENT_INSTANCE/ObjectName)'))

IF LEFT(@procName, 3) = 'sp_'
BEGIN
RAISERROR ( 'You cannot create procedures starting with sp_', 16, -1 )
ROLLBACK
END

GO


    Let's Test it:

    create proc sp_testDDLTrigger
    as
    select getdate()


When we execute it, we get the following error:




It throws the error defined in the trigger. This way we can force developers to follow naming guidelines. SQL Server 2008 introduced Policy-Based Management, which helps achieve the same thing more easily.
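
If the check ever needs to be suspended (say, for a vendor installation script), the trigger can be disabled and re-enabled rather than dropped:

DISABLE TRIGGER [ddltrg_CheckCreateProc] ON DATABASE
GO
ENABLE TRIGGER [ddltrg_CheckCreateProc] ON DATABASE
GO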

    Auditing of DDL (Create, Alter, Drop) Commands

With the growing need to audit activities on our database servers, we need to figure out the best approach to collect changes made to a system. With SQL Server 2000 we were limited to auditing login information and capturing data changes using DML (Data Manipulation Language) triggers; there was no easy way to track DDL (Data Definition Language) changes in our databases.

With SQL Server 2005, DDL (Data Definition Language) triggers were introduced. This type of trigger is different from INSERT, UPDATE and DELETE triggers: it fires when a change is made using commands such as CREATE, ALTER or DROP. The trigger fires after the event occurs, so there is no INSTEAD OF option like you have with DML triggers.

    Creating a DDL trigger is just as simple as creating a DML trigger. Here is an example of a trigger that would fire whenever there is a DROP_TABLE or ALTER_TABLE event.

    CREATE TRIGGER tr_tableChange
    ON DATABASE
    FOR DROP_TABLE, ALTER_TABLE
    AS
    BEGIN
    print 'your statement is logged for auditing purpose'
    END
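
A quick way to see it fire (dbo.TrgTest is just a throwaway table for this test):

CREATE TABLE dbo.TrgTest (a INT)
GO
ALTER TABLE dbo.TrgTest ADD b INT
-- prints: your statement is logged for auditing purpose
GO
DROP TABLE dbo.TrgTest
-- prints the message again; the drop still completes, because the trigger
-- fires after the event (add a ROLLBACK in the trigger to undo the statement)
GO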

DDL events can also be collected by using event groups. The example below fires for any event in the DDL_LOGIN_EVENTS group, which includes CREATE_LOGIN, ALTER_LOGIN and DROP_LOGIN.

    CREATE TRIGGER tr_LoginEvents
    ON ALL SERVER
    FOR DDL_LOGIN_EVENTS
    AS
    BEGIN
    print 'your statement is logged for auditing purpose'
    END


Here is a listing of the event groups; the full list can be found in SQL Server 2005 Books Online. Each group consists of individual events, and some event groups contain other event groups. So, for example, DDL_SERVER_LEVEL_EVENTS captures all events that occur at the server level.



Next we will see how to extract information from these DDL triggers using the EVENTDATA() function.

    Sunday, October 25, 2009

    Use of Function in Where clause

    Hi,

Let's look at what happens when we use a function in the WHERE clause.

There are two places where we can use a function: first, to fetch a value in the SELECT clause, and second, to compare against a value in the WHERE clause.

Either usage can affect the performance of the query.

    Let's see a simple example of function.

Create a function that returns 'LastName, FirstName' for the passed EmployeeID.

    CREATE FUNCTION GetEmpName 
    (
    -- Add the parameters for the function here
    @EmployeeId int
    )
    RETURNS NVARCHAR(100)
    AS
    BEGIN
    -- Declare the return variable here
    DECLARE @empname NVARCHAR(100)
    SELECT @empname = LastName +', ' + FirstName FROM Employees WHERE EmployeeID=@EmployeeId
    
    RETURN @empname
    END
    GO
    

    Let's see how this works:

    SELECT dbo.GetEmpName(9) AS Employee_Name

    Dodsworth, Anne

SELECT DISTINCT dbo.GetEmpName(a.EmployeeID) AS Employee, b.TerritoryDescription
FROM dbo.EmployeeTerritories a
INNER JOIN dbo.Territories b
ON a.TerritoryID=b.TerritoryID
WHERE dbo.GetEmpName(a.EmployeeID)='Davolio, Nancy'
    

    Result

Employee        TerritoryDescription
Davolio, Nancy  Neward
Davolio, Nancy  Wilton

    Let's look at the profiler to see how the query is executing:

As we can see in the Profiler trace, the function was executed many times (once for every row it was applied to), which slows down the query.
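
One alternative is to drop the function altogether and inline the expression with a plain join, so the optimizer can see it (this is an equivalent rewrite of the query above):

SELECT DISTINCT e.LastName + ', ' + e.FirstName AS Employee,
b.TerritoryDescription
FROM dbo.EmployeeTerritories a
INNER JOIN dbo.Territories b
ON a.TerritoryID=b.TerritoryID
INNER JOIN dbo.Employees e
ON e.EmployeeID=a.EmployeeID
WHERE e.LastName + ', ' + e.FirstName = 'Davolio, Nancy'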

    Let's change our function a little:

CREATE FUNCTION dbo.GetEmpNameCrossJoin
(
 -- Add the parameters for the function here
 @EmpId int
)
RETURNS TABLE
AS
RETURN (
 -- filter on the parameter; without this WHERE clause the function
 -- would return every employee for every outer row
 SELECT LastName + ', ' + FirstName AS EName
 FROM dbo.Employees
 WHERE EmployeeID = @EmpId)
GO
    
    
    --Now execute this using Cross APPLY
    
    SELECT distinct I.EName, b.TerritoryDescription
    FROM dbo.EmployeeTerritories a
    INNER JOIN dbo.Territories b
    ON a.TerritoryID=b.TerritoryID
    CROSS APPLY dbo.GetEmpNameCrossJoin(a.EmployeeID) I
    
    


We have changed the function to return a table instead of a scalar value, and the call is made through CROSS APPLY. Because this is an inline table-valued function, SQL Server expands its definition directly into the query plan instead of invoking it row by row.

Now Profiler shows only a single execution.

    This way we can improve the performance of the query.

    Happy SQL Coding.

    Wednesday, October 21, 2009

    Synonyms

    Hi,

In SQL Server 2005, a new feature called synonyms was introduced. Let's discuss it today.

SQL Server 2005 introduced synonyms, which enable us to reference another object (view, table, stored procedure or function), potentially on a different server, database or schema. The object name referenced throughout our code can thus map to a completely different underlying object, with no coding changes necessary. Think of a synonym as an alias: a means to simplify migrations and application testing without touching dependent code.
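
A minimal sketch of the idea (all names below are placeholders, not objects used later in this post):

CREATE SYNONYM [dbo].[Customers]
FOR [OldServer].[SalesDB].[dbo].[Customers]
GO

-- after the migration, repoint the synonym; application code stays untouched
DROP SYNONYM [dbo].[Customers]
GO
CREATE SYNONYM [dbo].[Customers]
FOR [NewServer].[SalesDB].[dbo].[Customers]
GO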

    Create a synonym via SQL Server 2005 Management Studio

    To access the synonym interface navigate to SQL Server 2005 Management Studio => Server Name => Databases => Database Name => Synonyms => Right click on the Synonyms folder => New Synonym

In the General tab - key in the synonym name (which must be new in the schema) and the information for the aliased object


In the Permissions tab - click the 'Add' button to grant users rights to the synonym



In the Extended Properties tab - add any extended properties for documentation purposes.




    How to add Synonyms using T-SQL

    USE [NorthWind]
    GO

    CREATE SYNONYM [dbo].[Empls]
    FOR [WSNG1110065709\SQLEXPRESS].[Northwind].[dbo].[Employees]
    GO
    EXEC sys.sp_addextendedproperty @name=N'Description',
    @value=N'This is a synonym referencing the Northwind database ',
    @level0type=N'SCHEMA',@level0name=N'dbo', @level1type=N'SYNONYM',@level1name=N'Empls'

    Let's validate this:

    select * from [dbo].[Empls]

EmployeeID  LastName   FirstName  Title
1           Davolio    Nancy      Sales Representative
2           Fuller     Andrew     Vice President, Sales
3           Leverling  Janet      Sales Representative

    Let's Insert data using Synonym:

    Insert into [dbo].[Empls]
    ( LastName, FirstName, Title)
    values ('Thru Synonym', 'Synonym','Synonym')

    Let's see data through Synonym:

    select * from [dbo].[Empls]

    Let's see data directly from table:

    select * from Employees

EmployeeID  LastName      FirstName  Title                     TitleOfCourtesy
7           King          Robert     Sales Representative      Mr.
8           Callahan      Laura      Inside Sales Coordinator  Ms.
9           Dodsworth     Anne       Sales Representative      Ms.
11          Thru Synonym  Synonym    Synonym                   NULL

    Do let me know if there is any concern.
    Happy SQL Coding..