Monday, August 29, 2011

24 Hours of PASS: Summit 2011 Preview


24 Hours of PASS is back with more sessions on September 7th and 8th. Join knowledgeable SQL Server experts - slated to present in-depth sessions at PASS Summit 2011 - as they preview their SQL Server session topics over these 24 free technical webcasts.
Singapore 8 PM, 
India 5:30 PM

Details of Sessions: 

  Session 01 Brian Knight
Brian Knight, SQL Server MVP, MCSE, MCDBA, is a co-founder of Pragmatic Works, where he works as President and lead architect, and he has written thirteen books on SQL Server over his fifteen-year database career.
Presenting: INTRODUCTION TO DATA MINING IN SQL SERVER ANALYSIS SERVICES. Learn how to create a data mining model to predict your best customers, use algorithms to ensure your marketing budget is spent wisely and use Time Series analysis for budgeting and forecasting.
 Session 02 Brent Ozar
Brent Ozar is a Microsoft Certified Master of SQL Server 2008, a Microsoft SQL Server MVP, and co-founder of Brent Ozar PLF. Brent has over a decade of experience with SQL Server, systems administration, SAN administration, virtualization administration, and project management. Brent blogs like it's going out of style.
Presenting: SAN BASICS FOR DBAs. They keep telling you it's a SQL problem - but is it? Learn the basics about your storage options like RAID 5, RAID 10, solid state drives, and why your biggest bottleneck might be a $5 cable. No storage experience required.
 Session 03 Jonathan Kehayias
Jonathan Kehayias is a Senior Consultant with SQLskills, the youngest MCM for SQL Server 2008, and has been a SQL Server MVP for many years. Jonathan blogs, tweets as @SQLPoolBoy, and is an expert on Extended Events.
Presenting: DIVING INTO EXTENDED EVENTS. Extended Events provide information about the operations of SQL Server that have never before been available. Close the learning gap between Extended Events and SQL Trace.
 Session 04 Peter Myers
Peter Myers is a highly experienced consultant, trainer, mentor and presenter specializing in Microsoft Business Intelligence.
Presenting: EXPLORING POWERPIVOT FOR EXCEL IN SQL SERVER PROJECT "DENALI". Peter Myers explores the new design experience and modelling features available in PowerPivot for Excel in the upcoming release of SQL Server Project “Denali”.
 Session 05 Aaron Nelson
Aaron Nelson is a Senior SQL Server Architect who holds certifications for MCITP: Business Intelligence Developer, Database Administrator, Database Developer; as well as MCTS: Windows Server Virtualization, Configuration (meaning Hyper-V).
Presenting: WHY POWERSHELL? Windows PowerShell is rapidly becoming the go-to scripting choice of System Administrators, but what does that mean for SQL Server DBAs? We'll explore PowerShell and demonstrate how and why DBAs should use it.
 Session 06 Andy Leonard
Andy Leonard is a consultant and trainer with Linchpin People, LLC. He is active in the SQL Server community and is co-author of several books on SQL Server and Integration Services.
Presenting: SSIS IN THE ENTERPRISE. Learn key components of an SSIS Framework to facilitate SSIS development that performs and is manageable in the modern enterprise.
 Session 07 Gail Shaw
Gail Shaw is a SQL MVP from Johannesburg, South Africa, specializing in performance tuning and database optimization.
Presenting: EXTRACTING EXECUTION PLANS. This session will cover assorted ways of obtaining query execution plans from SQL Server.
 Session 08 Allan Hirt
Clustering MVP Allan Hirt is a consultant, published author, speaker, and trainer who has used SQL Server since 1992. Allan travels all over the world to work with and train clients. His most recent book is "Pro SQL Server 2008 Failover Clustering".
Presenting: MULTI-SITE FAILOVER CLUSTERING WITH SQL SERVER 2008, 2008 R2, and DENALI. Learn how to plan a multi-site cluster deployment for SQL Server 2008, 2008 R2, and Denali.
 Session 09 Stacia Misner + Denny Cherry
Stacia Misner and Denny Cherry are Microsoft MVPs with 25 years of IT experience between them; Stacia focuses on the BI side and Denny on the core engine side. Between them, they have written 14 books and hundreds of articles.
Presenting: SO HOW DOES THE BI WORKLOAD IMPACT THE DB ENGINE? In this session, we review the architecture of SQL Server and its BI components and deployment options for optimal performance. We'll also discuss how day-to-day BI operations impact the database engine.
 Session 10 Maciej Pilecki
Maciej Pilecki is a SQL Server MVP, Microsoft Certified Master in SQL Server 2008, Microsoft Certified Trainer and Senior Consultant with Project Botticelli. He is passionate about the internals of the SQL Server database engine, Storage Engine and Query Optimizer, and Performance Tuning.
Presenting: SECRETS OF THE SQLOS. This session will explore a mysterious component of Microsoft SQL Server: its own internal built-in operating system, or SQLOS.
 Session 11 Glenn Berry
Glenn works as a Database Architect at Avalara in Denver, CO. He is a SQL Server MVP, and Adjunct Faculty member at University College - University of Denver.
Presenting: HARDWARE 301: DIVING DEEPER INTO DATABASE HARDWARE. Making the right hardware selection decisions is very important for database scalability. Having properly sized and configured hardware can both increase application performance and reduce capital expenses.
 Session 12 Grant Fritchey
Grant Fritchey: Red Gate Software Product Evangelist, Author, MVP, Consultant, Speaker, Blogger, a Scary DBA.
Presenting: ALL ABOUT EXECUTION PLANS: READING EXECUTION PLANS. This session introduces the basics of reading execution plans. Learn where to start and how to proceed when reading your own plans.
 Session 13 Adam Jorgensen
Adam Jorgensen, President of Pragmatic Works, SQL Server MVP, and SSAS Maestro co-course director, is focused on democratizing SQL Server technology. He mentors executive teams and drives toward this goal by designing and delivering world-class solutions.
Presenting: ZERO TO CUBE - FAST TRACK TO SSAS DEVELOPMENT. Attendees will learn how to drive and build an Analysis Services cube and reports as they rotate through the podium as volunteers.
 Session 14 Rob Farley
Rob Farley (MVP and MCT) runs Australian consultancy LobsterPot Solutions. He is hosting a pre-conference seminar at the PASS Summit 2011, showing ways that a better understanding of Advanced T-SQL features can be used to fix queries.
Presenting: FIXING QUERIES WITH UNIQUENESS. In my pre-con seminar, Fixing Queries through Advanced T-SQL Techniques, I'll be looking at ways you can make your queries run faster. In this 24HOP session, I'll focus on uniqueness and the impact it can have.
 Session 15 Edwin Sarmiento
Edwin Sarmiento is a SQL Server DBA/Windows Engineer for Pythian, a global remote DBA services provider. He is very passionate about technology but has interests in music, professional and organizational development, leadership and management matters.
Presenting: DISASTER RECOVERY IS NOT JUST ABOUT TECHNOLOGY. This session will describe the principles that should drive disaster recovery strategies and how they need to become the foundation of the technology solutions that you need to implement.
 Session 16 Jes Borland
Jes Schultz Borland is a Database Administrator at Kimberly Clark. She holds an AAS - Programmer/Analyst degree. She has worked with SQL Server since 2007, focusing on Reporting Services and day-to-day administration.
Presenting: REPORTING SERVICES 201: FROM BASIC TO WOW! You have solid, reliable reports that you and others depend on. Learn how to take them from basic to "wow", using features such as the tablix, lists, images, and charts.
 Session 17 Karen Lopez
Karen Lopez has 20+ years of data architecture experience on large, multi-project programs. A new SQL Server MVP, she wants you to love your data.
Presenting: 5 PHYSICAL DATABASE DESIGN BLUNDERS & HOW TO AVOID THEM. Mistakes can cost you dearly: performance snags, development delays, and bugs. Learn how to find these blunders, pick up tips for avoiding them, and how to talk about and defend your design recommendations.
 Session 18 Adam Machanic
Adam Machanic is a Boston-based independent database consultant, writer, and speaker. He has written for numerous web sites, magazines, and books. He loves performance monitoring and shortening his bio for the sake of LiveMeeting.
Presenting: BASELINE BASICS OR WHO BROKE THE DATABASE. Does the server seem a bit sluggish today? Is a quick glance at performance counters not revealing any obvious culprit? Where should you go next?
 Session 19 Audrey Hammonds
Audrey Hammonds is a BI Consultant for Key2 Consulting and one half of the Datachix. She holds MCTS certifications in SQL Server 2008 Database Development and SQL Server 2008 Business Intelligence.
Presenting: T-SQL AWESOMENESS: 3 WAYS TO WRITE COOL SQL. There are some truly awesome ways to make your data do just what you need it to do, while improving performance and/or readability. Come learn new-school ways to expand your T-SQL repertoire.
 Session 20 Don Kiely
Don Kiely is a technology consultant and database application developer in Fairbanks, Alaska.
Presenting: CODE-FIRST DEVELOPMENT WITH THE ENTITY FRAMEWORK. Code-first development in Entity Framework is a new feature of version 4.1, based on custom .NET classes. Define your objects then let EF create the database. This session explores this great new feature.
 Session 21 Simon Sabin
Simon Sabin is a principal consultant for SQLKnowHow, which specializes in high-performance, big-data SQL Server systems. His focus is troubleshooting and mentoring the teams responsible for such systems. He is a SQL Server MVP and MCM and is the founder of SQLBits.
Presenting: ADVANCED REPORTING SERVICES. Slicers are an interactivity feature available in Excel and Crescent. Learn how to use features such as multi-valued parameters, expression formatting, and custom code to add slicers to Reporting Services reports.
 Session 22 Victor Isakov
Victor Isakov (MCA, MCM, MCT, MVP) is a Database Architect and Trainer who provides consulting and training services to various organizations in the public, private and NGO sectors globally. He regularly speaks at international conferences and has authored a number of books on SQL Server.
Presenting: IMPORTANT TRACE FLAGS THAT EVERY DBA SHOULD KNOW. In this session, Victor Isakov will present the important trace flags that all DBAs should know, what they do, when to implement them and how best to implement them in a production environment.
 Session 23 Jorge Segarra
Jorge Segarra is a DBA-turned-BI Consultant for Pragmatic Works, SQL Server MVP, and a Regional Mentor for PASS. He is the founder of SQL University, which can be found on his blog.
Presenting: POLICY-BASED MANAGEMENT IN A NUTSHELL. Discover Policy-Based Management, a powerful and easy-to-configure feature added in SQL Server 2008. Learn how to set these up in your own shops, set up an elegant (and free) reporting solution, and much, much more!
 Session 24 Paul Turley
Paul Turley is a Mentor for SolidQ and a SQL Server MVP who has been architecting, managing and developing applications and business intelligence solutions for large and small businesses since 1992. He holds the current MCTS and MCITP certifications for SQL Server BI.
Presenting: VISUAL REPORT DESIGN: BRINGING SEXY BACK. BI is all the rage but you don’t have to invest in an entire BI platform to have fresh, usable reports. Learn design techniques applicable to enterprise analytic solutions and smaller-scale operational reporting.

Happy Learning


Thursday, August 11, 2011

SQL Server Questions Part IV


Here are some more questions with answers on SQL Server. PART - IV:

Q. Define the ACID properties.


ACID (an acronym for Atomicity, Consistency, Isolation, Durability) is a concept that database professionals generally look for when evaluating databases and application architectures. For a reliable database, all four of these attributes should be achieved.

  • Atomicity is an all-or-none proposition.
  • Consistency guarantees that a transaction never leaves your database in a half-finished state.
  • Isolation keeps transactions separated from each other until they are finished.
  • Durability guarantees that the database will keep track of pending changes in such a way that the server can recover from an abnormal termination.
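The atomicity property can be sketched in T-SQL with an explicit transaction. This is a minimal illustration only; the Accounts table and its columns are hypothetical names, not from the original post:

```sql
-- Both updates succeed together or not at all (atomicity).
BEGIN TRY
    BEGIN TRANSACTION;
        UPDATE Accounts SET Balance = Balance - 100 WHERE AccountID = 1;
        UPDATE Accounts SET Balance = Balance + 100 WHERE AccountID = 2;
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- Any error undoes both updates, leaving the database consistent.
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
END CATCH;
```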

Q. What are the Different Normalization Forms?


1st Normal Form: Eliminate Repeating Groups. Make a separate table for each set of related attributes, and give each table a primary key. Each field contains at most one value from its attribute domain.

2nd Normal Form: Eliminate Redundant Data. If an attribute depends on only part of a multi-valued key, then remove it to a separate table.

3rd Normal Form: Eliminate Columns Not Dependent On Key.
If attributes do not contribute to a description of the key, then remove them to a separate table. All attributes must be directly dependent on the primary key.

BCNF: Boyce-Codd Normal Form. If there are non-trivial dependencies between candidate key attributes, then separate them out into distinct tables.

4th Normal Form: Isolate Independent Multiple Relationships. No table may contain two or more 1:n or n:m relationships that are not directly related.

5th Normal Form: Isolate Semantically Related Multiple Relationships. There may be practical constraints on information that justify separating logically related many-to-many relationships.
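As a small sketch of the 1st Normal Form rule above, here is how a repeating group of phone columns might be moved into its own keyed table (all table and column names are illustrative):

```sql
-- Before 1NF: Phone1, Phone2, Phone3 columns repeat in one row.
-- After 1NF: the repeating group becomes rows in a child table.
CREATE TABLE Customer
(
    CustomerID int PRIMARY KEY,
    Name       varchar(50)
);

CREATE TABLE CustomerPhone
(
    CustomerID int REFERENCES Customer(CustomerID),
    Phone      varchar(20),
    PRIMARY KEY (CustomerID, Phone)   -- each field holds one value
);
```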

Q. What are the Different Types of Triggers?

There are two types of triggers:
1) DML Triggers
There are two types of DML Triggers:
a. INSTEAD OF Trigger
INSTEAD OF triggers fire in place of the triggering action, such as an insert, update, or delete.
b. AFTER Trigger
AFTER triggers execute following the triggering action, such as an insert, update, or delete.
2) DDL Triggers
This type of trigger fires in response to events such as DROP TABLE, CREATE TABLE, ALTER TABLE, or login events. DDL triggers are always AFTER triggers.
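A minimal AFTER trigger might look like the sketch below. The Orders and OrderAudit tables are hypothetical; the "inserted" pseudo-table is real and holds the newly inserted rows:

```sql
CREATE TRIGGER trg_Orders_Insert
ON Orders
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- Audit every new order using the rows from the inserted pseudo-table.
    INSERT INTO OrderAudit (OrderID, AuditDate)
    SELECT OrderID, GETDATE()
    FROM   inserted;
END;
```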

Q. What is a View?

A simple view can be thought of as a subset of a table. It can be used for retrieving data as well as for updating or deleting rows. Rows updated or deleted through the view are updated or deleted in the table the view was created from. Note also that as data in the original table changes, so does the data in the view, because a view is simply a way of looking at part of the original table. The results of using a view are not permanently stored in the database; the data accessed through a view is constructed using a standard T-SQL SELECT statement and can come from one or many base tables, or even from other views.
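For example, a view definition and an update through it might look like this (the Employees table and its columns are illustrative):

```sql
CREATE VIEW vw_ActiveEmployees
AS
SELECT EmployeeID, FirstName, LastName
FROM   Employees
WHERE  IsActive = 1;
GO
-- Updating through the view modifies the underlying base table.
UPDATE vw_ActiveEmployees
SET    LastName = 'Smith'
WHERE  EmployeeID = 10;
```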

Q. What is a Linked Server?

A Linked Server is a concept in SQL Server by which we can register another SQL Server instance and query its databases alongside local ones using T-SQL statements. With a linked server, you can create very clean, easy-to-follow SQL statements that retrieve remote data and join and combine it with local data. The stored procedures sp_addlinkedserver and sp_addlinkedsrvlogin are used to add a new linked server.
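A rough sketch of registering and querying a linked server follows. The server name REMOTESRV, the host, and the RemoteDb database are placeholder values; sp_addlinkedserver and sp_addlinkedsrvlogin are the real system procedures mentioned above:

```sql
-- Register a remote SQL Server instance under the name REMOTESRV.
EXEC sp_addlinkedserver
    @server     = N'REMOTESRV',
    @srvproduct = N'',
    @provider   = N'SQLNCLI',
    @datasrc    = N'RemoteHost\SQLInstance';

-- Map the current login to the remote server.
EXEC sp_addlinkedsrvlogin
    @rmtsrvname = N'REMOTESRV',
    @useself    = N'TRUE';

-- Query remote data with a four-part name, joinable to local tables.
SELECT r.*
FROM   REMOTESRV.RemoteDb.dbo.Orders AS r;
```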

Q. What is Collation?

Collation refers to a set of rules that determine how data is sorted and compared. Character data is sorted using rules that define the correct character sequence with options for specifying case sensitivity, accent marks, Kana character types, and character width.
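Collation can also be applied per expression with the COLLATE clause. A small sketch, using an illustrative Customers table (the collation names are real SQL Server collations):

```sql
-- Sort case-sensitively regardless of the column's default collation.
SELECT Name
FROM   Customers
ORDER BY Name COLLATE Latin1_General_CS_AS;

-- Compare case-insensitively for just this query.
SELECT Name
FROM   Customers
WHERE  Name COLLATE Latin1_General_CI_AS = 'smith';
```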

Will add more soon.

Happy Learning


Download SQL Server Code Name “Denali” CTP3 Product Guide


I am sure everyone is aware that the new version of SQL Server, code name "Denali", is in its CTP3 phase. Microsoft has now released a Product Guide for it. 

This product guide release represents a significant milestone for customers and partners who want to experience the extensive new value delivered by SQL Server Code Name “Denali.” As the foundation of the cloud-ready information platform, SQL Server Code Name “Denali” can help organizations unlock breakthrough insights across the organization, as well as quickly build solutions and extend data on-premises and across public clouds, backed by mission-critical confidence.

The SQL Server Code Name “Denali” CTP3 Product Guide includes useful resources and demos that will help you in your evaluation of CTP3.  This includes:

  • 14 Product Datasheets
  • 8 PowerPoint Presentations
  • 5 Technical White Papers
  • 13 Hands-On Lab Preview Documents
  • 6 Click-Through Demonstrations
  • 13 Self-Running Demonstrations
  • 26 Links to On-Line References
  • 44 Links to On-Line Videos including 26 Presentations from North America TechEd 2011
The Product Guide comes with a small application to help you navigate through the content. Once you download the ZIP file, you will want to extract all of the files to your local hard drive. To start the viewer, click the STARTHERE.CMD file. The viewer works with IE8 and IE9. If you are using a different default browser, a simpler HTML version of the viewer will be launched.

Click here to download the guide.

Happy Learning


Monday, August 8, 2011

SQL Server Questions Part III


Here are some more Questions with Answers.

Q. What is the difference between the DELETE and TRUNCATE commands, and is it possible to roll back after a TRUNCATE command?

ANS. The DELETE command removes rows from a table based on a WHERE condition; TRUNCATE removes all the rows from a table.

Below are some points about TRUNCATE:

  • TRUNCATE is faster and uses fewer system and transaction log resources than DELETE. 
  • TRUNCATE removes the data by deallocating the data pages used to store the table’s data, and only the page deallocations are recorded in the transaction log.
  • TRUNCATE removes all the rows from a table, but the table structure, its columns, constraints, indexes and so on remains. The counter used by an identity for new rows is reset to the seed for the column.
  • You cannot use TRUNCATE TABLE on a table referenced by a FOREIGN KEY constraint.
  • TRUNCATE cannot be rolled back unless it is issued inside an explicit transaction; within a BEGIN TRANSACTION … ROLLBACK block, a TRUNCATE can still be undone.
  • TRUNCATE is a DDL Command.

Below are some points about DELETE:

  • DELETE removes rows one at a time and records an entry in the transaction log for each deleted row.
  • DELETE does not reset Identity property of the table.
  • DELETE can be used with or without a WHERE clause.
  • DELETE activates Triggers if defined on table.
  • DELETE can be rolled back.
  • DELETE is a DML command.
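The rollback behavior of TRUNCATE can be demonstrated with a short sketch (dbo.Products is an illustrative table name):

```sql
BEGIN TRANSACTION;
    TRUNCATE TABLE dbo.Products;   -- only page deallocations are logged
    -- Inside an explicit transaction the truncate can still be undone:
ROLLBACK TRANSACTION;
-- The rows in dbo.Products are restored.
```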

Q. Why should we give preference to stored procedures?

ANS. A stored procedure is a group of SQL statements that has been created and stored in a database. A procedure can accept input parameters. When the procedure is modified, all clients automatically get the new version. Stored procedures reduce network traffic and improve performance, and they can be used to help ensure the integrity of the database.

The procedure cache is where the execution plan of a stored procedure is stored. An execution plan states an efficient way in which the query or queries should be executed. Whenever a normal query is executed, its execution plan is created; when a stored procedure is executed, its execution plan is created and stored in the procedure cache. Whenever the same procedure is executed again and its execution plan still exists in the procedure cache, that plan is reused rather than a new plan being created.
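A minimal parameterized stored procedure, as a sketch (dbo.Orders and its columns are illustrative names):

```sql
CREATE PROCEDURE dbo.GetCustomerOrders
    @CustomerID int
AS
BEGIN
    -- The plan compiled for this body is cached and reused on later calls.
    SELECT OrderID, OrderDate
    FROM   dbo.Orders
    WHERE  CustomerID = @CustomerID;
END;
GO
EXEC dbo.GetCustomerOrders @CustomerID = 42;
```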

Q. What is an extent?

ANS. An extent is the basic unit of storage providing space for tables. Every extent contains a number of data pages. As new records are inserted, new data pages are allocated. There are eight data pages in an extent, so as soon as those eight pages are consumed, a new extent with data pages is allocated.

While the extent is the basic unit of storage from a database point of view, the page is the unit of allocation within an extent.
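Since each page is 8 KB and each extent holds 8 pages, you can estimate extents from page counts. A rough sketch against the real catalog views sys.allocation_units, sys.partitions, and sys.objects (the join on hobt_id is an assumption that applies to in-row and row-overflow allocation units):

```sql
SELECT o.name,
       au.total_pages,
       au.total_pages / 8 AS approx_extents,   -- 8 pages per extent
       au.total_pages * 8 AS size_kb           -- 8 KB per page
FROM   sys.allocation_units AS au
JOIN   sys.partitions       AS p ON au.container_id = p.hobt_id
JOIN   sys.objects          AS o ON p.object_id     = o.object_id
WHERE  o.is_ms_shipped = 0;
```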

Q. What is the advantage of SET NOCOUNT ON?

ANS. SET NOCOUNT ON gives a performance boost to action queries by suppressing the "(n row(s) affected)" message that results from running a query.

Qualifying an object with its owner also boosts performance, because SQL Server does not have to work out whether there is a user-specific version of the same object. It also gives benefits in the caching of execution plans.

The performance boost is due to the few bytes of information that make up the "(1 row(s) affected)" message not being transmitted to the client application.

With this in mind, consider the following points:

Communication between the database and a client application on a stand-alone machine is about as fast as it can get. If your front-end application had its own clock and you recorded the time from submitting the query to the moment the client received the full results, you would expect transmitting results across a network to be slower.

These points assume a single, simple INSERT statement. If your procedure carries out multiple operations, the performance boost will be more pronounced.

For queries that retrieve data, the performance boost will be smaller, simply because the size of the "(1 row(s) affected)" message is small compared to the volume of data being returned.

In .NET applications, an ExecuteNonQuery call returns the number of records affected by the operation. With SET NOCOUNT ON, the value this call returns is -1 rather than the actual row count.
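A typical pattern is to put SET NOCOUNT ON at the top of a procedure body, as in this sketch (dbo.EventLog and its columns are illustrative):

```sql
CREATE PROCEDURE dbo.InsertLogEntry
    @Message varchar(200)
AS
BEGIN
    -- Suppress the "(1 row(s) affected)" message for the statements below.
    SET NOCOUNT ON;

    INSERT INTO dbo.EventLog (Message, LoggedAt)
    VALUES (@Message, GETDATE());
END;
```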

Q. Suppose I have a table with ID as the primary key and the IDENTITY property. Whenever a new row is inserted into the table, IDENTITY auto-increments the value by 1.

I don't want to insert the auto-incremented value; I want to supply my own. How can I do this?

ANS. We need to use SET IDENTITY_INSERT [TableName] ON. This allows explicit values to be inserted into the identity column of a table.

At any time, only one table in a session can have the IDENTITY_INSERT property set to ON. If a table already has this property set to ON, and a SET IDENTITY_INSERT ON statement is issued for another table, SQL Server returns an error message that states SET IDENTITY_INSERT is already ON and reports the table it is set ON for.

If the value inserted is larger than the current identity value for the table, SQL Server automatically uses the new inserted value as the current identity value.

The setting of SET IDENTITY_INSERT is set at execute or run time and not at parse time.



SET IDENTITY_INSERT products ON;
INSERT INTO products (id, product) VALUES (3, 'garden shovel');
SET IDENTITY_INSERT products OFF;



Q. What is an Implicit Transaction?

ANS. Implicit Transactions are those that require a COMMIT or ROLLBACK for every transaction.

When ON, SET IMPLICIT_TRANSACTIONS sets the connection into implicit transaction mode. When OFF, it returns the connection to autocommit transaction mode.

For example, if we turn it on and update a table, the changes will not be committed until a COMMIT command is executed.
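That behavior can be sketched as follows (dbo.Products is an illustrative table name):

```sql
SET IMPLICIT_TRANSACTIONS ON;

-- This UPDATE silently opens a transaction.
UPDATE dbo.Products SET Price = Price * 1.1;

-- The change is not durable or visible to others until explicitly ended:
COMMIT TRANSACTION;             -- or ROLLBACK TRANSACTION to undo it

SET IMPLICIT_TRANSACTIONS OFF;  -- back to autocommit mode
```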

Happy Learning


Monday, August 1, 2011

SSIS Questions - Part 1

Let's start a series of Q&As on SSIS as well, since these days it's essential to learn SSIS or some other BI tool.
I feel SSIS is one of the best tools to start with. Let's look at some of the basic questions asked about SSIS.

Q. What is the control flow in SSIS? 
Just as a .NET application has a Main entry point, an SSIS package has a Control Flow. 
Containers, Data flow tasks, Administration tasks, Precedence constraints, and Variables are elements of the control flow. 
The control flow is the highest level of the control process. It allows you to orchestrate and manage the run-time activities of data flows and other processes within a package. You can design a control flow that uses an Execute Package task to manage the sequence of processing for a set of existing packages (the master package concept). This capability allows you to combine individual packages into a highly manageable workflow process. Use precedence constraints to set the process rules and to specify sequence within the control flow. An SSIS package consists of a control flow and one or more objects; data flow and event handler components are optional.

Following are some key points of Control Flow:

  • Process is the key: precedence constraints control the package flow based on task completion, success, or failure.
  • Task 1 must complete before Task 2 can begin.
  • The smallest unit of the control flow is a task.
  • The control flow does not move data from task to task.
  • Tasks run in series when connected with precedence constraints, or in parallel when not connected.
  • A package's control flow is made up of containers and tasks connected with precedence constraints to control the order of package execution. 

Q. What is a data flow in SSIS?


The Data Flow task encapsulates the data flow engine that moves data between sources and destinations, and lets the user transform, clean, and modify data as it is moved. The addition of a Data Flow task to a package control flow makes it possible for the package to extract, transform, and load data.

A data flow consists of at least one data flow component, but it is typically a set of connected data flow components: sources that extract data; transformations that modify, route, or summarize data; and destinations that load data. Components are connected in the data flow by paths. Each path specifies the two components that are the start and the end of the path.

At run time, the Data Flow task builds an execution plan from the data flow, and the data flow engine executes the plan. You can create a Data Flow task that has no data flow, but the task executes only if it includes at least one data flow.

A package can include multiple Data Flow tasks, and complex packages frequently do. For example, if a package requires that data flows be run in a specified sequence, or that other tasks be performed between the data flows, you must use a separate Data Flow task for each data flow.

The Data Flow task also manages error flows. At run time, row-level errors may occur when data flow components convert data, perform a lookup, or evaluate expressions. For example, a data column with a string value cannot be converted to an integer, or an expression tries to divide by zero. Both operations cause errors, and the rows that contain the errors can be processed separately using an error flow.

There are many types of transformations that can be applied as the data moves from the data source to the destination. Here are some of them:

Copy Column - This transformation copies the data in a column and creates a new column with it.
Script Component - This uses a Visual Basic script to transform the data. You can access the data programmatically, and set the component up as a data source, destination, or transformation. When you open the Script Component, there is a Design Script button that invokes a Visual Studio editor where you can write the script with IntelliSense support.
Sort - This transformation sorts the data based on one or more columns setup to sort in a specified order.
Pivot and Unpivot - New to SQL Server 2005 is an easy way to pivot and unpivot data; both are supported as transformations. Pivoting data means you can make the columns of the result set based on distinct values in the data.
Merge/Union All – Merging allows you to combine two inputs into one output. You can specify how the input columns map to the output. The Merge transformation is a little more restrictive than the Union All transformation.
Conditional Split – Conditional Split works the opposite way: based on values within the data, you can set up expressions that route rows down different outputs when a condition matches. For instance, you can split on the expression “ListPrice > 100”, and any row that matches is sent down a specific data flow path.
OLE DB Command – This transformation executes a SQL statement for every row in the input source.
Lookup – This transformation looks up the value of a field in a lookup data source/table.
Derived Column - A way to add an additional column in the dataset.
Aggregate - This allows you to group and aggregate data.
Fuzzy Lookup/Fuzzy Grouping - These use approximate (fuzzy) string matching to cleanse, standardize, and de-duplicate string data.