SQL Server: TempDB

Question:

What is TempDB?

Answer:

This is a short blog post on TempDB.  TempDB is essentially a scratch database for SQL Server.  It’s also one of the busiest databases on an instance, if not the busiest.  Learning what it does, and does not do, is important for both the DBA and the developer.  Setting up TempDB is also an important task: sizing it correctly, creating multiple TempDB files, and configuring autogrowth properly will help make your DBA life simpler.

I’ll start off with a list of facts on TempDB, followed by some DMVs to look at TempDB usage and performance, and finally a word on troubleshooting.

Some TempDB facts:

  1. Only one TempDB database exists per instance
  2. Essentially the scratch database for SQL Server
  3. TempDB contains a data file and a log file
  4. You cannot backup TempDB
  5. TempDB uses Simple recovery model
  6. TempDB is cleared upon instance restart
  7. TempDB uses the Primary filegroup; you cannot add other filegroups
  8. You can have multiple TempDB files within the filegroup
  9. Multiple TempDB data files should be sized the same, and their autogrowth settings should be the same. Don’t use percentages for autogrowth
  10. Don’t accept the default AutoGrowth configuration for TempDB
  11. TempDB holds three(3) things: user data, internal data, and the Version Store
  12. TempDB doesn’t log redo information, but it does still log undo information. This allows better performance when performing large changes vs. a user database
  13. Table Variables: defined with the “at” sign (@)
    • Declare @TempVar Table …
    • Not created “in memory”; they live in TempDB, just like temporary tables
    • Cannot be assigned indexes directly, but you can assign constraints, including a primary key or a unique constraint
    • Table variables do not use statistics, even if a key or constraint is assigned; the optimizer uses a fixed estimate of one (1) row for cardinality
  14. Temporary Tables: defined with a hash sign (#); global temp tables are defined with a double hash sign (##)
    • Create Table #TempTable (or ##TempTable) …
    • Unlike Table Variables, Temp Tables contain statistics
    • They can be indexed (clustered, non-clustered)
    • They can be defined for global access (##), where any session connected to the instance can access them
    • Temporary tables are fully transaction-aware; table variables are not
    • Temp Tables are limited in scope to the session that created them; once the session ends, the temp table is dropped (a short sketch covering both follows this list)
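
To make the table variable and temp table facts concrete, here is a minimal sketch (the table and index names are just illustrative; run it in any user database, or in TempDB itself):

DECLARE @TempVar TABLE
(
    OrderID   INT      NOT NULL PRIMARY KEY,   -- a constraint is allowed on a table variable
    OrderDate DATETIME NOT NULL
);
-- CREATE INDEX on @TempVar is not allowed

CREATE TABLE #TempTable
(
    OrderID   INT      NOT NULL,
    OrderDate DATETIME NOT NULL
);

-- Temp tables can be indexed explicitly, and they carry statistics
CREATE CLUSTERED INDEX IX_TempTable_OrderID ON #TempTable (OrderID);
CREATE NONCLUSTERED INDEX IX_TempTable_OrderDate ON #TempTable (OrderDate);

DROP TABLE #TempTable;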

# 3 & # 8.  TempDB contains a data file and a log file:

[Screenshot: TempDB data and log files in SSMS]

# 4.  You cannot backup TempDB:

[Screenshot: attempting to back up TempDB fails in SSMS]

# 5.  TempDB uses Simple recovery model:

[Screenshot: TempDB set to the Simple recovery model]

# 6.  TempDB clears upon SQL Server restart:

Before:

[Screenshot: temporary tables present before the restart]

After:

[Screenshot: temporary tables gone after the restart]

# 7.  TempDB uses the Primary filegroup; you cannot add additional filegroups:

[Screenshot: attempting to add a secondary filegroup to TempDB fails]

# 13 (d).  Table Variables do not contain statistics:

[Screenshot: table variable cardinality estimate of 1]

# 14 (b).  Temporary Tables contain statistics:

[Screenshot: temporary table cardinality estimate based on statistics]

# 14 (c).  Temporary Tables can be indexed (Clustered and Non-clustered):

[Screenshot: indexes created on a temporary table]

One additional item worth mentioning is the Version Store.  TempDB is used for Read Committed Snapshot Isolation (RCSI) and Snapshot Isolation, where the row versions are stored.  You can read more about TempDB and the version store here:

https://blogs.msdn.microsoft.com/sqlserverstorageengine/2008/12/30/managing-tempdb-in-sql-server-tempdb-basics-version-store-simple-example/
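
As a small, hedged sketch (the database name is a placeholder, and enabling RCSI requires no other active connections in that database), this is how RCSI is turned on and how you can peek at the version store:

-- Enable Read Committed Snapshot Isolation on a user database
ALTER DATABASE MyUserDB SET READ_COMMITTED_SNAPSHOT ON;

-- Rough count of row versions currently held in the TempDB version store
SELECT COUNT(*) AS [Version Store Rows]
FROM sys.dm_tran_version_store;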

Impacts on TempDB

Each TempDB data file contains allocation metadata pages (PFS, GAM, and SGAM pages).  Whenever requests come in to TempDB to create or drop objects, this allocation metadata has to be updated.  To enforce data integrity, SQL Server uses a lightweight lock called a latch, which makes sure only one thread is modifying an allocation page at a time.  Because of this, when many requests are pending, multiple threads can be queued waiting for their turn to update the page, and contention builds up.

Using more than one TempDB data file helps reduce this contention because each file brings its own set of allocation pages.  While updates to any single allocation page are still serialized, the presence of two or more data files spreads the requests out.

You can view wait contention against TempDB using the DMV sys.dm_os_waiting_tasks; the wait types we are concerned with are PAGELATCH_EX and PAGELATCH_UP.  You can dive further into cumulative wait statistics by using the DMV sys.dm_os_wait_stats.
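
Here is a minimal sketch of that kind of check; the filter on resource_description assumes the usual database_id:file_id:page_id format, and TempDB is always database_id 2:

SELECT wt.session_id
     , wt.wait_type
     , wt.wait_duration_ms
     , wt.resource_description                 -- format: database_id:file_id:page_id
FROM sys.dm_os_waiting_tasks AS wt
WHERE wt.wait_type LIKE 'PAGELATCH%'
  AND wt.resource_description LIKE '2:%';      -- database_id 2 = TempDB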

Size the data files of TempDB the same.  TempDB uses a proportional fill algorithm to populate the data files evenly, and sizing them equally allows this algorithm to distribute data properly.
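
For example, a second data file can be added and the existing one resized to match (the file path and sizes below are placeholders; tempdev is the default logical name of the first TempDB data file):

ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 4096MB, FILEGROWTH = 512MB);

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2,
          FILENAME = 'T:\TempDB\tempdev2.ndf',   -- placeholder path
          SIZE = 4096MB,
          FILEGROWTH = 512MB);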

Large data files may slow SQL Server startup after a shutdown, because TempDB is re-created each time the instance starts.  Using IFI (Instant File Initialization) can help this process.  Instant File Initialization is set up at the server level.

Memory Spills to TempDB

There are a few ways that memory-consuming operations can “spill” over into TempDB.  One is a cardinality mismatch between the actual data in the table and what the optimizer estimates.  Memory grants are sized by the optimizer based upon the cardinality (the number of rows expected to satisfy the query), which is directly related to the presence of up-to-date statistics.  If the optimizer (cardinality estimator) uses the wrong value for cardinality, it may not grant enough memory to perform the work, and the operation spills to TempDB.  Since reading from disk is far slower than reading from RAM, this can become an issue depending on the query and its execution frequency.
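
One hedged way to see which sessions are doing this kind of internal work (sorts, hashes, spools) in TempDB is the session space usage DMV; the arithmetic below assumes the standard 8 KB page size:

SELECT su.session_id
     , su.internal_objects_alloc_page_count * 8 / 1024 AS [Internal Objects MB]
     , su.user_objects_alloc_page_count     * 8 / 1024 AS [User Objects MB]
FROM sys.dm_db_session_space_usage AS su
WHERE su.internal_objects_alloc_page_count > 0
ORDER BY su.internal_objects_alloc_page_count DESC;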

Best Practices

  1. Place TempDB on either a fast disk, RAID, or SSD. This should be a dedicated device, containing only TempDB
  2. Use multiple data files within TempDB
  3. Size the data files the same, along with their associated autogrowth values
  4. Don’t use percentages for data file autogrowth (a quick configuration check follows this list)
  5. Configure Instant File Initialization (IFI) – for data files, log files cannot use IFI
  6. Watch for latch contention with DMVs:
    • dm_os_waiting_tasks
    • dm_os_wait_stats
  7. Latch types:
    • PAGELATCH_EX
    • PAGELATCH_UP
  8. Use Table Variables and Temporary Tables sparingly
  9. Table Variables do not use statistics, whereas Temporary Tables do. This can have a direct impact on performance
  10. Keep statistics current
  11. Eliminate unnecessary sorting operations – large sorts might spill into TempDB
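
As a quick sanity check for items 2 through 4, here is a simple sketch against the TempDB system catalog showing the current file sizes and growth settings:

SELECT name                       AS [Logical File Name]
     , type_desc                  AS [File Type]
     , size * 8 / 1024            AS [Size MB]     -- size is stored in 8 KB pages
     , CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS VARCHAR(10)) + ' %'
            ELSE CAST(growth * 8 / 1024 AS VARCHAR(10)) + ' MB'
       END                        AS [Growth]
FROM tempdb.sys.database_files;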

Summary:

In this post I’ve explained what you can, and cannot, do when setting up and maintaining TempDB.  I’ve also shown some best practices and described temp variables and temp tables.  TempDB is being used more and more by the SQL Server OS to perform many functions.  Therefore, setting up TempDB correctly, on the correct drive type, can dramatically increase its performance.

Resources:

Books:

Microsoft SQL Server 2014:  Query Tuning & Optimization, Benjamin Nevarez, McGraw-Hill, 2015.

SQL Server DMVs In Action, Ian W. Stirk, Manning Publications Company, 2011.

MSDN:

Best Practices:

https://blogs.msdn.microsoft.com/cindygross/2009/11/20/compilation-of-sql-server-tempdb-io-best-practices/

General:

https://msdn.microsoft.com/en-us/library/ms190768(v=sql.120).aspx

SQL Server 2014 Cardinality Estimator

Question:

Two-part:  What is different in the new Cardinality Estimator, and how can I check to see which one I’m using?

Answer:

The cardinality estimator (“CE”) was changed in SQL Server 2014.  New assumptions and logic were added based upon modern workloads (OLTP and data warehousing).  According to Microsoft, most customers see query performance improve with these additions, or at least stay the same, while a small number of customers have seen regressions.

Microsoft doesn’t document much about the query optimizer, but its online documentation does explain some of what’s going on.  The examples are shown in a “new” vs. “old” format: what happens now versus what happened with the old estimator.

Microsoft has provided a couple of ways to change the configuration to enable or disable the new cardinality estimator.  Trace flags 9481 and 2312 disable and enable, respectively, the new (2014) cardinality estimator.  You can apply these flags to an individual query using the OPTION (QUERYTRACEON 9481) hint.

SELECT * FROM Sales.SalesOrderDetail SOD
WHERE SOD.SalesOrderID = 3124
OPTION (QUERYTRACEON 9481); -- use legacy CE

Alternatively, to set the entire database to use the new CE, you use the ALTER DATABASE statement to raise the compatibility level.

ALTER DATABASE <DATABASE_NAME> SET COMPATIBILITY_LEVEL = 120;

You could also use DBCC TRACEON with the -1 option to enable the trace flag globally, or use the -T option in the startup configuration.  This allows you to use the features of SQL Server 2014 while keeping the legacy CE.
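
For instance, here is a small sketch of verifying which trace flags are active (DBCC TRACESTATUS is the standard way to check):

DBCC TRACESTATUS (9481, 2312);  -- status of the two CE-related flags
GO
DBCC TRACESTATUS (-1);          -- list every globally enabled trace flag
GO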

So getting back to what changes have actually been made, Microsoft examples show us the following:

  1. Ascending Key Issue: statistics may not contain a specific value because they have not been updated since the latest data was loaded.  For instance, data is inserted on a daily basis, but the statistics may not have been updated to reflect the latest rows, so a WHERE predicate on a date later than anything reflected in the current statistics gets a poor estimate.
  2. Correlation between filtered predicates on the same table: in a query where you AND columns in the WHERE clause that are correlated, such as City and ZipCode, the old CE would not draw a correlation between the two.  The new CE assumes there is some correlation between the two columns and uses a different algorithm to estimate the data distribution (see the sketch after this list).
  3. Filtered predicates on different tables: rather than assuming that filtered columns from different tables ANDed together are correlated, the new CE assumes they are not correlated.  This is based on the modern workloads tested.
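
As a hedged illustration of item 2 (the city and postal code values are only assumed to exist in the AdventureWorks Person.Address table), you can compare the estimated row counts for the same correlated predicates under each CE:

-- Legacy CE: treats City and PostalCode as independent predicates
SELECT *
FROM Person.Address A
WHERE A.City = 'Bothell'
  AND A.PostalCode = '98011'
OPTION (QUERYTRACEON 9481);
GO

-- New (2014) CE: assumes some correlation between the two columns
SELECT *
FROM Person.Address A
WHERE A.City = 'Bothell'
  AND A.PostalCode = '98011'
OPTION (QUERYTRACEON 2312);
GO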

Prior to migrating older databases (pre-2014) to the new CE in a production system, Microsoft suggests testing the database against the new CE.  This can be done in a testing environment using the two methods I mentioned earlier in this blog – Alter Database and Trace Flags.  This allows you to identify any potential regressions on a test box, and fix them prior to a production migration.  Any databases migrated from an older version to a 2014 instance will retain the legacy CE by default.

To view the compatibility level of the databases on an instance, query the sys.databases catalog view and check the compatibility_level column:

SELECT DB.compatibility_level AS [Compatibility Level]
     , DB.name AS [Database]
FROM sys.databases DB;

To validate the CE being used by a query, you have a couple of options.  One is to include the Actual Execution Plan.  Consider the query:

SELECT *
FROM SALES.SalesOrderDetail SOD;

[Screenshot: query with the actual execution plan captured in SSMS]

Right-click the “Select” operator and choose “Properties”:

[Screenshot: Select operator Properties window]

Under “CardinalityEstimationModelVersion”, you’ll see either 70, for the old CE, or 120 for the new CE.

*NOTE:  70 does not mean that the database is using the CE from SQL Server 7.0; 70 is used for all the legacy versions.  70 = old CE, 120 = new CE.

The other option is to view the XML of the query execution plan.  Right-click in the execution plan window and select “Show Execution Plan XML…”:

[Screenshot: Show Execution Plan XML menu option]

Once you have the XML open, find the tag:  StmtSimple.

You should see:

<StmtSimple … CardinalityEstimationModelVersion="70"

To turn on / off the new CE on an individual query, you can use the QUERYTRACEON option.  Here is an example of turning “off” the new CE:

SELECT *
FROM HR.Employees
OPTION (QUERYTRACEON 9481); -- 9481 old CE

Here is an interesting scenario using the DBCC TRACEON command to set the CE to the old CE:

DBCC TRACEON (9481, -1) -- old CE, make it global
GO

Indeed, if I query any of my user databases on the instance, I’ll see that the legacy CE is being used.  However, if I use the QUERYTRACEON option shown above with the 2312 value, that query’s execution plan will override the global setting and use the new (120) CE.  Here is the code:

DBCC TRACEON (9481, -1) -- set to old CE globally
GO
USE TSQLFundamentals2008
GO
SELECT *
FROM HR.Employees
OPTION (QUERYTRACEON 2312); -- option to use new CE for this query

Oh, and while all this is happening, the Compatibility Level stays the same. Here is the query to prove it:

SELECT DB.compatibility_level AS [Compatibility Level]
     , DB.name AS [Database]
FROM sys.databases DB
WHERE DB.name IN ('AdventureWorks2014', 'TSQLFundamentals2008')
ORDER BY [Compatibility Level] DESC;

[Screenshot: compatibility level query output]

 

*NOTE:  AdventureWorks2014 is showing a 110 Level because I changed it manually earlier.

So, the TSQLFundamentals2008 database will use the legacy CE, but keep the Compatibility Level of 120 (2014), unless the QUERYTRACEON option is used.

Isn’t science fun!

Summary

In this blog I’ve shown you how to enable and disable the new cardinality estimator.  I’ve discussed how to set this at the database level with the compatibility level, globally with trace flags, and at the query level using QUERYTRACEON.  Finally, I’ve shown how to validate the version of the cardinality estimator being used by including the QEP and looking at the properties, or at the XML version of the plan.

There is a lot more to discuss about the new cardinality estimator:  how it acts with filtered predicates; associations derived between related columns; and how joins are computed using the new estimator.  I’ll post another blog with some examples of these.

You can review a white paper written by Joseph Sack that goes into detail on the items I’ve mentioned above.  Check out the references for the link.

Knowledge is Power!

References

Microsoft SQL Server – Optimize your Query Plans with the SQL Server 2014 Cardinality Estimator,  written by Joseph Sack:

http://download.microsoft.com/download/D/2/0/D20E1C5F-72EA-4505-9F26-FEF9550EFD44/Optimizing%20Your%20Query%20Plans%20with%20the%20SQL%20Server%202014%20Cardinality%20Estimator.docx

MSDN:

MSDN blog on the new Cardinality Estimator (Two parts):

https://blogs.msdn.microsoft.com/psssql/2014/04/01/sql-server-2014s-new-cardinality-estimator-part-1/

https://blogs.msdn.microsoft.com/saponsqlserver/2014/01/16/new-functionality-in-sql-server-2014-part-2-new-cardinality-estimation/

SQL Server: Parameter Sniffing

Question:           

What is parameter sniffing, and is it a bad thing?

Answer:

Parameter sniffing is actually a very good thing, most of the time.  When it does go bad, it’s usually extremely bad.  I’ll define it, demonstrate it, and provide some ways to avoid the bad cases of parameter sniffing.

So what is parameter sniffing?  The optimizer uses statistics histograms to estimate the cardinality of a given query.  One of the things the optimizer does is evaluate the values of the parameters being passed in.  This is “parameter sniffing.”

When the query or stored procedure containing a parameter is executed, the value supplied for the parameter on that first call is used when creating the execution plan.  Subsequent calls to this query/SP will reuse the execution plan stored in cache.  So, if the first parameter value is representative of the data as a whole, this is good.  However, if a value is passed that is not representative of the data, the query may suffer in performance.

The best way to understand this is with an example.  Using the AdventureWorks20XX database, a good example table is Sales.SalesOrderDetail.  I’ll run a query to list the distribution of ProductID values, create a stored procedure with one input parameter, and then execute the procedure to demonstrate good, and bad, parameter sniffing.  Finally, I’ll discuss some of the most popular ways to avoid bad parameter sniffing scenarios.

Query the Sales.SalesOrderDetail table to get a count distribution for the ProductID values:

SELECT SOD.ProductID
     , COUNT(*) AS [Count]
FROM Sales.SalesOrderDetail SOD
GROUP BY SOD.ProductID
ORDER BY COUNT(*) DESC;
GO

This returns:

ProductID    Count
---------    -----
870          4688
712          3382
873          3354
...          (many more rows)
943          6
942          5
897          2

So, from this query we can see that ProductID 897 has only two entries, while ProductID 870 has 4,688 entries!  That’s quite a difference; let’s see if it matters.

Let’s create a stored procedure to query the Sales.SalesOrderDetail table on ProductID:

CREATE PROCEDURE DBO.TEST (@prod_id int)
AS
SELECT *
FROM [Sales].[SalesOrderDetail] SOD
WHERE SOD.ProductID = @prod_id;

Now, let’s execute it and pass the parameter value of 897:

EXEC DBO.TEST @prod_id = 897;  -- 2 records

[Screenshot: execution plan with a key lookup]

The execution plan reveals a key lookup.  For this query, the plan is good.  Remember though, key lookups require a lot of IO when many rows are involved, so this may not be the best plan in all cases.

Looking at the Select operator properties, we see that the parameter @prod_id was set to 897 when the plan was compiled, as we would expect.

[Screenshot: Select operator properties showing the compiled value 897]

Now, let’s take a look at an execution with a different parameter value, 870, and for grins, let’s also turn STATISTICS IO on:

-- CLEAR THE CACHE
DBCC FREEPROCCACHE
GO
SET STATISTICS IO ON;
GO
EXEC DBO.TEST @prod_id = 870;  -- 4688 records
GO
SET STATISTICS IO OFF;
GO

Whoa!  While the optimizer reused the existing execution plan in cache (key lookup), take a look at the logical reads it performed: 14,380!

(4688 row(s) affected)
Table 'SalesOrderDetail'. Scan count 1, logical reads 14380, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

The Select operator properties show us that the compiled value for the plan generation was 897, but the runtime value was, of course, 870.

[Screenshot: Select operator properties showing compiled value 897 and runtime value 870]

This would be an example of parameter sniffing gone bad.  Executing the stored procedure with a value that has a high selectivity (2 records in our case) will cause the optimizer to choose an execution plan that may not perform well when other parameter values are entered.

In this example the compiled value of 897 produced the key lookup plan.  When the parameter value of 870 was entered, the optimizer said, “Oh, here’s an existing plan that uses a key lookup, cool, I’ll use it!”  And boom!  Instead of doing a key lookup of 2 records, we are now doing a key lookup for 4688 records!  The Statistics IO output shows the high IO due to the reuse of the same plan.
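
If you want to confirm the plan reuse yourself, a quick sketch against the plan cache DMVs (the LIKE filter is just a convenient way to find the test procedure) shows the cached plan and how many times it has been used:

SELECT cp.usecounts      -- how many times the cached plan has been reused
     , cp.objtype
     , st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE cp.objtype = 'Proc'
  AND st.text LIKE '%SalesOrderDetail%';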

So, before we discuss what can be done to avoid this type of issue, let’s ask one more question:  What if I recompile the plan?  Okay, let’s do that and see what happens.

To force the statement to recompile on each execution, we alter the stored procedure and add OPTION (RECOMPILE):

ALTER PROCEDURE [dbo].[TEST] (@prod_id int)
AS
SELECT *
FROM [Sales].[SalesOrderDetail] SOD
WHERE SOD.ProductID = @prod_id
OPTION (RECOMPILE);
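
As an aside, and not the approach used in the rest of this post, there are a couple of other well-known ways to force a fresh compile, sketched here:

-- Recompile just this one execution, leaving the procedure definition alone
EXEC DBO.TEST @prod_id = 870 WITH RECOMPILE;
GO

-- Mark the procedure so its cached plan is discarded and rebuilt on the next execution
EXEC sp_recompile N'dbo.TEST';
GO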

Okay, so now when dbo.TEST is executed, it will recompile and use the newly passed parameter value to generate an execution plan.  Let’s execute it and check this functionality.

EXEC DBO.TEST @PROD_ID = 870  -- 4688 records

[Screenshot: execution plan with a clustered index scan (OPTION RECOMPILE)]

Now we see the newly generated execution plan based upon the parameter value of 870.  This plan uses a Clustered Index Scan operation.  Let’s take another look at the stored procedure execution using STATISTICS IO:

(4688 row(s) affected)

Table 'SalesOrderDetail'. Scan count 1, logical reads 1242, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

As you might expect, the logical reads have dropped dramatically, lowering the IO and thus improving performance for this parameter value.

The downside of this approach, where we recompile each time the stored procedure is called, is that recompiles have overhead: CPU cycles, etc.  So is this the only way to avoid a case of bad parameter sniffing?  Nope.

Solutions:

Options are available to assist with the parameter sniffing issue.  Of course, we could use the example above where we recompile the statement for each execution, but there are other alternatives.

  • OPTION (RECOMPILE) – already presented
  • OPTION (OPTIMIZE FOR (@VARIABLE = VALUE))
  • OPTION (OPTIMIZE FOR (@VARIABLE = UNKNOWN))
  • Use local variables

The OPTION (OPTIMIZE FOR (@VARIABLE = VALUE)) hint tells the optimizer to compile the statement using VALUE rather than whatever is passed as the parameter.  This option may be useful when the stored procedure is called over and over with a particular value.

Here is an example of Optimize for VALUE:

ALTER PROCEDURE DBO.TEST (@prod_id int)
AS
SELECT *
FROM [Sales].[SalesOrderDetail] SOD
WHERE SOD.ProductID = @prod_id
OPTION (OPTIMIZE FOR (@prod_id = 897));

If we execute the SP and pass value 870:

EXEC DBO.TEST @prod_id = 870;  -- 4688 records
GO

We see the following plan:

[Screenshot: key lookup plan generated from the OPTIMIZE FOR value 897]

The execution of the SP essentially ignores the value of 870 and uses the plan compiled for 897 instead.  In this case, we wouldn’t want to pass values like 870 very often.

The OPTION (OPTIMIZE FOR (@VARIABLE = UNKNOWN)) hint tells the optimizer to use statistical densities rather than the statistics histogram.  All parameter values get the same estimated record count.  Some query plans may not be optimal because densities are not as accurate as histograms.  When this option is used, parameter sniffing is effectively turned off.

Here is an example of Optimize for Unknown:

ALTER PROCEDURE DBO.TEST (@prod_id int)
AS
SELECT *
FROM [Sales].[SalesOrderDetail] SOD
WHERE SOD.ProductID = @prod_id
OPTION (OPTIMIZE FOR (@prod_id = UNKNOWN));

If we execute this SP, passing both the 870 and 897 values, we see something interesting:

[Screenshot: clustered index scan plan used for both parameter values]

Both values use the same plan, a Clustered Index Scan.  Looking at the XML output of the plan, we see the StatementEstRows attribute under the StmtSimple tag:

<StmtSimple StatementCompId="3" StatementEstRows="456.079"

This estimated row count is the same for both parameter values passed.  As the definition states, “the same estimated row count” is used for all parameters.

Use Local Variables: when you assign the parameter passed to the stored procedure to a local variable, and use that variable in the query, the optimizer uses the same number of estimated rows for every parameter value.

When we use this option, we are effectively “turning off” parameter sniffing.

Here is an example of local variable usage:

CREATE PROCEDURE DBO.TEST2 (@prod_id int)
AS
DECLARE @PRODUCT_ID INT = @prod_id;  -- assign the parameter to a local variable
SELECT *
FROM [Sales].[SalesOrderDetail] SOD
WHERE SOD.ProductID = @PRODUCT_ID;

Summary

In this post I’ve described parameter sniffing, and shown that it is a normal operation performed by the optimizer.  Using the knowledge gained in these examples, and one or more of the solutions presented, you should be able to identify and remedy any suspect stored procedures.  The takeaway from this is to have alternatives when troubleshooting stored procedures where certain parameter values give less-than-optimal results.

Resources

Microsoft MSDN – SQL Server:
https://msdn.microsoft.com/library/mt590198.aspx

Microsoft MSDN – Stored Procedures:
https://msdn.microsoft.com/en-us/library/ms187926.aspx

Microsoft MSDN – Statistics and Plan Cache:
https://msdn.microsoft.com/en-us/library/cc293624.aspx
https://msdn.microsoft.com/en-us/library/ms190397.aspx

Book:
Microsoft SQL Server 2014: Query Tuning & Optimization, Benjamin Nevarez, McGraw-Hill, 2015.

PowerShell: Finding Unused Indexes in a Database

Question:

How can I use PowerShell to find unused indexes on databases over multiple servers?

Answer:

Using the SMO namespace; specifically, the Server class of the Microsoft.SqlServer.Management.Smo namespace.

In this example we create the SQL statement that will query the DMVs to return indexes that are not being used.

We build the SQL query as a here-string, and then use the Invoke-Sqlcmd cmdlet to execute the query against the specified server(s) and database.

function Get-UnusedIndexes
{
    [CmdletBinding()]
    Param
    (
        [Parameter(Mandatory = $true, ValueFromPipeline = $true)]
        [string[]] $computername,

        [Parameter(Mandatory = $true, ValueFromPipeline = $true)]
        [string] $database,

        [switch] $ShowSQL
    )

    BEGIN
    {
        Import-Module SQLPS -DisableNameChecking | Out-Null

        # Here-string - SQL query definition
        $SQL = @"
SELECT OBJECT_NAME(i.[object_id]) AS [Table Name]
     , i.name                     AS [Unused Index Name]
     , i.type_desc                AS [Index Type]
     , [Index Enabled] =
           CASE
               WHEN i.is_disabled = 1 THEN 'FALSE'
               WHEN i.is_disabled = 0 THEN 'TRUE'
               ELSE 'UNKNOWN'
           END
FROM sys.indexes AS i
INNER JOIN sys.objects AS o
        ON i.[object_id] = o.[object_id]
WHERE i.index_id NOT IN ( SELECT ddius.index_id
                          FROM sys.dm_db_index_usage_stats AS ddius
                          WHERE ddius.[object_id] = i.[object_id]
                            AND i.index_id = ddius.index_id
                            AND database_id = DB_ID() )
  AND o.[type] = 'U'
  AND i.name IS NOT NULL
ORDER BY OBJECT_NAME(i.[object_id]) ASC, [Index Type];
"@
    } # End Begin block

    PROCESS
    {
        foreach ( $computer in $computername )
        {
            try
            {
                $result   = $null
                $computer = $computer.ToUpper()

                $test = Test-Connection -ComputerName $computer -Count 1 -Quiet -ErrorAction Stop

                if ( $test -eq $false )
                {
                    throw "Unable to Ping Server: $computer"
                }

                $server = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Server -ArgumentList $computer -ErrorAction Stop

                if ( $ShowSQL )
                {
                    Write-Output "SQL Commands:"
                    Write-Output $SQL
                }

                Write-Output ""
                Write-Output "Working Server: $computer"

                if ( $server.Databases[$database].IsAccessible )
                {
                    $result = Invoke-Sqlcmd -Query $SQL -ServerInstance $server -Database $database -ErrorAction Stop
                }
                else
                {
                    throw "Database: $database, Is Not Accessible"
                }

                if ( $result -ne $null )
                {
                    $result | Format-Table -AutoSize
                }
                else
                {
                    Write-Output "No Unused Indexes Found on Server: $computer, Database: $database"
                }
            }
            catch
            {
                Write-Output "An Error Has Occurred:"
                $_.Exception.Message
                Write-Output ""
                continue
            }
        }
    } # End Process block

    END {}
} # End Function Get-UnusedIndexes

Execute the function:

PS> Get-UnusedIndexes -computername MyServerName -database MyTestDB

Working Server: MyServerName

Table Name   Unused Index Name   Index Type     Index Enabled
----------   -----------------   ----------     -------------
TBL1         PK__TBL1            CLUSTERED      TRUE
TBL1         IDX_TBL1_COLC       NONCLUSTERED   TRUE
TBL2         IDX_TBL2_COLC       NONCLUSTERED   FALSE

The query identifies indexes in the sys.indexes catalog view that have no matching entry in sys.dm_db_index_usage_stats.  If an index exists in one (sys.indexes) and not the other (sys.dm_db_index_usage_stats), it has not been used since the instance last restarted; an entry is placed in sys.dm_db_index_usage_stats as soon as the index is used.

 

If no unused indexes are found, the following message is displayed:

Working Server: MyServerName
No Unused Indexes Found on Server: MyServerName, Database: MyTestDB

The function has two mandatory parameters: computername and database.  Computername is a string array that can accept several comma-separated server names.  The database is singular, but the code could be modified to check one, or all, databases on each server.  A single try/catch block wraps two checks: a ping of the server, and a test of whether the database is accessible.

The switch, ShowSQL, simply writes out the SQL statement for the user.

Summary:

In this example, I’ve shown how to use the SMO namespace, along with a SQL query, to find unused indexes in your database.  This can be helpful when troubleshooting a query that is performing poorly because the optimizer has chosen not to use an existing index within the execution plan.

Resources:

Find out more information here:

SMO Namespace:

https://msdn.microsoft.com/en-us/library/microsoft.sqlserver.management.smo(v=sql.120).aspx

SMO/server class:

https://msdn.microsoft.com/en-us/library/microsoft.sqlserver.management.smo.server(v=sql.120).aspx

DMV: sys.dm_db_index_usage_stats:

https://msdn.microsoft.com/en-us/library/ms188755(v=sql.110).aspx

PowerShell: Finding Database Sizes in SQL Server using SMO

Question:

How can I use PowerShell to find out my database sizes on multiple servers?

Answer:

We can use the SQLPS module and the SMO Server class to find out the sizes of all databases on each server instance.

Here is the code:

function Get-DatabaseSizes
{
    param
    (
        [string[]] $computername    # an array of computer names
    )

    Import-Module sqlps -DisableNameChecking | Out-Null

    foreach ($computer in $computername)    # handle each server in the array
    {
        Write-Output "Working Server: $computer"

        $srv = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Server -ArgumentList $computer -ErrorAction Stop

        # Where ... filters out system databases and makes sure the databases are accessible
        $databases = $srv.Databases.Where({ -not $_.IsSystemObject -and $_.IsAccessible })

        $databases |
            Select-Object Name,
                @{N = "Size(MB)";            E = { $_.Size }},
                @{N = "DataSpaceUsage(MB)";  E = { $_.DataSpaceUsage / 1024 }},
                @{N = "IndexSpaceUsage(MB)"; E = { $_.IndexSpaceUsage / 1024 }},
                @{N = "SpaceAvailable(MB)";  E = { $_.SpaceAvailable / 1024 }} |
            Format-Table -AutoSize
    }
}

Load the function into your environment, or module file, and then execute it:

PS> Get-DatabaseSizes -computername ServerOne, ServerTwo, … Server_N

Here is the output:

[Screenshot: formatted table of database sizes per server]