29.11.13

Free Friday: Training

There are a number of good sites that offer free training and free E-Books. The community around MS SQL is amazing! If I forget anyone, please add an update. I'd like to have a complete list if at all possible. Let's break it apart for a second.

Before I start, I want to make sure you all are aware of the PASS Chapters.

There are local chapters in your community that gather and learn as a group. If there isn't one, contact me or anyone at PASS. We can help give you the foundation you need to set one up.

There are Virtual Chapters that are free and not location specific. They are divided by the training they provide. There are 15 separate Virtual Chapters to choose from. This is a lot of free training going to waste if no one signs in!

Free video training? Are you the type that only learns with audio and visual stimulation? Are you just bored and there's nothing on TV or Netflix? Well... there are a number of good sites with great video archives!

Videos

This Technet section offers a number of great videos with pointed training. 70 videos if I counted correctly.

This is Brent Ozar and crew's site. They have a fantastic archive of their weekly broadcasts and other videos they've made just to help out the rest of us. While I'm here, I'd like to mention that they do have very reasonably priced higher level training that can be purchased... but the link will take you to the free side.

Sean and Jen McCown run this site. There is a whole collection of videos that are free to download or stream. They also have a rather funny webshow on Fridays at 2300 (11 PM) Central. It's a great way to unwind after a bad week.

PASS is the main hub for SQL Server training in America. Or at least the main one I've found. Once a year they have a summit that is rather pricey, but I've never heard one person come back saying it was a waste of time or money. They offer the previous year's training for free. You can purchase the current year if you wish. This is a great professional networking opportunity.

Similar to SQLPass, this is a large and free conference that gives great training.

RedGate offers a really good training site and conference that's been growing in popularity lately. This is another one where, if you look through, you can find archived videos.

SQL Saturday is an event done in many different cities worldwide. This is the step above your local user group. It’s a great place for new speakers to stretch their legs and for people to connect with other professionals all over the region. In OKC alone we had people coming in from all over the country. These are huge networking opportunities.

How about blogs in general?

SQL Server Central is a great hub where multiple blogs get posted. It's a great forum for people to gather and get answers in a timely fashion. That being said, don't risk production downtime waiting for an answer. People do sleep.

Below is a quick list of blogs in no particular order. These are all good places to hit on a regular basis. Of course I threw mine in there too. ^.^

Blogs


How about the interaction you'll be missing by just internet-stalking people and watching them in archives? Well, we do have webcasts for you!

Webcasts

Brent Ozar, Kendra Little, Jeremiah Peschka, Jes Schultz Borland, and newcomer Doug Lane make up this consulting crew, and they present a free webcast every Tuesday at 11:30 AM Central. It's a lunch break with some free training.

Sean and Jen McCown run the Midnight DBA webshow every Friday at 2300 (11 PM) Central. It's more laid back, and lately it's based a lot more around personal development and ethics than technical talk... but the technical side does spring up. Don't miss the preshow and post show, both offset by 30 minutes.

Pragmatic Works also hosts free training every Tuesday and Thursday at 11 AM Eastern. This could make for a busy Tuesday.





28.11.13

Decrypting Stored Procedures Thanksgiving Special: Quick Tips

Update: Thanks for your response Jon Gurgul! I have updated the files to reflect your updates. It's faster and cleaner now.

Happy Turkey Day! Today we're going to eat, lounge around... and pray we're not called in. Here's something I've found useful recently. You see a job calling a stored procedure and you don't know what it does. You attempt to script it out and view it. Encrypted? Well... that's in the way. So do we now start a trace and run it, hoping it doesn't cause duplicate data or truncate something you don't want it to? Eh, we can use this instead!
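Before reaching for the script, a quick sanity check (my addition, not part of the linked material): encrypted modules come back with a NULL definition, so you can list every encrypted object in the current database like so.

select o.name, o.type_desc
from sys.objects o
inner join sys.sql_modules m on m.object_id = o.object_id
where m.definition is null --the definition is NULL when the module was created WITH ENCRYPTION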

This is a script I found that was written by Jon Gurgul. The original article can be found here.

This is not my creation. All I did was build a step by step guide on how to do it in a more readable fashion.

The Word Document can be found here.

The .SQL file can be found here.

Something to note: case sensitivity is an issue. I haven't gone through to figure out what's not cased the same... but this works for most databases. (Or... you can just temporarily change the collation... Be warned, I'd make sure no production data, or any data at all, is going into it at that time. I'd request a scheduled outage, that or trace down the collation issue ^.^' )
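As a quick aside (again my addition): before chasing case-sensitivity errors, it's worth checking which collations are actually in play.

select SERVERPROPERTY('Collation') as ServerCollation,
DATABASEPROPERTYEX(DB_NAME(), 'Collation') as DatabaseCollation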

This can be insanely useful if you're taking over an organization where everything was encrypted and you couldn't get into it, until now.


I'm adding the links below where I found the information this was built from.






27.11.13

Wasted Space Per Database For Fill Factor: Quick Tip

As I keep looking for answers on fill factor... I keep finding interesting metrics to look into. Looking at my own systems, I've found 300GB of lost space. That could be huge. Just simple things to think about. My previous post explains some things I've learned about fill factors, pros vs. cons. If you have time, what's your highest number? What's your lost space as a whole?

select db.name as DatabaseName,
SUM(a.total_pages*8/1024) as SizeInMB,
SUM(((a.total_pages*fill_factor/100)-a.total_pages)*8/1024)*(-1) as SpaceInMBLostToFillFactor
from sys.dm_db_index_usage_stats stats
inner join sys.tables as tbl on stats.object_id = tbl.object_id
inner join sys.schemas as sch on tbl.schema_id = sch.schema_id
inner join sys.databases as db on stats.database_id = db.database_id
inner join sys.indexes as idx on stats.object_id = idx.object_id
and stats.index_id = idx.index_id
inner join sys.partitions as part on stats.object_id = part.object_id
and stats.index_id = part.index_id
INNER JOIN sys.allocation_units a ON part.partition_id = a.container_id
where fill_factor != 0
and fill_factor != 100
and db.name = DB_NAME()
group by db.name



26.11.13

Things I've Learned In Regards To Fill Factor: Quick Tips

I don't have much today... It's been a good week so far. I think I'm becoming a bit obsessed with this Fill Factor thing. This will be short... I swear. ^.^

Negatives to changing Fill Factor from 100 (also known as 0)

1) Backups are bigger and take longer to run.

Unlike unallocated free space, the empty space inside a partially filled page still gets backed up. This will cause backup time and size to increase.

2) Reads take longer.

When data is required, SQL Server reads the whole page, empty space included. So if data that could have fit in 10 pages takes 14, that's roughly 40% more reads (see the sketch at the end of this list).

3) More storage is required.

When pages aren't completely full, the same data takes more pages, which increases size on disk as well. If you're using SSDs, that's something you can actually measure as a cost.

4) More RAM is required.

As previously mentioned, SQL Server reads by the page. That means even the empty space is read into memory. Once there, it's handled just fine as far as processing goes, but the space is still used.

5) Maintenance of Fill Factors will increase.

This one is tough. I personally feel that you should maintain your fill factors anyway, though most can live with the out-of-sight, out-of-mind principle just fine. There are probably better places to target performance boosts first anyway.
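To put rough numbers on the read and space overhead from items 1 through 4, here's a back-of-the-envelope sketch; the page count and fill factor are hypothetical.

DECLARE @PagesAtFull int = 10 --pages the data needs at fill factor 100 (hypothetical)
DECLARE @FillFactor int = 70 --proposed fill factor
SELECT @PagesAtFull * 100.0 / @FillFactor as ApproxPagesNeeded --about 14.3 pages, roughly 40% more to read, back up, and cache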

Positives to changing Fill Factor to an appropriate number

1) Inserts and Updates can be quicker.

Now that you have free space in your pages, inserts and updates that change a row's place on the page can go in faster, without page splits.

2) Fragmentation can be lower.

Fewer page splits means less fragmentation. This can be a very good thing.

3) Lower fragmentation due to Fill Factor can improve overall performance.

If your inserts and updates are faster, that helps reduce locking. And since they no longer cause fragmentation, reads stay more sequential. There are some pretty heavy benefits.

4) Maintenance of Fragmentation will decrease.

If you fragment less often by having a lower fill factor, you don't have to rebuild or reorganize quite as often. This can help you a lot if you cannot rebuild your indexes online.

These are things I've found so far. A noticeable key word on the positives is "Appropriate".

Please take this with a grain of salt and sanity. As always, all advice depends on the environment. This is just something to think about.

25.11.13

Usage grouped by Database: Quick Tips

Quick script just to give you an idea of how your databases are being used. Mainly useful for exploring.

select db.name as DatabaseName,
sum(usage.user_seeks) as UserSeeks,
sum(usage.user_scans) as UserScans,
sum(usage.user_lookups) as UserLookups,
sum(usage.user_updates) as UserUpdates
from sys.dm_db_index_usage_stats usage
inner join sys.databases as db on db.database_id = usage.database_id
group by db.name

Hey look! Short script. Longer posts will start back up after Thanksgiving.

22.11.13

How many heap tables do you have?: Quick Tips

Nothing exciting, just a general script. How can you see what heaps you have lying around?

exec sp_MSforeachdb @command1 ='
use [?]
SELECT object_Name(i.object_id)
AS TableName, i.object_id,
Schema_Name(o.schema_id)
AS SchemaName, i.name, avg_fragmentation_in_percent, page_count
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, NULL) p --DB_ID() scopes the scan to the current [?] database; NULL here would rescan every database on each iteration
inner join  sys.objects as o on p.object_id = o.object_id
inner join sys.schemas as s on o.schema_id = s.schema_id
inner join sys.indexes as i on p.object_id = i.object_id
and p.index_id = i.index_id
where i.index_id = 0
and page_count > 1000
'


Adjust the page count to whatever works for your area. This is just for exploratory use.

21.11.13

Fill Factor and wasted space: Quick Tip

This is a little excessive, but here's a quick script to show you, per table, the fill factors that are not at 100 as well as how much unused space you have. It also generates a rebuild script; be aware of the options that are set.

select db.name as databasename, tbl.name as tablename, idx.name as indexname,
stats.index_id, fill_factor, last_user_update,user_seeks, user_scans,
user_lookups, user_updates, part.rows, part.data_compression_desc,
    SUM(a.total_pages) * 8 AS TotalSpaceKB,
    SUM(a.used_pages) * 8 AS UsedSpaceKB,
    (SUM(a.total_pages) - SUM(a.used_pages)) * 8 AS UnusedSpaceKB,
'ALTER INDEX [' + idx.name + '] ON [' + Schema_Name(sch.schema_id) +
'].[' + object_Name(idx.object_id) +
'] REBUILD PARTITION = ALL WITH ( FILLFACTOR = 100, PAD_INDEX  = OFF,
STATISTICS_NORECOMPUTE  = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS
= ON, ONLINE = ON, SORT_IN_TEMPDB = ON )' as Rebuild_Script
from sys.dm_db_index_usage_stats stats
inner join sys.tables as tbl on stats.object_id = tbl.object_id
inner join sys.schemas as sch on tbl.schema_id = sch.schema_id
inner join sys.databases as db on stats.database_id = db.database_id
inner join sys.indexes as idx on stats.object_id = idx.object_id
and stats.index_id = idx.index_id
inner join sys.partitions as part on stats.object_id = part.object_id
and stats.index_id = part.index_id
INNER JOIN sys.allocation_units a ON part.partition_id = a.container_id
where fill_factor != 0
and
fill_factor != 100
group by fill_factor, db.name, tbl.name, idx.name, stats.index_id, last_user_update, user_seeks, user_scans,
user_lookups, user_updates, part.rows, part.data_compression_desc, sch.schema_id, idx.object_id

20.11.13

Select * from * : Quick Tips

What? You want to see all of your data? Well do I have the script for you! New, Improved, Extra Colorful! Here we have the one script to rule them all!!... I jest. Please never run this in production.


--all databases
exec sp_MSforeachdb @command1 = '
use [?]
begin
declare @dbname as varchar (128)
set @dbname = DB_NAME()
SELECT ''SELECT * FROM ['' + @dbname + ''].['' + SCH.name + ''].['' + TBL.name + '']'' as selectall
from sys.tables TBL
inner join sys.schemas as SCH on sch.schema_id = TBL.schema_id
end
'

--single database
SELECT 'SELECT * FROM [' + SCH.name + '].[' + TBL.name + ']' as selectall
from sys.tables TBL
inner join sys.schemas as SCH on sch.schema_id = TBL.schema_id

I've found an actual use for this... I had to look pretty hard though.

I use the version below to quickly fill out some DMVs, primarily dm_db_index_usage_stats. Until there's some action on an index, it has no usage to record.


--select top 1 for all databases

exec sp_MSforeachdb @command1 = '
use [?]
begin
declare @dbname as varchar (128)
set @dbname = DB_NAME()
SELECT ''SELECT TOP 1 * FROM ['' + @dbname + ''].['' + SCH.name + ''].['' + TBL.name + '']'' as selectall
from sys.tables TBL
inner join sys.schemas as SCH on sch.schema_id = TBL.schema_id
end
'

Regardless, you can now select * from *. Have fun!

19.11.13

Fill Factor, Why Do You Taunt Me? Part 3

Let's look at something a bit more in depth. Let me start by saying, "Please don't arbitrarily change any default setting without understanding it fully, meaning how it's used, what it does, what impacts will exist, and how much it could hurt." Now! For something interesting.

First we need to create our table.

I create most of my tables in a DBA-only database. I've worked in places where you weren't allowed to create any databases for any reason, ever, without 3 levels of permission, a signed letter with a blood seal, and two recommendations from two separate Illuminati knights. OK, maybe it wasn't that bad... but it definitely felt like it. Place this wherever you want, just modify the scripts accordingly.
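The linked script is the authoritative version; purely as a hedged sketch of the shape of it, assuming column names that line up with the population and query steps below, the table could look something like this.

CREATE TABLE master.dbo.FillFactorStatistics (
CaptureDate datetime NOT NULL DEFAULT GETDATE(),
DatabaseName sysname NOT NULL,
SchemaName sysname NOT NULL,
TableName sysname NOT NULL,
IndexName sysname NULL, --NULL for heaps
avg_record_size_in_bytes float NULL,
max_record_size_in_bytes int NULL,
page_count bigint NULL
)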

Now we need to fill out the table.

This will take some time. Get some coffee, find a new book, plant a tree. I'm hoping to find a faster way to get this same data, but as it stands... pulling this information off of a 5TB+ database is a day's event. The good thing is that this data will be good for quite some time.

Note: This is something I would suggest addressing on a yearly basis in a slow moving environment, or monthly in a volatile one.
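Again as a hedged sketch (the linked script may differ): DETAILED mode reads every page, which is exactly why a multi-terabyte database takes a day; SAMPLED is a rougher but much faster alternative.

EXEC sp_MSforeachdb @command1 = '
use [?]
INSERT INTO master.dbo.FillFactorStatistics
(DatabaseName, SchemaName, TableName, IndexName, avg_record_size_in_bytes, max_record_size_in_bytes, page_count)
SELECT DB_NAME(), s.name, o.name, i.name,
p.avg_record_size_in_bytes, p.max_record_size_in_bytes, p.page_count
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, ''DETAILED'') p
inner join sys.objects o on p.object_id = o.object_id
inner join sys.schemas s on o.schema_id = s.schema_id
inner join sys.indexes i on p.object_id = i.object_id and p.index_id = i.index_id
WHERE o.is_ms_shipped = 0
and p.index_level = 0 --leaf level only
and p.alloc_unit_type_desc = ''IN_ROW_DATA''
'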

Now for some fun. We need to query our table.

Note: I realize that a page has the ability to hold up to 8096 bytes if you're using multiple records. This includes other overheads I did not calculate. 8060 was used to leave that 36 byte area just for overhead.
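Here's the kind of query I mean, as a sketch against the hypothetical table above; the 8060 figure is from the note. For a 3908-byte max record it returns 2 records per page and suggests 51% to guarantee room for one more record, which is the example discussed below.

select DatabaseName, SchemaName, TableName, IndexName,
max_record_size_in_bytes,
FLOOR(8060.0 / max_record_size_in_bytes) as MaxRecordsPerPage,
100 - CEILING(max_record_size_in_bytes * 100.0 / 8060) as FillFactorForOneFreeRecord
from master.dbo.FillFactorStatistics
where max_record_size_in_bytes > 0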

You should only be concerned with the tables that you know to be active. I will add a way to track that in another post that will marry this concept with a true usage concept. You may find one that looks a bit off.

If your max record size is 3908, you are only going to get two records on a page. If you really want every page to keep room open for that second record, you need a fill factor of 51%. That's a good time to just be happy with page splits and let it go.

If you use the query built off the AVG, remember that an average item sits at 50%. It'll catch... well, roughly half. Take that into consideration.

Note: It could also help you identify if you have one extremely large record and a bunch of tiny ones. This could be something to investigate.

--select * from master.dbo.FillFactorStatistics

It's interesting to me at least.

Kendra Little has a wonderful post on fill factors.

There's a white paper on them by Ken Lassesen.

Karel Coenye has a Technet video with more detail.

18.11.13

Fill Factor, Why Do You Taunt Me? Part 2

So Friday we were discussing the idea of an idea. I think... or an idea of an idea's idea. What she said...

Let's answer the easy question. How far are your fill factors off from the default settings?

First we want to look at what our default setting actually is.

EXEC SP_CONFIGURE 'show advanced options', 1
GO
RECONFIGURE WITH OVERRIDE
GO
EXEC SP_CONFIGURE

0 and 100 are the same thing. I strongly advise that you do not use a global fill factor setting other than 100%. It may take more work to address the indexes and tables on a case-by-case basis... but doing it right has its merits.

 I've got a little script that builds out a table for easy viewing. This is also a part of the larger automated fill factor project I'm working on.

The script can be found here.
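Since the script lives behind a link, here's a hedged sketch of roughly what it does; the table name comes from the query below, everything else is assumed.

IF OBJECT_ID('master.dbo.DBA_Reset_Fragmentation_Levels') IS NULL
CREATE TABLE master.dbo.DBA_Reset_Fragmentation_Levels (
DatabaseName sysname NOT NULL,
SchemaName sysname NOT NULL,
TableName sysname NOT NULL,
IndexName sysname NULL,
fill_factor tinyint NOT NULL
)

EXEC sp_MSforeachdb @command1 = '
use [?]
INSERT INTO master.dbo.DBA_Reset_Fragmentation_Levels
SELECT DB_NAME(), s.name, t.name, i.name, i.fill_factor
FROM sys.indexes i
inner join sys.tables t on i.object_id = t.object_id
inner join sys.schemas s on t.schema_id = s.schema_id
WHERE i.fill_factor NOT IN (0, 100)
'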

Now we have a table showing us all of our fill factors that are not at the default.

select DatabaseName, SchemaName, TableName, IndexName, fill_factor
from [master].[dbo].[DBA_Reset_Fragmentation_Levels]
order by DatabaseName, fill_factor asc

Now, depending on your environment, you could have zero results, or thousands.

This is a script I found some time ago; I do not know the author. If anyone knows who wrote this, I'd like to give the correct credit.

I will try to better define what constitutes a user_update, but for now this still gives good information: you can see which tables take the most updates. Those may be better candidates for lower fill factors than others. The one clearly defined thing is that when the updates are near zero, why do you need a fill factor of 50%? A hedged stand-in for the script is below.
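Since I can't reproduce the original script here, this stand-in shows the same idea: update volume next to fill factor, so you can spot indexes padded to 50% that almost never change.

select OBJECT_SCHEMA_NAME(i.object_id) as SchemaName,
OBJECT_NAME(i.object_id) as TableName,
i.name as IndexName,
i.fill_factor,
s.user_updates
from sys.indexes i
left join sys.dm_db_index_usage_stats s on s.object_id = i.object_id
and s.index_id = i.index_id and s.database_id = DB_ID()
where i.fill_factor NOT IN (0, 100)
order by s.user_updates desc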

I know it's not spring yet, but maybe a little house cleaning is in order?

15.11.13

Fill Factor, Why Do You Taunt Me? Part 1

Like all the arbitrary numbers we hear in the SQL community, such as 30% fragmentation and 1000 pages, these come more as a best practice than a rule. There are so many sites talking about this, it must be an interesting or highly misunderstood idea. I don't know how deep I can go, but over the next few weeks I plan on adding as much information to this as I can.

On highly updated tables, a lower fill factor can be beneficial, but it's a dangerous game to play. If you keep your pages at 70% capacity, that's 30% lost disk space, 30% lost memory space, and higher read costs when you retrieve data. SQL Server retrieves data one page at a time; that wasted space is retrieved too. It does help you update and insert data into those pages more cleanly, but there's a point where it's not worth it. How often and how many updates do you do? Is your clustered index on an auto-incrementing ID? How big are your records for that table?

These are all questions you should be asking. There are many more as well. As a rule of thumb, don't change the server-level default fill factor. A server-level fill factor impacts system databases, temp tables, and other databases besides the one you're working with.


My goals are to cover a few key points:

How do we find the fill factor of all our tables? (Or at least isolate those that are not on the default setting.)
What fill factor do we truly need? (if we need it at all)
What are some things to consider before working with our fill factor?

Hopefully I can cover this in more detail as well.

This will be updated some and referenced at the end. I'm adding in all the things I've learned over the course of tracking down fill factors.

A page is 8192 Bytes.
The maximum column size is 8000.
The maximum record size is 8060.
The maximum record size for a table using sparse columns is 8018.
The maximum usable page size is 8096.
96 is reserved for headers.
Row_Overflow_Data - when a record no longer fits in its page, variable-length columns get pushed off to overflow pages.
Row_Overflow_Data uses a 24 Byte pointer.

Update 11/5/2013 10:43 CDT: Fill Factor only applies to the index's leaf pages.
Update 11/5/2013 13:46 CDT: User_Updates from dm_db_index_usage_stats does not differentiate bulk updates from single updates.

Knowing how pages work greatly improves how you set your fill factors. There may be points where 94% will actually help just as much as 90%. This can save space and reduce read times.
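A worked example with the numbers above (the 400-byte record size is hypothetical): 400-byte records fit 20 to a page. A 94% fill factor leaves room for roughly one more record per page, and 90% for roughly two, so if one insert per page is all you need between rebuilds, the higher number wins space back for free.

DECLARE @UsablePage int = 8096 --8192-byte page minus the 96-byte header
DECLARE @RecordSize int = 400 --hypothetical
SELECT FLOOR(@UsablePage * 1.00 / @RecordSize) as RecordsPerPageFull, --20
FLOOR(@UsablePage * 0.94 / @RecordSize) as RecordsAt94, --19, one free slot
FLOOR(@UsablePage * 0.90 / @RecordSize) as RecordsAt90 --18, two free slots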


14.11.13

Baseline Maintenance Statistics Job For Indexes. (say that three times fast)

This is a project I've been working on for a while. The information is all there... how it's queried may need some work. The general idea is that no one has a full year's view of how their indexes are performing. If you plan on working somewhere for the rest of your life... make that life easier. If you're taking over a system because someone found it under their desk... or just a new one coming on board, this will make at least some of your life easier.

This job creates two tables in master. Testing this on a multi-schema 3.4TB database with hundreds of tables, they only grew to 50MB combined. That being said, it should be relatively easy to re-target this to a DBA or other database if you wish to isolate it. It will delete any data over 370 days old. This allows you to get ranged data. Think! You may have that one major report that runs once a month... once a quarter! If you delete an index that looks like it's never used, one day you may find out it was.

This job assumes that you have a privileged account named 'sa'. That is easy to change if you have renamed it or disabled it altogether. Side note: it's a good idea to rename or disable sa and use something else. This job also assumes that you have the default category of '[Uncategorized (Local)]', which is perfectly fine.

The first query produces a list of all indexes that have shown no reads in whatever time frame you define. If you let this run long enough, it's a good way to ensure that you don't have wasted space and slower inserts, updates, and deletes in your database for no good reason.

The second query produces a list of suggestions. This is meant only as a suggestion. Please do not blindly follow this. There are some wonderful places full of documentation on how to handle indexes. This is just to log information for prolonged use.

The final thing to note... the job is created without a schedule. Make sure you assign it a daily schedule during off-peak hours (best right after peak hours have ended). Since this runs daily, even if you reboot every Saturday, you'll still have six days' worth of data each week to use for a baseline (seven if you reboot after).

You will have to split up the results some; I advise putting them in Excel split by date of last restart.
You can get the file from here or copy from below.
/*************************************************************
Created by Bill Barnes
Script Date: 06/19/2013 09:24
for non commercial and testing purposes.
MaintenanceStatistics was created for SQL Server 2008 SP1 or higher
This job creates two tables located in master.
master.dbo.DBA_UnderUsedIndexes
master.dbo.DBA_MissingIndexes
The job created is MaintenanceStatistics.
This job pulls valuable information into the two above tables.
--Remember to set a schedule for MaintenanceStatistics. It does
--not come with one by default.
--be careful to target the correct category. [Uncategorized (Local)] is common.
--make this the owner of the job. SA is common. It's best to change this name.
*************************************************************/
/*************************************************************
--This query will tell you what indexes have shown zero reads over a 370 day window.
--The purpose is to ensure you're only removing indexes that have never been used.
--Please do not run this in production and make a decision based on a day's data.
--This job is meant to run at all times, once a day, and collect a years data for strong
--indexing base lines.
select distinct b.DataBaseName, b.TableName, b.Indexname  as IndexName
FROM master.dbo.DBA_UnderUsedIndexes a
inner join(select LastServicesRestart, DataBaseName, TableName, indexname, sum(reads) as reads
FROM master.dbo.DBA_UnderUsedIndexes
group by LastServicesRestart, DataBaseName, TableName, indexname HAVING SUM(reads) = 0) b
on a.indexname = b.IndexName
inner join (select distinct LastServicesRestart from master.dbo.DBA_UnderUsedIndexes) c
on c.LastServicesRestart = a.LastServicesRestart
group by b.DataBaseName, b.TableName, b.Indexname, c.LastServicesRestart
--This query is meant to produce a list of what indexes may need some work.
--Never trust exactly what this shows. Always look deeper and use your best judgement.
select Old.CurrentDate, Dist.sqlserver_start_time, statement,
Impact, [Average Total Cost],[% Reduction of Cost],
[Missed Opportunities], [Equality Columns], [Inequality Columns],
[Included Columns] from master.dbo.DBA_MissingIndexes Orig
inner join (select distinct sqlserver_start_time from master.dbo.DBA_MissingIndexes) Dist
on Dist.sqlserver_start_time = Orig.sqlserver_start_time
inner join (select distinct sqlserver_start_time, Max(CurrentTime) as CurrentDate FROM master.dbo.DBA_MissingIndexes
group by sqlserver_start_time) Old
on Orig.CurrentTime = Old.CurrentDate
--where [% Reduction of Cost] > 50
Order by sqlserver_start_time, Impact desc
*************************************************************/

USE [master]
GO

/****** Object:  Table [dbo].[DBA_UnderUsedIndexes]******/

SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

CREATE TABLE [dbo].[DBA_UnderUsedIndexes](
        [CurrentTime] [datetime] NOT NULL,
        [LastServicesRestart] [datetime] NOT NULL,
        [DataBaseName] [nvarchar](128) NULL,
        [TableName] [sysname] NOT NULL,
        [IndexName] [sysname] NULL,
        [reads] [bigint] NULL,
        [writes] [bigint] NOT NULL,
        [rows] [bigint] NULL,
        [reads_per_write] [numeric](38, 17) NULL
) ON [PRIMARY]

USE [master]
GO

/****** Object:  Table [dbo].[DBA_MissingIndexes]******/

SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

CREATE TABLE [dbo].[DBA_MissingIndexes](
        [CurrentTime] [datetime] NOT NULL,
        [sqlserver_start_time] [datetime] NOT NULL,
        [statement] nvarchar(200) NULL,
        [Impact] bigint NULL,
        [Average Total Cost] numeric(10,2) NULL,
        [% Reduction of Cost] int NULL,
        [Missed Opportunities] int NULL,
        [Equality Columns] nvarchar(2000) NULL,
        [Inequality Columns] nvarchar(2000) NULL,
        [Included Columns] nvarchar(2000) NULL
) ON [PRIMARY]
GO

USE [msdb]
GO

/****** Object:  Job [MaintenanceStatistics]******/

BEGIN TRANSACTION
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0

/****** Object:  JobCategory [[Uncategorized (Local)]]]******/

IF NOT EXISTS (SELECT name FROM msdb.dbo.syscategories WHERE name=N'[Uncategorized (Local)]' AND category_class=1)
--be careful to target the correct category. [Uncategorized (Local)] is common.
BEGIN
EXEC @ReturnCode = msdb.dbo.sp_add_category @class=N'JOB', @type=N'LOCAL', @name=N'[Uncategorized (Local)]'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
END

DECLARE @jobId BINARY(16)
EXEC @ReturnCode =  msdb.dbo.sp_add_job @job_name=N'MaintenanceStatistics',
                @enabled=1,
                @notify_level_eventlog=0,
                @notify_level_email=0,
                @notify_level_netsend=0,
                @notify_level_page=0,
                @delete_level=0,
                @description=N'No description available.',
                @category_name=N'[Uncategorized (Local)]',
                @owner_login_name=N'sa', @job_id = @jobId OUTPUT
                --make this the owner of the job. SA is common. It's best to change this name.
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

/****** Object:  Step [Missing Indexes]******/

EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'Missing Indexes',
                @step_id=1,
                @cmdexec_success_code=0,
                @on_success_action=3,
                @on_success_step_id=0,
                @on_fail_action=3,
                @on_fail_step_id=0,
                @retry_attempts=2,
                @retry_interval=10,
                @os_run_priority=0, @subsystem=N'TSQL',
                @command=N'
delete from master.dbo.DBA_MissingIndexes
where CurrentTime < GETDATE()- 370
GO

                INSERT INTO master.dbo.DBA_MissingIndexes
                SELECT
        current_timestamp as CurrentTime,
        m.sqlserver_start_time,
        id.statement,
        cast(gs.avg_total_user_cost * gs.avg_user_impact * ( gs.user_seeks + gs.user_scans )as bigint) AS Impact,
        cast(gs.avg_total_user_cost as numeric(10,2)) as [Average Total Cost],
        cast(gs.avg_user_impact as int) as [% Reduction of Cost],
        gs.user_seeks + gs.user_scans as [Missed Opportunities],
        id.equality_columns as [Equality Columns],
        id.inequality_columns as [Inequality Columns],
        id.included_columns as [Included Columns]
FROM sys.dm_db_missing_index_group_stats AS gs
JOIN sys.dm_db_missing_index_groups AS ig ON gs.group_handle = ig.index_group_handle
JOIN sys.dm_db_missing_index_details AS id ON ig.index_handle = id.index_handle
join sys.dm_os_sys_info m on m.sqlserver_start_time = sqlserver_start_time
ORDER BY Impact desc',
                @database_name=N'master',
                @flags=0
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

/****** Object:  Step [Underused Indexes]******/

EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'Underused Indexes',
                @step_id=2,
                @cmdexec_success_code=0,
                @on_success_action=1,
                @on_success_step_id=0,
                @on_fail_action=2,
                @on_fail_step_id=0,
                @retry_attempts=2,
                @retry_interval=10,
                @os_run_priority=0, @subsystem=N'TSQL',
                @command=N'

delete from master.dbo.DBA_UnderUsedIndexes
where CurrentTime < GETDATE()- 370

exec sp_MSforeachdb @command1 = ''
use [?]
INSERT INTO master.dbo.DBA_UnderUsedIndexes
SELECT current_timestamp as CurrentTime,
m.sqlserver_start_time as LastServicesRestart
, DB_NAME() AS DataBaseName
, o.name as TableName
, i.name as IndexName
, reads=user_seeks + user_scans + user_lookups  
, writes =  user_updates  
, rows = (SELECT SUM(p.rows) FROM sys.partitions p WHERE p.index_id = s.index_id AND s.object_id = p.object_id)
, CASE
WHEN s.user_updates < 1 THEN 100
ELSE 1.00 * (s.user_seeks + s.user_scans + s.user_lookups) / s.user_updates
END AS reads_per_write
FROM sys.dm_db_index_usage_stats s
INNER JOIN sys.indexes i ON i.index_id = s.index_id AND s.object_id = i.object_id
INNER JOIN sys.objects o on s.object_id = o.object_id
INNER JOIN sys.schemas c on o.schema_id = c.schema_id
inner join sys.dm_os_sys_info m on m.sqlserver_start_time = sqlserver_start_time
WHERE OBJECTPROPERTY(s.object_id,''''IsUserTable'''') = 1
AND s.database_id = DB_ID()
AND i.type_desc = ''''nonclustered''''
AND i.is_primary_key = 0
AND i.is_unique_constraint = 0
AND (SELECT SUM(p.rows) FROM sys.partitions p WHERE p.index_id = s.index_id AND s.object_id = p.object_id) > 10000
ORDER BY reads, writes desc''',
                @database_name=N'master',
                @flags=0
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
COMMIT TRANSACTION
GOTO EndSave
QuitWithRollback:
    IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:
GO

13.11.13

Can't We All Just Get Along?: Personal Development

This isn't a normal post. This one is directed at newer people, those who feel pushed around a bit at work, or those in general who do not feel they are being heard or trusted as much as they should be. OK, these are all personal opinions; take them at face value. This has worked for me over time.

I'm a bit of an introvert, so to speak. I don't handle parties or lots of people well unless I know them well. This blog and the attempt to start speaking are partially to break some of that. The things I've learned over time...

1) Don't cross your arms, fiddle with things, put your hands in your pockets, or lean in behind a podium.

This one seems simple, but watch yourself day to day, how you interact... the more you tighten up, the less sure of yourself you seem. Before speaking in front of people, or just to start your day, stretch out, lean back, spread out, and move around a little. The more open you feel, the more natural you will come off.

2) Avoid weak words such as: Might, Should, Could, Maybe, Supposed To, I Think, etc.

If you've tested that piece of code, made that souffle 1000 times before, rebuilt 100 engines... why would you start off with, "This code change might fix our problem"? You know it fixes the problem. You should be proud you're providing something useful. If they accept it quicker, you can move on to something else quicker as well. Removing weak words helps remove doubt. This isn't to say that when you're not sure, you should declare from the mountain tops that it'll work... but still watch your speech. A small change over time will help with confidence and control.

3) Don't look away.

It's hard to judge what a person is thinking or saying when they're staring at the water cooler or the floor. If you can create that connection, do so. Looking someone in the eye can instill confidence and help move a conversation from someone just stopping by with more work to someone actually talking to you.

4) Smile.

A smile, even on a bad day, makes the day better. This isn't an absolute... but the more you smile, the happier you are. The happier you are, the easier the people around you are to deal with. If you smile at the people you work with on a daily basis, you'll seem more approachable. That's where we get into #5. (FYI, Sheldon's smile is better left as a blank face.)

5) Get along.

OK, I know there's always been a split in IT. We have our SAN guys who always hide our disks or give us RAID 5 when we clearly asked for RAID 10 on the new SSDs... the developers who keep asking for SA and trying to sneak code changes in without us looking... the SAs who decide to reboot your server and continually wonder why SQL is taking all that RAM?! I bet it's a memory leak... and our help desk who keeps patching people through while we're busy, without even opening a ticket... and the DBAs (we can't be left out) who continue to lock everything down like a dragon sitting on a pile of gold, asking for more resources and never letting anyone touch their precious.

Simply put, we're all human, we all have a job to do, and we're not here to make other people's lives miserable. I understand that in some shops... that's how it's always been. It can change with a single person making the effort. Maybe the SA has found a problem... maybe the SAN admin has some spare disks you can talk him into... maybe the developer found a major bug and is willing to help you isolate it a lot faster. I'm not saying give up control over your area... but working with your teammates will make life better and the quality of your business better as a whole. You might screw up one day and need that SA, DEV, or SAN admin to help you fix it.

6) Don't deal in absolutes

All this said... nothing should be dealt with as an absolute (see what I did there?). If you're unsure, don't present it as fact. If you are looking at someone, remember there's a point where staring someone down gets a bit creepy. Smiling is great, but faking it doesn't seem to help much.

This can go on and on rather easily. The simple note here is: be happier, speak up, get along with people, and work together. These small changes can make a place to work become a home. If I am going to spend a third of my life, over half of my waking life, working somewhere? I'm going to make sure I actually like the place.

12.11.13

Index Fragmentation : Quick Tip

Fragmentation can cause performance issues to the point of queries not resolving at all. Your normally scheduled maintenance should be clearing this up for you... is it? Here's a simple script to show you how your index fragmentation is doing at a given point in time. The idea behind generating a script instead of fixing the problem for you is that you control what happens when. It's not a good idea to defrag the primary table your entire business uses in the middle of the day.

As a reminder, online operations still cause locking... though not as much as offline operations. Take care when using any new script from any source.

This can be downloaded here.

/*************************************************************
Created by Bill Barnes
Script Date: 06/19/2013 09:24
This script's sole purpose is to give a point in time reference
of how your indexes are doing and a quick script to repair them.
Utilize this as a base for any other function you'd like to perform.
*************************************************************/
SELECT object_Name(i.object_id)
AS TableName, i.object_id,
Schema_Name(o.schema_id)
AS SchemaName, i.name, avg_fragmentation_in_percent, page_count, partition_number, i.index_id,
'ALTER INDEX [' + i.name + '] ON [' + Schema_Name(o.schema_id) + '].[' + object_Name(i.object_id) +
 '] REBUILD WITH (FILLFACTOR = 100, ONLINE = ON)' as Rebuild_Script
 --You can remove the fillfactor option if you wish to use the setting applied to index.
 --online operations will only work with enterprise. This does not currently take into account partitions.
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, NULL) p --DB_ID() scopes this to the current database; with NULL, object_ids from other databases can mismatch the joins below
inner join  sys.objects as o on p.object_id = o.object_id
inner join sys.schemas as s on o.schema_id = s.schema_id
inner join sys.indexes as i on p.object_id = i.object_id
and p.index_id = i.index_id
where i.index_id > 0 --This is in place to ignore Heap tables.
and avg_fragmentation_in_percent > 5 --feel free to change this to any other number. This number is low for real world use.
-- I would suggest a fragmentation of 30% just because this is for point in time use. This is not a replacement for a maintenance job.
and page_count > 20 --feel free to change this to any other number. This number is low for real world use.
-- I would suggest a page count of 1000 for normal databases. Adjust to your environment.
and o.schema_id <> 4


Creative Commons License
Index Frag by Bill Barnes is licensed under a Creative Commons Attribution 3.0 Unported License.


11.11.13

Floating Fill Factors, Automation at work!

A while back I saw a wonderful article on BrentOzar.com by Kendra Little. In the comments, Aaron Morelli mentioned that he had a proprietary script that manages fill factor for his tables. I'm sure his is better built, but this is a work in progress that I plan to expand on. This is the first thing I've created that I'm releasing to more than the 5 people I work with... Please be gentle ^.^;

I am working on a project that will clear up some fragmentation issues over time. Right now this script is in beta 0.001 ½. The idea is that it adjusts your fill factor down by 5% after you've had 5 days within a 7-day window that show over 30% fragmentation and more than 1000 pages. This can be modified, of course. Again, this is still in the beginning stages and needs a lot of love and care!
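The linked scripts are the real release; as a hedged sketch of the rule just described, with a hypothetical daily-snapshot table standing in for whatever the scripts actually record, the core check looks something like this.

;WITH RecentFragmentation AS (
SELECT DatabaseName, TableName, IndexName, COUNT(*) as BadDays
FROM master.dbo.DBA_FragmentationHistory --hypothetical daily snapshot table
WHERE CaptureDate >= DATEADD(DAY, -7, GETDATE())
AND avg_fragmentation_in_percent > 30
AND page_count > 1000
GROUP BY DatabaseName, TableName, IndexName
)
SELECT DatabaseName, TableName, IndexName,
'lower fill factor by 5' as SuggestedAction
FROM RecentFragmentation
WHERE BadDays >= 5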

I will be working on cleaning up the code while making it a bit more user friendly. Any suggestions and feedback are welcome.

Things I plan to add in the future:

1) Make this a stored procedure.
2) Allow for the minimum fill factor to be modified.
3) Allow for this information to be E-Mailed instead of automatically adjusted for those less trusting.
4) More Dynamic to adjust for various features(i.e. partition based work, online operations)
5) Partition level rebuilds and control (2014 is supposed to allow partition based rebuilds online... YAY!)
6) Flowers and Penguins... (because who doesn't love Penguins!?!?!)

This is a two part code release. Part one will change all indexes to 100 fill factor (back to default). Part two will be the automatic fill factor adjustments.

Please, please, please don't run this in production. It's nowhere near official release. It's barely clean enough to be up here. ^.^

Auto_Adjust_FillFactor is here.
Set_Fragmentation_To_100 is here.



Creative Commons License
Auto Adjust Fragmentation by Bill Barnes is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Creative Commons License
Set All Fragmentation To 100 by Bill Barnes is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

7.11.13

Do your Foreign Keys Hate You?

We all love our databases. We trust everything they do! How could we not? It's our baby... well, probably not. How do we know when one is sneaking around behind our backs, asking questions it shouldn't and refusing to follow the plans? It's good to occasionally check up on certain aspects of your database. The Ozar family has a wonderful script called sp_Blitz that does a fantastic job of identifying some of the more common database issues and gives good general information.

All this is, is an easy way to have a script build a script for you to clear up what problems you may have. Your mileage may vary, all sales final, returns not permitted, penguins accepted as payment...

If you're really curious what all this entails, there are quite a few sites with good information about these evil foreign keys we just cannot trust.

This is just a basic overview. When a key is trusted, SQL Server knows every existing row already complies, so it doesn't have to re-check the other table; it just trusts it. When it's not trusted, that guarantee is gone and you're paying for the checking work again.

You could have had to disable that relationship at some point to remove a bad record, add in new data, or simply because you were curious what happens when you disable it. An example of that cycle is below.
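For reference, this is how a key typically loses and regains trust; the table and constraint names here are hypothetical.

ALTER TABLE dbo.Orders NOCHECK CONSTRAINT FK_Orders_Customers --disabled for a bulk load or cleanup
--... load or fix data ...
ALTER TABLE dbo.Orders WITH CHECK CHECK CONSTRAINT FK_Orders_Customers --revalidates existing rows and restores trust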

Below is a script that will isolate which foreign keys are not trusted. It also generates a basic script that will make each one trusted again if everything matches up. It will error if your constraint is unable to be trusted.

exec sp_MSforeachdb @command1 ='
use [?]
declare @DBName VarChar(300)
set @DBName = (select ''use ['' + DB_Name() + '']'') --brackets protect database names with spaces or dashes
select s.name as SchemaName, o.name as TableName, i.name as IndexName, o.object_id as Item,
@DBName + ''; '' + ''ALTER TABLE ['' + s.name + ''].['' + o.name + ''] WITH CHECK CHECK CONSTRAINT ['' + i.name + '']'' as Repair_Script
from sys.foreign_keys i
inner join sys.objects o on i.parent_object_id = o.object_id
inner join sys.schemas s on o.schema_id = s.schema_id
where i.is_not_trusted = 1 and i.is_not_for_replication = 0
'



6.11.13

How to find empty tables in SQL Server : Quick Tip

These quick tips are short and sweet. They are nothing more than an explanation and some code. If you have questions or want to see something more, feel free to drop a line.

This came from a need on a project I was working on. We wanted to find all the tables that were empty and didn't have a primary key. We copied a database for testing and truncated everything; we wanted to know what was being repopulated. This may not be the most beautiful way of doing this, but it works.

select distinct obj.name as TableName, obj.object_id,
part.rows as RowCounts from sys.tables tbl
inner join sys.partitions as part on tbl.object_id = part.object_id
inner join sys.objects as obj on tbl.object_id = obj.object_id --join on object_id; joining on name can cross-match tables that share a name across schemas
inner join sys.indexes as idx on obj.object_id  = idx.object_id
where rows = 0
and is_primary_key = 0
and idx.type <= 1




SQL Saturday Dallas

I've been to five SQL Saturdays so far and this was my first year to attend the Dallas conference. There were a lot of good presentations and good speakers present. The venue was well placed and the rooms were nice. The only issues I saw were that the tables were set too far out, limiting the walking path; the building has no cell reception (Verizon); and the internet that was available didn't allow remote desktop for those of us still working.

All in all, those are very minor complaints. The event itself turned out really well. There was a sizable group of people new to SQL Saturdays. That's a great thing to see. Our community is growing all the time. I saw Shawn Weisfeld there this year as well (Hi Shawn!). He's the founder of Usergroup.tv. I've seen him around the SQL Saturday community more than a few times. It's great to see faces often enough at yearly events (per location) to actually recognize them.

I think, on a personal note, something is missing in this world now: community. We see day-to-day rushing, competing, trying to get to the top... and seldom do we see people helping out. This entire community does just that. We come together, build training, answer questions, watch #sqlhelp, and see how we can better the community. I've not yet found a single A-list person in this group who has refused to answer a quick question or help someone find self-help. ^.^

Originally this blog was opened to set up 101-style training (basically the questions I'm asked, where I wanted to document the answers) and to document my training towards the MCM. Given that the MCM is dead now... I took a break from the idea of blogging. I think that was a mistake. I'm making this my new official first blog post. I am still working towards learning PowerShell, SSIS, proper performance tuning, and overall how to be a good admin.

I think this will be a good start.