
23.9.16

I'm Back!

I've been gone... wow. Since March 31st of this year?! Let's fix that.

So what has been holding me back? 

Kids? I have three! 2, 15, 18. One wants to drive, one wants to move out into the world, the other wants to rule the world. 

Wife? She's always there for me... always... there... always.

Work? We get short breaks on the weekends... I guess. Unless we're called in, or working, or...

Events? SQL Saturdays, Summit coming up, User group meetings...

Training / School? This is the training that never ends. Yes, it goes on and on and on and...

Well, let's get a little more honest here. What's been holding me back is me. It's time to drop the excuses and get moving again! I've slowed down too much.

My kids are supportive. They give me time to study and are really good at being quiet when I'm taking tests and whatnot. That's a lot to ask for already. My 2 year old will even happily sit in my lap and watch me work as long as Magic School Bus is playing somewhere. This kid loves her some Magic School Bus.

My wife is always there to help make sure the house gets taken care of and that I get time to work. Life happens and it's not always perfect, but there is a large amount of effort there. I truly couldn't appreciate it more.

Work is actually rather great. I get time to study and learn new things. We're making fantastic improvements in how we grow and work our business. We all get along here in IT. No one tries to pull rank on anyone, and everything just works. (Weird, right?)

We have so much help from the community it's awe inspiring. These events almost run themselves once you push that snowball down the hill. It's fantastic to see it all come together. I've had random attendees at our SQL Saturday come up and let me know about things they took care of because they needed to be done. How awesome is that?

Training. Well, this one is just what it is. Training never ends. That's a good thing. I think once we stop advancing, we only start to decline. I honestly don't believe knowledge can stay stagnant.

My wife even went one step further this year with my anniversary present. It still needs a bit of work... but she's so committed to giving me the time and ability to work that this appeared in our backyard.

Grumpy Cat is needed to keep me company, though. It does get quiet in my backyard box. The furniture will be updated when I get the walls in.

To make things better? It even has the most requested and required tools available for your studying needs!

Yes, I realize this needs a few things... like insulation, proper lights, electricity, sheetrock... a floor? But for day one, it's usable. We turned our old storage shed spot into a place where we can work without distractions. She can come out on days she needs some time, just the same.


My biggest problem in keeping up with blog posts and dedicating time to training was my own fault.
I'm fixing it with a bit of help (a lot) from the family. I'm hoping this is the start of another spring forward as well. Besides, when one node of our AG goes down on my kid's birthday again, we won't have to hunt for a quiet spot and interrupt the party, again. :)

I'm pretty excited. I hope to post more here. I think some quiet swinging in a hammock is exactly what's needed to get back to writing.

See you all soon!

3.3.15

SQL Server, Error message 1785, and cascading deletes: How it kills kittens

I've been gone for a bit; forgive the long delay. Who would have thought the hatred for a cascading delete would cause an update? Strange, isn't it? So where to begin?

It was a dark and stormy night... Wait, wrong movie. 

I just finished a design on a project for cascading deletes. We have over 100 tables that spider web out from our main claim table.

Problem: We cannot delete a record from the main table without deleting from all of the other tables first, in the right order.

Solution: Modify all of the foreign keys (FKs) to include ON DELETE CASCADE.
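
The change itself is just a drop and re-create of each key. A minimal sketch, using made-up names (dbo.Claim, dbo.ClaimDetail, fk_ClaimDetail_Claim); swap in your own tables:

-- Hypothetical example: re-create an existing FK with ON DELETE CASCADE.
ALTER TABLE [dbo].[ClaimDetail] DROP CONSTRAINT [fk_ClaimDetail_Claim];
ALTER TABLE [dbo].[ClaimDetail] WITH CHECK
    ADD CONSTRAINT [fk_ClaimDetail_Claim]
    FOREIGN KEY ([ClaimID]) REFERENCES [dbo].[Claim] ([ClaimID])
    ON DELETE CASCADE;
ALTER TABLE [dbo].[ClaimDetail] CHECK CONSTRAINT [fk_ClaimDetail_Claim];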

I wanted to share some of my labor pains with you all. ^.^;

Issue 1) We had one table with FKs to two other tables. This prevented us from using ON DELETE CASCADE because that table was protected from deletion by another FK. Pain in the tail.

This caused the error "Server: Msg 1785, Level 16, State 1, Line 1 Introducing FOREIGN KEY constraint 'fk_two' on table 'table2' may cause cycles or multiple cascade paths. Specify ON DELETE NO ACTION or ON UPDATE NO ACTION, or modify other FOREIGN KEY constraints. Server: Msg 1750, Level 16, State 1, Line 1 Could not create constraint. See previous errors."

This error is because of how a cascading path works. You have to have all the deletes/updates flow in a single direction. Having an FK to a table other than your main table causes it to be protected by another key.
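
Here's a hypothetical repro of that error. Deleting from table1 could reach table3 two ways (directly, and through table2), so SQL Server rejects the second cascading FK:

-- All names below are made up for illustration.
CREATE TABLE dbo.table1 (id INT PRIMARY KEY);
CREATE TABLE dbo.table2 (id INT PRIMARY KEY,
    t1_id INT CONSTRAINT fk_one FOREIGN KEY REFERENCES dbo.table1 (id)
        ON DELETE CASCADE);
CREATE TABLE dbo.table3 (id INT PRIMARY KEY,
    t1_id INT CONSTRAINT fk_three FOREIGN KEY REFERENCES dbo.table1 (id)
        ON DELETE CASCADE,
    t2_id INT);

-- This one fails with Msg 1785, multiple cascade paths:
ALTER TABLE dbo.table3 ADD CONSTRAINT fk_two
    FOREIGN KEY (t2_id) REFERENCES dbo.table2 (id) ON DELETE CASCADE;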

We decided that we'd remove the secondary FK since it already has the constraint from the primary table... We were able to just drop the secondary constraint since it had no children related to anything else. This still kept it under the original table's constraint.

Issue 2) We were lucky enough to have a Visio diagram of this whole 100+ table process. If we did not have this already on hand, we would have used the show dependency feature to start the spider web out. It would have worked fine.

Issue 3) Rewriting all the FK’s will take a tremendous amount of time. I created a script to help with that. It’s a bit convoluted, so bear with it. 

-- Stash every primary key column so the generated FK can reference the
-- parent table's key column.
select *
into #results
from INFORMATION_SCHEMA.KEY_COLUMN_USAGE kcu
WHERE OBJECTPROPERTY(OBJECT_ID(constraint_name), 'IsPrimaryKey') = 1

-- Build one DROP/ADD/CHECK trio per foreign key, re-creating each key
-- WITH CHECK and ON DELETE CASCADE. The semicolons matter: without them
-- the generated batch will not parse.
select
'ALTER TABLE ['+sch.name+'].['+tbl.name+'] DROP CONSTRAINT ['+fk.name+']; ALTER TABLE ['+sch.name+'].['+tbl.name+'] WITH CHECK ADD CONSTRAINT ['+fk.name+'] FOREIGN KEY(['+kcu.COLUMN_NAME+']) REFERENCES ['+sch2.name+'].['+tbl2.name+'] (['+temp.COLUMN_NAME+']) ON DELETE CASCADE; ALTER TABLE ['+sch.name+'].['+tbl.name+'] CHECK CONSTRAINT ['+fk.name+'];'
from sys.sysforeignkeys sfk
join sys.foreign_keys fk on fk.object_id = sfk.constid
join sys.tables tbl on tbl.object_id = fk.parent_object_id
join sys.schemas sch on sch.schema_id = tbl.schema_id
join INFORMATION_SCHEMA.KEY_COLUMN_USAGE kcu on kcu.TABLE_NAME = tbl.name
join sys.tables tbl2 on tbl2.object_id = sfk.rkeyid
join sys.schemas sch2 on sch2.schema_id = tbl2.schema_id
join #results temp on temp.TABLE_NAME = tbl2.name
WHERE OBJECTPROPERTY(OBJECT_ID(kcu.constraint_name), 'IsPrimaryKey') = 0
order by tbl.name

drop table #results

Issue 4) The script output isn't in the right order. We took the diagram from Issue 2, started at the outer edges of the spider web, and put the statements in order in the script. The "Show Dependency" option on a table helps you see how many paths down it sits. This would be a great time to build a diagram if you don't already have one.

Issue 5) Updating an FK locks both the table with the FK and the table with the PK. This table never stops updating, so it stays locked the whole time the change is running. Luckily, with all SSDs, the worst table took 2m30s.
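
If you want to watch the change from another session while it runs, the standard DMVs are enough. A quick sketch (nothing here is specific to our environment):

-- Shows blocked sessions and any running ALTER, with wait details.
SELECT r.session_id, r.blocking_session_id, r.command,
       r.wait_type, r.wait_time
FROM sys.dm_exec_requests r
WHERE r.blocking_session_id <> 0
   OR r.command LIKE 'ALTER%';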

Final result: the whole project took 3-4 weeks. It used to take us a 5-day work week to remove 1,400 records. It now takes 5 minutes to remove those same 1,400.

Things to consider: cascading deletes reduce integrity protection. Before, if you tried to delete a claim, it would fail. Now it works. That's the easier scenario to fix. If someone deletes something from the 2nd tier, it may go unnoticed. The negative side of this has to be discussed. Proper access control could be enough.
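
If access control is the mitigation, a minimal sketch (the roles and tables below are made up) might look like this. Note that a cascade only checks DELETE permission on the table named in the DELETE statement, so denying direct deletes on child tables doesn't break it:

-- Hypothetical roles/tables: deletes funnel through the main table only.
CREATE ROLE claim_deleters;
GRANT DELETE ON dbo.Claim TO claim_deleters;
DENY DELETE ON dbo.ClaimDetail TO claim_app_users; -- hypothetical app role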

While you're doing this, do you want to add cascading updates as well? It would take the same amount of time and has the same issues that cascading deletes have. We decided it wasn't worth the risk for updates as well.

7.7.14

Collaborated Training: What Legacy Will You Leave?

My very first IT job was a bit daunting for me. I really wanted to do my best but I wasn't really sure what they did. One of our supervisors had built a large collection of documentation and a list of our most common issues. It helped so much. That doesn't work as well in the database world. I do document my code and am building documentation on our projects so that everyone has better information. How do I leave behind something that someone else could use beyond simple documentation?

SharePoint

Odd lead-in, I know. What I plan on doing is simple though. Every day I come across something I've not learned yet but may have some interest in. I plan on creating a folder with the main term listed with an X at the end. This will only signify that I have nothing in that folder yet. When I get time later, I'll go back to each X and see what strikes me as interesting that day.

Expanding on this idea

At first I'm going to make it simple and keep it to my close friends. The idea would be that anyone can create a folder and put items in each one. Something you didn't even think about may suddenly appear full of information. Something like this could start small scale and end up being a strong repository for multiple people within a few user groups or even beyond. I know we have forums now... but they don't really isolate information for long-term use short of searching.

Keeping it clean

I might build it out at the start... but honestly, moving stuff around later shouldn't be an issue. At first it will be just a list of items. It'll slowly transform into groups of folders divided by subject. I'm thinking a master legend that's updated weekly/monthly may be good as well. I'm going to set a few requirements on people who upload information, which I plan to follow as well. All documents will be labeled BriefInformation-Date-VersionNumber (if applicable). Information within the document should include proper citations. If someone is recording the complete work of someone else, they will need to identify where they got it, when it was obtained, and include a link back to that location. I should be able to get better ideas of what to write as I slowly build out my personal list of things I don't know.

Negatives

Yes, there is a downside of sorts. If you move on quickly against your will... you may lose control of this. That's where you need to maintain good backups or have an agreement that you can retrieve this before you leave. A worst-case scenario is that they may assume they can continue on without you and replace you with a cheaper, newer model. That's always the fear of making work look too easy. It's good to declare goals and show accomplishments. There are simply things you cannot predict.

Legacy

It's a strong word to use... but hear me out. It's easy to keep a copy of good works for personal use if you keep good backups. Now I'm not saying that you should keep proprietary information... that should have its own folder. Consider what you could gather with this. Most things we learn come from a necessity of some sort. Consider all the things you've learned over the last year or so. How helpful would it have been to have them in one central location? Think of all the good an internal wiki would do.

I pose this to you. Create this for your own environment and keep a copy for yourself should you leave. Every new place you go, set it up again and keep working on it. How far can this go in a month, a year, or even a decade? I've forgotten well more than I can remember. Take all that information you hold mentally and store it offline. This could save you and your co-workers months of effort. Every place you leave will be able to spin the new guy up with ease.

Final Product

I plan on having something soon to show off for you all. I want to show a working skeleton with instructions and suggestions. I plan on showing pricing options and alternatives for those a bit more cash strapped. I want to have something up and working in the next month that my close friends can demo and work with. If anyone has any suggestions, I'll gladly listen to them. I'd even like to eventually get a developer friend or two to work up an app for my phone to hit my site and let me pull up information or write down ideas on the go. If this already exists... throw me a line, I'd love to know.

What do you all think of this idea?

3.3.14

Issues Installing SQL 2008 R2 on Server 2012 R2 Clustering

Oh so the nightmare begins. 

At the bottom I've included a set of links where I found my answers. What I'm doing here is just giving a condensed list of what I had to do to get it to work. I ran into a few issues almost immediately.

I got an error showing that a 2003 patch was not installed.
Windows Server 2003 FILESTREAM Hotfix Check failed

This required a workaround: you need to slipstream SP1 or SP2 into the install. These steps are not my own; I used what I found here, written by Peter Saddow.

1. Copy your original SQL Server 2008 R2 source media to C:\SQLServer2008R2_SP1
2. Download the SQL Server 2008 R2 SP1 packages from here. You need to download all Service Pack 1 architecture packages:
•SQLServer2008R2SP1-KB2528583-IA64-ENU.exe
•SQLServer2008R2SP1-KB2528583-x64-ENU.exe
•SQLServer2008R2SP1-KB2528583-x86-ENU.exe
3. Extract each of the SQL Server 2008 R2 SP1 packages to C:\SQLServer2008R2_SP1\SP as follows:
•SQLServer2008R2SP1-KB2528583-IA64-ENU.exe /x:C:\SQLServer2008R2_SP1\SP
•SQLServer2008R2SP1-KB2528583-x64-ENU.exe /x:C:\SQLServer2008R2_SP1\SP
•SQLServer2008R2SP1-KB2528583-x86-ENU.exe /x:C:\SQLServer2008R2_SP1\SP
Complete this step for all architectures so the original media is updated correctly.
4. Copy Setup.exe from the SP extracted location to the original source media location. Here is the robocopy command:
•robocopy C:\SQLServer2008R2_SP1\SP C:\SQLServer2008R2_SP1 Setup.exe
5. Copy all files (not the folders), except Microsoft.SQL.Chainer.PackageData.dll, from C:\SQLServer2008R2_SP1\SP\ to C:\SQLServer2008R2_SP1\ to update the original files. Here are the robocopy commands:
•robocopy C:\SQLServer2008R2_SP1\SP\x86 C:\SQLServer2008R2_SP1\x86 /XF Microsoft.SQL.Chainer.PackageData.dll
•robocopy C:\SQLServer2008R2_SP1\SP\x64 C:\SQLServer2008R2_SP1\x64 /XF Microsoft.SQL.Chainer.PackageData.dll
•robocopy C:\SQLServer2008R2_SP1\SP\ia64 C:\SQLServer2008R2_SP1\ia64 /XF Microsoft.SQL.Chainer.PackageData.dll
6. Determine if you have a DefaultSetup.INI at the following locations:
•C:\SQLServer2008R2_SP1\x86
•C:\SQLServer2008R2_SP1\x64
•C:\SQLServer2008R2_SP1\ia64
If you have a DefaultSetup.INI at the above locations, add the following line to each DefaultSetup.INI:
PCUSOURCE=".\SP"

If you do NOT have a DefaultSetup.INI, create one with the following content:
;SQLSERVER2008 R2 Configuration File
[SQLSERVER2008]
PCUSOURCE=".\SP"
and copy it to the following locations:
•C:\SQLServer2008R2_SP1\x86
•C:\SQLServer2008R2_SP1\x64
•C:\SQLServer2008R2_SP1\ia64
This file will tell the setup program where to locate the SP source media that you previously extracted.
7. Run setup.exe as you normally would.
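
Once setup finishes, a quick sanity check from SSMS confirms the slipstream took; the patch level should already report SP1:

-- Expect ProductLevel = 'SP1' on a successfully slipstreamed install.
SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('ProductLevel') AS ProductLevel;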

To reiterate, this post is just meant to give a good central location for how to get SQL Server 2008 R2 installed on Server 2012 R2. The original post was here.

There was a second error showing that the cluster service verification failed. You can fix this via the GUI, but it is much easier to open a PowerShell window and just copy and paste "Add-WindowsFeature RSAT-Clustering-AutomationServer". This fix was written by Emilson Barbosa Bispo on this page.

All in all, it was a good learning experience. I know that it's normally best to upgrade your OS when you upgrade SQL... but those licenses are expensive. This goes double if it's an Enterprise license and you have a few servers that don't need to be up to date just yet.

I really hope this is helpful for someone. I know finding these saved me a lot of time. What's the worst thing you've had to work on? I'd love to hear other stories of interesting fixes. If you run into any other errors while doing this, let me know. I may have come across it and have an easy answer for you.

How to Slip Stream

3.12.13

Testing labs

What sort of testing lab do you set up when you're testing things out? I see the word AdventureWorks almost as often as I see SQL in these books. Is there anything else out there? Jeremiah Peschka made a post recently about sample databases. Are there any others out there you all use? I personally use VMware Workstation 10, though VirtualBox works great... and it's free ^.~ . You can use a Windows Server trial for a lot of your testing... and you can use a trial of SQL Server, or even just get the Developer Edition for $60 and not worry about rebuilding every time.

My personal setup is VMware Workstation running one Domain Controller and however many SQL boxes I need (normally 4 + a control), and I test everything out there using AdventureWorks and a SQL load generator.

I'd love to find a better load generator that works with 2014... What settings are you all using? Do you only test in prod... or do you have a server at work just to play with?

My workstation build is an Alienware M14 i7 4th gen with 16 GB of RAM, and I upgraded the hard drive to the Samsung Evo 1TB. (Yes, yes... the price tag is a bit much to deal with... but it's a 14-inch laptop that can play all my games, run all my projects, and get about 5 hours of battery life when I cut my settings down. It's soooo much more portable than my last 17-inch ASUS Knights of the Old Republic laptop.) I can normally load 1 Domain Controller and 4 SQL boxes without lagging my videos or YouTube. I plan on removing the drive bay, getting an external drive, and adding a second 2TB spinning drive for archive reasons.

Please respond with any information or thoughts you have on this. I'd love to hear the feedback.

Note: Still only one post for the free month from Pluralsight. Sounds like this will be an easy win if no one else joins in. Winner decided Wednesday!

14.11.13

Baseline Maintenance Statistics Job For Indexes. (say that three times fast)

This is a project I've been working on for a while. The information is all there... how it's queried may need some work. The general idea is that no one has a full year's view of how their indexes are performing. If you plan on working somewhere for the rest of your life... make that life easier. If you're taking over a system because someone found it under their desk... or just have a new one coming on board, this will make at least some of your life easier.

This job creates two tables in master. Testing this on a multi-schema 3.4TB database with hundreds of tables, the two tables only grew to 50MB combined. That being said, it should be relatively easy to re-target this to a DBA or other database if you wish to isolate it. It will delete any data over 370 days old. This allows you to get ranged data. Think! You may have that one major report that runs once a month... once a quarter! If you delete an index that appears to never be used, one day you may find out it was.

This job assumes that you have a privileged account named 'sa'. That is easy to change if you have renamed it or disabled it altogether. Side note: it's a good idea to rename or disable it and use something else. This job also assumes that you have the default category of '[Uncategorized (Local)]', which is perfectly fine.
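
If you have renamed or disabled 'sa', repointing the job owner is one line (the login below is a made-up example):

-- Hypothetical login name; use whatever privileged account owns your jobs.
EXEC msdb.dbo.sp_update_job
        @job_name = N'MaintenanceStatistics',
        @owner_login_name = N'DBAJobsLogin';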

The first query produces a list of all indexes that have shown no reads in whatever time frame you define. If you let this run long enough, it's a good way to ensure that you don't have wasted space and slower inserts, updates, and deletes in your database for no good reason.

The second query produces a list of suggestions. These are meant only as suggestions. Please do not blindly follow them. There are some wonderful places full of documentation on how to handle indexes. This is just to log information for prolonged use.

The final thing to note: the job is created without a schedule. Make sure you assign it a daily schedule during off-peak hours (best right after peak hours have ended). Since this runs daily, even if you reboot every Saturday, you'll still have 6 days' worth of data each week to use for a baseline (7 if you reboot after).
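
For example, attaching a daily schedule might look like this; the 8:00 PM start time is an assumption, so pick whatever lands just after your own peak hours:

USE [msdb]
GO
EXEC msdb.dbo.sp_add_jobschedule
        @job_name = N'MaintenanceStatistics',
        @name = N'Daily - Off Peak',
        @enabled = 1,
        @freq_type = 4,              -- daily
        @freq_interval = 1,          -- every day
        @active_start_time = 200000  -- 20:00:00 (HHMMSS)
GO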

You will have to split up the results some; I advise putting them in Excel, split by date of last restart.
You can get the file from here or copy from below.
/*************************************************************
Created by Bill Barnes
Script Date: 06/19/2013 09:24
for non commercial and testing purposes.
MaintenanceStatistics was created for SQL Server 2008 SP1 or higher
This job creates two tables located in master.
master.dbo.DBA_UnderUsedIndexes
master.dbo.DBA_MissingIndexes
The job created is MaintenanceStatistics.
This job pulls valuable information into the two above tables.
--Remember to set a schedule for MaintenanceStatistics. It does
--not come with one by default.
--be careful to target the correct category. [Uncategorized (Local)] is common.
--make this the owner of the job. SA is common. It's best to change this name.
*************************************************************/
/*************************************************************
--This query will tell you what indexes have shown zero reads over a 370 day window.
--The purpose is to ensure you're only removing indexes that have never been used.
--Please do not run this in production and make a decision based on a days data.
--This job is meant to run at all times, once a day, and collect a year's data for strong
--indexing base lines.
select distinct b.DataBaseName, b.TableName, b.Indexname  as IndexName
FROM master.dbo.DBA_UnderUsedIndexes a
inner join(select LastServicesRestart, DataBaseName, TableName, indexname, sum(reads) as reads
FROM master.dbo.DBA_UnderUsedIndexes
group by LastServicesRestart, DataBaseName, TableName, indexname HAVING SUM(reads) = 0) b
on a.indexname = b.IndexName
inner join (select distinct LastServicesRestart from master.dbo.DBA_UnderUsedIndexes) c
on c.LastServicesRestart = a.LastServicesRestart
group by b.DataBaseName, b.TableName, b.Indexname, c.LastServicesRestart
--This query is meant to produce a list of what indexes may need some work.
--Never trust exactly what this shows. Always look deeper and use your best judgement.
select Old.CurrentDate, Dist.sqlserver_start_time, statement,
Impact, [Average Total Cost],[% Reduction of Cost],
[Missed Opportunities], [Equality Columns], [Inequality Columns],
[Included Columns] from master.dbo.DBA_MissingIndexes Orig
inner join (select distinct sqlserver_start_time from master.dbo.DBA_MissingIndexes) Dist
on Dist.sqlserver_start_time = Orig.sqlserver_start_time
inner join (select distinct sqlserver_start_time, Max(CurrentTime) as CurrentDate FROM master.dbo.DBA_MissingIndexes
group by sqlserver_start_time) Old
on Orig.CurrentTime = Old.CurrentDate
--where [% Reduction of Cost] > 50
Order by sqlserver_start_time, Impact desc
*************************************************************/

USE [master]
GO

/****** Object:  Table [dbo].[DBA_UnderUsedIndexes]******/

SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

CREATE TABLE [dbo].[DBA_UnderUsedIndexes](
        [CurrentTime] [datetime] NOT NULL,
        [LastServicesRestart] [datetime] NOT NULL,
        [DataBaseName] [nvarchar](128) NULL,
        [TableName] [sysname] NOT NULL,
        [IndexName] [sysname] NULL,
        [reads] [bigint] NULL,
        [writes] [bigint] NOT NULL,
        [rows] [bigint] NULL,
        [reads_per_write] [numeric](38, 17) NULL
) ON [PRIMARY]

USE [master]
GO

/****** Object:  Table [dbo].[DBA_MissingIndexes]******/

SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

CREATE TABLE [dbo].[DBA_MissingIndexes](
        [CurrentTime] [datetime] NOT NULL,
        [sqlserver_start_time] [datetime] NOT NULL,
        [statement] nvarchar(200) NULL,
        [Impact] bigint NULL,
        [Average Total Cost] numeric(10,2) NULL,
        [% Reduction of Cost] int NULL,
        [Missed Opportunities] int NULL,
        [Equality Columns] nvarchar(2000) NULL,
        [Inequality Columns] nvarchar(2000) NULL,
        [Included Columns] nvarchar(2000) NULL
) ON [PRIMARY]
GO

USE [msdb]
GO

/****** Object:  Job [MaintenanceStatistics]******/

BEGIN TRANSACTION
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0

/****** Object:  JobCategory [[Uncategorized (Local)]]]******/

IF NOT EXISTS (SELECT name FROM msdb.dbo.syscategories WHERE name=N'[Uncategorized (Local)]' AND category_class=1)
--be careful to target the correct category. [Uncategorized (Local)] is common.
BEGIN
EXEC @ReturnCode = msdb.dbo.sp_add_category @class=N'JOB', @type=N'LOCAL', @name=N'[Uncategorized (Local)]'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
END

DECLARE @jobId BINARY(16)
EXEC @ReturnCode =  msdb.dbo.sp_add_job @job_name=N'MaintenanceStatistics',
                @enabled=1,
                @notify_level_eventlog=0,
                @notify_level_email=0,
                @notify_level_netsend=0,
                @notify_level_page=0,
                @delete_level=0,
                @description=N'No description available.',
                @category_name=N'[Uncategorized (Local)]',
                @owner_login_name=N'sa', @job_id = @jobId OUTPUT
                --make this the owner of the job. SA is common. It's best to change this name.
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

/****** Object:  Step [Missing Indexes]******/

EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'Missing Indexes',
                @step_id=1,
                @cmdexec_success_code=0,
                @on_success_action=3,
                @on_success_step_id=0,
                @on_fail_action=3,
                @on_fail_step_id=0,
                @retry_attempts=2,
                @retry_interval=10,
                @os_run_priority=0, @subsystem=N'TSQL',
                @command=N'
delete from master.dbo.DBA_MissingIndexes
where CurrentTime < GETDATE()- 370
GO

                INSERT INTO master.dbo.DBA_MissingIndexes
                SELECT
        current_timestamp as CurrentTime,
        m.sqlserver_start_time,
        id.statement,
        cast(gs.avg_total_user_cost * gs.avg_user_impact * ( gs.user_seeks + gs.user_scans )as bigint) AS Impact,
        cast(gs.avg_total_user_cost as numeric(10,2)) as [Average Total Cost],
        cast(gs.avg_user_impact as int) as [% Reduction of Cost],
        gs.user_seeks + gs.user_scans as [Missed Opportunities],
        id.equality_columns as [Equality Columns],
        id.inequality_columns as [Inequality Columns],
        id.included_columns as [Included Columns]
FROM sys.dm_db_missing_index_group_stats AS gs
JOIN sys.dm_db_missing_index_groups AS ig ON gs.group_handle = ig.index_group_handle
JOIN sys.dm_db_missing_index_details AS id ON ig.index_handle = id.index_handle
cross join sys.dm_os_sys_info m -- single-row DMV; supplies the last service start time
ORDER BY Impact desc',
                @database_name=N'master',
                @flags=0
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

/****** Object:  Step [Underused Indexes]******/

EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'Underused Indexes',
                @step_id=2,
                @cmdexec_success_code=0,
                @on_success_action=1,
                @on_success_step_id=0,
                @on_fail_action=2,
                @on_fail_step_id=0,
                @retry_attempts=2,
                @retry_interval=10,
                @os_run_priority=0, @subsystem=N'TSQL',
                @command=N'

delete from master.dbo.DBA_UnderUsedIndexes
where CurrentTime < GETDATE()- 370

exec sp_MSforeachdb @command1 = ''
use [?]
INSERT INTO master.dbo.DBA_UnderUsedIndexes
SELECT current_timestamp as CurrentTime,
m.sqlserver_start_time as LastServicesRestart
, DB_NAME() AS DataBaseName
, o.name as TableName
, i.name as IndexName
, reads=user_seeks + user_scans + user_lookups  
, writes =  user_updates  
, rows = (SELECT SUM(p.rows) FROM sys.partitions p WHERE p.index_id = s.index_id AND s.object_id = p.object_id)
, CASE
WHEN s.user_updates < 1 THEN 100
ELSE 1.00 * (s.user_seeks + s.user_scans + s.user_lookups) / s.user_updates
END AS reads_per_write
FROM sys.dm_db_index_usage_stats s
INNER JOIN sys.indexes i ON i.index_id = s.index_id AND s.object_id = i.object_id
INNER JOIN sys.objects o on s.object_id = o.object_id
INNER JOIN sys.schemas c on o.schema_id = c.schema_id
cross join sys.dm_os_sys_info m -- single-row DMV; supplies the last service start time
WHERE OBJECTPROPERTY(s.object_id,''''IsUserTable'''') = 1
AND s.database_id = DB_ID()
AND i.type_desc = ''''nonclustered''''
AND i.is_primary_key = 0
AND i.is_unique_constraint = 0
AND (SELECT SUM(p.rows) FROM sys.partitions p WHERE p.index_id = s.index_id AND s.object_id = p.object_id) > 10000
ORDER BY reads, writes desc''',
                @database_name=N'master',
                @flags=0
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
COMMIT TRANSACTION
GOTO EndSave
QuitWithRollback:
    IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:
GO