
9.6.14

Documentation: What Can We Document?

          I'd like to do a short series on documentation. I personally love documentation. Honestly, I think most of us like documentation; we just hate writing it. There are quite a few excuses I've heard against writing it.
  1. I don't have time to document as I code.
  2. Changes need to be in place now, I can't waste time documenting.
  3. This isn't my code.
  4. There's too much to document now.
  5. We plan on deprecating this system in the next year or two.
  6. Everyone's already familiar with this system.
It honestly doesn't take much time to add some documentation. Not everything needs letterhead, pretty backgrounds, screenshots and instructions for every single line. Here are a few examples of fast commenting.

-- In T-SQL, two dashes start a single-line comment that can sit right next to or under your code. This is great for explaining a bit toggle or what options exist in a stored procedure.

/*
In T-SQL, /* starts a block comment that runs until the closing */, no matter how many lines deep it goes. Great for revision notes, modification explanations, author credit, date created and many other uses.
*/

In SSIS you can right-click the open design surface and insert an annotation. You can leave notes such as, "Don't run this step without running the previous step," and many other variations.

You can even create your documentation in Excel or Word for larger projects, or to document which tables and stored procedures a certain job needs.

The one thing I'd want documented more than anything else is changes. A change log for the network, Domain Controllers, servers, SQL instances, tables, stored procedures, just about anything. A simple file share with folders for each group works well for a change log; SharePoint may be a better target if you have it in your environment. File names can be as simple as Servername.doc.

/**************************************************************
Admin: John Doe
Change: Modified Max RAM to 177152.
Reason: Leaving 10% to the OS and other functions.
Date: 6/9/2014
**************************************************************/

/**************************************************************
Admin: Jane Doe
Change: Modified Remote Admin Connections to 1
Reason: Ability to remote the DAC when the server is having issues. Temporary change while troubleshooting Database issues related to Ticket #0003425552
Date: 4/21/2014
**************************************************************/

A year later we may see a setting and wonder who changed it and why, and now we have a searchable document to answer those questions.


7.5.14

Searching SSIS Packages or searching XML In General

Today I'm posting a small script with a simple purpose. I wanted to search all of my SSIS packages in MSDB for a specific word. This does not work on encrypted packages. That being said, here's the script in question.

SELECT 
  Name
, CONVERT( xml, CONVERT(varbinary(max), PackageData) )   AS XMLSource
FROM msdb.dbo.sysssispackages
--WHERE CONVERT( nvarchar(max), CONVERT( xml, CONVERT(varbinary(max), PackageData) ) ) LIKE N'%YourSearchTerm%'

I know it has a lot of converts, but it works. It takes about 19 seconds to run on my server if that helps. 

If there are better ways to do this, please add it in the comments. I'd love to get a better version for my archives. 

I know this is a short post. Prepping for my first SQL Saturday has taken a bit more time than I expected. Thanks for reading everyone!

28.4.14

Resume Generating Event (RGE)

No, I haven't lost my job. This is just one of those topics that's good to get out there. There are honestly few things that can cause you to question if you still have a job the next day, but those things still exist. There are ways to mitigate RGEs, but do you know what they are in your area?

Service Level Agreements (SLA)

Gathering information on exactly how important each database is needs to happen before a real disaster. Disaster recovery planning relies on knowing your SLAs. You need to know how long a database can be down, which one is the most important, at what point you have to start calling third parties (customers, vendors, app support), how much data you can afford to lose, and at what point you should stop fixing the broken server and fail over to other hardware. I know, I know... no one likes giving these answers. My experience normally goes like this.

Question: How long can the database be down?
Answers: It should never be down. As little as possible. Why? Are we having issues?

Question: Which one is most important?
Answers: They're all important. Can't you bring them all up at once?

Question: What point do I start calling 3rd party people?
Answers: What's wrong now? That's case by case, call me first.

Question: How much data can we afford to lose?
Answers: None. None. None. Why? What have you done? None. We should never lose data.

Question: What point should I just stand up a new server?
Answers: We don't have spare servers. Why would you need a new one?

What can I do now?

Well, we can take some preventive action. Some of this is harder than you'd expect without first knowing what your SLAs truly are. Here are a few things you can do today to help until you get those answers.

Find where your backups are stored.
Make sure the backups are stored on different physical media than the databases.
Test your backups occasionally to verify they're even good.
Make sure you have all the drivers for anything that's not standard.
Keep a log of which databases are on each server (see the query sketch after this list).
Keep a log of the average uncompressed size of your databases per server.
Keep a log of the standard settings you use for that server (RAM, drive structure, version number).
Update the phone list, or at least your personal contacts, with everyone you need to call if a 2 AM incident happens.
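
Here's the query sketch mentioned above, a minimal inventory of the databases on an instance and their size on disk; sys.master_files reports size in 8 KB pages, so the math below converts to MB.

-- Minimal inventory sketch: databases on this instance and their size on disk.
SELECT
    d.name                  AS DatabaseName,
    SUM(mf.size) * 8 / 1024 AS SizeMB   -- mf.size is in 8 KB pages
FROM sys.databases AS d
JOIN sys.master_files AS mf ON mf.database_id = d.database_id
GROUP BY d.name
ORDER BY d.name;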

Is there some sort of form that can be used?

My next post will include a list of the questions I'd want answered for each database as well as a short list of questions I need to ask myself. Having a printed list for each database, or set of databases if they have the same requirement, can be a career saver.

I plan on making a form to make this a bit easier. I will at the very least create an Excel or Word list with examples. I think this is good to have for everyone from your highest Sr. DBA to your multi-hat Network Admin who's being forced to manage a rogue database. Having it signed off by your boss may make the difference in keeping your job during a major outage. A little CYA never hurt anyone.

24.3.14

SQL Server Enterprise is Cheaper than Standard

Free Month of Pluralsight

I'm interested in seeing your responses. Any relevant response will be accepted, and a free month of Pluralsight will be given away. You do not have to agree with me to be relevant. I want to hear what you think.

Now this may seem a bit far-fetched, but hear me out.

This is a bit long-winded for me.

First we will address the minimum requirements. We are comparing core licensing only. You must purchase at least 4 cores. Enterprise Edition costs $6,874 per core. That's a starting price of $27,496. Standard Edition costs $1,793 per core. That's a starting price of $7,172.

Now I know what you're thinking: a $20,324 difference seems pretty open and shut against this. This is where I ask that you hear me out. It gets a bit tricky here. Let's start comparing the benefits of upgrading.

Limit                 Standard    Enterprise
RAM                   64 GB       Unlimited
Indexing              Offline     Online
Table Compression     No          Yes
Fast Recovery         No          Yes
Table Partitioning    No          Yes
Resource Governor     No          Yes
I know there are a lot more differences relating to BI, AS, RS and many other aspects; let's just get enough out there to prove the point. I just told you that a $20,000 cost was a savings, yes? How can we save $20,000 by spending it?

How much do you make a year? 

How about your other DBA, or the junior you're about to hire? How many developers do you have on staff? How many of them are overworked trying to keep your old Standard server running? Look at your databases closely. Let's answer some questions; we'll come back to this one last.

Do you have the maximum RAM your server can support installed?

That may be 192 GB, 384 GB, or any other number. Unless your server is older, it should support more than the 64 GB of RAM that Standard allows. I know, I know, Windows Server Standard used to support only 32 GB of RAM, but that changed in Server 2012, where the limit is now measured in terabytes. I know SQL Server 2014 allows 128 GB of RAM in Standard, but Enterprise still gives you more room to grow.

Do you have processor cores just going idle most times? 

Just because you have 12 cores doesn't mean you need to license 12. You can configure SQL Server to use only the cores you license. Only license what you need.
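
As a sketch (the CPU range here is just an example), SQL Server 2008 R2 and later can be bound to a subset of the schedulers:

-- Restrict the instance to the first four CPUs only; adjust the range to match what you license.
ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = 0 TO 3;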

Do you have SLA's to meet that have been difficult due to maintenance windows? 

Online indexing allows you to rebuild indexes just about any time. You do still get a minor lock at the start and end of an online operation, but that's far better than holding it for the whole process.
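
For example, a minimal sketch with placeholder index and table names:

-- The ONLINE option is the Enterprise piece here.
ALTER INDEX IX_YourIndex ON dbo.YourTable REBUILD WITH (ONLINE = ON);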

Do you have issues with archiving those massive tables?
 
Is their performance falling behind? Partitioning can help you swap parts of the table in and out while being minimally intrusive. You can even address fragmentation per partition instead of hitting that 10 billion row table all at once. In 2014 you can even do that operation online now!
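
A hedged example of that per-partition maintenance (the names and partition number are placeholders; online partition-level rebuilds need SQL Server 2014):

-- Rebuild only partition 5 of a hypothetical large index, online.
ALTER INDEX IX_YourBigIndex ON dbo.YourBigTable REBUILD PARTITION = 5 WITH (ONLINE = ON);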

Do you have multiple databases on the same server fighting for resources?

Well now you can split them up logically instead of having that same conversation about splitting them up physically. 
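
A minimal sketch of what that logical split can look like with Resource Governor; the pool, workload group and login names are made up for illustration, and the classifier function lives in master.

-- Cap a hypothetical reporting workload.
CREATE RESOURCE POOL ReportingPool WITH (MAX_CPU_PERCENT = 30, MAX_MEMORY_PERCENT = 25);
CREATE WORKLOAD GROUP ReportingGroup USING ReportingPool;
GO
-- Classifier (created in master) routes sessions from a hypothetical login into the group.
CREATE FUNCTION dbo.fn_rg_classifier() RETURNS sysname WITH SCHEMABINDING
AS
BEGIN
    DECLARE @grp sysname = N'default';
    IF SUSER_SNAME() = N'ReportingUser'
        SET @grp = N'ReportingGroup';
    RETURN @grp;
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fn_rg_classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;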

Are you fighting for more space or even considering moving to an Enterprise SAN?

Page compression is a beautiful thing. It should pose no problems on archived tables. If CPU is not a bottleneck currently, you can expand the window of what you compress. Heavily used tables may not benefit as much... but here's where partitioning can work with this. Page compression saves quite a bit of space. This may be just enough of a space saver to allow you to request those SSDs you've been wanting.
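
A hedged sketch using a made-up archive table: estimate the savings first, then rebuild with page compression.

EXEC sp_estimate_data_compression_savings
     @schema_name = N'dbo', @object_name = N'ArchiveTable',
     @index_id = NULL, @partition_number = NULL, @data_compression = N'PAGE';

ALTER TABLE dbo.ArchiveTable REBUILD WITH (DATA_COMPRESSION = PAGE);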

Let's add this all together.

Yes, there is a cost up front, but now you no longer need to hire a third full-time DBA or developer. If we get that system onto SSDs, use page compression to keep our sizes small and partitioning to keep our archived data on slow disks, our response time is faster. We were previously running at the edge of our RAM at all times; now even last month's static data stays cached, and the disks are only touched to read changes. Our DBAs are getting more sleep now that their fragmentation jobs aren't blocking all night long. Our maintenance windows are getting shorter and our SLAs have more room to breathe.

This view will not fit all organizations. As always, the phrase "It depends" will fit in this scenario as well. Think carefully about all these issues and the time you've burned fixing them. You could be working on that next project to make your company even more money.

I mentioned developers in this post; I'll explain why. With compression, partitioning, more caching and faster access to those tables... you can hide a lot of "quick" coding behind that much extra power. I'm not saying we should code poorly because we can. I'm saying we can code how we need to because we can. Once it's up and working, you can then go back and fine-tune.

Enterprise Edition has a higher cost. Asking for more SSDs and more RAM has an additional cost. Losing your talented, knowledgeable DBAs and developers to long hours, continual uphill battles and denied tools or extra personnel will cost you a lot more in the end. Training your next DBA alone may eat up this cost. Losing customers because you cannot meet SLA requirements has a large cost, both monetarily and to your reputation.

Let's make the world happier, one Database shop at a time.

17.3.14

My Favorite Free Scripts


I'll start off with Ola Hallengren's scripts. I'm not a fan of reinventing the wheel. If someone makes gold, I tend to want to use it. Ola's scripts are fantastic. They're even already updated for 2014! You do have to contact him for the CTP version of the scripts.

These scripts cover your backups and index maintenance, verify the integrity of your databases and log the results of all of this for you. You can set up one part or all of them. This covers all of your DBA 101 requirements for keeping your servers running safely. He even gives examples of what you might want to throw into the script! It doesn't get much easier than this.


Adam Machanic wrote a great script called "Who is Active". He even has a 30-part blog series on the ins and outs of this procedure, found here. Just about anything you want to know can be found there. It's a fantastic script. I would familiarize yourself with it before relying on it for day-to-day operations.


Kendra Little has a great video and some sample scripts to run to see which indexes your server is asking for. As she will stress, do not just put them all into your database. Too much of a good thing can be really bad. This is where we look for all that beautiful low-hanging fruit.


On the same site as the Missing Indexes video by Kendra Little, there is a great script called sp_BlitzIndex, written by the same group. It's the step beyond just looking for missing indexes. Take the time to look through it when you have some time; it's not the place I'd look with a fire to put out, at least at first. Learn it in detail before assuming too much and jumping in.


sp_Blitz was written by the Brent Ozar group. It helps you identify many pain points really quickly and even includes links explaining what they mean and suggestions on how to handle them. Honestly, how nice is it that these exist for us?!

What Can We Do?

So here we have 5 fantastic scripts. What will these accomplish? 
1) We start off with Ola's script and get our backups and index fragmentation under control. 
2) We find out from our users if anything in particular is slow or below SLA requirements. 
3) We run the SP_Blitz and see what shows up as a major issue.
4) We get a quick break down of what the Missing Indexes are suggesting.
5) We put all this together in a solid actionable list. 
6) We present our findings and come up with a solution to work on.

Yes, I use the word "we" a lot, and I do it on purpose. Correcting this many problems isn't a one-person operation. You want the other product heads involved. You need to know what a change could break, or whether it overlaps with a problem someone is already trying to fix. If they need a report to respond in under a minute, spending a week trying to get it from 45 seconds down to 5 seconds isn't where the focus should be unless everything else is working perfectly.

There Is More Out There!

These are just 5 of the many, many free scripts out there that have been provided just to make your job easier. I've said it before, I'll say it again... MS SQL Server has a fantastic community around it. Few come even close. These 5 scripts alone can help a DBA sustain a shop with minimal effort. Use them not as a crutch, but as a starting point to make everything even better.

If you all know any other great scripts that can be added to this collection, I would love to hear it!

10.2.14

Do Not Set Maximum Ram To 0 Ever - Mistakes We Make

I'm going down memory lane here and remembering one of the first things I did as a DBA. I was told by one of our other DBAs that if you have 40 GB of RAM for a database and it's not working very well, the fix is to drop max server memory from 40,960 MB to 1,024 MB, then set it back to 40,960 once everything has cleared out. I understand now that this isn't the best way to handle it: queries may fail, everything that was cached has to be cached again, and you invite a slew of other performance issues and possibly failed reports.

Here comes the bad part. I had the bright idea at the time that if setting it to 1 GB fixed the issue... why not set it to zero? Well, SQL Server doesn't accept zero with any grace; it rounds it up to 16 MB of RAM, and SQL Server just will not run with that little memory. SSMS wouldn't load, the website went down, nothing was working. We had to stop all services and log in through SQLCMD after starting SQL Server in single-user mode with the minimal configuration switch.

Now we have downtime in the middle of production hours. I will say there are better ways to fix this, but the way we fixed it was restoring master from a backup earlier that day that still had the correct setting. I wasn't really familiar with working with SQL Server from a command prompt or PowerShell window at the time. This obviously extended the downtime and didn't help my reputation any.
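
For reference, here's a sketch of one of those better ways, assuming a default instance; the 40,960 matches the setting above.

-- From an elevated command prompt:
--   net stop MSSQLSERVER
--   net start MSSQLSERVER /f /m"SQLCMD"
--   sqlcmd -S . -E
-- Then, in that sqlcmd session, put the setting back and restart the service normally:
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 40960;
RECONFIGURE;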

We All Make Mistakes

I now know what not to do. I know to verify things I do not completely understand and cannot logically pick apart. I know how to research more efficiently and I have a better Disaster Recovery plan. We all make mistakes, really we do. No one is perfect. The thing is though, what are we doing to get better? 

This is my request to all of you. Post a story about a mistake you've made and what you did to overcome it. Tell me what you've learned and how you plan on preventing it in the future. I think we could all simply learn from our mistakes or we can help others by letting them learn from ours.

I'm sure someone out there has a good story about wiping out a table because they forgot the WHERE clause and the transaction to wrap it in.

3.2.14

What Tables Are In My Filegroups

One issue I've found a bit troublesome is trying to find out what's in a specific filegroup. Let's say you're trying to clear off a LUN or just a drive and you see a file labeled "iudexes7.ndf". Now unless you built this and have a steel trap for a memory... or are just fantastic at documentation... you probably have no clue what's in that file. If you target the offending database, you can either run the query below as-is and gather this data, or uncomment the WHERE clause and target just that filegroup.

Below is a good script for exploring and cleanup.

select sch.name AS SchemaName, tbl.name AS TableName, idx.name AS IndexName, ds.name AS Filegroup,
       data_compression_desc, total_pages, total_pages * 8 / 1024 AS SizeInMB,
       max_column_id_used, fill_factor
from sys.partitions p
inner join sys.allocation_units au on au.container_id = p.hobt_id
inner join sys.filegroups fg       on fg.data_space_id = au.data_space_id
inner join sys.tables tbl          on tbl.object_id = p.object_id
inner join sys.indexes idx         on idx.object_id = p.object_id
                                   and idx.index_id = p.index_id
inner join sys.schemas sch         on sch.schema_id = tbl.schema_id
inner join sys.data_spaces ds      on ds.data_space_id = au.data_space_id
--where ds.name = 'primary'
order by ds.name, idx.name

This is a good way to move files off a specific drive, clean up wasted space or even just help with some space issues related to a specific file. Happy hunting!
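
If the goal is to empty a file like that, one option (a sketch, assuming the clustered index isn't enforcing a constraint; all names are placeholders) is to rebuild the clustered index onto a different filegroup, which moves the table's data with it.

-- The index name must match the existing clustered index on the table.
CREATE CLUSTERED INDEX CIX_YourTable ON dbo.YourTable (YourKeyColumn)
WITH (DROP_EXISTING = ON)
ON [PRIMARY];  -- rebuilds the index on the PRIMARY filegroup, taking the table data with it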

27.1.14

Forcing Results To Conform For Exports

This one is a bit off the beaten path for me. I've had to make SSIS exports come out in specific formats, such as how a number is returned or how many characters a column returns every time, regardless of how long the actual value was. Here are a few things I've put to use that worked out well in this situation. If you have others that are more common or a better fit, I'm always interested in adding something new to my list of tools.

Two of these functions were created by Igor Nikiforov. They are included at the bottom of this post and need to be created before the script will work. If you take nothing else from this post, please visit Igor's page and look at a few of his user-defined functions. They are very useful if your background isn't strong in coding.

The original query is a simple select from adventure works. 

select 
addressid, addressline1, addressline2, city, StateProvinceID, postalcode, modifieddate 
from AdventureWorks2012.Person.Address

These are a few of the conversions we used to get the outputs to fit as we needed to match an older method. 

select 
 dbo.padl(addressid,10,'0') as AddressID 
,left(addressline1 + space (40),40) as AddressLine1
,Case when addressline2 is null then '' else addressline2 end as AddressLine2 
,isnull(city,'No City Listed') as City
,dbo.padl(StateProvinceID,3,'0') as StateProvinceID 
,dbo.padr(convert(char(15),postalcode), 15, ' ') as ZipCode 
,convert(varchar,ModifiedDate,110) as Date 
,convert(varchar,ModifiedDate,108) as Time 
from AdventureWorks2012.Person.Address 
order by convert(varchar,ModifiedDate,112) desc, convert(varchar,ModifiedDate,108) desc
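
If you'd rather not create UDFs at all, a hedged native alternative to the dbo.padl call above is RIGHT plus REPLICATE; the '0' and the width of 10 match that call.

SELECT RIGHT(REPLICATE('0', 10) + CONVERT(varchar(10), AddressID), 10) AS AddressID
FROM AdventureWorks2012.Person.Address;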



/****** Object:  UserDefinedFunction [dbo].[PADR]    Script Date: 01/26/2014 23:30:33 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO



-- Author:  Igor Nikiforov,  Montreal,  EMail: udfs@sympatico.ca   
 -- PADL(), PADR(), PADC() User-Defined Functions
 -- Returns a string from an expression, padded with spaces or characters to a specified length on the left or right sides, or both.
 -- PADR similar to the Oracle function PL/SQL RPAD 
Create function [dbo].[PADR]  (@cString nvarchar(4000), @nLen smallint, @cPadCharacter nvarchar(4000) = ' ' )
returns nvarchar(4000)
as
     begin
       declare @length smallint, @lengthPadCharacter smallint
       select  @length  = datalength(@cString)/(case SQL_VARIANT_PROPERTY(@cString,'BaseType') when 'nvarchar' then 2  else 1 end) -- for unicode
       select  @lengthPadCharacter  = datalength(@cPadCharacter)/(case SQL_VARIANT_PROPERTY(@cPadCharacter,'BaseType') when 'nvarchar' then 2  else 1 end) -- for unicode

       if @length >= @nLen
          set  @cString = left(@cString, @nLen)
       else
          begin
             declare  @nRightLen smallint
             set @nRightLen  =  @nLen - @length -- Quantity of characters, added on the right
             set @cString =  @cString + left(replicate(@cPadCharacter, ceiling(@nRightLen/@lengthPadCharacter) + 2), @nRightLen)
 end

     return (@cString)
    end


/****** Object:  UserDefinedFunction [dbo].[PADL]    Script Date: 01/26/2014 23:30:21 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO



-- Author:  Igor Nikiforov,  Montreal,  EMail: udfs@sympatico.ca   
 -- PADL(), PADR(), PADC() User-Defined Functions
 -- Returns a string from an expression, padded with spaces or characters to a specified length on the left or right sides, or both.
 -- PADL similar to the Oracle function PL/SQL  LPAD 
Create function [dbo].[PADL]  (@cString nvarchar(4000), @nLen smallint, @cPadCharacter nvarchar(4000) = ' ' )
returns nvarchar(4000)
as
      begin
        declare @length smallint, @lengthPadCharacter smallint
        select  @length = datalength(@cString)/(case SQL_VARIANT_PROPERTY(@cString,'BaseType') when 'nvarchar' then 2  else 1 end) -- for unicode
        select  @lengthPadCharacter = datalength(@cPadCharacter)/(case SQL_VARIANT_PROPERTY(@cPadCharacter,'BaseType') when 'nvarchar' then 2  else 1 end) -- for unicode

        if @length >= @nLen
           set  @cString = left(@cString, @nLen)
        else
  begin
              declare @nLeftLen smallint,  @nRightLen smallint
              set @nLeftLen = @nLen - @length  -- Quantity of characters, added at the left
              set @cString = left(replicate(@cPadCharacter, ceiling(@nLeftLen/@lengthPadCharacter) + 2), @nLeftLen)+ @cString
           end

    return (@cString)
   end

22.1.14

PowerShell Remote Commands

This is something I've found useful recently when I had a server acting up. It's a simple way to send commands to a remote server, such as 'shutdown -r'.

Enable-PSRemoting -Force

Enables you to actually remote into servers through PowerShell.


Test-WSMan Server1

Just tests that you can reach the server through this method.


Invoke-Command -ComputerName Server1 -ScriptBlock { Get-ChildItem C:\ } -credential Domain\UsrID

If you're only sending a single command, you'd replace the "Get-ChildItem C:\" with whatever you want to run.


Enter-PSSession -ComputerName Server1 -Credential Domain\UsrID

This command gives me an interactive session for longer-term work. Basically, if I'm sending more than a single command, this works better.


I'm sure there are other methods. I like short concise code. How do you all connect?

15.1.14

Restore Master From Backup

There are a few posts running around talking about restoring master from backup. Thomas LaRock has a fantastic "How To: Restore The Master Database In SQL Server 2012" post. It's very well organized and has a lot of great examples and directions. SQLSkills also has a survey up about restoring or rebuilding master. I'm really curious what their follow-up post will entail.

I'm going to throw something in the hat here. It's not as detailed as what you'd see from SQLRockstar or SQLSkills. This is a document I put together about a year ago for my group so that when we had to restore master, someone had a step-by-step on how to do so.

The word doc is located here.

This is not a training item or a deep dive into how the rebuilds are done and how you can map the stars with them. This is just simply a doc that you throw in your Disaster Recovery Plan folder and let it collect dust until something major happens. If your people know where you keep this... hopefully the instructions are simple enough that they can follow it. You may modify it some to fit your org better.

This is simply a run doc that sits and collects dust until a fire brews. Have your people do the training, have them read through SQLRockstar's post. It's great. Keep this document on the file share so that if they forget, you won't need to search through websites trying to remember something you've done once or twice in a career.

13.1.14

What Tables Are Wasted?

Wasted space still seems to be on everyone's mind to some extent. I've wondered: which tables do we actually use? We have our offenders; those are easy to track. But what about that little database that's been around for 4 years and that no one left at the company really knows what it does? I have a set of scripts that may help with that. I'll include a download link as well.

This is noted in the script comments, but I'll mention it here as well: the list is only accurate back to the last restart of the SQL services. Do not base your delete decisions solely on this. Track it over time, review whether an object is truly unused, and only remove it once you have a good backup so you can restore it if needed. Take caution any time you remove parts of your database.

You can download the full script Here. Any comments or suggestions are appreciated. Thanks again!

/*
Author Bill Barnes
Created 01/10/2014

Use: The purpose of this script is to show indexes and tables that have not been used
since the last restart of SQL services. This script can be quickly modified to show what
tables have been used and provide more useful data. This will ignore all system schema
based tables.

Note: an update or scan is counted once per statement, not once per row touched. If you update
a single row, the update count goes up by one. If you update a table and change 50,000 records
in one statement, that is still only 1 update. Keep that in mind when reading the numbers provided.
*/

--This Version will only pull a list of tables that have shown no use.
select sch.name as SchemaName, obj.name as TableName, idx.name as IndexName, obj.object_id,
usage.user_lookups,usage.user_scans, usage.user_seeks, usage.user_updates, usage.system_lookups,
usage.system_scans, usage.system_seeks, usage.system_updates, usage.last_user_lookup, usage.last_user_scan,
usage.last_user_update, usage.last_system_scan, usage.last_system_seek, usage.last_system_update
from sys.indexes idx
full outer join sys.dm_db_index_usage_stats as usage on idx.object_id = usage.object_id
and idx.index_id = usage.index_id
inner join sys.objects as obj on idx.object_id = obj.object_id
inner join sys.schemas as sch on sch.schema_id = obj.schema_id
where usage.database_id is null
and sch.schema_id <> 4
and obj.object_id is not null
order by obj.name

-- This version provides a list of all tables that are in use.
select sch.name as SchemaName, obj.name as TableName, idx.name as IndexName, obj.object_id,
usage.user_lookups,usage.user_scans, usage.user_seeks, usage.user_updates, usage.system_lookups,
usage.system_scans, usage.system_seeks, usage.system_updates, usage.last_user_lookup, usage.last_user_scan,
usage.last_user_update, usage.last_system_scan, usage.last_system_seek, usage.last_system_update
from sys.indexes idx
full outer join sys.dm_db_index_usage_stats as usage on idx.object_id = usage.object_id
and idx.index_id = usage.index_id
inner join sys.objects as obj on idx.object_id = obj.object_id
inner join sys.schemas as sch on sch.schema_id = obj.schema_id
where usage.database_id is not null
and sch.schema_id <> 4
and obj.object_id is not null
order by obj.name

--This version shows a sum of all activity on these tables.
select sch.name as SchemaName, obj.name as TableName, idx.name as IndexName, obj.object_id,
sum(usage.user_lookups) UserLookups,sum(usage.user_scans) UserScans, sum(usage.user_seeks) UserSeeks,
sum(usage.user_updates) UserUpdates, sum(usage.system_lookups) SystemLookups,
sum (usage.system_scans) SystemScans, sum(usage.system_seeks) SystemSeeks, sum(usage.system_updates) SystemUpdates
from sys.indexes idx
full outer join sys.dm_db_index_usage_stats as usage on idx.object_id = usage.object_id
and idx.index_id = usage.index_id
inner join sys.objects as obj on idx.object_id = obj.object_id
inner join sys.schemas as sch on sch.schema_id = obj.schema_id
where usage.database_id is not null
and sch.schema_id <> 4
and obj.object_id is not null
group by sch.name, obj.name, idx.name, obj.object_id
order by obj.name


6.1.14

Posts I've Found Interesting:

Which filegroup and file does each table live in? This is a question I had until I came across a great script by The Scripting Guys. The script below is unaltered and entirely their creation. I just found it useful enough to forward it along. This is a short post. Prepping for a baby on the way has cut into my sleep. ^.^


select 'table_name'=object_name(i.id) 
        ,i.indid 
        ,'index_name'=i.name 
        ,i.groupid 
        ,'filegroup'=f.name 
        ,'file_name'=d.physical_name 
        ,'dataspace'=s.name 
from    sys.sysindexes i 
        ,sys.filegroups f 
        ,sys.database_files d 
        ,sys.data_spaces s 
where objectproperty(i.id,'IsUserTable') = 1 
and f.data_space_id = i.groupid 
and f.data_space_id = d.data_space_id 
and f.data_space_id = s.data_space_id 
order by f.name,object_name(i.id),groupid 
go 

SQLSoldier wrote a post recently on how DMVs cost him his job by helping him find a better one. I found it rather interesting and thought I should pass it along. A similar situation happened to me when the company I was with showed no interest in providing better training or adjusting pay, and was overall just difficult to get hold of. I was a contractor working as a DBA. A good friend of mine had an opening where he worked, and I ran for happier times. Sometimes the grass really is greener.

There's an older post (2007) about joins. Let me say first, I'm not a coder. I've worked with some Java, C++, VB, and obviously T-SQL. I always prefer a visual I can work with or semi working code to build off of. Coding Horror had a post explaining SQL joins visually. It's worth looking into.


Don't forget! SQL Saturday Albuquerque is coming up!


26.12.13

Powershell Backups and Post Frequency Change

It's been a good run doing a post every day. I've liked getting these 36 posts out, one each weekday, for some time now, but I'm going to scale back. I'm not sure yet if I'll go down to one or two a week. I have the book giveaway post on the 30th and the winner announcement on the 1st. It's been a fun month, that's for sure. I'm hoping for more quality and less rushing to push something out. I'm really curious what I can do with proper time.

Simple PowerShell Backup Script

There are better ways to do this; I welcome your responses and advice. I'm just now dabbling in PowerShell. I know Sean McCown has a large library of PowerShell videos; I think working through them will be my New Year's resolution. Below is my first day working with PowerShell.

This is my PowerShell script that I've scheduled with Windows Task Scheduler. Before we shred the idea here... we needed backups taken for an Express box. I'm sure scripts exist out there and no one should reinvent the wheel, but I wanted to take a stab at it to learn a bit more.

sqlcmd -S .\servername -i E:\Tasks\Backupalldatabases.sql

$Path = "E:\Backups\master"
$Daysback = "-3"

$CurrentDate = Get-Date
$DatetoDelete = $CurrentDate.AddDays($DaysBack)
Get-ChildItem $Path | Where-Object {$_.lastWriteTime -lt $DatetoDelete } | Remove-Item

$Path = "E:\Backups\msdb"
$Daysback = "-3"

$CurrentDate = Get-Date
$DatetoDelete = $CurrentDate.AddDays($DaysBack)
Get-ChildItem $Path | Where-Object {$_.lastWriteTime -lt $DatetoDelete } | Remove-Item

$Path = "E:\Backups\model"
$Daysback = "-3"

$CurrentDate = Get-Date
$DatetoDelete = $CurrentDate.AddDays($DaysBack)
Get-ChildItem $Path | Where-Object {$_.lastWriteTime -lt $DatetoDelete } | Remove-Item

$Path = "E:\Backups\Main"
$Daysback = "-3"

$CurrentDate = Get-Date
$DatetoDelete = $CurrentDate.AddDays($DaysBack)
Get-ChildItem $Path | Where-Object {$_.lastWriteTime -lt $DatetoDelete } | Remove-Item

$Path = "E:\Backups\Restore"
$Daysback = "-3"

$CurrentDate = Get-Date
$DatetoDelete = $CurrentDate.AddDays($DaysBack)
Get-ChildItem $Path | Where-Object {$_.lastWriteTime -lt $DatetoDelete } | Remove-Item

$Path = "E:\Backups\Test"
$Daysback = "-3"

$CurrentDate = Get-Date
$DatetoDelete = $CurrentDate.AddDays($DaysBack)
Get-ChildItem $Path | Where-Object {$_.lastWriteTime -lt $DatetoDelete } | Remove-Item

This calls a SQL script that is a bit more basic.

exec sp_MSforeachdb @command1 = '
use ?
if ''?'' not like ''tempdb''
begin
declare @backup varchar (1000)
declare @date varchar (100)
set @date = (select cast(cast(Getdate() as char(11)) as date))
set @backup =
(''BACKUP DATABASE ['' +
db_name() + ''] TO  DISK = N''''E:\Backups\'' +
DB_NAME() + ''\'' +
db_name() +
'' '' +
@date +
''.bak'''' WITH  RETAINDAYS = 1, NOFORMAT, NOINIT,  NAME = N'''''' +
db_name() + ''-Full Database Backup'''', SKIP, NOREWIND, NOUNLOAD,  STATS = 10'')
exec (@backup)
end'


I have found a way to delete anything in all subfolders older than 3 days, which would work better than hand-scripting all of this out... but I haven't put it in play yet.


20.12.13

Unused Heaps? - Quick Tips: Day 10

Don't forget! This coming Monday is the last Monday to win a free month of Pluralsight! All it takes is a single comment on the post that day and some way to reach your e-mail address so I can give you the code. The Monday following that, December 30th, will have a different prize: I will be giving away a free book and mailing it to your house. If you live outside the continental US, I will instead send you a $25 Amazon card to be used however you want. It's at least something to ring in the new year. ^.^

A question came up recently: how can you find heap tables that have not been used? This is the best answer I have so far. It will pull a list of all heap tables that have not been used since the last restart. Keep that in mind: it only covers activity since the last restart, or since the last time the DMVs were cleared for any reason.

--single database use
declare @db_id smallint
set @db_id = DB_ID(db_name())
select db.name as DatabaseName, sch.schema_id as SchemaName, tbl.name as TableName, idx.name as Indexname,
stats.index_type_desc as IndexType, stats.page_count as PageCount
 from sys.dm_db_index_physical_stats (@db_id, null,null,null,null) stats
left outer join sys.dm_db_index_usage_stats as usage on usage.object_id = stats.object_id
inner join sys.tables as tbl on tbl.object_id = stats.object_id
inner join sys.indexes as idx on idx.object_id = stats.object_id and idx.index_id = stats.index_id
inner join sys.schemas as sch on sch.schema_id = tbl.schema_id
inner join sys.databases as db on db.database_id = stats.database_id
where stats.index_id = 0
and usage.object_id is null
order by page_count desc

This will work for multiple databases. Be careful though. This will take some time to run on larger systems.

--multiple database use
exec sp_msforeachdb @command1 = '
use ?
begin
declare @db_id smallint
set @db_id = DB_ID(db_name())
select db.name as DatabaseName, sch.schema_id as SchemaName, tbl.name as TableName, idx.name as Indexname,
stats.index_type_desc as IndexType, stats.page_count as PageCount
 from sys.dm_db_index_physical_stats (@db_id, null,null,null,null) stats
left outer join sys.dm_db_index_usage_stats as usage on usage.object_id = stats.object_id
inner join sys.tables as tbl on tbl.object_id = stats.object_id
inner join sys.indexes as idx on idx.object_id = stats.object_id and idx.index_id = stats.index_id
inner join sys.schemas as sch on sch.schema_id = tbl.schema_id
inner join sys.databases as db on db.database_id = stats.database_id
where stats.index_id = 0
and usage.object_id is null
end'

Alternately, you can also see what heap tables you have that are in use and how big they are.

--single database use
declare @db_id smallint
set @db_id = DB_ID(db_name())
select distinct db.name as DatabaseName, sch.schema_id as SchemaName, tbl.name as TableName, idx.name as Indexname,
stats.index_type_desc as IndexType, stats.page_count as PageCount
 from sys.dm_db_index_physical_stats (@db_id, null,null,null,null) stats
left outer join sys.dm_db_index_usage_stats as usage on usage.object_id = stats.object_id
inner join sys.tables as tbl on tbl.object_id = stats.object_id
inner join sys.indexes as idx on idx.object_id = stats.object_id and idx.index_id = stats.index_id
inner join sys.schemas as sch on sch.schema_id = tbl.schema_id
inner join sys.databases as db on db.database_id = stats.database_id
where stats.index_id = 0
and usage.object_id is not null
order by page_count desc



18.12.13

Popular DBCC's And Our Third Winner Of The Month!: Day 8

Winner Announcement!

The third winner of this month is Annapu Reddy Gayathri! I've seen a nice increase in participants so far. I have one more month of Pluralsight to give away, then it's on to the book! On Dec 30th I will be mailing a book to one lucky winner. Something that just came to mind... mailing the book will be for the continental US only, but I don't want to stop anyone out of country from joining. If you win and you're from another country, I will instead send you a $25 Amazon code to purchase whatever you wish.

Popular DBCC Commands!

These are the DBCC commands I catch myself using the most. I'm including the reasons I use them, with a short usage sketch after the list. Do you have any suggestions to throw in here?

DBCC CHECKDB - I mainly use it to check for consistency issues. I prefer to have it running daily; alternatively, when I do my weekly restores to test my backups, I run DBCC CHECKDB on the restored database. It can let you know about problems you can fix before losing data.

DBCC INPUTBUFFER - Basic use is to show the last command sent from a specific spid. Good for tracking down what a blocking query is doing.

DBCC OPENTRAN - This shows what the oldest active transaction is. You can use this to see if you have active transactions or even to see what may be holding replication up between servers.

DBCC FREEPROCCACHE - This clears your plan cache. It's not something I'd advise unless you're isolating a specific issue, but it's good for testing query times when you don't want cached plans skewing the results. Best on a test box, though.

DBCC FREESESSIONCACHE - This flushes the distributed query connection cache. This is another one that I'd advise against using unless you're isolating a specific issue. I use this for testing queries as well.

DBCC FREESYSTEMCACHE - This removes all unused cache entries. SQL does this automatically, but this is the manual version. I like to use this when I'm testing queries. It's nice to at least think I have a clean slate.

DBCC SHRINKFILE - This allows you to shrink database files one by one, either to a specified size or by as much as it will allow. Now now, let me defend myself first: shrinks are not all bad, they have a good use occasionally! I wouldn't use them daily... but if you're moving a lot of data around and building out partitions, you have a reason to shrink the old files. Maybe you ran out of disk space because your logs grew huge while a database in full recovery had nowhere to back up to? These are good uses.

DBCC SHRINKDATABASE - This shrinks a database as a whole. Again like with shrink file... there are uses. If you're moving the database to a small test box, you may need the space before you can fit that restore there.

DBCC UPDATEUSAGE - This helps report accurate row counts. My understanding is that this was most useful coming from 2000 to any higher version. It still catches some counts that aren't listed elsewhere.
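
Putting a few of these together, here's the short usage sketch mentioned above; the database name and spid 148 are placeholders.

DBCC CHECKDB (N'YourDatabase') WITH NO_INFOMSGS;  -- consistency check, errors only
DBCC OPENTRAN (N'YourDatabase');                  -- oldest active transaction in that database
DBCC INPUTBUFFER (148);                           -- last statement sent by spid 148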


17.12.13

Database Recovery: Day 7

Recently we were talking about Backups and how you need to test them often. The most complete way to test a backup is to restore from it. The syntax is already documented out there. Also don't forget that our contest giveaway is tomorrow! All you need is a single comment on this post containing an E-Mail address!

What plan to use?

This relates more to the backups themselves than the restores... but it all factors in. Do you only take full backups on Monday at 0100? Is the database in Full recovery? If so, how many files do you have to restore when it fails Sunday at 1900? Don't get caught in that. Something to consider is running differentials. My personal rule is to only use differentials if the full backup is over 3x their size; once a differential reaches 50% of the full or higher, it adds a lot of time to your restore. I like having full backups on Fridays right after main business hours, differentials taken every day at that same time, and, on Saturday or Sunday, a scripted full restore to a secondary server. You'll have your backups running daily, tested every week, and all of it automated. You can just check for that nasty e-mail saying something failed.
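
A minimal sketch of those two backup types, with made-up database names and paths:

-- Friday, right after business hours: full backup.
BACKUP DATABASE YourDatabase
TO DISK = N'E:\Backups\YourDatabase\YourDatabase_Full.bak'
WITH INIT, STATS = 10;

-- The other days, at the same time: differential backup.
BACKUP DATABASE YourDatabase
TO DISK = N'E:\Backups\YourDatabase\YourDatabase_Diff.bak'
WITH DIFFERENTIAL, INIT, STATS = 10;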

Secondary Server

You can restore your backups to a secondary server. Good targets for this are your development and test servers. If you have one, ask your developers if they have any issues with weekly or even monthly restores of current data. It may be a great way to help them work on what you're currently pushing out. De-identification of data may be required; keep that in mind.
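
A hedged sketch of that restore test on the secondary box; the logical file names and paths are assumptions you'd confirm with RESTORE FILELISTONLY.

RESTORE DATABASE YourDatabase_Test
FROM DISK = N'E:\Backups\YourDatabase\YourDatabase_Full.bak'
WITH MOVE N'YourDatabase'     TO N'D:\Data\YourDatabase_Test.mdf',
     MOVE N'YourDatabase_log' TO N'L:\Logs\YourDatabase_Test.ldf',
     REPLACE, STATS = 10;
-- Then check the restored copy so the week's backup is actually proven good.
DBCC CHECKDB (N'YourDatabase_Test') WITH NO_INFOMSGS;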

Virtual

Don't have a spare server lying around? Ask for enough storage on your SAN, or if it fits, even an external drive, and restore to something like VMware Workstation. Something is better than nothing here. If your backups cannot fit on a 3 TB external drive, you're probably at a size that justifies pushing for a spare server for testing backups. If they don't want a whole server just for backups, offer its use to your developers if they don't currently have one.

How Often?

I prefer to test weekly if possible, monthly if you have to. The real question is: how much is your data worth to you? Can you lose a month of data and no one cares? If so, every other week or so may be enough. Remember the SLAs you may have in place. If you have to be able to restore 24/7, testing your backups is the only way to be sure.

10.12.13

Documentation: Day 2

As a reminder, we have one more day before this week's giveaway is over. All you need is a single comment on this thread. The winner will be announced Wednesday. We have more than a few people in the running this week! Good luck to you all.

/**************************************
Created by: Bill Barnes

Created: 12/9/2013 - 0728

Purpose: Run Document: To spread the word on good documentation

Description: There are multiple ways to document your code and your projects. While I've never heard a person say they like to document their code, it's invaluable to anyone taking over a new job or looking for code from years past. I know, I know... no one likes documentation. Nothing makes your day longer than looking at 10,000 lines of code without a single comment and you get tasked with writing it all down. You'll thank yourself later for doing it though!

Updates: Don't forget to comment for a free month of Pluralsight and the book giveaway at the end of the month.

12/9/2013 - 0729 Adding a header.

12/9/2013 - 0735 Creating a time line.

12/9/2013 - 1352 We all need smiley faces. ^.~
**************************************/

A few of my favorite places to find comments are listed below.

Stored Procedures, Jobs and Packages

The top of stored procedures and jobs is a great place to write a little blurb showing who wrote it, when, and why it was written in the first place. Working in a place that has a lot of SSIS packages, it's simply wonderful seeing a quick comment that shows who wrote it, and, for when it breaks, which parts can be restarted without duplicating data and where any fix scripts live. /in love/

Normal Code

As always, put a note at the top showing what the process does and who authored it; you can also add an update section showing changes and hot fixes for faster dissection of the code. Throughout the code, explain what each section does or what it interacts with, so you can trace issues faster.

Hot Fixes and Temporary Changes

Anywhere there is a hot fix or temporary change, tell me what the fix does, what direct problem it solves and why it was put in place. Is there a plan to address it in a proper update, or is it just a casual hot fix for now? Invaluable information.

Disaster Recovery

Disaster Recovery is a common place to see documentation requests. Do you need to wait for the network guys before you fail over to the DR site? Who all do you need to call? What breaks that needs to be fixed manually in the next 24 hours after a fail over? (if nothing, bravo! Sadly not all businesses can support that. Complexity makes everything harder.)

Emergency Protocols

A good grab bag style document is fantastic if you need to drive or fly to your DR site and set the whole thing up from backups right then. What order does your site need to be configured in? How much data can be lost? What order do the databases need to be brought back on line first? What jobs need to be disabled/enabled at the new site? Who all do we need to contact?

Saving Money on Consultants

You could pay your consultants less! No, seriously! If you have good documentation, you can send it to them before they come on site, or as they go through it they can understand why you do what you do. If they charge a half-day or full-day fee, you can save money by saving them time. Most of the ones I've seen have an initial fee and a half-day or full-day fee after that. If you can shave a day or two off their week-long visit, they get your problem fixed faster, you have less downtime and they get on to the next gig faster. We can all win there.


I know, a lot of people hate documentation. I love it. Do you want to make it less of an impact on your overall quality of life? There are a few simple steps you can take to make it much easier on yourself.

1) As you find code that you understand, add a quick note to what it's about. Even if it's a small note, it'll help you see it and like everything else... make it better. Over time you'll add more as you go.

2) As you write the code, explain what that loop does. Write out why you're converting this int to a varchar. It'll help even while you're writing when you're looking for a specific part.

3) When you write packages or hot fixes, write out why you wrote it. It's great for keeping in a list of accomplishments. Management loves lists. Two birds, one comment.

4) Failover testing is a great point to write out what order you do everything in and what is expected to break. It's great for having reasons things act like they do.

5) Keeping the Junior DBA busy for the first week or so. When I started my first DBA spot, I was handed a pile of documentation. While it does honestly suck to read through boring word docs for a week... It was extremely helpful becoming familiar with it. When things would break, I would know where to look for common issues and problems. If one didn't exist, after the problem was fixed... one would be created.

/* Fun isn't it! I may be sick... I'm fine with that. I love documentation. It's helpful to everyone in your company and could help bridge the gap of a good review to a great one when your name is posted everywhere fixing all the problems */

--Note: If you find someone made a mistake, you can fix it one on one. Get the good vibes going.


5.12.13

Blocked processes: Quick Tips

Sometimes Activity Monitor just isn't fast enough. Maybe it's not telling you what a specific query is doing. Below is a quick script just to get a glimpse into what is going on. This could be very useful if Activity Monitor decides not to load... again.

First, what's blocking?

select * from sysprocesses
order by blocked desc

Well well... what's spid 148 doing?

select * from sysprocesses p
cross apply sys.dm_exec_sql_text(sql_handle)
where p.spid = 148

This will give you what that query is running if you scroll to the far right.

There are more robust scripts, such as Adam Machanic's "Who Is Active". It honestly does a much better job; the above is just something quick to get specific information in a small amount of time.


Really quick post it seems. Don't forget, the next giveaway starts Monday!


3.12.13

Testing labs

What sort of testing lab do you set up when you're testing things out? I see the word AdventureWorks almost as often as I see SQL in these books. Is there anything else out there? Jeremiah Peschka made a post recently about sample databases. Are there any others you all use? I personally use VMware Workstation 10, though VirtualBox works great... and it's free ^.~ . You can use a Windows Server trial for a lot of your testing... and you can use a trial of SQL Server, or even just get Developer Edition for $60 and not worry about rebuilding every time.

My personal setup is VMware Workstation running one Domain Controller and however many SQL boxes I need (normally 4 plus a control), and I test everything out there using AdventureWorks and a SQL load generator.

I'd love to find a better load generator that works with 2014... What settings are you all using? Do you only test in prod... or do you have a server at work just to play with?

My workstation build is an Alienware M14 with a 4th-gen i7 and 16 GB of RAM, and I upgraded the hard drive to a 1TB Samsung Evo. (Yes, yes... the price tag is a bit much to deal with... but it's a 14-inch laptop that can play all my games and run all my projects, and I can get about 5 hours of battery life when I cut my settings down. It's so much more portable than my last ASUS 17-inch Knights of the Old Republic laptop.) I can normally load 1 Domain Controller and 4 SQL boxes without lagging my videos or YouTube. I plan on removing the drive bay, getting an external drive and adding a second 2TB spinning drive for archive purposes.

Please respond with any information or thoughts you have on this. I'd love to hear the feedback.

Note: Still only one comment in for the free month from Pluralsight. Sounds like this will be an easy win if no one else joins in. Winner decided Wednesday!

2.12.13

Index Rebuild Status Update And Giveaway!: Quick Tips

I have a free 1-month code to Pluralsight. There's no catch or gimmick: if you comment on this post, I'll write your name down, draw it out of a hat and e-mail you the code. If I can get my hands on a few more, my goal is to give one away every week until the new year. I will give the code away on the Wednesday of each week (or at least the first week, if I cannot get my hands on any other codes).

Update: Pluralsight has decided to give me four 1-month codes! Thank you, Pluralsight! I am giving away one each Monday until Christmas! To bring in the new year, I'll also be giving away Professional SQL Server 2012 Internals and Troubleshooting by Christian Bolton, made possible by Pluralsight giving these codes out for free.

A quick note: this only works with online index rebuilds, and only if you are rebuilding one index at a time. If you run this script, you will see a percentage as long as you run it in the database you're reindexing. The reason I put this together is that I wanted some way of knowing how far along I was when rebuilding a large table. I know there are bigger databases around, but if you're rebuilding a 1.4 TB table, it's nice to know whether you are 2% done or 97% done before going out for dinner.

I have a second script that has the same requirements but will give an estimate of the time of completion. As noted by a few other people that have looked at it... This one is a bit more experimental.


I'm asking for your help to give me some feedback on it. (This will also get your comment in for the free training!) Please test this script and let me know if it's within 10-15% of its estimate. I'm going to go ahead and state that it may be closer to Microsoft Minutes. I've found it to be within 10% on my system... but it needs testing.

Thanks for all your help so far!

Note: The first one is accurate, but the second one may vary. Let me know what you find related to these estimates.

Note 2: This is not a product placement or sale. This is purely something I wanted to organize because of the sheer kindness and help I received first stepping in to the DBA world.