Ryan Adams Blog

SQL, Active Directory, Scripting

SQLSaturday Baton Rouge

I’ll be doing a precon for SQLSaturday #324 in Baton Rouge, Louisiana on August 1st, 2014 titled “A Day of High Availability and Disaster Recovery.”  If you can make it, I would love to have you in the class.  We are going to cover backups, Windows clustering, AlwaysOn Failover Cluster Instances, AlwaysOn Availability Groups, and more.  The class takes each of those technologies in a progressive order so that they build on each other.  By the end of the day we will have a single solution, built out in virtual machines on my laptop, that uses all of those technologies to form a comprehensive, real-world high availability and disaster recovery architecture.  Click the button below to register; I have included the session abstract as well.

Eventbrite - SQL Saturday #324 Baton Rouge Pre-Con with Ryan Adams


Let’s spend a day looking at several High Availability and Disaster Recovery solutions for SQL Server. We’ll start off with a solid foundation as we look at backups and how to both configure and performance tune them. After we build our foundation, we’ll take a look at SQL Server Mirroring and why it’s still an important tool in our toolbox. Since all of the HA/DR solutions we will be looking at either sit on top of or can be combined with Windows Failover Clustering, we’ll learn how to set up and configure a Windows cluster to host these solutions. Next we’ll build on that platform by looking at SQL Server AlwaysOn Failover Cluster Instances. Lastly, we’ll dive into how to set up SQL Server AlwaysOn Availability Groups. Once we have a firm understanding of all these technologies, we’ll see how they work together by discussing a case study and developing a comprehensive solution. Here’s what you will learn:

SQL Server backup types
SQL Server recovery models
How to design a backup plan
How to performance tune your backups for free
How to configure mirroring
Mirroring configuration tips to increase throughput
Set up and configure a Windows Failover Cluster
Discover how to properly configure quorum to support the AlwaysOn feature set
Set up and configure a SQL AlwaysOn Failover Cluster Instance
Set up and configure a SQL AlwaysOn Availability Group

I am excited and humbled to be speaking at the PASS Summit for my second year in a row! There were 943 submissions and only 144 spots, so I am really excited and honored to be a part of the largest and most amazing SQL Server conference in the world.  They are expecting around 5,000 people this year in Seattle from November 4-7.

The current full conference price is $1595 and considering the speakers are consultants and folks like you and me that deploy these technologies in the real world every day, it’s worth every penny.  However, today (June 27th, 2014) is the last day to register at that price.  It goes up by $300 to $1895 tomorrow.  I have more good news.  You can contact your local chapter or virtual chapter to get a discount code worth $150 off.  The codes are good at the current price and after the price hike.  There will be more price hikes as the conference date gets closer.

I’m President of the PASS Performance Virtual Chapter so if you are looking for that code, or don’t have time to get one from your chapter, you can use this code:


At the Summit I will be presenting a session titled “SQL Server AlwaysOn Quickstart”.  I would love to see you in my session.  Even if you have seen me present this session before, I’ll be making some changes and adding content since I have more time.  It is a beginner level introduction, but I talk about every aspect of the technology.  Here is the abstract so you know what to expect, and I hope to see you there!

In this presentation I’ll explain what the SQL Server AlwaysOn high availability and disaster recovery solution is all about.  I’ll talk about the different levels of protection it provides through Windows Clustering, SQL Clustering, and Availability Groups.  We’ll discuss how these three things come together to protect your databases. We’ll finish with a dive into availability group configuration, the new capabilities it gives us, and what’s new in SQL Server 2014.

The Kerberos Configuration Manager is a really handy tool that can make configuring Kerberos a much easier task, but it’s got a nasty little bug.  Configuring Kerberos can be tricky.  In what way, you ask?  Here is a short list of just some of the things you have to consider.

  • What is my SPN supposed to look like?
  • Should I let SQL Server handle registering my SPNs or should I do it manually?
  • Are there special considerations if I’m running a failover cluster?
  • Where does the SPN go?
  • Do I have permissions to add/change/remove SPNs, and if not, what permission do I need to request?
  • Do I need delegation enabled and on which account?

That’s the short list, but you don’t have to worry about all that stuff because you have the Kerberos Configuration Manager, right?  Not so fast.

This tool works great in most scenarios, but if your environment has a split DNS, multiple domains, or multiple DNS namespaces, you’d better take a second look at the SPNs it suggests.  Many DBAs are not familiar with these concepts, so let me give a really simple explanation and then we can work through an example.  A split DNS is where hosts are resolvable in more than one namespace.  This is fairly common when companies merge or buy out other companies.

Let’s say the company “Movie Studio A” uses the DNS namespace of movies.com and all hosts are registered in that domain.  When they create their Active Directory they decide to call it MovieStudioA.com.  They join all their servers to the domain and use movies.com as their primary DNS suffix.  Server1 is now resolvable as both server1.movies.com and server1.moviestudioa.com.  The problem here is that if you open a command prompt and ping “Server1” it will resolve as server1.movies.com because of the DNS suffix.

Kerberos will only work with the Active Directory domain name, NOT any other resolvable DNS namespace.  Unfortunately, the Kerberos Configuration Manager makes SPN suggestions based on how the client machine resolves the server name you input.  What it should do, after resolving and contacting the server, is get the domain the server is joined to and correctly build the FQDN, but that is not the case.
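The mismatch can be sketched using the Movie Studio A example above. This is a simulated illustration of the name-building logic, not a real DNS query; the names come straight from the example.

```shell
# Simulated resolution: the primary DNS suffix wins when you ping a short name,
# but Kerberos needs the FQDN built from the Active Directory domain.
SHORT_NAME="Server1"
PRIMARY_DNS_SUFFIX="movies.com"     # what client-side resolution uses
AD_DOMAIN="moviestudioa.com"        # the domain the server is actually joined to

RESOLVED="${SHORT_NAME}.${PRIMARY_DNS_SUFFIX}"   # what ping resolves to
AD_FQDN="${SHORT_NAME}.${AD_DOMAIN}"             # what Kerberos requires

echo "ping resolves:  ${RESOLVED}"
echo "Kerberos needs: ${AD_FQDN}"
```

The tool builds its SPN suggestion from the first name, when it should be using the second.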

Let’s see what it looks like.  I have a server called Server1 that is joined to the stars.com domain.  The FQDN is Server1.stars.com.  This server is also registered in another DNS namespace called DNSOnlyDomain.com.  Here is what the Kerberos Configuration Manager says the SPNs should be.

Kerberos Configuration Manager with bad SPNs


Those SPNs are not correct because they come from a DNS namespace and not the Active Directory domain.  They should reflect the Active Directory domain and look like this.



If you are using the Kerberos Configuration Manager make sure you know what Active Directory domain your server is joined to so you can identify if the suggested SPNs are correct.
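As a sketch of how the correct SPNs in the Server1.stars.com example are built, the FQDN must come from the Active Directory domain. The port 1433 and a default instance are my assumptions here; the host and domain names are from the example above.

```shell
# Hedged sketch: build SPNs from the AD domain, not a merely resolvable DNS name.
HOST="Server1"
AD_DOMAIN="stars.com"    # the domain the server is actually joined to
PORT=1433                # assumed default instance port

echo "MSSQLSvc/${HOST}.${AD_DOMAIN}"          # instance-level SPN
echo "MSSQLSvc/${HOST}.${AD_DOMAIN}:${PORT}"  # port-specific SPN
```

On a real system you would register these with setspn against the account running the SQL Server service.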

I’m officially on the slate and running for the PASS 2014 Nomination Committee!

Why should you vote for Ryan?

I love the PASS SQL Community and everything it embodies.  I am very involved at the local and regional level.  I know all of the current board members and most of the past board members and those relationships have given me a good view into the PASS organization.  I served on the 2013 NomCom last year so I’m already intimately familiar with the processes.

Why am I qualified?

I have been involved with the community for the past 10 years and I was on the 2013 NomCom last year.  I am in my second term and fourth year serving on the board of directors for the North Texas SQL Server User Group.  I am in my second year as President of the Performance Virtual Chapter and served on the board the year prior.   I have been a regional mentor for the South Central Region for 3 years.  I was on the local core organizing committee for SQLRally Dallas as well as all 5 Dallas SQLSaturdays.   In addition, I have spoken at many SQLSaturdays, SQLRally, SQLCruise, PASS Summit, 24HOP, Dev Connections, local and remote user groups, and virtual chapters.  In the past I have been a session moderator for 24HOP and served on the Summit Program Committee.

Why do I want to serve on the 2014 committee?

I believe the NomCom should have a stronger influence than it does today.  This year’s NomCom will have an extended time frame with the ability to review elections processes.  I served last year and those processes are fresh on my mind.  I believe that they need some changes and my prior experience gives me a perspective on what changes are needed and the overall impact on the PASS organization.  I’m excited to have the opportunity to review those processes for any possible improvements.

PASS is an amazing organization that has provided me and others with both personal and professional opportunities.  Having a part in vetting the future leaders of an organization that provides so many opportunities for so many people would be a tremendous honor.

How do you vote?

There are only 3 spots on the committee and I would very much appreciate your vote to fill one of those spots.  If you were a PASS member by June 1st, 2014 with a COMPLETELY filled out profile then check your email for a link to your ballot.  Feel free to post any questions here and I will be more than happy to answer them.  You can also visit the PASS Elections Site to stay current with updates on this year’s elections or keep an eye on the #passvotes Twitter hashtag.

Voters can cast 3 votes total (1 for each seat), but I want you to think about how many candidates you actually vote for.  It is much more powerful to cast a vote for a single candidate than it is for all 3.  This applies to the board of directors election as well.  Of course I would love for that one vote to be for me!

How can you help?

I’m also going to ask you for a favor in addition to your vote.  I want you to either post here or email me (if you prefer privacy) the questions you would ask PASS board of director candidates during an interview.  The NomCom interviews and ranks each candidate.  The nomination committee’s job is to make sure the PASS board gets a slate of qualified candidates.  In order for the committee to do that, I want to know what matters to you, the member.  The members are PASS.  We want to make sure the candidates are thinking about what matters to you.

I ran into an interesting issue while installing SQL 2012 SP1 in a failover cluster configuration and utilizing the local TempDB feature.  If you are unfamiliar with the new feature in SQL 2012 that allows you to store TempDB on a local drive in a failover cluster, you can read more here.  Here is a screen shot showing where during the install you can configure your data directories.  You can click on the image for a larger view.


You can see that I put TempDB on the local C:\ drive.  Let me clarify that this was on a test virtual machine; putting TempDB on the operating system drive is a BAD practice.  In the words of The Offspring, “Don’t do this at home or at all!”  When you click Next on that dialog you get a popup saying there are warnings and to look below for more information.  You’ll notice a new warning area shows up at the bottom of the dialog.  Here is a screen shot, but the message gets cut off and you have to hover over it to see the whole thing.  It says, “You have specified a local directory C:\MSSQL11.INST01\MSSQL\Data as the tempdb data or log directory for a SQL Server cluster.  To avoid possible failures during a failover, you must make sure that the same directory exists on each cluster node and grant read/write permission to the SQL Server service.”


If that directory does not already exist, SQL Server will both create it and grant the SQL Server service account permission to it.  At this point we don’t have to worry about anything on this node, since the install takes care of it.  What the message really means is that you have to make sure the directory exists on every cluster node except this one.

This is also where the story takes an even more interesting twist.  After seeing that warning, I would fully expect to see it again when adding a node.  However, I didn’t get a warning at all.  Maybe the install created the directory for me like it did on the first node?  A lot of people might not think twice and assume everything is good since there was no warning, or they simply might not remember once they get to the other node.  I’m not one of those people, so I’ll go look to see if the directory exists.  What happens if you don’t create it and then try to fail over to that node?

It will fail back over to the original node or on to another node.  You should see 5 events in the event log.  Here’s what I see on my test system.

CREATE FILE encountered operating system error 3(The system cannot find the path specified.) while attempting to open or create the physical file ‘C:\MSSQL11.INST01\MSSQL\Data\tempdb.mdf’.

FCB::Open failed: Could not open file C:\MSSQL11.INST01\MSSQL\Data\tempdb.mdf for file number 1.  OS error: 3(The system cannot find the path specified.).

Unable to open the physical file “C:\MSSQL11.INST01\MSSQL\Data\tempdb.mdf”. Operating system error 3: “3(The system cannot find the path specified.)”.

CREATE DATABASE failed. Some file names listed could not be created. Check related errors.

Could not create tempdb. You may not have enough disk space available. Free additional disk space by deleting other files on the tempdb drive and then restart SQL Server. Check for additional errors in the event log that may indicate why the tempdb files could not be initialized.

That’s pretty much what I would expect to see.  The last error is misleading, of course, because we know the drive is not full; the real cause is that the directory does not exist.  At this point the fix is an easy one.  We just need to go to that node, create the directory C:\MSSQL11.INST01\MSSQL\Data, and make sure that the account running the SQL Server service has change (modify) permission on it.
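A minimal sketch of that fix, using the path from the install above. The Windows commands in the comments use a hypothetical service account name; the runnable lines below are a portable illustration of the same create-and-verify step.

```shell
# On the Windows node you would run something like (from an elevated prompt):
#   md C:\MSSQL11.INST01\MSSQL\Data
#   icacls C:\MSSQL11.INST01\MSSQL\Data /grant "MYDOMAIN\sqlservice:(OI)(CI)M"
# MYDOMAIN\sqlservice is a made-up name; substitute your SQL Server service account.
# Portable illustration of the create-and-verify step:
DATA_DIR="MSSQL11.INST01/MSSQL/Data"
mkdir -p "$DATA_DIR"       # create the full path if any part is missing
test -d "$DATA_DIR" && echo "tempdb directory ready"
```

Do this on every node the instance can fail over to before you need the failover.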

I’m almost positive that adding a node gave a warning in a previous release, like the RTM or a CTP, but I would have to go try that out to be sure.  On one hand Microsoft did warn you on the first node, but I still think it would be nice to have the warning show up when adding nodes as well.  After all, you might be adding a node six months to a year later and not remember.

Below is a link to a white paper I wrote on Central Management Server.  This white paper is an excellent guide for those who have never used it before.  It walks you through how to use CMS from beginning to end.  Here is what you can expect to learn:

  • Where to find CMS in SQL Server Management Studio
  • How to enable it
  • How to configure security
  • How to add connection objects
  • How to query multiple servers simultaneously
  • How to export local server connections and import them into CMS


Central Management Server White Paper

Below is a link to a white paper I wrote on Policy Based Management.  It serves as a guide to not only explain the technology but also walk you through setting it up and designing some policies.  You can expect to learn the following:

  • How to enable PBM
  • How to create a policy
  • The different ways to evaluate a policy
  • Where to find and import Microsoft best practice policies
  • How to set up alerts to be notified of policy violations
  • A list of common use cases


Policy Based Management White Paper



This post has been sitting in the draft folder for way too long, March of last year to be precise.  So although this is no longer new information, it has not been talked about all that much.  I’m talking about the new feature in SQL Server 2012 that allows you to put TempDB on local storage in a failover cluster.  That’s right.  You no longer have to house TempDB on your SAN, and why should you?

TempDB gets destroyed and re-created with each restart of the SQL Server Service so there is no need to have that happen on shared storage.  When we fail over from one node to another we don’t need the data in TempDB to still be sitting there.  So you can stop yelling at your SAN admin to “Give me more power!”  He doesn’t like you calling him Scotty anyway.

Scotty Power

What’s so great about putting TempDB on local storage?  One acronym: SSDs.  Now we can put TempDB on fast local storage like SSDs, or even a RAID 10 array that may very well be faster than your corporate SAN.  Of course this might not even be a concern if your application is not a heavy user of TempDB, but if you have something that hits TempDB hard, this can yield significant gains in I/O.

Let me also say that this is not a get-out-of-jail-free card or a cure if you are experiencing contention in TempDB.  We’re good DBAs and we’ll make sure everything is tuned and configured properly before we throw hardware at problems, right?  That means we need to discover whether we are even experiencing TempDB contention, and if we are, there are some other configurations we need to look at first; for instance, multiple TempDB data files that are equally sized.  That’s a whole other post or two, but Robert Davis (Twitter|Blog) has done a much better job in his white paper than I ever could.  Here’s a link:


I ran across an issue that I hadn’t seen since SQL 2000 and had almost forgotten about.  Let’s do a quick review of ports in SQL Server.  A default instance of SQL Server uses port 1433.  That’s just the default, and you can change it to whatever you want.  Bear in mind that if you change it, you’ll have to define that port in your connection strings or enable the SQL Browser Service.

Speaking of the SQL Browser Service, you’ll need that for a named instance of SQL Server for the exact same reason.  A named instance of SQL Server uses a dynamic port that can change with every restart of the SQL Server service.  Take a look at the following screen shot of a named instance.

Ports in Configuration Manager

We are interested in the very last section, “IPAll”.  You can see that “TCP Dynamic Ports” is set to 49402.  That is the port SQL Server has dynamically chosen.  For a normal named instance we’ll see a port there while the “TCP Port” setting is blank.  A default instance is the other way around, with a “TCP Port” defined and “TCP Dynamic Ports” blank.

So what happens when you have a named instance using dynamic ports and you also define a static port?  The title of this post already gave it away: whatever you put into the “TCP Port” setting overrides the dynamic port.  I personally think it would make more sense to only allow one of the settings at a time.  Maybe Microsoft will change it one day so that one setting is grayed out until the value is removed from the other.
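The precedence just described can be sketched as a simple override. The 49402 value comes from the screenshot above; the 50000 static port is a hypothetical value, and this illustrates the logic, not SQL Server’s actual mechanism.

```shell
# Sketch: a value in "TCP Port" wins over "TCP Dynamic Ports".
TCP_DYNAMIC_PORTS="49402"   # what SQL Server chose at startup
TCP_PORT=""                 # blank for a normal named instance

# If TCP_PORT is non-empty it overrides the dynamic port.
EFFECTIVE_PORT="${TCP_PORT:-$TCP_DYNAMIC_PORTS}"
echo "Listening on port ${EFFECTIVE_PORT}"    # the dynamic port, 49402

TCP_PORT="50000"            # now define a static port (hypothetical value)
EFFECTIVE_PORT="${TCP_PORT:-$TCP_DYNAMIC_PORTS}"
echo "Listening on port ${EFFECTIVE_PORT}"    # the static port, 50000
```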

I’ve had several cases over the last few months where I had to reinstall the SQL Server performance counters.  It sounds odd to have them magically disappear, but I’ve found a third-party monitoring product to be the culprit.

First we need to unload the counters just to make sure everything is removed and we are starting from a clean slate.  It’s better to be safe than sorry.  Open a command prompt as an Administrator and choose one of the following:

Default Instance

unlodctr MSSQLSERVER
Named Instance

unlodctr MSSQL$MyInstanceName

Now we need to re-load the counters.  They are stored in an INI file.  The file is located in the binn directory of the installation folder.  You can get the name of the file from the following registry key:

Default Instance


Named Instance


Now that we have the file name we just need to run the load counter command from an elevated command prompt.  Here is what the command would look like for a default instance of SQL Server 2012.

lodctr "C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\BINN\perf-MSSQLSERVERsqlctr.ini"

Hopefully this helps if it ever happens to you.

UPDATE (7/2/2015):  I had this happen again today and was referencing this article when I realized I had left something out.  After you have reloaded the counters you should restart the SQL instance.  Sometimes you might also need to restart the Remote Registry service.