Ryan Adams Blog

SQL, Active Directory, Scripting

I ran into an interesting issue while installing SQL 2012 SP1 in a failover cluster configuration and utilizing the local TempDB feature.  If you are unfamiliar with the new feature in SQL 2012 that allows you to store TempDB on a local drive in a failover cluster, you can read more here.  Here is a screen shot showing where during the install you can configure your data directories.  You can click on the image for a larger view.


You can see that I put TempDB on the local C:\ drive.  Now let me clarify that this was on a test virtual machine and putting TempDB on the operating system drive is a BAD practice.  In the words of The Offspring, “Don’t do this at home or at all!”  When you click next on that dialog you get a popup that there are warnings and to look below for more information.  You’ll notice a new warning area shows up at the bottom of that dialog.  Here is a screen shot, but the message gets cut off and you have to hover over it to see the whole thing.  It says, “You have specified a local directory C:\MSSQL11.INST01\MSSQL\Data as the tempdb data or log directory for a SQL Server cluster.  To avoid possible failures during a failover, you must make sure that the same directory exists on each cluster node and grant read/write permission to the SQL Server service.”


If that directory does not currently exist, SQL Server will both create it and give the SQL Server service account permission to it.  At this point we don’t have to worry about anything as the install will take care of it.  What that message really means is that you have to make sure that it exists on each cluster node except this one.

This is also where the story takes an even more interesting twist.  After seeing that warning, I would fully expect to see it again when adding a node.  However, I don’t get a warning at all.  Maybe it created the directory for me since the first node did?  A lot of people might not think twice and assume everything is good since there was no warning, or they simply might not remember once they get to the other node.  I’m not one of those people, so I’ll go look to see if the directory exists.  What happens if you don’t and try to fail over to that node?

It will fail over again to the original node or another node.  You should see 5 events in the event log.  Here’s what I see on my test system.

CREATE FILE encountered operating system error 3(The system cannot find the path specified.) while attempting to open or create the physical file ‘C:\MSSQL11.INST01\MSSQL\Data\tempdb.mdf’.

FCB::Open failed: Could not open file C:\MSSQL11.INST01\MSSQL\Data\tempdb.mdf for file number 1.  OS error: 3(The system cannot find the path specified.).

Unable to open the physical file “C:\MSSQL11.INST01\MSSQL\Data\tempdb.mdf”. Operating system error 3: “3(The system cannot find the path specified.)”.

CREATE DATABASE failed. Some file names listed could not be created. Check related errors.

Could not create tempdb. You may not have enough disk space available. Free additional disk space by deleting other files on the tempdb drive and then restart SQL Server. Check for additional errors in the event log that may indicate why the tempdb files could not be initialized.

That’s pretty much what I would expect to see.  That last error is misleading of course, because we know the drive is not full; it’s because the directory does not exist.  At this point the fix is an easy one.  We just need to go to that node, create the directory C:\MSSQL11.INST01\MSSQL\Data, and make sure that the account running the SQL Server service has read/write permission on it.
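If you want to catch this before a failover does, you can check each node for the directory over its administrative share.  Here is a minimal sketch of that idea; the node names and the C$ share path are assumptions you would swap for your own environment.

```python
import os


def missing_tempdb_dirs(nodes, data_dir, share_fmt=r"\\{node}\C$"):
    r"""Return the cluster nodes that are missing the local TempDB directory.

    Looks for data_dir (e.g. r"MSSQL11.INST01\MSSQL\Data") under each
    node's administrative share.  Setup creates the directory on the
    first node only, so any node this reports needs the directory
    created by hand, with read/write permission granted to the SQL
    Server service account.
    """
    missing = []
    for node in nodes:
        path = os.path.join(share_fmt.format(node=node), data_dir)
        if not os.path.isdir(path):
            missing.append(node)
    return missing
```

Any node it returns is one that would throw the operating system error 3 events above the moment the instance tried to come online there.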

I’m almost positive that adding a node gave a warning in a previous version like the RTM or a CTP, but I would have to go try that out to be sure.  On one hand Microsoft did warn you on the first node, but I still think it would be nice to have it show up when adding nodes as well.  Besides, you might be adding a node 6 months to a year later and not remember.

Below is a link to a white paper I wrote on Central Management Server.  This white paper is an excellent guide for those who have never used it before.  It walks you through how to use CMS from beginning to end.  Here is what you can expect to learn:

  • Where to find CMS in Management Studio
  • How to enable it
  • How to configure security
  • How to add connection objects
  • How to query multiple servers simultaneously
  • How to export local server connections and import them into CMS


Central Management Server White Paper

Below is a link to a white paper I wrote on Policy Based Management.  It serves as a guide to not only explain the technology but also walk you through setting it up and designing some policies.  You can expect to learn the following:

  • How to enable PBM
  • How to create a policy
  • The different ways to evaluate a policy
  • Where to find and import Microsoft best practice policies
  • How to set up alerts to be notified of policy violations
  • A list of common use cases


Policy Based Management White Paper



This post has been sitting in the draft folder for way too long, March of last year to be precise.  So although this is no longer new information, it has not been talked about all that much.  I’m talking about the new feature in SQL Server 2012 that allows you to put TempDB on local storage in a failover cluster.  That’s right.  You no longer have to house TempDB on your SAN, and why should you?

TempDB gets destroyed and re-created with each restart of the SQL Server Service so there is no need to have that happen on shared storage.  When we fail over from one node to another we don’t need the data in TempDB to still be sitting there.  So you can stop yelling at your SAN admin to “Give me more power!”  He doesn’t like you calling him Scotty anyway.

Scotty Power

What’s so great about putting TempDB on local storage?  One acronym: SSDs.  Now we can put TempDB on fast local storage like SSDs, or even a RAID 10 array that may well be faster than your corporate SAN.  Of course this might not even be a concern for you if your application is not a heavy user of TempDB, but if you have something that’s a big hitter for TempDB this can yield some significant gains in IO.

Let me also say that this is not a get-out-of-jail-free card or a cure if you are experiencing contention in TempDB.  We’re good DBAs and we’ll make sure everything is tuned and configured properly before we go throw hardware at problems, right?  That means we need to discover whether we are even experiencing TempDB contention, and if we are, there are some other configurations we need to look at first.  For instance, multiple TempDB data files that are equally sized.  That’s a whole other post or two, but Robert Davis (Twitter|Blog) has done a much better job in his white paper than I ever could.  Here’s a link:


I ran across an issue that I haven’t seen since SQL 2000 and had almost forgotten about.  Let’s do a quick review of ports in SQL Server.  A default instance of SQL Server uses port 1433.  That’s just the default and you can change it to whatever you want.  Bear in mind that if you change it you’ll have to define that port in your connection strings or enable the SQL Browser Service.

Speaking of the SQL Browser Service, you’ll need that for a named instance of SQL Server for the exact same reason.  A named instance of SQL Server uses a dynamic port that can change with every restart of the SQL Server service.  Take a look at the following screen shot of a named instance.

Ports in Configuration Manager

We are interested in the very last section, which is the “IPAll” section.  You can see that “TCP Dynamic Ports” is set to 49402.  That is what SQL Server has dynamically chosen for its port number.  For a normal named instance we’ll see a port there, but the “TCP Port” setting will be blank.  A default instance is the other way around, with a “TCP Port” defined but “TCP Dynamic Ports” blank.

So what happens when you have a named instance using dynamic ports and you define a static port?  Well, the title of this post already gave it away, but whatever you put into the “TCP Port” setting will override anything dynamic.  I personally think it would make more sense to only be able to define one of the settings at a time.  Maybe Microsoft will change that one day so that one setting is grayed out until the value is removed from the other.
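The precedence rule is simple enough to pin down in a few lines.  This little sketch just models the behavior described above; the parameter names mirror the two Configuration Manager fields.

```python
def effective_port(tcp_port, tcp_dynamic_ports):
    """Model which port a SQL Server instance ends up listening on.

    tcp_port          -- the "TCP Port" setting (static); "" if blank
    tcp_dynamic_ports -- the "TCP Dynamic Ports" setting; "" if blank

    A static "TCP Port" value always overrides the dynamic one; the
    dynamic port only applies when no static port is defined.
    """
    if tcp_port:
        return int(tcp_port)
    if tcp_dynamic_ports:
        return int(tcp_dynamic_ports)
    return None


# Named instance with only a dynamic port:
effective_port("", "49402")      # -> 49402
# Same instance after someone defines a static port:
effective_port("1433", "49402")  # -> 1433, the dynamic value is ignored
```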

I’ve had several instances over the last few months where I had to reinstall the SQL Server Performance Counters.  It sounds odd to have them magically disappear, but I’ve found a 3rd party monitoring product to be the culprit.

First we need to unload the counters just to make sure everything is removed and we are starting from a clean slate.  It’s better to be safe than sorry.  Open a command prompt as an Administrator and choose one of the following:

Default Instance

unlodctr MSSQLSERVER

Named Instance

unlodctr MSSQL$MyInstanceName

Now we need to re-load the counters.  They are stored in an INI file.  The file is located in the binn directory of the installation folder.  You can get the name of the file from the following registry key:

Default Instance

HKLM\SYSTEM\CurrentControlSet\Services\MSSQLSERVER\Performance

Named Instance

HKLM\SYSTEM\CurrentControlSet\Services\MSSQL$MyInstanceName\Performance
Now that we have the file name we just need to run the load counter command from an elevated command prompt.  Here is what the command would look like for a default instance of SQL Server 2012.

lodctr "C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\BINN\perf-MSSQLSERVERsqlctr.ini"
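If you end up scripting this across several servers, the command line can be assembled from the instance name.  The filename pattern below is an assumption based on the default-instance example above; verify it against the file actually sitting in your instance’s Binn directory before relying on it.

```python
def lodctr_command(instance="MSSQLSERVER",
                   sql_dir=r"C:\Program Files\Microsoft SQL Server",
                   version="MSSQL11"):
    """Build the lodctr command line for reloading the counters.

    The INI filename pattern (perf-<instance>sqlctr.ini) matches the
    default-instance example in this post; confirm it against the file
    present in your instance's Binn directory.
    """
    ini = rf"{sql_dir}\{version}.{instance}\MSSQL\Binn\perf-{instance}sqlctr.ini"
    return f'lodctr "{ini}"'
```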

Hopefully this helps if it ever happens to you.

UPDATE (7/2/2015):  I had this happen again today and was referencing this article when I realized I left something out.  After you have reloaded the counters you should restart the SQL instance.  Sometimes you might also need to restart the Remote Registry service.

A fellow community member sent me an email saying he was having trouble authenticating with Kerberos.  Here is the error he was receiving:

SSPI handshake failed with error code 0x8009030c, state 14 while establishing a connection with integrated security; the connection has been closed. Reason: AcceptSecurityContext failed. The Windows error code indicates the cause of failure.  [CLIENT: <named pipe>].

You’ll notice that he was using Named Pipes, and a lot of folks might think that is the reason for the error, assuming SQL Server doesn’t support Kerberos over Named Pipes.  However, that support is version dependent: Kerberos over Named Pipes has been supported since SQL Server 2008, and this person was running SQL Server 2008 R2.  In this instance, that was not the issue.

I walked him through some troubleshooting scenarios to narrow down the cause.  I had him verify his SPNs to make sure they were correct, were located on the correct object in Active Directory, and he had SPNs for both the FQDN and NetBIOS names.  That all checked out so I had him verify that port 88 (Kerberos) was open between the client and server.  That also checked out fine.  I asked him this one last thing:
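For reference, the SPNs being verified follow the MSSQLSvc format, registered for both the FQDN and the NetBIOS name.  Here is a small sketch of what “correct” looks like; the host names are made up.

```python
def expected_spns(fqdn, netbios, port=1433):
    """Return the SPNs a SQL Server instance should have registered:
    one MSSQLSvc entry for the FQDN and one for the NetBIOS name.
    For a named instance, the instance name is typically registered in
    place of (or alongside) the port number.
    """
    return [f"MSSQLSvc/{fqdn}:{port}", f"MSSQLSvc/{netbios}:{port}"]


# e.g. expected_spns("sql01.contoso.com", "sql01")
# -> ["MSSQLSvc/sql01.contoso.com:1433", "MSSQLSvc/sql01:1433"]
```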

Also, make sure you are using Windows authentication since Kerberos does not work with SQL authentication.  If you are using Windows authentication then make sure the client and server are in the same domain or a trusted domain.

That last sentence was the ticket.  He replied that his client and server were in different domains that were NOT trusted.  Unfortunately for him, there is nothing he can do short of moving the servers into the same domain or setting up a trust between them.  That’s a core concept for how Kerberos works.  Clients and servers MUST be in the same domain or a trusted domain.

Have you ever had an error when using a SQL Server Proxy Account?  I ran across a misleading error.  Let me qualify that a little.  It wasn’t misleading in the sense of it saying one thing and meaning another.  It was misleading because it’s one of those errors you see for multiple things and I happen to see this one fairly often.  Let’s look at the error first…

Unable to start execution of step1 (reason: Error authenticating proxy SERVER1\Administrator. System error: Logon failure: unknown user name or bad password.).  The step failed.

You can see from the error that it came from a SQLAgent job.  You can also see that the job step was using a proxy account.  The last part about “unknown user name or bad password” is the part I see most often in errors.  Before you read on to the next paragraph, what’s the first thing that comes to mind as to the cause of this error?  No seriously…stop reading…what’s your first instinct?

Did you guess that it was a bad password?  You didn’t stop reading and think did you?  That would be way too obvious and easy!  My guess?  A Kerberos double hop issue.  That did not turn out to be the case.  It’s not the most common error message for a Kerb issue as you’re more likely to see an “Anonymous User” or “SSPI Context” error, but it’s not uncommon.  So what’s the issue?

The issue turned out to be someone configuring the SQLAgent service account to use a UPN or User Principal Name.  As a DBA you may be wondering what that is.  My former life was Active Directory and the simple explanation is that it’s an account name in the format “User@Domain.com”.  All you really need to know is that SQL Server does NOT support the use of UPN names.
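Telling the two formats apart is trivial, which is what makes this one so easy to miss when the UPN is buried in a service configuration.  A quick sketch of the distinction (SQL Server itself does the real validation; the account names are made up):

```python
def account_name_format(name):
    """Classify a Windows account name string.

    UPN form: user@domain.com  (NOT supported for SQL Server services)
    SAM form: DOMAIN\\user     (what Configuration Manager expects)
    """
    if "@" in name:
        return "UPN"
    if "\\" in name:
        return "SAM"
    return "ambiguous"


account_name_format("svc-sql@contoso.com")  # -> "UPN"
account_name_format(r"CONTOSO\svc-sql")     # -> "SAM"
```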

I want to point out two things here.  The first is the user name you see in the error message.  “Server1\Administrator” is called a SAM account name.  If it had been in a UPN format, the issue would have been very obvious.  However, that account is the credential that the proxy account is using.  In fact, the job should have executed under the security context of that account as opposed to the SQLAgent service account.  Since the SQLAgent service was set to run using a UPN name, we failed before we ever got that far.  So how did I figure out the issue?  You’ll recall that I said I thought it was a Kerberos error, so I went to look at the SQL Server service account so I could check the SPNs, and that’s when I saw the bad account name.

The second thing I want to point out is that SQL Server Configuration Manager will NOT allow you to use a UPN.  That means the account was most likely set using Services.msc.  This is just another example of why you should ALWAYS use SQL Server Configuration Manager to make account changes to your SQL services.