Ryan Adams Blog

SQL, Active Directory, Scripting

The Kerberos Configuration Manager is a really handy tool that can make configuring Kerberos a much easier task, but it’s got a nasty little bug.  Configuring Kerberos can be tricky.  In what way, you ask?  Here is a short list of just some of the things you have to consider.

  • What is my SPN supposed to look like?
  • Should I let SQL Server handle registering my SPNs or should I do it manually?
  • Are there special considerations if I’m running a failover cluster?
  • Where does the SPN go?
  • Do I have permissions to add/change/remove SPNs and if not what is the permission I need to request?
  • Do I need delegation enabled and on which account?

That’s the short list, but you don’t have to worry about all that stuff because you have the Kerberos Configuration Manager, right?  Not so fast.

This tool works great for most scenarios, but if your environment has a split DNS, multiple domains, or multiple DNS namespaces, you’d better take a second look at the SPNs it suggests.  Many DBAs are not familiar with these concepts, so let me give a really simple explanation and then we can work through an example.  A split DNS is where hosts are resolvable in more than one namespace.  This is fairly common when companies merge or buy out other companies.

Let’s say the company “Movie Studio A” uses the DNS namespace of movies.com and all hosts are registered in that domain.  When they create their Active Directory they decide to call it MovieStudioA.com.  They join all their servers to the domain and use movies.com as their primary DNS suffix.  Server1 is now resolvable as both server1.movies.com and server1.moviestudioa.com.  The problem here is that if you open a command prompt and ping “Server1”, it will resolve as server1.movies.com because of the DNS suffix.

Kerberos will only work with the Active Directory domain name and NOT any other resolvable DNS namespace.  Unfortunately, the Kerberos Configuration Manager makes SPN suggestions based on how the client machine resolves the server name you input.  What it should do, after resolving and contacting the server, is get the domain the server is joined to and build the FQDN from that, but that is not the case.
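To make the distinction concrete, here is a minimal Python sketch of how SPNs should be derived. This is not the tool’s actual code; the host and domain names are the hypothetical ones from the Movie Studio A example above.

```python
# A minimal sketch (not the Kerberos Configuration Manager's actual code) of
# how SPN suggestions *should* be built: from the Active Directory domain the
# server is joined to, not from whatever FQDN DNS happens to resolve.

def build_sql_spns(resolved_name: str, ad_domain: str, port: int = 1433) -> list[str]:
    """Build the MSSQLSvc SPNs for a SQL Server instance.

    resolved_name may come from any DNS namespace; only the short host name
    is kept, and the FQDN is rebuilt from the AD domain.
    """
    host = resolved_name.split(".")[0].lower()
    fqdn = f"{host}.{ad_domain.lower()}"
    return [f"MSSQLSvc/{fqdn}", f"MSSQLSvc/{fqdn}:{port}"]

# DNS resolved the server as server1.movies.com, but it is joined to the
# MovieStudioA.com Active Directory domain:
print(build_sql_spns("server1.movies.com", "MovieStudioA.com"))
# ['MSSQLSvc/server1.moviestudioa.com', 'MSSQLSvc/server1.moviestudioa.com:1433']
```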

Let’s see what it looks like.  I have a server called Server1 that is joined to the stars.com domain.  The FQDN is Server1.stars.com.  This server is also registered in another DNS namespace called DNSOnlyDomain.com.  Here is what the Kerberos Configuration Manager says the SPNs should be.

Kerberos Configuration Manager with bad SPNs


Those SPNs are not correct because they use the DNS-only domain name instead of the Active Directory domain.  They should reflect the Active Directory domain and look like this.

MSSQLSvc/server1.stars.com

MSSQLSvc/server1.stars.com:1433

If you are using the Kerberos Configuration Manager make sure you know what Active Directory domain your server is joined to so you can identify if the suggested SPNs are correct.

I’m officially on the slate and running for the PASS 2014 Nomination Committee!

Why should you vote for Ryan?

I love the PASS SQL Community and everything it embodies.  I am very involved at the local and regional level.  I know all of the current board members and most of the past board members and those relationships have given me a good view into the PASS organization.  I served on the 2013 NomCom last year so I’m already intimately familiar with the processes.

Why am I qualified?

I have been involved with the community for the past 10 years and I was on the 2013 NomCom last year.  I am in my second term and fourth year serving on the board of directors for the North Texas SQL Server User Group.  I am in my second year as President of the Performance Virtual Chapter and served on the board the year prior.   I have been a regional mentor for the South Central Region for 3 years.  I was on the local core organizing committee for SQLRally Dallas as well as all 5 Dallas SQLSaturdays.   In addition, I have spoken at many SQLSaturdays, SQLRally, SQLCruise, PASS Summit, 24HOP, Dev Connections, local and remote user groups, and virtual chapters.  In the past I have been a session moderator for 24HOP and served on the Summit Program Committee.

Why do I want to serve on the 2014 committee?

I believe the NomCom should have a stronger influence than it does today.  This year’s NomCom will have an extended time frame with the ability to review elections processes.  I served last year and those processes are fresh on my mind.  I believe that they need some changes and my prior experience gives me a perspective on what changes are needed and the overall impact on the PASS organization.  I’m excited to have the opportunity to review those processes for any possible improvements.

PASS is an amazing organization that has provided me and others with both personal and professional opportunities.  Having a part in vetting the future leaders of an organization that provides so many opportunities for so many people would be a tremendous honor.

How do you vote?

There are only 3 spots on the committee and I would very much appreciate your vote to fill one of those spots.  If you were a PASS member by June 1st, 2014 with a COMPLETELY filled out profile then check your email for a link to your ballot.  Feel free to post any questions here and I will be more than happy to answer them.  You can also visit the PASS Elections Site to stay current with updates on this year’s elections or keep an eye on the #passvotes Twitter hashtag.

Voters can cast 3 votes total (1 for each seat), but I want you to think about how many you actually vote for.  It is much more powerful to cast a vote for a single candidate than it is for all 3.  This applies to the board of directors election as well.  Of course I would love for that one vote to be for me!

How can you help?

I’m also going to ask you for a favor in addition to your vote.  I want you to either post here or email me (if you prefer privacy) the questions you would ask PASS board of director candidates during an interview.  The NomCom interviews and ranks each candidate.  The nomination committee’s job is to make sure the PASS board gets a slate of qualified candidates.  In order for the committee to do that, I want to know what matters to you, the member.  The members are PASS.  We want to make sure the candidates are thinking about what matters to you.

I ran into an interesting issue while installing SQL 2012 SP1 in a failover cluster configuration and utilizing the local TempDB feature.  If you are unfamiliar with the new feature in SQL 2012 that allows you to store TempDB on a local drive in a failover cluster, you can read more here.  Here is a screen shot showing where during the install you can configure your data directories.  You can click on the image for a larger view.

TempDBLocalDirectory

You can see that I put TempDB on the local C:\ drive.  Now let me clarify that this was on a test virtual machine and putting TempDB on the operating system drive is a BAD practice.  In the words of The Offspring, “Don’t do this at home or at all!”  When you click next on that dialog you get a popup that there are warnings and to look below for more information.  You’ll notice a new warning area shows up at the bottom of that dialog.  Here is a screen shot, but the message gets cut off and you have to hover over it to see the whole thing.  It says, “You have specified a local directory C:\MSSQL11.INST01\MSSQL\Data as the tempdb data or log directory for a SQL Server cluster.  To avoid possible failures during a failover, you must make sure that the same directory exists on each cluster node and grant read/write permission to the SQL Server service.”

TempDBLocalDirError

If that directory does not currently exist, SQL Server will both create it and give the SQL Server service account permission to it.  At this point we don’t have to worry about anything as the install will take care of it.  What that message really means is that you have to make sure that it exists on each cluster node except this one.

This is also where the story takes an even more interesting twist.  After seeing that warning, I would fully expect to see it again when adding a node.  However, I don’t get a warning at all.  Maybe it created the directory for me since the first node did?  A lot of people might not think twice and will assume everything is good since there was no warning, or they simply might not remember by the time they get to the other node.  I’m not one of those people, so I went and looked to see if the directory existed.  What happens if you don’t, and you try to fail over to that node?

The instance will fail over again, back to the original node or to another node.  You should see 5 events in the event log.  Here’s what I see on my test system.

CREATE FILE encountered operating system error 3(The system cannot find the path specified.) while attempting to open or create the physical file ‘C:\MSSQL11.INST01\MSSQL\Data\tempdb.mdf’.

FCB::Open failed: Could not open file C:\MSSQL11.INST01\MSSQL\Data\tempdb.mdf for file number 1.  OS error: 3(The system cannot find the path specified.).

Unable to open the physical file “C:\MSSQL11.INST01\MSSQL\Data\tempdb.mdf”. Operating system error 3: “3(The system cannot find the path specified.)”.

CREATE DATABASE failed. Some file names listed could not be created. Check related errors.

Could not create tempdb. You may not have enough disk space available. Free additional disk space by deleting other files on the tempdb drive and then restart SQL Server. Check for additional errors in the event log that may indicate why the tempdb files could not be initialized.

That’s pretty much what I would expect to see.  That last error is misleading, of course, because we know the drive is not full; it’s because the directory does not exist.  At this point the fix is an easy one.  We just need to go to that node, create the directory C:\MSSQL11.INST01\MSSQL\Data, and make sure that the account running the SQL Server service has Modify permission on it.
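As a rough sketch of that fix, creating the missing directory on the other node might look like the following.  Note that this only handles the directory itself; granting the service account its NTFS rights still has to be done separately (e.g. with icacls on Windows).

```python
# Rough sketch of the manual fix on each additional cluster node: create the
# TempDB directory if it is missing. This does NOT grant the SQL Server
# service account permissions -- do that separately (e.g. icacls on Windows).
import os

def ensure_tempdb_dir(path: str) -> bool:
    """Create the TempDB directory if missing; return True if it was created."""
    if os.path.isdir(path):
        return False          # already there, nothing to do
    os.makedirs(path)         # create the full path, parent dirs included
    return True

# On the node that was added without any warning:
# ensure_tempdb_dir(r"C:\MSSQL11.INST01\MSSQL\Data")
```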

I’m almost positive that adding a node gave a warning in a previous version like the RTM or a CTP, but I would have to go try that out to be sure.  On one hand Microsoft did warn you on the first node, but I still think it would be nice to have it show up when adding nodes as well.  Besides, you might be adding a node 6 months to a year later and not remember.

Below is a link to a white paper I wrote on Central Management Server.  This white paper is an excellent guide for those who have never used it before.  It walks you through how to use CMS from beginning to end.  Here is what you can expect to learn:

  • Where to find CMS in Management Studio
  • How to enable it
  • How to configure security
  • How to add connection objects
  • How to query multiple servers simultaneously
  • How to export local server connections and import them into CMS


Central Management Server White Paper

Below is a link to a white paper I wrote on Policy Based Management.  It serves as a guide to not only explain the technology but also walk you through setting it up and designing some policies.  You can expect to learn the following:

  • How to enable PBM
  • How to create a policy
  • The different ways to evaluate a policy
  • Where to find and import Microsoft best practice policies
  • How to set up alerts to be notified of policy violations
  • A list of common use cases


Policy Based Management White Paper


This post has been sitting in the draft folder for way too long, March of last year to be precise.  So although this is no longer new information, it has not been talked about all that much.  I’m talking about the new feature in SQL Server 2012 that allows you to put TempDB on local storage in a failover cluster.  That’s right.  You no longer have to house TempDB on your SAN, and why should you?

TempDB gets destroyed and re-created with each restart of the SQL Server Service so there is no need to have that happen on shared storage.  When we fail over from one node to another we don’t need the data in TempDB to still be sitting there.  So you can stop yelling at your SAN admin to “Give me more power!”  He doesn’t like you calling him Scotty anyway.

Scotty Power

What’s so great about putting TempDB on local storage?  One acronym: SSDs.  Now we can put TempDB on local storage that is fast, like SSDs, or even a RAID 10 that may very well be faster than your corporate SAN.  Of course this might not even be a concern for you if your application is not a heavy user of TempDB, but if you have something that’s a big hitter for TempDB, this can yield some significant gains in IO.

Let me also say that this is not a get-out-of-jail-free card or a cure if you are experiencing contention in TempDB.  We’re good DBAs and we’ll make sure everything is tuned and configured properly before we throw hardware at problems, right?  That means we need to discover whether we are even experiencing TempDB contention, and if we are, there are some other configurations we need to look at first.  For instance, multiple TempDB data files that are equally sized.  That’s a whole other post or two, but Robert Davis (Twitter|Blog) has done a much better job in his white paper than I ever could.  Here’s a link:

http://www.sqlsoldier.com/wp/sqlserver/demystifytempdbperformancemanagement

I ran across an issue that I haven’t seen since SQL 2000 and had almost forgotten about.  Let’s do a quick review of ports in SQL Server.  A default instance of SQL Server uses port 1433.  That’s just the default and you can change it to whatever you want.  Bear in mind that if you change it you’ll have to define that port in your connection strings or enable the SQL Browser Service.

Speaking of the SQL Browser Service, you’ll need that for a named instance of SQL Server for the exact same reason.  A named instance of SQL Server uses a dynamic port that can change with every restart of the SQL Server service.  Take a look at the following screen shot of a named instance.

Ports in Configuration Manager

We are interested in the very last section, which is the “IPAll” section.  You can see that “TCP Dynamic Ports” is set to 49402.  That is what SQL Server has dynamically chosen for its port number.  For a normal named instance we’ll see a port there, but the “TCP Port” setting will be blank.  A default instance is the other way around, with a “TCP Port” defined but “TCP Dynamic Ports” blank.

So what happens when you have a named instance using dynamic ports and you define a static port?  Well, the title of this post already gave it away: whatever you put into the “TCP Port” setting will override anything dynamic.  I personally think it would make more sense to only be able to define one of the settings at a time.  Maybe MS will change that one day so that one of them is grayed out until the value is removed from the other setting.
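The override behavior boils down to a simple precedence rule.  Here is an illustrative Python sketch; the function and the blank-vs-set convention mirror the Configuration Manager fields, not any real SQL Server API.

```python
# Illustrative precedence rule for the IPAll settings: a non-blank
# "TCP Port" always wins over "TCP Dynamic Ports".

def effective_port(tcp_port: str, tcp_dynamic_ports: str) -> int:
    """Return the port the instance will actually listen on."""
    if tcp_port.strip():               # static port defined -> it overrides
        return int(tcp_port)
    return int(tcp_dynamic_ports)      # otherwise the dynamic port is used

print(effective_port("", "49402"))      # named instance, dynamic only
print(effective_port("1433", "49402"))  # static value overrides the dynamic one
```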

I’ve had several instances over the last few months where I had to reinstall the SQL Server performance counters.  It sounds odd to have them magically disappear, but I’ve found a third-party monitoring product to be the culprit.

First we need to unload the counters just to make sure everything is removed and we are starting from a clean slate.  It’s better to be safe than sorry.  Open a command prompt as an Administrator and choose one of the following:

Default Instance

unlodctr MSSQLSERVER

Named Instance

unlodctr MSSQL$MyInstanceName

Now we need to re-load the counters.  They are stored in an INI file.  The file is located in the binn directory of the installation folder.  You can get the name of the file from the following registry key:

Default Instance

HKLM\SYSTEM\CurrentControlSet\Services\MSSQLServer\Performance\PerfIniFile

Named Instance

HKLM\SYSTEM\CurrentControlSet\Services\MSSQL$MyInstanceName\Performance\PerfIniFile

Now that we have the file name we just need to run the load counter command from an elevated command prompt.  Here is what the command would look like for a default instance of SQL Server 2012.

lodctr "C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\BINN\perf-MSSQLSERVERsqlctr.ini"

Hopefully this helps if it ever happens to you.

A fellow community member sent me an email saying he was having trouble authenticating with Kerberos.  Here is the error he was receiving:

SSPI handshake failed with error code 0x8009030c, state 14 while establishing a connection with integrated security; the connection has been closed. Reason: AcceptSecurityContext failed. The Windows error code indicates the cause of failure.  [CLIENT: <named pipe>].

You’ll notice that he was using Named Pipes, and a lot of folks might think that is the reason for the error, assuming SQL Server doesn’t support Kerberos over Named Pipes.  However, that’s not necessarily the case; it is version dependent.  Kerberos over Named Pipes has been supported since SQL Server 2008, and this person was running SQL Server 2008 R2, so in this instance that was not the issue.

I walked him through some troubleshooting scenarios to narrow down the cause.  I had him verify his SPNs to make sure they were correct, were located on the correct object in Active Directory, and he had SPNs for both the FQDN and NetBIOS names.  That all checked out so I had him verify that port 88 (Kerberos) was open between the client and server.  That also checked out fine.  I asked him this one last thing:

Also, make sure you are using Windows authentication since Kerberos does not work with SQL authentication.  If you are using Windows authentication then make sure the client and server are in the same domain or a trusted domain.

That last sentence was the ticket.  He replied that his client and server were in different domains that were NOT trusted.  Unfortunately for him, there is nothing he can do short of moving the servers into the same domain or setting up a trust between them.  That’s a core concept for how Kerberos works.  Clients and servers MUST be in the same domain or a trusted domain.
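For the port 88 check in that troubleshooting list, a quick connectivity test can be scripted.  The host name below is a hypothetical domain controller; point it at your own KDC.

```python
# Quick TCP reachability check, usable for the "is port 88 open between the
# client and the KDC?" troubleshooting step. (Kerberos can also use UDP 88,
# which this does not test.)
import socket

def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# can_reach("dc1.stars.com", 88)   # hypothetical domain controller
```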