Ryan Adams Blog

SQL, Active Directory, Scripting

Dallas TechFest was a two-day event full of great content!  I’m a SQL guy and most of the conferences I go to are SQL centric.  This event, however, had something for everyone, including SQL, .Net, SharePoint, Security, Kinect, and even Windows Phone.  Overall, everything seemed to go very well.

I delivered two sessions at the event.  One of the organizers emailed all the speakers asking for feedback to help them improve things in the future.  The things I thought they did best were the signage, the maps of the venue building, the directions on how to get there, and lunch.

What I am going to do here is list the things that other conferences had that Dallas TechFest was missing.  This is not a list of complaints, but a list of possible improvement ideas.  Also keep in mind that Dallas TechFest was a paid event and the other conferences I am comparing it to (like SQLSaturday events) are free, so we are not comparing apples to apples, and of course it is not a competition.  Here is my list of things commonly found at SQLSaturday and other conferences that are possible opportunities for improvement:

  • There was nothing provided for breakfast and no coffee.
  • If you wanted a drink with your lunch, you had to walk halfway around the building.
  • The rooms were spread out across the building, which did not lend itself well to networking.
  • Lunch was served out of the speaker room, which caused confusion.  Lunch was only supposed to be picked up from there and eaten in one of the session rooms, but everyone thought they could eat in the speaker room and the organizers had to keep announcing the rules.
  • There were a lot of complaints about the photographers being intrusive, though I personally only found one of the two to be an issue.
  • There were no room monitors to help speakers with setup.
  • There were no surveys for speaker feedback.
  • There were no surveys for event feedback.
  • Volunteers and organizers did not have event shirts, so they were hard to find when you needed directions.

Again, this was a great event with wonderful speakers and content.  Dallas is not the only community TechFest around, so make sure to search the web for one in your area.  I know there are also a Tulsa TechFest and a Houston TechFest.  Unfortunately I won’t be at those this year since they conflict with the SQLPASS Summit.

The absolute best thing about the conference was a special event held on the first night for community and user group leaders.  It was awesome to share ideas with other leaders and talk about the difficult parts of running user groups.  I won’t get into all the goodness, as you can read what Jay Smith already wrote.

 

Many people already have connections set up in their local SSMS to all the servers in their environment.  We can import those connections into our CMS so we don’t have to manually recreate them all.  We can also export the connections in our CMS to have them available for importing into another CMS, another local SSMS, or simply as an additional backup.  Please note that this should be an additional backup, because you should already be backing up the msdb database on your CMS server.
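That backup note is also a good reminder of where all of this lives.  The CMS groups and registrations are stored in msdb and exposed through a couple of views, so if you ever want a quick look at exactly what your CMS is holding, a query like the sketch below will do it.  The view and column names are how they appear in a SQL Server 2008 R2 msdb, so double check them on your version.

-- Run this on the instance hosting your CMS.
-- Lists every registered server along with the group (folder) it lives in.
SELECT  g.name        AS server_group,
        s.name        AS registered_name,
        s.server_name AS connection_server,
        s.description
FROM    msdb.dbo.sysmanagement_shared_registered_servers AS s
        JOIN msdb.dbo.sysmanagement_shared_server_groups AS g
            ON s.server_group_id = g.server_group_id
ORDER BY g.name, s.name;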

Let’s look at an example.  We are going to export three local connections and import them to the East/Dev group in our CMS.  Here is what our local connection options look like, followed by what we see in SSMS.

Alias               Server/Instance    Authentication Type
Server1             Server1            Windows
Server1\INST01      Server1\INST01     SQL
File2.camelot.com   File2              Windows

CMS Layout

We start by right clicking on Local Server Groups and going to Tasks>Export.  The following dialog allows us to choose which local group we want to export, where to export it to, and whether we want to include passwords.  By default, the option is selected to NOT include passwords, but we are going to uncheck that box so we can see what gets exported from a security perspective.

Export local SSMS Connections

Before we import this into our CMS we want to make sure that no passwords are exported in clear text.  We want to look at this for a connection using Windows Authentication as well as a connection using SQL authentication.  We know from our matrix above that Server1 is using Windows Authentication, so let’s check that out first and see what got exported.

<RegisteredServers:ConnectionStringWithEncryptedPassword type="string">
server=Server1;trusted_connection=true;pooling=false;packet size=4096;multipleactiveresultsets=false
</RegisteredServers:ConnectionStringWithEncryptedPassword>

We can see that the connection string is using the trusted_connection=true option as expected, so no password is included or necessary.  Now we know we don’t have to worry about plain text passwords being included with Windows Authenticated connections.  However, we still need to look at a SQL Authenticated connection.

<RegisteredServers:ConnectionStringWithEncryptedPassword type="string">
server=Server1\INST01;uid=sa; password=AQAAANCMnd8BFdERjHoAwE/Cl+sBAAAAWN50rqJbK0KumkZmeSq0rQQAAAACAAAAADZgAAwAAAABAAAACf0muawUxos7e658B76JBJAAAAAASAAACgAAAAEAAAANgj0Wy9O+YOw2tppBqYAAAAplr80XKV/SuFiRHFwGnEQTno6rDFFAAAAP3b2Ll4WMHyOKzOMuCQqekAivqX;pooling=false;packet size=4096;multipleactiveresultsets=false
</RegisteredServers:ConnectionStringWithEncryptedPassword>

Here we can see that the username is displayed in plain text, but the password is encrypted.  If displaying the username is a security or auditing concern for your company then you need to be aware of this.  However, we can rest easier knowing that the password is not exposed in plain text.

NOTE: Remember that CMS does not allow SQL Authenticated connections, so pay attention to what happens when you import it.  You can read more about that in my post on Disadvantages of using CMS.

Switching back into SSMS, we can import these connections to our CMS by right clicking the folder we want to put them in and going to Tasks>Import.  The dialog we receive asks for the file to import and the folder we want them imported to.  The folder to import to will default to the folder you right clicked on in the previous step, but you can change it here if you wish.

CMS Import Wizard

We have now successfully imported our local connections into our CMS.  Here is what our CMS looks like now.

View of CMS after import

There is only one thing left to check.  The connection for Server1\INST01 was using SQL Authentication in our local SSMS and that is not allowed in our CMS.  Here is what happens when you import a connection that uses a SQL Authenticated login.

View of imported SQL Auth connection

You can still see the SQL Login ID that was specified for the connection, but Windows Authentication is now selected and greyed out.  SQL Authenticated logins automatically get switched to Windows Authenticated logins upon import.

T-SQL Wednesday

This month’s T-SQL Tuesday has been moved to Wednesday.  The topic this month is “crap code”.

This is a great topic to showcase what not to do and see some actual bad code.  After all, we tend to learn best from our mistakes.  However, I’m not going to show bad code.  I want to talk about code that may or may not be bad, but becomes bad by virtue of ignoring outside factors.  What I am talking about is being a well-rounded DBA and making sure you take off the blinders and look at things from a 1,000-foot view.  Yeah, it’s time to put on the big boy pants!

Let me explain what I mean with a single example that has two repercussions.  Let’s say you’ve written this beautiful piece of T-SQL and followed all the general rules.  You didn’t use functions that cause table scans, you didn’t use a "SELECT *", and you have a granular "WHERE" clause.  You’re proud of the code and everything seems fine, until you run it and it returns results in an unacceptable amount of time.

Why is it slow?  I mean, this is a beautiful and fantastic piece of code you wrote!  Well, you didn’t think about how the table was indexed, and although you had a granular "WHERE" clause, the table has millions of rows and the column in your clause doesn’t even have an index on it.  Yeah, that just happened.  You were so focused on the query that you forgot to think about the outside factors, your indexes in this case, that had a very profound effect on your query.  We have to remember to take a step back and think about the outside factors that can affect our code.  In this case the code was perfect and all you needed was a proper index.
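To make that concrete, here is a trivial sketch of the situation.  The table and column names are made up purely for illustration.

-- A perfectly reasonable looking query...
SELECT OrderID, CustomerID, OrderTotal
FROM dbo.Orders
WHERE OrderDate >= '2011-01-01'
  AND OrderDate <  '2011-02-01';

-- ...that still has to scan millions of rows, because nothing indexes OrderDate.
-- The fix is not in the query at all:
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
    ON dbo.Orders (OrderDate)
    INCLUDE (CustomerID, OrderTotal);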

The other repercussion of this example is to remember that it is not a “set it and forget it” kind of thing.  Let’s say you added the index and the query is now both genius and lightning fast.  That’s great for now, but what about six months or a year down the road?  Maybe another administrator made some indexing changes, or maybe you changed the index to satisfy another query.  Now the query performs badly, so we need to remember to periodically check our systems to make sure things have not changed.  We also need to be sure that the changes we make don’t adversely affect other operations on the system.
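One low-effort way to do that periodic check is to peek at the missing index DMVs from time to time; they will happily tell on you if an index a query depended on has quietly disappeared.  Here is a rough sketch; the DMVs are standard, but treat the TOP and the ordering as arbitrary choices.

-- Highest impact missing indexes the optimizer has noticed since the last restart.
SELECT TOP (20)
        d.statement AS table_name,
        d.equality_columns,
        d.inequality_columns,
        d.included_columns,
        gs.user_seeks,
        gs.avg_total_user_cost,
        gs.avg_user_impact
FROM    sys.dm_db_missing_index_details AS d
        JOIN sys.dm_db_missing_index_groups AS g
            ON d.index_handle = g.index_handle
        JOIN sys.dm_db_missing_index_group_stats AS gs
            ON g.index_group_handle = gs.group_handle
ORDER BY gs.user_seeks * gs.avg_total_user_cost * gs.avg_user_impact DESC;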

If you need to set up a Central Management Server or just want to check it out, then here is how to do it.

Open SQL Server Management Studio and connect to the SQL instance you want to host your CMS.  By default SSMS does not show you the Registered Servers pane.  If you do not see a tab for the Registered Servers pane on the bottom left next to the Object Explorer pane, go up to the View menu and select Registered Servers.  To enable CMS, simply right click on the Central Management Servers node in the hierarchy tree and select Register Central Management Server.  A connection dialog will open where we can put in the connection details for our CMS.  For our example, we’ll use the server File2 for our CMS.

 

Your CMS server is now ready to configure with connections.  You will want to first decide how to lay out the server group folders according to your business.  Once you have done that, you can right click on your server and select “New Server Group” to begin setting up your folder structure.  Once you have your structure built out, right click the appropriate folder and select “New Server Registration”.  Here is what the registration details will look like:

New Server Registration details
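Under the covers, SSMS stores these groups and registrations in msdb, so once your structure is built out you can sanity check it with a quick query.  The view and column names below are how they appear in a SQL Server 2008 R2 msdb, and I’m assuming the root group has a NULL parent, so verify both on your instance before leaning on this sketch.

-- Walk the CMS group tree and list each registered server under its folder.
WITH cms_groups AS
(
    SELECT server_group_id, parent_id, name,
           CAST(name AS varchar(500)) AS group_path
    FROM   msdb.dbo.sysmanagement_shared_server_groups
    WHERE  parent_id IS NULL
    UNION ALL
    SELECT g.server_group_id, g.parent_id, g.name,
           CAST(p.group_path + '\' + g.name AS varchar(500))
    FROM   msdb.dbo.sysmanagement_shared_server_groups AS g
           JOIN cms_groups AS p ON g.parent_id = p.server_group_id
)
SELECT cg.group_path, s.name AS registered_server
FROM   cms_groups AS cg
       LEFT JOIN msdb.dbo.sysmanagement_shared_registered_servers AS s
           ON s.server_group_id = cg.server_group_id
ORDER BY cg.group_path, s.name;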

I will be presenting two sessions at Dallas TechFest!  I am honored and excited to be speaking.  Dallas TechFest is a premier technology conference that includes many technology areas like .Net, Java, Windows Phone, and of course SQL Server.  This year the conference has grown and expanded from one day to two days, and is being held at the University of Texas at Dallas on August 12th and 13th.

If you are looking to cram your head full of technology greatness from some of the best speakers around, then this is the conference for you.  The conference does have a cost, but it’s only $100 for two days of networking and the best in technology information.  That is a great value for two days of training, but let’s make it even better by giving you half off the price when you use the code “TwoDays”.  That is two days of training and lunch for only $50!  Ooooh, did I forget to mention that lunch is included?  You’re welcome.

Check out the Dallas TechFest website, and you can also Register Here.  Make sure to check out my sessions; here is what I will be presenting:

Manage your shop with CMS and Policy Based Management

Mirroring: The Bare Necessities

See you there!

 

There are two drawbacks to how CMS works.  The first one I have already written about, so I won’t cover that in this post.  You can read about it in the following post:

How to add your CMS server to a group

The second one can be a pain in large companies with multiple environments, but we will be able to see that there is some security merit behind this restriction.

All registered server connections in the CMS can only use Windows authentication.  This means you can only add servers to your CMS that are in the same domain or a trusted domain.  You will not be able to add standalone servers that are not in a domain.  We generally see this in a DMZ scenario where the SQL Server hosts data for a public facing web server.  Along those lines, we typically see that test and development environments are hosted in separate untrusted domains.  You will not be able to add these servers either.  In these scenarios, you will need a separate CMS in each untrusted domain.

The inability to use SQL authentication can seem like quite a setback in large companies with multiple environments.  Although this restriction can be a hindrance in many environments, you have to think about the security risks if it were allowed.  Allowing SQL authentication would mean having connections with stored usernames and passwords.  Anyone allowed to connect to the CMS could use that connection to access a server with another set of credentials.  This would make it impossible to have user and group based access controls to your data.  You would also not be able to audit or track access to your server and identify an offending user.

The most obvious advantage of a CMS is that it holds an inventory of all the SQL servers in our environment.  We can now connect to a single place and get a quick view of every server in our environment without having to remember each server and instance name.

We also have the flexibility to arrange our server connections in folders.  Doing so allows us to get an even better picture of our environment at a glance.  We can arrange our connections in any manner we choose, which means you can match your folder structure to your business support model.  Common examples are to organize the folders by server function, geographic location, SQL Server version, or any combination thereof.

Let’s say our company has its support model broken up into regions, so an administrator who supports the West region can easily discover and manage the servers he supports.  Our business model also dictates that every production environment should have both a test and a development environment.  That covers our business model, but as the administrator you also want to separate and identify the different versions of SQL Server.  This helps avoid running a query that may be supported on one version but not another.  Here is what that implementation might look like:

CMS Design Example

Beyond housing a server inventory and being able to arrange it according to our business needs, a CMS gives us the power to run a query against multiple servers.  This is the most powerful feature of CMS and the biggest reason to utilize it.  If your manager comes up to your desk and says that the company has acquired a budget to upgrade your SQL Servers and he has to submit the request within two hours, how long would it take you to find and evaluate the version of every SQL Server in your environment?  What if a new cumulative update patch comes out that fixes a security issue and you need to know which servers need the patch?  Using our example, we can find out the version of all SQL Servers in the West region by right clicking the West folder and selecting “New Query”.  In the new query window we type in “SELECT @@VERSION” and this is what we get.

Result of CMS Multi-query

That was fast and easy, and we can instantly report to our manager that we have 3 instances of SQL Server running SQL 2008 RTM SP2.  We can also see that one of them is running 64bit and the other two are running 32bit.

There are several other things to make a note of here.  First is that there is an added column returned named “Server Name”.  CMS adds that column to every multi-server query so you know which result came from which server.  You will also notice in the lower right corner that the name of the group (West) we queried is displayed to show what the query was executed against in our CMS.

The last thing to note is what you will see in the query window prior to executing your query.  The status bar at the bottom will be a pink color as opposed to the usual yellow color.  This lets you know that the query window is attached to multiple connections.  You will also notice that it says “Connected. (3/3)”.  This lets you know how many connections any queries in this window will be executed against.

CMS Connected Servers
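If you need more than the raw @@VERSION string, the multi-server query window is just as happy to run something more structured.  For example, SERVERPROPERTY() breaks the version information into separate columns, which makes the results much easier to hand to your manager.

-- Run this against the West folder (or any CMS group) just like the @@VERSION example.
SELECT  SERVERPROPERTY('ProductVersion') AS product_version,
        SERVERPROPERTY('ProductLevel')   AS service_pack,
        SERVERPROPERTY('Edition')        AS edition;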

The last advantage of CMS comes when using it in conjunction with Policy Based Management.  It allows you to evaluate policies against multiple SQL instances simultaneously, much like using the multi-server query feature.  It’s as simple as right clicking on a server connection or folder and selecting “Evaluate Policies”.  PBM and CMS complement each other very well.

I recently had the need to move the MSDTC for a cluster to a new SAN drive.  It’s quite a simple thing to do, but you can’t simply redirect the existing MSDTC resource to a new drive.  Let’s just walk through the basic steps.

The first thing you need to do is get the new drive added to the cluster and put into the proper group.  If the drive is being presented from a SAN and you have multiple paths to that SAN, then you will need to set up whatever multi-pathing software you have.  If you’re not sure, then an easy test is to open Windows Disk Management and see how many new drives are there.  If there are more drives than you are supposed to have, then you need to set up your multi-pathing.  Once you get that set up and reboot, you should see the correct number of drives.  If you are on a cluster, you need to do this on each node in the cluster.

Now that your drives are correctly presented to the OS, it’s time to bring them online and format them.  You can do this from the node in the cluster that your services are currently running on, and you only have to do this on that one node.  Right click the drive in Windows Disk Management and select Online.  Now that the drive is online, you need to right click it again and select Initialize.  Finally, you need to format it by right clicking the drive and selecting New Simple Volume.  Walk through the wizard, giving it a drive letter and formatting it with a label.  Don’t worry about what drive letter you assign, as we can change that later.

You are now ready to add the drive to the cluster.  Open the Cluster Manager MMC, expand the cluster, and select storage in the left hand pane.  In the right hand pane you will see all of your cluster groups with their assigned storage below the group.  Select Add Disk in the actions pane and select the disk to add.  You will now see a group called Available Storage where you will find your new drive.  Right click the drive to which you plan to move the MSDTC, go to more options, and select Move this resource to another service or application.  You want to select your MSDTC group.

At this point everything is now set up and ready for the move.  Unfortunately you can’t simply change the MSDTC resource object to point to a new drive.  You have to delete the resource and create a new one.  In Cluster Manager, expand your MSDTC group, right click the MSDTC resource and select Delete.  In the actions pane on the right, click Add Resource and under More Resources select Add Distributed Transaction Coordinator.  Do not bring the resource online yet, as we need to define its dependencies.  Double click the resource and go to the Dependencies tab.  Here you need to make it dependent on both the network name and the new cluster disk you just added.  Click OK and bring the resource online.  You have now successfully moved your cluster MSDTC to another drive, but let’s verify it.  Open Windows Explorer and view the new drive.  You should now see a folder called MSDTC, which verifies the move.

The last thing you want to do is some cleanup.  Go back into Cluster Manager and select the Storage node in the left hand pane.  Under your MSDTC group, right click the old MSDTC disk and select Remove.  This will remove the old storage drive from the cluster group and put it back into the Available Storage group.

If you want to reuse the old drive letter (Typically M:), then right click the old drive under the Available Storage group and select Change Drive Letter.  Now you can right click the old drive and select delete, which will remove it from the Available Storage group in Cluster Manager.  In order to assign your new drive to the previously used drive letter, simply right click it in Cluster Manager and select Change Drive Letter.  The old drive letter will now be available, so simply select it.  After changing the drive letter you now need to restart the MSDTC group.

More Information – If you are moving your cluster MSDTC drive you might want to check out the following posts:

How to move cluster Quorum drive

How to move the Master database

How to move the Model database

How to move the MSDB database

How to move the TempDB database

SQLAgent Error 435

I recently had the need to move the quorum for a cluster to a new SAN drive.  It’s quite a simple thing to do, but some of the options and locations of the cluster resources themselves are a little hidden in Windows 2008.  Let’s just walk through the basic steps.

The first thing you need to do is get the new drive added to the cluster and put into the proper group.  If the drive is being presented from a SAN and you have multiple paths to that SAN, then you will need to set up whatever multi-pathing software you have.  If you’re not sure, then an easy test is to open Windows Disk Management and see how many new drives are there.  If there are more drives than you are supposed to have, then you need to set up your multi-pathing.  Once you get that set up and reboot, you should see the correct number of drives.  If you are on a cluster, you need to do this on each node in the cluster.

Now that your drives are correctly presented to the OS, it’s time to bring them online and format them.  You can do this from the node in the cluster that your services are currently running on, and you only have to do this on that one node.  Right click the drive in Windows Disk Management and select Online.  Now that the drive is online, you need to right click it again and select Initialize.  Finally, you need to format it by right clicking the drive and selecting New Simple Volume.  Walk through the wizard, giving it a drive letter and formatting it with a label.  Don’t worry about what drive letter you assign, as we can change that later.

You are now ready to add the drive to the cluster.  Open the Cluster Manager MMC, expand the cluster, and select storage in the left hand pane.  In the right hand pane you will see all of your cluster groups with their assigned storage below the group.  Select Add Disk in the actions pane and select the disk to add.  You will now see a group called Available Storage where you will find your new drive.

At this point everything is now set up and ready for the move.  In the left hand pane, click the cluster name at the top of the list.  In the actions pane go to More Options and select Configure Cluster Quorum Settings.  Follow through the wizard; the only thing you need to change is on the Configure Storage Witness page.  On this page, simply select the check box next to the new storage disk.  You have now successfully moved your cluster quorum to another drive, but let’s verify it.  Open Windows Explorer and view the new drive.  You should now see a folder called Cluster, which verifies the move.  Just to be clear, what you have really done is not move the quorum but create a new one.

If you want to reuse the old drive letter (Typically Q:), then right click the old drive under the Available Storage group and select Change Drive Letter.  Now you can right click the old drive and select delete, which will remove it from the Available Storage group in Cluster Manager.  In order to assign your new drive to the previously used drive letter, simply right click it in Cluster Manager and select Change Drive Letter.  The old drive letter will now be available, so simply select it.  After changing the drive letter you now need to restart the cluster service or run the following command from an elevated command prompt to move it to another node.

cluster group "cluster group" /move:nodename

More Information – If you are moving your cluster quorum drive you might want to check out the following posts:

How to move cluster MSDTC

How to move the Master database

How to move the Model database

How to move the MSDB database

How to move the TempDB database

SQLAgent Error 435

Do you get error 435 in the Application event log and your SQLAgent fails to start?  The reason you’re getting this error is that the SQLAgent error log and working directory file locations are not available.

I recently ran into this after moving the user and system databases to a different SAN drive.  Even though you tell SQL Server where you moved msdb, you still have not told it the new location for the SQL Agent error log and working directory.  SQL Server stores these locations in the registry, so you will have to update them.  Here are the keys you need to update.

*** NOTE *** If you are on a cluster, make sure you update the registry on every node in the cluster.

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\SQLServerAgent\ErrorLogFile

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\SQLServerAgent\WorkingDirectory
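If you prefer to make the change from a query window instead of regedit, the undocumented (but long-lived) xp_instance_regwrite extended procedure will write to the instance-specific key for you.  The file paths below are just placeholders for wherever you moved the Agent files, so substitute your own, and as with anything undocumented, test it somewhere safe first.  The note above about doing this on every cluster node still applies.

-- Point the SQL Agent error log and working directory at the new drive.
-- xp_instance_regwrite maps 'SOFTWARE\Microsoft\MSSQLServer' to the
-- instance-specific key (MSSQL10_50.MSSQLSERVER in this case) for you.
EXEC master.dbo.xp_instance_regwrite
     N'HKEY_LOCAL_MACHINE',
     N'SOFTWARE\Microsoft\MSSQLServer\SQLServerAgent',
     N'ErrorLogFile',
     N'REG_SZ',
     N'S:\MSSQL\Log\SQLAGENT.OUT';   -- placeholder path

EXEC master.dbo.xp_instance_regwrite
     N'HKEY_LOCAL_MACHINE',
     N'SOFTWARE\Microsoft\MSSQLServer\SQLServerAgent',
     N'WorkingDirectory',
     N'REG_SZ',
     N'S:\MSSQL\JOBS';               -- placeholder path

-- Restart the SQL Server Agent service afterwards so it picks up the new locations.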

More Information – If you have this problem then you might want to check out the following posts:

How to move the Master database

How to move the Model database

How to move the MSDB database

How to move the TempDB database

How to move cluster Quorum drive

How to move cluster MSDTC drive