Ryan Adams Blog

SQL, Active Directory, Scripting

First we need to know the name and file location of the Model database files.  When you change the location, make sure to keep the same name.  Technically we don’t need the current physical path, but it’s just a good idea to have it documented in case things go badly.

SELECT name, physical_name
FROM sys.master_files
WHERE database_id = DB_ID(N'model');
GO

Next we need to tell SQL where we plan to move the files.  Make sure to change the "FILENAME" path and ensure the "NAME" is the same as what you got from the above query.

USE master;
GO
ALTER DATABASE model
MODIFY FILE (NAME = modeldev, FILENAME = 'Y:\MSSQL10.MSSQLSERVER\MSSQL\DATA\model.mdf');
GO
ALTER DATABASE model
MODIFY FILE (NAME = modellog, FILENAME = 'Z:\MSSQL10.MSSQLSERVER\MSSQL\DATA\modellog.ldf');
GO

We need to shut down SQL.  Once SQL is stopped, copy the physical MDF and LDF to the new file system location.
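If you would rather script this step than click through the Services applet and Windows Explorer, here is a minimal sketch from an elevated command prompt.  It assumes a default instance (use MSSQL$InstanceName for a named instance), and the source paths are just examples; copy from whatever location the first query returned.

NET STOP MSSQLSERVER

REM Source paths below are examples; use the paths returned by the first query
COPY "C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\model.mdf" "Y:\MSSQL10.MSSQLSERVER\MSSQL\DATA\model.mdf"
COPY "C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\modellog.ldf" "Z:\MSSQL10.MSSQLSERVER\MSSQL\DATA\modellog.ldf"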

Now we are ready to restart SQL Server (NET START MSSQLSERVER from the same command prompt will do it).  Once SQL is back up you might want to run the first query again just to make sure everything went as planned and the location was updated properly.  Also, don't forget to clean up after yourself and delete the old database files.

More Information – If you are moving your Model database you might want to check out the following posts:

How to move the Master database

How to move the MSDB database

How to move the TempDB database

SQLAgent Error 435

How to move cluster Quorum drive

How to move cluster MSDTC drive

The catch to moving the Master database is that you must also move the Resource database.  Microsoft states that the Resource database must reside in the same location as the Master database.  You can see their instructions HERE on how to move the Master database.

However, on my cluster I did not find the Resource database in the same location as Master.  In fact, I didn't find it there on any of the instances I have running on my laptop either.  I actually found it on the local disk of each node in the cluster.  Here is where I found mine; if you don't find yours there, just do a search of all drives with Windows Explorer.

D:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Binn
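If the Windows Explorer search feels slow, a command prompt run against each drive root does the same job:

DIR /S /B C:\mssqlsystemresource.mdf
DIR /S /B D:\mssqlsystemresource.mdf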

Now that we have the file location for the Resource database files, we need to verify the file location of the Master database as well.  Here is the query we can use to get that:

SELECT physical_name
FROM sys.master_files
WHERE database_id = DB_ID(N'master');
GO

First we will focus on moving the Master database, so we need to tell SQL where we plan to move the master database files.  SQL stores the location of the Master database in the registry, but it’s best to use SQL Server Configuration Manager to make the change.  Open Configuration Manager and select the “SQL Server Services” node on the left.  Next we want to right click and select properties on the SQL Server Service for the instance we are changing.  In the properties dialog box we want to select the Advanced tab and take a look at the Startup Parameters option.

SQL Startup Parameters

By default there are 3 startup parameters. “-d” specifies the location of the Master database data file.  “-l” specifies the location of the Master database log file.  “-e” specifies the location of the SQL Server error log file.  Here is what it looks like on my laptop for the default instance.

-dC:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\master.mdf;-eC:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Log\ERRORLOG;-lC:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\mastlog.ldf

Go ahead and change the location of the Master database data and log file locations and click OK.  You can also change the error log location at the same time if you need to.

We need to shut down SQL.  Once SQL is stopped, copy the physical MDF and LDF to the new file system location.  Remember when I said my Resource database was on a local drive on my cluster nodes, even though the Microsoft article said it HAD to be in the same location as Master?  If that is the case for you, then you can leave it alone and just go ahead and restart SQL Server.  However, if your Resource database was indeed in the same place as Master, then read on to see how to get it moved.

When we start SQL back up we now have to start it with the /f (minimal configuration) and /T3608 (recover only Master) startup options so we can change the location of the Resource database.  We can do that by running a command prompt as an Administrator and running the following command.  Type this command exactly as you see here.

NET START MSSQLSERVER /f /T3608

Next we need to tell SQL where we plan to move the Resource database files.  Make sure to change the "FILENAME" path but leave the "NAME" the same as what you see below.

USE master;
GO
ALTER DATABASE mssqlsystemresource
MODIFY FILE (NAME = data, FILENAME = 'Y:\MSSQL10.MSSQLSERVER\MSSQL\DATA\mssqlsystemresource.mdf');
GO
ALTER DATABASE mssqlsystemresource
MODIFY FILE (NAME = log, FILENAME = 'Z:\MSSQL10.MSSQLSERVER\MSSQL\DATA\mssqlsystemresource.ldf');
GO

Don't shut down SQL yet.  Go ahead and copy the physical MDF and LDF to the new file system location, and then run the following command to put the Resource database back into read-only mode.

ALTER DATABASE mssqlsystemresource SET READ_ONLY;

Now we can shut down and restart SQL Server.  Once SQL is back up you might want to run the first query again just to make sure everything went as planned and the location was updated properly.  Also, don't forget to clean up after yourself and delete the old database files.
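Note that the Resource database does not show up in sys.master_files, so the first query only confirms Master.  If you want a quick sanity check that SQL Server came back up cleanly against the relocated Resource database, these documented server properties will do it:

SELECT SERVERPROPERTY('ResourceVersion') AS ResourceVersion,
       SERVERPROPERTY('ResourceLastUpdateDateTime') AS ResourceLastUpdate;
GO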

More Information – If you are moving your Master database you might want to check out the following posts:

How to move the Model database

How to move the MSDB database

How to move the TempDB database

SQLAgent Error 435

How to move cluster Quorum drive

How to move cluster MSDTC drive

You cannot add a connection to your CMS server, on your CMS server.  Well that is what I thought until SQLSaturday in Houston, where John Sterrett (Blog|Twitter) pointed out that you can use an IP address.  I think Microsoft prevents this primarily because you already have a connection defined for it simply by virtue of it being your CMS.  However, you may have the need to include it in a folder for the purpose of multi-query or PBM policy evaluations.  I decided to do a little more research on the matter to better understand the internals.  First let’s walk through the issue, then figure out how Microsoft is preventing it, and finally look at our workaround options.

The Issue

In the example below you can see that the server named “File2” is our CMS server at the root of the tree, and you can right click on it to perform several CMS actions.

 CMS Registered Servers

If you double click it, a connection to File2 and only to File2 will be automatically opened in your object explorer window.  However, if you right click and select Object Explorer it will open connections to every server in your CMS in the object explorer window.  The same will occur against all servers in your CMS if you select “New Query” or “Evaluate Policies”.  If you want to perform any of these actions against the CMS server alone then you will notice an option in the right click context menu called “Central Management Server Actions”.  Choosing options from there will execute against the CMS alone and not every server connection defined within your CMS.

If your CMS (File2 in our example) is also a production SQL Server in the East region, we might need to add it to that folder.  If we attempt to do that, here is what we get:

 CMS Registration Error

How it is prevented

When you use the GUI to add a server to CMS, it passes the parameters that you define to the following stored procedure:

msdb.dbo.sp_sysmanagement_add_shared_registered_server

If you were to look at that stored procedure, you would see several checks on the server name that is passed to it.  It is designed to keep you from adding a server with the same name as the CMS.  It also will not let you add a server named ".", "localhost", or "(local)".  The check that ensures you do not add a server connection with the same name as the CMS compares the name you passed to @@SERVERNAME.  For a default instance, @@SERVERNAME returns the machine's NetBIOS name, and in our case that is File2.
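You can see both sides of that comparison for yourself.  For a default instance the two values below match, which is exactly why the registration fails:

SELECT @@SERVERNAME AS sql_instance_name,
       SERVERPROPERTY('MachineName') AS netbios_machine_name;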

If you are curious, here is the part of the above stored procedure that performs the server name check.

    IF (@server_name IS NULL) 
    BEGIN 
        RAISERROR(14618, -1, 1, '@server_name') 
        RETURN(1)    
    END 

    set @server_name = LTRIM(@server_name) 
    set @server_name = RTRIM(@server_name) 

    -- Disallow relative names 
    IF ('.' = @server_name) OR 
        (1 = CHARINDEX(N'.', @server_name)) OR 
        (1 = CHARINDEX(N'LOCALHOST', UPPER(@server_name collate SQL_Latin1_General_CP1_CS_AS))) OR 
        (UPPER(@server_name collate SQL_Latin1_General_CP1_CS_AS) = 'LOCALHOST') OR 
        (UPPER(@server_name collate SQL_Latin1_General_CP1_CS_AS) = '(LOCAL)') 
    BEGIN 
        RAISERROR (35011, -1, -1) 
        RETURN (1) 
    END 

    IF (UPPER(@@SERVERNAME collate SQL_Latin1_General_CP1_CS_AS) = UPPER(@server_name collate SQL_Latin1_General_CP1_CS_AS)) 
    BEGIN 
        RAISERROR (35012, -1, -1) 
        RETURN (1) 
    END 

Workaround Options

What does all that mean and how can you circumvent it?  It’s quite simple.  We just need to add the connection with another name that will resolve to our server other than the NetBIOS name.  The easiest method is to add it using the server’s IP address (local loopback address can also be used).  Of course that does not look very pretty in our CMS console, but we can alias it when creating the connection.  Here is how we do that:

 New Server Registration
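If you prefer T-SQL over the GUI, you can call the same stored procedure directly.  This is only a sketch; the parameter list is from memory of the SQL 2008 version of the procedure, and the group name, alias, and IP address are all placeholders for your own values.

DECLARE @group_id INT, @server_id INT;

-- Look up the id of the CMS folder to add the server to ('East Production' is a placeholder)
SELECT @group_id = server_group_id
FROM msdb.dbo.sysmanagement_shared_server_groups
WHERE name = N'East Production';

-- Register the CMS by IP address so the @@SERVERNAME check never matches
EXEC msdb.dbo.sp_sysmanagement_add_shared_registered_server
    @name            = N'File2',          -- friendly alias shown in the CMS tree
    @server_group_id = @group_id,
    @server_name     = N'192.168.1.5',    -- IP address instead of the NetBIOS name
    @description     = N'',
    @server_type     = 0,                 -- 0 = Database Engine
    @server_id       = @server_id OUTPUT;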

We have two other options for alternate names.  As you can see in the above screen shot, we are in a domain named “Camelot”.  This means we can use the FQDN of file2.camelot.com.  The last option we have is to create an alias or CNAME record in DNS.  Here is what it would look like if we added it twice in the East Production group using the IP address for one connection and the FQDN for the other:

 CMS Registered Servers 2

The syspolicy_purge_history SQL Agent job is used by Policy Based Management (PBM) in SQL Server to clear out the policy evaluation history that PBM keeps.

The job consists of the following three steps.

  1. Check whether PBM is enabled and exit if it is not.
  2. Run the msdb.dbo.sp_syspolicy_purge_history system stored procedure to clear the history.
  3. Use PowerShell to clear the system health phantom records.
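If you are curious what those steps look like on your own instance, msdb can show you the command text for each one:

EXEC msdb.dbo.sp_help_jobstep @job_name = N'syspolicy_purge_history';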

If you’re not a PBM user you might not have any idea what this job does (until now).  This job might be failing on you and the error is seen in step 3.  If this describes you then you are also probably running your SQL instance on a cluster, and that’s why we see this job fail.

The job is created by default during the SQL Server install, and step 3 uses the local system name.  That's not a problem on a standalone SQL Server, but it is on a cluster.  When you address a SQL instance running on a cluster, you don't connect using the names of the physical cluster nodes.  You have to use the virtual cluster network name given to that SQL instance on the cluster.

The fix is very simple.  You just need to open the job and edit step 3 to replace the system name with the cluster virtual name.  You don't even need to know PowerShell, so don't panic if you don't.  Simply replace the server name with the virtual cluster network name for your SQL instance.  Here's an example of the PowerShell line.

(Get-Item SQLSERVER:\SQLPolicy\MyServerName\DEFAULT).EraseSystemHealthPhantomRecords()
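If you are running a named instance, both parts of the path change.  SQLVirtualName and InstanceName below are placeholders for your virtual network name and instance name:

(Get-Item SQLSERVER:\SQLPolicy\SQLVirtualName\InstanceName).EraseSystemHealthPhantomRecords()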

T-SQL Tuesday

This month's T-SQL Tuesday is hosted by Allen Kinsel (Blog|Twitter) and covers "disasters and recovery".  My favorite DR solution is mirroring.  Mirroring is easy to set up and fast, if implemented properly.  It's a database-level solution that allows you to keep a remote copy of your database in sync with your principal database.  Everything is kept in sync, transaction by transaction, in near real time.

Here are some tips and tricks for your mirroring environment:

  • If you use mirroring in SQL 2005, I highly suggest you upgrade to 2008.  I have seen several cases where OLTP systems had to use asynchronous mirroring due to load with SQL 2005, but the added log stream compression in 2008 allowed the use of synchronous mode.
  • Did you use the GUI to configure your mirroring solution?  Then you really need to look into the T-SQL way.  There are several options the GUI doesn't show you that you need to know about, like dedicating mirror traffic to a specific NIC (see the endpoint sketch after this list).
  • Have you checked to make sure everything on your principal has also been configured on your mirror for when a failover occurs?  You need to make sure things like the database owner, CLR, linked servers, and Logins are all setup and ready to go on the mirror server.
  • Are you using a witness for automatic failover?  You should double check that your application has the partner setting in its connection string or it will not be able to find the mirror (see the example after this list).
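
Here is what the endpoint piece of that second bullet can look like in T-SQL.  This is only a sketch; the port and IP address are placeholders for the NIC you want to dedicate to mirroring traffic.

CREATE ENDPOINT Mirroring
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022, LISTENER_IP = (10.10.10.5))  -- bind to a specific NIC
    FOR DATABASE_MIRRORING (ROLE = PARTNER);

For the witness bullet, a typical ADO.NET connection string with the partner setting looks something like Data Source=PrincipalServer;Failover Partner=MirrorServer;Initial Catalog=MyDB;Integrated Security=True; where the server and database names are placeholders for your own.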

Last month I flew out to Tucson, AZ to film a few training sessions for SSWUG.ORG.  One of my sessions was on mirroring as part of their SQL Disaster Recovery Expo.  The timing for this event and this month’s T-SQL Tuesday topic is perfect.  The expo starts this Friday June 17th, 2011.  If you are planning a DR solution I highly suggest you check out their expo.

The expo is delivered online and split into two tracks.  The premium track is $39 for SSWUG members and $49 for non-members.  That’s really cheap for an entire day of training from industry leading SQL experts.

They also have a “Best of SSWUG” track that is absolutely free and features content from experts like Kevin Kline (Blog|Twitter).

If you’re considering mirroring in your environment and want to learn more tips and tricks like the ones above, then make sure you register for this expo and watch my session.  I will also be online in the chat and available to answer any questions you may have.

In my session you'll learn how mirroring can fit into an HA/DR solution, and we also discuss the other options that are available.  You can't choose the right solution if you don't know what the other options are, because there is no one-size-fits-all.  In the session I also walk you through how to configure mirroring both through the GUI and T-SQL.  You will not want to miss seeing the differences.  We even take a brief look at how to monitor your mirroring configuration after you get it up and rolling.

I hope to see you there!

REGISTER <<<< REGISTER NOW!

SQL Rally

It’s Official !!

The host for SQLRally 2012 has officially been given to Dallas and the North Texas SQL Server User Group.  NTSSUG is elated to have been chosen as the host for this event.  We have hosted three SQLSaturdays in the last 12 months, so we are prepared to dig in our heels.  Well, this is Texas, so we'll be digging in our spurs.

SQLRally is still in its infancy, but it has some amazing and seasoned event leaders behind it.  That’s why it’s expected that an event this large will need continuous process tuning.  Even getting to this point there have already been a lot of lessons learned.  One of the biggest lessons comes in finding venues.  A lot of venues require hotel room blocks that can run anywhere from $50,000 to $120,000.  If the rooms don’t get filled, that’s a lot of money to put at risk.

Nashville and Denver were also on the slate to host SQLRally, but unfortunately those room blocks ultimately took them out of the running.  The SQLPASS board felt there was too much risk to the organization in putting up that kind of money.  I agree, even if that would have taken Dallas out of the running.  PASS is an amazing organization with some top notch people, so when decisions like this are made we should know they were made with the best interest of the organization at heart.  Even though it came out in our favor, I absolutely think the right decision was made and would have made the same decision myself.

That being said, I still find it disappointing that there was not more than one city in the running.  If there had been more than one option on the table then it would have gone to a community vote.  That would have been an awesome thing for the SQL community to “Rally” around.  Unfortunately, life handed us lemons this time, so maybe we’ll just have to serve lemonade at the rally.

We’re hoping to have an unforgettable event in Dallas next year, so start saving your pennies now!

NTSSUG also wants to give our President Sri Sridharan (Blog|Twitter) a very special thank you!  We appreciate your hard work in advocating to get SQLRally here in Dallas.  You’re a machine and we value your leadership!

I had an interesting issue a couple of months ago while creating an SSRS report.  I developed my report in BIDS, built it, and previewed it just fine.  I was also able to deploy it to my RS server, but when I viewed the report from the browser I got a cryptic error saying the report could not be rendered.  I started digging around, and somewhere during the course of my research it dawned on me that this was a new SQL 2008 R2 server and I was probably still running BIDS 2008 RTM on my laptop.

I went ahead and upgraded BIDS to 2008 R2 and opened the report.  As expected, BIDS recognized that it was an older version report and offered to update it.  I updated the report, built it again, and re-deployed.  Of course, anytime you do this you want to make sure your settings allow for everything to be overwritten, which I did.  The deployment went fine, but I continued to get the exact same error when viewing through the browser.  I never figured out why I was still getting the error, and it annoys me to this day.

Fortunately the report was a simple one, so I blew it away and re-created it from scratch.  This time it worked without a hitch, and no rendering issues.  My advice is to always develop your reports in the same version of BIDS that your RS server is running.

I have had reports update just fine through BIDS before so I’m not sure what it was about this particular report that did not like that process.

A few years ago I had a need to find all accounts in Active Directory that had the “Password Never Expires” option set.  I wrote an HTA with VBScript to make it easy to find these accounts for anyone with little AD experience.  I posted the script in the Microsoft Scripting Guys’ community script repository HERE, but I have received several reports recently from community members about errors.  I have had the same reports with my Active Directory Last Logon Utility posted on their site, and it turns out that something gets lost in translation when copying code to the site.  For whatever reason some of the formatting gets changed, so years later I have decided to post it here by request.

The script is written to search two AD LDAP paths, so make sure to change the paths at the top of the script.  The script will return any user account objects that have “Password Never Expires” set on their account within those defined paths.  It also searches all OUs under those paths.  Once the accounts are returned you have a few choices.  You can remove the option from those accounts, you can delete the accounts if no longer required, or you can simply export them to an Excel spreadsheet.  After you have performed actions on those accounts, the script will display the accounts it made changes to and allow you to export those to a spreadsheet as well.
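The HTA itself is the download at the bottom of this post.  If all you need is the list of accounts and you have the Active Directory PowerShell module available, here is a rough modern equivalent; the SearchBase path and output file are placeholders for your own environment.

Import-Module ActiveDirectory

# Find enabled users with "Password Never Expires" set under a given OU (placeholder path)
Get-ADUser -Filter 'PasswordNeverExpires -eq $true -and Enabled -eq $true' `
    -SearchBase 'OU=Users,DC=camelot,DC=com' -Properties PasswordNeverExpires |
    Select-Object Name, SamAccountName, DistinguishedName |
    Export-Csv -Path .\PasswordNeverExpires.csv -NoTypeInformation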

If you have any questions about this utility feel free to leave a comment.  Also if you find this utility helpful, let me know in the comments.  I always love hearing that I was able to help someone else.


Password Never Expires Utility

Today I received the following error on one of my SQL Reporting Services Servers.

“The report server is not responding. Verify that the report server is running and can be accessed from this computer”

Old Solution

This is certainly not the first time I have seen this error, and the situation is always different.  Most of the time the solution is to add the proper permissions for the Reporting Services service account and the ASPNET account to the following folders.

C:\Program Files\Microsoft SQL Server\MSSQL.1\Reporting Services\ReportManager

C:\Program Files\Microsoft SQL Server\MSSQL.1\Reporting Services\ReportServer

To identify whether this is your problem, go look at the Reporting Services error log.  It will be obvious, as you will see an access denied message with one or both of the above paths.

HINT: When you change the permissions make sure inheritance is turned on for everything under that folder.
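If you want to script the permissions instead of clicking through the Security tab, icacls can grant the access and push it down the tree.  The account name below is a placeholder for your Reporting Services service account, and the paths are the same ones from above.

icacls "C:\Program Files\Microsoft SQL Server\MSSQL.1\Reporting Services\ReportManager" /grant "DOMAIN\RSServiceAccount:(OI)(CI)RX" /T
icacls "C:\Program Files\Microsoft SQL Server\MSSQL.1\Reporting Services\ReportServer" /grant "DOMAIN\RSServiceAccount:(OI)(CI)RX" /T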

New Solution

Today it was a new issue.  Here is what I saw in the error log this time.

ERROR: Throwing Microsoft.ReportingServices.Diagnostics.Utilities.ReportServerDatabaseLogonFailedException: The report server cannot open a connection to the report server database. The logon failed

What makes this tricky is that your first thought is the service account running the Reporting Services Service has an issue.  That can certainly be the case if the password for that account was changed, but not updated on the service.  If the server had been rebooted or the service restarted then the service would fail to start.  Just because the service is running does not mean that there is not an issue.  If the server was not rebooted or the service was not restarted then the service will still be running and you will be none the wiser, until you see the failed authentications in the server security log.

Don't jump to conclusions, because the Reporting Services service account was not the culprit this time.  Remember when you first installed and configured Reporting Services?  One of the options you define is the account that Reporting Services uses to access the report server database.  That was the culprit today.  Someone changed the password on the account I set up for Reporting Services to access the report server database.  Changing the password for the account in RS is very simple.  Open the Reporting Services Configuration utility and navigate to the Database Setup node.  There you will see the account configured for Reporting Services to access the report database.  You can change the account or update the password.
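If you would rather script the change, the rsconfig.exe utility that ships with Reporting Services can update the stored report server database credentials.  A sketch only; every name below is a placeholder, and -a accepts Windows or SQL depending on how the data access account authenticates:

rsconfig -c -s MyReportServer -d ReportServer -a Windows -u CAMELOT\RSDataAccess -p NewPassword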


If you have ever helped organize a SQLSaturday you know how hard it can be and how much time is required.  This was the third SQLSaturday (one of which was the first and only BI focused event) in twelve months for the North Texas SQL Server User Group, and we're proud of that.  I'm on the board of directors for the group, so I'm a little biased.  However, based on attendee feedback, all three have been hits and this one was particularly on the mark.  That's wonderful, but it is not why I'm proud of the events we have hosted.  I'm proud that after three large events we haven't lost touch with the real reason we do this.

It's natural to try to outdo yourself every time, but we've remained grounded in our passion to bring free training to SQL Server professionals.  Our charter is to provide not only free training but also an environment for SQL Server professionals and developers to come together, share ideas, and network.  I believe we have stayed true and grounded in our charter, and I'm very proud we have done that.

Last year I handled both the sponsors and volunteers, which proved to be a logistical complication.  I can't be in two places at one time (at least not until Buck Woody clones me in the cloud).  This year I just handled the sponsors, but I also had my hands full with the responsibilities of being on the core organizing team, the board of directors, and also being a speaker.  Taking care of your sponsors is a bigger task than one might think, and once the event is over there are still some deliverables, so your job is not done.  A lot of people don't like the sponsors because they don't want to be hassled by them, but those people have lost sight of what they are getting from those sponsors.  If not for their support, the event and an entire day of totally free training would not have been possible.  I am personally very grateful for our sponsors, and I hope I made the experience such that they will not only return again, but will choose us if there is a competing event.  To see all of the wonderful sponsors for this event that are proud to help the SQL Community CHECK THEM OUT HERE.

We’re still putting together our post mortem, but here are some things we definitely got right.

  • Lunch was a boxed lunch with a vegetarian option, a chicken wrap, or a couple of sandwich choices.  Each lunch came with a bag of chips, salad, and cookie.  The food was good quality and the second time we have used this vendor.  We always get great feedback on the lunch and have never had bad feedback on it.  A definite win for this one.
  • This year we decided to do different colored shirts for volunteers and speakers.  This really helped the attendees know who to ask when they had questions and kept them from asking speakers questions they didn't have the answers to.  We went with wrinkle-free Oxford shirts.  These are really nice shirts and we got a lot of compliments on the choice.  The only downside is that being wrinkle-free also meant that the name badges did not stick to them very well.  We also went with long sleeved shirts, which got hot, so we will go with short sleeves next time.
  • Anyone who has hosted a SQLSaturday will know that one of the trickier things to figure out is the closing ceremony raffle.
    • Last year we drew out of the session evaluations.  We had a room monitor in each session that collected the evaluations; we compiled them over the course of the day, and used those to draw from.  The first positive to this method is everyone gets an entry for each evaluation they complete.  The second positive is it helps the speakers by giving incentive to the attendees to provide an evaluation.  On the other hand, everyone is ready to go home at the end of the day so not everyone attends the closing ceremony.  The drawback here is you keep drawing names of folks who are not there.  The other drawback is that you have to sort, compile, scan, and email the evaluations to the speakers after the event.  That takes a lot of time and the speakers are stuck waiting to get their feedback.  Although we liked this method because we were helping our speakers get the feedback they want and need, the drawbacks were not worth the cost.
    • This year we tried to mitigate these issues with a two pronged approach.  First we had the room monitors pass out the evaluations at the beginning of the session so attendees could fill them out during the session.  This increased feedback because handing it out at the end meant attendees would skip it so they could get to the next session.  From what I have heard, this worked really well.  I think every single person in my session gave an evaluation.  The room monitor was also supposed to give the evaluations directly to the speaker at the end of the session.  This means there was no work for us after the event to compile and distribute them, and best yet the speaker got their feedback immediately.  Since we did not use the evaluations for the drawing, we hope this still helped in our goal of encouraging and providing feedback for our speakers.
    • So how did we do the actual drawing?  We put raffle tickets in each attendee bag.  I know that doesn’t seem revolutionary, but what we did differently is put both sides of the ticket in the bag.  Each side of the ticket has the same number and we just had the attendees put one side in a box as they entered the closing ceremony.  This ensured that every number that was pulled would be someone in attendance and it worked perfectly.
  • Last year the attendee bags were very noisy and distracting during sessions.  This year we got different bags made of a different plastic and noise was not an issue.
  • Paciugo gelato.  We did this last year and it was a smash hit.  This has now become our trademark, so we did it again this year during both of the afternoon breaks.
  • We brought in a TV and Netbook to have a live Twitter display.
  • Last year we had a very low after party attendance.  We advertised it much better this year and it paid off with some awesome networking.
  • Last year our sponsors were asking for event guides and we ran out.  This year I set some aside for the sponsors.
  • Last year we ordered the schedule in our event guide by track.  That’s fine if you are sticking to a single track all day long, but most attendees do not.  This time we ordered it by time and it was much easier to read and follow.
  • Although we publicized our user group, PASS, and the SQL community last year, we felt we could have done a better job.  I think we delivered on doing a better job with it this year.  We talked about it more in the opening and closing ceremonies, and I also had a rotating slide deck running in the morning and afternoon that advertised our user group, PASS, the PASS Summit, and SQL Rally.

I presented my mirroring session this year and it went great.  There were a ton of questions and lots of discussion, which is what I love the most.  I compiled the scores and got 4.5 out of 5.  I also got some good comments, including the one I like the most: "Excellent session, but I wish there was more time".  In fact, one thing we are going to consider for next time is having longer sessions.  If you caught my session you can find my slides and code here.  Time was short, so if you have any questions feel free to shoot me an email or leave a comment.

This event was a success without question and we appreciate every single attendee that showed up.  It was an incredible time of learning and networking.  We also had 2 of the 3 fathers of SQLSaturday in attendance, Steve Jones (Blog|Twitter) and Andy Warren (Blog|Twitter), which just made it that much more special.  I know what you're thinking now that the event is over.  The North Texas SQL Server User Group is a SQL community engine with unparalleled passion and enthusiasm, so what comes next?

That’s right.  We submitted to host SQLRally 2012 here in Dallas.  We are really excited about this opportunity, but we need your help to make it happen.  Two other cities have also applied and PASS will hold a vote to determine the winning city.  So what can you do?  Glad you asked.  PASS will be sending out emails to all members with a link to vote, but the catch is that you need to be a registered member before the voting opens to get the invitation.  Registration is free so head over there now and get registered.