Ryan Adams Blog

SQL, Active Directory, Scripting

Want to speed up your AD replication?  You might want to if users call your helpdesk for a password reset, try to log in right after the reset, and still get denied.  Maybe you use some sort of identity management software or web site automation where users can update things like title or phone number, and they don’t see their changes for a while.  That might be acceptable in some environments, but certainly not in all of them.

AD has something called Urgent Replication where certain events like account lockouts replicate immediately, as opposed to the default replication interval.  What if your network has very large pipes across the forest and you can afford to have everything replicated immediately?  Keep in mind that large network pipes might not be necessary in every scenario.  Enabling change notifications means constant updates, but they are small in size.  On the other hand, using the default replication interval means less traffic, but larger amounts of data.  It is up to you to decide what is best for your environment.

So what are Change Notifications and how do I enable them?  Enabling Change Notifications tells AD to treat every attribute change as urgent for replication.  Instead of a domain controller queuing up all the changes it receives and then replicating them at its defined replication interval, it replicates all changes as they occur.  Here is how you can enable them:

  1. Open ADSI edit
  2. Connect to and expand the Configuration container
  3. Navigate to Sites-> Inter-Site Transports container
  4. Select CN=IP – Note you cannot enable change notification for SMTP links
  5. Right click and select properties for each site link object for the sites you wish to enable
  6. Select the “Options” property
  7. In the edit attribute box, if the value is <not set>, change it to 1.  If it already contains a value, you must compute the new value by performing a bitwise OR of the existing value with 1 (new_value = old_value BITWISE-OR 1).
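The bitwise OR in step 7 preserves any flag bits already set on the site link’s options attribute while turning on the change-notification bit. A minimal sketch of the calculation (Python used purely for illustration; the function and constant names are mine, not anything AD exposes):

```python
USE_NOTIFY = 0x1  # bit that enables change notifications on a site link

def new_options(old_value):
    """Return the options value with the change-notification bit set,
    leaving any other flag bits that were already present untouched."""
    if old_value is None:  # attribute was <not set>
        return USE_NOTIFY
    return old_value | USE_NOTIFY
```

For example, a site link whose options attribute already held 4 would become 5; one that was <not set> simply becomes 1.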


The above will enable change notifications on all connection objects in those sites that are managed by the KCC.  If you have created any manual connection objects, which are not managed by the KCC, they will not inherit the change notification setting from the site link.  You will have to set each of those individually.  Here is how:

  1. Open ADSI edit
  2. Connect to and expand the Configuration container
  3. Navigate to Sites-> My sitename with manual connections-> Servers-> My Server-> NTDS Settings
  4. Right click and select properties for each manual connection object in this folder.
  5. Right click and select properties for each manual connection object in this folder.  Select the “Options” property.  Note that if the value is 1, it is an intrasite connection object owned by the KCC; if the value is 5, it is an intersite connection object owned by the KCC.  If the object is owned by the KCC, do NOT change it.  Make the change at the site link level instead; changing the value on a connection object that is owned by the KCC forces it out of KCC control, and the KCC will no longer manage it.
  6. In the edit attribute box, change the value to 12.
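The magic numbers in the steps above are bit flags on the connection object’s options attribute. Assuming the commonly documented bit meanings for NTDS connection objects (0x1 = generated by the KCC, 0x4 = override notify default, 0x8 = use notify; the constant names below are mine), a quick sketch shows why 1 and 5 indicate KCC ownership and why 12 is the value for a manual connection with notifications enabled:

```python
IS_GENERATED = 0x1             # connection was created by the KCC
OVERRIDE_NOTIFY_DEFAULT = 0x4  # don't use the site link's notify setting
USE_NOTIFY = 0x8               # replicate on change notification

def owned_by_kcc(options):
    """A connection object belongs to the KCC if the generated bit is set."""
    return bool(options & IS_GENERATED)

# 1 (intrasite) and 5 (intersite) both carry the KCC bit, so leave them alone.
# 12 = OVERRIDE_NOTIFY_DEFAULT | USE_NOTIFY, with no KCC bit: a manual
# connection with change notifications turned on.
```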


How much of a difference can this really make?  Well, mileage will certainly vary according to network and forest architecture.  I implemented this for a large worldwide Fortune 100 company with large network pipes.  Here are the total convergence times across each geographic region and also forest wide.


Before – 24 minutes 25 seconds

After – 48 seconds

Asia Pacific

Before – 9 minutes 28 second

After – 51 seconds

North America

Before – 25 minutes 35 seconds

After – 58 seconds

Entire Forest

Before – 58 minutes 4 seconds

After – 2 minutes 57 seconds

I am very excited to be a part of the core planning committee for SQLSaturday 35 in Dallas on May 22nd, 2010.  The Dallas area is a technology hot spot and we have quite a local SQL community.  The North Texas SQL Server User Group averages 80 people every month.  If Sean McCown (Twitter|Blog) is teaching a class before the meeting we rarely have under 100 people.  SQLSaturday is no exception, and current registrations put those numbers to shame.  This is our first time to hold a SQLSaturday in Dallas, and planning an event this size is quite an undertaking.  I’m only handling two things and that is almost a full time job itself.  The planning committee has done quite a bang-up job and this event will NOT disappoint.

There’s even a RUMOR that ice cream will be provided! I know what you are thinking, “Where do I sign up?”  Well, I’m sorry to tell you that you missed the bandwagon, but also excited to say that we are full and at capacity.  We have over 500 registrations with 50 people on the waitlist!  This is going to be one big event.

If you are already registered you can verify your registration and lunch paid status HERE.

A couple of years ago I had a group that used a SharePoint web front end that created AD users on the fly in the backend. The problem is that people would either not use it again after the first login, or they would just sign up again if they forgot their credentials. Although all of that COULD have been handled by the developers on the front end, it was not, and my concern was AD. For the sake of a clean AD, reduced replication overhead, and SOX compliance, unused accounts needed to be removed. Of course let’s not forget the security implications.

It was the application owners’ responsibility to maintain the accounts, but without any AD knowledge they needed a dumbed-down way to clean up the accounts. At that time I was still learning VBScript and decided to kick it up a notch and write my script in an HTA. I swear my scripts are much cleaner these days. I’m not going to go over all the code, but please leave a comment if you have questions. After you copy the code and save it as an .HTA you will need to change the LDAP paths according to your AD, the maximum age for enabled accounts, and the maximum age for disabled accounts. The reason for querying whether the account is disabled is so you can do something like disable an account that is 30 days stale and delete it if it is more than 60 days stale. You should also know that I use the LastLogonTimeStamp attribute since it is replicated to all domain controllers. Conversely, the LastLogon attribute is not replicated and will vary from DC to DC. For more on the workings of the LastLogonTimeStamp attribute and its replication frequency (14 days) see this TechNet article.
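The disable-at-30-days, delete-at-60-days policy described above boils down to a simple age comparison. Here is a sketch of that decision logic (Python purely for illustration; the HTA itself is VBScript, and the thresholds and function name are placeholders you would adjust):

```python
from datetime import datetime, timedelta

# Thresholds from the text: disable after 30 idle days, delete after 60
DISABLE_AFTER = timedelta(days=30)
DELETE_AFTER = timedelta(days=60)

def account_action(last_logon_ts, now=None):
    """Decide what to do with an account based on its lastLogonTimestamp.
    Remember the attribute is only replicated roughly every 14 days, so
    the real idle time may be up to 14 days longer than computed here."""
    now = now or datetime.now()
    idle = now - last_logon_ts
    if idle > DELETE_AFTER:
        return "delete"
    if idle > DISABLE_AFTER:
        return "disable"
    return "keep"
```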

This HTA will not only show you your unused accounts, but will let you save it in a spreadsheet, selectively disable, enable, or delete the accounts, and let you save your changes to a spreadsheet.  Just for reference you can also find this script in the Microsoft Scripting Guys’ Script Repository HERE.  Don’t panic, I didn’t steal it; I submitted it to them to add to their community submitted scripts.

Download File

I ran across something a couple years back and I just ran across this sweetness again.  I had a database populated by a VBScript that inventoried specific things on our servers with a web front end.  I wanted to make it easy for users to see if a server was still under warranty at a glance and not have to do the “datediff” in their head.  My table had a column that contained the warranty expiration date so I could have written this logic into the web page, but it would have to process it for every page load.  By doing this in SQL the computation only has to be done once.  The problem is that I wanted a string returned based on a date calculation and not just the result of some calculation on a column or two.  I discovered a cool way to do this was to write a user defined function because you can use UDFs in your computed column definitions.  Here is the UDF I created:

create function [dbo].[udf_warrantystatusupdate]
(
 @expdate datetime
)
returns char(8)
as
begin
 declare @status char(8)
 set @status = case
  when @expdate > getdate() then 'Active'
  when @expdate < getdate() then 'Expired'
  else null
 end
 return @status
end

Here is my column definition in my table:

[WarrantyStatus]  AS ([dbo].[udf_warrantystatusupdate]([warrantyexpdate]))
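To see the CASE expression’s behavior at a glance, here is a mirror of the same logic outside SQL (Python, illustrative only and not part of the deployed solution): dates in the future read Active, past dates read Expired, and a missing date yields no status.

```python
from datetime import datetime

def warranty_status(exp_date, now=None):
    """Mirror of the UDF's CASE expression: 'Active' if the warranty
    expiration is still in the future, 'Expired' if it has passed,
    None when the date is missing (the SQL ELSE NULL branch)."""
    if exp_date is None:
        return None
    now = now or datetime.now()
    if exp_date > now:
        return "Active"
    if exp_date < now:
        return "Expired"
    return None
```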

A while back I was tasked with designing and implementing a disaster recovery solution for one of our production application’s databases.  I decided to take a very simple approach for a couple of reasons.  I chose to simply create SQLAgent backup jobs on the production box that back up to a share on the DR box.  Why not use mirroring or log shipping? I took this route for the following reasons:

  1. We have very large pipes between our data centers, so keeping the data in small chunks with mirroring or log shipping is not necessary.  Also the DBs are small.
  2. This gives me point in time (full recovery model) backups in a remote location.  Another group handles server and application backups and the last time I needed a restore they “forgot” to setup the backups.
  3. Security over the wire is not an issue since our security team is like a cast iron chastity belt.  Seriously….we got bought 5 years ago and they still block traffic from the company that bought us even though that company has opened its firewalls to us.  Hey guys we are all the same company, let’s play nice.
  4. The required uptime for this application allows some leniency.  Besides, I can have the DBs restored long before the developer can change his code to point to the new location and DNS to replicate.


I’m a big believer in automation, because less is more.  I was also itching to write some VBScript, so I decided to script the restoration of the DBs.  This script makes some assumptions on how you design your backup job schedules so it will not work in every situation.  This script assumes that you take a full backup sometime in the AM and a differential sometime in the PM.  We are restoring these backups onto a different server than where the backups were taken.  This means we do not have local access to the SQL MSDB for backup information unless we remotely query the source server.  Since the backup files are local and SQL can’t tell us the difference between a full and differential backup (Please comment and correct me if you know a way) there is no need to query MSDB for any information anyway.  I bring this up because if we could query SQL to find the last full and differential backups, then it would not matter how we scheduled our backups and no assumptions would need to be made.   As far as I have ever known SQL only shows backups as one of two types, either a data file backup or a log file backup.  Since full and differentials are both data file backups there does not appear to be a way to differentiate between the two.  The last assumption is that you do not backup multiple databases in a single folder.  Things get messy if you do, so I suggest avoiding that as a general rule.   Here is where you can specify separate folders for each database while creating a maintenance plan within the “Backup Database Task”.

Backup Database Task

We start off declaring some variables and then initialize the ones that define the name of the DB we are restoring, the local drive letter where the backups are located, and the full path to the backup folder on that drive.  Now we create a connection to the local SQL server that will use the credentials under which the script is run.  We also create a connection to WMI and create a log file to record the name of each DB file that got restored.  Note that the log file will get overwritten every time the script is run.  You can change this to simply append, but make sure you also insert the date and time.  Alternately you could also build a log table and store it all in SQL.

Dim strComputer, strDrive, strPath, strDBName, colDBFilePath, colLogFilePath
Dim colDBName, strSQLRestore, colLogName, strSQLFixUser
strDBName = "MyRestoreDB"
'This is the drive letter where your backup files are located on the local machine
strDrive = "D:"
'This is the path where your backup files are located on the local machine
strPath = "\\MyBackupFolder\\" & strDBName & "\\" 'WMI requires double backslashes in its path names
strComputer = "."

'=========================== SQL Connection =====================================================
Set objSQLConnection = CreateObject("ADODB.Connection")

'Integrated security against the local instance; adjust to your environment
objSQLConnection.Open _
    "Driver={SQL Server};Server=localhost;" & _
    "Trusted_Connection=Yes;Database=master;"

'=========================== WMI Connection =====================================================
Set objWMIService = GetObject("winmgmts:" _
    & "{impersonationLevel=impersonate}!\\" & strComputer & "\root\cimv2")

'=========================== Output Log Creation ================================================
Set objFSO = CreateObject("Scripting.FileSystemObject")
Set objTextFile = objFSO.CreateTextFile("c:\Scripts\DBRestoreLog.txt", True)

Now we need to get the file name and location where SQL tells us it is storing the DB, so we have that for the restore.  We’ll also need a list of all the backup “BAK” files in the backup directory to determine which full backup is the most recent and if there was a differential taken after it.  In order to do all that we query WMI for the list of files and store them in a recordset where we can sort them by their last modified date.  Since we know we do a full in the AM and a differential in the PM, we know if the last modified date of the most recent file is in the AM then all we have to restore is that full backup.  If the most recent backup file is in the PM then we know it was a differential and we have to restore the file before it since that will be the most recent full and then restore the differential.


colDBFilePath = objSQLConnection.Execute("Select filename from sys.sysaltfiles where dbid = db_id('" & strDBName & "') and fileid='1'")
colLogFilePath = objSQLConnection.Execute("Select filename from sys.sysaltfiles where dbid = db_id('" & strDBName & "') and fileid='2'")
colDBName = objSQLConnection.Execute("Select name from sys.sysaltfiles where dbid = db_id('" & strDBName & "') and fileid='1'")
colLogName = objSQLConnection.Execute("Select name from sys.sysaltfiles where dbid = db_id('" & strDBName & "') and fileid='2'")

Set colFiles = objWMIService.ExecQuery _
    ("Select * from CIM_DataFile where Path = '" & strPath & "'" _
     & " And Drive = '" & strDrive & "' and Extension = 'bak'")

Set rsBackup = CreateObject("ADODB.Recordset")
rsBackup.CursorLocation = 3 'adUseClient, required for sorting
rsBackup.Fields.Append "strFileName", 200, 255
rsBackup.Fields.Append "strFileDate", 7
rsBackup.Open

For Each objFile in colFiles
    rsBackup.AddNew
    rsBackup("strFileName") = objFile.Name
    rsBackup("strFileDate") = WMIDateStringToDate(objFile.LastModified)
    rsBackup.Update
Next

rsBackup.Sort = "strFileDate"
rsBackup.MoveLast

If UCase(Right(rsBackup.Fields.Item("strFileDate"), 2)) = "AM" Then
 'Most recent backup is the morning full - restore just the full
 objTextFile.WriteLine(rsBackup.Fields.Item("strFileName") & vbTab & rsBackup.Fields.Item("strFileDate") & vbCrLf)
 strStartDate = rsBackup.Fields.Item("strFileDate")
 strSQLRestore = "RESTORE DATABASE [" & strDBName & "] FROM DISK = N'" & rsBackup.Fields.Item("strFileName") & _
 "' WITH FILE = 1, MOVE N'" & colDBName(0) & "' TO N'" & colDBFilePath(0) & "', MOVE N'" & colLogName(0) & "' TO N'" & colLogFilePath(0) & "', NORECOVERY, REPLACE"
 objSQLConnection.Execute strSQLRestore
Else
 'Most recent backup is the evening differential - restore the
 'previous full first, then the differential on top of it
 rsBackup.MovePrevious
 objTextFile.WriteLine(rsBackup.Fields.Item("strFileName") & vbTab & rsBackup.Fields.Item("strFileDate"))
 strSQLRestore = "RESTORE DATABASE [" & strDBName & "] FROM DISK = N'" & rsBackup.Fields.Item("strFileName") & _
 "' WITH FILE = 1, MOVE N'" & colDBName(0) & "' TO N'" & colDBFilePath(0) & "', MOVE N'" & colLogName(0) & "' TO N'" & colLogFilePath(0) & "', NORECOVERY, REPLACE"
 objSQLConnection.Execute strSQLRestore

 'Restore Differential
 rsBackup.MoveNext
 objTextFile.WriteLine(rsBackup.Fields.Item("strFileName") & vbTab & rsBackup.Fields.Item("strFileDate") & vbCrLf)
 strSQLRestore = "RESTORE DATABASE [" & strDBName & "] FROM DISK = N'" & rsBackup.Fields.Item("strFileName") & _
 "' WITH FILE = 1, MOVE N'" & colDBName(0) & "' TO N'" & colDBFilePath(0) & "', MOVE N'" & colLogName(0) & "' TO N'" & colLogFilePath(0) & "', NORECOVERY, REPLACE"
 objSQLConnection.Execute strSQLRestore
 strStartDate = rsBackup.Fields.Item("strFileDate")
End If

The last thing we need to do is find and restore all the transaction log files.  We use the same method of querying WMI for all the files in our backup folder that have an extension of “TRN”.  We then save them in a recordset so we can sort them by date.  In the previous section of code you’ll notice we stored the date of that last backup file we restored in a variable called “strStartDate”.  That allows us to compare it to the current transaction log date and skip restoring any log backups that were taken prior to the last full or differential backup we just restored.  As we iterate through the files in the recordset the other condition we have to check for is whether or not it is the last record in the set.  We do this because we have to specify “norecovery” in our TSQL restore command until we restore the last log file. At that point we remove “norecovery” from our TSQL restore command (or you can change it to “recovery”, which is the default) so SQL can rollback any uncommitted transactions and make the DB available again.


Set colFiles = objWMIService.ExecQuery _
 ("Select * from CIM_DataFile where Path = '" & strPath & "'" _
  & " And Drive = '" & strDrive & "' and Extension = 'trn'")

Set rsTran = CreateObject("ADODB.Recordset")
rsTran.CursorLocation = 3 'adUseClient, required for sorting
rsTran.Fields.Append "strFileName", 200, 255
rsTran.Fields.Append "strFileDate", 7
rsTran.Open

For Each objFile in colFiles
    rsTran.AddNew
    rsTran("strFileName") = objFile.Name
    rsTran("strFileDate") = WMIDateStringToDate(objFile.LastModified)
    rsTran.Update
Next

rsTran.Sort = "strFileDate"
rsTran.MoveFirst

Do Until rsTran.EOF
 'Skip any log backups taken before the full/differential we restored
 If rsTran.Fields.Item("strFileDate") > strStartDate Then
  objTextFile.WriteLine(rsTran.Fields.Item("strFileName") & vbTab & rsTran.Fields.Item("strFileDate"))
  If rsTran.AbsolutePosition <> rsTran.RecordCount Then
   strSQLRestore = "RESTORE LOG [" & strDBName & "] FROM DISK = N'" & rsTran.Fields.Item("strFileName") & _
   "' WITH FILE = 1, NORECOVERY"
  Else
   'Last log file - omit NORECOVERY so SQL rolls back uncommitted
   'transactions and brings the database online
   strSQLRestore = "RESTORE LOG [" & strDBName & "] FROM DISK = N'" & rsTran.Fields.Item("strFileName") & _
   "' WITH FILE = 1"
  End If
  objSQLConnection.Execute strSQLRestore
 End If
 rsTran.MoveNext
Loop

This last piece of the script just closes the text file we opened to log our actions and includes a function that converts the WMI date format to something more familiar.  For more details on the WMI date conversion function, visit this article in the Microsoft Script Center written by the Scripting Guys.  Now we have successfully restored our database on our DR server and hopefully the crisis is averted.  Of course now you have to go fix the production system… Good luck!


Function WMIDateStringToDate(dtmInstallDate)
    WMIDateStringToDate = CDate(Mid(dtmInstallDate, 5, 2) & "/" & _
        Mid(dtmInstallDate, 7, 2) & "/" & Left(dtmInstallDate, 4) _
            & " " & Mid(dtmInstallDate, 9, 2) & ":" & _
                Mid(dtmInstallDate, 11, 2) & ":" & Mid(dtmInstallDate, _
                    13, 2))
End Function
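WMI reports timestamps as CIM_DATETIME strings of the form yyyymmddHHMMSS.ffffff±UUU, and the function above just slices that apart. The same slicing in Python, for readers outside the VBScript world (illustrative only; the function name is mine):

```python
from datetime import datetime

def wmi_date_to_datetime(wmi_date):
    """Convert a WMI CIM_DATETIME string such as
    '20091104134600.000000-360' into a datetime, ignoring the
    microsecond and UTC-offset suffix just as the VBScript does."""
    return datetime.strptime(wmi_date[:14], "%Y%m%d%H%M%S")
```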

I’m finally back online after being on vacation for a couple of weeks following my second daughter’s birth.  If you follow me on Twitter or FaceBook you know that Emily Grace Adams was born 11/4/2009 @ 1:46pm.  She weighed 6lbs 9oz and was 18″ long.  She is very healthy and doing extremely well.  I can’t thank my wife Jennifer enough for the sacrifices she has made for our family.  She is an amazing and inspiring woman that I’m proud I can call my wife.

I guess it is now back to work and the real world, but at least the holidays are upon us.  Here are some pictures of our new daughter.

I discovered this little catch back when SQL 2005 released, but it took me a while to notice it was not working.  One of the cool new things in the 2005 release was Database Mail.  Finally we didn’t have to set up mail profiles and install Outlook, which meant additional patching for Microsoft Office and possible reboots.  We also no longer had to use an extended stored procedure to send email alerts from the database.

The group I worked in used Microsoft Operations Manager to monitor our environment, which included much more than just our SQL servers.  One day I got an alert from MOM that one of my SQLAgent jobs failed on a job step.  However, I never got an email alert from SQL server that the job had failed.  I had already setup database mail and had even run my code in SSMS, which ran fine and sent email via msdb.dbo.sp_send_dbmail.  In fact, I had even recently received an email from that job and verified that kicking off the job manually resulted in success.  So I started wondering what could be the problem since I knew database mail was working and verified it both by executing code in SSMS and via SQLAgent job.  I also started wondering why I had not noticed this before when it dawned on me that my jobs were set to only send email in the event of a failure.  Well, I would like to think it was skill and not luck that I had not had any job failures up until then.  I’m glad we were using MOM or who knows when I would have noticed the job failures.  This server was a development server that rarely got paid any attention.

After some troubleshooting I started to search the internet to see if anyone else had run across this issue.  My hours of searching turned up nothing so I went back to the troubleshooting drawing board.  Keep in mind this was right after SQL 2005 released, so there very well may be plenty of search results for it now.  What I discovered was that the SQLAgent itself has to be mail enabled and assigned to a Database Mail profile.  The fix was quite simple.  In SSMS right click SQL Server Agent and select Properties.  In the left-hand pane select the Alert System page.  Next, check “Enable Mail Profile”.  Then select the mail system from the drop down box and choose Database Mail.  If you are still using SQLMail you can also select that, but I highly recommend you start migrating your jobs and any code over to Database Mail.  Lastly, select which mail profile set up in Database Mail you want to use for sending job messages from SQL Server Agent.

SQLAgent Mail Config

Configuring DBMail for SQLAgent Alerts

If you have ever worked in a company with a split DNS name space or one that has gone through a merger, you have probably dealt with the DNS suffix search list.  This list allows a user to put “companyweburl” in their browser address bar and have it actually resolve.  This works because Windows will cycle through the DNS suffix search list, appending each suffix to what was typed into the address bar until it finds a site.  If the list contains contoso.com and litware.com then the OS will attempt companyweburl.contoso.com first.  If it gets a response from that site then off you go.  If it does not, then it tries companyweburl.litware.com.
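The cycling behavior just described is a simple first-match loop. A sketch of it (Python for illustration only; the function names and addresses are made up, and the real work happens inside the Windows resolver):

```python
def resolve_with_suffixes(name, suffix_list, dns_lookup):
    """Try each DNS suffix in order until a lookup succeeds.
    dns_lookup is any callable that returns an address or None."""
    for suffix in suffix_list:
        address = dns_lookup(name + "." + suffix)
        if address is not None:
            return address
    return None

# e.g. "companyweburl" tries companyweburl.contoso.com first,
# then companyweburl.litware.com
```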

To set the suffix search list, open the properties of your network connection and then open the TCP/IP properties.  Now click Advanced and go to the DNS tab.  Here you can set your suffix list as shown.

DNS Suffix Search List


There are two other places you can view your current suffix search list.  The first is by opening a command prompt and typing in “ipconfig /all”.  The second is in the registry and can be found in this location:


In a large environment making this change on all the clients can be daunting, but there are several solutions.  I won’t go into all of them, but VBScript or PowerShell scripts deployed through domain group policy or user logons are good ones.  The most obvious solution is to just make the change by deploying a domain group policy.  This method has a catch in that it sets itself in a different registry key than if you set it locally through the GUI.  If a GPO pushes the setting then it takes precedence over the local configuration, but you can still set it in the GUI.  This makes you think you are changing it when you really are not, which is a troubleshooting disaster for a field engineer.  The only good news is that an “ipconfig /all” will show what is actively being used.  Deploying this setting with a GPO puts its changes in the registry in the following location:


It should be noted that this setting is only available in GPOs where AD is running Windows Server 2003 or greater.  Here is the path where you can find the setting in a GPO.

Computer Configuration → Policies(If you’re running Windows Server 2008) → Administrative Templates → Network → DNS Client → DNS Suffix Search List

Here is an example.  A GPO has been deployed to Machine1 setting the DNS Suffix Search List to “contoso.com”.  On the top you can see that “litware.com” is the only suffix set in the GUI.  On the bottom you can see from an “ipconfig /all” that the suffix search list only contains “contoso.com”.

DNS Suffix Search List in the GUI


Actual Current DNS Suffix

As you can imagine, there is bound to be a very frustrated desktop engineer.  Since the GPO and GUI put their settings in different registry keys you never see the GPO settings in the GUI.  For this reason I would warn against using this GPO setting.  I recommend the following workarounds instead.

  • VBScript or PowerShell deployed via logon script
  • VBScript or PowerShell deployed via GPO startup or logon script
  • GPO using a direct registry change to the local GUI registry key

In part 1 we looked at how “Run As” works in Windows XP and in part 2 we looked at how it works in Windows Vista.  Now let’s see how it works in Windows 7.


Vista Run As Context Menu


In Windows 7 you can hold down the shift key and right click ADUC to get the full context menu.  Here you will see two options called “Run as Administrator” and “Run as Different User”.  The “Run as Administrator” option makes use of the new User Account Control (UAC) security feature.  The good news is if you are logged in with a privileged account and simply run ADUC without any options, the application only has user access to the local machine.  This applies even if the account is in the local administrators group.  If you run it with the “Run As Administrator” option, UAC elevates the account by combining the second part of the SID with the first part of the SID (SID filtering).  The first part describes the regular user account and the second part describes the elevated privileges.  This works exactly as it did in Windows Vista.  This is Microsoft’s way of solving the issue of companies making every day knowledge workers administrators on their machine and reducing the attack surface.  If you select the Run as Administrator option, Windows 7 runs the application using the entire account SID so that the process (and only that process) now has local administrator permissions.

Although user account control was probably the most persecuted new feature after Windows Vista launched, it was actually a really good thing.  In my opinion it certainly has a place in the corporate world, but proved to be a daunting inconvenience in the home user world.  If you still think UAC is a bad thing and too inconvenient go google “linux +sudo”.  Microsoft is not the only one to implement such features.  Microsoft has gone to great lengths to keep this security feature in Windows 7, but greatly reduce the annoyance factor of multiple “Allow or Deny” prompts.

The problem is that “Run as Administrator” is a UAC function and should not be used as a “Run As” replacement.  Although you can set the “Run as Administrator” option to prompt for alternate credentials, the application will run as a local administrator under the supplied alternate credentials.  See part 2 of this series for more information.  What we really want is for the application to run under the locally logged on non-privileged account and only pass the alternate credentials to the remote application.  This is where “Run as Different User” comes in.  This is the “Run As” replacement and uses the secondary logon service.  The issue here is that it works exactly as it did in Windows XP, where it uses the supplied alternate credentials to run the local application. Once again we are left without a secure option in the GUI.


Executing “Run As” from the CLI is exactly the same in Windows 7 as it was in Windows Vista (see Part 2).  Again it requires elevating the account using UAC, so you might as well just use “Run As Administrator” from the GUI.

The sad part is that although the SysInternals add-on that gave us exactly what we needed in Vista installs in Win 7, it still acts like it did in XP.  The application will run under the supplied alternate credentials on the local box instead of just supplying them to the remote application.


My suggestion for the most secure option for each OS would be…

XP – Use “Run As” from the command line and supply the /netonly switch

Vista – Download the SysInternals add-on called Shellrunas.  Use it from the GUI and always select the one labeled “netonly”.

Windows 7 – Use the “Run as Different User” option from the GUI.  It is the worst option of the 3 but the only viable one.  Let’s cross our fingers that MS will put out another Shellrunas add-on through SysInternals.




In part 1 of this series we discussed how “Run As” does not work exactly as one might think.  Unless you use the CLI and supply the /netonly switch, the application runs locally under the supplied alternate credentials.  The behavior we desire only comes with the CLI and /netonly switch where the application runs under the locally logged in account.  The supplied alternate credentials are only used over the network with the remote application.  We specifically looked at these behaviors in Windows XP, so now let’s look at what Windows Vista brings to the table.


Run As Administrator Context Menu


The first thing you will notice in Windows Vista is that you will always see the context menu shown on the left whether you hold down the shift key while right clicking or not.  That’s right.  “Run As” is no longer in the OS!  You will, however, notice an option called “Run As Administrator”.  Although it looks and sounds like the same thing as the old Run As in Windows XP, it is in fact something entirely new.  This option is part of a new security feature implemented in Windows Vista called UAC (User Account Control).  I will not be going into detail on UAC since that is out of scope for this series and deserves a series itself.  However for the sake of clarity, it allows a user to be in the local administrators group without running as an administrator for the entire logon session.  The user is essentially logged on as just a member of the user group.  It allows them to elevate to their administrator status for just the duration of completing a task that requires those privileges (SID filtering).

Using the “Run As Administrator” option simply elevates the account you are logged on with to have administrative permissions locally.  This means that there is no prompt for alternate credentials.  This option of UAC can be changed in the local security policy or via group policy by changing the “Behavior of the elevation prompt for administrators in admin approval mode” option to “Prompt for Credentials”.  The net effect is the same as doing a simple Run As in Windows XP, where the application runs locally and remotely under the supplied alternate credentials.  The only difference is that the application is a little more secure running under UAC than it was under the secondary logon service of Windows XP.  Since Microsoft took away the GUI version (which, remember, did not work securely as we wanted it to in Part 1 of the series under Windows XP), let’s see if the CLI still works.


After typing the command into the command prompt like we did in Part 1 you will see this.

Run As from the command prompt


Please note that I am running this with ADUC (dsa.msc) and you will not get this error for all administrative tools and applications like “Computer Management”.  For whatever reason only some MMC snap-ins require administrative privileges on the local machine.  This means that I have to right click “Command Prompt” and select “Run As Administrator” and then type in my “Run As” CLI command.  Of course, you might as well just right click ADUC and select “Run As Administrator” and skip the command prompt altogether.

So are we without hope and doomed to run remote applications with lowered security?  Can we use a Microsoft product and adhere to Microsoft’s own best practice of least user privilege?  I worked with MS for over a year while Vista was in development to get a solution that many others had asked for and Microsoft agreed was needed.  In fact, I estimated that the company I work for would lose $423,000 a year in lost productivity, not including the potential for loss from the lack of security.  (BTW… turns out my company didn’t care that I prevented them from incurring costs; they only care when you remove a cost they are already incurring.)  Okay, off my soap box.  The solution was an add-on tool MS released through SysInternals that not only returns the Windows XP version of “Run As”, but also gives you the option to use it from the GUI with the /netonly switch.  Now we have the perfect solution of the CLI /netonly switch, but this time in the GUI.  This saves precious time for domain administrators, SQL DBAs using SSMS, and custom applications.  You can find the add-on for Vista on the SysInternals web site; it is called ShellRunas.