Ryan Adams Blog

SQL, Active Directory, Scripting

We will walk through creating a policy to evaluate the status of “Auto Create Statistics” on our server named “File2”.  First we need to open SQL Server Management Studio and connect to our server.  Then we need to enable Policy Management by expanding the Management node, right clicking on Policy Management, and selecting Enable.

You will notice three folders under the Policy Management node.  If you explore them you will see that the Policies and Conditions folders are empty, while the Facets folder contains many items.  Facets are created by Microsoft and you cannot create your own.  The other folders are empty because we have not yet created any policies.  This is a good time to browse through the facets and open a few to get an idea of what they contain.

We want Auto Create Statistics to be turned on for all our databases, so let’s start by right clicking on the Conditions folder and selecting New Condition.  This opens our create condition dialog box where we define the condition name, the Facet that contains the properties we want to evaluate, and the expression used to evaluate those properties.  This is what those options should look like:
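If you want to check the same property outside of PBM, the sys.databases catalog view exposes it directly; here is a quick sketch (column name per the SQL Server 2008 catalog views):

```sql
-- Check the Auto Create Statistics setting for every database on the instance
SELECT name,
       is_auto_create_stats_on
FROM sys.databases
ORDER BY name;
```

Any database showing 0 in that column is one our new condition should flag.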

Now that our condition is defined, we’re ready to create our policy.  Right click the policies folder under the Management node and select Create Policy.  We need to give it a name, select the check condition we created previously, define the targets to apply it to, choose the evaluation mode, and select any server restrictions.  Here is what our example will look like:

Now that we have our policy configured let’s test it out by evaluating it against our local server and reviewing the results.  Simply right click our new “Auto Create Statistics” policy and select Evaluate.  This will cause the policy to be evaluated immediately on the local SQL instance.  Here is what we get:

We can see that our server has four databases and one of them has violated our policy.  If you check the box next to the offending database and select apply, PBM will bring that database into compliance by changing the Auto Create Statistics setting from false to true.  Before we do that we can also select the View hyperlink in the details column to see the exact settings that caused this policy to be violated.

We can see that the policy expected AutoCreateStatisticsEnabled to be set to True, but the actual value was False.
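If you would rather fix the setting yourself with T-SQL instead of letting PBM apply it, the equivalent statement is the standard ALTER DATABASE option (the database name below is a placeholder for your offending database):

```sql
-- Bring a non-compliant database back in line manually
ALTER DATABASE [YourDatabase] SET AUTO_CREATE_STATISTICS ON;
```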

Let’s take a look at how to import the Microsoft Best Practice Policies into your Policy Based Management Server.  You can also import policies that you have exported from other servers.  The Microsoft Best Practice Policies are a great place to start learning what you can do with PBM, by simply importing them and inspecting their various configurations.  They are installed by default and simply need to be imported from the following directory.


 Under that directory there are several others separated by technology like SSIS and SSAS.  We will focus on the policies in the “%installdir%\100\Tools\Policies\DatabaseEngine\1033” folder.  Let’s import the Microsoft Best Practice Policy named “Database Auto Shrink”.  Right click on the Policies folder under Policy Management on your PBM server and select Import.  In the Import dialog, click the ellipsis next to “File to Import” and navigate to the policies folder mentioned above.  We want to select the “Database Auto Shrink.xml” file.  Here are the options we want to choose for our import:

That’s it!  Note that we imported the policy in a disabled state, which I always suggest you do.  Once the policy has been imported you can inspect all the settings to make sure they are appropriate for your environment.

I will be live blogging the SQLPASS Summit keynote again today. Today’s keynote will be delivered by community favorite Dr. David Dewitt.

Rick Heiges opens us up and introduces a special musical number by Rob Farley and Buck Woody. It was hilarious, with a special lyrical shout out to Paul Randal. Rick is now walking us through the leadership changes in the PASS organization, with a very special thank you to past President Wayne Snyder as Rushabh Mehta becomes the immediate past President.

Rick is announcing upcoming events with SQLRally in Dallas on May 10-11. He also announces the next Summit in Seattle November 6-9. He also tells everyone that all Summit attendees will receive an e-book copy of the MVP Deep Dives book and to keep an eye on their email.

Dr. David Dewitt enters the stage with a great start by explaining why his wife is not here to watch him speak. There are lots of laughs. David starts his talk on big data and shares some size statistics for systems of the larger web sites like Facebook and other social media. The data sizes are astounding.

David is explaining the NOSQL movement and points out that it does NOT mean NO SQL, but rather “not only SQL”. So he wants us to think about large data and a mix of systems to support the data. We may have one entry point or front end to get the data, but on the backend some data might be in SQL and some might be in Hadoop. He is trying to get us to think about the data, and that not all data is relational; some is better suited for other storage systems.

David explains that NOSQL is not a paradigm shift and that RDBMS are still the best way to store data efficiently. However, some data like unstructured data does not work best in an RDBMS. He plans on talking about Hadoop and how it works.

David is explaining the Hadoop file system called HDFS. The HDFS does not replace the Windows file system or NTFS but sits on top of it. The blocks are stored and replicated by a factor of three. It sounds like RAID 5 but spread across multiple nodes with separate storage systems. The node first written to is the node where the transaction originated, the second is a node in the same rack, and the third is a node in a different rack. The replication of this data is handled by a name node or primary node (which also has a backup node). It monitors all the other nodes with heartbeats and decides how to distribute the data among the nodes.

David is now explaining how Hadoop handles failures and that it was designed to expect failures. It does use checksums for reads and writes, but expects that hardware and software failures will occur. When a failure occurs on a node then the name node finds the blocks that are missing and replicates them to other nodes to maintain that factor of three. I can’t help but think this sounds like RAID except it is supported by replication as opposed to multiple writes across disk. With a factor of three I envision it like a RAID 5 on top of a RAID 5, but all data is written by replication instead of multiple writes on a single disk array.

David is explaining how Hadoop finds the data when you request it since you don’t know where it is stored. David is moving on to how mapreduce works with an animation he jokes took him 6 hours to come up with. The audience loves the animation explanation, with some clapping. He shows how map tasks find the data across nodes and then hand it off to the map reduce procedure that takes the data from the multiple nodes and reduces it to a single output.

Now we are hearing that after Hadoop came out, Facebook and Yahoo started using it. However, they both came to different conclusions on the language. Facebook came up with something very SQL-like, but Yahoo came up with something more procedural. David brings up a slide with a lot of writing and jokes that it is not meant to be read and that he will not be using zoomit. The crowd loves the comment with lots of laughs after the lack of use in other keynotes this week. David now points out that of the 150k jobs Facebook runs, only 500 are MapReduce jobs and the rest are Hive SQL. Now we are seeing how Hive tables are designed and that they are partitioned by a particular attribute.

Now we are seeing how Hive relates to parallel data warehousing. We can see how Hive is great for unstructured data that is not related, but how SQL is much better with relational data due to a common schema and partitioning method. Now David is talking about putting the two things together and connecting the universe. We see the difficulties in getting data from both worlds in regards to performance. He explains the SQOOP approach, which moves the data from one world to the other, along with its challenges, and says there must be a better way.

David asks what if we don’t move the data, but instead put a management system between the two that understands how to get the data from both systems separately. This is something he is working on in his labs as it becomes clear that we will be living in a world with both types of data and a need to get information from both and relate it.

David is wrapping things up with a re-cap and driving home the major points. The biggest one is that SQL is not going away, and neither is Hadoop or other unstructured data systems, so we need to work with both.
That’s the end of the last SQLPASS 2011 summit keynotes. The crowd is wild about David with a huge standing ovation!

I was chosen to sit at the blogger table and live blog the keynote tomorrow, so plan to be here at 8:15 PT and I’ll refresh the post that day as often as possible. I may not be at the table today, but that won’t stop me from blogging anyway.

Rushabh Mehta starts us out with a recap of all that PASS has accomplished in the last year. He thanks everyone involved, from the PASS board to those that volunteer for SQLSaturdays or user group meetings. The SQL community is growing by leaps and bounds and it is very exciting.

Ted Kummert is the keynote speaker today. Ted recognizes the success of MS due to the work and success of the SQL community. He talks about how their success is based on community success. As everyone expected, it didn’t take long before we heard the word “cloud”. Ted talks about how the cloud is another revolutionary change in the data technology landscape and its economic impact. It was a profound statement, but then Ted took a little bit of a turn. We always hear MS pushing the cloud and they are always referring to THEIR cloud. However, Ted talks about how we will have a choice and mix of MS cloud, private cloud, and partner cloud.

Ted officially announced that the next version of SQL Server code named Denali will be released in the first half of 2012 and will be called SQL Server 2012. He also announces that SQL Azure will fully integrate and support Hadoop. We then got to see a demo of using the MS BI stack to consume and dig into Hadoop data.

I had a chapter leader and regional mentor meeting so I didn’t attend the first session. It was a good meeting, with good ideas. I wish there had been more time, but the larger meeting was yesterday and I was in a pre-con. I’m very happy that they had a second meeting for those of us that could not make the first one.

I watched Andrew Kelly present on SQL 2008 Query stats. He did a good job of explaining the DMVs used to get the stats you need to understand what is going on in your system. The DMVs are great, but it is not always clear how to get the information you need. You generally have to correlate 2 or more DMVs to get what you want. Andrew’s session really helped to show those correlations.

I also watched Brent Ozar do his Blitz session. I saw his first one online, but this was his second, new, and improved version. In the session he shows a script he wrote and provides for free that has a ton of elements to assist you in assessing a new server you have just inherited. Brent added some really good additional things to the script. The really shiny thing he added was wrapping the script in a stored procedure that takes all the results of the blitz script and prioritizes the results. This is Brent…so he didn’t stop there. The SP actually uses OpenRowset to connect to his SQL Server he has running in the cloud and updates the SP definition. How cool is that? It’s like Windows Update. Make sure to go check it out at brentozar.com/blitz

Chapter Lunch
Today was chapter lunch day. Every PASS chapter had a table and sign in the lunch room. Mad props to whoever started this idea. You would be surprised how many people come to the summit and don’t know about local chapters in their area. This does not just apply to the smaller cities but even larger cities as well. It is a great way to connect people who are local to each other and grow the local chapters.

Expo Hall
This is an open networking time to talk to all the vendors and see the solutions they provide. Of course, there is always the SWAG! I had a good time and there are several things I plan on testing out, but I found the SWAG a little lacking.

SQLPeople Party
SQLPeople.com is the brainchild of Andy Leonard, and he held an event sponsored by Embarcadero for all those involved with the project. It was at the Tap House Grill. The Tap House is a great place and perfect for a smaller event like this. I had a great time meeting some other wonderful SQL people.

Today was my travel day.  I don’t travel much at all so it’s always an exciting adventure for me.  I left the house with plenty of time and made it to the Parking Spot in about 40-45 minutes.  If you travel out of DFW international airport I highly suggest using these guys for parking.  The airport itself is very large and not fun to navigate, but getting to the Parking Spot is easy.  Once you get there they tell you what row to park in and their van picks you up right from your parking spot.  The staff are always friendly and even handle your luggage for you.  Once you are on the bus, you have no worries about navigating the often confusing DFW airport, since they drop you off right at the terminal you need.  Once you return they pick you up from the terminal baggage claim and run through about every 10 minutes for pickups.  They drop you back off right at your car and give you a free bottle of water for the drive home.

Once I got to the terminal I had to check a bag, but there was a very short line.  Security was also no big deal and only took about 10 minutes to get through, so everything went very smoothly.  Of course my plane left at 7pm so the slightly later departure may have aided in the short lines.  I’m glad it all went well, because I needed to grab a bite to eat and had just enough time to get something from the food court.  I then headed over to my gate and found David Stein, and it’s always nice to run into a friend.

The flight itself was very smooth and completely on time!  Always a good thing, but especially since my flight didn’t arrive until a little after 9 PT and 11 my time.  Dave and I met up and headed down to the baggage claim to get our stuff.  From there we headed over to the Seattle Link Light Rail.  They don’t take American Express so I had to pay cash, but hardly a big deal for a one way trip of $2.75.  The whole process was easy and cheap, and I’ll certainly continue to use the Light Rail on future trips.

The trip took 40 minutes to get from the airport to the WestLake exit.  I’m staying in the Sheraton and I don’t think it even took me 5 minutes to walk from the train station to the hotel.  It was a perfect choice.  I got checked into the hotel, unpacked, ironed some shirts, and headed down to the hotel lounge to see who I could find.  I ran into Denny Cherry, Jim Murphy, and Wes Brown.  We chatted for a while, and Wes and I decided to call it a night after a long travel day.  I think I finally went to sleep around midnight PT or 2am my time.

Everything went great and it was an awesome start to the trip.

This morning I headed over to Top Pot doughnuts to hang out with Andy Warren, Steve Jones, Bill Fellows, David Fargo, Tim Radney, and many others. It was a nice time of networking and a good small crowd. The doughnuts were good with a large and interesting selection and the coffee was good as well.

Again I was lucky enough to attend a pre-con today.  Today I chose to attend Denali Always On by Allan Hirt. I have always gravitated toward HA and DR solutions, so I’m looking forward to getting up to speed with Denali Always On. Here is a subset of my notes from the class to give you an idea of what was covered. Please note that there will not be any formatting so they may look scattered.

You can now have a TempDB local to each node. This can give better performance. Make sure the SQL Server service account has rights to the folder.

When going through the add node install in CTP3 it does not create the TempDB folder. You have to create it yourself manually.

As you patch nodes in your cluster, remember to remove the node you are patching as a possible owner before patching. You don’t want SQL to attempt to failover to that node in the middle of applying the patch.

You have to enable trace flag 9532 to get more than one availability group replica in CTP3.
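For reference, enabling a global trace flag for the current instance lifetime looks like this (this flag only applied to CTP3; to persist a trace flag across restarts you would add it as a -T startup parameter instead):

```sql
-- Enable trace flag 9532 globally (lost on instance restart)
DBCC TRACEON (9532, -1);

-- Verify the flag is active
DBCC TRACESTATUS (9532);
```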

AG has integration with failover clustering, so if you are combining them like was done with clustering and mirroring, you no longer have to tweak the mirroring timeouts. An AG will not fail over until the cluster completely fails, so FC is your primary HA and AG could be HA/DR.

You can take backups of the replica, but since Full Recovery mode is required you still have to take log backups of the primary or the log will grow. This has not been confirmed, but is likely the case.

First Timers Orientation

The first timers program was a great idea and as a first timer I can say that it works. The program needs some work and organization. Even the big brother/sister sponsors really didn’t know what was going on. When you entered everyone was given a sticker with a number and color. The idea was to find the person with the same number as you but the opposite color. It was a great idea to get people to meet someone new, but this process was never explained.

PASS did a really cool entrance of the first timers into the reception, but we were just told to split on one side of the room or the other and watch the video. That’s all we knew, and we had no idea what was going on. Rushabh Mehta was doing the introduction, and at the end he tells everyone to look toward the end of the room for the curtain. All of the first timers were trying to make sense out of it while turning and staring at a blank wall. It was not until the curtain opened on the other end of the room that you could hear a collective, “Ohhhh”.

The orientation was great and I definitely think it should be continued; it just needs some better communication and organization. This was the launch of this new program, and as with anything new there are always growing pains.

Welcome Reception and Quiz Bowl

The welcome reception was a great time of networking and an awesome opportunity to meet some amazing people. The quiz bowl was fun, but it didn’t seem like very many people were watching.

SQLServerCentral Party

This was a fun event. When you get there you get a ticket to exchange for chips. The more you win, the more tickets you get for the prize drawing at the end. I spent so much time chatting and networking with all the great people that I never played a game.


I got up at 6am PT this morning, got ready, got some coffee from Starbucks, and made it to the convention center by 7AM.  I had no problem following the signs to registration and there was no wait in getting registered.  I wish I had gotten in early enough yesterday to make registration, but no luck there.  The good news is that the convention center and hotel are so close that I had more than enough time to run back and drop off the laptop bag and goodies from registration in my room.  I met and chatted with a lot of folks and sat down for breakfast with Adam Saxton, Allen White, Wil Sisney and others before I dropped everything off.


I was lucky enough to attend a pre-con.  I chose to attend execution plans by Grant Fritchey and Gail Shaw.  I’m not going to blog the whole thing, but here are a subset of my notes to give you an idea of what was covered and the best tips I picked up. Please remember these points are just notes and not well formed.

  • Using Optimize for ad-hoc workloads has no downside, so it was suggested to turn it on regardless.  I’m not a fan of turning things on that you don’t need, but if there is no overhead and it could save you issues in the future then it’s worth it.
  • Always use SPs if you can.  They are stored in the engine and execution plans are cached.  I’ve been preaching this one myself.  Ad-hoc queries are more difficult to track down, especially if you don’t have optimize for ad-hoc workloads turned on.
  • Every SP has its own plan, so if an SP calls another SP they each have their own plan.
  • ANSI settings on the connection from SSMS and .NET are different, so you’ll get different plans, which makes troubleshooting the query more difficult.
  • Rebuilding your indexes can cause statistics to be out of date, which means you could end up with an inefficient plan in your plan cache.  Reorganizing an index does not update statistics.  <<<<ask and clarify this.
  • Every insert into a temp table causes a recompile of the plan.  Another performance hit of using temp tables.  Although table valued parameters NEVER cause a recompile.
  • Plan cache hit ratio.  There are some general numbers out there, but you can’t use them.  As usual you need a baseline for your system.  If you’re normally at a consistent 95% and you get a SUSTAINED drop to 92%, then that is a problem for you even though that 92% is above the suggested value.
  • The cache_miss event is not all that helpful for two reasons.  If you execute an SP from SSMS, then the call to the SP (the EXEC statement) causes a miss because the call itself never gets cached.  You’ll see a second one come in for the procedure itself, and that is the one you are interested in.  The other reason is that if the optimizer has to insert a plan into the cache, the miss is assumed and not recorded in the miss counter.  It is only counted in the cache_insert counter, so that counter is a better place to look.
  • If you use optimize for ad-hoc workloads and run a query from SSMS with the ESTIMATED plan, it is seen by the optimizer and will create a stub.  When you later run the query for real it will see the stub and cache the plan.
  • A nested loop join is actually a cursor!  Read the tooltip in the plan for the description and think about what it is saying.
  • Nested loop joins can be efficient if the outer table has a small amount of rows.
  • Scan count does not mean how many times SQL read the table or even how many times it accessed it.  Ignore this value and concentrate on Logical Reads instead.
  • Merge joins are extremely efficient.  You usually see them when the join columns are indexed.
  • If you see worktable in your IO statistics output, it is a temp table created by SQL, generally for a hash match or sort.
  • Sum and count are good to put in an include, but min and max are better in the predicate because it will already know the range.
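The first bullet above can be put into practice with sp_configure; since “optimize for ad hoc workloads” is an advanced option, it has to be exposed first:

```sql
-- Turn on 'optimize for ad hoc workloads' (advanced option, so expose it first)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;
```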


I met up with Allen Kinsel, Dave Stein, Jim Murphy, and John Clark and headed down to Lowell’s for the networking dinner hosted by Andy Warren and Steve Jones. When we got there the line was crazy long and we were all hungry, so we decided to head over to the Pike Place Grill. We all had a great time networking and then went to the Tap House. We met up with Tim Radney, Bill Graziano, and several others, but the Tap House was packed, with staff complaining of too many people. We all headed over to the Sheraton lobby and hung out for the rest of the evening.

It was an awesome first day and start to the conference with plenty of SQL goodness yet to come.

Today I will be LIVE blogging the keynote from the PASS Summit 2011 conference.  Keep your browser tuned in here and refresh often.  Today’s keynote is being delivered by Quentin Clark.  Quentin is Microsoft’s Corporate Vice President of the SQL Server database systems group.  I will also try to tweet updates as I can so make sure to follow me.  http://twitter.com/ryanjadams

Bill Graziano kicks us off on day 2 of SQLPASS and is sporting a kilt for the official SQL kilt day.  Bill starts off thanking all of the volunteers that make this community as amazing as it is.  He thanks chapter leaders, regional mentors, and special programs volunteers.

Bill thanks Jack Corbett and Tim Radney for outstanding service to the community.  Lori Edwards wins the PASSion 2011 award for outstanding commitment with everything she has done for PASS over the last year.  She is truly an inspiration for those of us that love this community.  Congratulations and thank you Lori!

Quentin shares the Microsoft vision that was introduced yesterday.  Their vision is any data anywhere, and of course integration with the cloud.  Quentin is sharing his favorite SQL Server features which he calls the fantastic 12.  The first 4 are required 9s uptime, fast performance, rapid data exploration, and managed self service BI.

Quentin brings a customer on stage to talk about their use of the SQL Server product.  We are watching an Always On demo for how their shipping company uses the feature.  This is one cool new feature that you need to check out.  We are shown the Always On dashboard and everyone claps as they point out that it is all green.  We now see how easy it can be to deploy a read only secondary that can be used for reporting and other purposes.  There is clapping from the crowd as they remember to start using zoomit so we can actually see the demo.

Quentin is moving on to his second favorite feature of performance which covers ColumnStore Index.  He is just quickly covering the specific features of his overall favorite categories.  He mentions things like PowerPivot, SharePoint, Data Quality Services, and Master Data Services.  He is just listing the features, but not digging in too much and the crowd is getting restless.

Everyone claps as we enter another demo.  That’s right, Contoso rises again! They are showing some data quality features where you can use metadata to validate your data.  Everyone sighs as they toss the cloud into the mix again.  They are showing how you can use services from the cloud in the Azure marketplace to validate the quality of your data.  I wonder what happens when the data in the cloud is wrong? It seems like you must place a lot of trust in MS and their data services.  I bet there is one heck of an agreement when using those services to indemnify MS from being responsible.

Quentin is up to 8 and 9 on his fantastic 12 list.  They are scalable data warehousing and fast time to solution.  The big thing pointed out here is appliances.  Another guest comes on stage to talk about parallel data warehousing.  Both HP and Dell appliances are featured.  HP’s has 700TB!  One of the smaller appliances looks like the robot from Lost in Space. Danger Will Robinson!!

Quentin is on to number 10, which is Extend Any Data Anywhere.  He covers added interoperability and announces a new ODBC driver for Linux.  He admits it’s a ploy to get them into the MS stack.  Nothing better than transparency, huh?  He also announces ODBC drivers for change data capture for SSIS and Oracle.  We have another guest on the stage showing a demo, and the crowd is clamoring for zoomit.  They are showing how semantic search works to bring you related data for your searches.  We are seeing how it can be used to search your document libraries.  Semantic search is like full text search on crack.  You can use it to search for keywords within your documents and even pull back the correlations in similar documents.

We are on to Quentin’s number 11 which is Optimized Productivity.  Here we have SQL Server Data Tools, formerly known as Juneau, as well as Unification Across Database and BI.

We quickly move on to number 12 which is Scale on Demand which covers AlwaysOn.  Our next guest from the SQL CAT team arrives on stage.  It looks like he is a Twitter user as he points out making all the bloggers happy by using zoomit.  We are seeing how to easily deploy a database to SQL Azure.  They are announcing Windows storage in the cloud so you can back up your Azure databases to cloud storage and restore from there as well.  They also show local SSMS management of your Azure DBs.  After the demo of this feature there is clapping, but our guest has to prompt everyone to get more excitement.

Our next guest from the Azure team comes on stage.  A new SQL Azure management interface is shown and it has live tiles.  They also announce that Azure can now hold databases up to 150GB.  These features will go live by the end of the year, so don’t go looking for that 150GB just yet.  The Azure reporting and data sync features are in CTP now.

Quentin is now talking about Hybrid IT where you can have combinations of server, private cloud, and public cloud.  It looks like Quentin is wrapping up by reviewing everything covered today.

The day 2 keynote is over.  Do not miss tomorrow’s keynote with Dr. David Dewitt.  I’m not at the live blogger table tomorrow, but I plan to blog it live anyway.






SQLSaturday #97 Austin

This was a great event! This was Austin’s first SQLSaturday, so it’s quite an accomplishment and something to celebrate.  I brought my family with me since my wife wanted to do some shopping in San Marcos. The drive down went well and we only had two hiccups. The first was just a minor hold up due to a grass fire in the highway median. Texas has been in quite a drought so this is unfortunately common right now. The second was traffic on 35 once we got into Austin, but I was able to get around all of it. The speaker dinner was held at the Iron Cactus restaurant and was a buffet style BBQ. The food was great and everyone had a great time networking with a bunch of wonderful people.

The next morning I headed out to the Thompson Conference Center on the University of Texas campus. Being a college, it was obviously well suited for an event like this. UT was late getting the building open so things started off behind schedule, but the Austin team was able to get everything back on track.

Opening Ceremony
Wes Brown gave a good opening and a great job thanking sponsors, volunteers, and speakers. My hat is off to Wes, Jim Murphy, and AJ Mendo for putting on a wonderful event.

Session 1
Michael Hotek spoke on SQL Server Performance Analysis. Mike is a Dallas speaker and helps run the Ft. Worth group, but this was my first time to hear him speak. He did a great job covering the process flow of the SQLOS to help explain performance waits. He talked about the gotchas with using performance monitor, talked about the DMVs, and even touched on extended events.

Session 2
Steven Ormond spoke on SQL Server Memory. Steven is a really great guy and it was a pleasure to meet him. He did a great job in his live demos of limiting the max server memory to demonstrate various methods of how to find memory pressure in your system. He showed what counters to look for in performance monitor and what they mean. He also showed how to find the bottlenecks using the DMVs. He is a new speaker and he delivered a fantastic session. I certainly hope he continues to speak.

Session 3
I spoke during this time slot on SQL Server Mirroring. If you attended this session or want to see what it was about then you can view the abstract and download the slide deck HERE.

Lunch was a standard lunch box with a sandwich, chips, cookie, and an apple. It was a pretty basic lunch as far as quality goes. The venue had a good setup for lunch time networking with an outside courtyard, and the weather was perfect.

Session 4
Jim Murphy spoke on Denali Always On. I’ve always loved and gravitated toward HA and DR technologies, and as a fan of both clustering and mirroring this was right up my alley. I admit that I should already be up to date on this feature, but simply have not had time. Jim wrote a nice little custom application front end to show the app connecting to the different replicas as he failed them over. It was an awesome touch and a great idea.

Session 5
Tim Radney spoke on TempDB. Tim is a fellow regional mentor and it was great to meet him. He had an amazing session on TempDB performance and using SQLQueryStress to create contention and show how to troubleshoot it. He covered everything from best practices and PFS/GAM/SGAM, to personal experience. This was a great session and I think I might need to put together something similar. Maybe I can convince Tim to give me some pointers.

Session 6
There were some great sessions, but I ended up using this as networking time, and as always it was time well spent.

The closing ceremony was a bit long for the raffle. This is a very common pain point for many of these events. I shared some ideas of what has worked for us in Dallas and hopefully that will help in the future.

Other Observations
They used the standard SQLSaturday evaluation forms that only have two criteria. One was for expectations (Did Not Meet, Met, Exceeded) and the other was a scale of 1 to 5 for overall quality. The fact that it is short and sweet might yield a greater return of forms.  People are more inclined to fill it out since it’s quick.  The tradeoff is whether it was enough for the speakers.  It worked fine for me as an attendee and speaker.

The team did a great job on the inside signs, but outside signs were a little lacking.

After Party
The after party was also held at the Iron Cactus. It was a great time of networking and everyone had a wonderful time. The turnout was small, but it’s hard to get people to go after a full day of drinking from the fire hose and time away from family. I’ve got some ideas that I hope to try out in the future to help this situation.

Icing on the Cake
Wes Brown was awarded an MVP award, and it could not have come on a better day for him. Congratulations Wes! You are most deserving of this award and we appreciate everything you do for the community.

Let’s take a look at how to schedule policy evaluation using the PBM On Schedule mode.  Start by double clicking on the policy of your choice.  Next we need to change the evaluation mode to On Schedule.  As soon as you make this change you will notice a red warning appear at the top of the dialog stating that you must assign a schedule.  You can either pick an existing schedule using the Pick button or create a new one with the New button.  Let’s click the New button and create a new schedule called “Every Day 2AM”.  Here is what the schedule should look like.


Back in the policy dialog you need to check off the Enable box and click OK to close the dialog.

If you go look at your SQL Server Agent jobs you will notice a new job with the prefix “syspolicy_check_schedule” followed by a unique identifier.  The first thing you should do is rename it, so you know what it does in the future.  Let’s run this job to test out our new policy.  The job will report success even if a policy violation occurs because the violation will be stored in PBM.  If you right click the policy and select history we can see the results.
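The rename can also be scripted with msdb’s sp_update_job; the GUID suffix below is a placeholder for whatever identifier your instance generated:

```sql
-- Rename the auto-generated PBM schedule job to something meaningful
EXEC msdb.dbo.sp_update_job
    @job_name = N'syspolicy_check_schedule_00000000-0000-0000-0000-000000000000',
    @new_name = N'PBM - Auto Create Statistics - Every Day 2AM';
```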

We can see that the most recent evaluation had a policy violation and we can see the results in the details pane.  Our ReportServer database has violated the policy and that’s easy to see in the details pane, but we only evaluated one policy against a handful of databases.  You’ll notice that the detail information is stored in XML format and could be time consuming to navigate if the job had a broader scope.  To get a better view of the result we can click the hyperlink in the details column to get a graphical view.
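If clicking through the history dialog gets tedious, the same evaluation history is stored in msdb and can be queried directly; here is a sketch against the syspolicy tables (assuming the SQL Server 2008 msdb schema):

```sql
-- List recent policy evaluations and their outcomes
SELECT p.name AS policy_name,
       h.start_date,
       h.result
FROM msdb.dbo.syspolicy_policy_execution_history AS h
JOIN msdb.dbo.syspolicy_policies AS p
    ON p.policy_id = h.policy_id
ORDER BY h.start_date DESC;
```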