Ryan Adams Blog

SQL, Active Directory, Scripting

I will be live blogging the SQLPASS Summit keynote again today. Today’s keynote will be delivered by community favorite Dr. David Dewitt.

Rick Heiges opens us up and introduces a special musical number by Rob Farley and Buck Woody. It was hilarious with a special lyrical shout out to Paul Randal. Rick is now walking us through the leadership changes in the PASS organization, with a very special thank you to past President Wayne Snyder as Rushabh Mehta becomes the immediate past President.

Rick is announcing upcoming events with SQLRally in Dallas on May 10-11. He also announces the next Summit in Seattle November 6-9. He also tells everyone that all Summit attendees will receive an e-book copy of the MVP Deep Dives book and to keep an eye on their email.

Dr. David Dewitt enters the stage with a great start by explaining why his wife is not here to watch him speak. There are lots of laughs. David starts his talk on big data and shares some size statistics for the systems behind larger web sites like Facebook and other social media platforms. The data sizes are astounding.

David is explaining the NOSQL movement and points out that it does NOT mean NO SQL, but rather not only SQL. So he wants us to think about large data and a mix of systems to support the data. We may have one entry point or front end to get the data, but on the backend some data might be in SQL and some might be in Hadoop. He is trying to get us to think about the data itself, and that not all data is relational; some of it is better suited to other storage systems.

David explains that NOSQL is not a paradigm shift and that an RDBMS is still the best way to store data efficiently. However, some data, like unstructured data, is not well suited to an RDBMS. He plans on talking about Hadoop and how it works.

David is explaining the Hadoop file system called HDFS. HDFS does not replace the Windows file system or NTFS but sits on top of it. The blocks are stored and replicated by a factor of three. It sounds like RAID 5 but spread across multiple nodes with separate storage systems. The first copy is written to the node where the transaction originated, the second to a node in the same rack, and the third to a node in a different rack. The replication of this data is handled by a name node or primary node (which also has a backup node). It monitors all the other nodes with heartbeats and decides how to distribute the data among the nodes.

David is now explaining how Hadoop handles failures and that it was designed to expect failures. It does use checksums for reads and writes, but expects that hardware and software failures will occur. When a failure occurs on a node, the name node finds the blocks that are missing and replicates them to other nodes to maintain that factor of three. I can’t help but think this sounds like RAID, except it is supported by replication as opposed to multiple writes across disks. With a factor of three I envision it like a RAID 5 on top of a RAID 5, but all data is written by replication instead of multiple writes on a single disk array.

David is explaining how Hadoop finds the data when you request it, since you don’t know where it is stored. David is moving on to how MapReduce works with an animation he jokes took him 6 hours to come up with. The audience loves the animation explanation, with some clapping. He shows how map tasks find the data across nodes and then hand it off to the reduce tasks, which take the data from the multiple nodes and reduce it to a single output.

Now we are hearing that after Hadoop came out, Facebook and Yahoo started using it. However, they both came to different conclusions on the language. Facebook came up with something very SQL like, but Yahoo came up with something more procedural. David brings up a slide with a lot of writing and jokes that it is not meant to be read and that he will not be using zoomit. The crowd loves the comment, with lots of laughs after the lack of use in other keynotes this week. David now points out that of the 150k jobs Facebook runs, only 500 are MapReduce jobs and the rest are Hive SQL. Now we are seeing how Hive tables are designed and that they are partitioned by a particular attribute.

Now we are seeing how Hive relates to parallel data warehousing. We can see how Hive is great for unstructured data that is not related, but how SQL is much better with relational data due to a common schema and partitioning method. Now David is talking about putting the two things together and connecting the universe. We see the difficulties in getting data from both worlds in regards to performance. He explains the SQOOP approach and its challenges, and that there must be a better way. SQOOP moves the data from one world to the other to get at it.

David asks what if we don’t move the data, but instead put a management system between the two that understands how to get the data from both systems separately. This is something he is working on in his labs, as it becomes clear that we will be living in a world with both types of data and a need to get information from both and relate it.

David is wrapping things up with a recap and driving home the major points. The biggest one is that SQL is not going away and neither is Hadoop or other unstructured data systems, and we need to work with both.
That’s the end of the last SQLPASS 2011 summit keynotes. The crowd is wild about David with a huge standing ovation!

I was chosen to sit at the blogger table and live blog the keynote tomorrow, so plan to be here at 8:15 PT and I’ll refresh the post that day as often as possible. I may not be at the table today, but that won’t stop me from blogging anyway.

Keynote
Rushabh Mehta starts us out with a recap of all that PASS has accomplished in the last year. He thanks everyone involved from the PASS board to those that volunteer for SQLSaturdays or user group meetings. The SQL community is growing by leaps and bounds and it is very exciting.

Ted Kummert is the keynote speaker today. Ted recognizes the success of MS due to the work and success of the SQL community. He talks about how their success is based on community success. As everyone expected, it didn’t take long before we heard the word “cloud”. Ted talks about how the cloud is another revolutionary change in the data technology landscape and its economic impact. It was a profound statement, but then Ted took a little bit of a turn. We always hear MS pushing the cloud and they are always referring to THEIR cloud. However, Ted talks about how we will have a choice and mix of MS cloud, private cloud, and partner cloud.

Ted officially announced that the next version of SQL Server code named Denali will be released in the first half of 2012 and will be called SQL Server 2012. He also announces that SQL Azure will fully integrate and support Hadoop. We then got to see a demo of using the MS BI stack to consume and dig into Hadoop data.

Sessions
I had a chapter leader and regional mentor meeting so I didn’t attend the first session. It was a good meeting, with good ideas. I wish there had been more time, but the larger meeting was yesterday and I was in a pre-con. I’m very happy that they had a second meeting for those of us that could not make the first one.

I watched Andrew Kelly present on SQL 2008 Query stats. He did a good job of explaining the DMVs used to get the stats you need to understand what is going on in your system. The DMVs are great, but it is not always clear how to get the information you need. You generally have to correlate 2 or more DMVs to get what you want. Andrew’s session really helped to show those correlations.
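
As a rough illustration of that kind of DMV correlation (my own sketch, not Andrew’s actual demo script), joining sys.dm_exec_query_stats to sys.dm_exec_sql_text and sys.dm_exec_query_plan surfaces the most expensive statements along with their text and plans:

    -- A minimal sketch of correlating the query stats DMVs (SQL Server 2008 and later).
    SELECT TOP (20)
           qs.execution_count,
           qs.total_logical_reads,
           qs.total_worker_time,
           SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                     ((CASE qs.statement_end_offset
                           WHEN -1 THEN DATALENGTH(st.text)
                           ELSE qs.statement_end_offset
                       END - qs.statement_start_offset) / 2) + 1) AS statement_text,
           qp.query_plan
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
    ORDER BY qs.total_logical_reads DESC;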

I also watched Brent Ozar do his Blitz session. I saw his first one online, but this was his second, new, and improved version. In the session he shows a script he wrote and provides for free that has a ton of elements to assist you in assessing a new server you have just inherited. Brent added some really good additional things to the script. The really shiny thing he added was wrapping the script in a stored procedure that takes all the results of the blitz script and prioritizes them. This is Brent…so he didn’t stop there. The SP actually uses OpenRowset to connect to a SQL Server he has running in the cloud and updates the SP definition. How cool is that? It’s like Windows Update. Make sure to go check it out at brentozar.com/blitz.
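
I haven’t seen the internals of Brent’s script, so the following is only a rough, hypothetical sketch of the self-update pattern he described (the server, database, and table names are made up, and it requires the Ad Hoc Distributed Queries option to be enabled):

    -- Hypothetical sketch of the "check a remote server for a newer version" pattern.
    -- Not Brent's actual code; the server, database, and table names are invented.
    SELECT remote.latest_version
    FROM OPENROWSET(
            'SQLNCLI',
            'Server=updates.example.com;Database=ScriptUpdates;Uid=reader;Pwd=reader;',
            'SELECT MAX(version_number) AS latest_version FROM dbo.ScriptVersions'
         ) AS remote;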

Chapter Lunch
Today was chapter lunch day. Every PASS chapter had a table and sign in the lunch room. Mad props to whoever started this idea. You would be surprised how many people come to the summit and don’t know about local chapters in their area. This does not just apply to the smaller cities but even larger cities as well. It is a great way to connect people who are local to each other and grow the local chapters.

Expo Hall
This is an open networking time to talk to all the vendors and see the solutions they provide. Of course, there is always the SWAG! I had a good time and there are several things I plan on testing out, but I found the SWAG a little lacking.

SQLPeople Party
SQLPeople.com is the brainchild of Andy Leonard and he held an event sponsored by Embarcadero for all those involved with the project. It was at the Tap House Grill. The Tap House is a great place and perfect for a smaller event like this. I had a great time meeting some other wonderful SQL people.

Today was my travel day.  I don’t travel much at all so it’s always an exciting adventure for me.  I left the house with plenty of time and made it to the Parking Spot in about 40-45 minutes.  If you travel out of DFW international airport I highly suggest using these guys for parking.  The airport itself is very large and not fun to navigate, but getting to the Parking Spot is easy.  Once you get there they tell you what row to park in and their van picks you up right from your parking spot.  The staff are always friendly and even handle your luggage for you.  Once you are on the bus, you have no worries about navigating the often confusing DFW airport, since they drop you off right at the terminal you need.  Once you return they pick you up from the terminal baggage claim and run through about every 10 minutes for pickups.  They drop you back off right at your car and give you a free bottle of water for the drive home.

Once I got to the terminal I had to check a bag, but there was a very short line.  Security was also no big deal and only took about 10 minutes to get through, so everything went very smoothly.  Of course my plane left at 7pm so the slightly later departure may have aided in the short lines.  I’m glad it all went well, because I needed to grab a bite to eat and had just enough time to get something from the food court.  I then headed over to my gate and found David Stein, and it’s always nice to run into a friend.

The flight itself was very smooth and completely on time!  Always a good thing, but especially since my flight didn’t arrive until a little after 9 PT, which is 11 my time.  Dave and I met up and headed down to the baggage claim to get our stuff.  From there we headed over to the Seattle Link Light Rail.  They don’t take American Express so I had to pay cash, but hardly a big deal for a one way trip of $2.75.  The whole process was easy and cheap, and I’ll certainly continue to use the Light Rail on future trips.

The trip took 40 minutes to get from the airport to the Westlake exit.  I’m staying in the Sheraton and I don’t think it even took me 5 minutes to walk from the train station to the hotel.  It was a perfect choice.  I got checked into the hotel, unpacked, ironed some shirts, and headed down to the hotel lounge to see who I could find.  I ran into Denny Cherry, Jim Murphy, and Wes Brown.  We chatted for a while and Wes and I decided to call it a night after a long travel day.  I think I finally went to sleep around midnight PT or 2am my time.

Everything went great and it was an awesome start to the trip.

This morning I headed over to Top Pot doughnuts to hang out with Andy Warren, Steve Jones, Bill Fellows, David Fargo, Tim Radney, and many others. It was a nice time of networking and a good small crowd. The doughnuts were good with a large and interesting selection and the coffee was good as well.

Again I was lucky to attend a pre-con today.  Today I chose to attend Denali Always On by Allan Hirt. I have always gravitated toward HA and DR solutions so I’m looking forward to getting up to speed with Denali Always On. Here is a subset of my notes from the class to give you an idea of what was covered. Please note that there will not be any formatting so they may look scattered.

You can now have a TempDB local to each node, which can improve performance. Make sure the SQL Server service account has rights to the folder.
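
For reference, pointing TempDB at a node-local drive is just a matter of relocating its files. This is a hedged sketch with an example path; the folder has to exist on every node, and the change only takes effect after a service restart:

    -- Sketch: relocate TempDB to a local drive on each node (the path is an example only).
    ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf');
    ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'T:\TempDB\templog.ldf');
    -- Restart the SQL Server service for the new file locations to take effect.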

When going through the add node install in CTP3, it does not create the TempDB folder. You have to create it manually.

As you patch nodes in your cluster, remember to remove the node you are patching as a possible owner before patching. You don’t want SQL to attempt to fail over to that node in the middle of applying the patch.

You have to enable trace flag 9532 to get more than one availability group replica in CTP3.
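
If I have my notes right, the CTP3 workaround looks something like the following; verify it against the CTP documentation before relying on it:

    -- Sketch: enable trace flag 9532 globally in CTP3 to allow more than one AG replica.
    DBCC TRACEON (9532, -1);
    -- Or add -T9532 as a SQL Server startup parameter so it survives restarts.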

AG integrates with failover clustering, so if you are combining the two the way clustering and mirroring used to be combined, you no longer have to tweak the mirroring timeouts. An AG will not fail over until the cluster completely fails, so FC is your primary HA and AG could be HA/DR.

You can take backups of the replica, but since Full Recovery mode is required you still have to take log backups of the primary or the log will grow. This has not been confirmed, but is likely the case.
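
In other words, even with backups offloaded to a replica, you would still schedule something like this against the primary so the log can truncate (a sketch with a hypothetical database name and path):

    -- Sketch: routine log backup on the primary to keep the log in check (names are examples).
    BACKUP LOG MyAvailabilityDb
    TO DISK = 'B:\Backups\MyAvailabilityDb_log.trn'
    WITH COMPRESSION, CHECKSUM;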

First Timers Orientation

The first timers program was a great idea and as a first timer I can say that it works, though it still needs some work on organization. Even the big brother/sister sponsors really didn’t know what was going on. When you entered, everyone was given a sticker with a number and color. The idea was to find the person with the same number as you but the opposite color. It was a great idea to get people to meet someone new, but this process was never explained.

PASS did a really cool entrance of the first timers into the reception, but we were just told to split on one side of the room or the other and watch the video. That’s all we knew, and we had no idea what was going on. Rushabh Mehta was doing the introduction and at the end he tells everyone to look toward the end of the room for the curtain. All of the first timers were trying to make sense out of it while turning and staring at a blank wall. It was not until the curtain opened on the other end of the room that you could hear a collective, “Ohhhh”.

The orientation was great and I definitely think it should be continued; it just needs some better communication and organization. This was the launch of this new program and as with anything new there are always growing pains.

Welcome Reception and Quiz Bowl

The welcome reception was a great time of networking and an awesome opportunity to meet some amazing people. The quiz bowl was fun, but it didn’t seem like very many people were watching.

SQLServerCentral Party

This was a fun event. When you get there you get a ticket to exchange for chips. The more you win, the more tickets you get for the prize drawing at the end. I spent so much time chatting and networking with all the great people that I never played a game.

Morning

I got up at 6am PT this morning, got ready, got some coffee from Starbucks, and made it to the convention center by 7AM.  I had no problem following the signs to registration and there was no wait in getting registered.  I wish I had gotten in early enough yesterday to make registration, but no luck there.  The good news is that the convention center and hotel are so close I had more than enough time to run back and drop off the laptop bag and goodies from registration in my room.  I met and chatted with a lot of folks and sat down for breakfast with Adam Saxton, Allen White, Wil Sisney and others before I dropped everything off.

Pre-Con

I was lucky enough to attend a pre-con.  I chose to attend execution plans by Grant Fritchey and Gail Shaw.  I’m not going to blog the whole thing, but here is a subset of my notes to give you an idea of what was covered and the best tips I picked up. Please remember these points are just notes and not well formed.

  • Using Optimize for ad-hoc workloads has no downside, so it was suggested to turn it on regardless (see the first sketch after this list).  I’m not a fan of turning things on that you don’t need, but if there is no overhead and it could save you issues in the future then it’s worth it.
  • Always use SPs if you can.  They are stored in the engine and execution plans are cached.  I’ve been preaching this one myself.  Ad-hoc queries are more difficult to track down, especially if you don’t have optimize for ad-hoc workloads turned on.
  • Every SP has its own plan, so if an SP calls another SP they each have their own plan.
  • ANSI settings on the connection from SSMS and .NET are different, so you’ll get different plans, which makes troubleshooting the query more difficult.
  • Rebuilding your indexes can cause statistics to be out of date, which means you could end up with an inefficient plan in your plan cache.  Reorganizing an index does not update statistics.  (Note to self: ask about and clarify this one.)
  • Every insert into a temp table causes a recompile of the plan.  Another performance hit of using temp tables.  Although table valued parameters NEVER cause a recompile.
  • Plan cache hit ratio.  There are some general numbers out there, but you can’t rely on them.  As usual you need a baseline for your system (see the second sketch after this list).  If you’re normally at a consistent 95% and you get a SUSTAINED drop to 92% then that is a problem for you even though that 92% is above the suggested value.
  • The cache_miss event is not all that helpful for two reasons.  If you execute an SP from SSMS then the call to the SP (the EXEC statement) causes a miss because the call itself never gets cached.  You’ll see a second one come in for the procedure itself, and that is the one you are interested in.  The other reason is that if the optimizer has to insert a plan into the cache, the miss is assumed and not recorded in the miss counter.  It is only counted in the cache_insert counter, so that counter is a better place to look.
  • If you use optimize for ad-hoc workloads and run a query from SSMS with the ESTIMATED plan, it is seen by the optimizer and will create a stub.  When you later run the query for real it will see the stub and cache the plan.
  • A nested loop join is actually a cursor!  Read the tooltip in the plan for the description and think about what it is saying.
  • Nested loop joins can be efficient if the outer table has a small amount of rows.
  • Scan count does not mean how many times SQL read the table or even how many times it accessed it.  Ignore this value and concentrate on Logical Reads instead.
  • Merge joins are extremely efficient.  You usually see them when the join columns are indexed.
  • If you see worktable in your IO statistics output, it is a temp table created by SQL Server, generally for a hash match or sort.
  • Sum and count are good to put in an include, but min and max are better in the predicate because it will already know the range.
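
To put the first bullet above in context, here is a minimal sketch (my own example, not from the pre-con) of turning on optimize for ad hoc workloads and then checking the cache, where a first execution leaves a stub and a second execution caches the full plan:

    -- Sketch: enable optimize for ad hoc workloads (an advanced option) and inspect the cache.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'optimize for ad hoc workloads', 1;
    RECONFIGURE;

    -- After running an ad hoc query once you should see a 'Compiled Plan Stub';
    -- run it a second time and the full 'Compiled Plan' gets cached.
    SELECT cp.objtype, cp.cacheobjtype, cp.usecounts, cp.size_in_bytes, st.text
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
    WHERE cp.objtype = 'Adhoc'
    ORDER BY cp.usecounts DESC;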
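
For the plan cache hit ratio bullet, a baseline can be sampled from the performance counter DMV along these lines (a sketch; the counter and object names are from memory, so verify them on your build):

    -- Sketch: sample the plan cache hit ratio so you can baseline it over time.
    SELECT r.cntr_value * 100.0 / NULLIF(b.cntr_value, 0) AS plan_cache_hit_ratio_pct
    FROM sys.dm_os_performance_counters AS r
    JOIN sys.dm_os_performance_counters AS b
        ON  b.object_name   = r.object_name
        AND b.instance_name = r.instance_name
        AND b.counter_name  = 'Cache hit ratio base'
    WHERE r.object_name LIKE '%Plan Cache%'
      AND r.counter_name = 'Cache hit ratio'
      AND r.instance_name = '_Total';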

Evening

I met up with Allen Kinsel, Dave Stein, Jim Murphy, and John Clark and headed down to Lowell’s for the networking dinner hosted by Andy Warren and Steve Jones. When we got there the line was crazy long and we were all hungry, so we decided to head over to the Pike Place Grill. We all had a great time networking and then went to the Tap House. We met up with Tim Radney, Bill Graziano, and several others, but the Tap House was packed, with staff complaining of too many people. We all headed over to the Sheraton lobby and hung out for the rest of the evening.

It was an awesome first day and start to the conference with plenty of SQL goodness yet to come.

Today I will be LIVE blogging the keynote from the PASS Summit 2011 conference.  Keep your browser tuned in here and refresh often.  Today’s keynote is being delivered by Quentin Clark.  Quentin is Microsoft’s Corporate Vice President of the SQL Server database systems group.  I will also try to tweet updates as I can so make sure to follow me.  http://twitter.com/ryanjadams

Bill Graziano kicks us off on day 2 of SQLPASS and is sporting a kilt for the official SQL kilt day.  Bill starts off thanking all of the volunteers that make this community as amazing as it is.  He thanks chapter leaders, regional mentors, and special programs volunteers.

Bill thanks Jack Corbett and Tim Radney for outstanding service to the community.  Lori Edwards wins the PASSion 2011 award for outstanding commitment with everything she has done for PASS over the last year.  She is truly an inspiration for those of us that love this community.  Congratulations and thank you Lori!

Quentin shares the Microsoft vision that was introduced yesterday.  Their vision is any data anywhere, and of course integration with the cloud.  Quentin is sharing his favorite SQL Server features, which he calls the fantastic 12.  The first 4 are required 9s of uptime, fast performance, rapid data exploration, and managed self-service BI.

Quentin brings a customer on stage to talk about their use of the SQL Server product.  We are watching an Always On demo for how their shipping company uses the feature.  This is one cool new feature that you need to check out.  We are shown the Always On dashboard and everyone claps as they point out that it is all green.  We now see how easy it can be to deploy a read only secondary that can be used for reporting and other purposes.  There is clapping from the crowd as they remember to start using zoomit so we can actually see the demo.

Quentin is moving on to his second favorite feature of performance which covers ColumnStore Index.  He is just quickly covering the specific features of his overall favorite categories.  He mentions things like PowerPivot, SharePoint, Data Quality Services, and Master Data Services.  He is just listing the features, but not digging in too much and the crowd is getting restless.

Everyone claps as we enter another demo.  That’s right, Contoso rises again! They are showing some data quality features where you can use metadata to validate your data.  Everyone sighs as they toss the cloud into the mix again.  They are showing how you can use services from the cloud in the Azure marketplace to validate the quality of your data.  I wonder what happens when the data in the cloud is wrong? It seems like it requires a lot of trust in MS and their data services.  I bet there is one heck of an agreement when using those services to indemnify MS from being responsible.

Quentin is up to 8 and 9 on his fantastic 12 list.  They are scalable data warehousing and fast time to solution.  The big thing pointed out here is appliances.  Another guest comes on stage to talk about parallel data warehousing.  Both HP and Dell appliances are featured.  HP’s has 700TB!  One of the smaller appliances looks like the robot from Lost in Space. Danger Will Robinson!!

Quentin is on to number 10, which is Extend any data anywhere.  He covers added interoperability and announces a new ODBC driver for Linux.  He admits it’s a ploy to get them into the MS stack.  Nothing better than transparency, huh?  He also announces ODBC drivers for change data capture for SSIS and Oracle.  We have another guest on the stage showing a demo, and the crowd is clamoring for zoomit.  They are showing how semantic search works to bring you related data for your searches.  We are seeing how it can be used to search your document libraries.  Semantic search is like full text search on crack.  You can use it to search for keywords within your documents and even pull back the correlations in similar documents.

We are on to Quentin’s number 11, which is Optimized Productivity.  Here we have SQL Server Data Tools, formerly known as Juneau, as well as Unification Across Database and BI.

We quickly move on to number 12 which is Scale on Demand which covers AlwaysOn.  Our next guest from the SQL CAT team arrives on stage.  It looks like he is a Twitter user as he points out making all the bloggers happy by using zoomit.  We are seeing how to easily deploy a database to SQL Azure.  They are announcing Windows storage in the cloud so you can back up your Azure databases to cloud storage and restore from there as well.  They also show local SSMS management of your Azure DBs.  After the demo of this feature there is clapping, but our guest has to prompt everyone to get more excitement.

Our next guest from the Azure team comes on stage.  A new SQL Azure management interface is shown and it has live tiles.  They also announce that Azure can now hold up to 150GB databases.  These features will go live by the end of the year, so don’t go looking for that 150GB just yet.  The Azure reporting and data sync features are in CTP now.

Quentin is now talking about Hybrid IT where you can have combinations of server, private cloud, and public cloud.  It looks like Quentin is wrapping up by reviewing everything covered today.

The day 2 keynote is over.  Do not miss tomorrow’s keynote with Dr. David Dewitt.  I’m not at the live blogger table tomorrow, but I plan to blog it live anyway.

SQLSaturday #97 Austin

This was a great event! This was Austin’s first SQLSaturday, so it’s quite an accomplishment and something to celebrate.  I brought my family with me since my wife wanted to do some shopping in San Marcos. The drive down went well and we only had two hiccups. The first was just a minor hold up due to a grass fire in the highway median. Texas has been in quite a drought so this is unfortunately common right now. The second was traffic on 35 once we got into Austin, but I was able to get around all of it. The speaker dinner was held at the Iron Cactus restaurant and was a buffet style BBQ. The food was great and everyone had a great time networking with a bunch of wonderful people.

The next morning I headed out to the Thompson Conference Center on the University of Texas campus. Being a college, it was obviously well suited for an event like this. UT was late getting the building open so things started off behind schedule, but the Austin team was able to get everything back on track.

Opening Ceremony
Wes Brown gave a good opening and did a great job thanking sponsors, volunteers, and speakers. My hat is off to Wes, Jim Murphy, and AJ Mendo for putting on a wonderful event.

Session 1
Michael Hotek spoke on SQL Server Performance Analysis. Mike is a Dallas speaker and helps run the Ft. Worth group, but this was my first time to hear him speak. He did a great job covering the process flow of the SQLOS to help explain performance waits. He talked about the gotchas with using performance monitor, talked about the DMVs, and even touched on extended events.

Session 2
Steven Ormond spoke on SQL Server Memory. Steven is a really great guy and it was a pleasure to meet him. He did a great job in his live demos of limiting the max server memory to demonstrate various methods of how to find memory pressure in your system. He showed what counters to look for in performance monitor and what they mean. He also showed how to find the bottlenecks using the DMVs. He is a new speaker and he delivered a fantastic session. I certainly hope he continues to speak.

Session 3
I spoke during this time slot on SQL Server Mirroring. If you attended this session or want to see what it was about then you can view the abstract and download the slide deck HERE.

Lunch
Lunch was a standard lunch box with a sandwich, chips, cookie, and an apple. It was a pretty basic lunch as far as quality goes. The venue had a good setup for lunch time networking with an outside courtyard, and the weather was perfect.

Session 4
Jim Murphy spoke on Denali Always On. I’ve always loved and gravitated toward HA and DR technologies, and as a fan of both clustering and mirroring this was right up my alley. I admit that I should already be up to date on this feature, but simply have not had time. Jim wrote a nice little custom application front end to show the app connecting to the different replicas as he failed them over. It was an awesome touch and a great idea.

Session 5
Tim Radney spoke on TempDB. Tim is a fellow regional mentor and it was great to meet him. He had an amazing session on TempDB performance and using SQLQueryStress to create contention and show how to troubleshoot it. He covered everything from best practices and PFS/GAM/SGAM, to personal experience. This was a great session and I think I might need to put together something similar. Maybe I can convince Tim to give me some pointers.

Session 6
There were some great sessions, but I ended up using this as networking time, and as always it was time well spent.

Closing
The closing ceremony was a bit long for the raffle. This is a very common pain point for many of these events. I shared some ideas of what has worked for us in Dallas and hopefully that will help in the future.

Other Observations
They used the standard SQLSaturday evaluation forms that only have two criteria. One was for expectations (Did Not Meet, Met, Exceeded) and the other was a scale of 1 to 5 for overall quality. The fact that it is short and sweet might yield a greater return of forms.  People are more inclined to fill it out since it’s quick.  The tradeoff is whether it was enough for the speakers.  It worked fine for me as an attendee and speaker.

The team did a great job on the inside signs, but outside signs were a little lacking.

After Party
The after party was also held at the Iron Cactus. It was a great time of networking and everyone had a wonderful time. The turnout was small, but it’s hard to get people to go after a full day of drinking from the fire hose and time away from family. I’ve got some ideas that I hope to try out in the future to help this situation.

Icing on the Cake
Wes Brown was awarded an MVP award, and it could not have come on a better day for him. Congratulations Wes! You are most deserving of this award and we appreciate everything you do for the community.

Let’s take a look at how to schedule policy evaluation using the PBM On Schedule mode.  Start by double-clicking on the policy of your choice.  Next we need to change the evaluation mode to On Schedule.  As soon as you make this change you will notice a red warning appear at the top of the dialog stating that you must assign a schedule.  You can either pick an existing schedule using the Pick button or create a new one with the New button.  Let’s click the New button and create a new schedule called “Every Day 2AM”.  Here is what the schedule should look like.

[Screenshot: the “Every Day 2AM” schedule dialog]

Back in the policy dialog you need to check off the Enable box and click OK to close the dialog.

If you go look at your SQL Server Agent jobs you will notice a new job with the prefix “syspolicy_check_schedule” followed by a unique identifier.  The first thing you should do is rename it, so you know what it does in the future.  Let’s run this job to test out our new policy.  The job will report success even if a policy violation occurs because the violation will be stored in PBM.  If you right click the policy and select history we can see the results.
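
Renaming the job is a one-liner if you prefer T-SQL over the Agent UI (the GUID suffix below is obviously made up; copy the real job name from your own Agent job list):

    -- Sketch: rename the auto-generated PBM schedule job to something meaningful.
    EXEC msdb.dbo.sp_update_job
         @job_name = N'syspolicy_check_schedule_1A2B3C4D-0000-0000-0000-000000000000',  -- example name
         @new_name = N'PBM - Evaluate policies - Every Day 2AM';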

We can see that the most recent evaluation had a policy violation and we can see the results in the details pane.  Our ReportServer database has violated the policy and that’s easy to see in the details pane, but we only evaluated one policy against a handful of databases.  You’ll notice that the detail information is stored in XML format and could be time consuming to navigate if the job had a broader scope.  To get a better view of the result we can click the hyperlink in the details column to get a graphical view.
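
If you would rather query the history than click through it, the results also live in msdb. This is a hedged sketch; the view and column names are as I remember them, so double-check them against your msdb before relying on it:

    -- Sketch: pull recent policy evaluation results, including the XML detail, from msdb.
    SELECT p.name          AS policy_name,
           h.start_date,
           h.result        AS policy_result,
           d.target_query_expression,
           d.result        AS target_result,
           d.result_detail AS detail_xml
    FROM msdb.dbo.syspolicy_policies AS p
    JOIN msdb.dbo.syspolicy_policy_execution_history AS h
        ON h.policy_id = p.policy_id
    LEFT JOIN msdb.dbo.syspolicy_policy_execution_history_details AS d
        ON d.history_id = h.history_id
    ORDER BY h.start_date DESC;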

There are four different ways we can evaluate policies against servers in our environment.  These methods are all manual and I will cover how to automate them in another post.  This is just a quick list to show your available options, and where they are located.

  1. We can evaluate a single policy against a single instance by simply right clicking on the policy and selecting evaluate.
  2. We can evaluate multiple policies against a single instance by right clicking the Policies node and selecting evaluate.  This opens a dialog where we can choose multiple policies to evaluate against the local instance.
  3. We can evaluate a single policy against multiple instances.  This is where the power of Central Management Server comes in.  You can right click on a group in your CMS, select evaluate policies, choose your PBM server source, and select a policy to evaluate against the instances in the group.
  4. We can also evaluate multiple policies against multiple instances.  We use our CMS again just like the previous bullet point, but simply select multiple policies to evaluate.

Here is what the policy selection screen looks like for numbers 2, 3, and 4 above.

PBM Select Multiple policies dialog

PBM has four evaluation modes that provide us with flexibility in the way we evaluate policies against our SQL instances.  Here are the options and what we can use them for:

  • On Demand – This is a manual policy evaluation.  Policies in this mode can only be evaluated by performing a policy evaluation in SSMS.
  • On Schedule – This allows you to automate the evaluation of policies.  Policies in this mode require a schedule and are executed via SQL Agent jobs.
  • On Change: Log Only – This mode only evaluates a policy in response to a SQL Server DDL Event.  The action performed by the user is allowed to complete and the result is logged in PBM.
  • On Change: Prevent – This mode only evaluates a policy in response to a SQL Server DDL Event.  The action performed by the user is wrapped in a transaction and automatically rolled back if it violates the policy.