Ryan Adams Blog

SQL, Active Directory, Scripting

Morning

I got up at 6 AM PT this morning, got ready, got some coffee from Starbucks, and made it to the convention center by 7 AM.  I had no problem following the signs to registration and there was no wait in getting registered.  I wish I had gotten in early enough yesterday to make registration, but no luck there.  The good news is that the convention center and hotel are so close that I had more than enough time to run back and drop off the laptop bag and goodies from registration in my room.  I met and chatted with a lot of folks and sat down for breakfast with Adam Saxton, Allen White, Wil Sisney, and others before I dropped everything off.

Pre-Con

I was lucky enough to attend a pre-con.  I chose the session on execution plans by Grant Fritchey and Gail Shaw.  I’m not going to blog the whole thing, but here is a subset of my notes to give you an idea of what was covered and the best tips I picked up.  Please remember these points are just notes and not well formed.

  • Using Optimize for ad-hoc workloads has no downside, so it was suggested to turn it on regardless.  I’m not a fan of turning things on that you don’t need, but if there is no overhead and it could save you issues in the future then it’s worth it.  (A quick sketch of turning it on follows this list.)
  • Always use SPs if you can.  They are stored in the engine and their execution plans are cached.  I’ve been preaching this one myself.  Ad-hoc queries are more difficult to track down, especially if you don’t have Optimize for ad-hoc workloads turned on.
  • Every SP has its own plan, so if an SP calls another SP they each have their own plan.
  • The ANSI settings on a connection from SSMS and one from .NET are different, so you’ll get different plans, which makes troubleshooting the query more difficult.
  • Rebuilding your indexes can cause statistics to be out of date, which means you could end up with an inefficient plan in your plan cache.  Reorganizing an index does not update statistics.  (I need to ask someone and clarify this point; see the REBUILD/REORGANIZE sketch after this list.)
  • Every insert into a temp table causes a recompile of the plan.  Another performance hit of using temp tables.  Table valued parameters, on the other hand, NEVER cause a recompile.
  • Plan cache hit ratio.  There are some general numbers out there, but you can’t use them.  As usual you need a baseline for your system.  If you’re normally at a consistent 95% and you get a SUSTAINED drop to 92%, then that is a problem for you even though 92% is above the suggested value.
  • The cache_miss event is not all that helpful, for two reasons.  If you execute an SP from SSMS, the call to the SP (the EXEC statement) causes a miss because the call itself never gets cached.  You’ll see a second one come in for the procedure itself, and that is the one you are interested in.  The other reason is that if the optimizer has to insert a plan into the cache, the miss is assumed and not recorded in the miss counter.  It is only counted in the cache_insert counter, so that counter is a better place to look.
  • If you use Optimize for ad-hoc workloads and run a query from SSMS with the ESTIMATED plan, it is seen by the optimizer and a stub is created.  When you later run the query for real it will see the stub and cache the plan.
  • A nested loop join is actually a cursor!  Read the tooltip in the plan for the description and think about what it is saying.
  • Nested loop joins can be efficient if the outer table has a small number of rows.
  • Scan count does not mean how many times SQL read the table, or even how many times it accessed it.  Ignore this value and concentrate on Logical Reads instead.
  • Merge joins are extremely efficient.  You usually see them when the join columns are indexed.
  • If you see Worktable in your IO statistics output, it is a temp table created by SQL, generally for a hash match or sort.
  • Columns you SUM or COUNT are good to put in an INCLUDE, but MIN and MAX are better as key columns because the engine will already know the range.
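Since that first note begs for concrete syntax, here is a minimal sketch of turning on Optimize for ad-hoc workloads and then looking for the plan stubs it creates.  The option and the DMV are standard; the rest is just illustration and not anything shown at the pre-con:

    -- 'optimize for ad hoc workloads' is an advanced option, so expose it first.
    EXEC sys.sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sys.sp_configure 'optimize for ad hoc workloads', 1;
    RECONFIGURE;

    -- With the option on, a first execution caches only a small stub.
    -- A second execution replaces the stub with the full plan.
    SELECT usecounts, cacheobjtype, objtype, size_in_bytes
    FROM sys.dm_exec_cached_plans
    WHERE cacheobjtype = 'Compiled Plan Stub';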
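And on the statistics note, here is the behavior as I understand it, sketched against a hypothetical dbo.MyTable (the table is mine, not the presenters’):

    -- REBUILD recreates the index and updates its statistics with the
    -- equivalent of a FULLSCAN; REORGANIZE does not touch statistics at all.
    ALTER INDEX ALL ON dbo.MyTable REBUILD;
    ALTER INDEX ALL ON dbo.MyTable REORGANIZE;

    -- Check when each statistic on the table was last updated.
    SELECT name, STATS_DATE(object_id, stats_id) AS last_updated
    FROM sys.stats
    WHERE object_id = OBJECT_ID('dbo.MyTable');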

Evening

I met up with Allen Kinsel, Dave Stein, Jim Murphy, and John Clark and headed down to Lowell’s for the networking dinner hosted by Andy Warren and Steve Jones.  When we got there the line was crazy long and we were all hungry, so we decided to head over to the Pike Place Grill.  We all had a great time networking and then went to the Tap House.  We met up with Tim Radney, Bill Graziano, and several others, but the Tap House was packed and the staff were complaining about too many people.  We all headed over to the Sheraton lobby and hung out for the rest of the evening.

It was an awesome first day and start to the conference with plenty of SQL goodness yet to come.

Today I will be LIVE blogging the keynote from the PASS Summit 2011 conference.  Keep your browser tuned in here and refresh often.  Today’s keynote is being delivered by Quentin Clark.  Quentin is Microsoft’s Corporate Vice President of the SQL Server database systems group.  I will also try to tweet updates as I can so make sure to follow me.  http://twitter.com/ryanjadams

Bill Graziano kicks us off on day 2 of SQLPASS and is sporting a kilt for the official SQL kilt day.  Bill starts off thanking all of the volunteers that make this community as amazing as it is.  He thanks chapter leaders, regional mentors, and special programs volunteers.

Bill thanks Jack Corbett and Tim Radney for outstanding service to the community.  Lori Edwards wins the PASSion 2011 award for outstanding commitment through everything she has done for PASS over the last year.  She is truly an inspiration for those of us that love this community.  Congratulations and thank you Lori!

Quentin shares the Microsoft vision that was introduced yesterday.  Their vision is any data, anywhere, and of course integration with the cloud.  Quentin is sharing his favorite SQL Server features, which he calls the fantastic 12.  The first 4 are required 9s of uptime, fast performance, rapid data exploration, and managed self-service BI.

Quentin brings a customer on stage to talk about their use of the SQL Server product.  We are watching an AlwaysOn demo of how their shipping company uses the feature.  This is one cool new feature that you need to check out.  We are shown the AlwaysOn dashboard and everyone claps as they point out that it is all green.  We now see how easy it can be to deploy a read-only secondary that can be used for reporting and other purposes.  There is clapping from the crowd as they remember to start using zoomit so we can actually see the demo.

Quentin is moving on to his second favorite feature area, performance, which covers the ColumnStore index.  He is just quickly covering the specific features within his overall favorite categories.  He mentions things like PowerPivot, SharePoint, Data Quality Services, and Master Data Services.  He is just listing the features without digging in too much, and the crowd is getting restless.

Everyone claps as we enter another demo.  That’s right, Contoso rises again!  They are showing some data quality features where you can use metadata to validate your data.  Everyone sighs as they toss the cloud into the mix again.  They are showing how you can use services from the Azure marketplace to validate the quality of your data.  I wonder what happens when the data in the cloud is wrong?  It seems like you have to place a lot of trust in MS and their data services.  I bet there is one heck of an agreement to indemnify MS from responsibility when you use those services.

Quentin is up to numbers 8 and 9 on his fantastic 12 list.  They are scalable data warehousing and fast time to solution.  The big thing pointed out here is appliances.  Another guest comes on stage to talk about parallel data warehousing.  Both HP and Dell appliances are featured.  HP’s has 700TB!  One of the smaller appliances looks like the robot from Lost in Space.  Danger, Will Robinson!!

Quentin is on to number 10, which is Extend Any Data Anywhere.  He covers added interoperability and announces a new ODBC driver for Linux.  He admits it’s a ploy to get them into the MS stack.  Nothing better than transparency, huh?  He also announces ODBC drivers for change data capture for SSIS and Oracle.  We have another guest on the stage showing a demo, and the crowd is clamoring for zoomit.  They are showing how semantic search works to bring you related data for your searches.  We are seeing how it can be used to search your document libraries.  Semantic search is like full text search on crack.  You can use it to search for keywords within your documents and even pull back the correlations in similar documents.

We are on to Quentin’s number 11, which is Optimized Productivity.  Here we have SQL Server Data Tools, formerly known as Juneau, as well as Unification Across Database and BI.

We quickly move on to number 12 which is Scale on Demand which covers AlwaysOn.  Our next guest from the SQL CAT team arrives on stage.  It looks like he is a Twitter user as he points out making all the bloggers happy by using zoomit.  We are seeing how to easily deploy a database to SQL Azure.  They are announcing Windows storage in the cloud so you can back up your Azure databases to cloud storage and restore from there as well.  They also show local SSMS management of your Azure DBs.  After the demo of this feature there is clapping, but our guest has to prompt everyone to get more excitement.

Our next guest from the Azure team comes on stage.  A new SQL Azure management interface is shown and it has live tiles.  They also announce that Azure can now host databases of up to 150GB.  These features will go live by the end of the year, so don’t go looking for that 150GB just yet.  The Azure reporting and data sync features are in CTP now.

Quentin is now talking about Hybrid IT, where you can have combinations of on-premises servers, private cloud, and public cloud.  It looks like Quentin is wrapping up by reviewing everything covered today.

The day 2 keynote is over.  Do not miss tomorrow’s keynote with Dr. David DeWitt.  I’m not at the live blogger table tomorrow, but I plan to blog it live anyway.


SQLSaturday #97 Austin

This was a great event! This was Austin’s first SQLSaturday, so it’s quite an accomplishment and something to celebrate.  I brought my family with me since my wife wanted to do some shopping in San Marcos. The drive down went well, with only two hiccups. The first was just a minor hold up due to a grass fire in the highway median. Texas has been in quite a drought, so this is unfortunately common right now. The second was traffic on I-35 once we got into Austin, but I was able to get around all of it. The speaker dinner was held at the Iron Cactus restaurant and was buffet-style BBQ. The food was great and everyone had a great time networking with a bunch of wonderful people.

The next morning I headed out to the Thompson Conference Center on the University of Texas campus. Being on a college campus, it was well suited to an event like this. UT was late getting the building open so things started off behind schedule, but the Austin team was able to get everything back on track.

Opening Ceremony
Wes Brown gave a good opening and did a great job of thanking sponsors, volunteers, and speakers. My hat is off to Wes, Jim Murphy, and AJ Mendo for putting on a wonderful event.

Session 1
Michael Hotek spoke on SQL Server Performance Analysis. Mike is a Dallas speaker and helps run the Ft. Worth group, but this was my first time hearing him speak. He did a great job covering the process flow of the SQLOS to help explain performance waits. He talked about the gotchas of using Performance Monitor, talked about the DMVs, and even touched on Extended Events.

Session 2
Steven Ormond spoke on SQL Server memory. Steven is a really great guy and it was a pleasure to meet him. In his live demos he did a great job of limiting max server memory to demonstrate various methods for finding memory pressure in your system. He showed what counters to look for in Performance Monitor and what they mean. He also showed how to find the bottlenecks using the DMVs. He is a new speaker and he delivered a fantastic session. I certainly hope he continues to speak.
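I did not capture Steven’s exact scripts, but the memory cap he demonstrated is just a server configuration change.  Something like this, where the 1024MB value is purely for demo purposes (never run production that low):

    -- Cap max server memory to induce memory pressure for a demo.
    EXEC sys.sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sys.sp_configure 'max server memory (MB)', 1024;
    RECONFIGURE;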

Session 3
I spoke during this time slot on SQL Server Mirroring. If you attended this session or want to see what it was about then you can view the abstract and download the slide deck HERE.

Lunch
Lunch was a standard lunch box with a sandwich, chips, a cookie, and an apple. It was a pretty basic lunch as far as quality goes. The venue had a good setup for lunch time networking with an outside courtyard, and the weather was perfect.

Session 4
Jim Murphy spoke on Denali AlwaysOn. I’ve always loved and gravitated toward HA and DR technologies, and as a fan of both clustering and mirroring this was right up my alley. I admit that I should already be up to date on this feature, but I simply have not had time. Jim wrote a nice little custom application front end to show the app connecting to the different replicas as he failed them over. It was an awesome touch and a great idea.

Session 5
Tim Radney spoke on TempDB. Tim is a fellow regional mentor and it was great to meet him. He had an amazing session on TempDB performance and using SQLQueryStress to create contention and show how to troubleshoot it. He covered everything from best practices and PFS/GAM/SGAM, to personal experience. This was a great session and I think I might need to put together something similar. Maybe I can convince Tim to give me some pointers.
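For reference, this is the kind of check you can run while a tool like SQLQueryStress hammers TempDB.  It’s a generic sketch, not necessarily Tim’s exact query; in TempDB (database_id 2) page 1 of each file is the PFS, page 2 the GAM, and page 3 the SGAM:

    -- Look for latch contention on TempDB allocation pages.
    SELECT session_id, wait_type, wait_duration_ms, resource_description
    FROM sys.dm_os_waiting_tasks
    WHERE wait_type LIKE 'PAGELATCH%'
      AND resource_description LIKE '2:%';  -- database_id 2 = TempDB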

Session 6
There were some great sessions in this slot, but I ended up using it as networking time, and as always it was time well spent.

Closing
The closing ceremony was a bit long for the raffle. This is a very common pain point for many of these events. I shared some ideas of what has worked for us in Dallas and hopefully that will help in the future.

Other Observations
They used the standard SQLSaturday evaluation forms that only have two criteria. One was for expectations (Did Not Meet, Met, Exceeded) and the other was a scale of 1 to 5 for overall quality. The fact that it is short and sweet might yield a greater return of forms.  People are more inclined to fill it out since it’s quick.  The tradeoff is whether it was enough for the speakers.  It worked fine for me as an attendee and speaker.

The team did a great job on the inside signs, but outside signs were a little lacking.

After Party
The after party was also held at the Iron Cactus. It was a great time of networking and everyone had a wonderful time. The turnout was small, but it’s hard to get people to go out after a full day of drinking from the fire hose and time away from family. I’ve got some ideas that I hope to try out in the future to help with this.

Icing on the Cake
Wes Brown was awarded an MVP award, and it could not have come on a better day for him. Congratulations Wes! You are most deserving of this award and we appreciate everything you do for the community.

Let’s take a look at how to schedule policy evaluation using the PBM On Schedule mode.  Start by double clicking the policy of your choice.  Next we need to change the evaluation mode to On Schedule.  As soon as you make this change you will notice a red warning appear at the top of the dialog stating that you must assign a schedule.  You can either pick an existing schedule using the Pick button or create a new one with the New button.  Let’s click the New button and create a new schedule called “Every Day 2AM” that runs daily at 2:00 AM.
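For the curious, the dialog is just creating an msdb schedule under the covers.  A rough T-SQL equivalent of our “Every Day 2AM” schedule would be:

    EXEC msdb.dbo.sp_add_schedule
        @schedule_name = N'Every Day 2AM',
        @freq_type = 4,              -- daily
        @freq_interval = 1,          -- every 1 day
        @active_start_time = 20000;  -- 02:00:00 in HHMMSS form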


Back in the policy dialog you need to check the Enabled box and click OK to close the dialog.

If you go look at your SQL Server Agent jobs you will notice a new job with the prefix “syspolicy_check_schedule” followed by a unique identifier.  The first thing you should do is rename it so you know what it does in the future.  Let’s run this job to test out our new policy.  The job will report success even if a policy violation occurs, because violations are stored in PBM.  If you right click the policy and select History, you can see the results.
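The rename and the test run can both be done in T-SQL as well.  The GUID-suffixed name below is a placeholder, so substitute whatever name the engine generated for you:

    EXEC msdb.dbo.sp_update_job
        @job_name = N'syspolicy_check_schedule_<guid>',  -- placeholder name
        @new_name = N'PBM - Every Day 2AM policy check';

    EXEC msdb.dbo.sp_start_job
        @job_name = N'PBM - Every Day 2AM policy check';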

We can see that the most recent evaluation had a policy violation and we can see the results in the details pane.  Our ReportServer database has violated the policy and that’s easy to see in the details pane, but we only evaluated one policy against a handful of databases.  You’ll notice that the detail information is stored in XML format and could be time consuming to navigate if the job had a broader scope.  To get a better view of the result we can click the hyperlink in the details column to get a graphical view.
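If you would rather query than click, the same history is exposed through the syspolicy views in msdb.  This is a minimal sketch that pulls recent results, including the XML detail behind the details pane:

    SELECT p.name AS policy_name,
           h.start_date,
           h.result,
           d.target_query_expression,
           d.result_detail               -- the XML shown in the details pane
    FROM msdb.dbo.syspolicy_policy_execution_history AS h
    JOIN msdb.dbo.syspolicy_policies AS p
        ON p.policy_id = h.policy_id
    LEFT JOIN msdb.dbo.syspolicy_policy_execution_history_details AS d
        ON d.history_id = h.history_id
    ORDER BY h.start_date DESC;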

There are four different ways we can evaluate policies against servers in our environment.  These methods are all manual and I will cover how to automate them in another post.  This is just a quick list to show your available options, and where they are located.

  1. We can evaluate a single policy against a single instance by simply right clicking on the policy and selecting evaluate.
  2. We can evaluate multiple policies against a single instance by right clicking the Policies node and selecting evaluate.  This opens a dialog where we can choose multiple policies to evaluate against the local instance.
  3. We can evaluate a single policy against multiple instances.  This is where the power of Central Management Server comes in.  You can right click on a group in your CMS, select evaluate policies, choose your PBM server source, and select a policy to evaluate against the instances in the group.
  4. We can also evaluate multiple policies against multiple instances.  We use our CMS again just like in number 3, but simply select multiple policies to evaluate.

Here is what the policy selection screen looks like for numbers 2, 3, and 4 above.

PBM Select Multiple policies dialog
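Since options 3 and 4 lean on a Central Management Server, it helps to know that CMS registrations live in msdb too.  Here is a quick sketch for listing which instances sit in each CMS group, using the documented sysmanagement views:

    SELECT g.name AS group_name,
           s.server_name
    FROM msdb.dbo.sysmanagement_shared_server_groups AS g
    JOIN msdb.dbo.sysmanagement_shared_registered_servers AS s
        ON s.server_group_id = g.server_group_id
    ORDER BY g.name, s.server_name;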

PBM has four evaluation modes that provide us with flexibility in the way we evaluate policies against our SQL instances.  Here are the options and what we can use them for:

  • On Demand – This is a manual policy evaluation.  Policies in this mode can only be evaluated by performing a policy evaluation in SSMS.
  • On Schedule – This allows you to automate the evaluation of policies.  Policies in this mode require a schedule and are executed via SQL Agent jobs.
  • On Change: Log Only – This mode only evaluates a policy in response to a SQL Server DDL Event.  The action performed by the user is allowed to complete and the result is logged in PBM.
  • On Change: Prevent – This mode only evaluates a policy in response to a SQL Server DDL Event.  The action performed by the user is wrapped in a transaction and automatically rolled back if it violates the policy.

I will be speaking at SQLSaturday #97 in Austin, TX on October 1st, 2011.  I will be delivering my presentation on database mirroring.

I know the event team for this SQLSaturday and I can say without a doubt that it will be an event to be remembered.  Wes Brown (Blog|Twitter), Jim Murphy (Blog|Twitter), and AJ Mendo (Blog|Twitter) are some top notch guys that are committed to the community.  If you attend, make it a point to meet these guys and thank them for their hard work!

As with any SQLSaturday (especially a first one) there are some bumps along the road.  These guys have handled those bumps with grace, dignity, and the utmost professionalism.  I know they could use any help they can get, so if your company can sponsor this event or you can volunteer to lend a hand, make sure to do so.

It is going to be a great event and there are limited spots left so make sure to Get Registered, and stop by my session.  Here is the abstract:

Mirroring: The Bare Necessities

Remember Baloo the bear from the Jungle Book? Well we are going to get down to the “bear” necessities of mirroring and more. Mirroring can be an integral part of your high availability and disaster recovery planning. We’ll cover what mirroring is, how it can fit into an HA/DR plan, the rules surrounding its use, configuration via the GUI and T-SQL, as well as how to monitor mirroring. This presentation is designed to not only give you an overview of mirroring, but to also walk you through a basic implementation. At the end you will have learned what mirroring is, how it can fit into your environment, what business requirements it solves, and how to configure it.
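As a small taste of the T-SQL portion, once the endpoints exist, establishing the partnership boils down to a pair of ALTER DATABASE statements.  The database and server names here are placeholders:

    -- Run on the mirror server first, pointing at the principal's endpoint:
    ALTER DATABASE AdventureWorks
        SET PARTNER = 'TCP://principal.contoso.com:5022';

    -- Then run on the principal, pointing at the mirror's endpoint:
    ALTER DATABASE AdventureWorks
        SET PARTNER = 'TCP://mirror.contoso.com:5022';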

This past weekend I was speaking at SQLSaturday #90 in Oklahoma City.  During the event I had the pleasure of sitting down and getting to know Karla Landrum (PASS Community Evangelist).  We met each other at a SQLSaturday in Houston earlier this year, but we didn’t have much time to chat.  We had a great conversation about PASS and community.

I knew she had been a regional mentor prior to taking the new job at PASS as community evangelist, so I asked her what being a regional mentor was about.  I already knew what the overall goal of the program was and I was curious about the specifics.  She explained the program and the changes it has been going through.  At this point you may be asking the same thing.  What does a regional mentor do for PASS?  Allow me to give you an overview with a technical twist.

RMs are endpoints.  They are the communication channels between local chapters, PASS, and other local chapters.  It’s a mesh topology of endpoints!  We have a lot of chapters in the PASS organization and this gives us a way to remain connected and help each other out.  RMs are ambassadors for PASS to the community.  They gather information from the chapters about their pains, so that PASS understands the challenges and can help out.

RMs are also advocates for the local chapter leaders.  The idea is to make sure that chapter leaders have the resources and tools they need to be successful.  This can encompass a lot of things, like how a chapter can market itself, how they get sponsorship, how they get items for raffles, and how they get speakers.  It also provides a way for chapters that are geographically close to share resources like speakers and sponsors.

Let’s flash back to Saturday.  During my conversation with Karla, she mentioned a need for another regional mentor.  I let her know that I would be happy to help out.  We finished our conversation and I went back to attend some more sessions.  About an hour later I was called out of the session and Karla let me know that I would be the new regional mentor for the US South Central region.  She had already contacted Mark Ginnebaugh on the PASS board of directors, who approved it.  Thanks Mark!

It was already an amazing day simply because I was at a SQLSaturday, the Oklahoma City User Group had become an official chapter, and now this!

I am very honored to have been chosen for this position, but I’m even more excited.  In fact, I already have an idea for growing local speakers.

I am ready to serve!

SQLSaturday #90 OKC

This was a fantastic event! This was Oklahoma City’s first SQLSaturday, so it’s quite an accomplishment and something to celebrate.  Tim Mitchell, Russell Loski, and I started off by heading from Dallas to Oklahoma City, which only took about 3 hours. The speaker dinner was held at the Embassy Suites hotel restaurant. The hotel and restaurant were very nice and the food was good. They served finger foods and pizza, but the pizza was not your typical mediocre pizza. This was some good stuff.

The next morning we headed to the Moore Norman Technology Center, which was a perfect venue for an event like this. The place was new, clean, and very well laid out for sponsors and attendee networking.

Keynote
Steve Jones (Blog|Twitter) delivered a keynote speech. His keynote was titled “The Winding Road” and was about his life and path to technology, how he became a SQL Server expert, and how he started SQLServerCentral.com. He went on to talk about how we make choices that define where we go and how those choices are presented to us. Sometimes those choices are not clear cut. How often do you hit a fork in the road where you get to make a choice? Steve says that for most of us, we are on a road and the choice is to stay on that road or take an exit and try something new.

Steve also talked about why we make the choices we make. Did you choose a career path for money, or maybe knowledge? The reason for making a choice is a good indicator of whether it will be a good one or not. He suggests that we not decide what to do, but first decide what things we do NOT want to do.

Session 1
Wes Brown presented on “Understanding Storage Systems and SQL Server”. Wes runs the Austin, TX SQL Server user group, and they are planning SQLSaturday #97. Make sure you check that event out in any way you can, whether you are looking to sponsor, speak, or just attend, and I’ll see you there!

Have you heard that something is only as good as its weakest link? Well, for SQL Server that is the disk subsystem. Wes covers everything about the various disk systems and how they all integrate together. Ten minutes into the session I had an overwhelming urge to hop on Newegg and start building a new home system. If you are building out a new SQL Server system then this session is for you.

Session 2
I spoke during this time slot on Policy Based Management and Central Management Server. If you attended this session or want to see what it was about then you can view the abstract and download the slide deck HERE.

Lunch
They served a nice box lunch with sandwich, apple, cookies, and chips. My chips expired on August 9th, so I hope that was the only expired thing in the bag.

Session 3
I used this time to do a little networking and have some great conversations. I got the opportunity to talk to Karla Landrum, who recently joined the PASS HQ team. We had some great community dialog and I look forward to working with her in the community space.

Session 4
Sri Sridharan spoke on data governance. Sri explained how he handles mining configuration data from his servers and aggregating all the information. It can serve as an inventory, but it’s the data that is mined that can give you real value. It provides a way to see the discrepancies between test, dev, and prod. It also gives you a central way to manage your data and server environment. Have you ever wondered which cluster node your SQL instance is currently running on? Sri shows a way to see that from one central point across your enterprise.
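One simple way to answer that cluster node question (not necessarily Sri’s method) is a pair of SERVERPROPERTY calls:

    -- Which cluster node is this instance currently running on?
    SELECT SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS current_node,
           SERVERPROPERTY('IsClustered') AS is_clustered;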

Session 5
This session was a 30-minute time slot, and it’s the first time I have seen that done at a SQLSaturday. It’s a short time, but I think it worked well, and it encourages networking after the session for those that want to dig deeper. I watched Ben Miller talk about TDE, but I was called out of the session early.  Make sure to check back tomorrow to find out why.  I was able to meet up with Ben after the event and talk shop to fill in the gaps.

Closing
The closing ceremony went very smoothly with good advertisement of SQLPASS and community events. The raffle process is a pain point for many of these events and OKC did a great job making it run smoothly.

Other Observations
The speaker evaluation forms only had two criteria. One was for expectations (Did Not Meet, Met, Exceeded) and the other was a scale of 1 to 5 for overall quality. The fact that it was short and sweet might have yielded a greater return of forms.  People are more inclined to fill it out since it’s quick.  The tradeoff is whether it was enough for the speakers.  It worked fine for me as an attendee and speaker.

In all honesty I only saw two things that could have been improved upon, and that is absolutely amazing for a first time event.  Those things were a lack of outside signs and pre-event communications falling behind schedule.  I suspect the communications issue was due to the pre and post event venues, and sometimes there is just nothing you can do about that.

Icing on the Cake
The OKC SQL group officially became a chapter of PASS on the day of the event.  The team was able to announce it at the event which really made it special.  Congratulations OKC!


On September 7th, 2011 at 1pm Central Time the SQL Server Worldwide User Group will be airing my presentation on how to Manage your shop with CMS and PBM.  The webcast is free for SSWUG members and $29 for non-members.  As an added bonus, I will be in the live chat room ready to answer your questions.  Make sure to catch my session by Registering Here.  Here is the abstract:

Manage your shop with CMS and Policy Based Management

In this presentation we talk about Central Management Server and how it can help you manage a dispersed environment. We will also cover what Policy Based Management is and how you can leverage its power to better manage your environment. With PBM we’ll see what it can and cannot do to help you enforce standards in your enterprise. We will cover and demonstrate PBM for the beginner, from creating and evaluating policies to receiving alerts on policy violations.