Ryan Adams Blog

SQL, Active Directory, Scripting

You Should Have a SQL Naming Convention

Everyone should have a naming convention for their database objects.  I’m sure I could rant on about all the reasons, but let me just draw a parallel and then give an example.  Whenever you support multiples of anything in IT, consistency and standards make everything easier to support.  Consider creating a desktop or server OS image when all the hardware is different.  If the machines were all the same you would only need one image, but if they are not you have to create an image for each underlying architecture.  The same applies to naming standards when supporting many database servers.  If you have a defined nomenclature for your database objects, things are easier to find and, more importantly, you can write scripts that apply across the board.  If you’ve written any code or scripts, you know how much time can be saved through code re-use.  SQL Server also has its own system objects, and you want to make sure you can easily tell your objects apart from the system objects.

Do your stored procedures start with sp_?  It is well known that naming your stored procedures with a prefix of sp_ is a bad idea.  Why, you ask?  All of the system stored procedures start with that prefix, which makes telling your SPs apart from the system SPs more difficult.  SQL Server also looks for an sp_-prefixed procedure in the master database first, so you pay a small name-resolution penalty on every execution.
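
A quick way to spot offenders is a catalog-view query like the following.  This is just a sketch; schemas and names will vary in your environment:

```sql
-- List user stored procedures whose names start with the sp_ prefix.
-- The brackets around the underscore keep LIKE from treating it
-- as a single-character wildcard.
SELECT SCHEMA_NAME(schema_id) AS schema_name, name
FROM sys.procedures
WHERE name LIKE 'sp[_]%';
```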

Are You Enforcing Your Convention to Ensure Standards?

Most company standards are, and should be, documented.  Team members are told where the document is located and that they are to follow it.  If this method works for your company then more power to you, but most of the time people rush and the rules are not followed.  So how can you mitigate that?  Here is where SQL Server 2008 comes to the rescue with Policy Based Management.  You can use PBM to enforce your company standards and prevent people from creating SQL objects that are not within policy.  Using PBM to enforce your naming standards is a whole other series.  Come back next week and you can find the first post here:  SQL Naming Conventions with PBM – SPs
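
Until you have PBM policies in place, a manual audit query can at least surface existing violations.  This is only a sketch using the usp_ and vw_ prefixes from the example convention in this post; adjust the patterns to your own standard:

```sql
-- Flag procedures and views that do not follow the example prefixes
-- (usp_ and vw_ are from the sample convention; substitute your own)
SELECT type_desc, name
FROM sys.objects
WHERE is_ms_shipped = 0
  AND (   (type = 'P' AND name NOT LIKE 'usp[_]%')
       OR (type = 'V' AND name NOT LIKE 'vw[_]%'));
```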

Example

Below is a skeleton outline of a naming standard.  It is provided here as an example, so it is not complete, but it should give you an idea of the things you might want to build a standard around.

  • Stored Procedures – usp_[procedurename]
  • Tables – tbl_[table name]
  • Views – vw_[view name]
  • Columns – [column name]
  • Triggers -
  • Clustered Index – clidx_[table name]_[column name]
  • Nonclustered Index – nclidx_[table name]_[column name]
  • Primary Keys – PK_[tablename-column]
  • Foreign Keys – FK_[tablename-column_referencetablename-column]
  • Constraints – [Constraint Type Identifier]_[table name]_[column name]
  • Constraint Type Identifiers
    • Default – DF
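
To make the skeleton concrete, here is what a table and its related objects might look like under this convention.  All of the object names below are made up purely for illustration:

```sql
-- Hypothetical objects named per the example convention above
CREATE TABLE tbl_Customer
(
    CustomerID INT         NOT NULL,
    LastName   VARCHAR(50) NOT NULL
        CONSTRAINT DF_tbl_Customer_LastName DEFAULT (''),
    CONSTRAINT [PK_Customer-CustomerID]
        PRIMARY KEY CLUSTERED (CustomerID)
);
GO

CREATE NONCLUSTERED INDEX nclidx_tbl_Customer_LastName
    ON tbl_Customer (LastName);
GO

CREATE VIEW vw_CustomerNames
AS
    SELECT CustomerID, LastName FROM dbo.tbl_Customer;
GO

CREATE PROCEDURE usp_GetCustomer
    @CustomerID INT
AS
    SELECT CustomerID, LastName
    FROM dbo.tbl_Customer
    WHERE CustomerID = @CustomerID;
GO
```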

What Convention Do You Use?

I think it would be great to see the conventions that other people use.  What objects did they create a convention for, and how did they decide to name them?  If you’re reading this and don’t mind, please post your convention in the comments so we can have an online brainstorming session for all to see.

Other Caveats

There are several other things you should take into consideration when developing a naming standard, like avoiding reserved keywords and special characters.  For ideas on things to avoid check out Bob Pusateri’s blog post about bad table and column names HERE.

I presented two sessions last weekend, 1/29, at SQLSaturday #57 in Houston.  Below are links to the slides and/or code for those sessions.  If you were able to catch one of my sessions, I would appreciate any feedback you can give by commenting on this post or on SpeakerRate.com.

Manage Your Shop with Policy Based Management Server and Central Management Server

Mirroring: The Bear Necessities

SQLSaturday #57 was great.  It’s only about a 4 hour drive from Dallas, so I headed out about 12:30 and picked up Tim Mitchell (Blog|Twitter) and Drew Minkin (Blog|Twitter) on the way.  The weather was great the entire weekend, which made for a pleasant drive.  We arrived at the church about 1.5 hours before the speaker dinner to get the lay of the land and see if any help was needed.  The Houston team of Nancy Wilson (Blog|Twitter), Jonathan Gardner (Blog|Twitter), and Malik Al-Amin (Twitter) had everything under control, so we headed to the hotel to check in before the speaker dinner.

The speaker dinner was held at the Outback Steakhouse.  It was a little cramped for space, but that was also a good thing because it meant high attendance.  The Houston group really took care of us by not only providing appetizers, but buying our meals as well.  Thank you guys very much for that!  The dinner was a great time where I was able to catch up with people from the SQL community I only get to see at these events.  I also got to meet a lot of new people and put faces with Twitter handles.  After everything was wrapped up, Tim and I went back to the hotel bar to have a beer with Patrick LeBlanc (Blog|Twitter).  I’ve wanted to meet Patrick ever since I presented a SQLLunch session.  We had a great time getting to know each other and had some awesome conversations that will yield future benefits for the community.  If you want free training you absolutely need to check out SQLLunch.com.

Tim and I were up at 6 the next morning getting ready to head out and pick up some others from the Dallas crew who were staying at another hotel.  I don’t travel much, so I didn’t sleep well at all; I probably only slept for about 4 hours that night.  Anyway, the venue, the Bammel Church of Christ, was awesome, and I hope the Houston crew is able to use it again next year.  The opening and closing ceremonies were held in the youth building, which was huge, so space was certainly not an issue.  That room and one other room in the youth building were used for sessions.  Everything else was in the main building.  This building was perfect for a SQLSaturday because all the session rooms were around the outside with a huge space in the middle.  That space was great for networking between sessions as well as lunch.

Speaking of lunch, Houston stayed true to the Texas way by having BBQ catered.  The lunch was fantastic.  They also had Panera Bread bring in fruit, breads, juice, and coffee in the morning.  To top it all off they had an afternoon snack of cookies, water, and assorted sodas.  I would say they hit it out of the park on the food front.

At the end of the day, they had the customary closing ceremony and raffle.  This was probably the only pain point I saw, and to be honest the raffle is a tough one to organize.  Part of why it is tough is that there are two facets (PBM pun intended) to the raffle.  First you have the general raffle, which includes items the user group has obtained, like books, and items from sponsors who only provided SWAG and had no physical presence.  The second part is the sponsor raffles for those sponsors who took names at their table and had a physical presence.  The sponsor raffle is an easy one because you have them pull the winners on stage themselves.  The general raffle is the tricky part.  Houston put raffle tickets in each attendee bag, which sounds low maintenance until you realize how much attrition there is.  In addition to those who registered and did not show, there are attendees who only came for part of the day or left early.  What you end up with is calling out ticket numbers one after the other with no one to claim them.  Fortunately Nancy Wilson did a great job of improvising on the spot.

In Dallas last year we used speaker evaluation forms for the general raffle drawing.  That seemed to work fairly well because those people willing to fill out evaluations were more likely to attend the closing ceremony.  We still had “no shows”, but to a lesser extent.  Tim and I talked about this at great length on our drive back and I think we came up with a good solution.  If it works at our next SQLSaturday in Dallas, I will certainly blog and let everyone know.

Overall this was a very successful SQLSaturday, and my hat is off to the Houston Area SQL Server User Group.

Below is a link to the slide deck for my presentation on “Manage Your Shop with Policy Based Management Server and Central Management Server”.

If you have any questions please comment or send an email to “ryan at ryanjadams dot com”.  If you have seen my presentation, please take the time to give me some feedback on SpeakerRate.com.

PBM Presentation Link

Paul Randal of SQLskills fame has posted the contest of a lifetime.  It’s a free seat in their Master Immersion Event on SQL Internals and Performance.  This post is my entry into the contest.

I have worked for the same company for almost 13 years.  During those 13 years the company has sent me to training exactly two times.  Pretty impressive, huh?  Now think about this.  When I started I was a desktop guy on a Novell network.  Since then I have been a server engineer, an NT4 domain administrator, and an Active Directory administrator; I have supported Microsoft Operations Manager and Forefront Identity Manager; I have done automation and scripting; and now I am a DBA.  That is just the short list, but what I want to point out is that all those skills were learned on my own and without training.  Guess what though?  The company has had no problem utilizing (exploiting) my skill set without compensation.  If you’re wondering what that’s like, imagine trololo in your head for 13 years.

I have a passion for SQL Server, and you can read about that here in my How I Became a DBA post.  I have an even bigger passion for the SQL community, and that’s why I want this free seat in Paul and Kimberly’s class.  I know what it’s like to have a passion for technology and no access to the training, so you better believe I will share everything I learn with the community.  I will be presenting two sessions at SQLSaturday 57 in Houston this weekend alone, but I bet they would have been much more in depth after some SQLskills training.  The reason I started blogging and speaking was the infectious Brent Ozar of SQLskills.  I owe him a debt of gratitude, and if I win (and Brent makes it), his drinks are on me.  If he doesn’t, then I’ll buy Paul’s in hopes he will pay it forward, or share it with his sheep.  These guys have been an inspiration to everyone in the SQL community, and it would be an honor to attend this class.

Are your blog posts falling on blind eyes?  It can certainly seem that way sometimes.  The reason we blog is because we want to be heard, or seen as the case may be.  One of the ways we gauge our success is based on comments left by readers.  As a technical blogger, there is nothing more discouraging than spending a ton of time banging out an awesome post just to find that days later no one has even commented on it.  There are other ways to gauge your success with free tools like Google Analytics and Google FeedBurner, but they only give us raw numbers and no emotion.  I want to know that my content has helped others solve problems, provided new ideas, or added a new perspective.  As a SQL and technical community, what can we do?

The answer is obvious: leave comments for others!  When was the last time you commented on someone else’s blog?  The best way for us to encourage and support each other is to make a more vigilant effort to leave comments.  I bet that many of us would not hesitate to leave a comment correcting something we see wrong in a post, but I guarantee it takes even less time to leave a simple “Thank you”.  The effort is small, and it only takes a few seconds to tell someone that their post helped you out, made you want to research a new topic, encouraged you, or was bookmarked for a future project.  Comments are not only good for the publisher, but also for the other readers.  They can be anything from additional information or ideas to a link to another related blog post.

The other thing we can do is make sure we attribute others for their work.  I’m talking about referencing our resources.  If you remember those dreaded research papers from school, that’s what I am talking about.  Of course I don’t mean the level of detail that was required then, just a simple link and mention of any resources you may have used.  Doing this generates pingbacks through most major blogging software and is another great way to see who appreciates a post.  Since we are talking about referencing others and attributing their work, along with the dreaded word “plagiarism”, here is how I would attribute a post that has great additional information.

Brent Ozar (Blog|Twitter) wrote a fantastic post entitled “Plagiarism Week: Finding the Slimey Slimeballs”.

We all want validation and human nature is to compare ourselves to others to gauge that.  This is why comments are so important for both the author and other readers.  Don’t worry; you’re not blogging for the blind.  We just need to let each other know we’re reading and appreciate the content.

T-SQL Tuesday

Be the Interpreter

Steve Jones (Blog|Twitter) is hosting this month’s T-SQL Tuesday, and his topic is about getting your job done while dealing with business requirements and mandates.  Over my career I have seen many instances of management mandating that something be implemented because they saw an ad on the internet or in some magazine.  The problem is that they don’t understand anything about the technology or the availability of other options.

Our job as DBAs is to be interpreters: to understand the end result the business requires and the options for getting there.  The first thing you have to do is recognize a situation where interpretation is needed.  If the request from management has few technical requirements and lots of industry buzzwords, that’s your first clue.  Once you have identified the situation, you need to reverse engineer it and start asking questions that will let you discover management’s desired end result.

Recently I received a request from management to consolidate our reporting platforms.  Once we did our discovery work, we realized that a couple of groups were coding their own reports in C#.  The data they were reporting against was in SQL Server, so it was easy to say that SQL Server Reporting Services should be the chosen platform.  Our reasoning was that it is free with the product, would provide the same results, and could seriously reduce report development time.  Although this practice needed to change, I suspected it was not what management was really after.  I was right.  Management was not looking for a product standard, but for a single report consolidated from the several they were already receiving.  Management did not clearly articulate their desired end result and used a bunch of buzzwords that indicated something entirely different.  As a result, my team’s interpretation of their request was incorrect, and I guarantee we will ask more questions next time.  Fortunately our detour was not a waste, since product standardization is something we needed, and getting the developers onto SSRS saved tons of labor hours.

Dodging Bullets

Bust out the Matrix

Management is firing out mandates, so you start dodging bullets like Neo.  Not that kind of Matrix!  There is nothing management likes more than a matrix…well, except for a matrix that shows a cost reduction.  One way to make sure you are interpreting requirements correctly is to document your interpretation and submit it back to management to confirm your solution meets their expectations.  Depending on the request you might have to write it up in another form, but over the years I have found that management really likes things presented in matrix form.

Stand-up (Slowly)

I work in a large corporation where politics abound.  It is very common to see something implemented poorly or incorrectly because an executive asked for it.  Remember that internet ad your VP saw?  Well, it would not be prudent to simply implement it without asking what the benefits are and how it helps the business.  This is where business requirements, technology, and your career come together on a fine line.  Will the real DBA please stand up?  I believe you should stand up, but you should do it slowly.  You need to voice your opinion on the right method to use as a solution, but be sure to back it up with facts and explain it well.  You also need to be sure to have it documented in email for the classic CYA.  That being said, you need to be very careful in how you present these technical challenges.  I find that asking questions is a good technique.  You don’t want to overdo it to the point you come across as uninformed and incompetent, but simply spewing out facts and statements can come across as confrontational.

Be a real DBA and stand up, but be smart about it.

I recently had lunch with my boss to pitch him (yet again) for PASS.  He does not make the actual decision, but if he feels it is worthy, he sends it up the management chain.  He sees the value and always sends it up, but it never makes it through.  During our conversation he mentioned something that sent up a red flag for me.  I found it very interesting and those trying to get their employers to send them to PASS might want to know this inside tip about “large corporation mentality”.  Here is what he said:

“They (upper management) almost always turn down travel for conferences, because they are not considered training.”

Well isn’t that Special?

To be honest, he has a point.  I’ve seen plenty of people at conferences who are just there to get out of work.  I have even seen people not show up at the conference at all.  Unfortunately this seems to have given employers a bad view of conferences in general.  I have never been able to attend a PASS Summit, but I know a lot of people who have.  That being said, I think you would be hard pressed not to classify it as training.  You certainly cannot claim the pre- and post-conference sessions are not training.  When you combine the two, you get 2 days of dedicated training followed by 3 days of specialized break-out training.

If you find yourself in this boat, then my first suggestion is to read Jeremiah Peschka’s (Blog|Twitter) post on Getting to PASS on the Cheap.  My second suggestion is to pitch the PASS Summit to your employer as a training class of 2 dedicated class days and 3 days of break-out sessions based on particular SQL Server features.  Here is part of the email I sent to my boss that more accurately explains what the PASS Summit provides, translated into acceptable corporate terms.

There is a training opportunity called the PASS Summit in Seattle, WA the week of October 10th, 2011.  I am a member of PASS (Professional Association for SQL Server) as well as on the board of directors for the local NTSSUG (North Texas SQL Server User Group) chapter of PASS.  Every year PASS holds a national training conference where the best and brightest SQL Server minds come together to teach highly technical sessions.  These are not conference talks but actual training sessions.  The first two days are full training sessions with a dedicated instructor, and the last days are break-out sessions.  The advantage of the break-out sessions is that there are several concentrations of expertise within SQL Server, and they allow you to choose the instructor who is an expert in your area of concentration.  These instructors include Microsoft MVPs (Most Valuable Professionals) as well as Microsoft employees right from their headquarters in Redmond.  These instructors are the people who wrote SQL Server itself, members of the Microsoft product team, members of the Microsoft technical support team, and the people who write all the technical books.  There is nowhere else you can get training from these instructors.

I’m crossing my fingers that the terminology gap has been bridged and I will have more success this year in getting my company to realize the true value of the PASS Summit.  It might be worth PASS marketing the Summit more as training than as a conference to bridge this gap, but I’m not sure how many other companies view conferences the same way mine does.

Setting the Stage

I started my career as a desktop technician working for a small personal PC repair shop in high school.  The company started picking up contracts to do PC installs for WorldCom.  I began taking on all those contracts, and the manager of their IT department was so impressed he offered me a job.  As many people know, WorldCom went bankrupt and was bought by MCI.  During that transition I was moved from the WorldCom sites I was supporting to an MCI site.  Altogether I contracted for WorldCom/MCI for 2.5 years, at which point MCI converted me to an employee.  At that point desktop work was no longer challenging, and I moved into a server administration role.  I did server administration for several years, supporting everything on the NT server platform including domain administration.  This too eventually stopped being challenging, but right at that time Microsoft released Windows 2000 and Active Directory.  This was also the time that large companies started transitioning from the old geographic IT business support model to a role-based model to better match the evolving technology landscape.  MCI was no exception, and a virtual team made up of the best domain people from all over the company was put together.  The charter was to create a new Active Directory domain that would support the entire company.  I needed a new challenge and was very happy to be chosen for this new team.

The Lights Come On

I supported Active Directory for many years during which I became a bit of an AD expert.  Our team architected and supported the entire AD forest both nationally and internationally for over a quarter million employees!  One weekend we had a Microsoft patching fire drill and there were some SQL 2000 servers on our list.  After installing the patch, we had to verify the version and someone on the team showed me the query and how to execute it.  This intrigued me to find out more about this product whose version could not be derived from a Help-About menu. I started learning SQL 2000, was hooked, and got my MCDBA.

It didn’t take long to see the writing on the wall from Microsoft: many of their products, like SharePoint and MOM/SCOM (I’m a MOM fan as well), had SQL on the backend, and that was the route everything was going.  At the time, there were even rumors of moving AD from the Jet engine to SQL.  For me it was a case of being proactive and “see a need, fill a need”.  I knew we would end up needing a DBA, and I wanted to fill that spot.

The Show

I continued my primary role in AD for many years and simultaneously began supporting everything SQL related that came our way.  I knew this dual role would serve me well in the long run, and I continued to shove my head in every SQL book I could get my hands on.  I also earned my MCTS and MCITP.  Part of my AD support role was also supporting Microsoft Identity Integration Server (MIIS/ILM/FIM).  The company decided to make organizational changes by creating a new group that would support just MIIS and an in-house application.  I was put on this team because of my DBA and scripting skills, and my primary function finally became DBA.

Since then I have discovered the SQL Community which took my new passion and breathed a whole other life into it.  A lot has happened since I joined the community and this post has been a long time coming, so here is a list of how I became involved in chronological order.  My path was a natural progression so I hope it can serve as a guideline to those just starting to get involved.

I became an active member of my local user group, the North Texas SQL Server User Group.

Twitter – At the time Peter DeBetta (Blog|Twitter) was our user group President and encouraged everyone to check it out and see what the online SQL community was all about.  I admit that I was skeptical and really did not have the time, but I checked it out anyway.  The help you can get and the minds you have access to sold me instantly and I’ve never looked back.

Blogging – Once you join the SQL community on Twitter the next thing you see are people tweeting about their newest blog posts.  The information you can glean from blog posts is limitless.  I knew my time was finite, but I wanted to give back to the community all the information they have given me.  It was ultimately Brent Ozar (Blog|Twitter) with his post on “How to Start a Blog” that had me sold.  I also suggest using Google Reader to subscribe to your favorite bloggers and keep track of their latest work.

SQLSAT #35 – The NTSSUG began talking about holding its first SQLSaturday, and I was so excited about SQL technology and the SQL community that I jumped right in, feet first.  I took care of organizing all of the volunteers as well as coordinating everything with our sponsors and their benefits for sponsoring the event.  You can read more about my involvement in this event HERE, HERE, and HERE.

Board of Directors – At this point I’ve been riding on the SQLSaturday high (what other kind is there?) for months and I wanted more.  The NTSSUG group had a position open for election on their board of directors, so I went for it and won the election!  I absolutely love serving on the BoD.  There is nothing more rewarding than serving the SQL community and serving your local user group is a great way to do it.

Speaking – Twitter and blogging are great ways to share your knowledge and help others, but speaking is a whole other avenue of interaction.  My first times speaking publicly were at our local user group and on sqllunch.com.  I had spoken and taught a few classes before, but always internally at my company.  You can find the materials here for my “Mirroring: The Bear Necessities” presentation, and you can view the recorded webcast on sqllunch.com HERE.

SQLSat #56 – The NTSSUG decided to have a second SQLSaturday, focused on Business Intelligence, a mere 5 months after the previous one.  Again I was a core organizer and we had a great event.  You can read my re-cap HERE.

SQLSat #57 – The Houston, Texas user group is hosting a SQLSaturday in January 2011.  Several of these folks attended the SQLSaturday in Dallas, and I was very fortunate to meet this great group.  Since this is their first SQLSaturday, I volunteered to help out where I could.  I plan to be at the event early to pitch in, but so far I have mostly been helping them with their sponsorship process.  In addition, I have submitted to speak at this event and my fingers are still crossed.

SQLSat #63 – The NTSSUG has already set the date for our next SQLSaturday, and it will be our third in less than a year!  I am a core organizer and plan to again handle the sponsor side of things.  It was a bit much for me to take on both the sponsors and the volunteers last year, because those responsibilities overlap in the morning, so I more than likely will not handle the volunteer side this time around.  I have also submitted to speak at this event.

SQLSat #64 – This event is being hosted by the Baton Rouge user group headed up by Patrick LeBlanc (Blog|Twitter), who also runs sqllunch.com.  I have also submitted to speak here.

The Curtain Call

Well, there is no curtain call for this play, because it will continue on as I learn, hone my skills, and increase my involvement in the SQL community beyond what I have already done.  This blog will continue to serve as a record of that journey.  It’s been a long road starting from the bottom, working my way up to a senior guy at a Fortune 100 company, and transitioning my career to become a DBA.  Although this road has been full of layoffs, outsourcing, and stagnant pay, I’m proud of what I have accomplished and where I am.  I’m an over-achiever by nature, so my sights are already set on the goals to come.

SQL Mirroring has a TIMEOUT option, but when and why would you want to use it?  First we need to understand what this option does and that is very simple.  This option defines the amount of time that has to pass without receiving a ping response from a partner before a failover is initiated.  If you are familiar with clustering then think heartbeat (but over a public connection).  The partner will be considered disconnected and either a manual failover can be performed or an automatic failover will occur, depending on your configuration.

The other thing we need to know about the TIMEOUT option is that the default is 10 seconds if you do not change it.  Books Online tells us if you set it for anything between 0 and 4 seconds it will automatically set the timeout to 5 seconds.  If you set it for 5 seconds or greater it will be set to that value.

Network Latency

So why would you want to change this option?  The first scenario would be one of a slow network connection or where the partners are geographically dispersed.  If your network latency between partners is greater than the defined mirroring session timeout, then a disconnected state might be detected.  This could cause a failover event, even though the partner is actually online, all because the pings did not return to the partner in time.  If this scenario is true for you then you have to determine how long it takes a packet to get from partner to partner.  You can use “tracert”, but I suggest doing a simple “ping” which will return a minimum, maximum, and average response time.  To be absolutely certain you pick an appropriate timeout you need to perform the ping from Principal to Mirror, Mirror to Principal, Principal to Witness, Witness to Principal, Mirror to Witness, and Witness to Mirror.  Every network is different and you can never guarantee that a packet from Server1 to Server2 will follow the same route as a packet from Server2 to Server1.  This is why I suggest performing your ping tests in both directions.  Write down the maximum response time from each test and then select the largest maximum as your timeout along with a couple extra seconds of padding to account for times of high network traffic.  Remember that ping gives response time in milliseconds, so don’t forget to convert your time to seconds.

Failover Clustering

The other scenario where you will want to change this option is if one of the partners is a failover cluster.  In this case you not only need to factor in network latency, but also the amount of time it takes for a node to failover.  You don’t want a mirroring failover to occur during a cluster failover so you want to make sure you set a timeout that accounts for network latency plus the time it takes for a node to failover.  So how do you determine that?  We already discussed how to determine the network latency piece, so now we just need to know how long it takes a node to fail over.  Start by failing over the SQL group and look in the cluster logs to see the time the failover started and the time it came back online, and then subtract the times to get the duration.  You might be thinking about doing a ping test while you failover the group, but this will not be accurate.  A ping response will be returned as soon as the cluster IP resource comes online, but prior to the SQL resources coming online.  This is because the SQL resources depend on the disk resources, the disk resources depend on the network name resource, and the network name resource depends on the IP resource.  That means the IP resource will come online first and SQL comes online last.
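
To put numbers on it, suppose your worst measured round trip between partners was about 2,300 ms and the SQL group took roughly 40 seconds to move between nodes.  Those figures are made up for illustration, but the arithmetic would look like this:

```sql
-- ~2.3 s worst-case network latency + ~40 s node failover + padding
-- rounds up to a 60 second timeout (measure your own environment!)
-- Run on the principal:
ALTER DATABASE MyDB
SET PARTNER TIMEOUT 60;
```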

Now that we have our timing ironed out and have settled on an appropriate TIMEOUT setting, we need to actually set it.  This MUST be run on the PRINCIPAL.

-- The value is an integer number of seconds; replace 15 with the
-- timeout you calculated for your environment
ALTER DATABASE MyDB
SET PARTNER TIMEOUT 15;

The last thing you might want to know is how to find out what the current TIMEOUT setting is.  Here is how to get it and I highly suggest you explore the other information in this system view.

SELECT mirroring_connection_timeout
FROM sys.database_mirroring
WHERE database_id = DB_ID('MyDB')

SQLSaturday 56 BI Edition was a great success according to the feedback, and we couldn’t be more pleased.  This event was put on by the North Texas SQL Server User Group and supported by MSBIC, Microsoft, and Artis Consulting (a Gold Certified Microsoft Partner).  I proudly serve on the board of directors for NTSSUG, but even if I didn’t, I would still have jumped at the chance to help organize an NTSSUG event.  This is our second SQLSaturday in 6 months!  If you think that is impressive…keep reading.

If you have helped organize a SQLSaturday event then you no doubt know how much time and work goes into these events.  If you have not, then I highly encourage you to volunteer and get involved.  If you’re an attendee, my advice is to thank every organizer you see at the event.  My second piece of advice is to fill out session evaluations and thank the speakers.  These folks are giving you free training and most pay for their travel out of their own pockets, so a simple “Thank you” goes a long way.  What does all this have to do with SQLSaturday 56?

This event was a little different from the typical SQLSaturday in that it was BI focused.  It was also different in how we managed it compared to our other SQLSaturday, which is why I talked about the time and difficulty of putting on these events in the paragraph above.  This was our second event in 6 months and we don’t want our organizers and volunteers to get burned out, so we scaled the event way back.  First, we used the local Microsoft facility, which meant the rooms were already equipped for A/V and Microsoft would handle the registration.  Second, we really did not have any sponsorships except Microsoft for the facility, and they, in conjunction with Artis Consulting, provided breakfast and lunch.  It might not sound like a lot, but it’s huge!  It meant there was no coordinating sponsors and their benefits (which is a full-time job in itself), and that breakfast and lunch took no management from us.  This SQLSaturday was designed to be a no-frills, “Let’s get our learning on” event.

The only negatives or complaints I’ve heard are that there was no coffee in the afternoon and that entering the Microsoft buildings was a pain.  I agree that the building access was inconvenient.  Who wants to show up at an event to find the doors locked and have to wait for security to let you in?  So yes, that was a pain point.

On the plus side we had some awesome speakers, amazing sessions, and an incredible time of networking.  We even dropped the last session in the largest room and dedicated it as a networking room.  I met a lot of great people.  Drew Minkin (Twitter) is a local speaker for us, and I had a great time talking with him at the after party.  If you get a chance to hear him speak, you should do it, because he’ll rotate your mind like it’s his personal cube!  I also got to meet Thomas LeBlanc (Blog|Twitter), who is down to earth and a joy to talk to with his engaging personality.  They don’t call him the smiling DBA for nothing!  The other person who comes to mind is John Sterrett (Blog|Twitter).  This guy flew in from Pittsburgh!  John was great to talk to because, like me, he works on a lot of administration areas outside of SQL, so we had a great commonality.  He and his wife are thinking of moving to Texas, and I hope they do, but he might get a hard time being an OU graduate in UT country.

Let me wrap this up before I get carried away, and yes, I know it’s too late for that.  If you have any comments on the event, either good or bad, please leave them here.  We want to know whether you had a good time, what you liked, and what you didn’t, so we can improve next time.  Now, I promised you there was something impressive coming, and you hung in with me this far, so here it is.  If you think 2 SQLSaturdays in a year is impressive…let’s add a third within a 12-month span.  That’s right, NTSSUG officially brings you

SQLSaturday #63