PSSUG Presentation 12/7 AlwaysOn

I’ll be presenting at the Philadelphia SQL Server User’s Group at Microsoft Malvern on Wednesday, December 7. The topic will be a new feature in SQL Server 2012: AlwaysOn Availability Groups.

I’ll give an overview of the existing DR and HA technologies currently available with SQL Server 2008, talk about what’s new in SQL Server 2012 clustering, and then move on to the star of the show: Availability Groups. I’ll demo building an Availability Group from scratch and walk through the failover process.
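As a small taste of the failover portion of the demo, a planned failover boils down to a single statement run on the secondary replica you want to become the new primary. This is a sketch only; [DemoAG] is a placeholder availability group name, not one from the actual demo.

-- Run on the secondary replica that should become the new primary.
-- [DemoAG] is a placeholder availability group name.
alter availability group [DemoAG] failover;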


What my SQL Family Means to Me

There are a ton of things I could talk about here: helping each other remotely, being stuck and bored in a hotel room in another country and assisting with a restore for entertainment, or the fact that if I travel almost anywhere I have someone to hang out with. But there’s more.

The night I got back from the PASS Summit (after a wonderfully spectacular time), I was greeted by my parents’ news that my dad would be having heart surgery the following week. I was pretty annoyed at the time (they had known for a week and didn’t call), but even more concerned.

So I ended up having to fly to New Orleans the next weekend and my dad’s surgery was on the following Tuesday. I tweeted about it, and the outpouring of thoughts, prayers, and emails was incredible. My dad isn’t fully recovered yet, but I’m still thankful for everyone’s thoughts and prayers.

I even spent my birthday “celebrating” over a pink bubbly beverage with a couple of “family members”.

We truly are a very special community. And one that I am very proud to be a part of.


How to Find Data File Page Counts for a Table

Last week one of the guys on my DBA team asked me if I knew of a way to identify which data from a table is in a particular data file. When a database has multiple data files, SQL Server typically does a proportional fill of those files: it tries to write an even number of pages to each of the data files. If a new file is added, the engine will keep adding pages there until the fill levels are about even.
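To picture the scenario, here’s a minimal sketch of a database with two data files in the same filegroup (the pagesplittest name matches the demo database used later in this post; the logical file names and paths are placeholders I made up for illustration):

-- Hypothetical two-file database: proportional fill spreads new pages
-- across both files in the PRIMARY filegroup.
create database pagesplittest
on primary
    (name = 'pagesplittest_data1', filename = 'C:\SQLData\pagesplittest_data1.mdf', size = 100MB),
    (name = 'pagesplittest_data2', filename = 'C:\SQLData\pagesplittest_data2.ndf', size = 100MB)
log on
    (name = 'pagesplittest_log', filename = 'C:\SQLData\pagesplittest_log.ldf', size = 50MB);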

The reason this question came up was that we had a rather large table and two data files, one of which was pretty empty, so we wanted to see how much data was left in the second file (these tables are refilled pretty regularly, so it might have been possible to get all of the data out and drop the second data file).
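If it did turn out that the second file could be emptied, the cleanup itself would look roughly like this (a sketch only, reusing the placeholder names from above; run it only after confirming the remaining pages can move):

-- Push any remaining pages off the second data file, then remove it.
use pagesplittest;
dbcc shrinkfile ('pagesplittest_data2', emptyfile);
alter database pagesplittest remove file pagesplittest_data2;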

Unfortunately, in SQL Server 2008 R2 (and below) there is no documented method for doing these checks. The process involves dumping the output of DBCC IND into a temp table and running a select statement against it (see the code below):

-- Drop the temp table if it exists from a previous run
if object_id('tempdb..#dbcc_indresults') is not null
begin
    drop table #dbcc_indresults;
end

-- Holds the output of DBCC IND
create table #dbcc_indresults (
    PageFID bigint,
    PagePID bigint,
    IAMFID bigint,
    IAMPID bigint,
    ObjectID bigint,
    IndexID bigint,
    PartitionNumber bigint,
    PartitionID bigint,
    IAM_CHAIN_TYPE varchar(50),
    PageType bigint,
    IndexLevel bigint,
    NextPageFID bigint,
    NextPagePID bigint,
    PrevPageFID bigint,
    PrevPagePID bigint);

-- Capture the page list for index 1 (the clustered index) of testtable
insert into #dbcc_indresults
    exec('dbcc ind (''pagesplittest'', ''testtable'', 1)');

-- Count the pages per data file
select t1.name as 'File Name', count(*) as 'Pages'
from #dbcc_indresults t2
join sys.database_files as t1
    on t1.file_id = t2.PageFID
group by t2.PageFID, t1.name;

However (and I have to credit this find to this weekend’s Northeastern snow storm; I was without cable and internet, so I didn’t have much to do except play with my local SQL Server and read), there is a new system function named sys.dm_db_database_page_allocations built into SQL Server 2012. It accepts the object_id as a parameter (similar to sys.dm_db_index_physical_stats) and brings back a lot of good information about the pages within a given object. This data was available through various commands in older versions of SQL Server, but this brings it together in one place. The code below will bring back the number of pages per data file for a given object.

-- Count pages per data file for testtable using the new SQL Server 2012 DMF
select t1.name as 'File Name', count(*) as 'Pages'
from sys.dm_db_database_page_allocations
    (db_id(), object_id('testtable'), null, null, 'detailed') as t2
join sys.database_files as t1
    on t1.file_id = t2.extent_file_id
group by t2.extent_file_id, t1.name;

Only one problem I ran into: the page counts didn’t match the older method of using DBCC IND. So I asked Paul Randal (blog|twitter) for his thoughts on the matter, and he had me check sys.dm_db_index_physical_stats, which nearly matched DBCC IND (DBCC IND also accounts for the IAM page).
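For reference, that cross-check looks roughly like this (a sketch, again assuming the testtable demo table):

-- Page counts per index level for testtable, to compare against DBCC IND
select index_id, index_level, page_count
from sys.dm_db_index_physical_stats
    (db_id(), object_id('testtable'), null, null, 'detailed');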

So either my test data is weird (I did perform this test twice after wiping out and recreating the database) or there is a bug in sys.dm_db_database_page_allocations. I opened a Connect bug with Microsoft on it, so hopefully I will get some feedback soon.

Presenting at SQL Saturday 96 (Washington DC)

I’ll be presenting two sessions at SQL Saturday #96 in Washington, DC (well, actually Chevy Chase, MD) on November 6. If you are in the mid-Atlantic and looking to get a great day of SQL Server education, please sign up.

The two topics I will be presenting on are Virtualization for DBAs and SANs for DBAs.

In the virtualization talk I will cover:

  • Basics of Virtualization (Hypervisors, Hosts, and Guests)
  • Advanced Features of Virtualization (vMotion, Live Migration, and DR)
  • Potential gotchas of running your database servers in a Virtual Environment
  • Things you need to configure in your DB servers for Virtual Environments

In the SAN talk I will cover:

  • How databases interact with storage
  • Why memory is faster than a hard drive
  • RAID levels and what they mean for databases
  • Why a SAN is like living in a New York highrise
  • How to talk to your SAN administrator

All in all, it should be a great day of training; there will be many top-notch speakers and MVPs in attendance. I’m honored to be speaking.

PASS Wrapup

The best thing I can tell you about the PASS Summit is my sleeping schedule since I’ve been home–on Saturday night I slept for about 11 hours, Sunday was more normal at 9, and then Monday night was big, with me going to bed at 8 and waking up around 7. So, now that I’m finally caught up with my sleep I can tell you a little more about the Summit.

I’ve been to other database conferences before, but this was my first trip to the PASS Summit, and I can tell you there is an energy and vibe there like no other. That, and the learning is absolutely top notch: the people who write the books and the software are there, and eager to share what they know. I’m not going to recap everything bit by bit, but I definitely learned something in every session I went to, I made a ton of great connections, and all in all I had a blast.

There is no better learning, networking, or fun value in all of IT. The SQL Server community is a family and really understands the value of community: I didn’t have a meal by myself, and I spent so little time in my hotel room that I requested a discount. Do whatever it takes, but if you are a SQL Server professional, get yourself to Seattle in 2012.

PASS Day Three Keynote–Dr. DeWitt

I’m starting with my last paragraph: this blog post is a really long read. I recommend going to the SQLPASS website and watching this keynote; you will learn a ton about NoSQL and how it works.

This was truly incredible: really in-depth technical content, explained in a way that DBAs could easily understand. If anyone from PASS or Microsoft (or hardware vendors) is reading this, this is the content we want: deeply technical and incredibly well presented.

So this is the talk that all of the hardcore database folks have been looking forward to. Dr. David DeWitt, Technical Fellow at Microsoft, will be presenting. These talks have in the past been deep dives into the optimizer and the math behind it. Speculation is that he’s going to be talking about Big Data.

We start out with Rob Farley and Buck Woody singing a duet about query performance–it was awesome.

We then heard about the PASS Summit and SQLRally dates for next year: the Summit is back in November, the 9th through the 12th. SQLRally will be May 8th-12th in Dallas. Additionally, SQLRally Nordic has been extremely successful and sold out.

Then Dr. DeWitt takes the stage to discuss Big Data. He starts out talking about the work Rimma Nehme did on the query optimizer for Parallel Data Warehouse. The good doctor talks about the pain of preparing his keynotes, something I think we all experience as speakers.

Talking about very big RDBMS systems: by about 2009 there was roughly a zettabyte of data in the world (1,000,000 petabytes), and it’s expected to grow by a factor of 40! More data is generated by sensors rather than entered by hand, or it’s entered by a much larger group of people (social media). He also talks about the dramatic changes in hardware costs.

Then he goes into the discussion about how to manage big data. eBay manages 10 PB across 256 nodes. Facebook, on the other hand, uses 2,700 nodes to support 20 PB. Bing is 150 PB on 40k nodes.

Talks about NoSQL: benefits include JSON and not having to define a schema first. Updates don’t really happen to this data. Lower upfront software costs. The time from raw data to business insight can be lower. Data arrives and isn’t put into a schema; interpreting the structure is left to the application program.

Records are sharded across nodes by a key: MongoDB, Cassandra, Windows Azure. Hadoop is designed for large amounts of data with no data model for the records. Talking about structured versus unstructured data, and how “unstructured data” still has some structure.

Key-value stores are like OLTP; Hadoop is more like a data warehouse. He talked about eBay versus Facebook, and how much more CPU-efficient an RDBMS can be.

Talking about the shift from hierarchical systems to relational, and how SQL is not going away. Talking about Hadoop and its ecosystem of software tools. This really all started at Google, which needed something reliable and cheap for petabytes of clickstream data. HDFS is the file system, and MapReduce is the process used at Google for analyzing massive amounts of data.

Starts talking about the Hadoop Distributed File System, which is designed to scale to thousands of nodes. It assumes that hardware and software failures are common. It’s targeted towards small numbers of very large files, written in Java, and highly portable. Files are partitioned into big 64 MB chunks, and it sits on top of NTFS.

Each block is replicated to nodes of the cluster, based on the replication factor. First copy is written to original node, second copy is written to another node in the same rack (to reduce network traffic), and the third is put in another rack or even another data center.

The name node has one instance per cluster and is a single point of failure, but there is a backup node or checkpoint node which will take over in the event of failure. The name node constantly checks the state of the data nodes, much like the quorum or heartbeat in a regular cluster. It also tells the client which node to write each block to, and returns block locations to the client.

Talks about types of failures: disk errors, data node failures, switch/rack failures, name node failures, and data center failures. When a data node fails, its blocks are replicated to another node. If the name node fails, it’s not an epic failure; it automatically fails over to the backup node. The file system does automatic load balancing to evenly spread blocks among the nodes. In summary, it’s built to support thousands of nodes and hundreds of TBs, with large block sizes; this is designed for scanning, not OLTP. There is no use of mirroring or RAID; the redundancy comes from the highly replicated blocks.

The negative is that this makes it impossible to employ many of the optimizations used successfully by an RDBMS.

MapReduce is a programming framework for analyzing data sets in HDFS. The user only writes the map and reduce functions; the framework takes care of everything else. It takes a large problem, divides it into a bunch of much smaller problems, and then performs the same function on all of the much smaller pieces. The reduce phase combines that output.

The components include a job tracker (which runs on the name node), which manages the job queues and scheduling and hands work to the task trackers, and the task trackers themselves (which run on the data nodes), which execute the individual tasks. He shows an example that sums sales by zip code; it’s a really great illustration of MapReduce compared to SQL (see the sketch below). The map output is stored locally rather than in HDFS (better performance for small writes). The MapReduce framework does the sorting between the phases, and again the results are stored locally.
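For comparison, here is roughly what that same aggregation looks like in the SQL we already know. This is just a sketch; the sales table and column names are hypothetical, not something from the keynote.

-- Hypothetical sales table: the whole "sum sales by zip code" MapReduce job
-- collapses into a single GROUP BY in SQL.
select zip_code, sum(sale_amount) as total_sales
from dbo.Sales
group by zip_code;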

The workers’ load is distributed among the nodes, but data skew is still a problem, because a reducer can get stuck (for example, the huge number of New York zip codes versus, say, Iowa). This is highly fault tolerant; the MapReduce framework removes the burden of dealing with failures from the programmer.

On the other hand, you can’t build indexes, constraints or views.

Now we are talking about Hive and Pig: Facebook produced a SQL-like language called Hive, and Yahoo produced Pig. These are pretty SQL-like and hide the hard work of MapReduce; they’re basically an abstraction layer. HiveQL looks almost exactly like the SQL we know and love, but with a schema definition on top. Every day Facebook runs 150k warehouse jobs; only 500 are raw MapReduce, the rest are HiveQL.

Column and data types are richer in Hive than in SQL (columns can be structures or lists), and Hive tables can be partitioned. When you partition a Hive table, the partition name becomes the value it’s partitioned by, so that value isn’t repeated in the records (it’s taken out of them).

In a simple TPC benchmark, Parallel Data Warehouse was 4-10x faster than Hive.

Now talking about Sqoop, which moves data from the unstructured universe into SQL. It’s not an efficient process.

He summarizes: relational databases versus Hadoop. Relational databases and Hadoop are designed to meet different needs, and neither will become the single default. He feels that enterprise data managers will have the capability to merge the two worlds.

This was incredible: really in-depth technical content, explained in a way that DBAs could easily understand. If anyone from PASS or Microsoft (or hardware vendors) is reading this, this is the content we want: deeply technical and incredibly well presented.

Salary Data–Where to Get It

So in yesterday’s WIT luncheon, there was a bit of discussion on salary data. There was even a suggestion of going to your local HR department to find out the range for your position. DON’T FREAKING DO THAT!!!! This could be a whole other topic, but you don’t want to send a loud signal to your HR organization that you are shopping for jobs. That’s as much of a career-limiting move as not having a backup.

Here’s the link to the IRS data on average (median, mean, and top split) salaries for DBAs. It’s broken down by region, and from my experience with this data across a variety of IT positions, it has been reasonably accurate.

It covers SQL Server and those other databases, but it should give you a good starting point for negotiations. And remember: salary is always negotiable, even if you’re unemployed. If you don’t get that money up front, you will never see it.

What DBAs do, and how much they make.