The Self-Tuning Database?

There was a lot of talk on Twitter over the weekend about automation and the future of the DBA role. I’ve spoken frequently on the topic, and even though the PASS Summit program committee has had limited interest in my automation sessions, they have been amongst my most popular talks at other large conferences. Automating mundane tasks makes you a better DBA, and escalating the level of those tasks lets you focus on activities that really help your business, like better data quality (and watching cat videos on YouTube).

But Tuning is Hard

Performance tuning has always been something of a black art. I remember Oracle calling 9i the self-tuning database, because we no longer had to manually calculate how many blocks (pages) of memory were allocated to each pool. That was a joke—those databases still required a lot of manual tuning effort. However, that was then, and we’re in the future now. We’ll call the future August. Stay with me here, I’m going to talk about some theories I have.

So It’s August 2017

Let’s say you were the vendor of a major RDBMS, who also happened to own a major hyperscale cloud, and you had invested heavily in collecting query metadata in the last two releases of your RDBMS. Let us also accept the theory that the best way to get an optimal execution plan is to generate as many potential execution plans as possible. Most databases attempt a handful of plans before picking the best available one—this is always a compromise, as generating execution plans involves a lot of math and is very expensive from a CPU perspective. Let us also posit that, as owner of the hyperscale cloud, you have a lot of available processing power, and you’ve had your users opt in to reviewing their metadata for performance purposes. Additionally, you’ve taken care of all of the really mundane tasks like automating backups, high availability, consistency checks, etc. So now it’s on to bigger fish.


Still With Me?

Ok, so we have all the tools in place to build our self-tuning database, so let’s think about what we would need to do. Let’s take a somewhat standard heuristic I like to use in query tuning—if a query takes more than 30ms to execute, or is executed more than 1,000 times in a day, we should pay attention to it for tuning purposes. That’s a really big filter—so we’ve already narrowed down the focus of our tuning engine (and we have this information in our runtime engine, which we’ll call the Query Boutique). We have also had our users opt in to using their metadata to help improve performance.
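As an aside, if you wanted to apply that heuristic today, the SQL Server 2016 Query Store already captures the raw numbers. A rough sketch (the thresholds are mine, and I’ve simplified the interval math to the last day of collected stats):

SELECT q.query_id,
       qt.query_sql_text,
       SUM(rs.count_executions) AS executions_last_day,
       AVG(rs.avg_duration) / 1000.0 AS avg_duration_ms   -- stored in microseconds
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q
    ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan AS p
    ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats AS rs
    ON p.plan_id = rs.plan_id
JOIN sys.query_store_runtime_stats_interval AS i
    ON rs.runtime_stats_interval_id = i.runtime_stats_interval_id
WHERE i.start_time >= DATEADD(DAY, -1, SYSUTCDATETIME())
GROUP BY q.query_id, qt.query_sql_text
HAVING AVG(rs.avg_duration) > 30000        -- more than 30ms on average
    OR SUM(rs.count_executions) > 1000;    -- or more than 1,000 executions/day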

So we identify the problem queries in your database. We then export the statistics from your database into our backend tuning system. We look for (and hypothetically apply) any missing indexes to the structure, to evaluate the benefit each would bring. We then attempt to generate all of the execution plans (yes, all of them—this processing is asynchronous, and doesn’t need to be real time). We could even collect read/write statistics on given objects and apply a weighted value to a given index. We could then take all of this data and run it through our back end machine learning service, to verify that our query self-tuner algorithm was accurate, and to help improve our overall process.
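Some of that plumbing exists in the engine today. The missing index DMVs, for example, expose the optimizer’s own suggestions along with an impact estimate you could use as a crude weight—a sketch (the weighting formula is only an illustration, and these DMVs reset on instance restart):

SELECT d.statement AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       s.user_seeks + s.user_scans AS potential_uses,
       s.avg_user_impact * (s.user_seeks + s.user_scans) AS weighted_benefit
FROM sys.dm_db_missing_index_details AS d
JOIN sys.dm_db_missing_index_groups AS g
    ON d.index_handle = g.index_handle
JOIN sys.dm_db_missing_index_group_stats AS s
    ON g.index_group_handle = s.group_handle
ORDER BY weighted_benefit DESC;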

We could then feed this data back into the production system as a pinned execution plan. Since we are tracking the runtime statistics, if the performance of the plan drops off, or we notice that the statistics have changed, we could push out a new execution plan and start the whole process over again.
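The pinning mechanism, at least, is already exposed by the Query Store; a tuning service could conceivably call the same procedures programmatically. The query_id and plan_id values here are placeholders:

-- Pin the plan our hypothetical tuning service selected...
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 101;

-- ...and if the runtime stats later show a regression, release it
-- and let the whole process start over.
EXEC sp_query_store_unforce_plan @query_id = 42, @plan_id = 101;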

So there, I didn’t write a line of code, but I laid out the architecture for a self-tuning database. (I thought about all of this on a bike ride; I’m sure Conor could do a lot better than me.) I’m sure this would take years and many versions to come into effect—or it may never happen at all. To be successful in this changing world of data, you need to stay ahead of the curve: learn how clouds work, how to script, how to automate, and how to add value.

Would You Fly a Plane with One Engine? Or Run Your Airline with One Data Center(re)?

For those of you who may have been in the US or outside of Europe this past weekend, you may not have heard about the major British Airways IT outage that took down their entire operations for most of Saturday and into Sunday. Rumors, which were later confirmed, were that a switch from primary to backup power at their primary data centre (they’re a UK company, so I’ll spell it in the Queen’s English) led to a complete operations failure. I have a bit of inside information, since my darling wife was stuck inside Terminal 5 at Heathrow.


There’s a requirement for planes that travel across oceans called ETOPS, which stands for Extended Range Operation with Two-Engine Airplanes—known colloquially as Engines Turn or Passengers Swim. These rules ensure that if a plane has a problem over a body of water, it can make it back to shore for a safe landing. As someone who flies across oceans a decent amount, I am very happy the regulatory bodies have these rules in place.

However, there are no such rules for the data centers that run airline operations. In fact, in January, Delta had a major failure which took down most of its operations for a couple of days. Most IT experts have surmised that Delta was running a single data center for its operations. Based on the evidence from Saturday’s incident with BA, I have to assume that BA was as well. One key bit of evidence was that BA employees were unable to access email. They are an Office 365 customer, so theoretically, even if on-premises systems were down, email should have worked. However, if they were using Active Directory Federation Services, so that all of their passwords were stored on-prem, then the data center being down would mean they couldn’t authenticate, and therefore would not have email.

The fact that email didn’t work was my biggest clue that BA was running a single data center. While some systems, particularly the mainframe systems that may handle flight operations, have a tendency not to fail over well across sites, Active Directory is one of the best distributed systems there is, and is extremely resilient to failures. In fact, given BA’s global business, I’m really surprised they didn’t have ADFS servers in locations around the world.

Enter the Cloud

Denny and I sat talking yesterday, running some numbers on what we thought a second data center would cost a company like BA. Our rough estimate (and this is very rough) was around $30–40 million USD. While that is a ton of money, it is estimated that the weekend’s mess may cost BA up to £150 million (~$192MM USD). However, companies no longer have to build multiple data centers in order to have redundancy, as Microsoft (and Amazon, and Google) have data centers throughout the world. The cloud gives you the flexibility to protect critical systems at a much lower cost. I’ve designed DR strategies for small firms that cost under $100/month, and I’ve had real-time failover that supported 99.99% uptime. With the resources of a firm like BA, this should be a no-brainer given the risk profile.

What About Outsourcing?

Much has been made of the fact that BA has outsourced many of its IT functions to TCS and various other providers. Some have even tried to place blame on the providers for this outage. Frankly, I don’t have enough detail to blame anyone, and it seems more like a data center operator’s issue. However, I do think it speaks to the lack of attention and resources paid to technology at a company that clearly depends on it heavily. Computers and data are more important to business now than ever, and if your firm doesn’t value that, you are going to have problems down the road.

Conclusions

In the cloud era, I’m convinced no business, no matter how big or small, should run with a single data center. It is way too cheap and easy to ship your backups to multiple sites and be online in a matter of hours with a cloud provider. Given the importance of airlines to our world economy, and their consolidation, it probably wouldn’t be a terrible idea for their regulators to require failover capability and failover testing. Don’t let this happen to your stock price.


An “Ask” for Microsoft—A Global Price List

And yes, I just used ask as a noun (I feel dirty); I wouldn’t do it in any context but this one. In reviewing my end of year blog metrics, my number one post from last year was one that listed the list price of SQL Server. I wrote that post because a) I wanted clicks and b) I knew what a pain it was to find the pricing in Microsoft documents. However, the bigger issue is that to really figure out what SQL Server costs, you need to go to another site to get Windows pricing, and probably yet another site to find out what adding System Center to your server might cost.

This post came up because Denny and I were talking the other night, as someone had posted to the Data Platform MVP list asking how much the standalone R Server product cost. We found a table on some Microsoft site:

[Screenshot: a Microsoft pricing table that lists R Server simply as “Commercial Software,” with no price given.]

I’m not sure what math is required to translate “Commercial Software” into a numeric value, but it is definitely a type conversion and those perform terribly. Eventually I found this on an Azure page:

This image is charged exactly like SQL Server 2016 Enterprise image, but it contains no Database elements and has the core ScaleR and DeployR functionality optimized for Windows environments. For production workloads we recommend that you use a virtual machine size of DS4 or higher.

This leads me to believe that R Server has the same pricing as SQL Server, but with the documents I have I am not certain of that fact.

What Do I Want?

What I want is pricing.microsoft.com—a one-stop shop where I can find pricing for all things Microsoft, whether they be Azure, on-premises, or Software as a Service. At worst it should be one click from the product name to its pricing page. Ideally, I’d like it all in a single table, but let’s face it, software pricing can be complex, and each product probably needs its own page with pricing details.

The other thing that would be really cool—and this is more of an Azure thing—is to have pricing data built into the API for deploying solutions. That way I could build pricing-based intelligence into my automation code, to roll out cost-optimized solutions for Azure.

Anyone else have feature suggestions?

Updated: Jason Hall has a great comment below that I totally forgot about. Oracle has a very good price list (it definitely wins the number-of-commas award) that is very easy to access. So, dear readers in Redmond: Oracle does it, so you should too!

Updated: There is some of this available in Azure. It’s not perfect though. https://msdn.microsoft.com/en-us/library/azure/mt219004?f=255&MSPPError=-2147217396. Amazon just announced enhancements to their version of this service. https://awsinsider.net/articles/2017/01/09/pricing-notifications.aspx

But What about Postgres?


Since I wrote my post yesterday about Oracle and SQL Server, I’ve gotten a lot of positive feedback (except from one grouchy Oracle DBA). That said, I should probably steer clear of Redwood Shores anytime soon. However, there was one interesting comment from Brent Ozar (b|t):

[Screenshot: a comment from Brent Ozar suggesting Postgres as an alternative.]

While Postgres is a very robust database that is great for custom developed applications, this customer has built a pretty big solution on top of SQL Server, so that’s not really an option.


However, let’s look at the features they are using in SQL Server and compare them to Postgres. Since this is a real customer case, it’s easy to compare.

1. Columnstore indexes—Microsoft has done an excellent job on this feature, and in SQL Server 2016 new capabilities like batch mode push-down drive really solid performance on large analytic queries. Postgres has a columnstore project, but it is not fully developed. There’s also this add-on https://www.citusdata.com/blog/2014/04/03/columnar-store-for-analytics/ which does not offer batch execution mode enhancements, and frankly offers mediocre performance.

You can compare this benchmark:

https://www.monetdb.org/content/citusdb-postgresql-column-store-vs-monetdb-tpc-h-shootout

to the SQL Server one:

SQL Server 2016 posts world record TPC-H 10 TB benchmark
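For context, adopting columnstore on the SQL Server side is a one-statement change—a minimal sketch, with a hypothetical fact table:

CREATE CLUSTERED COLUMNSTORE INDEX cci_FactSales
    ON dbo.FactSales;

-- Large aggregations over the table are now eligible for
-- batch execution mode:
SELECT ProductKey, SUM(SalesAmount) AS total_sales
FROM dbo.FactSales
GROUP BY ProductKey;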

2. Always On Availability Groups—in this system design we are using readable secondaries as a method to deliver more data to customers. It doesn’t work for all systems, but in this case it works really well. Postgres has a readable secondary option, but it is far less mature than the SQL Server feature. For example, you can’t create a temp table on a Postgres readable secondary.
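For reference, allowing read-only connections on a SQL Server secondary is a small configuration change (the availability group and replica names here are hypothetical), and clients then add ApplicationIntent=ReadOnly to their connection strings:

ALTER AVAILABILITY GROUP [AG1]
MODIFY REPLICA ON N'SQLNODE2'
WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));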

3. Analysis Services Tabular—there is no comparison here. Postgres has some OLAP functions that are comparable to windowing functions in T-SQL, but it has no in-memory calculation engine.

4. R Services—you can connect R to Postgres. However, SQL Server’s R Services leverages the SQL Server engine to process data, unlike Postgres, which uses R’s traditional approach of needing the entire dataset in memory. Once again, this would require a third-party plug-in to work in Postgres.
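For illustration, here is roughly what the in-engine approach looks like with sp_execute_external_script in SQL Server 2016 (external scripts must be enabled via sp_configure first; the table name is hypothetical):

EXEC sp_execute_external_script
    @language = N'R',
    @script = N'OutputDataSet <- as.data.frame(summary(InputDataSet$SalesAmount))',
    @input_data_1 = N'SELECT SalesAmount FROM dbo.FactSales';
-- The engine feeds the query result to R in-process, rather than R
-- pulling the entire table across a connection.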

5. While Postgres has partitioning, it is not as seamless as in SQL Server, and it requires some application changes to support:

https://www.postgresql.org/docs/9.1/static/ddl-partitioning.html

While I feel that SQL Server’s implementation of partitioning could be better, I don’t have to change any code to implement it.
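As a sketch of what “no code changes” means in practice: the partition function and scheme are metadata, the table is simply created on the scheme, and existing queries don’t change (all names and boundary values here are hypothetical):

CREATE PARTITION FUNCTION pf_SalesDate (date)
    AS RANGE RIGHT FOR VALUES ('2016-01-01', '2017-01-01');

CREATE PARTITION SCHEME ps_SalesDate
    AS PARTITION pf_SalesDate ALL TO ([PRIMARY]);

CREATE TABLE dbo.Sales
(
    SaleDate    date  NOT NULL,
    SalesAmount money NOT NULL
) ON ps_SalesDate (SaleDate);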

6. Postgres has nothing like the Query Store. There are data dictionary views that offer some level of insight, but the Query Store is a fantastic addition to SQL Server that helps developers and DBAs alike.
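For comparison, turning the Query Store on for a SQL Server 2016 database is a single statement (the database name is hypothetical); the closest Postgres analog I’m aware of is the pg_stat_statements extension, which captures far less detail:

ALTER DATABASE [MyDatabase]
SET QUERY_STORE = ON (OPERATION_MODE = READ_WRITE);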

7. Postgres has no native spatial feature. There is a plug-in (PostGIS) that provides one, but once again we are growing the footprint of third-party add-ins to manage.
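By contrast, spatial types ship in the SQL Server engine—a quick sketch (the coordinates are arbitrary examples; STDistance returns meters for SRID 4326):

DECLARE @seattle geography = geography::Point(47.6062, -122.3321, 4326);
DECLARE @london  geography = geography::Point(51.5074, -0.1278, 4326);

SELECT @seattle.STDistance(@london) / 1000.0 AS distance_km;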

Postgres is a really good database engine, with a rich ecosystem of developers writing code for it. SQL Server, on the other hand, is a mature product that has had a large push to support analytic performance and scale.

Additionally, this customer is leveraging the Azure ecosystem as part of their process, and that is only possible via SQL Server’s tight integration with the platform.

Stupid Interview Questions—Trying to Find the Right Employees

I was driving into the office this morning, and as I do whenever I’m in the car, I was alternating between NPR and Mike and Mike on ESPN Radio (professional sports are an interest of mine). Anyway, Mike and Mike were discussing a recent story about the Cleveland Browns asking their potential draft picks questions such as “Name all of the things you could do with a single brick in one minute.” I don’t know about you, but I’m not sure how that translates into hiring the most effective person to snap a football and then block a raging defensive lineman—but hey, I don’t work in football.

What does this have to do with football?

Bad IT Interviews

I do, however, work in IT—and I’ve interviewed for a lot of roles, as well as interviewed a lot of people for roles. Fortunately, for my current role I was hired through people I had worked with in the past, and there was barely a formal interview process. Even for my previous role, at the internet provider many of you are likely reading this post on, the interview process was straightforward and consisted mostly of conversations about technology and design style. I actually have to think back many moons, to one particularly bad interview with a director who thought he was the be-all and end-all of IT management. Some of the questions were:

  • How many gas/petrol stations are there in the United States?
  • Why is a manhole cover round?
  • How many pieces of bubble gum are in this jar? (Ok, I made this one up, but you get the idea)

To this list I would like to add the following questions, which I hope are destined for the waste bin of history:

  • What is your biggest weakness?
  • Where do you see yourself in five years?
  • How did you hear about this position?

None of the above questions really help me (as the hiring manager) determine whether a person is qualified for a role as a data professional (or, frankly, any other job). They are filler, and any halfway prepared candidate will have canned answers for them.

Building the Better Interview

I’ve been trained on a large number of interview techniques, between business school and corporate America. There is no magic set of interview questions. However, my favorite way to interview a candidate is to get them talking about a technology they’ve worked on and are passionate about. This serves two purposes—it lets me see whether they are really into technology or it’s just a job to them, and I can pretty quickly gauge their technical level with follow-up questions. Conversations are much better than trivia like “What is the third line item when you right-click on a database in SQL Server Management Studio?”

One other thing I’ll add: make sure you have qualified people doing the interview. If you are trying to hire a DBA and you don’t have any on staff, consider bringing in a consultant to conduct interviews—it’s a small investment that could save you a lot of money down the road.

So what stupid interview questions have you heard? Answer in the comments.

T-SQL Tuesday—Bad Bets

“The year was 94, in my trunk is raw, and in the rear view mirror was the mother $%^in’ law. Got two choices y’all: pull over the car, or bounce on the devil, put the pedal to the floor.” – Jay-Z, “99 Problems”

Ok, so the situation I’m going to be talking about isn’t as extreme as Jay’s—being chased by the cops with a large load of uncut cocaine in the trunk. The year was actually 2010. Things had been going well for me in my job: I had just completed a major SQL consolidation project, built out DR and HA where there was none before, and was speaking regularly at SQL community events. A strange thing happened, though—I got turned down to go to PASS Summit (in retrospect, I should have just paid out of pocket), and I wasn’t happy about it. All of the resources in the firm were being sucked into the forthcoming SAP project. So, back when I thought staying with a company for a long time was a very good thing (here’s a major career hint—focus on your best interest first, because the company will focus on its best interest ahead of yours), I decided to apply for the role of Infrastructure Lead for the SAP project. I’ve blogged about this in great detail here, so if you want to read the gory details you can, but I’ll summarize below.

The project was a nightmare—my manager was quite possibly mental, the CIO was an idiot who had no idea how to set a technical direction, and as an infrastructure team we were terribly under-resourced. The only cool part was that, due to the poor planning of the CIO and my manager, the servers ended up in Switzerland, so I got to spend about a month there and work with some really good, dedicated folks. I was working 60–70 hours a week, gaining weight, and just generally unhappy with life. I had a couple of speaking engagements that quarter, the biggest of which was SQL Rally in Orlando—I caught up with friends (thank you Karen, Kevin, and Jen) and started talking about career prospects. I realized that what I was doing was useless, and wasn’t going to move my career forward in any meaningful sense. So I started applying for jobs, and I ended up in a great position.

I’ve since left that role, and am consulting for a great company now, but almost every day I look back fondly at all the experiences I got to have in that job (the day after I met my boss, he offered to pay my travel to PASS Summit 2011) and how much it improved my technical skills and gave me tons of confidence to face any system.

The moral of this story is to think about your life ahead of your firm’s. The job market is great for data pros—if you are unhappy, leave. Seriously—I had five recruiters contact me today.

Backing Up Your SQL Server Database to Windows Azure Blob Storage

So recently I was speaking at my first SQL Saturday of 2014, in Nashville (great job by Tammy Clark (b|t) and her team putting on a great event!), and due to a weird confluence of circumstances I couldn’t get to my laptop (it involved my car being in the shop, an emergency performance tuning engagement, snow, and not being able to get into my office).

So, I thought, no fear—I have my trusty Surface Pro and an MSDN account. I fired up a couple of virtual machines running SQL Server 2012 and 2014. The databases I use for my Data Compression and Columnstore presentations are AdventureWorks, but I do a lot of prep work to build some big tables to make the demos more dramatic. Unfortunately, when I started down this path, performance was pretty slow—something I attribute to thin provisioning on the Windows Azure infrastructure side.

Since both databases (2012 and 2014) are the same, I thought I’d just do a backup and restore, especially since I didn’t need to actually redo the prep work—I just had to fire up a restore. Additionally, I wanted to learn how to do something new.

First Things First

So I’m going to go out on a limb and assume that you have a Windows Azure account. From there, you’ll need to create a storage container.


You want to make note of a couple of things in your storage account. I added a new storage account called ‘sqlbackupsjd’, and then I created a container called ‘mycontainer’. You will want to make note of the URL associated with this container; we will need it when we get back to SQL Server.

Also, you’ll want to click ‘Manage Access Keys’ and grab the primary access key.


Moving on to SQL Server

You’ll need to be on SQL Server 2012 SP1 CU2 or later to take advantage of this feature. The first thing we need to do is create a credential, to be able to pass our security information to Windows Azure.
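Here’s a sketch of what that credential looks like (the identity is your storage account name, and the secret is the access key from the portal; the credential name is my own placeholder):

CREATE CREDENTIAL AzureBackupCredential
WITH IDENTITY = 'mystorageaccount',            -- your storage account name
     SECRET = '<storage account access key>';  -- primary access key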

So in my case, ‘mystorageaccount’ is ‘sqlbackupsjd’ and the secret is my storage access key. From here, backing up the database is really straightforward.
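Something like this (the backup file name is a placeholder):

BACKUP DATABASE AdventureWorks2012
TO URL = 'https://sqlbackupsjd.blob.core.windows.net/mycontainer/AdventureWorks2012.bak'
WITH CREDENTIAL = 'AzureBackupCredential',
     COMPRESSION, STATS = 5;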


Restoring is just the reverse of the process: on the target server we create the same credential, and then do the restore.
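Roughly, with the same placeholder names as above:

-- On the target server, after creating the credential:
RESTORE DATABASE AdventureWorks2012
FROM URL = 'https://sqlbackupsjd.blob.core.windows.net/mycontainer/AdventureWorks2012.bak'
WITH CREDENTIAL = 'AzureBackupCredential';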


Summary

Cloud backup is one of my favorite use cases for Azure services. Much like DR, many organizations have a requirement to keep offsite backups, and this can be an easy way to automate that, without worrying about handing off your tapes to someone else (or storing them in the DBA’s wine cellar). It’s a really easy process with SQL 2012.


Sales Reps–Please Don’t BS Me, Alright?

Today is my morning of big data storage events—I’m attending two, from two different vendors, in about four hours. One down so far, and it was pretty good, until…

I’ve bashed sales reps before (on Twitter and on this blog); I’ve even offered lists of things not to do. Well, today’s presentation was on par with some of the best I’ve seen. I was engaged, we had a good discussion of the architecture of Hadoop and the kinds of data applications where it really makes sense, and I wasn’t bashing the vendor on Twitter like I sometimes do.

But Then,

The vendor had a slide up with the Hadoop ecosystem—there are a lot of components there, and they aren’t all needed. I thought a really good comparison would be to SQL Server: we don’t always need replication or Analysis Services installed, but if we want to have a database, we need the engine. Hadoop is a lot like that—you can get by with just a few components out of the total stack.

At that moment the presenter mentioned SQL Server, and I thought, great, this will be a really good example. Then he asked, “What is the core engine of SQL Server?” (The right answer, I think, is Sybase, then it was rewritten for 2005, iirc—someone correct me if I’m way off.) He eventually answered his own question with “Jet Database,” using the example that you can install SQL Server without installing Jet. As far as I can tell, and from my Twitter queries, SQL Server has never run on Jet, though Jet may run on SQL Server now.

Anyway, the trivia isn’t the point—if you are quoting a fact in your presentation, be certain of it, and if you aren’t, either don’t use that fact or clarify, saying “I think this to be the truth, but I’m open to facts.” After this, a) I didn’t trust the speaker’s credibility, and b) I was distracted trying to confirm that Jet was never a part of SQL Server.

I guess I can add one more thing for sales reps not to do—don’t make $&%# up. You may have a subject matter expert in the room, and you will look like an idiot.

Vendors, Again—8 Things To Do When Delivering a Technical Sales Presentation

In the last two days, I’ve sat through some of the most horrific sales presentations I’ve ever seen—worse than the time share in Florida. If you happen to be a vendor and are reading this (especially if you are a database vendor—don’t worry, it wasn’t you), I hope this helps you craft better sales messages. In one of these presentations, the vendor had a really compelling product that I still have interest in, but I was really put off by bad sales form.

I’ll be honest, I’ve never been in sales—I’ve thought about it a couple of times, and would still consider it if the right opportunity came along—but I present, a lot. Most of these things apply to technical presentations as well as sales presentations. So here goes.

The top 8 things to do when delivering a sales presentation:

  1. Arrive Early—ask the meeting host to book your room a half hour early and let you in. This way you can get your connectivity going and everything started before the meeting actually begins, without wasting the attendees’ valuable time or, more importantly, cutting into your time to deliver your sales message. Starting on time also lets you respect your attendees’ schedules on the back end of the presentation.
  2. Bring Your Own Connectivity—if you need to connect to the internet (and if you have remote attendees, you do) bring your own connectivity. Mobile hotspots are widely available, and if you are in sales you are out of the office most of the time anyway, consider it a good investment.
  3. Understand Your Presentation Technology—please understand how to start a WebEx and share your presentation. If you have a Mac, bring any adapters you need to connect to video. If you want to use PowerPoint presenter mode (great feature, by the way), make sure the audience sees your slides, not the presenter view. Not being able to do this is completely inexcusable.
  4. Understand Who Your Audience Is—if you are presenting to very senior infrastructure architects at a large firm, you probably don’t need to explain why solid state drives are faster than spinning disks. Craft your message to your intended audience, especially if it has the potential to be a big account. Also, if you know you are going to have remote attendees, don’t plan on whiteboarding anything unless you have an electronic means to do so; otherwise you are alienating half of your audience.
  5. Don’t Tell Me Who Your Customers Are—I really don’t care that 10 Wall St banks use your software/hardware/widget. I think vendors all get that same slide from somewhere. Here’s a dirty little secret—large companies have so many divisions/partners/filing cabinets that we probably do own 90% of all available software products. It could be in one branch office that some manager paid for, but yeah technically we own it.
  6. I Don’t Care Who You Worked For—While I know it may have been a big decision to leave MegaCoolTechCorp for SmallCrappyStorageVendor, Inc., I don’t really care that you worked for MegaCoolTechCorp. If you mention it once, I can deal with it, but if you keep dropping the name it starts to get annoying and distracting.
  7. Get on Message Quickly—don’t waste a bunch of time on marketing, especially given point #4—knowing your audience. If you are presenting to a bunch of engineers, they want to know about the guts of your product, not what your company’s earnings were. Like I mentioned above, one of the vendors I’ve seen recently has a really cool product, which I’m still interested in, but they didn’t start telling me about the product differentiation until 48 minutes into a 60-minute presentation.
  8. Complex Technical Concepts Need Pictures—this is a big thing with me. I do a lot of high availability and disaster recovery presentations—I take real pride in crafting nice PowerPoint graphics that take a complex concept like clustering and simplify it so I can show how it works to anyone. Today’s vendor was explaining their technology, and I was pretty familiar with the technology stack, yet I got really lost because there were no diagrams to follow. Good pictures make complex technical concepts easy to understand.

I hope some vendors read this and learn something. A lot of vendors have pretty compelling products, but fail to deliver the sales message, and that failure costs them money. I don’t mind listening to a sales presentation, even for a vendor I may not buy something from, but I do really hate sitting through a lousy presentation that distracts me from the product.

Vendors and Clients, or How not to sell to me

Vendors, Customers, Community

I love vendors—I’m heavily involved with the SQL Server community, and the support of Microsoft and the many other vendors in our space makes our community possible. I recommend their software to friends; I know most of their evangelists. I’m not always the best customer, I’ll admit. I have done evaluations of vendor software I probably wasn’t planning to buy, just to be able to share feedback and try to give a little back. Some vendors love this; others hate it. I’ve been slow in responding to follow-up emails, especially after attending events, because I have other priorities than returning sales calls as soon as they come in.

Unfortunately, sometimes you have to deal with vendor sales reps who are so focused on doing what’s right for them that they refuse to hear what’s right for the customer (or prospective customer). They just won’t leave you alone. I understand these folks work off of commission, so they are trying to get a sale, and I (representing a Fortune 100 company) may represent a big target, but there is a right way and a wrong way to sell to me. Pissing me off is generally a bad idea.

Shortly after a SQL Saturday, I started receiving emails (to my personal account, not work) from a vendor I knew, but from a sales representative whom I had never met. Remember, this was a SQLSaturday, so they would have my full name and company name from the registration. Even if I used a different email address, my name isn’t that common, and he could have made a reasonable guess that I was the same person as the other person at the same company with the same name. Or he could have asked me.

The emails looked like the standard ones from the vendor—I’ve already worked pretty extensively with this vendor on evaluating their product. It will be a good fit for our future environment; we just aren’t there yet. I mention this because I’m 95% certain my name is in their sales lead system. If previous sales reps managed to have extended conversations with me about their product and didn’t add this to their sales lead system, then that’s another way something went wrong—but I highly doubt that. About a week ago, I got this email from the sales rep.

How Not To Send an Email

When I saw this email, I thought, “Wow, this just comes off as really desperate and annoying.” Even worse, I’m thinking this is the last person on Earth I want to work with in bringing a solution into my company. It’s even more damning because I’ve worked with these folks before. I don’t know if this guy is my new sales rep, another rep trying to steal business from the first, or just naïve about how large enterprises acquire software. As my friend Karen Lopez likes to say, “They will never treat you better than they did during the sales process.” I wonder if his boss knows he’s talking to prospective customers in this manner?

I will admit that occasionally things fall through to my spam folder, and I sometimes don’t respond to vendor contacts as soon as they would like. I probably should have let him know that I was already in their system and asked him to check with the other sales reps about the status of our evaluations, but I assumed he would look at the sales system eventually, rather than making me do that data mining for him. The truth is, I’m not going to have the time or resources to work with multiple competing reps at the same vendor. If I need to talk to someone there, I’ll work with one of the reps I’ve already been working with.

This isn’t so much about the number of emails as about their tone. I just don’t understand why a sales rep would ignore the data they already have about me and my company. Maybe this company pays salespeople per contact? Maybe having lots of duplicate customers in the system benefits somebody?

I know trying to track down leads is a tough job. It requires a lot of follow-up to get people to respond to voicemails and emails, because so little of their time is allocated to investigating new software. If I responded positively to every request for a call or a demo from every vendor I come in contact with, I’d be busy 100 hours a week doing just that.

It gets worse though.

These emails made me feel bad—as in uncomfortable, like I was obligated to reach out to the rep and it was my problem. I am extremely thankful to all the people and companies who support our community, and I would recommend nearly all of their products, for specific needs, to my friends and clients. In fact, I recommended this vendor’s product in a presentation I gave last week. Sales reps—just use your heads. If I’m not returning your email/voicemail/call to my dog’s cell phone/LinkedIn message, it means that as a company we aren’t interested right now. Nothing more.
