Clean Code, the SQL

As a developer, DBA, or hybrid “DbDev”, you’re often tasked with writing or improving the stored procedures which house that complex logic.  And that’s my topic today: being clean about your SQL code.

Get it? It’s just too punny! … Ok I’m done.

The Coding Blocks guys did a series of episodes about the perennial favorite Clean Code book.  If you haven’t subscribed to their podcast…

do it do it now -Arnold
What are you waiting for?!?!

And it’s a great book, no doubt. But those guidelines for application code are not 100% directly applicable to database code.

wait... what?
Srsly?

Let’s back up a second. Why? That sounds rather counter-intuitive, no?  Ok, more context. See, the traditional (“legacy”?) app consists of about 3 layers, the bottom one being the database. And it’s usually relational, and is usually responsible for far more than simple data access and persistence.  Read: complex business rules and process logic.  Data flow, not just getters and setters.

So that means, as a developer, DBA, or hybrid “DbDev”, you’re often tasked with writing or improving the stored procedures which house that complex logic.  And that’s my topic today: being clean about your SQL code.

Part 1: Comments

There’s a fairly famous quote from the book about comments:

Comments are always failures.

He’s using hyperbole, but for a purpose.  While his views on comments may be extreme, most programmers tend to realize the core essence of that chapter, which is that comments only serve to express something in plain English that the code has failed to express clearly enough to be easily and immediately understood.

With SQL scripts, and in particular with stored-procedures, I’m taking a somewhat opposite stance:

Comments are always appreciated, even if they’re potentially outdated or inaccurate.

There are two types of comments in SQL, the --inline and the /* block */.  Different people have their preferred flavors of block — sometimes it’s just several lines prefaced with the double-dash --.  And that’s fine, whatever floats your comment-boat.

The HMS Commentus comment-boat: I made my own!! (mostly copied from an example at http://www.asciiworld.com/-Boats-.html)

In particular, I always encourage a comment block at the top of every stored-proc & other user-defined programmable objects (functions, types, etc.).  Just a small example for illustration:

/*
Location: Server.Database
Author: NateTheDBA
Created: 2012-12-21
Description: Gets users who have not logged in since the given date.
Consumers: MyCoolAppName, MyReportServer
Revisions
2015-05-15, Nate: removed archive (never used after archive-date)
2017-06-07, Nate: fixed formatting for blog post
*/
CREATE PROCEDURE GetUsersNotLoggedInSince
    @SinceDate datetime2
AS
BEGIN
    --some clever stuff goes here...
END

“But wait”, you say, “what about source control?”  Yes, all your programmable objects (and even, arguably, your reference data) should be in source control.  There are tool-vendors aplenty to help you with that.  But guess what?  Budgets.  Time & effort.  Oh, did I mention, legacy legacy legacy?  Yes, dear reader, the average business has years (decades) of organically evolved relational databases and processes.  Are you the guy or gal to swoop in on your unicorn and seamlessly convert their entire data tier infrastructure to a beautiful DevOps pipeline with shiny rainbows and kittens for all?  No?  Okay then.  Baby-steps.

Not that I’m bitter or anything…

Yes, my procs are in source control.  It’s called “daily automated script-out-objects-to-files which are then committed to SVN”.  It’s not built-in to SSMS.  Which means that I, or another DBA, or a potential consultant, or a Dev who gets enlisted to help improve a proc that runs for hours when it should only take minutes, would be inconvenienced by the extra trip to a separate tool/system to fetch some change-history just for context.  And really, that’s all this is for — CONTEXT.  We like to know what it is we’re working on when we start to work on it, without having to traverse a change-tree or go bug 3 other people who “might” have touched it last.  I’m not asking for a detailed log of every single time someone touched the thing; just give me the overview, the milestones and significant changes to functionality/features/scope so that I have a jump-off point for troubleshooting/testing/reasoning about it.

“But wait”, you say again, “shouldn’t your name be a sufficient description of what the proc does?”  Sure, in theory.  Until you have dependencies which need that name to stay the same even after an update or logic-change.  Like reports.  Or data-connected Excel workbooks.  Which are used daily by managers, who will come yelling at you if their worksheets suddenly stop functioning.

end rant

Back to comments in general.  The reason they’re helpful (besides documentation-headers for objects) is that they provide context and explain intent.  Half the time, my job as a DBA is improving or fixing someone else’s code.  Therefore, I want to see, in plain English, what it is they’re trying to accomplish, notes about attempts and failures, and the like.  Yes, I could have a discussion with them.  And I will.  But if I’m working on it asynchronously and they’ve moved on to something else, or our hours are different, I want those little nuggets of context and intent to be right there in the script, because that’s where I’m working!

What about queries that get passed-down from the app to the DB?  ORMs don’t support pre-pending a comment to their data calls, do they?  I wish.  Maybe some do, I haven’t researched it, but I know for sure that LINQ doesn’t.  But then again, when I’m using a query-capture tool (like DMVs, Profiler, X-events, or a vendor monitoring tool), ORM queries are so painfully obvious in comparison to hand-crafted SQL that I can usually spot them from a mile away, and go bother the app-devs for some context & conversation.  If you’re one of the poor unfortunate souls who still passes ad-hoc generated SQL statements down thru ODBC to your DB, then sure, a little comment won’t hurt anybody.

you poor unfortunate soul
it’s sad, but true…

So do your DBAs a favor, comment your SQL code, at least in terms of programmable database objects and ad-hoc scripts.  I promise, it’ll make them hate you less.  It might even make you love yourself more, because 3 months down the road when you revisit that proc, and you need to remember what it was for and why you did it that way, you’ll see it right there in your very own writing!  (OK, typing.)

Part 2: SRP, Encapsulation, and Abstraction

A bit of paraphrase of one of the book’s key points:

A reusable module (function, method) should do one thing, and do it well.

Also, the DRY principle:

Don’t repeat yourself.

When building SQL modules, we’re usually concerned with performance and accuracy, over abstraction and composability.  Therefore, repeating oneself is not necessarily a bad thing, when done for the right reasons.  Diligence is a big factor here — if there’s a non-trivial relationship between some entities that you’re repeating in several places, and you know that it could become a maintenance headache if that relationship’s definition has to change later, do as much as possible to mitigate the risk of dependency/consistency-loss.  This can be documentation, comments, and/or building that relationship into a view.
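For instance, here’s a minimal sketch of that last idea (the table names and the business rule are entirely made up): bake the oft-repeated relationship into a view, and its definition lives in exactly one place.

CREATE VIEW dbo.vwActivePayingCustomer
AS
SELECT c.CustomerID, c.Name, s.PlanCode
FROM dbo.Customer c
JOIN dbo.Subscription s
    ON s.CustomerID = c.CustomerID
WHERE s.IsActive = 1
  AND s.MonthlyRate > 0;

Consumers then join to the view instead of re-stating the join & filters everywhere; if the business later redefines what “paying” means, you change the view, not a dozen procs.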


The latter brings up an interesting topic, one which I had a lively discussion about with a colleague recently (he’s a developer, and a dang good one) — nested views.  Because inevitably, the encapsulation of those relationships & business-rules into things like views or ITVFs (inline table-valued functions) can and will lead to nesting those objects into other objects.  And troubleshooting many-level-nested views is a particularly frustrating exercise; in fact they’re what some DBAs call one of the “deadly sins of SQL”.  But there are perfectly valid reasons and uses for them, sometimes, and I really enjoyed the discussion thread we had on it, so I’ll have to expand on that in another post.

Anyway, I’m already getting long-winded and well over 1k words, so I’ll wrap it up for now, and continue this topic next week.

Thanks for reading, stay tuned!

Quickie: SQL DB Role Members

A typical part of a DBA’s work-week might involve the occasional DB user-role-membership management, so I hope this helps the lone-wolf DBAs out there and/or the developers who need to know what to ask for…

Just a brief post on adding/removing users (database level users) to/from roles (database level roles).  It’s relevant because several shops are still stuck supporting at least a few 2008 (or hopefully, 2008R2) instances, and there is a key difference between those and newer (2012 & up) versions in the “preferred” method of doing this security task.

There are reams of documentation and books and articles written about SQL security in general.  That is beyond the scope of this post (and indeed, beyond the scope of any single blog, unless you’re an SME on the subject!).  But a typical part of a DBA’s work-week might involve the occasional DB user-role-membership management, so I hope this helps the lone-wolf DBAs out there and/or the developers who need to know what to ask for, when they’re planning/deploying a new app against their SQL DB(s).

The “old” method involves calling system stored-procedures, sp_addrolemember and sp_droprolemember, in which you pass the role-name and username.  The “new” method, supported starting with SQL 2012, is to use the command-phrases ALTER ROLE [role] ADD MEMBER [user], and ALTER ROLE [role] DROP MEMBER [user].

The latter is more ‘standard‘, while the former is more ‘Microsoft-y‘.  I couldn’t easily find whether it’s part of the official ANSI standard or not… that’s an exercise for the reader.  What I find very interesting is that Azure’s data warehouse offerings require the old method.  Of course, hopefully in a DW setting you’re not messing with security nearly as much as a typical OLTP system, but… yeah.

Does that mean those Azure services are built on top of older SQL engine versions?  Possibly.  MSFT isn’t too open about the deep internals of such tech, but neither is any other cloud vendor, so we can’t really ask them such a question and expect anything more than a blank-stare.  But it is curious, no?

Not sure if curious… or suspicious.  Exactly.
Syntax examples:  Let’s add the user foo to the database Bard, in the db_datareader built-in role.  Then we’ll remove him.  (Or her, I guess; “foo” is a pretty gender-neutral name.)  Creating said user is easy, so I’ll start with that, and it’s the same in all supported versions.  You need a server-level login to link it to; if you don’t have one, I’ll show you how to create it first.

Create server-level login:

--preferably, you create a login for an existing AD/Windows account:
CREATE LOGIN [yourdomain\foo] FROM WINDOWS;
--or, you can just create a SQL login (not connected to domain/Windows/ActiveDirectory; also less secure, as discussed here and here)
CREATE LOGIN [foo] WITH PASSWORD = 'foobar';

Create database-level user:

USE Bard;
--if you made the domain/Windows login:
CREATE USER [foo] FOR LOGIN [yourdomain\foo];
--or, if you just made the SQL login:
CREATE USER [foo] FOR LOGIN [foo];

Now the role-membership.

Old way:

  1. Add user to role:
    • exec Bard.sys.sp_addrolemember
          @rolename = 'db_datareader'
          , @membername = 'foo';
  2. Check that it worked:
    • exec Bard.sys.sp_helprolemember
          @rolename = 'db_datareader'
    • It will show something like this:
      db_datareader with member 'foo'
  3. Remove user from role:
    • exec Bard.sys.sp_droprolemember
          @rolename = 'db_datareader'
          , @membername = 'foo';

New way (step 2, the “check”, is the same)

  1. Add user to role:
    • USE Bard;
      ALTER ROLE db_datareader ADD MEMBER [foo];
  2. Check (see above)
  3. Remove user from role:
    • USE Bard;
      ALTER ROLE db_datareader DROP MEMBER [foo];

Yay.

Notice that, because the “old way” is simply executing sys-sp’s, we can actually run it from any database context.  Whereas the “new way” requires you to connect to the database in question.
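Side note: if you’d rather not lean on sp_helprolemember for the “check” step, the catalog views give you the same answer, and they behave the same on old and new versions alike:

USE Bard;
SELECT r.name AS RoleName, m.name AS MemberName
FROM sys.database_role_members drm
JOIN sys.database_principals r ON r.principal_id = drm.role_principal_id
JOIN sys.database_principals m ON m.principal_id = drm.member_principal_id
WHERE r.name = 'db_datareader';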

Note: I am in no way shape or form responsible for you screwing up your database or SQL instance, nor for you getting yelled at by your DBA or security admin or any other form of verbal assault you may incur as a result of running these commands.  But since you need server-admin & database-owner equivalent permissions anyway, you’re probably one of those people already, so you’ll just end up yelling at yourself.

no guarantees
No substitutions, exchanges, or refunds.
Cleanup (just so you don’t muddy your instance/DB up with a silly example user):

USE Bard;
DROP USER [foo];
USE master;
DROP LOGIN [foo];

If you have any questions, feel free to reach out to me!

 

DBA Holy Wars Part 2

Battle 4: GUIDs vs Identities

This is an oldie but goody.  A) Developers want their apps to manage the record identifiers, but DBAs want the database to do it.  B) Developers prefer abstracting the identity values out of sight/mind, DBAs know that occasionally (despite your best efforts to avoid it) your eyeballs will have to look at those values and visually connect them with their foreign key relationships while troubleshooting some obscure bug.

there’s ALWAYS more…

But there’s more to it than that.  See, none of those arguments really matter, because there are easy answers to those problems.  The real core issue lies with the lazy acceptance of GUI/designer defaults, instead of using a bit of brainpower to make a purposeful decision about your Primary Key and your Clustered Index.

Now wait a minute Mr. DBA, aren’t those the same thing?

NO!  That’s where this problem comes from!

A good Clustered Index is: narrow (fewer bytes), unique (or at least, highly selective), static (not subject to updates), and ever-increasing (or decreasing, if you really want).  NUSE, as some writers have acronym’d it.  A GUID fails criteria ‘N’ and ‘E’.  However, that’s not to say a GUID isn’t a fine Primary Key!  See, your PK really only needs to be ‘U’; and to a lesser extent, ‘S’.  See how those don’t overlap each other?  So sure, use those GUIDs, make them your PK.  Just don’t let your tool automagically also make that your CX (Clustered indeX).  Spend a few minutes making a conscious effort to pick a different column (or couple columns) that meet more of these requirements.

For example, a datetime column that indicates the age of each record.  Chances are, you’re using this column in most of your queries on this table anyway, so clustering on it will speed those up.
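Here’s a minimal sketch of that arrangement (the table and columns are made up for illustration): the GUID stays the Primary Key, but the clustered index goes on the narrow, ever-increasing datetime column instead.

CREATE TABLE dbo.Orders
(
    OrderGUID  uniqueidentifier NOT NULL DEFAULT (NEWID()),
    CreatedAt  datetime2        NOT NULL DEFAULT (SYSUTCDATETIME()),
    CustomerID int              NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY NONCLUSTERED (OrderGUID)
);
--the clustered index lives on the narrow, ever-increasing column:
CREATE CLUSTERED INDEX CX_Orders_CreatedAt ON dbo.Orders (CreatedAt);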

Most of the time, though, if your data model is reasonably normalized and you’re indexing your foreign keys (because you should!), your PKs & CX’s will be the same.  There’s nothing wrong with that.  Just be mindful of the trade-offs.

Battle 5: CSV vs TAB

Who doesn’t love a good format-war?

Often, we have to deal with data from outside sources that gets exchanged via “flat files”, i.e. text files that represent a single monolithic table of data.  Each line is a row, and within each line, each string between each delimiting character is a column value.  So the question is, which is easier to deal with as that delimiter: comma, or tab?

String data values often have commas in them, so usually, the file also needs a “quoting character”, i.e. something that surrounds the string values so that the reader/interpreter of the file knows that anything found inside those quotes is all one value, regardless of any commas found within it.

But tabs are bigger.. aren’t they?  No, they’re still just 1 byte (or 2, in Unicode).  So that’s a non-argument.  Compatibility?  Every program that can read and automatically parse a .csv can just as easily do so with a .tab, even if Windows Explorer’s file icon & default-program handler would lead you to believe otherwise.

I recently encountered an issue with BCP (a SQL command-line utility for bulk copying data into / out of SQL server), where the csv was just being a pain in the arse. I tried a tab and all was well! I’m sure it was partially my fault but regardless, it was the path of least resistance.
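If you’re doing the load from the T-SQL side instead, BULK INSERT (BCP’s engine-side cousin) is equally happy with a tab; here’s a minimal sketch, with a made-up staging table and file path:

BULK INSERT dbo.StagingCustomers
FROM 'C:\temp\customers.tab'
WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n', FIRSTROW = 2);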

Battle 6: designers vs scripting

Wizards are usually good, but in this case, they’re lazy and bad for you…

This should be a no-brainer. There is absolutely no excuse for using the table designer or any other wizardy GUIs for database design and maintenance, unless you’re just learning the ropes. And even then, instead of pressing ‘OK’, use the ‘Script’ option to let SSMS generate a T-SQL script to perform whatever actions you just clicked-thru.  Now yes, admittedly those generated scripts are rarely a shining example of clean code, but they get the job done, even with some unnecessary filler and fluff.  Learn the critical bits and try to write the script yourself next time — and sure, use the GUI-to-script to double check your work, if you still need to.

Confession: I still use the GUI to create new SQL Agent Jobs. It’s not that I don’t know how to script it, it’s just that there are so many non-intuitive parameters to those msdb system-sp’s that I usually have to look them up, thereby spending the time I would have otherwise saved.
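For the record, the scripted version isn’t that scary once you’ve seen it; here’s a minimal sketch with a made-up job name, database, and command, leaving nearly every other parameter at its default:

USE msdb;
EXEC dbo.sp_add_job @job_name = N'Nightly - Update Stats';
EXEC dbo.sp_add_jobstep
    @job_name = N'Nightly - Update Stats',
    @step_name = N'Run sp_updatestats',
    @subsystem = N'TSQL',
    @database_name = N'MyAppDB',
    @command = N'EXEC sp_updatestats;';
EXEC dbo.sp_add_jobserver @job_name = N'Nightly - Update Stats';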

Bonus round: the pronunciation of “Data”

Call me “big Data” one more time…

Dah-tuh, or Day-tuh?  Or, for the 3 people in the world who can actually read those ridiculous pronunciation glyphs, /ˈdeɪtə/ or /ˈdætə/ ?  It’s a question as old as the industry itself… or maybe not.  Anecdotally, it seems like most data professionals, and people in related industries, tend to say “day-tuh”; while those in the media and generally less technical communities tend to say “dah-tuh”.  (Where the first syllable is the same vowel-sound as in “dad” or “cat”.)  This likely means that the latter is more popular, but the former is more industrially accepted.

In either case, it doesn’t really matter, because at the end of the day, we’re talking about the same thing.  So if some dogmatic DBA or pedantic PHB tries to correct your pronunciation, tell ’em to stop being so persnickety and get on with the task at hand!

Until next time…

Little Gotchas

If the caller of our stored-procedure literally passes NULL as the parameter value, we might have a problem!

A large part of most DBA/DBD’s daily job is writing & maintaining stored-procedures.  In SQL Server or other RDBMSs, stored-procs (“SP’s”, “procs”, however you like to abbreviate), serve as one of the building-blocks of your overlaying applications and day-to-day operations, including maintenance and automation.

This is a sprocket, not to be confused with a sproc, which is really just a proc.

Today, something struck me, and I was both shocked and comforted by the fact that this hadn’t really “come back to bite me in the arse“, as the proverbial saying goes.  But first, some context.

When we declare our proc signature with our parameters, we of course give them datatypes, and often default values — the parameter value that is assumed & used upon execution when the caller (operator, application, agent job, etc.) calls said proc without passing a value to that parameter.  So we create our proc like so:

CREATE PROCEDURE dbo.MyProc @MyParam BIT = 0 AS BEGIN SET NOCOUNT ON; END

So that users are allowed to call it like so, and assume some correct default behavior:

EXEC dbo.MyProc;

Coincidentally, that CREATE line is part of a typical “boilerplate” snippet or template which I use to create procs with “create if not exists, else alter” logic and a nice header-comment-block, which I’ll publish on my GitHub or Gist shortly, so I can show it here.  I know that MS recently added DROP IF EXISTS support to the language, but frankly, I like to keep procs intact if they exist because it’s easier not to have to remember/re-apply their metadata, such as security (grants/deny’s, certificate signatures, etc.) and extended properties.  Wake me up when they add true CREATE OR ALTER syntax!  Oh snap, they did… in 2016 SP1.  Anyway.

Now for the “catch”, the gotcha.

In programming/software-dev/IT-systems, “gotcha” has a specific meaning.  Thanks Wikipedia!

If the caller says exec dbo.MyProc, that’s great — they didn’t pass a parameter value, so the execution uses the default value (0) and off we go.  However, if the caller is so malicious as to literally pass NULL, we might have a problem!  Because let’s say that @MyParam value is used in a JOIN predicate or an IN (SELECT...) block, or even a CASE expression.  We won’t get an actual error; SQL Server is smart enough to jump over the syntactical variations required for equivalence checking (i.e. Column1 = 0 vs. Column1 is NULL) when it interprets/compiles the stored-procedure.  But, what we’re probably going to get is unexpected or unknown behavior.


It seemed worth re-using a classic…

And really, it all comes back to those nasty things called assumptions.  See, as the proc author, we’re assuming that our @MyParam will always be a 0 or 1, because it’s a BIT, and we gave it a default value, right?  Sure, maybe in another language, but this is T-SQL!  NULL is a separate and distinct thing, a valid value for any datatype, and must be accounted for and treated as such.  It can get especially dicey when you have a NOT IN (SELECT...) block that ends up as an empty-set, which suddenly morphs the outer query into a “without a WHERE clause” beast, and.. well, you can guess the rest.
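To make that concrete, here’s a minimal sketch of the “empty-set NOT IN” surprise, with hypothetical tables:

DECLARE @MyParam BIT = NULL;  --the caller literally passed NULL

SELECT c.CustomerID, c.Name
FROM dbo.Customer c
WHERE c.CustomerID NOT IN
(
    SELECT o.CustomerID
    FROM dbo.[Order] o
    WHERE o.IsArchived = @MyParam  --"= NULL" is never true, so the subquery comes back empty...
);
--...and NOT IN (empty set) is true for every row, so the outer query returns
--ALL customers: the "without a WHERE clause" beast.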

So what do we do about it?  Well, we can add a “check parameter values” block to the top of our procedure where we either throw an error, or set the NULL value back to a default.

Examples:

IF (@MyParam IS NULL) RAISERROR ('@MyParam cannot be NULL; try again.', 15, 1);
IF (@MyParam IS NULL) SET @MyParam = 0;

We could also work on the internal proc logic to account for NULL values and “eliminate the guesswork” (i.e. prevent unexpected behavior) by actually having logical branches/conditions which “do something” if the parameter is NULL.  Then, at least we know what our proc will do if that infamous caller does exec MyProc @MyParam = NULL.  Yay!  But that sounds like a lot of work.  Maybe.

Or maybe it’s worthwhile because you actually want NULL to be treated differently than all other parameter values, and then, hey, you’ve already spent the time on that logic, so you’re done!

But NULL does not NOT equal NULL, either!  Crap, somebody give me the red pill…

I hope this helps somebody else avoid the same assumptions.

Adventures in SQL Cluster Stacked Instances

Enough with the pitchforks; this is Test/QA. Here, I talk about 3 gotchas.

Today we’re going to talk about SQL Server instance stacking.

Blasphemy of the highest order!

Right, in production.  I’m talking about DEV/TEST environments.

Still, blasphemy!

Settle down.  If your server is set up correctly and has the resources you want it to have, and you divide your resources up per instance in a few very simple ways, it’s fine.  Enough with the pitchforks, the wailing and gnashing of teeth.

Stop scaring the pandas.

Okay, now that that’s out of the way…

Remember our cute little DEV server?  So, the way he’s set up is, he’s got 3 SQL Server instances on him, each with its own dedicated SSD, and another dedicated SSD just for tempdbs.  Ideally, we’d have a separate SSD for each instance’s tempdb, but sadly, motherboards with 3 M.2 or NVMe slots aren’t (weren’t?) in production at the time, at least not for desktop class systems.  But I digress.

This is called instance stacking.  And yes, it’s a big no-no in production.  Mostly because performance troubleshooting is a pain in the arse.  But also because it’s more difficult to divvy-up resources like RAM and I/O & network throughput channels than one would like.  But it’s super simple to set up — you simply run the SQL Server installer 3x, each time creating a unique instance name.  Then, at the end of it, your SQL instances are addressable by MachineName\InstanceName, e.g. SQLDEV\Foo, SQLDEV\Bar, etc

Now the time came to create a “QA” environment.  Which, like DEV, didn’t need to be very performant (that’s a made-up word that consultants like to use, but it’s generally accepted in our industry so we go with it), and so, since we had some hardware laying around from a recent “up-gration” (upgrade-migration… okay, now I’m being ridiculous), we said “let’s use that thing!”.  It was a 2-node cluster setup with shared DAS storage.  For the uninitiated, DAS is Direct Attached Storage, i.e. an array of disks that you can directly attach to 1 or more servers using whatever interconnect is available on the endpoints (usually SAS, serial-attached SCSI – which is one of the most fun acronyms to pronounce in IT: “scuzzy”).  DAS is not to be confused with a SAN, Storage Area Network, which is a super fancy storage array with performance tiers and snapshot technology and de-duplication and all that hotness.

NAS, SAN, DAS – 3 acronyms, 1 underlying purpose, 3 implementations.

The interesting thing with a cluster is, when you install SQL Server instances, you can’t actually use the same “MachineName” for the 3 different “InstanceName”s.  Because in a cluster, the former is actually the “VirtualServerName”, which must be unique per clustered instance, in order to properly configure cluster resources, storage pools, and networks.

The reason this is interesting, is that it contrasts with stacked instance setup on a standalone server (non-clustered).  So if you compared our DEV and QA setups side-by-side, it’s a bit odd-ball: instead of SQLDEV\Inst1, SQLDEV\Inst2, etc., we have instance names like SQLQA1\Inst1, SQLQA2\Inst2, etc.  That makes the ol’ “find and replace” in config files a bit harder.  But, at the end of the day, it’s all just names!

One of the handiest tools in an engineer’s toolbox!

Another interesting “gotcha” revolves around SQL 2008R2, which I know shouldn’t be on the short-list of versions to spin up, but unfortunately, a legacy ERP system demands it.  Well, it only happened to me with the 2008R2 instance installation, not the 2016’s, but that’s not to say it couldn’t happen with others.  Anyway, after installation, SQL Agent was not working; it wasn’t coming up as a cluster resource.  Basically, exactly what was outlined in this timely & detailed article at mssqltips.  I won’t restate the fix instructions here, just give it a read!  I do want to clarify something though.

In part of the fix, we use the handy-dandy PowerShell cmdlet Add-ClusterResourceDependency .  In its basic form, it requires 2 arguments, Resource and Provider.  To someone who’s not a cluster expert, this terminology might be a bit confusing.  Resource in this case is the SQL Server Agent, while Provider is SQL Server itself.  But we’re adding a Dependency, right?  Which depends on which?  Well, we know that Agent depends on the engine, so, Resource depends on Provider.  Yes, I know, that’s what the article tells you to do — I just like to understand why.

Fry shares my curiosity…

Finally, there’s the question of divvying-up resources to the stacked clustered instances.  Now, in a standard cluster, you’ve got your active node and your passive node.  But if we’re stacking instances, we might as well split the SQL instances up and take advantage of the compute resources on both nodes.  (Storage is still shared; this is a cluster, after all!)  The CPUs are no problem — however many instances are stacked on a node, they’ll share the CPU cores pretty cooperatively.  Memory is a bit of a different story.  We want to take advantage of all the available RAM in the cluster, but…

As you know, you can configure each SQL instance to use a set amount of max. server memory.  So let’s say each cluster node has 32GB RAM, and we’re stacking 4 SQL instances total (to keep the math easy!).  If we split them up among the nodes at 2 each, each instance can use 16GB.  But if for some reason a node goes down, and all 4 instances move to 1 node, now they’re fighting for that 32GB!  So we should reduce their max-memory settings to 8GB each, instead of 16.  But we don’t want to do this manually!  Fortunately Aaron Bertrand has an excellent blog post on the subject, with some useful ideas about how to do this dynamically & automatically.  The only issue I have with it is that it requires the linked-servers to use a highly privileged account (sysadmin or maybe serveradmin role) to be able to set that max-server-memory setting.  But wait, remember what we said at the beginning?  This ain’t production!  Who cares about security?  (That’s facetious, sort of — in reality, yes, we don’t care as much about security in lower environments, but we should still care a little!)
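The setting itself is the easy part; a minimal sketch of capping one instance at 8GB looks like this (you’d run it per instance, or let the dynamic approach from Aaron’s post handle it for you):

EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'max server memory (MB)', 8192;
RECONFIGURE;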

We all do silly things from time to time..

That concludes this week’s adventure!  Thanks for reading.

DBA Holy Wars

On a lighter note than usual, I thought it was time I weighed in on some of the long standing “programmer holy wars”, but with a little DBA-twist (like a twist of lime, only less delicious).  Like any good holy war, this will be full of posturing, pontificating, and political correctness.  And I probably won’t even commit to a particular side on some issues.  But hey, isn’t that the point?

Battle 1: Tabs vs. Spaces

OH SHI…
Text editors and IDEs have long been mature enough to handle “smart tabs” and preference-based tab size.  However, you will occasionally have to copy-paste code into a non-code-oriented environment, such as an email or a document, where of course the tab size is based on inches rather than spaces in a monospace font.  I will admit in those rare instances, tabs are annoying.  But what is more annoying is the inconsistency you can get when spaces are used incorrectly, especially in the midst of lines in a sad attempt to do some kind of vertical alignment.  Plus, if you happen to have a different spacing-size preference than the original code author, you’re now battling that visual discrepancy as you read & maintain said code.

So I prefer tabs.  But I won’t fight my team on it if everybody else prefers spaces — that’s what those settings in the editor/IDE are there for!  I will happily conform with the best of them.  A quick Google says I’m in the minority anyway — which I’m OK with.

Battle 2: The Case for Casing

Caps Lock: cruise control for cool. (The original, if somewhat dated.)
Certain languages (COBOL, SQL) have a historical bent toward ALLCAPS for their keywords and language constructs.  Some argue that this is archaic, outmoded, etc.  I don’t mind it, working primarily with SQL, but in almost all other languages (C#, Python, JavaScript), I think it makes sense to follow the established conventions, and modern conventions never favor caps.  As I transitioned from C# to SQL, I actually wrote my scripts and stored-procs primarily in lower case for the longest time.  And then I came into an environment where RedGate’s SQL Prompt was in heavy use, and since its default “auto-format” settings are in-line with the SQL language “standard” (however old and dated it may be), it started YELLING all the keywords at me.. and like most people, I just accepted it, eventually letting it become my own “default” style.  (SQL Prompt is a fantastic tool, don’t get me wrong.  I absolutely love it, but its default formatting settings never agreed with me — then again, nor do anybody else’s, as we already discussed!)

But that’s not really what this battle is usually about.  Most often, it’s about your names, i.e. the identifiers for objects/methods/variables/procedures/APIs/etc. that your team and your developers have to come up with on a constant basis.  And usually it comes down to camelCase, TitleCase (which are often incorrectly used interchangeably!  and is apparently better known as PascalCase, which I just learned today, or possibly re-learned after several years), or lower_case_with_underscores (which, in another learning moment, I discovered is named snake_case!  How cool is that?).  Rarely, if ever, do people argue for ALLCAPS in these areas — it just feels.. obnoxious.

Yelling doesn’t always get you what you want…
As with any programmer-y topic, you can dive down the rabbit-hole and dissect layer upon layer of nuance in this battle until you’ve lost all semblance of productivity.  Because casing is, in some languages, important; while in others it’s simply convention-based, dependent on the abstraction level or family of things you’re talking about.  For example, C# Class names are TitleCase, and so typically are Methods, while object instances are usually camelCase; public members can be TitleCase or camelCase, and private members can be _underscore_led, or whatever flavors for each that your boiler-plate/template system prefers.  Scoped variableNames are most often camel’d as well, while global constants are typically CAPS_WITH_UNDERSCORES.  And god help you if you ask a team of more than 3 people what their dependency packages’ names should look like.

Shamelessly borrowed from Adam Prescott’s blog, which you should definitely go read.
So in this battle, I have to play Switzerland.  I’m not vehemently opposed to any particular flavor of casing, finding it best to work within the conventions of the language and tool-set at hand.

Side-battle: Spacing in Names

That said, I can’t stand names/identifiers with actual white space in them, but that’s a somewhat different battle.  Most languages don’t even allow that, but most RDBMSs will happily accept your ridiculous My Cool Database and its resident Silly Tables and Happy Column 1/2/etc. as long as you properly “quote” them (surround them with [square-brackets] or `backticks`, depending on the SQL flavor).  If you submit that kind of nonsense to me, I will find you, and I will slap you with a large trout.

Particularly offensive names may warrant a double trout slap.

Battle 3: ORM vs Stored-Procs (vs Linq?)

This is that little twist-of-DBA as promised.  I recently read an interesting post related to this topic, and essentially the point was this: Developers have “won” (won what? I thought we were all on the same side!), the ORM is here to stay, and as DBAs/DBDevs, we (you/I) need to build up our understanding of them so that we A) know them even better than our devs, and B) can troubleshoot performance issues with them.

I think there’s some truth to that, and some necessary context as well.  Ideally, yes, I would be an ORM expert on whatever 1 or 2 specific frameworks my colleagues are using (Entity Framework, most likely), and any time there was a potential performance challenge with an app-to-database call, I’d be able to parachute-in and sprinkle some magic dust and make it all better.  But I’m also the one DBA (out of approx. 1.3 total), serving 4 teams of 3-6 devs each, so in the immortal words of meme-dom:

Ain’t nobody got time for that!

because sometimes old-fashioned things are funny too…
Now I’m not making excuses.  All I’m saying is, the burden of understanding is on more than just one team member or job-role.  If your dev team is adopting an ORM, said devs need to learn how it works too — at least enough to help with basic performance troubleshooting.  Even if it’s just the ability to extract, from a debug session, the actual T-SQL code that’s being sent to the server, and give me a sample query to analyze for performance bottlenecks.

Let’s step back a bit.  It’s all about using the right tool for the job, yes?  ORMs are meant for basic CRuD operations and simple data access patterns, right?  So why try to build complex business logic into them?  Because, like it or not, teams do build complex business logic into the data layer — despite our protests and soapbox sermons to not do it.  And because the vast majority of applications we’re dealing with are not greenfield.  Furthermore, ORMs tend to work best when the data model is well-defined, or the database is modeled well (well-modeled?).  And again, we don’t all get to work with unicorns in utopia.

Put it this way: If you want an efficient, performant module of data-layer business-logic against your SQL database, it’s likely going to be a stored procedure carefully crafted by a DBA/DBDev.  Could you achieve the same results from the app layer, using Linq and/or some mix of ORM and code?  Probably.  Do you have the time and patience to do so?  Maybe not.

If I don’t survive this… tell my wife, “hello”.
So once again, I’m Switzerland.  Well, preferably a more pragmatic version — what country would that be?  Norway?  Anyway.  Use the methodology that’s the best compromise between “right tool for the job”, “optimized developer productivity”, and “easiest to troubleshoot”.  It’s a tough call, but that’s why we get paid.

Until next time!

Nested Set++ Wrap-Up

So we’ve built our Cat Tree. But how do we know it’s all correct?

One more time, with feeling!  Not that this dead horse needs another beating, but I did promise…

So we’ve built our Cat Tree.  We’ve written our CrUD ops, our “move” op, and even some readers.  But how do we know it’s all correct?  We can select from our Cats view, of course.  But we want to be really sure.  Plus there’s that pesky SwapCatNode method.

Easy one first.  SwapCatNode can mean swapping sibling order, or switching a parent with a child or grandchild, or swapping nodes that are in completely different places in the tree & not related at all!  This is the least logical operation, if you think about a proper hierarchy, but it turns out to be necessary sometimes.  We’re just swapping the nodes’ position values & ParentIDs with each other, and updating ParentIDs on their children to each other’s IDs.

I really don’t even need to draw this one… but because I needed a header image, I did.  Anyway, just get the rows with the given target IDs, swap the PLeft, PRight, Depth, and ParentID values, and call it a day.

Now the complex.  To validate that our tree is properly structured, the following statements need to be true:

  1. Each node’s Right value is greater than its Left.
  2. More to the point, each node’s Right value is greater than all of its ancestors’ Left values.
  3. Similarly, each node’s Left value is less than all of its descendants’ Left values (and Right values, obviously!)
  4. Leaf nodes have no gaps between Left & Right: Right = Left + 1
  5. Depth is easy to verify because we already wrote the rCTE to calculate it!
  6. And of course, no orphans – all ParentIDs lead to an actual parent node, except of course if they’re NULL (root nodes).

We can either go thru lots of logical checks in different queries, or we can try building a mock tree out of the base adjacency-list structure (ParentIDs) and compare values.  The latter will only help us with #1-5; the orphans problem is a different animal, but it’s also not part of the model per-se, so it’s actually good to separate that check from the rest.  (And it’s really simple – use a not exists query on ParentIDs and presto, orphans checked!)
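For the curious, here’s a minimal sketch of a few of those checks against the nsm.Cat table (rules #1 and #4, plus the orphan check); any rows returned mean something’s broken, and the other rules follow the same pattern:

--Rule 1: every node's Right must be greater than its Left.
SELECT CatID FROM nsm.Cat WHERE PRight <= PLeft;

--Rule 4: leaf nodes (no children) should have Right = Left + 1.
SELECT c.CatID
FROM nsm.Cat c
WHERE NOT EXISTS (SELECT 1 FROM nsm.Cat ch WHERE ch.ParentID = c.CatID)
  AND c.PRight <> c.PLeft + 1;

--Orphan check: every non-NULL ParentID must point at a real node.
SELECT c.CatID
FROM nsm.Cat c
WHERE c.ParentID IS NOT NULL
  AND NOT EXISTS (SELECT 1 FROM nsm.Cat p WHERE p.CatID = c.ParentID);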

Building a mock-tree, or a “position re-builder”, will come in handy for another reason:  Let’s say we need to completely revamp a subtree, i.e. insert & update a bunch of nodes at once because somebody royally screwed up that branch.  And we’ve got our shiny fixed data, 100’s of rows, ready to go, if only those damn triggers weren’t there, preventing us from doing bulk operations!  What we’d really like to do is, knowing the starting ParentID, just insert all our new nodes with PLeft values in sequence to each other (and not care about the rest of the tree); and/or, update a few sets or families of nodes to massively re-order them, without having to call the Swap routine one-at-a-time ad-nauseum.  We also don’t want to care about figuring out correct PRight & Depth values.  After that’s all done, our new subtree will have “bad” position values, so we need to rely on some other routine to fix them for us, so that the tree can again be well-formed and things can go back to normal.

Simon & Tigger got re-ordered. Then we bulk-added 3 new Cats under Mittens and didn’t know what their position values would be, so we let the rebuilder take care of it.

In our RebuildCatTree routine, we actually need to re-number all nodes to the right & above our “bulk-inserted” subtree, just in case we’ve caused things to move.  And since we’ve re-ordered some siblings elsewhere, it turns out to be easiest, in practice, to re-number the whole tree.  This is where our fair-weather friend recursion comes in — and not just another rCTE, but real stored-procedure recursion.  This can get dicey; SQL only supports a certain # of recursion levels, and it can really eat up those CPU cycles & RAM buffers.  So this should be done rarely, and preferably during a time where the tree is not under heavy usage.

The code samples are now available on my GitHub page.  Comments abound!

I hope you’ve enjoyed this little mini-series.  And now, I promise to move on to new topics & rantings of various nature!  Thanks for reading.

~Fin~

Update:

I’d like to point future readers at two very informative articles for those interested in deep-diving down the hierarchical rabbit-hole: Aaron Bertrand, and Jeff Moden.  There are many more tweaks and enhancements that can be made to the “classical” Nested Set model, which those lucky Devs/DBAs who are in a position to actually [re]implement their hierarchies will want to read about and take advantage of.

The Nested Set Model++

This time we talk about adding a Depth field, and good ol’ CrUD ops – Create, Update, Delete.

Since my first post on this topic got a lot of attention and traction, I felt it appropriate to expand on the topic a bit, even if it’s been largely covered by other bloggers in the past.  I’ve also found it very useful to have a “depth” field, which isn’t canonically part of the model (hence the “++” in the title!), but is quite handy not only for display purposes (while you’re querying & testing the thing), but also for making certain “get” ops easier.  Sure, it adds a wee bit more to structural maintenance, but since that’s already the most complicated part of the model anyway, it’s hardly worth a second thought.  So let’s dive in!

The big topic last time was this operation of “move a subtree” — of course, sometimes you’re just moving one node, but only if it’s a leaf; otherwise you’re moving a node and all its descendants, so I’ve kept the procedure name MoveCatSubtree intact.  This time we’ll talk about good ol’ CrUD ops – Create, Update, Delete.  In my implementation, I chose to handle these with table triggers.  Some would argue in favor of stored-procs, and while that would seem “more consistent” with the precedent set, I’d counter with 2 points:

  1. To be really fool-proof, you’ll need to prevent ungoverned inserts/updates/deletes anyway; you could either do this with GRANT/DENY permissions, or triggers.  Permissions would be more complex because you’d still need your users to be able to exec the CrUD procs, so you’d end up using some convoluted security mechanisms that can be tricky to maintain over time.
  2. With triggers, we can allow the consumers of the data (apps, users) to continue to use “plain-ol’-TSQL” to access and manipulate the data, instead of having to remember stored-proc names and hunt for documentation on them.  (The exception being, of course, MoveCatSubtree, which, honestly, could be integrated into the insert trigger, but I’ll leave that as an exercise to the reader!)

Again, yes, we could easily do the same implementations in stored-proc form, and you’re welcome to fork my GitHub repo if you feel like exploring that.

Let’s outline the steps and draw some pictures.

1. Insert: Make a hole!

When we INSERT a node, we want to specify its parent and a name, and let the triggers do the rest!  We place it at the right of its siblings-to-be, and update the position values of all nodes to the right so that everything stays kosher.  This should sound familiar — it’s essentially that “make a gap” part of the subtree-move op.  In terms of depth, we just +1 to the parent’s.
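Here’s a heavily simplified sketch of that insert trigger, assuming single-row inserts and a non-NULL parent (the real version on GitHub handles the edge cases):

CREATE TRIGGER nsm.trCatInsert ON nsm.Cat
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @CatID INT, @ParentID INT;
    SELECT @CatID = CatID, @ParentID = ParentID FROM inserted;

    DECLARE @ParentRight INT, @ParentDepth INT;
    SELECT @ParentRight = PRight, @ParentDepth = Depth
    FROM nsm.Cat WHERE CatID = @ParentID;

    --make a 2-wide gap at the right edge of the new parent's children...
    UPDATE nsm.Cat
    SET PLeft  = CASE WHEN PLeft  >  @ParentRight THEN PLeft  + 2 ELSE PLeft  END
      , PRight = CASE WHEN PRight >= @ParentRight THEN PRight + 2 ELSE PRight END
    WHERE PRight >= @ParentRight
      AND CatID <> @CatID;

    --...then drop the new node into that gap, one level below its parent.
    UPDATE nsm.Cat
    SET PLeft = @ParentRight, PRight = @ParentRight + 1, Depth = @ParentDepth + 1
    WHERE CatID = @CatID;
END;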

Stripes is a breeder; Gadget comes in and makes Fluffy & children move over.

Also, for some reason, our cats reproduce asexually…

 

2. Delete: Think of the children!

Similarly, to DELETE a node, we want to “close the gap” left by said deleted node.  But what of the children?  We don’t want to leave any orphans behind!  So we “promote” the children of our deleted node to the level (depth) of their parent, sandwiching them in between the deleted node’s siblings (aka their former aunts/uncles!).  This is easier than it sounds.
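Here’s a minimal sketch of the core DELETE logic (the trigger form, again, lives on GitHub); the @-variable values are placeholders for the deleted node’s old values, which a trigger would pull from the deleted pseudo-table:

--placeholders: the deleted node's ID, its parent, and its old position values
DECLARE @CatID INT = 5, @OldParentID INT = 2, @OldLeft INT = 6, @OldRight INT = 11;

--1) re-parent the deleted node's direct children:
UPDATE nsm.Cat SET ParentID = @OldParentID WHERE ParentID = @CatID;

--2) promote the whole former subtree up & left by one:
UPDATE nsm.Cat
SET Depth = Depth - 1, PLeft = PLeft - 1, PRight = PRight - 1
WHERE PLeft > @OldLeft AND PRight < @OldRight;

--3) close the remaining 2-wide gap for everything to the right & above:
UPDATE nsm.Cat
SET PLeft  = CASE WHEN PLeft  > @OldRight THEN PLeft  - 2 ELSE PLeft  END
  , PRight = CASE WHEN PRight > @OldRight THEN PRight - 2 ELSE PRight END
WHERE PRight > @OldRight;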

He killed Fluffy!

Fluffy is survived by his children, who are now for some reason his siblings, and are very confused by their sudden increase in age & status.

3. Update: Rename; everything else is encapsulated.

Finally, we only allow UPDATEs on the Name, because everything else (position values, depth, parent) is structural, and encapsulated by our tree maintenance logic.  Moving a node or subtree?  MoveCatSubtree.  Swapping positions with another node?  SwapCatNode (TBD!).

4. Depth: Set it once, & encapsulate it!

Depth is pretty simple to add if you’ve already got a tree full of data.  We can use a recursive common table expression, or “rCTE”.  While normally these are frown-worthy (remember, recursion is not SQL’s strong suit), we’re only using it one time to populate an existing data-set, so we can keep on smiling.

;WITH CatTree AS
(
    SELECT CatID, ParentID, Name, PLeft, PRight, Depth = 0
    FROM nsm.Cat
    WHERE ParentID IS NULL
  UNION ALL
    SELECT cat.CatID, cat.ParentID, cat.Name
        , cat.PLeft, cat.PRight, Depth = tree.Depth + 1
    FROM CatTree tree
    JOIN nsm.Cat cat
        ON cat.ParentID = tree.CatID
)
UPDATE cat SET cat.Depth = CatTree.Depth
FROM CatTree
JOIN nsm.Cat cat
    ON cat.CatID = CatTree.CatID

The last order of business (for now) is to add Depth support to our MoveCatSubtree method.  As illustrated below, we have to move the subtree “up” or “down” in Depth depending on its new parent’s position relative to its old position.  The details are, of course, in the GitHub repo, but here’s a quick snippet of what that looks like: NodeNewDepth = /*NodeCurrent*/Depth + (@NewParentDepth - @SubtreeOldDepth) + 1  (where @SubtreeOldDepth is the depth of the top node of the moving subtree.)

Move Jack to under Mittens; I won’t repeat the Left/Right logic, just note the Depth logic.

 

In a future little addendum, I’ll briefly go over the “get” queries and that TBD SwapCatNode method.  For now, enjoy the cats (again)!  Thanks for sticking around, I know it’s been a few more weeks than normal.

PS: A big thank-you to the dudes in the CodingBlocks #blogging Slack channel for their encouragement and motivation to get this done!  You guys rock.  Check out their blogs for some terrific content: http://dotnetcore.gaprogman.com/ , http://www.codeshare.co.uk/ , http://thereactionary.net/ .

Update:

I’d like to point future readers at two very informative articles for those interested in deep-diving down the hierarchical rabbit-hole: Aaron Bertrand, and Jeff Moden.  There are many more tweaks and enhancements that can be made to the “classical” Nested Set model, which those lucky Devs/DBAs who are in a position to actually [re]implement their hierarchies will want to read about and take advantage of.

The Nested Set Model

The #1 rule of the Nested Set Model is: FAST READs. The #2 rule of the Nested Set Model is: see #1

There are probably definitely several articles out there which cover the SQL implementation of the Nested Set Model, aka “modified preorder tree traversal” (which is more the name of the algorithm by which you traverse the tree, rather than the structure itself).  But I found it interesting enough, and more importantly, applicable enough to my job experience, that I feel it deserves some treatment.  Not the basic “how to”, but more an example of a particular operation and a specific pitfall to avoid. (Jump straight to the example diagrams.)

Now, we’re not going to debate about whether this model is “the best” representation of hierarchical data in an RDBMS (some argue that Closure Tables, aka “Ancestor Tables“, or some kind of hybrid approach is better, and I’d probably agree).  The fact is, sometimes (read: almost always) as a DBA/DBDev, you’re “stuck with” an existing database in a legacy application environment that you pretty much can’t change — or if you can, changes need to be small, incremental, and non-disruptive.

Okay, with that disclaimer out of the way, let’s dive in.  First things first:

The #1 rule of implementing the Nested Set Model is: FAST READs.

I can’t stress that enough.  Fast SELECTs.  Everything else pales in comparison.  In other words, we don’t care how long and painful and slow write operations are against this table (updates, inserts, deletes), as long as our SELECTs remain super speedy.  If that is not your use-case, consider a different model.

The #2 rule of the Nested Set Model is: see #1

Moving on…

The #3 rule is: encapsulate tree operations to maintain its integrity & structure.

Put another way, the #3 rule is that you should always operate on the tree (CrUD ops) using stored-procedures and/or triggers that encapsulate all the nitty-gritty details of maintaining the correct position values during said insert/update/delete operations.  Of course, somebody is responsible for writing those stored-procs.  Any volunteers?  Easy now, don’t raise your hands all at once!  Generally, this responsibility falls to the DBA(s) or DBDev(s).

The problem at-hand, in my current situation, was that of “moving a sub-tree”, i.e. taking a node and all its descendants, and moving it to place it under another “parent” node.  In some models, and/or in some languages, this is a simple recursive operation.  However, SQL is not spectacular at recursion — after all, we’re working in a relational engine — so let’s try to play to its strengths:

namely, SET-BASED operations!

A previous DBDev had written a stored-proc for just such an operation.  However, as (somewhat) expected, it was horribly slow, to the tune of hours of run-time.  This is not acceptable, even given the #1 rule stated above.

Well it turns out that most of it was pretty efficient, but the last step, in which they attempted to “fix” the left/right values in the entire table “just to make sure we didn’t leave any gaps“, was, frankly, quite silly.  Because the only “gaps” you create are created by the previous steps in the proc, and you know exactly how big that gap is (the width of the subtree you’re moving), and where it is, so you should be able to target that specific area of the tree and close the gap more intelligently, using some simple math. (addition and subtraction — the simplest math there is!)

Doing that improved the performance of the whole proc by a factor of 10.  That’s huge.  Or, “yuuuuge“.

So let’s get specific.  As you’ll see from my diagrams, the model actually is a hybrid, combining an Adjacency List (each record knows its “parent”) with a Nested Set (each record has a “left” & “right” position value).  We do this for two big reasons.  First, having the parent relationship along with the position values makes all that nasty book-keeping (rule #3) a bit easier to manage (and to check our work).  And second, because, conveniently, we can store the data from both models in one table.

On to the examples!

First, we have our tree of Cats.

Or, as a coincidentally cute table alias, CatTree

Now, we want to move Jack & his children to become descendants of Mittens (Jack being the child, Smush & Smash being grandchildren).  So we start by “making a gap” of the subtree’s “width” (6, the distance between Jack’s PLeft and PRight inclusive of end-points).  We add that amount to all PRight values >= Mittens’ original PRight,  and add it to all PLeft values > Mittens’ PRight — see the blue #s in diagram below, and code here:

UPDATE Cats
SET PLeft = (CASE WHEN PLeft > @NewParentRight
             THEN PLeft + @SubtreeSize
             ELSE PLeft END)
  , PRight = (CASE WHEN PRight >= @NewParentRight
             THEN PRight + @SubtreeSize
             ELSE PRight END)
WHERE PRight >= @NewParentRight
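(In case you’re wondering where those @-variables come from, here’s a quick sketch, assuming the proc’s inputs are @MoveCatID and @NewParentID and that the key column is CatID; the hard-coded values are just placeholders.)

DECLARE @MoveCatID INT = 4, @NewParentID INT = 7;  --placeholders
DECLARE @SubtreeSize INT, @SubtreeOldRight INT, @NewParentRight INT;

SELECT @SubtreeSize = PRight - PLeft + 1
     , @SubtreeOldRight = PRight
FROM Cats
WHERE CatID = @MoveCatID;

SELECT @NewParentRight = PRight
FROM Cats
WHERE CatID = @NewParentID;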

The red values haven’t changed (yet) but are now wrong, so we’ll have to fix them next.  And of course the green values are the moved subtree’s new positions based on the new parent’s (Mittens) PLeft.

Jack is now Mittens’ child.

Finally, now that we’ve moved Jack & his children under Mittens, we need to “close the gaps” that we created at first, to make sure that the tree’s position values remain contiguous.  This isn’t as difficult as it sounds: if we’ve stored Jack’s original PRight value (10), we can use that as a cutoff to subtract the subtree width from higher position values and intelligently (and quickly) close the gaps we created before.  Again, code & diagram:

--Notice this looks very similar to the previous
--code snippet! (We're basically doing the reverse)
UPDATE Cats
SET PLeft = (CASE WHEN PLeft > @SubtreeOldRight
             THEN PLeft - @SubtreeSize
             ELSE PLeft END)
  , PRight = (CASE WHEN PRight >= @SubtreeOldRight
             THEN PRight - @SubtreeSize
             ELSE PRight END)
WHERE PRight >= @SubtreeOldRight
Red values indicate “closing the gap” that was created by removing the subtree of Jack. Blue values indicate the incidental gap closures for the rest of the tree (above and right). Green values, you’ll notice, are “reverted” (i.e. same as they were originally).

SQL-wise, this should translate pretty well.  I’ve posted the setup and stored-proc scripts to GitHub, so the distinguishing reader can review and offer feedback.  In theory, there’s probably a way to exclude the green reverted values from the first pass operation (gap-making) so that we don’t have to revert them (at gap-closing), but again, since we’re doing SQL set-based operations, it seems hardly worth the effort — i.e. the potential speed gain would be outweighed by the logical/maintenance complexity.

 

So what’s the lesson here?  Well hopefully, if you’re “stuck with” a SQL DB with a Nested Set Model table containing a hierarchical tree of data, you don’t have to completely re-invent the wheel and write your CrUD ops from scratch.  But if your predecessors didn’t plan for certain kinds of operations, and this “move a subtree to a new parent” happens to be one of those, this should help you (re)implement it efficiently.

I’d love to get some feedback on this.  Let me know if I’ve missed anything conceptually, if there are better ways or methods to doing any of this, or any other tips & tricks that folks might have for dealing with such data.  Leave me a comment!

[footnote 1]
The root of the problem, in this case, was simply taking the code from a slideshare presentation and copy-pasting it into the routine without analyzing its effectiveness and efficiency.  It proposed re-calculating the position values after a move, across the entire tree, by using a triple-cartesian-product (or cross-join) to “get the count of nodes to the left/right of each node” for every node, which should sound dirty even as you say it silently in your head, let alone attempt to write it in query form!

[footnote 2]
There’s a 3rd model that we could consider storing in the same table, called “Enumerated Path” or “Materialized Path” or “Breadcrumbs”, which may look good on paper and to your human eyeballs, but breaks down spectacularly when you start talking performance and scale — but to be fair, so do most of these models, eventually, in one way or another, which is why we’ve invented fantastic alternative technologies to address these problems… and frankly, if you’re using all 3 models at once, you’re #DoingItWrong, creating a veritable maintenance nightmare for yourself and everyone around you.  Note that the elusive 4th model, the Ancestor Table, requires (as the name would imply) another table — not an argument for or against anything, just an observation.

PS: Happy 2017!

Dates, Times, and Datetimes, Oh My!

There’s a tool for every job. Just stop abusing the tool!

This MSDN page,  CAST and CONVERT (T-SQL), specifically the section on DATETIME conversion formats, is easily one of my most frequently visited links.

It really shouldn’t be.

SQL Server is very good at storing and manipulating Date/Time values.  There are dedicated data-types for all flavors — DATETIME, DATE, TIME, the newer DATETIME2, and the less common SMALLDATETIME and DATETIMEOFFSET.  Then there are the functions & operators that let you do all sorts of fun stuff with them — DATEDIFF, DATEADD, DATEPART, GETDATE, ISDATE, and even some newer ones like EOMONTH and DATEFROMPARTS.  These are really powerful tools in the hands of a DB-Developer or DBA.
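A quick taste of a few of those, just to show how much you can get done without ever touching a string (the actual values obviously depend on when you run it):

SELECT GETDATE()                              AS [RightNow]
     , DATEADD(DAY, -30, GETDATE())           AS [ThirtyDaysAgo]
     , DATEDIFF(DAY, '2017-01-01', GETDATE()) AS [DaysSoFarThisYear]
     , EOMONTH(GETDATE())                     AS [EndOfThisMonth]
     , DATEFROMPARTS(2017, 6, 7)              AS [BuiltFromParts];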

i just want to use dates
Is that so much to ask?  Courtesy of this guy’s blog, which sounds like a great place to learn iOS programming if one was interested in such things…

But you know what SQL Server is not so great at?  Reading your mind.  Oh, wait, that goes for most applications & systems.  Let me rephrase.  SQL is not the best platform for knowing how end-users will want their Dates/Times displayed in a contextually/culturally sensitive manner, and executing said preferences.

That’s what we have UX/UI layers for!

While it’s true that the underlying data store (SQL, in this case) needs to be aware of localization & globalization requirements, it shouldn’t be asked to serve-up, say, a Sales-Order-Date in 5 different flavors just because Report X wants it in typical USA fashion (mm/dd/yyyy), User B wants it in “long-form” because they’re reading it like prose (“Jan 13 2016 08:32pm”) in an email, and SSIS Package FooBar needs it in “ISO” format (yyyymmdd) because it’s using the date in a filename!  Actually, of those 3 examples, the latter is the most “legit” — or at least, the most justifiable use-case.  The other two should have been handled by the overlaying application or middleware — SSRS in the first case, or whatever automation app produced User B’s email in the second.
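To illustrate that “most legit” case, here’s a minimal sketch of building a date-stamped filename, where style 112 gives the compact ISO yyyymmdd form (the filename itself is made up, of course):

DECLARE @FileName VARCHAR(100) =
    'SalesExtract_' + CONVERT(VARCHAR(8), GETDATE(), 112) + '.csv';
SELECT @FileName;  --e.g. SalesExtract_20170607.csv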

i can has string to store dates
Because once wasn’t enough…

But surely there’s a good reason that the T-SQL gods included the CAST/CONVERT functionality with all those special date-format arguments, right?  Obviously.  There are always valid use-cases; or, more proverbially, “There’s a tool for every job.”  Just stop abusing the tool!

Abused tools can fail you…

A typical DBA or DB-Dev is often asked to write ad-hoc queries or build one-off reports to meet some business request, and he/she often doesn’t have the time or the resources to offload the nitty-gritty formatting details to the appropriate layer of abstraction.  So yes, that’s why these conversion options exist (among other reasons).  And there’s nothing wrong with that, in and of itself.  But like anything, if it becomes a bad habit and a hindrance to overall productivity, it’s time to take a step back and re-examine the situation.  Ask the hard questions, like “Why am I being asked to create these one-off reports all the time, which sound so similar to each other, yet inevitably are always a bit different?”, or “Have my business users developed unrealistic expectations about what can/should be done by me vs. by other teams/contributors in the organization?”

This isn’t about passing the buck — I’ve already established that’s not my style.  It’s about working smarter, bringing more value to the organization by leveraging better technologies and techniques than obsolete habits and old-guard mentality would otherwise allow.

“Why are you making me write these horrible queries!?” .. “Because you’re the DBA!” .. “Fine, but give me the resources to automate this for the future.”

So, dear reader, take the time to learn about SQL’s Date/Time types & functions, including the myriad formatting options of CONVERT.  But do yourself a favor and consider, when you find yourself using & abusing them, whether the task at-hand is truly best suited for the database layer, or if it really belongs somewhere else.

Thanks for reading!