Finding Trigger Dependencies

Preamble

In SQL, TRIGGERs are often a subject of contention.  Some data professionals have strong opinions about them, especially on the “they’re bad!” side.  But like anything, they’re a specific kind of tool that has its place in specific problems.  One of the common scenarios for their usage is what we call an “Audit trail” — a running log of any & all changes made to a table or data-set (or even a database schema, as RedGate DLM Dashboard employs).

Yes, there are newer product features built around this very requirement, such as Change Data Capture (CDC) and Temporal Tables, but they require particular versions and editions of the product, and not every environment is going to be there.  So we need something native and compatible with the lowest common/supportable denominator.

Like most areas of the database environment, triggers are often poorly documented.  So what happens when we need to review all the triggers in a database and show what tables they’re on & what columns they’re “tracking” (sticking with the audit-trail example)?  Or, more generally, you could say, we want to find all table & column dependencies of our triggers.  Well sure, we could use the SSMS Object Explorer GUI — find the table, the triggers, right-click and “Show Dependencies”.. but that’s not good enough, for two reasons: A) it doesn’t get down to the column level, and B) it’s not script-able.

System catalogs and DMVs to the rescue!

dmv-line-of-people
No, not THAT DMV…

Disclaimer: this is heavy on MS Docs links.  There was a StackOverflow Q/A that helped point me in that direction, but sadly I lost the tab and didn’t save the link.

The Goals

In this script, I mostly focus on the UPDATE triggers, because they’re the most heavy-handed in terms of auditing.  With a typical INSERT or DELETE, the triggered audit action is basically “Hey, a row was inserted/deleted, here’s its primary key” — and sure, you can get fancier and log what all the values were in the entire row — but those are fairly straightforward.  It’s basically “all or nothing”.  In contrast, with an UPDATE audit, we want to know just what was updated, and from what (old value) to what (new value).

So the goal of documenting the dependencies is to see which columns we care about — because, inevitably, there are some columns that we either “don’t care about” for auditing, or some that just plain don’t change (by their nature — they’re an identity or a primary key, or controlled via other means).  In turn, this helps us understand how to reproduce these audit mechanisms when a better technology comes along — such as CDC or temporal tables, as alluded to earlier.

cdc-centers-for-disease-control
No, not THAT CDC… yeesh.

The Walkthru

We start with the system catalog views sys.tables and sys.triggers.  This establishes our primary relationship — a trigger ‘lives’ on (is attached to) a single table.  Then we immediately step into murky water.  A trigger, like other programmable objects, has a definition script, which you can view in a myriad of ways — Object Explorer, sys.sp_helptext, sys.sql_modules, sys.syscomments (deprecated), or OBJECT_DEFINITION(OBJECT_ID).  So if we really wanted to go spelunking, we could just read the trigger’s definition and see what columns it deals with.  But after about 2 or 3 of those, our eyes are going to glaze over and our fingers will start to twitch.  We need something better, something that can handle hundreds of triggers and output a nice unified result!

Oh wait, what’s this?  Object Explorer has a “View Dependencies” function.. aw, bummer, it doesn’t show columns.  Just tables.  As we know, most of the GUI functionality is driven by existing system meta-data/catalogs/dmv’s, so there’s definitely something that keeps track of these dependencies.  Let’s hazard a guess… sys.sp_depends or sys.sql_expression_dependencies?  The former is deprecated, the latter doesn’t give us a way to get to columns either.  But thankfully, on the docs, the deprecated note points us toward the correct path: sys.dm_sql_referenced_entities and sys.dm_sql_referencing_entities.

Now we’re cooking with gas!  So how do I use these suckers?  Well, there’s a couple intermediary joins I need to go thru, and in fact these are functions, not views, so we need to know the proper arguments.  But fear not, I have done the hard work for you!  Check out the code and let me know what you think.

But wait, there’s more!  There’s a slight monkey-wrench here.  You see, in the column dependencies, we’re getting both the “audited” columns and the “key” columns.  But we’d like those to be distinguished from each other.  When we document and/or replicate the auditing using newer technology, the primary keys aren’t what we need to track (they’re in a sense “given”, known) — we need to know which columns are “audit-able” and which aren’t.  Hence the two queries with a UNION ALL.
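
If you just want the flavor of it, here’s a trimmed-down sketch of the main join path.  Hedged, of course: this isn’t the full Gist (the “key vs. audited” column split via UNION ALL is left out, and names are simplified), just enough to show how sys.triggers, sys.tables, and the DMF line up.

-- Sketch: each trigger, its parent table, and the columns it references.
SELECT
      SchemaName       = SCHEMA_NAME(tbl.schema_id)
    , TableName        = tbl.name
    , TriggerName      = trg.name
    , ReferencedTable  = ref.referenced_entity_name
    , ReferencedColumn = ref.referenced_minor_name     -- NULL would mean a table-level reference
FROM sys.triggers trg
JOIN sys.tables tbl
    ON tbl.object_id = trg.parent_id
CROSS APPLY sys.dm_sql_referenced_entities(
      SCHEMA_NAME(tbl.schema_id) + '.' + trg.name, 'OBJECT') ref
WHERE trg.is_ms_shipped = 0
  AND ref.referenced_minor_id > 0                      -- keep only column-level dependencies
  -- AND OBJECTPROPERTY(trg.object_id, 'ExecIsUpdateTrigger') = 1   -- optionally, just the UPDATE triggers
ORDER BY SchemaName, TableName, TriggerName, ReferencedColumn;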

union-vs-confederate
That is ALSO inappropriate. Good lord, who’s choosing these pictures today?

Conclusion

So there you have it, documenting some audit-trail triggers.  Or triggers in general.  Complete with the tables & columns on which they depend.  Enjoy!  ‘Til next time folks.

Automating SQL Installation

..while there are likely even better ways to do this in the long-run, this quick & easy approach was sufficient to save me time and effort..

At this point in my career, I’m not managing massive environments, so I don’t feel the need (nor have the expertise) to use a large scale solution like DSC or SCCM.  But I’ve had to install SQL Server a few times, so I figured it’s worth at least scripting out a standard pre-configured installation, so that A) I don’t need to click through a GUI ‘wizard’ hearkening back to the ’90s, and B) the SysAdmins can “fire and forget” (read: stop bugging me about it).

keep it simple stupid
the patented one-eyebrow-raise..

The Disclaimer

Thus, I’m attempting to K.I.S.S., while making it configurable & repeatable.  There are some limitations of this approach, as alluded to above.  It’s not “massively scalable” (scaleable? scale-able?) because:

  1. The PoSh script still needs to be deployed locally to the server in question
  2. The installer config (.ini) also lives locally (though it probably could be a UNC path, it’s just a file after all)
  3. The script prompts you for the service account (SQL engine, Agent) credentials and the sa password using the Read-Host -AsSecureString cmdlet, so some meatbag still has to type those in.  This is because we don’t have an enterprise pwd/secret-management system where I could, say, ask it for a service account credential set and tell it to embed that securely in a script without it actually being visible to me.  So, while yes, they’re kept in a “vault”, it’s not query-able by anything else, so an admin still needs to copy & paste them into whatever configuration screen he’s working with at the time.  Not ideal, I know, but we work with what we’ve got.

PS:  Yeah, yeah, “don’t use sa, rename it or disable it; or use Windows Auth only!”.  Rage, howl, fire & brimstone.  I’m not going to argue about it; we can save that for another post.  This environment dictates that it’s used during setup and then disabled later, so that’s beyond the scope of the installer config.

So yes, while there are likely even better ways to do this in the long-run, this quick & easy approach was sufficient to save me time and effort for the occasions when a new SQL box/VM needs to be spun-up.

Useful links

  1. A primer on SQL cmd-prompt installation & its arguments
  2. A couple community articles on the subject (the latter about slipstreaming updates)
  3. A technet article & couple Q&A threads (technet, stackoverflow) that helped me figure out how to securely get & put the credentials
  4. An example for mounting an ISO in PowerShell
  5. And finally, two things that I attempted to understand but ultimately failed to implement, because (apparently, at least to me), PowerShell remote-ing is a P.I.T.A.
config.ini, to command prompt, to PowerShell
3 steps toward a better workflow

The Outline

First we need an .ini file to work with.  You could either create it from scratch, or take it from an existing SQL box’s “Setup Bootstrap” folder.  Example path C:\Program Files\Microsoft SQL Server\130\Setup Bootstrap\Log\20170801_073414\ConfigurationFile.ini — indicating this was an install done on 8/1/2017 at 7:34am.  Right above that, at simply C:\Program Files\Microsoft SQL Server\130\Setup Bootstrap\Log\, you’ll see a Summary.txt file, which can actually come in handy while you’re testing these unattended installs and wanting to see why it failed.

The first link above, from MSFT Docs, does a pretty nice job of telling you all the things that make up this config file.  You get to bypass the TOS prompt, enter service account details, specify drive letters (paths) for default data/log file locations & tempdb, slipstream update packages (UpdateSource), and even more advanced stuff like AG settings and whatnot.  My example will be a simple standalone instance using the default name, so I’ll be sticking with the basics.

We can then use this file in the ConfigurationFile argument of setup.exe from the SQL Server install media.  To put a little more color on that: the .ini file is really just a collection of command-line arguments to setup.exe; you could also list them all out in-line, but that would be tedious and silly.  Here’s a couple major selling points of creating your own config file:

  1. Slipstream updates (SP’s, CU’s), instead of having it go out to MSFT update servers (or *aghast* sticking with the original RTM bits, you heathen you!)
  2. Specify drive letters / default file locations: sure, this may be considered old-hat if you’re running super slick storage, but I still find it makes management a bit easier if I know where my MDFs, LDFs, TempDB, & backups will always be.
  3. Take advantage of 2016’s better TempDB setup options (# files, size & growth)

We will, however, keep a couple arguments out of the .ini file and instead throw them into the ArgumentList from the calling PowerShell script.  Speaking of, here’s what the PowerShell script needs to do:

  1. Prompt the operator (SysAdmin or DBA) for the SQL & Agent service account credentials, and (optionally) the sa pwd (if using it).
  2. Fetch our install media from the central network share where we store such things (server & office ISOs, for example).
  3. Mount said ISO to our virtual disc drive.
  4. Run its setup.exe with the following arguments:
    1. The config .ini file
    2. The service & sa accounts
  5. After it’s done, un-mount (dismount) the ISO.

Then the DBA can connect to the brand-spankin’-new running SQL instance and do other post-setup configurations as desired (i.e. set max-memory, maxDOP/CTFP, etc).  And sure, those could also be done in PowerShell (thanks in no small part to the awesome team at DbaTools), but I chose not to do so in this case.

As the bloggers say, “that’s left as an exercise to the reader”.

Plus, they’re never quite as deterministic as we’d like them to be — they depend on the server’s compute resources, i.e. memory size & CPU cores, as well as estimated workload & environment tier, so it’s often a gamble in “how correct” your initial settings will be anyway.  Still, anything is better than the defaults, so configure-away!
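
If you do want a head start on those, here’s a hedged sketch of the kind of post-setup config I mean: plain T-SQL, with placeholder values you’d obviously tune to your own server’s memory, cores, and workload.

-- Placeholder values only; tune to your hardware & workload.
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'max server memory (MB)', 24576;        -- leave headroom for the OS
EXEC sys.sp_configure 'max degree of parallelism', 4;         -- maxDOP
EXEC sys.sp_configure 'cost threshold for parallelism', 50;   -- CTFP
RECONFIGURE;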

husky puppies sharing
because sharing is caring!

The Code

Here are the Gists I’ve created to go along with this post.  If I’ve made a mistake, or if you, dear reader, have a suggestion, we can incorporate them into the gist without me having to go back and edit the blog post!

Yay technology!

I’d love to get feedback on how you would improve this, what you might do differently, etc.  Drop me a comment or a tweet!

Config/INI file:

; AUTHOR: Nate Johnson, @njohnson9402/@natethedba, natethedba.wordpress.com
; LICENSE: https://choosealicense.com/licenses/unlicense/
; TYPE: SQL Server 2016 Configuration File (for command-line installation)
; DESCRIPTION:
; Inline comments are mostly copied from existing file generated by install wizard, with clarification where necessary.
; I use drive D:\ for data files (MDF), L:\ for transaction logs (LDF), T:\ for TempDB, and X:\ for backups.
; Most other options are "normal", i.e. I don't deal with clustering, Availability Groups, or other exotic things.
; I am only installing the database engine and replication components; see FEATURES option for more.
; Read the corresponding blog post at https://natethedba.wordpress.com/automating-sql-installation/
; USAGE:
; There are THREE static placeholders you need to replace/type when you use this:
; 1. <YOUR UPDATES/PATCHES FOLDER>, which is for slipstreaming whatever service-pack(s) and cumulative-update(s).
; 2-3. <YOUR DOMAIN> and <YOUR DBA GROUP>: for specifying domain account(s) which are given SQL sysadmin ('sa') rights,
; hence, usually your DBA group. You could move that part out to the calling script instead, if you wanted,
; but I chose to leave it here because it's not likely to change much within an environment.
; required first line
[OPTIONS]
; Specifies a Setup work flow, like INSTALL, UNINSTALL, or UPGRADE. This is a required parameter.
ACTION="Install"
; Specifies that SQL Server Setup should not display the privacy statement when ran from the command line.
SUPPRESSPRIVACYSTATEMENTNOTICE="True"
; By specifying this parameter and accepting Microsoft SQL Server terms, you acknowledge that you have read and understood the terms of use.
IACCEPTSQLSERVERLICENSETERMS="True"
; Use the /ENU parameter to install the English version of SQL Server on your localized Windows operating system.
ENU="True"
; Setup will run silently, logging to files in the setup-bootstrap directory
Q="True"
; Specify whether SQL Server Setup should discover and include product updates. The valid values are True and False or 1 and 0. By default SQL Server Setup will include updates that are found.
UpdateEnabled="True"
; If this parameter is provided, then this computer will use Microsoft Update to check for updates.
USEMICROSOFTUPDATE="False"
; Specifies features to install, uninstall, or upgrade. The list of top-level features include SQL, AS, RS, IS, MDS, and Tools. The SQL feature will install the Database Engine, Replication, Full-Text, and Data Quality Services (DQS) server. The Tools feature will install shared components.
FEATURES=SQLENGINE,REPLICATION
; Specify the location where SQL Server Setup will obtain product updates. The valid values are "MU" to search Microsoft Update, a valid folder path, a relative path such as .\MyUpdates or a UNC share. By default SQL Server Setup will search Microsoft Update or a Windows Update service through the Window Server Update Services.
UpdateSource="<YOUR UPDATES/PATCHES FOLDER>"
; Displays the command line parameters usage
HELP="False"
; Specifies that Setup should install into WOW64. This command line argument is not supported on an IA64 or a 32-bit system.
X86="False"
; Specify a default or named instance. MSSQLSERVER is the default instance for non-Express editions and SQLExpress for Express editions. This parameter is required when installing the SQL Server Database Engine (SQL), Analysis Services (AS), or Reporting Services (RS).
INSTANCENAME="MSSQLSERVER"
; Specify the root installation directory for shared components. This directory remains unchanged after shared components are already installed.
INSTALLSHAREDDIR="C:\Program Files\Microsoft SQL Server"
; Specify the root installation directory for the WOW64 shared components. This directory remains unchanged after WOW64 shared components are already installed.
INSTALLSHAREDWOWDIR="C:\Program Files (x86)\Microsoft SQL Server"
; Specify the installation directory.
INSTANCEDIR="C:\Program Files\Microsoft SQL Server"
; SQL Agent account startup type
AGTSVCSTARTUPTYPE="Automatic"
; Level to enable FILESTREAM feature at (0, 1, 2 or 3).
FILESTREAMLEVEL="0"
; Specifies a Windows collation or an SQL collation to use for the Database Engine.
SQLCOLLATION="SQL_Latin1_General_CP1_CI_AS"
; Set to "True" to enable instant file initialization for SQL Server service. If enabled, Setup will grant Perform Volume Maintenance Task privilege to the Database Engine Service SID. This may lead to information disclosure as it could allow deleted content to be accessed by an unauthorized principal.
SQLSVCINSTANTFILEINIT="True"
; Windows account(s) to provision as SQL Server system administrators.
SQLSYSADMINACCOUNTS="<YOUR DOMAIN>\<YOUR DBA GROUP>"
; The default is Windows Authentication. Use "SQL" for Mixed Mode Authentication.
SECURITYMODE="SQL"
; The number of Database Engine TempDB files.
SQLTEMPDBFILECOUNT="4"
; Specifies the initial size of a Database Engine TempDB data file in MB.
SQLTEMPDBFILESIZE="1024"
; Specifies the automatic growth increment of each Database Engine TempDB data file in MB.
SQLTEMPDBFILEGROWTH="64"
; Specifies the initial size of the Database Engine TempDB log file in MB.
SQLTEMPDBLOGFILESIZE="1024"
; Specifies the automatic growth increment of the Database Engine TempDB log file in MB.
SQLTEMPDBLOGFILEGROWTH="64"
; The Database Engine root data directory.
INSTALLSQLDATADIR="D:"
; Default directory for the Database Engine backup files.
SQLBACKUPDIR="X:"
; Default directory for the Database Engine user databases.
SQLUSERDBDIR="D:\Data"
; Default directory for the Database Engine user database log files.
SQLUSERDBLOGDIR="L:\Log"
; Directories for Database Engine TempDB files.
SQLTEMPDBDIR="T:\TempDB"
; Directories for Database Engine TempDB log files.
SQLTEMPDBLOGDIR="T:\TempDB"
; PS: yes, there is some debate as to whether your TempDB logs should go with your user DB logs, or with your TempDB data;
; I'm inclined to say the latter, because I like to use locally attached NVMe/M.2 flash storage specifically for it. But, YMMV.

PowerShell install script:

# AUTHOR: Nate Johnson, @njohnson9402/@natethedba, natethedba.wordpress.com
# LICENSE: https://choosealicense.com/licenses/unlicense/
# TYPE: PowerShell script
# DESCRIPTION/USAGE:
# There are TWO static placeholders that you need to change/type-in when you want to use this,
# they both start with <PATH TO …>. I also added a #CHANGE THIS! comment to the end of their lines.
# The first one is for your installer config .ini file
# (see other gist at https://gist.github.com/NJohnson9402/a3c13429a055771efd26eefa66c69d62).
# The second is for the location of your SQL server installation media (ISO).
# See corresponding blog post at https://natethedba.wordpress.com/automating-sql-installation/
# get account credentials to configure SQL
$sqlsvc = Get-Credential -Message "SQL Service account?"
$agtsvc = Get-Credential -Message "SQL Agent account?"
$sapwd = $(Read-Host -Prompt "SQL 'sa' login pwd" -AsSecureString)
# build up the argument list via string concatenation (+= per line, so the #CHANGE THIS! comments don't break line-continuation)
$arglist = '/ConfigurationFile="<PATH TO your config file>.ini"' #CHANGE THIS!
$arglist += ' /AGTSVCACCOUNT="' + $agtsvc.UserName + '"'
$arglist += ' /AGTSVCPASSWORD="' + $agtsvc.GetNetworkCredential().Password + '"'
$arglist += ' /SQLSVCACCOUNT="' + $sqlsvc.UserName + '"'
$arglist += ' /SQLSVCPASSWORD="' + $sqlsvc.GetNetworkCredential().Password + '"'
$arglist += ' /SAPWD="' + [System.Runtime.InteropServices.Marshal]::PtrToStringAuto([System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($sapwd)) + '"'
# this last crazy tidbit converts the SecureString to a plain string that is still "secure"
# in the sense that it can't be sniffed or dumped from memory by another process. At least, that's my understanding.
# mount the install image
$iso = Get-ChildItem -Path "\\<PATH TO your SQL Server 2016 Standard x64 iso folder>\" #CHANGE THIS!
Mount-DiskImage -ImagePath $iso.FullName
# get the drive letter of the mounted image to reference setup.exe
$setup = $(Get-DiskImage -ImagePath $iso.FullName | Get-Volume).DriveLetter + ":\setup.exe"
# run installer with arg-list built above, including config file and service/SA accounts
Start-Process -Verb RunAs -FilePath $setup -ArgumentList $arglist -Wait
# un-mount the install image when done after waiting 1 second (just for kicks)
Start-Sleep -Seconds 1
Dismount-DiskImage -ImagePath $iso.FullName

Quickie: Timing a HUGE Data Copy Operation

We think the total operation will take several hours, possibly days.  So we want to be able to check up on it.

While I try to compose my resources and samples for the previous two teasers, I found this to be a particularly interesting problem and thought I’d share my solution.

Scenario

trump it's gonna be yuge
yuuuuge

We have a very large table, half a billion rows.  It needs to be copied (well actually, indexed, but that’s another part of the discussion).  We know that standard TSQL would suck for this, even if we wrote a batch-loop proc for it (sure, it might be less blocking, but it’ll take forever).  Plus, we might be going between two different servers (well, not in my case, but you very well could be, and in fact that’s probably the more likely case, given the implications of such a large operation).  SSIS to the rescue!

Now, again, we’re lazy, so we use the Import/Export Data wizard (a component of SSIS) to build the .dtsx package.  Then we can schedule it via a SQL Agent Job.

Sure, we could build it with Visual Studio and/or BIML and be super-awesome, but where’s the fun lazy in that?

Based on some preliminary estimates of the throughput using SSIS for a small sub-set of the data, we think the total operation will take several hours, possibly days.  So we want to be able to check up on it — to see its status & elapsed time, and get an estimate of time remaining.  Sound good?

Assumptions

assumptions this way
over thar!

Two assumptions going into this, to keep the example simple.

  1. We know the Agent Job’s scheduled start time, and it does start on-schedule.
  2. We’re only dealing with 2 tables — 1 source, 1 destination.  Furthermore, they’re both on the same server instance; or if not, we can query a linked-server connection once to get the “source” table size-stats, because they won’t change.

(Okay that was slightly more than 2, but again, simple.  One can imagine expanding this to multiple tables & multiple instances, but then at some point you’re going to need a “witness” that can talk to all the disparate sources of data and conglomerate those bits & pieces together, and then you have to ask yourself “is it really worth it, or can I just give a SWAG & move on with my day?”)

Before I move on: helpful SO answer that reminded me how surprisingly-not-that-difficult (and-still-pretty-efficient) it is to convert a time interval (seconds, in this case) into a “human friendly format” like Days.hh:mm:ss.  Yay.

I’m sure I’ve done it before, and subsequently forgotten about it.  Because, again, it’s one of those things that should be done in the presentation layer.

The Outline

One key component of any kind of “check-up” or general monitoring solution is, you need it to be light weight, i.e. have low overhead.  We don’t want our method of monitoring the process to add some non-trivial extra load to that process (or the servers doing said processing).  So let’s avoid COUNT(*) shall we?  Unfortunately, the top Google results don’t readily point to this, but MS actually gave us a handy-dandy built-in way of measuring this, sys.sp_spaceused.  Yay again.

Granted, it doesn’t get you “up to the millisecond” accuracy, but it’s truly close enough for what we’re doing; and frankly, it’s your only sane option when asking the question “how many rows?” of a TOUS.

So we’re going to use the output of that system proc, specifically the rows column, to measure how much data has been bulk-copied into our destination table.  We’ll also measure the source table, once, as stated, because (we’re assuming!) that won’t change.

Finally, we’ll use those row-counts, with some simple tricks & nonsense — basic math, dateadd/datediff — to calculate a percent-complete, elapsed time, estimated total time, and ETA (estimated time of arrival completion).
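
To make that concrete, here’s a hedged sketch of the check-up query (not the actual Gist; the destination table name, job start time, and source row-count are placeholders you’d swap in for your own operation):

-- Placeholders: @JobStart (known Agent job start), @SourceRows (measured once, up front),
-- and dbo.MyDestinationTable (the table being bulk-loaded).
DECLARE @JobStart   datetime2 = '2017-08-01T22:00:00'
      , @SourceRows bigint    = 500000000;

CREATE TABLE #spaceused
    ( name nvarchar(128), [rows] varchar(20), reserved varchar(20)
    , data varchar(20), index_size varchar(20), unused varchar(20) );

INSERT INTO #spaceused
EXEC sys.sp_spaceused @objname = N'dbo.MyDestinationTable';

SELECT
      DestRows        = x.DestRows
    , PercentComplete = 100.0 * x.DestRows / @SourceRows
    , ElapsedSec      = x.ElapsedSec
    , EstTotalSec     = x.ElapsedSec * @SourceRows / NULLIF(x.DestRows, 0)
    , ETA             = DATEADD(second
                          , CONVERT(int, x.ElapsedSec * @SourceRows / NULLIF(x.DestRows, 0))
                          , @JobStart)
FROM ( SELECT DestRows   = CONVERT(bigint, [rows])
            , ElapsedSec = DATEDIFF(second, @JobStart, SYSDATETIME())
       FROM #spaceused ) x;

DROP TABLE #spaceused;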

it's all a lot of simple tricks and nonsense
The Force? Hah!

The Code

See Gist.

In the next post, I’ll zoom out a bit and explain why I needed to do this, and in what situations it should come in handy.  Stay tuned!

Clean Code, the SQL

as a developer, DBA, or hybrid “DbDev”, you’re often tasked with writing or improving the stored procedures which house that complex logic.  And that’s my topic today: being clean about your SQL code.

Get it? It’s just too punny! … Ok I’m done.

The Coding Blocks guys did a series of episodes about the perennial favorite Clean Code book.  If you haven’t subscribed to their podcast…

do it do it now -Arnold
What are you waiting for?!?!

And it’s a great book, no doubt. But those guidelines for application code are not 100% directly applicable to database code.

wait... what?
Srsly?

Let’s back up a second. Why? That sounds counter-intuitive, no?  Ok, more context. See, the traditional (“legacy”?) app consists of about 3 layers, the bottom one being the database. And it’s usually relational, and is usually responsible for far more than simple data access and persistence.  Read: complex business rules and process logic.  Data flow, not just getters and setters.

So that means, as a developer, DBA, or hybrid “DbDev”, you’re often tasked with writing or improving the stored procedures which house that complex logic.  And that’s my topic today: being clean about your SQL code.

Part 1: Comments

There’s a fairly famous quote from the book about comments:

Comments are always failures.

He’s using hyperbole, but for a purpose.  While his views on comments may be extreme, most programmers tend to realize the core essence of that chapter, which is that comments only serve to express something in plain English that the code has failed to express clearly enough to be easily and immediately understood.

With SQL scripts, and in particular with stored-procedures, I’m taking a somewhat opposite stance:

Comments are always appreciated, even if they’re potentially outdated or inaccurate.

There are two types of comments in SQL, the --inline and the /* block */.  Different people have their preferred flavors of block — sometimes it’s just several lines prefaced with the double-dash --.  And that’s fine, whatever floats your comment-boat.

hms commentus comment-boat
I made my own!! (most copied from an example at http://www.asciiworld.com/-Boats-.html)

In particular, I always encourage a comment block at the top of every stored-proc & other user-defined programmable object (functions, types, etc).  Just a small example for illustration:

/*
Location: Server.Database
Author: NateTheDBA
Created: 2012-12-21
Description: Gets users who have not logged in since the given date.
Consumers: MyCoolAppName, MyReportServer
Revisions
2015-05-15, Nate: removed archive (never used after archive-date)
2017-06-07, Nate: fixed formatting for blog post
*/
CREATE PROCEDURE GetUsersNotLoggedInSince
    @SinceDate datetime2
AS
BEGIN
    --some clever stuff goes here...
END

“But wait”, you say, “what about source control?”  Yes, all your programmable objects (and even, arguably, your reference data) should be in source control.  There are tool-vendors aplenty to help you with that.  But guess what?  Budgets.  Time & effort.  Oh, did I mention, legacy legacy legacy?  Yes, dear reader, the average business has years (decades) of organically evolved relational databases and processes.  Are you the guy or gal to swoop in on your unicorn and seamlessly convert their entire data tier infrastructure to a beautiful DevOps pipeline with shiny rainbows and kittens for all?  No?  Okay then.  Baby-steps.

devops-unicorn-dba-cleaning-up
Not that I’m bitter or anything…

Yes, my procs are in source control.  It’s called “daily automated script-out-objects-to-files which are then committed to SVN”.  It’s not built-in to SSMS.  Which means that I, or another DBA, or a potential consultant, or a Dev who gets enlisted to help improve a proc that runs for hours when it should only take minutes, would be inconvenienced by the extra trip to a separate tool/system to fetch some change-history just for context.  And really, that’s all this is for — CONTEXT.  We like to know what it is we’re working on when we start to work on it, without having to traverse a change-tree or go bug 3 other people who “might” have touched it last.  I’m not asking for a detailed log of every single time someone touched the thing; just give me the overview, the milestones and significant changes to functionality/features/scope so that I have a jump-off point for troubleshooting/testing/reasoning about it.

“But wait”, you say again, “shouldn’t your name be a sufficient description of what the proc does?”  Sure, in theory.  Until you have dependencies which need that name to stay the same even after an update or logic-change.  Like reports.  Or data-connected Excel workbooks.  Which are used daily by managers, who will come yelling at you if their worksheets suddenly stop functioning.

end rant

Back to comments in general.  The reason they’re helpful (besides documentation-headers for objects) is that they provide context and explain intent.  Half the time, my job as a DBA is improving or fixing someone else’s code.  Therefore, I want to see, in plain English, what it is they’re trying to accomplish, notes about attempts and failures, and the like.  Yes, I could have a discussion with them.  And I will.  But if I’m working on it asynchronously and they’ve moved on to something else, or our hours are different, I want those little nuggets of context and intent to be right there in the script, because that’s where I’m working!

What about queries that get passed-down from the app to the DB?  ORMs don’t support pre-pending a comment to their data calls, do they?  I wish.  Maybe some do, I haven’t researched it, but I know for sure that LINQ doesn’t.  But then again, when I’m using a query-capture tool (like DMVs, Profiler, X-events, or a vendor monitoring tool), ORM queries are so painfully obvious in comparison to hand-crafted SQL that I can usually spot them from a mile away, and go bother the app-devs for some context & conversation.  If you’re one of the poor unfortunate souls who still passes ad-hoc generated SQL statements down thru ODBC to your DB, then sure, a little comment won’t hurt anybody.

you poor unfortunate soul
it’s sad, but true…

So do your DBAs a favor, comment your SQL code, at least in terms of programmable database objects and ad-hoc scripts.  I promise, it’ll make them hate you less.  It might even make you love yourself more, because 3 months down the road when you revisit that proc, and you need to remember what it was for and why you did it that way, you’ll see it right there in your very own writing!  (OK, typing.)

Part 2: SRP, Encapsulation, and Abstraction

A bit of paraphrase of one of the book’s key points:

A reusable module (function, method) should do one thing, and do it well.

Also, the DRY principle:

Don’t repeat yourself.

When building SQL modules, we’re usually concerned with performance and accuracy, over abstraction and composability.  Therefore, repeating oneself is not necessarily a bad thing, when done for the right reasons.  Diligence is a big factor here — if there’s a non-trivial relationship between some entities that you’re repeating in several places, and you know that it could become a maintenance headache if that relationship’s definition has to change later, do as much as possible to mitigate the risk of dependency/consistency-loss.  This can be documentation, comments, and/or building that relationship into a view.
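
For illustration, here’s a bare-bones example of tucking one of those repeated relationships into a view; the tables & columns are entirely made-up, it’s just to show the shape of the idea.

-- Hypothetical: the "active customer with non-cancelled orders" rule now lives in ONE place.
CREATE VIEW dbo.vwActiveCustomerOrders
AS
SELECT c.CustomerID, c.Name, o.OrderID, o.OrderDate, o.TotalAmount
FROM dbo.Customer c
JOIN dbo.[Order] o
    ON o.CustomerID = c.CustomerID
WHERE c.IsActive = 1
  AND o.[Status] <> 'Cancelled';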

Make-Time-for-Due-Diligence
It’s important.

The latter brings up an interesting topic, one which I had a lively discussion about with a colleague recently (he’s a developer, and a dang good one) — nested views.  Because inevitably, the encapsulation of those relationships & business-rules into things like views or ITVF’s can and will lead to nesting those objects into other objects.  And troubleshooting many-level-nested views is a particularly frustrating exercise; in fact they’re what some DBAs call one of the “deadly sins of SQL”.  But there are perfectly valid reasons and uses for them, sometimes, and I really enjoyed the discussion thread we had on it, so I’ll have to expand on that in another post.

Anyway, I’m already getting long-winded and well over 1k words, so I’ll wrap it up for now, and continue this topic next week.

Thanks for reading, stay tuned!

Dates, Date-pickers, and the Devil

When a date range, or time period, is specified in SQL, it’s easiest, clearest, and most concise, to use a “greater-than-or-equal-to Period-Start, and less-than Next-Period-Start” logic. Mathematically speaking, we are defining the range as closed on the left, open on the right.

This is a bit rant-y, but… indulge me.  I’ve been writing/refactoring a lot of old reporting queries.  And, like most reports, they deal with dates and datetimes — as parameters, boundaries, or where/join predicates.  I also got way too intense with a recent SSC post (Sql Server Central), which fueled the fire even more.

fluffy-angry-puppy
I’m so cute and ANGRY!

SQL Server is very good at handling temporal datatypes and calculations against them.  We’ve got functions like dateadd, datediff, and datepart, and datatypes like datetime2, datetimeoffset, datetime, etc.  It supports all sorts of format conversions if you need to display them in various ways.

..even though that should be left to the presentation layer!

Here’s the issue.  Well, there are several issues, but we only have time for a few.

Here’s the first problem

Report users don’t understand the “end of a time period” problem.  I don’t have a good name for it; others might call it the “Day plus one” problem or the “Less than date” problem.  What do I mean by this?  Well, let’s back up a bit, to DBA Commandment #6, “Thou shalt not use between with datetimes.”  In order to first understand the issue, we have to understand why this is a commandment.

When a date range, or time period, is specified in SQL, it’s easiest, clearest, and most concise, to specify it like so: @TheDate >= @StartOfPeriod and @TheDate < @StartOfNextPeriod.  Mathematically speaking, we’re defining the range as “closed on the left, open on the right”.  In other words, Min <= X < Max.

The reason we do this with datetimes is found right there in the name of the datatype — it has (or can have) a time component!

stone-tablets-with-roman-numerals-to-10
There are probably more than 10, but it’s a good starting point…

Let’s talk examples

Say you’d like to report on the month of March 2017.  How do you determine if your data-points (stored as datetime or, hopefully, datetime2) are within that period, that month?  Well sure, you could write where month(MyDateColumn) = 3 and year(myDateColumn) = 2017 

NO.  That is horrible, don’t do that.

It’s not SARGable and renders your index on that column useless.  (You do have an index on it, don’t you? No? Make one!)  Okay, let’s stick with something SARGable.  How about MyDateColumn between '20170301' and '2017-03-31T23:59:59.999'?  (You did read this post about using culture-neutral datetime literals right?)  But wait!  If your data is a datetime, it’s not actually that precise — your literal gets rounded up to 20170401 and you’re now including dates from April 1st (at midnight)!

Oh that’ll never happen… until it does.

Second problem

Many developers and report-writers assume that the values in their data will never be within the typical “1 second before midnight” or “1/300th of a second before midnight” escape window of your “3/31/2017 23:59:59.997” bounding value.  But can you guarantee that?  Didn’t think so.  Worse, if you use the .999 fraction as given in the 2nd example, you’re either “more” or “less” correct, and nobody can actually tell you which way that pendulum swings because it depends on the statistical likelihood of your data having actual literal “midnight” values vs. realistic (millisecond-y, aka “continuous”) values.  Sure, if you’re storing just a date, these things become a lot less complicated and more predictable.

But then why aren’t you storing it as an actual date, not a datetime!?

So what’s the right answer?

As I said, “greater than or equal to ‘Start’, and less than ‘End’”, where ‘End’ is the day after the end of the period, at midnight (no later!).  Hence, MyDateColumn >= '20170301' and MyDateColumn < '20170401'.  Simple, yes?

keep calm and keep it simple
KCKS

But wait, there’s more!

I mentioned “date-pickers” in the title.  When it comes to UX, date-pickers are a sore subject, and rightly so — it’s difficult to truly “get it right”.  On a “desktop-ish” device (i.e. something with a keyboard), it may be easiest on the user to give them a simple text-box which can handle various formats and interpret them intelligently — this is what SSRS does.  But on mobile devices, you often see those “spinner” controls, which is a pain in the arse when you have to select, say, your birth date and the “Year” spinner starts at 2017.  #StopIt

I mean, I’m not that old, but spinning thru a few decades is still slower than just typing 4 digits on my keyboard — especially if your input-box is smart enough to flip my keyboard into “numeric only” mode.

Another seemingly popular date-picker UX is the “calendar control”.  Oh gawd.  It’s horrible!  Clicking thru pages and pages of months to find and click (tap?) on an itty bitty day box, only to realize “Oh crap, that was the wrong year… ok let me go back.. click, click, tap..” ad-nauseum.

stop-it-sign
#StopIt again

The point here is, use the type of date-picker that’s right for the context.  If it’s meant to be a date within a few days/weeks of today, past/future — OK, spinner or calendar is probably fine.  If it’s a birth date or something that could reasonably be several years in the past or future, just give me a damn box.  (Heck, I’ll take a series of 3 boxes, M/D/Y or Y/M/D, as long as they’re labeled and don’t break when I omit the leading-zero from a single-digit month #!)  If there’s extra pre-validation logic that “blocks out” certain dates (think bill-payer calendars or Disneyland annual-pass blackout-days), that probably needs to be a calendar too.

..just make sure it’s responsive on a mobile device.

And in all cases, pass that “ending date” to your SQL queries in a consistent, logical, sensible manner.  For reporting, where the smallest increment of a period is 1 day, that probably means automagically “adding 1 day” to their given end-date, because the end-user tends to think in those terms.  I.e. if I say “show me my bank activity from 1/1/2017 to 1/31/2017”, I really mean “through the end of the month“, i.e. the end of the day of 1/31.  So your query is going to end up wanting the end-date parameter to be 2/1/2017, because it’s using the correct & consistent “greater than or equal to start, and less than start-of-next” logic.
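
In code, that “add 1 day and use less-than” dance looks something like this (hypothetical table & columns, purely for illustration):

-- The user sees/enters 1/1/2017 and 1/31/2017; we bump the end by one day and use >= / <.
DECLARE @StartDate date = '20170101'
      , @EndDate   date = '20170131';

DECLARE @EndExclusive date = DATEADD(day, 1, @EndDate);

SELECT act.AccountID, act.ActivityDate, act.Amount
FROM dbo.BankActivity act
WHERE act.ActivityDate >= @StartDate
  AND act.ActivityDate <  @EndExclusive;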

context-consistency-clarity
The 3 C’s

Final thoughts

I know it’s not easy to explain to business folks, and it’s not easy to implement correctly.  But it’s important.  The >= & < logic is clear, concise, and can be used consistently regardless of underlying datatype.  You just need to adjust your presentation layer (whether that’s SSRS parameters or a .NET date-picker) to convey their intent to the user, whether that’s “show/enter the last day of the month, but translate to the next day to feed to the query/proc.”, or “make them enter the next-day (day after the end of the month/period) and understand the ‘less than’ logic.”  I’m more inclined to the first, but it depends on your audience.

Thanks for reading, and happy date-ing!

DBA Holy Wars Part 2

Battle 4: GUIDs vs Identities

This is an oldie but goody.  A) Developers want their apps to manage the record identifiers, but DBAs want the database to do it.  B) Developers prefer abstracting the identity values out of sight/mind, DBAs know that occasionally (despite your best efforts to avoid it) your eyeballs will have to look at those values and visually connect them with their foreign key relationships while troubleshooting some obscure bug.

but-wait-theres-more-billy-mays
there’s ALWAYS more…

But there’s more to it than that.  See, none of those arguments really matter, because there are easy answers to those problems.  The real core issue lies with the lazy acceptance of GUI/designer defaults, instead of using a bit of brainpower to make a purposeful decision about your Primary Key and your Clustered Index.

Now wait a minute Mr. DBA, aren’t those the same thing?

NO!  That’s where this problem comes from!

A good Clustered Index is: narrow (fewer bytes), unique (or at least, highly selective), static (not subject to updates), and ever-increasing (or decreasing, if you really want).  NUSE, as some writers have acronym’d it.  A GUID fails criteria ‘N’ and ‘E’.  However, that’s not to say a GUID isn’t a fine Primary Key!  See, your PK really only needs to be ‘U’; and to a lesser extent, ‘S’.  See how those don’t overlap each other?  So sure, use those GUIDs, make them your PK.  Just don’t let your tool automagically also make that your CX (Clustered indeX).  Spend a few minutes making a conscious effort to pick a different column (or couple columns) that meet more of these requirements.

For example, a datetime column that indicates the age of each record.  Chances are, you’re using this column in most of your queries on this table anyway, so clustering on it will speed those up.
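
To illustrate (hypothetical table, just a sketch): the GUID stays the Primary Key, but the clustered index goes on the narrow, ever-increasing datetime column.

CREATE TABLE dbo.Widget
(
      WidgetID   uniqueidentifier NOT NULL
          CONSTRAINT PK_Widget PRIMARY KEY NONCLUSTERED
          CONSTRAINT DF_Widget_ID DEFAULT NEWID()
    , CreatedAt  datetime2(3) NOT NULL
          CONSTRAINT DF_Widget_CreatedAt DEFAULT SYSDATETIME()
    , WidgetName nvarchar(100) NOT NULL
);

-- the conscious choice: cluster on the narrow, static, ever-increasing column instead
CREATE CLUSTERED INDEX CX_Widget_CreatedAt ON dbo.Widget (CreatedAt);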

Most of the time, though, if your data model is reasonably normalized and you’re indexing your foreign keys (because you should!), your PKs & CX’s will be the same.  There’s nothing wrong with that.  Just be mindful of the trade-offs.

Battle 5: CSV vs TAB

bluray-vs-hddvd-fight
Who doesn’t love a good format-war?

Often, we have to deal with data from outside sources that gets exchanged via “flat files”, i.e. text files that represent a single monolithic table of data.  Each line is a row, and within each line, each string between each delimiting character is a column value.  So the question is, which is easier to deal with as that delimiter: comma, or tab?

String data values often have commas in them, so usually, the file also needs a “quoting character”, i.e. something that surrounds the string values so that the reader/interpreter of the file knows that anything found inside those quotes is all one value, regardless of any commas found within it.

But tabs are bigger.. aren’t they?  No, they’re still just 1 byte (or 2, in Unicode).  So that’s a non-argument.  Compatibility?  Every program that can read and automatically parse a .csv can just as easily do so with a .tab, even if Windows Explorer’s file icon & default-program handler would lead you to believe otherwise.

I recently encountered an issue with BCP (a SQL command-line utility for bulk copying data into / out of SQL server), where the csv was just being a pain in the arse. I tried a tab and all was well! I’m sure it was partially my fault but regardless, it was the path of least resistance.

Battle 6: designers vs scripting

no-wizard-allowed
Wizards are usually good, but in this case, they’re lazy and bad for you…

This should be a no-brainer. There is absolutely no excuse for using the table designer or any other wizardy GUIs for database design and maintenance, unless you’re just learning the ropes. And even then, instead of pressing ‘OK’, use the ‘Script’ option to let SSMS generate a T-SQL script to perform whatever actions you just clicked-thru.  Now yes, admittedly those generated scripts are rarely a shining example of clean code, but they get the job done, even with some unnecessary filler and fluff.  Learn the critical bits and try to write the script yourself next time– and sure, use the GUI-to-script to double check your work, if you still need to.

Confession: I still use the GUI to create new SQL Agent Jobs. It’s not that I don’t know how to script it, it’s just that there are so many non-intuitive parameters to those msdb system-sp’s that I usually have to look them up, thereby spending the time I would have otherwise saved.

Bonus round: the pronunciation of “Data”

its-data-not-data
Call me “big Data” one more time…

Dah-tuh, or Day-tuh?  Or, for the 3 people in the world who can actually read those ridiculous pronunciation glyphs, /ˈdeɪtə/ or /ˈdætə/ ?  It’s a question as old as the industry itself… or maybe not.  Anecdotally, it seems like most data professionals, and people in related industries, tend to say “day-tuh”; while those in the media and generally less technical communities tend to say “dah-tuh”.  (Where the first syllable is the same vowel-sound as in “dad” or “cat”.)  This likely means that the latter is more popular, but the former is more industrially accepted.

In either case, it doesn’t really matter, because at the end of the day, we’re talking about the same thing.  So if some dogmatic DBA or pedantic PHB tries to correct your pronunciation, tell ’em to stop being so persnickety and get on with the task at hand!

Until next time…

Little Gotchas

If the caller of our stored-procedure literally passes NULL as the parameter value, we might have a problem!

A large part of most DBA/DBD’s daily job is writing & maintaining stored-procedures.  In SQL Server or other RDBMSs, stored-procs (“SP’s”, “procs”, however you like to abbreviate), serve as one of the building-blocks of your overlaying applications and day-to-day operations, including maintenance and automation.

sprocket
This is a sprocket, not to be confused with a sproc, which is really just a proc.

Today, something struck me, and I was both shocked and comforted by the fact that this hadn’t really “come back to bite me in the arse“, as the proverbial saying goes.  But first, some context.

When we declare our proc signature with our parameters, we of course give them datatypes, and often default values — the parameter value that is assumed & used upon execution when the caller (operator, application, agent job, etc.) calls said proc without passing a value to that parameter.  So we create our proc like so:

CREATE PROCEDURE dbo.MyProc @MyParam BIT = 0 AS BEGIN SET NOCOUNT ON; END

So that users are allowed to call it like so, and assume some correct default behavior:

EXEC dbo.MyProc;

Coincidentally, that CREATE line is part of a typical “boilerplate” snippet or template which I use to create procs with “create if not exists, else alter” logic and a nice header-comment-block, which I’ll publish on my GitHub or Gist shortly, so I can show it here.  I know that MS recently added DROP IF EXISTS support to the language, but frankly, I like to keep procs intact if they exist because it’s easier not to have to remember/re-apply their metadata, such as security (grants/deny’s, certificate signatures, etc.) and extended properties.  Wake me up when they add true CREATE OR ALTER syntax!  Oh snap, they did… in 2016 SP1.  Anyway.
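
For context, the “create if not exists, else alter” bit boils down to something like this: a hedged sketch, not the full template (that’s what the upcoming Gist is for).

-- Create a do-nothing stub only if the proc is missing, then always ALTER,
-- so existing grants, signatures, & extended properties survive redeployment.
IF OBJECT_ID(N'dbo.MyProc', N'P') IS NULL
    EXEC (N'CREATE PROCEDURE dbo.MyProc AS RETURN 0;');
GO
ALTER PROCEDURE dbo.MyProc
    @MyParam BIT = 0
AS
BEGIN
    SET NOCOUNT ON;
    --real body goes here...
END
GO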

Now for the “catch”, the gotcha.

gotcha-programming-wikipedia-def
In programming/software-dev/IT-systems, “gotcha” has a specific meaning.  Thanks Wikipedia!

If the caller says exec dbo.MyProc, that’s great — they didn’t pass a parameter value, so the execution uses the default value (0) and off we go.  However, if the caller is so malicious as to literally pass NULL, we might have a problem!  Because let’s say that @MyParam value is used in a JOIN predicate or an IN (SELECT...) block, or even a CASE expression.  We won’t get an actual error; SQL Server is smart enough to jump over the syntactical variations required for equivalence checking (i.e. Column1 = 0 vs. Column1 is NULL) when it interprets/compiles the stored-procedure.  But, what we’re probably going to get is unexpected or unknown behavior.

warning-assumptions-ahead

It seemed worth re-using a classic…

And really, it all comes back to those nasty things called assumptions.  See, as the proc author, we’re assuming that our @MyParam will always be a 0 or 1, because it’s a BIT, and we gave it a default value, right?  Sure, maybe in another language, but this is T-SQL!  NULL is a separate and distinct thing, a valid value for any datatype, and must be accounted for and treated as such.  It can get especially dicey when you have a NOT IN (SELECT...) block that ends up as an empty-set, which suddenly morphs the outer query into a “without a WHERE clause” beast, and.. well, you can guess the rest.
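
Here’s a quick illustration of that beast, with completely made-up tables, just to show the shape of the problem.  With @MyParam as NULL, the “= @MyParam” predicate matches nothing, the subquery is an empty set, and NOT IN (empty set) is true for every outer row; i.e. no effective WHERE clause at all.

DECLARE @MyParam bit = NULL;   -- the "malicious" caller's value

SELECT c.CustomerID, c.Name
FROM dbo.Customer c
WHERE c.CustomerID NOT IN
    ( SELECT f.CustomerID
      FROM dbo.CustomerFlag f
      WHERE f.IsArchived = @MyParam );   -- "= NULL" never matches, so the subquery returns nothing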

So what do we do about it?  Well, we can add a “check parameter values” block to the top of our procedure where we either throw an error, or set the NULL value back to a default.

Examples:

IF (@MyParam IS NULL) RAISERROR ('@MyParam cannot be NULL; try again.', 15, 1);
IF (@MyParam IS NULL) SET @MyParam = 0;

We could also work on the internal proc logic to account for NULL values and “eliminate the guesswork” (i.e. prevent unexpected behavior) by actually having logical branches/conditions which “do something” if the parameter is NULL.  Then, at least we know what our proc will do if that infamous caller does exec MyProc @MyParam = NULL.  Yay!  But that sounds like a lot of work.  Maybe.

Or maybe it’s worthwhile because you actually want NULL to be treated differently than all other parameter values, and then, hey, you’ve already spent the time on that logic, so you’re done!

what-if-i-told-you-null-does-not-equal-null
But NULL does not NOT equal NULL, either!  Crap, somebody give me the red pill…

I hope this helps somebody else avoid the same assumptions.

Nested Set++ Wrap-Up

So we’ve built our Cat Tree. But how do we know it’s all correct?

One more time, with feeling!  Not that this dead horse needs another beating, but I did promise…

So we’ve built our Cat Tree.  We’ve written our CrUD ops, our “move” op, and even some readers.  But how do we know it’s all correct?  We can select from our Cats view, of course.  But we want to be really sure.  Plus there’s that pesky SwapCatNode method.

Easy one first.  SwapCatNode can mean swapping sibling order, or switching a parent with a child or grandchild, or toggling nodes that are in completely different places in the tree & not related at all!  This is the least logical operation, if you think about a proper hierarchy, but it turns out to be necessary sometimes.  We’re just swapping the nodes’ position values & ParentIDs with each other, and updating ParentIDs on their children to each others’ IDs.

I really don’t even need to draw this one… but because I needed a header image, I did.  Anyway, just get the rows with the given target IDs, swap the PLeft, PRight, Depth, and ParentID values, and call it a day.
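
For the curious, the core of it looks roughly like this.  This is a hedged sketch of the simple case (siblings or unrelated nodes); the real SwapCatNode proc on GitHub handles the parent/child variations, and remember, plain updates to the position columns are blocked by the triggers, which is exactly why this lives inside the proc.

DECLARE @Node1 int = 3, @Node2 int = 7;   -- hypothetical CatIDs to swap

-- swap the two nodes' position values, depth, and parents in one statement
UPDATE c
SET   c.PLeft    = o.PLeft
    , c.PRight   = o.PRight
    , c.Depth    = o.Depth
    , c.ParentID = o.ParentID
FROM nsm.Cat c
JOIN nsm.Cat o
    ON (c.CatID = @Node1 AND o.CatID = @Node2)
    OR (c.CatID = @Node2 AND o.CatID = @Node1);

-- re-point each node's children at the other node
UPDATE child
SET child.ParentID = CASE child.ParentID WHEN @Node1 THEN @Node2 ELSE @Node1 END
FROM nsm.Cat child
WHERE child.ParentID IN (@Node1, @Node2);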

Now the complex.  To validate that our tree is properly structured, the following statements need to be true:

  1. Each node’s Right value is greater than its Left.
  2. More to the point, each node’s Right value is greater than all of its ancestors’ Left values.
  3. Similarly, each node’s Left value is less than all of its descendants’ Left values (and Right values, obviously!)
  4. Leaf nodes have no gaps between Left & Right: Right = Left + 1
  5. Depth is easy to verify because we already wrote the rCTE to calculate it!
  6. And of course, no orphans – all ParentIDs lead to an actual parent node, except of course if they’re NULL (root nodes).

We can either go thru lots of logical checks in different queries, or we can try building a mock tree out of the base adjacency-list structure (ParentIDs) and compare values.  The latter will only help us with #1-5; the orphans problem is a different animal, but it’s also not part of the model per se, so it’s actually good to separate that check from the rest.  (And it’s really simple – use a not exists query on ParentIDs and presto, orphans checked!)
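
A couple of those checks, sketched out against the series’ nsm.Cat table (hedged; not the exact repo code):

-- Orphan check: any ParentID that doesn't point at an actual node?
SELECT c.CatID, c.ParentID
FROM nsm.Cat c
WHERE c.ParentID IS NOT NULL
  AND NOT EXISTS (SELECT 1 FROM nsm.Cat p WHERE p.CatID = c.ParentID);

-- Checks #1 & #4: Right must exceed Left, and leaf nodes must have Right = Left + 1.
SELECT c.CatID, c.PLeft, c.PRight
FROM nsm.Cat c
WHERE c.PRight <= c.PLeft
   OR ( NOT EXISTS (SELECT 1 FROM nsm.Cat k WHERE k.ParentID = c.CatID)
        AND c.PRight <> c.PLeft + 1 );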

Building a mock-tree, or a “position re-builder”, will come in handy for another reason:  Let’s say we need to completely revamp a subtree, i.e. insert & update a bunch of nodes at once because somebody royally screwed up that branch.  And we’ve got our shiny fixed data, 100’s of rows, ready to go, if only those damn triggers weren’t there, preventing us from doing bulk operations!  What we’d really like to do is, knowing the starting ParentID, just insert all our new nodes with PLeft values in sequence to each other (and not care about the rest of the tree); and/or, update a few sets or families of nodes to massively re-order them, without having to call the Swap routine one-at-a-time ad-nauseum.  We also don’t want to care about figuring out correct PRight & Depth values.  After that’s all done, our new subtree will have “bad” position values, so we need to rely on some other routine to fix them for us, so that the tree can again be well-formed and things can go back to normal.

nsm-cat-bulk-reorder-insert-rebuild
Simon & Tigger got re-ordered. Then we bulk-added 3 new Cats under Mittens and didn’t know what their position values would be, so we let the rebuilder take care of it.

In our RebuildCatTree routine, we actually need to re-number all nodes to the right & above our “bulk-inserted” subtree, just in case we’ve caused things to move.  And since we’ve re-ordered some siblings elsewhere, it turns out to be easiest, in practice, to re-number the whole tree.  This is where our fair-weather friend recursion comes in — and not just another rCTE, but real stored-procedure recursion.  This can get dicey; SQL only supports a certain # of recursion levels, and it can really eat up those CPU cycles & RAM buffers.  So this should be done rarely, and preferably during a time where the tree is not under heavy usage.

The code samples are now available on my GitHub page.  Comments abound!

I hope you’ve enjoyed this little mini-series.  And now, I promise to move on to new topics & rantings of various nature!  Thanks for reading.

~Fin~

Update:

I’d like to point future readers at two very informative articles for those interested in deep-diving down the hierarchical rabbit-hole: Aaron Bertrand, and Jeff Moden.  There are many more tweaks and enhancements that can be made to the “classical” Nested Set model, which those lucky Devs/DBAs who are in a position to actually [re]implement their hierarchies will want to read about and take advantage of.

The Nested Set Model++

This time we talk about adding a Depth field, and good ol’ CrUD ops – Create, Update, Delete.

Since my first post on this topic got a lot of attention and traction, I felt it appropriate to expand on the topic a bit, even if it’s been largely covered by other bloggers in the past.  I’ve also found it very useful to have a “depth” field, which isn’t canonically part of the model (hence the “++” in the title!), but is quite handy not only for display purposes (while you’re querying & testing the thing), but also for making certain “get” ops easier.  Sure, it adds a wee bit more to structural maintenance, but since that’s already the most complicated part of the model anyway, it’s hardly worth a second thought.  So let’s dive in!

The big topic last time was this operation of “move a subtree” — of course, sometimes you’re just moving one node, but only if it’s a leaf; otherwise you’re moving a node and all its descendants, so I’ve kept the procedure name MoveCatSubtree intact.  This time we’ll talk about good ol’ CrUD ops – Create, Update, Delete.  In my implementation, I chose to handle these with table triggers.  Some would argue in favor of stored-procs, and while that would seem “more consistent” with the precedent set, I’d counter with 2 points:

  1. To be really fool-proof, you’ll need to prevent ungoverned inserts/updates/deletes anyway; you could either do this with GRANT/DENY permissions, or triggers.  Permissions would be more complex because you’d still need your users to be able to exec the CrUD procs, so you’d end up using some convoluted security mechanisms that can be tricky to maintain over time.
  2. With triggers, we can allow the consumers of the data (apps, users) to continue to use “plain-ol’-TSQL” to access and manipulate the data, instead of having to remember stored-proc names and hunt for documentation on them.  (The exception being, of course, MoveCatSubtree, which, honestly, could be integrated into the insert trigger, but I’ll leave that as an exercise to the reader!)

Again, yes, we could easily do the same implementations in stored-proc form, and you’re welcome to fork my GitHub repo if you feel like exploring that.

Let’s outline the steps and draw some pictures.

1. Insert: Make a hole!

When we INSERT a node, we want to specify its parent and a name, and let the triggers do the rest!  We place it at the right of its siblings-to-be, and update the position values of all nodes to the right so that everything stays kosher.  This should sound familiar — it’s essentially that “make a gap” part of the subtree-move op.  In terms of depth, we just +1 to the parent’s.
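
Roughly, the arithmetic inside that insert logic looks like this.  It’s a hedged, proc-ish sketch: @ParentID & @NewName stand in for the inserted row’s values, and in the real implementation the trigger does this for you.

DECLARE @ParentID int = 42                -- hypothetical: the new node's parent
      , @NewName  varchar(100) = 'Gadget';

DECLARE @ParentRight int, @ParentDepth int;
SELECT @ParentRight = PRight, @ParentDepth = Depth
FROM nsm.Cat WHERE CatID = @ParentID;

-- make the hole: shift everything at/right of the insertion point over by 2
UPDATE nsm.Cat SET PRight = PRight + 2 WHERE PRight >= @ParentRight;
UPDATE nsm.Cat SET PLeft  = PLeft  + 2 WHERE PLeft  >= @ParentRight;

-- drop the new node into the hole, as the right-most child of its parent
INSERT INTO nsm.Cat (ParentID, Name, PLeft, PRight, Depth)
VALUES (@ParentID, @NewName, @ParentRight, @ParentRight + 1, @ParentDepth + 1);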

nsm-cats-insert-gadget-under-stripes
Stripes is a breeder; Gadget comes in and makes Fluffy & children move over.

Also, for some reason, our cats reproduce asexually…

 

2. Delete: Think of the children!

Similarly, to DELETE a node, we want to “close the gap” left by said deleted node.  But what of the children?  We don’t want to leave any orphans behind!  So we “promote” the children of our deleted node to the level (depth) of their parent, sandwiching them in between the deleted node’s siblings (aka their former aunts/uncles!).  This is easier than it sounds.
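
The gap-closing arithmetic, sketched out (again hedged and proc-ish, with @DeleteID standing in for the row being deleted; the real logic lives in the delete trigger):

DECLARE @DeleteID int = 42;               -- hypothetical: the node being deleted

DECLARE @l int, @r int, @p int;
SELECT @l = PLeft, @r = PRight, @p = ParentID
FROM nsm.Cat WHERE CatID = @DeleteID;

-- re-parent the direct children to their "grandparent"
UPDATE nsm.Cat SET ParentID = @p WHERE ParentID = @DeleteID;

-- remove the node itself
DELETE nsm.Cat WHERE CatID = @DeleteID;

-- promote the former subtree up one level, shrinking it into the hole
UPDATE nsm.Cat
SET PLeft = PLeft - 1, PRight = PRight - 1, Depth = Depth - 1
WHERE PLeft > @l AND PRight < @r;

-- close the remaining 2-position gap for everything to the right & above
UPDATE nsm.Cat SET PLeft  = PLeft  - 2 WHERE PLeft  > @r;
UPDATE nsm.Cat SET PRight = PRight - 2 WHERE PRight > @r;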

nsm-cast-delete-fluffy
He killed Fluffy!

Fluffy is survived by his children, who are now for some reason his siblings, and are very confused by their sudden increase in age & status.

3. Update: Rename; everything else is encapsulated.

Finally, we only allow UPDATEs on the Name, because everything else (position values, depth, parent) is structural, and encapsulated by our tree maintenance logic.  Moving a node or subtree?  MoveCatSubtree.  Swapping positions with another node?  SwapCatNode (TBD!).

4. Depth: Set it once, & encapsulate it!

Depth is pretty simple to add if you’ve already got a tree full of data.  We can use a recursive common table expression, or “rCTE“.  While normally these are frown-worthy (remember, recursion is not SQL’s strong suit), we’re only using it one time to populate an existing data-set, so we can keep on smiling.

;WITH CatTree AS
(
    SELECT CatID, ParentID, Name, PLeft, PRight, Depth = 0
    FROM nsm.Cat
    WHERE ParentID IS NULL
  UNION ALL
    SELECT cat.CatID, cat.ParentID, cat.Name
        , cat.PLeft, cat.PRight, Depth = tree.Depth + 1
    FROM CatTree tree
    JOIN nsm.Cat cat
        ON cat.ParentID = tree.CatID
)
UPDATE cat SET cat.Depth = CatTree.Depth
FROM CatTree
JOIN nsm.Cat cat
    ON cat.CatID = CatTree.CatID

The last order of business (for now) is to add Depth support to our MoveCatSubtree method.  As illustrated below, we have to move the subtree “up” or “down” in Depth depending on its new parent’s position relative to its old position.  The details are, of course, in the GitHub repo, but here’s a quick snippet of what that looks like: NodeNewDepth = /*NodeCurrent*/Depth + (@NewParentDepth - @SubtreeOldDepth) + 1  (where @SubtreeOldDepth is the depth of the top node of the moving subtree.)

nsm-cast-move-jack-to-mittens
Move Jack to under Mittens; I won’t repeat the Left/Right logic, just note the Depth logic.

 

In a future little addendum, I’ll briefly go over the “get” queries and that TBD SwapCatNode method.  For now, enjoy the cats (again)!  Thanks for sticking around, I know it’s been a few more weeks than normal.

PS: A big thank-you to the dudes in the CodingBlocks #blogging Slack channel for their encouragement and motivation to get this done!  You guys rock.  Check out their blogs for some terrific content: http://dotnetcore.gaprogman.com/ , http://www.codeshare.co.uk/ , http://thereactionary.net/ .

Update:

I’d like to point future readers at two very informative articles for those interested in deep-diving down the hierarchical rabbit-hole: Aaron Bertrand, and Jeff Moden.  There are many more tweaks and enhancements that can be made to the “classical” Nested Set model, which those lucky Devs/DBAs who are in a position to actually [re]implement their hierarchies will want to read about and take advantage of.