Today’s post is brought to you by an unplanned AWS RDS outage, the desire to search its SQL Error Logs for events during the target time-frame, and the horrendously inefficient interface which AWS gives you in the GUI console for reading said logs.
- Even the dedicated ‘admin’ user that you create for your instance doesn’t have permission to read the error logs via the SSMS GUI, nor with xp_readerrorlog. You can, however, use sp_readerrorlog. That’s with an ‘S’.
- The parameters here are quite arcane, namely @p1, @p2, @p3, @p4. Unfortunately, none of them lets you filter on a time-span (those are the 5th and 6th parameters of the underlying xp, which we don’t have access to, as noted in the first point).
- My solution involves a temp table, a loop of INSERT … EXEC commands, and then whatever queries you desire to search/filter/analyze the results. I also throw in a conversion to local time (from UTC, which is what the AWS servers use).
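Before building anything fancy, you can confirm the S-version works on RDS with a bare-bones call. A minimal sketch (the search-string value is just illustrative):

```sql
-- @p1 = log number (0 = current), @p2 = 1 for the SQL error log (2 = Agent log),
-- @p3 / @p4 = optional search strings to filter the [Text] column.
EXEC sys.sp_readerrorlog @p1 = 0, @p2 = 1, @p3 = N'error';
```

The same call through xp_readerrorlog is what fails on RDS with a permissions error.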
Details and The Why
You can check out the script; it’s short enough to embed, IMHO, so I’ll just leave it right here.
```sql
DECLARE @CONST_P2 int = 1;      -- Error log, not Agent log.
DECLARE @NumLogFiles int = 30;  -- SET THIS APPROPRIATELY!
-- ^ (there's no way to get it programmatically outside of
-- registry-reads/other-xp's, none of which is doable in RDS, AFAIK.)

-- Example: show events from 7/19/2018 between 2am and 2:30am PDT
DECLARE @StartDate datetime = '2018-07-19T02:00:00'
      , @EndDate   datetime = '2018-07-19T02:30:00';

CREATE TABLE #ErrorLogs ([LogDate] datetime, [ProcessInfo] nvarchar(50), [Text] nvarchar(max));

DECLARE @LogNum int = 0;
WHILE (@LogNum <= @NumLogFiles)
BEGIN
    INSERT #ErrorLogs (LogDate, ProcessInfo, [Text])
        EXEC sys.sp_readerrorlog @p1 = @LogNum, @p2 = @CONST_P2;
    SET @LogNum += 1;
END;

ALTER TABLE #ErrorLogs
    ADD LocalDate datetime;

UPDATE #ErrorLogs
SET LocalDate = CONVERT(datetime, LogDate AT TIME ZONE 'UTC'
                                          AT TIME ZONE 'Pacific Standard Time');

SELECT el.LocalDate, el.ProcessInfo, el.[Text]
FROM #ErrorLogs el
WHERE el.LocalDate >= @StartDate
  AND el.LocalDate < @EndDate
ORDER BY el.LogDate;
```
The double AT TIME ZONE conversion is particularly interesting to me, and it only works with SQL 2016 and up. I got the idea from this StackOverflow answer. You can chain two AT TIME ZONE clauses together to convert a given datetime value from one zone to another. The reason you still need the CONVERT is that the output of AT TIME ZONE is always a datetimeoffset type, which, while quite useful in its own right, has its quirks, and doesn’t serve our purposes for ease-of-readability.
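To see the quirk for yourself, here’s a standalone sketch (separate from the main script) comparing the raw datetimeoffset output against the CONVERTed version:

```sql
DECLARE @utc datetime = '2018-07-19T09:15:00';

SELECT
    -- datetimeoffset, with the trailing offset: 2018-07-19 02:15:00 -07:00
    @utc AT TIME ZONE 'UTC' AT TIME ZONE 'Pacific Standard Time' AS WithOffset,
    -- plain datetime, easier on the eyes: 2018-07-19 02:15:00
    CONVERT(datetime, @utc AT TIME ZONE 'UTC'
                           AT TIME ZONE 'Pacific Standard Time') AS PlainLocal;
```

Note that AT TIME ZONE with ‘Pacific Standard Time’ handles daylight saving for you, which is why the July example lands at -07:00 (PDT) rather than -08:00.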
If you’re not running 2016, at least in RDS, you’ve got nearly no excuse. The upgrade process is vastly simpler with RDS than with traditional on-prem servers. That said, I did run into my share of snags with it recently, which I’ll blog about later.
You should plug in whatever values suit your need & environment, namely the @StartDate and @EndDate. I used 2-2:30am, because… well, that’s always when those damn outages seem to happen, ain’t it?
As I mentioned, “the Why” is basically because AWS RDS limits your permissions (even as an admin) in some key ways, and one of those limitations prevents you from reading the error logs in the more “normal” ways: the SSMS GUI, xp_readerrorlog, etc. And the interface given to read the logs in the AWS console GUI is quite a sad-panda. They offer a wrapper proc, rdsadmin.dbo.rds_read_error_log @index = 0, @type = 1, which really only serves the purpose of naming the parameters more nicely; under the hood it’s doing the exact same thing. Of course we can’t prove that, because it’s encrypted, but the results are identical.
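If you want to eyeball that equivalence yourself, these two calls should return identical rows (same log, same type):

```sql
-- AWS's wrapper: @index = log number, @type = 1 for the error log.
EXEC rdsadmin.dbo.rds_read_error_log @index = 0, @type = 1;

-- The system proc it (apparently) wraps.
EXEC sys.sp_readerrorlog @p1 = 0, @p2 = 1;
```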
So there you have it. Go check out your AWS RDS SQL Error Logs! See what tickles your curiosity. =)