"Extraordinary claims require extraordinary evidence" - Carl Sagan
One of the advantages (as I see it) of the co-opting of ELMAH error logging into my automated tests is that errors end up where they should: in ELMAH. As in, in the "ELMAH_Error" table in your database, or whatever back-end storage option you choose. Because you are using ELMAH, right? (At this stage it probably looks as if I'm keyword spamming for "ELMAH". Oops, said ELMAH again.) Some might see this as noisy, an adulteration of the logging stream for your application's unhandled exceptions, but if I'm going to use my database to coordinate my automated test sessions, then the ELMAH table is the most natural place to put it. And if you don't want automated test cruft there, you can easily identify it and filter it out later. Or at the end of the test. Or hire a cleaner if you're too busy.
Automated test and JS errors go to ELMAH to die. Might wanna fix up those Google+ errors!
Pass the parcel

So how does the error get in there in the first place? Well, as I describe in part 1, the session id gets generated during the test, slapped on the end of the URL of the Frankenpage:
The Services controller's Test() method sets ViewBag.SessionId from the model-bound sessionId querystring value. The Razor engine then generates a meta tag:
<meta name="sessionId" content="@ViewBag.SessionId">
which the jQuery can get:
var sessionId = $("meta[name=sessionId]").attr("content");
Of course, there are a few different ways you could do all this. Maybe a cookie. But I think this is the easiest.
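The round trip above can be sketched end-to-end in plain JavaScript (the real pipeline is C# model binding plus Razor, so the function names here are purely illustrative):

```javascript
// Sketch of the session-id hand-off. sessionIdFromUrl stands in for what
// the MVC model binder does; metaTagFor stands in for the Razor view.
// The URL below is an example, not the post's actual route.

function sessionIdFromUrl(url) {
  // Pull sessionId off the querystring, as the model binder would.
  return new URL(url).searchParams.get("sessionId");
}

function metaTagFor(sessionId) {
  // Emit the meta tag the Razor view puts in the page <head>.
  return '<meta name="sessionId" content="' + sessionId + '">';
}

const id = sessionIdFromUrl("http://localhost/Services/Test?sessionId=abc123");
console.log(metaTagFor(id));

// Back on the page, jQuery reads it out again:
//   var sessionId = $("meta[name=sessionId]").attr("content");
```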
The cons

A couple of times on outings with Tina, my wife, I've said something like "You go to that shop, I'll go to this one, and I'll text you if there's a change of plan, otherwise I'll give you a missed call, and we'll meet at such-and-such a place, and if that's closed, I'll text you where to meet." Too many moving parts. And we've come unstuck one or two times because of it. Phones and batteries don't always work. Same for (my) short-term memory.
This testing technique has, as it stands, too many moving parts. It relies on a few things working: that the JS file is correct, that the controller and Razor between them are serving up the test page with the meta tag containing the session id, and so on. To that end, you could assert that the page is being served in the first place by adding an HttpClient check that all is OK, that is, that the HTTP response code is 200, because if it doesn't respond with 200, I don't have a page.
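In the C# test that pre-check would use HttpClient; the same idea is sketched here in JavaScript, with the fetching function injected so the logic can be exercised without a live server (the function name and URL are assumptions, not the post's actual code):

```javascript
// Fail fast if the Frankenpage isn't being served at all.
// fetchFn defaults to the built-in fetch (Node 18+ / browsers), but can be
// swapped out for testing.

async function pageIsServing(url, fetchFn = fetch) {
  try {
    const res = await fetchFn(url);
    return res.status === 200; // anything other than 200: no page
  } catch {
    return false; // connection refused etc. also means no page
  }
}

// e.g. await pageIsServing("http://localhost/Services/Test?sessionId=abc123")
```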
So, the null hypothesis (denoted H0 by statisticians, so now you know) is that the test will fail. Or rather, will stay failed, since I've changed the test to log an error as part of its setup, and the rest of the test must clear that error. This is because it is the simplest option: there are many ways for this flaky arrangement of test class/rigged-up HTML and Razor/JS/Web API and ELMAH to fail, and only one way for it to succeed. So I engineer the initial conditions to create an error, then challenge the rest of the test harness - the JS, HTML, ELMAH etc - to prove me wrong.
The JS file simply runs the tests: it checks that the divs have loaded content from the public APIs they're supposed to, and then, if all is well, deletes the original error token.
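That decision (all divs loaded, therefore delete the token) can be sketched as a small pure function, with the jQuery wiring shown in comments. The selector and the endpoint are illustrative, not the post's actual names:

```javascript
// Pure check: every watched div must have non-empty content before the
// error token is cleared.

function allDivsLoaded(divContents) {
  return divContents.length > 0 &&
    divContents.every(c => typeof c === "string" && c.trim().length > 0);
}

// On the page, this would be driven by jQuery, along the lines of:
//   var contents = $(".api-div").map(function () { return $(this).html(); }).get();
//   if (allDivsLoaded(contents)) {
//     $.ajax({ url: "/api/errors/" + sessionId, type: "DELETE" });
//   }
// If any div is still empty, the token stays put and the test stays failed.
```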
Note the nice bddify report at the bottom of the test session window