Spying on native functions in Jasmine

Recently I’ve been getting really into Jasmine spies. I’ve come across plenty of people online talking about the basics of how to use spies in Jasmine. They’re great resources and I’m indebted to them, but I don’t think they really do Spies justice. Spies are incredible. I feel like now I’m really starting to get my head around them and wanted to talk about that.

This week I’ve been using Spies to spy on native functions. It’s possible that the reason I’ve never found anyone else talking about doing this is that it’s a bad idea, or so basic that no one else has thought to mention it before. I find it important to never discount the idea that I might be an idiot. However, this wasn’t obvious to me at first and has proven devilishly useful, so here’s what I’ve been up to.

It all started with testing a form submission. In the natural course of events this would take us off the page running the Jasmine tests, or create an endless loop of page refreshes. It occurred to me that there had to be a function behind the form submission hidden somewhere, and if I could spy on that, it’d prevent the form submitting and I could get on with my life. I could even use it to check that the form submission had been called. Sure enough, a bit of poking around revealed that:
spyOn(HTMLFormElement.prototype, 'submit');
Gives you a spy on the native submit function. It makes perfect sense really: of course the HTMLFormElement prototype has a submit function attached to it, and why shouldn’t Jasmine be able to spy on it? And with that one line, the headache of your spec runner refreshing or navigating away on form submissions goes away.
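
For example, a spec might look something like this (the button id and the click that triggers the submit are hypothetical, just to sketch the shape of it):

it('submits the form when the save button is clicked', function(){
    var submitSpy = spyOn(HTMLFormElement.prototype, 'submit');
    $('#saveButton').click();                // or whatever triggers the submit in your code
    expect(submitSpy).toHaveBeenCalled();    // the spy swallows the real submit, so no page refresh
});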

Pleased with this little trick, I later came up against a bit of code that needed testing. It took an uploaded image file, converted it into an Image object and scaled it to fit the user’s screen size. We’ve dabbled in actually uploading images for another image uploader test in the past. It was horrible and involved adding a timeout into the test then hoping it was long enough. We eventually took that out. Keen not to re-introduce it, I figured if I could spy on a form submission then why not spy on the Image object constructor? Sure enough:
spyOn(window, 'Image')
let me spy on that constructor, so I could then get it to return a mock object with just the information needed for the test.
spyOn(window, 'Image').andReturn({width: 100, height: 200});
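
In a spec that might look something like this (ImageScaler and its scale function are hypothetical stand-ins for the real code under test):

it('uses the mocked image dimensions', function(){
    spyOn(window, 'Image').andReturn({width: 100, height: 200});
    var scaled = ImageScaler.scale({});      // the code under test creates an Image internally and gets our mock back
    expect(window.Image).toHaveBeenCalled();
    // further expectations can now rely on the mocked 100x200 dimensions
});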

Alas, I can’t claim to understand the inner workings of JavaScript well enough to predict what object any given function is attached to. I had assumed that the Image constructor would be attached to document, but it turned out it’s on window. There’s probably some easy way to look it up if you know where to look. If I were a better person, that last sentence would have been a link to that resource, but I don’t have it; just assume it exists somewhere.

So anyone who, like me, didn’t find this immediately obvious, I encourage you to go out and give it a try. Then if it creates some sort of horrible mess, come back and let me know why I shouldn’t be doing it. Remember, I may well be an idiot.


Organising IE specific SASS stylesheets

So last week I spoke about using SASS to create IE specific stylesheets. This is a good idea if you’re using SASS (or any CSS pre-processor really) and, like me, you prefer to keep your dirty IE fixes away from proper styling.

At the end of that I said I’d talk about how we’re organising these. Unfortunately, thinking about it, this isn’t too exciting. Premature promises, eh? So here it is in brief:

Every version of IE has its own bundle. There is also a bundle for all IEs. The All IE bundle includes the regular CSS bundle and anything we want applied to all versions. Then, each specific IE version bundle includes the All IE bundle and anything specific to that version.

Fixes are (as far as practical) kept in separate, small files with names describing what they’re for and which versions they should apply to.

This means, for example, that we have a single file for applying a polyfill for gradients which is then imported into the relevant IE version bundles.
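
As a rough sketch in SASS terms (the file and folder names here are made up for illustration):

// all-ie.scss - the regular bundle plus fixes for every version of IE
@import "bundle";
@import "ie/opacity-fallback";

// ie8-bundle.scss - everything IE8 gets
@import "all-ie";
@import "ie/gradient-polyfill";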

That’s about it really. Have a good week.


IE Specific Stylesheets with SASS

Recently I’ve been concerning myself with fixing IE. Well obviously not fixing it, but navigating around its idiosyncrasies. We’re trying to develop the site for modern browsers with fallbacks and fixes appended for old ones (IE) in such a way that when they go obsolete we can whip them out quickly and easily. In styling terms this means we need a way to apply rules to elements in IE only, or even in some cases to specific versions of IE only.

Keeping styling rules separate seems to be somewhat contentious. Paul Irish, for whom I have great respect, prefers the IE specific rules to sit next to the other rules styling an element. His complaint is that it can be hard to identify which stylesheet a rule is coming from if it’s in an external one. I see his point, but I still have a personal preference for keeping them separate, as it means the main CSS is cleaner and I’ve never really had a problem identifying the source stylesheet for anything.

In previous projects we’ve adopted one of two approaches. The first is using JavaScript to detect IE and add a class to the body tag, then including the IE specific rules in with the proper CSS. I dislike this approach because it keeps the IE specific rules in with the main CSS. More significantly, it introduces a hard JavaScript dependency. I’m all for ignoring IE users without JavaScript on the grounds that they’re clearly being willfully awkward, but apparently that’s not allowed.

We’ve also used IE specific stylesheets. This approach uses conditional comments in the HTML. These are ignored by real browsers but picked up by IE, meaning you can serve IE an extra stylesheet full of fixes. I like this because it keeps the IE fixes separate, only punishes IE users with the extra (admittedly tiny) download size, and doesn’t depend on JavaScript. On the down side, however, it adds an extra http request for IE users.

Paul Irish has discussed this. In brief, his solution was to use IE Conditional comments to add classes to the html tag. This bypasses the need for JavaScript but lets you put your styling in amongst your code proper. This is an elegant solution and I like it, but as I’ve said I don’t want my IE styling rules in my main CSS. Instead we’re taking advantage of SASS to do something a little different.

Because SASS’s stylesheet import copies the whole imported sheet in, it does not create extra http requests. This means that if we create a special stylesheet for each version of IE and import our regular stylesheet bundle into it, we can keep IE specific rules in a separate stylesheet for development but have them bundled up for the user, so it still only needs one http request. Then, using the same conditional comments we were previously using to add an extra stylesheet, we import the single bundle for each version of IE. The only change needed is the addition of a not-IE conditional comment to load the regular bundle:

<!--[if !IE]><!-->
<link rel="stylesheet" href="/css/desktop/bundle.css"/>
<!--<![endif]-->

<!--[if IE 7]>
<link rel="stylesheet" href="/css/desktop/ie/ie7-bundle.css"/>
<![endif]-->
<!--[if IE 8]>
<link rel="stylesheet" href="/css/desktop/ie/ie8-bundle.css"/>
<![endif]-->
<!--[if IE 9]>
<link rel="stylesheet" href="/css/desktop/ie/ie9-bundle.css"/>
<![endif]-->
<!--[if IE 10]>
<link rel="stylesheet" href="/css/desktop/ie/ie10-bundle.css"/>
<![endif]-->

Do take note of the extra <!--> in the not-IE conditional comment: it means that the stylesheet is still loaded in non-IE browsers, while the others are ignored as HTML comments.

Now, we’re not actually keeping all the IE fixes in one file per version of IE. But I won’t talk about how we’re doing it here; that’ll be something to talk about next time.


Using Blanket with Jasmine and Require

UPDATE: Alex Seville has pointed out in a comment that there is a much simpler solution to this problem: adding a second script tag for Blanket right after the Require script tag. I can think of no good reason not to do that instead of what I’ve done below. I’ll leave this here in case it’s useful to someone though.

So I’ve talked about Require and Jasmine and even how to get them to work together. I’ve also talked about the magical code coverage tool Blanket. Today I’m going to talk about getting all three of these working well together.

Using Blanket with Jasmine is trivial; that’s part of the point of Blanket. It’s just a case of including it in the page. With Require in the mix it gets a little more complex. I’m going to pick up here from where I left the code getting Require and Jasmine working together, so if you’ve not read that, now might be a good time.

All on the same page? Good. We’re going to be doing everything in specRunner.js. The first thing to do is download blanket-jasmine.

The second thing we need to do is load Blanket in with Require. Blanket is not a Require module and it depends on Jasmine, so it will also need shimming. We’ve already got some shims set up, so just add blanket: {deps: ['jasmine'], exports: 'blanket'} to produce something like this:

shim: {
  jasmine: {
    exports: 'jasmine'
  },
  jasmineHtml: ['jasmine'],
  jasmineHelper: ['jasmine'],      
  jasmineJquery: ['jasmine'],
  blanket: {
      deps: ['jasmine'],
      exports: 'blanket'
  },
  jquery: {
      exports: '$'
  }
}

You will recall that we’ve got a pair of nested require calls: one loading in Jasmine, the other loading in all our specs. Add 'blanket' to the one loading in Jasmine.

Then we’ve got two more changes to make: one before the require call loading in the specs, one after. Before loading in the specs, add this line:

blanket.setFilter(['path/to/your/jsFolder']);

The parameter for the setFilter function tells Blanket what files to pay attention to and which to ignore. Using this lets you avoid being shouted at for not testing your external libraries. It takes an array, so you can define multiple paths but I only needed the one.

After loading in your specs, add this:

jasmineEnv.addReporter(new jasmine.BlanketReporter());

No thought required here, this just turns Blanket on. Job done.
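
To recap where those pieces sit, the nested require calls from last time end up looking roughly like this:

require(requiredModules, function(){                   // requiredModules now includes 'blanket'
    var jasmineEnv = jasmine.getEnv();
    // ...HtmlReporter set up as before...

    blanket.setFilter(['path/to/your/jsFolder']);      // before loading the specs

    require(specs, function(){
        jasmineEnv.addReporter(new jasmine.BlanketReporter());   // after loading the specs
        jasmineEnv.execute();
    });
});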

And with that, we have Blanket working! Blanket does have two down sides, though. One is that a file which is not referenced at any point in the tests will not show up in the results. Since we already had a file linking Jasmine into Gradle, it wasn’t too hard to add a stage which compared the list of files picked up by Blanket to the list of files in our js folders to catch anything completely untested. I’ll talk about this a bit in the future, but I can’t really take credit for or claim to understand most of it.

The other down side to Blanket is that it cannot be run over the file:// protocol in most browsers, because it issues cross domain requests. It is convenient to be able to quickly run our tests on a local device through the file:// protocol, so I didn’t want to lose that.

I actually came up with two solutions, one which required no effort on the developer in question’s part and one which required a bit of set up by whoever wanted to do it.

The first solution was detecting in the specRunner whether we were running through the file:// protocol, and not turning on Blanket if we were. To do this, I created a boolean called onServer and put all of our Blanket modifications inside if statements on this boolean. In the case of adding blanket to the array of required modules, this meant taking it out of the initialization and adding this line:

if (onServer) {
	requiredModules.push('blanket');
}

Then, to detect if we were on a server I used this code:

var phantom = navigator.userAgent.indexOf('Phantom') !== -1,
    http = document.location.protocol === 'http:',
    onServer = phantom || http;

Since we run our tests in Phantom, which uses the file:// protocol without complaining about cross domain calls, I’ve got an extra bit detecting if we’re in Phantom. Now, if the environment can handle it our tests run Blanket, otherwise they run fine without. Lovely.
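
The other Blanket modifications get the same treatment, something like:

if (onServer) {
    blanket.setFilter(['path/to/your/jsFolder']);
}

require(specs, function(){
    if (onServer) {
        jasmineEnv.addReporter(new jasmine.BlanketReporter());
    }
    jasmineEnv.execute();
});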

The other solution was sorting out an easy local server for running the tests on. I can’t take too much credit for this, because it’s Blanket’s own recommendation. Blanket provides a simple node server; download that and put it somewhere that it doesn’t need to go up a level in the file structure to reach your Jasmine tests.

If you don’t have node installed then what are you doing? Install node (and Grunt while you’re at it).

In the command line, navigate to the folder you’re keeping this test server in and type npm install express. This installs the one dependency the server needs. Still in the command line, type node testserver.js, which will start your server running. Finally, in a browser navigate to localhost:3000/path/to/your/specRunner.html

Tah-dah. Jasmine-Require-Blanket.


Reporting test coverage with Blanket.js

Way back in my list of things we’re planning to use I said we were going to use Saga for our JavaScript test coverage tool. But since that decision was made, Blanket has updated to support Jasmine, and Blanket was always more attractive to me. So instead of Saga, here’s some chat about Blanket.

What is Blanket.js?

Blanket.js is a code coverage tool. It involves almost no effort to set up and provides detailed reports of the line coverage for every file tested.

Why use Blanket.js?

Because an indication of how well tested your code is can be valuable. The problem with code coverage in general is that all it is able to check is whether or not a line has been run by your test suite. This means that while it can fairly reliably tell you if something is not being tested, it can’t with any certainty say that something is being tested. As long as you remember this, though, it’s handy to have around. Who wouldn’t want a warning that they’ve missed something in their tests?

Blanket is particularly good because it is easy to set up and use, and offers custom reporters to allow its output to be adapted to your needs. Not that we’ve used that.

On the down side, Blanket provides no feedback at all on a file that is never loaded by the tests. This means that it takes a bit of wrangling to get warnings about files with 0% test coverage. I was only really tangentially involved in this but I’ll try to cover it in the future.

We also needed to engage in a bit of code wrangling to get it to play nicely with Require. The final problem it presented was that it cannot be run through the file:// protocol, because of those cross domain requests. Next time, details on these wranglings.


Testing require Main files with Jasmine

So last week I spoke about testing Require modules with Jasmine. The problem with this approach is that it can’t be used on the main files. This is for a couple of reasons.

Main files do not define themselves as modules and as such aren’t really equipped to be loaded in through a require call like the rest of our modules. Also, the main files are where we bind everything to on page load. This means that to test these modules we need to simulate a page load event.

A couple of weeks ago we resigned ourselves to not testing these because the functionality is pretty basic; all there is to test, in ours at least, is that the relevant script has been triggered. But then I found myself with an afternoon to spare and used it to come up with this. It’s imperfect and dirty, but it’s better than doing nothing.

At the top of a main test, in place of the require call most modules get, we need to mock any scripts being loaded into the main file. For most scripts this is just a case of creating an object with a function inside it for each thing the main file calls. It doesn’t matter what these functions do; we’re not testing them here. This is just so there is something to be called. Like this:
var FirstPageScript = {init: function(){}}

The exception to this is whatever is being used for the domReady event. We’re using the require domReady plugin for mobile and jQuery for desktop. We need to replace whichever is being used with something that instead binds what we want happening on page load to a triggerable event. For jQuery:

var fakeJQuery = function(func){
	bodyTag.bind('domReady', function(){
	    func();
	});
}

You’ll note that this only mocks out the $() shorthand for $(document).ready() and no other jQuery functions. This might be a problem if we used jQuery for anything other than domReady in the main file, but we don’t. If we did, it would be necessary to mock out other functions, but main files should be lightweight anyway.

This is almost the same but less problematic with the domReady plugin. It only does one thing, so just mocking that thing does the business.

var fakeDomReady = function(func){
    bodyTag.bind('domReady', function(){
        func();
    });
}

Let’s take a quick look at what this is doing. When this mock domReady function is called, it binds the function that was passed to it to a 'domReady' event on the body tag. It’s irrelevant what this event is called or even what it’s bound to. I used the body tag because I already had it in a variable, and it could just as well be called 'ponyAttack'. All it is is an event we can trigger at will. So in this case, bodyTag.trigger('domReady'); will fire whatever is supposed to happen at domReady.

So we’ve now got everything we need mocked; we just need to insert these mocks into the main file in place of the real things. The answer is spies. With Jasmine, the answer is always spies. Spies are awesome.

By spying on the require function and having the spy call the function that was passed into require, with our mocks in place of the intended modules, we can do exactly that. Does that make sense? No? Ok. Let’s look at some code.

requireSpy = spyOn(window, 'require').andCallFake(function(){
    requireSpy.mostRecentCall.args[1](fakeDomReady, FirstPageScript, SecondPageScript, ThirdPageScript);
});

requireSpy.mostRecentCall.args is an array containing the arguments passed into the most recent call to the spied on function. The first argument is the array of required modules, the second is the function being triggered. Therefore requireSpy.mostRecentCall.args[1] refers to the function that the require call was going to trigger. We’re then passing into this call all of our mocks, in the same order that they are expected in the main file. Now, when require is called, we will instead trigger the main file’s function but using mocks.
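
For reference, the require call in the main file being tested looks something like this (simplified, with illustrative module names), which is why args[0] is the array of module names and args[1] is the callback we want to fire:

require(['domReady', 'firstPageScript', 'secondPageScript', 'thirdPageScript'],
    function(domReady, FirstPageScript, SecondPageScript, ThirdPageScript){
        domReady(function(){
            FirstPageScript.init();
            // ...work out which page we're on and initialize the right script...
        });
    });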

Finally we need to trigger the require call. We can’t load the main file in alongside the rest of our scripts. Apart from anything else, we can’t load it until after our requireSpy has been created or it won’t work. That means we need to load the file here.

I am doing this by appending and removing a script tag pointing at the main file. I see no reason why you couldn’t use a jQuery load, but I didn’t.

$('head').append('<script id="mobileMainScriptTag" src="path/to/your/mobileMain.js"></script>');

Then for neatness’ sake, I added an afterEach for removing this tag:

afterEach(function(){
   $('#mobileMainScriptTag').remove();
});

So, putting all this together we get the full set up for a main test:

describe('mobileMain.js', function(){
	var FirstPageScript = {init: function(){}},
	    SecondPageScript = {init: function(){}},
	    ThirdPageScript = {init: function(){}},
	    bodyTag = $('body');
	beforeEach(function(){
	    var fakeDomReady = function(func){
	            bodyTag.bind('domReady', function(){
	                func();
	            });
	        },
	        Browser = {getByTag: function(){ return bodyTag; }},
	        requireSpy = spyOn(window, 'require').andCallFake(function(){
	            requireSpy.mostRecentCall.args[1](fakeDomReady, FirstPageScript, SecondPageScript, ThirdPageScript);
	        });
	    $('head').append('<script id="mobileMainScriptTag" src="path/to/your/mobileMain.js"></script>');
	});

	afterEach(function(){
	   $('#mobileMainScriptTag').remove();
	});

This is a relatively complex main file because it’s filling the role of Backbone’s concept of a router; the others mostly just have jQuery and a single script mocked.

But the set up alone isn’t going to test anything, let’s quickly write some tests.

it('initializes the script', function(){
	spyOn(FirstPageScript, 'init');
	bodyTag.trigger('domReady');
	expect(FirstPageScript.init).toHaveBeenCalled();
});

That’s pretty much it. Because the example I’ve been using has a bit more going on, what with detecting what page we’re on, it got a helper function for doing that set up. But it’s not all that interesting and is pretty specific to our needs, so I’ll omit it at least for now.

So, that’s a dirty hack to test your main files. The approach has some down sides. We’re not actually testing that the correct script is being called; we’re testing that whatever object is passed into the require callback in that position is called. It’s a pretty small risk though, and I can’t think of a way to check that without straying away from unit testing and into integration testing.

It is also impossible to run the test using the file:// protocol, which is annoying, but I’m increasingly finding that the convenience of being able to run the tests that easily is outweighed by what we can do by running them on a local server.

The most unfortunate thing about this, though, is that loading in the files like this means they are not picked up by blanket’s coverage report. This is a shame because it was Blanket that originally prompted me to start trying again to get these files tested, but hey-ho.

Speaking of Blanket, I’ve not spoken about Blanket before. It’s pretty incredible. I’m pretty sure it’s the work of at least one wizard. I’ll talk about it next time.


Using Jasmine with Require

Last week I spoke about the basic use of Jasmine, a JavaScript testing framework. The problem presented to us by using Jasmine was that it is not currently designed to work with RequireJs. This meant we had to find a way to get them to play nicely together.

Most of the work for this I stole from Ben Nadel and Uzi Kilon but I did find I needed and wanted to make some minor changes to the code to make it work as I liked.

The fundamental problem is that Require needs you to load in any used files through a call to the require function, and needs all the code to be kicked off by the main file. Jasmine, on the other hand, expects all the files, specs and source, to be loaded in with script tags. These two approaches are not one and the same.

To get around this, first I needed to replace the main file from our actual code with some code for setting up and triggering the Jasmine tests. This spec runner takes over from the inline code in the html file and is therefore responsible for setting up any config, loading in the core libraries needed throughout the tests, loading in the tests and launching Jasmine.

Setting up config

Let’s look at this in order then. The first thing we do is load in the Require config from our real code. Because our tests are kept a few folders away from our production code, this isn’t massively pretty, but otherwise it’s the same as our regular main files:
require(['../../../main/web/js/mobile/mobileRequireConfig.js'], function(){

Then I needed to set up variables to link back to the files in the Jasmine folder since we are setting the base url to be inside the source folder. They’re a bit dirty, but they work:

    var jasmineFolder = "../../../test/javascript/jasmine/",
        specsFolder = jasmineFolder + 'mobileSpecs/';

When appended to the front of urls for files they guide us back to the jasmine folders. This means that all of the dirty business of keeping tests separate from source is dealt with here rather than impacting the production code.

The regular config isn’t quite perfect for us though, because we need to replace some settings and add a few new ones. We do this quite simply with a call to require.config:

  require.config({
    baseUrl: "../../../main/web/js/",
    urlArgs: "cb="+Math.random(),
    paths: {
        jasmine: jasmineFolder+'lib/jasmine-1.2.0/jasmine',
        jasmineHtml: jasmineFolder+'lib/jasmine-1.2.0/jasmine-html',
        jasmineHelper: jasmineFolder+'lib/jasmine-1.2.0/jasmine-helper',
        jasmineJquery: jasmineFolder+'lib/jasmine-jquery',
        jquery: 'lib/jquery'
    },
    shim: {
      jasmine: {
        exports: 'jasmine'
      },
      jasmineHtml: ['jasmine'],
      jasmineHelper: ['jasmine'],      
      jasmineJquery: ['jasmine'],
      jquery: {
          exports: '$'
      }
    }
  });

Overwriting options like baseUrl is as easy as resetting them; because the loaded-in config is processed first, the config set here overrides it. However, we also need to set up some new options. We need to overwrite the baseUrl because it is set up relative to the html file, which is a long way from here. This is the last bit of dirtiness needed to get this all working beautifully.

urlArgs isn’t important, but by adding a random number to the URLs we prevent files from being cached, which is just a convenience. We’re also adding all the components of Jasmine to the paths and shimming them so that they can be loaded with Require.

Loading Jasmine files

Loading in the core libraries and the specs requires a nested pair of require calls. The first layer loads in the core libraries and initializes Jasmine. The second layer loads in the specs and triggers Jasmine’s execute function. Let’s take a look:

var requiredModules = [
     'jquery',
     'jasmine',
     'jasmineHtml',
     'jasmineHelper',
     'jasmineJquery'
 ];

require(requiredModules, function(){
	var jasmineEnv = jasmine.getEnv();
	jasmineEnv.updateInterval = 1000;

	var htmlReporter = new jasmine.HtmlReporter();

	jasmineEnv.addReporter(htmlReporter);

	jasmineEnv.specFilter = function(spec) {
	  return htmlReporter.specFilter(spec);
	};
	var specs = [
	    specsFolder+'myFile',
	    specsFolder+'mySpec',
	    specsFolder+'yourSpec'
	];

	require(specs, function(){
		jasmineEnv.execute();
	});

});

I have set up the list of required modules in both cases before the require call because it makes it easier to add or remove modules depending on whether you’re running the tests on a server or not, which will become significant when I talk about Blanket.js. It’s not necessary; you could put those arrays straight into the require calls if you like.

Setting up a Spec

With the main file replaced, I then needed each individual test suite to have access to the modules it was testing. This involves putting a beforeEach at the top of every spec file, requiring in the files the test needs:

describe('myfile.js', function(){
	var Utils;
	beforeEach(function(){
		var flag = false;
		require(['Utils'], function(_Utils){
			Utils =  _Utils;
			flag = true;
		});

		waitsFor(function(){
			return flag;
		});

	});

...some specs...
});

Let’s pick this apart a bit. The whole test is inside a describe call. At the top of this, any variables to be defined by the require call are declared so that all the specs have the necessary scope.

Then, inside a beforeEach, we create a variable equal to false which is only set to true inside the require callback. This means that it will only become true once all the required modules are loaded and their variables initialized.

The waitsFor function means that this beforeEach is not considered complete until that flag becomes true, so no test will run until the modules have been loaded.

Finally, loading in the modules is done with a regular require call. You may notice that the variable being brought into the require function is _Utils when we intend to be referring to it throughout the suite as Utils. This is to avoid naming conflicts between the variable local to the require call and the variable for the whole suite.

Just to finish off, let’s have a quick look at the html used to launch all this off. All it does is load in the Jasmine CSS and favicon, then load Require with the specRunner JavaScript file we just set up as its main file. This is all inside the head of the file, with a completely empty body tag.

<title>Mobile Spec Runner</title>

<link rel="shortcut icon" type="image/png" href="libs/jasmine-1.2.0/jasmine_favicon.png">
<link rel="stylesheet" type="text/css" href="lib/jasmine-1.2.0/jasmine.css">
<script type="text/javascript" src="../../../main/web/js/lib/require.js" data-main="mobileSpecRunner"></script>

 

Et voila, a require-friendly suite of Jasmine tests. We’ve divided our specs into three spec runners: mobile, desktop, and shared. This is to accommodate the fact that we have a separate requireConfig for desktop and mobile. You could do more or less as you like really.

Those of you paying altogether too much attention may have noticed that this means our main files are going untested. I couldn’t find a neat way around that problem, but I have found a way. Of sorts. Which I’ll talk about next time.


Actually using Jasmine

Jasmine, which I’ve already discussed, is a JavaScript testing framework designed to run in a browser. Jasmine itself takes the form of two .js files and a .css file. These are loaded into an .html file and kicked off with a short in-line script. This will then run any tests, called specs, which have been loaded into that html file. With all this set up, you just need to open the file in a browser.

There are versions of Jasmine designed to integrate with specific development environments; I’ve no idea how they work, so all this chat will be about the standalone version. And as ever, this is probably not the best place to come for your tutorial. The Jasmine website is very good, as is Andrew Burgess’ tutorial.

Nuts and Bolts

Let’s take a look at an example. Fortunately, there’s one bundled in with Jasmine anyway. Head to the Jasmine standalone download page and get a copy to take a look. I’m looking at version 1.3.1, but I’d be surprised if there are any major differences if you come to this from the future and there are later incarnations.

In the zip file there is a folder called lib. This contains Jasmine itself; you shouldn’t need to mess with these files. In setting up your own project you’re free to move this folder around as long as you keep the filepaths right in the links, but apart from that let’s ignore it for now.

There’s also a folder called src. This contains the source files that are being tested in the example. You can ignore this. It’s got nothing to do with Jasmine, it’s just something to test.

The third and final folder is spec. In here are two files: PlayerSpec, which contains some tests, and SpecHelper, which contains a custom matcher. More on what that is later. Open up PlayerSpec.

The two key functions in Jasmine are describe and it. These work together to create sentences describing the behaviour each individual spec is testing. The describe function provides a noun for your sentence, defining what object you are testing. Within a describe will be a collection of its declaring what you want your noun to do. Looking at PlayerSpec, it opens with:

describe("Player", function() {

The first parameter of this function is that noun, the second is a function which contains your its. The first it is:

it("should be able to play a Song", function() {

Which again, has a String as its first parameter which is the behaviour expected and a function as the second one which is the actual test. This comes out in the results as “Player should be able to play a Song”.

You may have also noted that the example contains a describe nested within a describe. This is valid and good, allowing you to group together similar specs and let them share variables or set up and tear down functions.

Jasmine’s set up function is beforeEach, which takes as its only parameter a function to run before every spec in its containing describe. Its tear down counterpart is afterEach, which also takes a single function as a parameter but runs it after every spec.
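
The bundled PlayerSpec uses beforeEach to give every spec a fresh player and song, roughly like this:

describe("Player", function() {
  var player, song;

  beforeEach(function() {
    player = new Player();   // runs before every spec in this describe
    song = new Song();
  });

  it("should be able to play a Song", function() {
    player.play(song);
    expect(player.currentlyPlayingSong).toEqual(song);
  });
});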

The final function I want to talk about now is expect. This takes a parameter of some object that you want to test and is followed by a matcher like this:

expect(player.isPlaying).toBeTruthy();

Here we are expecting the isPlaying property of the player object to be truthy. There are a variety of matchers available in Jasmine, but I’m not going through them all here. The Jasmine website provides a list.

As well as using the provided matchers you may want to write your own custom matchers. To do this, create an external file with a beforeEach function in it. Inside this, call this.addMatchers, passing in an object containing a function named after the matcher you want to use. Within that function, this.actual is the object you put into expect; compare it against whatever you like and return a boolean to say whether it matched. You can also define what message is output when the spec fails, but I’ve already gone into far too much detail on this relatively minor feature, so I’ll leave it there.
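
The SpecHelper bundled with the download is a good template; it boils down to something like this (Jasmine 1.x style):

beforeEach(function() {
  this.addMatchers({
    toBePlaying: function(expectedSong) {
      var player = this.actual;   // this.actual is whatever was passed into expect()
      return player.currentlyPlayingSong === expectedSong && player.isPlaying;
    }
  });
});

A spec can then say expect(player).toBePlaying(song).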

That’s all I really want to cover here. Jasmine is capable of much more, with a powerful mocking framework provided in the form of Spies. But they can wait until another time.

Let’s finish up by leaving the spec folder and looking at SpecRunner.html in the base Jasmine folder. For the most part, you can ignore this. In the example it contains four sections of script tags. The first loads in Jasmine itself, the second loads in the source files being tested, the third loads in the specs and the fourth starts Jasmine off. As long as you keep your list of Source and Specs up to date, the rest isn’t of much interest.

Contradicting myself almost immediately, next time I’ll cover ripping apart that html file to use Jasmine with RequireJs.


Testing with Jasmine

Historically we’ve been sloppy with our JavaScript unit testing. Jasmine has been part of our lives for a while, but we’ve not followed a Test Driven (and certainly not Behaviour Driven) approach to writing JavaScript.

This changes with this project. The green-field nature of the new code base has removed a lot of the resistance to this (“There’s no point writing tests for this, when everything else is untested”). The introduction of Require has also made it easier to start doing things properly. With Require it’s very easy to load the distinct modules into tests, and it’s very clear how to start accessing their functions in a test.

What is Jasmine for?

Jasmine is a JavaScript unit test framework. It encourages Behaviour Driven Development. My understanding of BDD is that it’s basically Test Driven Development but the tests are defined by coherent human-readable descriptions of the behaviour they are testing. Jasmine runs these tests in an actual browser.

Jasmine is not the only one of its kind, but it is a good one and it is the one we’ve gone with. QUnit is also popular. In our case Jasmine’s main contender was RhinoUnit, the most significant difference between the two being that Jasmine runs in a browser while Rhino is Ant based and does not require one.

Why use Jasmine?

The only unique attribute that led us to it was that we’ve used it before. However, it has other selling points:

  • Is one of the popular, well-regarded frameworks
  • Easy syntax
  • Runs in browser
  • Has a Grunt plugin

The first two points are pretty straightforward. The third is perhaps slightly contentious. RhinoUnit was the main alternative for our JS testing framework specifically because it doesn’t run in a browser. Not running in a browser is a good thing because it is much easier to incorporate into our Java build process, and it also means you aren’t plagued by browser differences. The down side is that your tests no longer throw up errors caused by browser differences either.

The other nail in the coffin for Rhino as far as we’re concerned is that we’re using Grunt to automate a lot of our work anyway, and with Grunt already part of our work flow adding Jasmine to it is barely a job. This means we can easily automate running it. Our automated tests currently only run in PhantomJS, but we can open the tests manually in any browser we want to see how they run. Our Grunt plugin claims it can run them in multiple browsers too; we just haven’t tried yet.

We’re also using Jasmine-jquery. This is a Jasmine plugin which provides some extra functionality at the expense of introducing a dependency on jQuery. We have no problem with this because we’d rather write our tests using jQuery to save time anyway, and our JS will either never reference the mighty $ (mobile) or have a dependency on it anyway (desktop), so we shouldn’t be introducing any problems with this (famous last words?).

Next time, using Jasmine


Headless Browsing with PhantomJS

We’re using PhantomJS because it is Grunt’s preferred browser for anything automated that needs one. Like running JS Tests.

What is PhantomJS?

PhantomJS is a headless webkit browser. What does that mean? Well, it’s a webkit browser (like Chrome and Safari) that doesn’t have a visible interface.

Why use PhantomJS?

Because it gives you most of the benefits of running tests in a browser while being much faster and less intrusive. I say most because it is not designed to emulate the quirks of a particular mainstream browser; it is its own thing. However, because it is webkit it has a lot in common with others in that family.

Currently our main build process does not include Grunt, so we need to be able to run our JavaScript tests through Gradle as well. Originally our plan was to use JS Test Driver, but we found that it does not sit well with RequireJS. There are apparently workarounds to fix JS Test Driver, but we opted instead to adopt Phantom as our test runner. This does mean our automated tests are not running in other browsers, but that should be temporary until Grunt is up and running anyway.

Unfortunately this JS Test Driver vs Phantom debate was something I had very little to do with, so I have no real thoughts on that.

That’s all I really have to say on the matter, if I had much direct interaction with it I’d probably like it less. Its real beauty is in how little I have to do because of it.
