Tag Archives: UX

Organising IE specific SASS stylesheets

So last week I spoke about using SASS to create IE specific stylesheets. This is a good idea if you’re using SASS (or any CSS pre-processor, really) and, like me, you prefer to keep your dirty IE fixes away from your proper styling.

At the end of that I said I’d talk about how we’re organising these. Unfortunately, thinking about it, this isn’t too exciting. Premature promises, eh? So here it is in brief:

Every version of IE has its own bundle. There is also a bundle for all IEs. The All IE bundle includes the regular CSS bundle and anything we want applied to all versions. Then, each specific IE version bundle includes the All IE bundle and anything specific to that version.

Fixes are (as far as practical) kept in separate, small files with names describing what they’re for and which versions they should apply to.

This means, for example, that we have a single file for applying a polyfill for gradients which is then imported into the relevant IE version bundles.
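To make that concrete, here is a minimal sketch of the import hierarchy in SCSS (the file and folder names are illustrative, not our actual ones):

```scss
// all-ie-bundle.scss: the regular bundle plus fixes for every version of IE
@import "bundle";
@import "fixes/all-ie";

// ie8-bundle.scss: everything in the All IE bundle plus IE8-specific fixes,
// each fix in its own small, descriptively named file
@import "all-ie-bundle";
@import "fixes/gradient-polyfill";
@import "fixes/ie8-layout";
```

Because SASS inlines each @import, compiling ie8-bundle.scss produces one flat CSS file, so IE8 still only downloads a single stylesheet.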

That’s about it really. Have a good week.


IE Specific Stylesheets with SASS

Recently I’ve been concerning myself with fixing IE. Well obviously not fixing it, but navigating around its idiosyncrasies. We’re trying to develop the site for modern browsers with fallbacks and fixes appended for old ones (IE) in such a way that when they go obsolete we can whip them out quickly and easily. In styling terms this means we need a way to apply rules to elements in IE only, or even in some cases to specific versions of IE only.

Keeping styling rules separate seems to be somewhat contentious. Paul Irish, for whom I have great respect, prefers the IE specific rules to be next to the other rules styling an element. His complaint is that it can be hard to identify which stylesheet a rule is coming from if it’s in an external one. I see his point, but still have a personal preference for keeping them separate, as it means the main CSS is cleaner and I’ve never really had a problem with identifying the source stylesheet for anything.

In previous projects we’ve adopted one of two approaches. We’ve used JavaScript to detect if it’s IE and add a class to the body tag and then included these IE specific rules with the proper CSS. I dislike this approach because it keeps the IE specific rules in with the main CSS. More significantly, it introduces a hard JavaScript dependency. I’m all for ignoring IE users without JavaScript on the grounds that they’re clearly being willfully awkward, but apparently that’s not allowed.

We’ve also used IE specific stylesheets. This approach uses conditional comments in the HTML. These are ignored by real browsers but picked up by IE, meaning you can add an extra stylesheet to IE full of fixes. I like this because it keeps the IE fixes separate, only punishes IE users with the extra (admittedly tiny) download size, and doesn’t depend on JavaScript. On the down side, however, it adds an extra HTTP request for IE users.

Paul Irish has discussed this. In brief, his solution was to use IE Conditional comments to add classes to the html tag. This bypasses the need for JavaScript but lets you put your styling in amongst your code proper. This is an elegant solution and I like it, but as I’ve said I don’t want my IE styling rules in my main CSS. Instead we’re taking advantage of SASS to do something a little different.

Because SASS’ stylesheet import copies the whole imported sheet into place, it does not create extra HTTP requests. This means that if we create a special stylesheet for each version of IE and import our regular stylesheet bundle into it, we can keep IE specific rules in separate stylesheets for development but bundled up for the user, so each browser still only needs one HTTP request. Then, using the same conditional comments we were using to import an extra stylesheet, we can import the single bundle for each version of IE. The only change needed is the addition of a “not IE” condition around the regular bundle:

<!--[if !IE]><!-->
<link rel="stylesheet" href="/css/desktop/bundle.css"/>
<!--<![endif]-->

<!--[if IE 7]>
<link rel="stylesheet" href="/css/desktop/ie/ie7-bundle.css"/>
<![endif]-->
<!--[if IE 8]>
<link rel="stylesheet" href="/css/desktop/ie/ie8-bundle.css"/>
<![endif]-->
<!--[if IE 9]>
<link rel="stylesheet" href="/css/desktop/ie/ie9-bundle.css"/>
<![endif]-->
<!--[if IE 10]>
<link rel="stylesheet" href="/css/desktop/ie/ie10-bundle.css"/>
<![endif]-->

Do take note of the extra <!--> in the not-IE condition: it means that the stylesheet is still loaded in non-IE browsers, while the others are ignored as plain HTML comments.

Now we’re not actually keeping all the IE fixes in a file each for every version of IE. But I won’t talk about how we’re doing it now, that’ll be something to talk about next time.


Using Blanket with Jasmine and Require

UPDATE: So Alex Seville has pointed out in a comment that there is a much simpler solution to this problem – Adding a second script tag for blanket right after the require script tag. I can think of no good reason not to do this instead of what I’ve done below. I’ll leave it here in case it’s useful to someone though.

So I’ve talked about Require and Jasmine, and even how to get them to work together. I’ve also talked about the magical code coverage tool Blanket. Today I’m going to talk about getting all three of these working well together.

Using Blanket with Jasmine is trivial; that’s part of the point of Blanket. It’s just a case of including it in the page. With Require in the mix it gets a little more complex. I’m going to pick up here from where I left the code getting Require and Jasmine working together, so if you’ve not read that, now would be a good time.

All on the same page? Good. So, we’re going to be doing everything in specRunner.js. The first thing to do is to download blanket-jasmine.

The second thing we need to do is load Blanket in with require. Blanket is not a require module and it depends on Jasmine, so it will also need shimming. We’ve already got some Shims set up, so just add blanket: {deps: ['jasmine'], exports: 'blanket'} to it to produce something like this

shim: {
  jasmine: {
    exports: 'jasmine'
  },
  jasmineHtml: ['jasmine'],
  jasmineHelper: ['jasmine'],
  jasmineJquery: ['jasmine'],
  blanket: {
    deps: ['jasmine'],
    exports: 'blanket'
  },
  jquery: {
    exports: '$'
  }
}

You will recall that we’ve got a pair of nested require calls, one loading in Jasmine the other loading in all our specs. Add blanket to the one loading in jasmine.
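As a sketch of the shape this takes (the spec paths are placeholders, and the stub of require only exists so the snippet stands alone; in the real specRunner, RequireJS provides it):

```javascript
// Stand-in for RequireJS's require() so this snippet runs on its own;
// it just counts calls and invokes the callback immediately.
function require(deps, callback) {
  require.calls = (require.calls || 0) + 1;
  callback();
}

// Outer call: load Jasmine and friends -- 'blanket' is the new addition.
require(['jasmine', 'jasmineHtml', 'jasmineHelper', 'blanket'], function () {
  // Inner call: load the specs once Jasmine (and now Blanket) are in place.
  require(['spec/firstSpec', 'spec/secondSpec'], function () {
    // boot the Jasmine env here
  });
});
```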

Then we’ve got two more changes to make: one before the require call that loads in the specs, and one after. Before loading in the specs, add this line (the path is a placeholder for wherever your own code lives):

blanket.setFilter(['path/to/your/js/']);

The parameter for the setFilter function tells Blanket which files to pay attention to and which to ignore. Using this lets you avoid being shouted at for not testing your external libraries. It takes an array, so you can define multiple paths, but I only needed the one.

After loading in your specs add this

jasmineEnv.addReporter(new jasmine.BlanketReporter());

No thought required here, this just turns Blanket on. Job done.

And with that, we have Blanket working! Blanket does have two down sides, though. One is that a file which is not referenced at any point in the tests will not show up in the results. Since we already had a file linking Jasmine into Gradle, it wasn’t too hard to add a stage which compared the list of files picked up by Blanket to the list of files in our js folders to catch anything completely untested. I’ll talk about this a bit in the future, but I can’t really take credit for or claim to understand most of it.

The other down side to Blanket is it cannot be run with the file:// protocol in most browsers because it issues cross domain requests. It is convenient to be able to quickly run our tests on a local device through the file:// protocol so I preferred the notion of not losing this.

I actually came up with two solutions, one which required no effort on the developer in question’s part and one which required a bit of set up by whoever wanted to do it.

The first solution was detecting if we were running through the file:// protocol in the specRunner, and not turning on blanket if we are. To do this, I created a boolean called onServer and put all of our Blanket modifications inside if statements on this boolean. In the case of adding blanket to the array of required modules this meant taking it out of the initialization and adding this line:

if (onServer) {
    // requiredModules stands in for whatever array of module names you pass to require
    requiredModules.push('blanket');
}

Then, to detect if we were on a server I used this code

var phantom = navigator.userAgent.indexOf('Phantom') !== -1,
    http = document.location.protocol === 'http:',
    onServer = (phantom || http);

Since we run our tests in Phantom, which uses the file:// protocol without complaining about cross domain calls I’ve got an extra bit detecting if we’re in Phantom. Now, if we can handle it our tests run Blanket, otherwise they run fine without. Lovely.
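Pulled out as a standalone function (the name and the refactor are mine, purely for illustration), the detection is just:

```javascript
// True when coverage can run: either we're in PhantomJS (which uses file://
// without complaining about cross-domain requests) or the page came over http.
function isOnServer(userAgent, protocol) {
  var phantom = userAgent.indexOf('Phantom') !== -1;
  var http = protocol === 'http:';
  return phantom || http;
}

isOnServer('Mozilla/5.0 PhantomJS/1.9', 'file:'); // true
isOnServer('Mozilla/5.0 Chrome/24.0', 'http:');   // true
isOnServer('Mozilla/5.0 Chrome/24.0', 'file:');   // false
```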

The other solution was sorting out an easy local server for running our tests on. I can’t really take too much credit for this, because it’s Blanket’s own recommendation. Blanket provides a simple Node server; download that and put it somewhere from which it doesn’t need to go up a level in the file structure to reach your Jasmine tests.

If you don’t have Node installed then what are you doing? Install Node (and Grunt while you’re at it).

In the command line navigate to the folder you’re keeping this test server in and type npm install express. This installs a plugin used by this server. Still in the command line, type node testserver.js which will start your server running. Finally, in a browser navigate to localhost:3000/path/to/your/specRunner.html

Tah-dah. Jasmine-Require-Blanket.


Reporting test coverage with Blanket.js

Way back in my list of things we’re planning to use I said we were going to use Saga for our JavaScript test coverage tool. But since that decision was made, Blanket has updated to support Jasmine, and Blanket was always more attractive to me. So instead of Saga, here’s some chat about Blanket.

What is Blanket.js?

Blanket.js is a code coverage tool. It involves almost no effort to set up and provides detailed reports of the line coverage for every file tested.
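For a plain (non-Require) Jasmine page, setup can be as little as one extra script tag. In this sketch the file paths are placeholders; data-cover-only is Blanket’s attribute for limiting coverage reporting to your own files:

```html
<script src="lib/jasmine.js"></script>
<script src="lib/jasmine-html.js"></script>
<!-- the only Blanket-specific line; report coverage for files under src/ only -->
<script src="lib/blanket_jasmine.js" data-cover-only="src/"></script>
<script src="src/player.js"></script>
<script src="spec/playerSpec.js"></script>
```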

Why use Blanket.js?

Because it’s valuable to have an indication of how well tested your code is. The problem with code coverage in general is that all it can check is whether or not a line has been run by your test suite. This means that while it can fairly reliably tell you if something is not being tested, it can’t say with any certainty that something is being tested. As long as you remember this, though, it’s handy to have around. Who wouldn’t want a warning that they’ve missed something in their tests?

Blanket is particularly good because it is easy to set up and use, and offers custom reporters to allow its output to be adapted to your needs. Not that we’ve used that.

On the down side, Blanket provides no feedback on how tested a file that isn’t being tested is. This means that it takes a bit of wrangling to get warnings about files with 0% test coverage. I was only really tangentially involved in this but I’ll try to cover it in the future.

We also needed to engage in a bit of code wrangling to get it to play nicely with Require. The final problem it presented was that it cannot be run through the file:// protocol, because it issues cross domain requests. Next time, details on these wranglings.


Testing require Main files with Jasmine

So last week I spoke about testing require modules with Jasmine. The problem with that approach is that it can’t be used on the main files, for a couple of reasons.

Main files do not define themselves as modules and as such aren’t really equipped to be loaded in through a require call like the rest of our modules. Also, the main files are where we bind everything to on page load. This means that to test these modules we need to simulate a page load event.

A couple of weeks ago we resigned ourselves to not testing these, because the functionality is pretty basic; all there is to test, in ours at least, is that the relevant script has been triggered. But then I found myself with an afternoon to spare and used it to come up with this. It’s imperfect and dirty, but it’s better than doing nothing.

At the top of a main test, in place of the require call most modules get, we need to mock any scripts being loaded into the main file. For most scripts this is just a case of creating an object with functions inside it for whatever is being called by the main file. It doesn’t matter what these functions do; we’re not testing them here. This is just so there is something to be called. Like this:
var FirstPageScript = {init: function(){}}

The exception to this is whatever is being used for the domReady event. We’re using the require domReady plugin for mobile and jQuery for desktop. We need to replace whichever is being used with something that instead binds what we want happening on page load to a triggerable event. For jQuery:

var fakeJQuery = function(func){
    bodyTag.bind('domReady', function(){
        func();
    });
};

You’ll note that this only mocks out the $() shorthand for $(document).ready() and no other jQuery functions. This might be a problem if we used jQuery for anything other than domReady in the main file, but we don’t. If we did, it would be necessary to mock out other functions, but main files should be lightweight anyway.

This is almost the same but less problematic with the domReady plugin. It only does one thing, so just mocking that thing does the business.

var fakeDomReady = function(func){
    bodyTag.bind('domReady', function(){
        func();
    });
};

Let’s take a quick look at what this is doing. When this mock domReady function is called, it binds the function that was passed to it to the 'domReady' event being triggered on the body tag. It’s irrelevant what this is called or even what it’s bound to. I used the body tag because I already had it in a variable, and it could just as well be called 'ponyAttack'. All it is is an event we can trigger at will. So in this case, bodyTag.trigger('domReady'); will fire whatever is supposed to happen at domReady.

So we’ve now got everything we need mocked, we just need to insert these mocks into the main file in place of the real things. The answer is spies. With Jasmine, the answer is always spies. Spies are awesome.

By creating a spy on the require function which calls a fake function that triggers the function passed into the require call with our mocks instead of the intended objects, we can do this. That make sense? No? Ok. Let’s look at some code.

requireSpy = spyOn(window, 'require').andCallFake(function(){
    requireSpy.mostRecentCall.args[1](fakeDomReady, FirstPageScript, SecondPageScript, ThirdPageScript);
});

requireSpy.mostRecentCall.args is an array containing the arguments passed into the most recent call to the spied on function. The first argument is the array of required modules, the second is the function being triggered. Therefore requireSpy.mostRecentCall.args[1] refers to the function that the require call was going to trigger. We’re then passing into this call all of our mocks, in the same order that they are expected in the main file. Now, when require is called, we will instead trigger the main file’s function but using mocks.
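Stripped of Jasmine and jQuery, the whole trick fits in a few lines. All the names here are mine, purely for illustration:

```javascript
// The fake domReady just records the handler so we can fire it at will.
var domReadyHandler = null;
function mockDomReady(func) { domReadyHandler = func; }

// A mock page script: we only care that init gets called.
var MockPageScript = {
  initCalled: false,
  init: function () { this.initCalled = true; }
};

// Stand-in for window.require: ignore the dependency list and hand the
// main file's callback our mocks, in the order it expects them.
function fakeRequire(deps, callback) {
  callback(mockDomReady, MockPageScript);
}

// What a main file typically does:
fakeRequire(['domReady', 'pageScript'], function (domReady, PageScript) {
  domReady(function () { PageScript.init(); });
});

// Nothing has run yet; fire the captured "page load" by hand.
domReadyHandler();
```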

Finally we need to trigger the require call. We can’t load the main file in alongside the rest of our scripts. Apart from anything else, we can’t load it until after our requireSpy has been created or it won’t work. That means we need to load the file here.

I am doing this by appending and removing a script tag pointing at the main file. I see no reason why you couldn’t use a jQuery load, but I didn’t.

var scriptTag = document.createElement('script');
scriptTag.src = 'path/to/mobileMain.js';
document.body.appendChild(scriptTag);
Then, for neatness’ sake, I added an afterEach for removing this tag:

afterEach(function(){
    document.body.removeChild(scriptTag);
});
So, putting all this together we get the full set up for a main test:

describe('mobileMain.js', function(){
	var FirstPageScript = {init: function(){}},
	    SecondPageScript = {init: function(){}},
	    ThirdPageScript = {init: function(){}},
	    bodyTag = $('body'),
	    fakeDomReady = function(func){
	        bodyTag.bind('domReady', function(){ func(); });
	    },
	    Browser = {getByTag: function(){ return bodyTag; }},
	    requireSpy;

	beforeEach(function(){
	    requireSpy = spyOn(window, 'require').andCallFake(function(){
	        requireSpy.mostRecentCall.args[1](fakeDomReady, FirstPageScript, SecondPageScript, ThirdPageScript);
	    });
	});


This is a relatively complex main file because it’s filling the role of Backbone’s concept of a router, the others mostly just have jQuery and a single script mocked.

But the set up alone isn’t going to test anything, let’s quickly write some tests.

it('initializes the script', function(){
	spyOn(FirstPageScript, 'init');
	bodyTag.trigger('domReady');
	expect(FirstPageScript.init).toHaveBeenCalled();
});

That’s pretty much it. Because the example I’ve been using has a bit more going on, what with detecting which page we’re on, it got a helper function for doing that set up. But it’s not all that interesting and is pretty specific to our needs, so I’ll omit it, at least for now.

So, that’s a dirty hack to test your main files. The approach has some down sides. We’re not actually testing that the correct script is being called; we’re testing that the objects passed into the require function, in that order, are called. It’s a pretty small risk though, and I can’t think of a way to check that without straying away from unit testing and into integration testing.

It is also impossible to run the test using the file:// protocol, which is annoying, but I’m increasingly finding that the advantages of being able to run the tests that easily are outweighed by being able to do more by running them on a local server.

The most unfortunate thing about this, though, is that loading in the files like this means that they are not picked up by Blanket‘s coverage report. This is a shame because it was Blanket that originally prompted me to start trying again to get these files tested. But hey-ho.

Speaking of Blanket, I’ve not spoken about Blanket before. It’s pretty incredible. I’m pretty sure it’s the work of at least one wizard. I’ll talk about it next time.


Actually using Jasmine

Jasmine is a JavaScript testing framework designed to run in a browser which I’ve already discussed. Jasmine itself takes the form of 2 .js files and a .css file. These are loaded into a .html file and called with a short in-line script. This will then run any tests, called specs, which have been loaded into this html file. With all this set up, you just need to open that file in a browser.

There are versions of Jasmine designed to integrate with specific development environments; I’ve no idea how they work, so all this chat will be about the standalone version. And as ever, this is probably not the best place to come for your tutorial: the Jasmine website is very good, as is Andrew Burgess’ tutorial.

Nuts and Bolts

Let’s take a look at an example. Fortunately, there’s one bundled in with Jasmine anyway. Head to the Jasmine standalone download page and get a copy to take a look. I’m looking at version 1.3.1, but I’d be surprised if there are any major differences if you come to this from the future and there are later incarnations.

In the zip file there is a folder called lib. This contains Jasmine, you shouldn’t need to mess with these files. In setting up your own project you’re free to move this folder around as long as you keep the filepath right in the link, but apart from that let’s ignore this for now.

There’s also a folder called src. This contains the source files that are being tested in the example. You can ignore this. It’s got nothing to do with Jasmine, it’s just something to test.

The third and final folder is spec. In here are two files, PlayerSpec which contains some tests and SpecHelper which contains a custom matcher. More on what that is later. Open up PlayerSpec.

The two key functions in Jasmine are describe and it. These two work together to create sentences describing the behaviour each individual spec is testing. The describe function provides a noun for your sentence, defining what object you are testing. Within describe functions will be a collection of its declaring what you want your noun to do. Looking at PlayerSpec, it opens with:

describe("Player", function() {

The first parameter of this function is that noun, the second is a function which contains your its. The first it is:

it("should be able to play a Song", function() {

Which again, has a String as its first parameter which is the behaviour expected and a function as the second one which is the actual test. This comes out in the results as “Player should be able to play a Song”.

You may have also noted that the example contains a describe nested within a describe. This is valid and good, allowing you to group together similar specs and let them share variables or set up and tear down functions.

Jasmine’s set up function is beforeEach, which takes as its only parameter a function to run before every spec in its containing describe. Its tear down counterpart is afterEach, which also takes a single function as a parameter, but runs it after every spec.

The final function I want to talk about now is expect. This takes as a parameter some object that you want to test, and is followed by a matcher, like this:

expect(player.isPlaying).toBeTruthy();

Here we are expecting the property isPlaying of the player object to be true. There are a variety of matchers available in Jasmine, but I’m not going through them all here. The Jasmine website provides a list.

As well as using the provided matchers you may want to write your own, custom matchers. To do this, create an external file with a beforeEach function in it. Inside this, call this.addMatchers, passing in a function with the name of the matcher you want to use. This function needs to take as a parameter the object you are putting into an expect. It should then compare that against whatever you like and return a boolean to say if it matched or not. You can also define what message is output when the spec fails, but I’ve already gone into far too much detail on this relatively minor feature so I’ll leave that here.
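The bundled SpecHelper.js is the canonical example; here is its toBePlaying matcher, with a tiny harness so it runs outside Jasmine. The checkMatcher helper is mine, standing in for Jasmine’s own plumbing, which binds this.actual for you:

```javascript
// toBePlaying, as in the SpecHelper.js bundled with Jasmine: returns a
// boolean saying whether the actual value matched.
var toBePlaying = function (expectedSong) {
  var player = this.actual;
  return player.currentlyPlayingSong === expectedSong && player.isPlaying;
};

// Stand-in for Jasmine's matcher plumbing (mine, for illustration only):
// Jasmine calls the matcher with `this.actual` set to the expect()ed object.
function checkMatcher(matcher, actual, expected) {
  return matcher.call({ actual: actual }, expected);
}

var song = { title: 'A Song' };
var player = { currentlyPlayingSong: song, isPlaying: true };

checkMatcher(toBePlaying, player, song);               // true
checkMatcher(toBePlaying, { isPlaying: false }, song); // false
```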

That’s all I really want to cover here. Jasmine is capable of much more, with a powerful mocking framework provided in the form of Spies. But they can wait until another time.

Let’s finish up by leaving the spec folder and looking at SpecRunner.html in the base Jasmine folder. For the most part, you can ignore this. In the example it contains four sections of script tags. The first loads in Jasmine itself, the second loads in the source files being tested, the third loads in the specs and the fourth starts Jasmine off. As long as you keep your list of Source and Specs up to date, the rest isn’t of much interest.

Contradicting myself almost immediately, next time I’ll cover ripping apart that html file to use Jasmine with RequireJs.


Testing with Jasmine

Historically we’ve been sloppy with our JavaScript unit testing. Jasmine has been part of our lives for a while, but we’ve not followed a Test Driven (and certainly not a Behaviour Driven) approach to writing JavaScript.

This changes with this project. The green field of the new code base has removed a lot of the resistance to this (“There’s no point writing tests for this when everything else is untested”). The introduction of Require has also made it easier to start doing things properly: with Require it’s very easy to load the distinct modules into tests, and it’s very clear how to access their functions in a test.

What is Jasmine for?

Jasmine is a JavaScript unit test framework. It encourages Behaviour Driven Development. My understanding of BDD is that it’s basically Test Driven Development but the tests are defined by coherent human-readable descriptions of the behaviour they are testing. Jasmine runs these tests in an actual browser.

Jasmine is not the only one of its kind, but it is a good one and it is the one we’ve gone with. QUnit is also popular. In our case Jasmine’s main contender was RhinoUnit, the most significant difference between the two being that Jasmine runs in a browser while Rhino is Ant based and does not require one.

Why use Jasmine?

The only unique attribute of it that led us to it was that we’ve used it before. However, it has other selling points:

  • One of the popular and well-regarded frameworks
  • Easy syntax
  • Runs in browser
  • Has a Grunt plugin

The first two points are pretty straight forward. The third is perhaps slightly contentious. RhinoUnit was the main alternative for being our JS testing framework specifically because it doesn’t run in a browser. Not running in a browser is a good thing because it is much easier to incorporate into our Java build process and also means you aren’t plagued by browser differences. The down side is, your tests no longer throw up errors caused by browser differences.

The other nail in the coffin for Rhino as far as we’re concerned is that we’re using Grunt to automate a lot of our work anyway, and with Grunt already part of our work flow adding Jasmine to it is barely a job. This means we can easily automate the running of it. Our automated tests currently only run in PhantomJS, but we can open the tests manually in any browser we want to see how they run. Our Grunt plugin claims that it can be used to run in multiple browsers too, we just haven’t tried yet.

We’re also using Jasmine-jquery. This is a Jasmine plugin which provides some extra functionality to Jasmine at the expense of introducing a dependency on jQuery. We have no problem with this because we’d rather write our tests using jQuery to save time anyway, and our JS will either never reference the mighty $ (mobile) or have a dependency on it anyway (desktop), so we shouldn’t be introducing any problems with this (famous last words?).

Next time, using Jasmine


Headless Browsing with PhantomJS

We’re using PhantomJS because it is Grunt’s preferred browser for anything automated that needs one. Like running JS Tests.

What is PhantomJS?

PhantomJS is a headless webkit browser. What does that mean? Well, it’s a webkit browser (like Chrome and Safari) that doesn’t have a visible interface.

Why use PhantomJS?

Because it gives you most of the benefits of running tests in a browser but is much faster and less intrusive. I say most because it is not designed to emulate the quirks of a particular mainstream browser; it is its own thing. However, because it is WebKit it has a lot in common with the others in that family.

Currently our main build process does not include Grunt, so we need to be able to run our JavaScript tests through Gradle as well. Originally our plan was to use JS Test Driver but we found that it does not sit well with RequireJS. There are apparently work arounds to fix JS Test Driver, but we opted instead to adopt Phantom as our test runner. This does mean our automated tests are not running in other browsers, but this should be temporary until Grunt is up and running anyway.

Unfortunately this JS Test Driver vs Phantom debate was something I had very little to do with, so I have no real thoughts on that.

That’s all I really have to say on the matter, if I had much direct interaction with it I’d probably like it less. Its real beauty is in how little I have to do because of it.


Setting up Compass with or without Grunt

Setting up Compass wasn’t massively complex, but did involve a bit of thought. Mostly this thought was to get around the limitations imposed by my specific situation but I’ll recount them for anyone in a similar one.

To install SASS, and therefore Compass, you first need Ruby to be installed. Because the main language we use at work is Java I opted to use JRuby in case it became useful at some point in the future. Other people on the team just used Ruby and had no problems though.

Get Ruby

JRuby, however, did provide a problem. There is apparently a bug in it that means that when it is running in Ruby 1.9 mode, SASS doesn’t work. This is a problem specific to JRuby, because Ruby 1.9 proper is fine. The solution was simple though: I just had to change JRuby to run in 1.8 mode. This was done by adding a new Environment Variable called JRUBY_OPTS with the value --1.8.

Bypass your proxy

With Ruby working, the next thing to do is to install SASS or Compass. Compass and SASS are both gems, which are packages of Ruby code which can be easily transported and installed.

Unfortunately, it requires some wrangling to install gems when you are behind a proxy like I am at work. Fortunately, it’s not that hard. I set an environment variable called http_proxy with the value proxy.myworkproxy.co.uk:myworkproxyport.
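On a unix-style shell, the two environment variables described above look like this (on Windows, add them through System Properties instead; the proxy host and port are the placeholders from above):

```shell
# Run JRuby in 1.8 mode so SASS works
export JRUBY_OPTS=--1.8
# Let gem through the corporate proxy (placeholder host and port)
export http_proxy=proxy.myworkproxy.co.uk:myworkproxyport
```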

Install Compass

To install SASS go to the command line and type gem install sass. But also, don’t do that, because if you install Compass it brings SASS with it and Compass is better.

To install Compass, go to the command line and type gem update --system and then gem install compass. And with that, Compass is installed on your local machine.

You can then use the Compass watch command to start polling for changes to your SASS/SCSS files. There is a SASS watch command too, but if you really want to avoid compass I’m sure you’re capable of finding it yourself. You contrary kraken.

Set up Compass in a project without Grunt

With Compass installed on the machine, you still need to add it to any given project. The Compass website provides a tool for creating config options. You may find that useful; I found it more confusing than it needed to be. If you already have a project set up, then use the command line to navigate to it and type compass install compass. If you are starting a new project using Compass, then go to the folder you want it in with the command line and type compass create myProject. Probably not myProject though, call it whatever you want. Do that somewhere and take a look at what you get is my advice; after that, the tool provided by Compass should become useful for guiding you through the set up. I’m not going into more detail because I’ve barely used this method, preferring instead to use Grunt.

Set up Compass in a Grunt project

Let’s assume you have a Grunt project already set up. Installing Compass to your project is easy.

First, get Ruby, bypass any proxy you may need to and install Compass to your machine as described above. Then, in the command line navigate to the folder holding your Gruntfile and type npm install grunt-compass.

This installs grunt-compass, the Grunt plugin for Compass, to your project. npm is Node’s package manager, and a node module is a similar concept to a Ruby gem.

Then you need to add this line to your gruntfile:

grunt.loadNpmTasks('grunt-compass');

This registers the newly installed node module to Grunt in your project. Compass is now installed to your Grunt project, but not doing anything. Let’s configure it.

Add something like this to your Grunt initConfig. I won’t go into massive detail about the configuration because the grunt-compass documentation is already good.

compass: {
  src: 'sass',
  dest: 'css',
  forcecompile: true
}

The three options we’ve set are src, which tells it where to find your SASS files and dest, which tells it where to put your css files. forcecompile is also pretty important for our purposes because we’re about to add Compass to the Grunt watch, which means we need to force it to compile and replace older versions of the css. There are many more options available but I’m not going to mess with them now.

To add Compass to your watch in Grunt, go to the watch section in your Grunt initConfig (it should already be there) and add path/to/scss/**/*.scss to the files and compass to the tasks. Like so:

watch: {
  files: 'scss/**/*.scss',
  tasks: 'compass lint'
}

And there we have it: setting up Compass.


Grunt in the flesh

So last week I spoke about why Grunt is one of my favourite things. Today I’m going to cover how to bring Grunt into your life.

Setting up a grunt project

Grunt offers a variety of pre-made project setups to save time when you create a new project. From what I understand, these create a folder structure with some placeholder files to suit a selection of different project types. I'm sure they're all lovely and useful, but I haven't touched them; I've either wanted to construct my projects from scratch to get my head around the process, or I was adding Grunt to pre-existing code.

Doing this means creating just a gruntfile. To do this, from the command line navigate to the folder you want it to be in and type grunt init:gruntfile. To use one of the other templates, change the gruntfile bit to something else; type grunt init on its own to get a list of them.

grunt init:gruntfile

After issuing the command to create a gruntfile, it'll ask you a load of questions. Answer them honestly, or if you're just playing around, go with the lower-case option when presented with the Y/n choice. You're not committing to anything forever here, just defining the starting point for your gruntfile.

After this, it creates a single file called grunt.js (If you’re reading this from the future, then it might be called gruntfile.js). Let’s take a look at it.

module.exports = function(grunt) { is at the top of the file. Don’t touch it. It needs to be there, but you don’t need to do anything to it.

grunt.registerTask('default', 'lint test'); is at the bottom. This defines what happens if you just type grunt into your command line. Well, actually, if you are using Windows you'll need to type grunt.cmd; whenever I mention typing grunt after the initial creation of the file, I really mean grunt.cmd on Windows. This is because of a naming conflict: if you type grunt while you're in a folder containing a file called grunt, like the gruntfile, it'll try to open that file instead. This is also why future people will find their gruntfile called gruntfile.js.

But I digress. The second parameter of registerTask, 'lint test', says it will run the lint and test tasks. More on those in a moment. You can define further tasks by adding lines like this:
grunt.registerTask('myTask', 'some tasks');
This can be called by typing grunt myTask into the command line. Tasks can also be called directly, without an alias, by typing the task name after grunt; an alias like this is simply a way to combine multiple tasks into a single command.


This is the big one. Near the top of the file, an attribute of the grunt object called initConfig is defined. This is an object containing all the config information for your tasks. Some tasks are built into Grunt; lint and test are examples. Others can be added with plugins, but I won't be doing that here.

Any task can (and probably will, if you're using it) be listed as a property of grunt.initConfig, like this:

    lint: {
      files: ['grunt.js', 'lib/**/*.js', 'test/**/*.js']
    },
    test: {
      files: ['test/**/*.js']
    },
    watch: {
      files: '',
      tasks: 'lint test'
    },
    jshint: {
      options: {
        curly: true,
        eqeqeq: true,
        immed: true,
        latedef: true,
        newcap: true,
        noarg: true,
        sub: true,
        undef: true,
        boss: true,
        eqnull: true
      }
    }

Lint and jshint are actually related to each other: lint defines which files are looked at when the lint task is called, while jshint configures JSHint, which is Grunt's linter of choice.

Test runs QUnit tests, if you're into that sort of thing. I am not.

Watch is pretty magical. It defines files to watch and then tasks to run if any of them change. Watch is invoked by typing grunt watch into your command line.

An unfortunate fact of Grunt right now is that it watches all the files listed and then runs all the tasks if any of them change; so here, if I am watching CSS and JavaScript and I change a CSS file, it'll lint my JavaScript. Just in case. Apparently in the next major release of Grunt, different files can be watched for different tasks. I am excited about this, and I imagine you future people are already enjoying it.

That’s the anatomy of Grunt then. I’ll probably talk about it quite a lot in the coming weeks, because it is amazing.
