Sunday, October 16, 2011

How to wait while an NSTimer runs in a unit test

Suppose you were a lowly noob to iOS development. Further suppose you had a Cool Idea (tm) which involved timers. The internets might rapidly guide you to NSTimer, and you might decide to try to get it to log to the console in a unit test. The most obvious approach seems to be to set up a timer that ticks frequently, let's say every 0.1 seconds, with a callback that logs something, and then write a test that sleeps for a couple of seconds. Presumably during the sleep period we'll see a bunch of timer output. The code might look like this (inside an Xcode 4.2 test implementation class):

- (void)onTimerTick:(NSTimer*)timer
{
    NSLog(@"MY TIMER TICKED");
}

- (void)testTimerBasics
{
    NSLog(@"timer time");
    
    NSTimer *timer = [NSTimer scheduledTimerWithTimeInterval:0.1
                   target:self
                   selector:@selector(onTimerTick:)
                   userInfo:nil
                   repeats:YES];
    
    [timer fire]; //manually calling fire DOES log 'MY TIMER TICKED'
    
    NSLog(@"about to wait");    
    [NSThread sleepForTimeInterval:2.0]; //absolutely no logs of 'MY TIMER TICKED' occur; somehow the timer doesn't fire during a thread sleep :(
    NSLog(@"wait time is over");    
}

Sadly, absolutely no log messages are printed during our two-second sleep ([NSThread sleepForTimeInterval:2.0]); WTF?!

After much Googling, and literally while in the midst of typing up a Stack Overflow question, I came across an existing question about waiting for something else entirely that mentioned NSRunLoop in passing. The very existence of a run loop class suggests an answer: our tests run on the same thread as the run loop, which means that if we put that thread to sleep nothing gets processed. Instead of sleeping we need some sort of "run the run loop for a while" approach. Luckily it turns out that NSRunLoop provides a runUntilDate: API, so we can rewrite the test above as follows:

- (void)onTimerTick:(NSTimer*)timer
{
    NSLog(@"MY TIMER TICKED");
}

- (void)testTimerBasics
{
    NSLog(@"timer time");
    
    NSTimer *timer = [NSTimer scheduledTimerWithTimeInterval:0.1
                              target:self
                              selector:@selector(onTimerTick:)
                              userInfo:nil
                              repeats:YES];
    
    //[timer fire];
    
    NSDate *runUntil = [NSDate dateWithTimeIntervalSinceNow: 3.0 ];
    
    NSLog(@"about to wait");    
    [[NSRunLoop currentRunLoop] runUntilDate:runUntil];
    NSLog(@"wait time is over");    
}
We've found the right magic incantation! Knuth would be proud.

Speaking of magic incantations, I am using the SyntaxHighlighter libraries hosted @ http://syntaxhighlighter.googlecode.com/svn/trunk/Scripts/. However, there is no Objectionable-C brush there, so I took the one posted @ http://www.undermyhat.org/blog/wp-content/uploads/2009/09/shBrushObjectiveC.js and updated the casing and namespace names to match the newer highlighter standard. The updated brush looks like this:


dp.sh.Brushes.ObjC = function()
{
 var datatypes = 'char bool BOOL double float int long short id void';
 
 var keywords = 'IBAction IBOutlet SEL YES NO readwrite readonly nonatomic nil NULL ';
 keywords += 'super self copy ';
 keywords += 'break case catch class const copy __finally __exception __try ';
 keywords += 'const_cast continue private public protected __declspec ';
 keywords += 'default delete deprecated dllexport dllimport do dynamic_cast ';
 keywords += 'else enum explicit extern if for friend goto inline ';
 keywords += 'mutable naked namespace new noinline noreturn nothrow ';
 keywords += 'register reinterpret_cast return selectany ';
 keywords += 'sizeof static static_cast struct switch template this ';
 keywords += 'thread throw true false try typedef typeid typename union ';
 keywords += 'using uuid virtual volatile wchar_t while';
 // keywords += '@property @selector @interface @end @implementation @synthesize ';
 
  
 this.regexList = [
  { regex: dp.sh.RegexLib.SingleLineCComments,  css: 'comments' },  // one line comments
  { regex: dp.sh.RegexLib.MultiLineCComments,  css: 'comments' },  // multiline comments
  { regex: dp.sh.RegexLib.DoubleQuotedString,  css: 'string' },   // double quoted strings
  { regex: dp.sh.RegexLib.SingleQuotedString,  css: 'string' },   // single quoted strings
  { regex: new RegExp('^ *#.*', 'gm'),      css: 'preprocessor' },  // preprocessor
  { regex: new RegExp(this.GetKeywords(datatypes), 'gm'),  css: 'datatypes' },  // datatypes
  { regex: new RegExp(this.GetKeywords(keywords), 'gm'),  css: 'keyword' },   // keyword
  { regex: new RegExp('\\bNS\\w+\\b', 'g'),     css: 'keyword' },   // keyword
  { regex: new RegExp('@\\w+\\b', 'g'),      css: 'keyword' },   // keyword
  ];
 this.CssClass = 'dp-objc';
 this.Style = '.dp-objc .datatypes { color: #2E8B57; font-weight: bold; }'; 
}
dp.sh.Brushes.ObjC.prototype = new dp.sh.Highlighter();
dp.sh.Brushes.ObjC.Aliases  = ['objc'];

Friday, October 14, 2011

Integrating Javascript tests into a CLI build

Wherein we walk through a basic setup for running Javascript unit tests on the command line. After some initial investigation (here) I didn't find time to get back to Javascript unit testing until recently. I have now managed to get Javascript unit tests running fairly gracefully in a command line build at work; here is an outline of how, simplified from the "real" implementation to highlight the basics. Fortunately a great deal of the work is already done for us; it's always nice when it turns out that way.

We are going to run everything off the filesystem to avoid having our tests impacted by external influences.

Part 1: Basic test setup
  1. Create a directory to house Javascript unit test files; we will refer to this as \jsunit henceforth when giving paths. 
  2. Download qunit.js and qunit.css into \jsunit
  3. Download run-qunit.js into \jsunit
  4. Create a file testme.js in \jsunit with the following content
    /**
     * var-args; adds up all arguments and returns sum
     */
    function add() {
    }
    
  5. Create a file testme.test.htm in \jsunit with the following content
    • Note we are using local filesystem paths to load all content; we have no external dependencies
    • <!DOCTYPE html>
      <html>
      <head>
      	<!-- we need QUnit as a test runner -->
          <link rel="stylesheet" href="qunit.css" type="text/css" media="screen" />
          <script src="qunit.js"></script>
      	
      	<!-- we'd like to have the file we're going to test -->
          <script src="testme.js"></script>
      	
      	<!-- and finally lets write some tests -->
      	<script>
      		console.log("test time baby");
      
      		test("add is defined", function() {
      			equals(typeof window.add, "function", "add isn't a function :(");
      		});
      	</script>
          
      </head>
      <body>
      	 <h1 id="qunit-header">QUnit Tests</h1>
      	 <h2 id="qunit-banner"></h2>
      	 <div id="qunit-testrunner-toolbar"></div>
      	 <h2 id="qunit-userAgent"></h2>
      	 <ol id="qunit-tests"></ol>
      	 <div id="qunit-fixture"></div>    
      </body>
      </html>
      
  6. Download PhantomJS (1.3.0 at time of writing)
    • For the example PhantomJS commands I will assume phantomjs is on PATH (e.g. phantomjs args); use the fully qualified path if not (e.g. C:\where\phantom\is\phantomjs args)
  7. Open testme.test.htm in a browser; you should see the QUnit results page with the single "add is defined" test passing.

  8. Open a command prompt, navigate to \jsunit and run phantomjs run-qunit.js testme.test.htm
    • Output should be similar to:
      test time baby
      'waitFor()' finished in 211ms.
      Tests completed in 57 milliseconds.
      1 tests of 1 passed, 0 failed.
      
    • Note we don't see any "test blah pass" or "test two fail" style output
Part 2: CLI build integration prep
So far so good; now we need to get set up to run in a CLI build. There are a couple of things we'd like here, most of which are already implemented in run-qunit.js:
  1. Output each test pass/fail
  2. Output log messages from tests to the console
    1. This "just works" courtesy of run-qunit.js, yay!
  3. Exit with non-zero error code if tests fail
    1. This makes it easy for the build to detect failure and do something in response; for example, an Ant build could simply set failonerror
    2. This "just works" courtesy of run-qunit.js, yay!
We just have to set up output of test pass/fail information to the console ourselves. We'll also add a test that fails to show what that looks like. Proceed as follows:
  1. Create a file test-support.js in \jsunit with the following content:
    //create a scope so we don't pollute global
    (function() {  
       var testName;
       
       //arg: { name }
    	QUnit.testStart = function(t) {
    	    testName = t.name;
    	};
    	
    	//arg: { name, failed, passed, total }
    	QUnit.testDone = function(t) {
    	    console.log('Test "' + t.name + '" completed: ' + (0 === t.failed ? 'pass' : 'FAIL'))
    	};
    	
    	//{ result, actual, expected, message }
    	QUnit.log = function(t) {
    	    if (!t.result) {
    	        console.log('Test "' + testName + '" assertion failed. Expected <' + t.expected + '> Actual <' + t.actual + '>' + (t.message ? ': \'' + t.message + '\'' : ''));
    	    }
    	};
    }());
    
  2. Edit testme.test.htm to pull in test-support.js and add a test that will currently fail
    • <!DOCTYPE html>
      <html>
      <head>
      	<!-- we need QUnit as a test runner -->
          <link rel="stylesheet" href="qunit.css" type="text/css" media="screen" />
          <script src="qunit.js"></script>
      	
      	<!-- where would our tests be without support! -->
      	<script src="test-support.js"></script>
      	
      	<!-- we'd like to have the file we're going to test -->
          <script src="testme.js"></script>
      	
      	<!-- and finally lets write some tests -->
      	<script>
      	
      		test("add is defined", function() {
      			equals(typeof window.add, "function", "add isn't a function :(");
      		});
      		
      		test("add 1+1", function() {
      			equals(add(1, 1), 2);
      		});		
      	</script>
          
      </head>
      <body>
      	 <h1 id="qunit-header">QUnit Tests</h1>
      	 <h2 id="qunit-banner"></h2>
      	 <div id="qunit-testrunner-toolbar"></div>
      	 <h2 id="qunit-userAgent"></h2>
      	 <ol id="qunit-tests"></ol>
      	 <div id="qunit-fixture"></div>    
      </body>
      </html>
      
  3. Open a command prompt, navigate to \jsunit and run phantomjs run-qunit.js testme.test.htm
    • Output should be similar to:
      Test "add is defined" completed: pass
      Test "add 1+1" assertion failed. Expected <2> Actual <undefined>
      Test "add 1+1" completed: FAIL
      'waitFor()' finished in 209ms.
      Tests completed in 70 milliseconds.
      1 tests of 2 passed, 1 failed.
    • If you print the exit code (echo %ERRORLEVEL% in Windoze) you should get a 1, indicating we have fulfilled the 'exit with non-zero exit code on failure' requirement :)
Part 3: Ant integration
At long last we are ready to integrate this mess into a build. For this example I will use Ant and will assume Ant is on PATH. At time of writing I am using Ant 1.8.2.

  1. Create a phantomjs.bat file in \jsunit with the following content
    @echo off
    C:\where\you\put\phantom\phantomjs.exe %*
    
    • Alternatively, create a phantomjs.sh with equivalent functionality if you're on *nix
  2. Create a build.xml file in \jsunit with the following content
    • <?xml version="1.0" encoding="UTF-8"?>
      <project name="jsunittests" basedir="." default="main">
      	<property name="builddir" location="${basedir}/target"/>
      	
      	<condition property="phantom.filename" value="phantomjs.bat"><os family="windows"/></condition>
      	<condition property="phantom.filename" value="phantomjs.sh"><os family="unix"/></condition>   
      	
      	<target name="clean">
      		<delete dir="${builddir}"/>
      	</target>
      	
      	<target name="prep">
      		<mkdir dir="${builddir}"/>
      	</target>
      	
      	<target name="jstest">
            <!--Run all tests w/phantom, fail if tests fail. Execute all files w/extension .test.htm. -->
            <apply executable="${phantom.filename}" failonerror="true" dir="${basedir}" relative="true">
               <arg value="run-qunit.js"/>
               <fileset dir="${basedir}">
                  <include name="**/*.test.htm" />
               </fileset>
            </apply>			
      	</target>
      	
      	<target name="main" depends="clean, prep, jstest">
      	</target>
      </project>
      
  3. Run 'ant'; you should get output similar to the following (yes, it's supposed to fail; remember, we set up a failing test on purpose)
    • Buildfile: build.xml
      
      clean:
         [delete] Deleting directory C:\Code\jsunit-trial\target
      
      prep:
          [mkdir] Created dir: C:\Code\jsunit-trial\target
      
      jstest:
          [apply] Test "add is defined" completed: pass
          [apply] Test "add 1+1" assertion failed. Expected <2> Actual 
          [apply] Test "add 1+1" completed: FAIL
          [apply] 'waitFor()' finished in 218ms.
          [apply] Tests completed in 58 milliseconds.
          [apply] 1 tests of 2 passed, 1 failed.
      
      BUILD FAILED
      C:\Code\jsunit-trial\build.xml:18: apply returned: 1
      
      Total time: 0 seconds
      
  4. Edit testme.js just enough to fix the test
    • /**
       * var-args; adds up all arguments and returns sum
       */
      function add() {
      	var sum =0;
      	for (var i=0; i<arguments.length; i++)
      		sum += arguments[i];
      	return sum;
      }
  5. Run 'ant'; you should get output similar to the following
    • Buildfile: build.xml
      
      clean:
         [delete] Deleting directory C:\Code\jsunit-trial\target
      
      prep:
          [mkdir] Created dir: C:\Code\jsunit-trial\target
      
      jstest:
          [apply] Test "add is defined" completed: pass
          [apply] Test "add 1+1" completed: pass
          [apply] 'waitFor()' finished in 214ms.
          [apply] Tests completed in 59 milliseconds.
          [apply] 2 tests of 2 passed, 0 failed.
      
      main:
      
      BUILD SUCCESSFUL
      Total time: 0 seconds
Pretty sweet, we've got Javascript tests running in an Ant build as a first-class citizen. Now if you break my Javascript my Continuous Integration server will let me know!

Part 4: Code coverage
Finally we are ready to get some code coverage. The approach: instrument our js files using JSCoverage, run our QUnit tests such that the relative script paths resolve to the instrumented copies, and then use the PhantomJS file system APIs to write a colorized copy of the original js file that visually displays coverage. We'll do a quick and dirty coverage percentage in the test output as well.
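
Before diving in, it helps to know the shape of the data JSCoverage produces: each instrumented file maintains a global _$jscoverage object keyed by filename, and each entry is a sparse array indexed by 1-based line number holding execution counts for the executable lines. A rough illustration (the filename and counts here are invented for the example, not output from our build):

//illustrative sketch of the structure the scripts below read; values are made up
var _$jscoverage = {};
_$jscoverage['testme.js'] = [];
_$jscoverage['testme.js'][2] = 3;   //line 2 is executable and ran 3 times
_$jscoverage['testme.js'][4] = 0;   //line 4 is executable but was never hit
//lines with no entry (undefined) are non-executable: comments, blank lines, lone braces

This is exactly the structure test-support.js will walk to compute a coverage percentage, and that run-qunit.js will walk to mark each source line as hit, miss, or undef.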


  1. Download JSCoverage 0.5.1
  2. Create a jscoverage.bat file in \jsunit with the following content
    @echo off
    C:\where\you\put\jscoverage\jscoverage.exe %*
    
  3. Create a template file for coverage information named coverageBase.htm in \jsunit
    • <!DOCTYPE html>
      <html>
      <head>
          <style>
              .code {
                  white-space: pre;
                  font-family: courier new;
                  width: 100%;            
              }
              
              .miss {
                  background-color: #FF0000;
              }
              
              .hit {
                  background-color: #94FF7C;
              }
              
              .undef {
                  background-color: #AFFF9E;
              }        
          </style>
      </head>
      <body>
      
      COLORIZED_LINE_HTML
      
      </body>
      </html>
      
  4. Update build.xml to perform a few new steps
    1. Create a \target\testjs\js directory and copy our js files into it
    2. Instrument our js files for code coverage, putting the instrumented version into \target\testjs\jsinstrumented
    3. Copy *.test.htm into \target\testhtm
    4. Copy base resources to run tests (run-qunit.js, qunit.js, qunit.css) into \target\testhtm
    5. Copy the instrumented js files into \target\testhtm
      1. Note that because we used relative paths to our test js files, the *.test.htm QUnit html files will now resolve the js to the instrumented version when we run them out of \target\testhtm
    6. Run PhantomJS on *.test.htm in \target\testhtm
    7. The updated build.xml looks like this:
      <?xml version="1.0" encoding="UTF-8"?>
      <project name="jsunittests" basedir="." default="main">
      	<property name="builddir" location="${basedir}/target"/>
      	<property name="jstestdir" location="${builddir}/testjs"/>
      	<property name="jsdir" location="${jstestdir}/js"/>
      	<property name="jsinstrumenteddir" location="${jstestdir}/jsinstrumented"/>
      	<property name="testhtmdir" location="${builddir}/testhtm"/>
      	
      	<condition property="phantom.filename" value="phantomjs.bat"><os family="windows"/></condition>
      	<condition property="phantom.filename" value="phantomjs.sh"><os family="unix"/></condition>   
      	
      	<property name="jscoverage.filename" value="jscoverage.bat" />
      	
      	<target name="clean">
      		<delete dir="${builddir}"/>
      	</target>
      	
      	<target name="prep">
      		<mkdir dir="${jsdir}"/>
      		<mkdir dir="${jsinstrumenteddir}"/>		
      		<mkdir dir="${testhtmdir}"/>
      		
      		<!-- copy non-test js files to target so we can mess with 'em. how we select which files may vary; for this 
      			 example just pick the one file we are testing.-->
      		<copy todir="${jsdir}">
      			<fileset dir="${basedir}">
      				<include name="testme.js" />
      			</fileset>
      		</copy>
      				
      		<!-- run jscoverage to produce a version of the file instrumented for code coverage -->
      		<exec executable="${jscoverage.filename}" failonerror="true">
      			<arg value="${jsdir}"/>
      			<arg value="${jsinstrumenteddir}"/>
      		</exec>   		
      		
      		<!-- copy our test htm files; because they use relative script paths they will pick up the instrumented js when run from testhtmdir -->
      		<copy todir="${testhtmdir}">
      			<fileset dir="${basedir}">
      				<include name="**/*.test.htm" />
      			</fileset>
      		</copy>		
      		
      		<!-- copy core resources to testhtmdir so we can load them with same paths as when executing test htm files directly -->
      		<copy todir="${testhtmdir}">
      			<fileset dir="${jsinstrumenteddir}">
      				<include name="**/*.js" />
      				<exclude name="jscoverage.js"/>
      			</fileset>
      		</copy>				
      		<copy todir="${testhtmdir}">
      			<fileset dir="${basedir}">
      				<include name="test-support.js" />
      				<include name="run-qunit.js" />
      				<include name="qunit.css" />
      				<include name="qunit.js" />
      			</fileset>
      		</copy>				
      	</target>
      	
      	<target name="jstest">
            <!--Run all tests w/phantom, fail if tests fail. Execute all files w/extension .test.htm. -->
            <apply executable="${basedir}/${phantom.filename}" failonerror="true" dir="${testhtmdir}" relative="false">
               <arg value="run-qunit.js"/>
      		 <srcfile/>
      		 <arg value="${basedir}"/>
               <fileset dir="${testhtmdir}">
                  <include name="**/*.test.htm" />
               </fileset>
            </apply>			
      	</target>
      	
      	<target name="main" depends="clean, prep, jstest">
      	</target>
      </project>
      
  5. Modify our test-support.js to look for jscoverage data and, when present, tally lines hit, missed, and irrelevant (non-executable) and append the resulting coverage percentage to the QUnit results. Also expose a function a caller outside of page context can use to access coverage information. The new version should look like this:
    • //create a scope so we don't pollute global
      (function() {  
         var testName;
         
         //arg: { name }
      	QUnit.testStart = function(t) {
      	    testName = t.name;
      	};
      	
      	//arg: { name, failed, passed, total }
      	QUnit.testDone = function(t) {
      	    console.log('Test "' + t.name + '" completed: ' + (0 === t.failed ? 'pass' : 'FAIL'))
      	};
      	
      	//{ result, actual, expected, message }
      	QUnit.log = function(t) {
      	    if (!t.result) {
      	        console.log('Test "' + testName + '" assertion failed. Expected <' + t.expected + '> Actual <' + t.actual + '>' + (t.message ? ': \'' + t.message + '\'' : ''));
      	    }
      	};
      	
      	//we want this at global scope so outside callers can find it. In a more realistic implementation we
      	//should probably put it in a namespace.
      	window.getCoverageByLine = function() {
      		var key = null;
              var lines = null;
              //look for code coverage data    
              if (typeof _$jscoverage === 'object') {
      			for (key in _$jscoverage) {}
      			lines = _$jscoverage[key];
              } 
      
      		if (!lines) {
                 console.log('code coverage data is NOT available');
              } 
              		
              return { 'key': key, 'lines': lines };
         };
      
         QUnit.done = function(t) {
              var cvgInfo = getCoverageByLine();
              if (!!cvgInfo.lines) {
                  var testableLines = 0;
                  var testedLines = 0;
      			var untestableLines = 0;
                for (var lineIdx in cvgInfo.lines) {
      				var cvg = cvgInfo.lines[lineIdx];
      				if (typeof cvg === 'number') {
      					testableLines += 1;
      					if (cvg > 0) {
      						testedLines += 1;
      					}					
      				} else {
      					untestableLines += 1;
      				}
                  }     
                  var coverage = '' + Math.floor(100 * testedLines / testableLines) + '%';
                  
      			var result = document.getElementById('qunit-testresult');
      			if (result != null) {
      				result.innerHTML = result.innerHTML + ' ' + coverage + ' test coverage of ' + cvgInfo.key;
      			} else {
      				console.log('can\'t find test-result element to update');
      			}			
              }
         };  	
      }());
      
  6. Finally, modify run-qunit.js to load the original js file and produce a colorized version based on the coverage data we get from running the test against the instrumented copy of the js file. The new version should look like this:
    • /**
       * Wait until the test condition is true or a timeout occurs. Useful for waiting
       * on a server response or for a ui change (fadeIn, etc.) to occur.
       *
       * @param testFx javascript condition that evaluates to a boolean,
       * it can be passed in as a string (e.g.: "1 == 1" or "$('#bar').is(':visible')" or
       * as a callback function.
       * @param onReady what to do when testFx condition is fulfilled,
       * it can be passed in as a string (e.g.: "1 == 1" or "$('#bar').is(':visible')" or
       * as a callback function.
       * @param timeOutMillis the max amount of time to wait. If not specified, 3 sec is used.
       */
      function waitFor(testFx, onReady, timeOutMillis) {
          var maxtimeOutMillis = timeOutMillis ? timeOutMillis : 3001, //< Default Max Timeout is ~3s
              start = new Date().getTime(),
              condition = false,
              interval = setInterval(function() {
                  if ( (new Date().getTime() - start < maxtimeOutMillis) && !condition ) {
                      // If not time-out yet and condition not yet fulfilled
                      condition = (typeof(testFx) === "string" ? eval(testFx) : testFx()); //< defensive code
                  } else {
                      if(!condition) {
                          // If condition still not fulfilled (timeout but condition is 'false')
                          console.log("'waitFor()' timeout");
                          phantom.exit(1);
                      } else {
                          // Condition fulfilled (timeout and/or condition is 'true')
                          console.log("'waitFor()' finished in " + (new Date().getTime() - start) + "ms.");
                          typeof(onReady) === "string" ? eval(onReady) : onReady(); //< Do what it's supposed to do once the condition is fulfilled
                          clearInterval(interval); //< Stop this interval
                      }
                  }
              }, 100); //< repeat check every 100ms
      };
      
      
      if (phantom.args.length !== 2) {
          console.log('Usage: run-qunit.js URL basedir');
          phantom.exit(1);
      }
      
      var fs = require('fs');
      var page = require('webpage').create();
      
      // Route "console.log()" calls from within the Page context to the main Phantom context (i.e. current "this")
      page.onConsoleMessage = function(msg) {
          console.log(msg);
      };
      
      var openPath = phantom.args[0].replace(/^.*(\\|\/)/, '');
      var basedir = phantom.args[1];
      var coverageBase = fs.read(basedir + fs.separator + 'coverageBase.htm');
      
      page.open(openPath, function(status){
          if (status !== "success") {
              console.log("Unable to access network");
              phantom.exit(1);
          } else {
              waitFor(function(){
                  return page.evaluate(function(){
                      var el = document.getElementById('qunit-testresult');
                      if (el && el.innerText.match('completed')) {
                          return true;
                      }
                      return false;
                  });
              }, function(){
      			//BEGIN MODIFIED: output colorized code coverage
      			//reach into page context and pull out coverage info. stringify to pass context boundaries.
      			var coverageInfo = JSON.parse(page.evaluate(function() { return JSON.stringify(getCoverageByLine()); }));
      			var lineCoverage = coverageInfo.lines;
      			var originalFile = basedir + fs.separator + coverageInfo.key;
      			var fileLines = readFileLines(originalFile);
      			
                  var colorized = '';
                  
      			console.log('lines=' + JSON.stringify(lineCoverage));
                for (var idx=0; idx < fileLines.length; idx++) { 
                    //+1: coverage lines count from 1.
                    var cvg = lineCoverage[idx + 1];
                    var hitmiss = '';
                    if (typeof cvg === 'number') {
                        hitmiss = ' ' + (cvg>0 ? 'hit' : 'miss');
                    } else {
                        hitmiss = ' ' + 'undef';
                    }
                    //escape markup characters so the source renders as text
                    var htmlLine = fileLines[idx].replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');
                    colorized += '<div class="code' + hitmiss + '">' + htmlLine + '</div>\n';
                }
                  colorized = coverageBase.replace('COLORIZED_LINE_HTML', colorized);
                  
                  var coverageOutputFile = phantom.args[0].replace('.test.htm', '.coverage.htm');
                  fs.write(coverageOutputFile, colorized, 'w');
                  
                  console.log('Coverage for ' + coverageInfo.key + ' in ' + coverageOutputFile);			
      			//END MODIFIED
      		
                  var failedNum = page.evaluate(function(){
                      var el = document.getElementById('qunit-testresult');
                      console.log(el.innerText);
                      try {
                          return el.getElementsByClassName('failed')[0].innerHTML;
                      } catch (e) { }
                      return 10000;
                  });
                  phantom.exit((parseInt(failedNum, 10) > 0) ? 1 : 0);
              });
          }
      });
      
      //MODIFIED: add new fn
      function readFileLines(filename) {
          var stream = fs.open(filename, 'r');
          var lines = [];
          var line;
          while (!stream.atEnd()) {
              lines.push(stream.readLine());
          }
          stream.close();
          
          return lines;
      }  
      
      
  7. Run 'ant'; you should see output similar to the earlier successful run, with the jstest section now also including a 'lines=...' coverage dump and a 'Coverage for ... in ...' line from run-qunit.js.

  8. Open \jsunit\target\testhtm\testme.test.htm in a browser; you should see the usual QUnit results page, with the coverage percentage now appended to the test summary line.

  9. Open \jsunit\target\testhtm\testme.coverage.htm in a browser; you should see the source of testme.js colorized line by line (red for untested, green for tested, light green for non-executable lines).

So where does that leave us?
We have clearly demonstrated that we can accomplish some important things:


  • Write unit tests for Javascript
  • Run unit tests for Javascript in a command line build
  • Index Javascript files for code coverage
  • Output coverage percentage to the test runner (QUnit html file)
  • Render a colorized version of the Javascript under test clearly indicating which lines are/aren't being tested
I think this is awesome! Bear in mind that in a real version we would of course make numerous refinements to this rather basic implementation; what we have is a proof of concept, not by any stretch of the imagination an implementation ready for a team to consume.