Friday, October 14, 2011

Integrating JavaScript tests into a CLI build

Wherein we walk through a basic setup for running JavaScript unit tests on the command line. After some initial investigation (here) I didn't find time to get back to JavaScript unit testing until recently. I have now managed to get JavaScript unit tests running fairly gracefully in a command-line build at work; here is an outline of how, simplified from the "real" implementation to highlight the basics. Fortunately a great deal of the work has already been done for us; it's always nice when it turns out that way.

We are going to run everything off the filesystem to avoid having our tests impacted by external influences.

Part 1: Basic test setup
  1. Create a directory to house JavaScript unit test files; we will refer to this as \jsunit henceforth when giving paths. 
  2. Download qunit.js and qunit.css into \jsunit
  3. Download run-qunit.js into \jsunit
  4. Create a file testme.js in \jsunit with the following content
    /**
     * var-args; adds up all arguments and returns sum
     */
    function add() {
    }
    
  5. Create a file testme.test.htm in \jsunit with the following content
    • Note we are using local filesystem paths to load all content; we have no external dependencies
    • <!DOCTYPE html>
      <html>
      <head>
      	<!-- we need QUnit as a test runner -->
          <link rel="stylesheet" href="qunit.css" type="text/css" media="screen" />
          <script src="qunit.js"></script>
      	
      	<!-- we'd like to have the file we're going to test -->
          <script src="testme.js"></script>
      	
      	<!-- and finally let's write some tests -->
      	<script>
      		console.log("test time baby");
      
      		test("add is defined", function() {
      			equals(typeof window.add, "function", "add isn't a function :(");
      		});
      	</script>
          
      </head>
      <body>
      	 <h1 id="qunit-header">QUnit Tests</h1>
      	 <h2 id="qunit-banner"></h2>
      	 <div id="qunit-testrunner-toolbar"></div>
      	 <h2 id="qunit-userAgent"></h2>
      	 <ol id="qunit-tests"></ol>
      	 <div id="qunit-fixture"></div>    
      </body>
      </html>
      
  6. Download PhantomJS (1.3.0 at time of writing)
    • For the example PhantomJS commands I will assume it is on the PATH (e.g. phantomjs args); use the fully qualified path if not (e.g. C:\where\phantom\is\phantomjs args)
  7. Open testme.test.htm in a browser; it should look like this:

  8. Open a command prompt, navigate to \jsunit and run phantomjs run-qunit.js testme.test.htm
    • Output should be similar to:
      test time baby
      'waitFor()' finished in 211ms.
      Tests completed in 57 milliseconds.
      1 tests of 1 passed, 0 failed.
      
    • Note we don't yet see any per-test "pass" / "fail" style output
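Before wiring this into a build, it's worth pinning down the contract that the doc comment in testme.js describes, since the tests we add next will eventually force us to implement it. A quick sketch of the expected behaviour (assuming, reasonably, that an empty call sums to 0):

    // Expected behaviour of the (deliberately unimplemented) var-args add,
    // per its doc comment; we won't actually write the body until Part 3.
    console.log(add(1, 1));    // 2
    console.log(add(1, 2, 3)); // 6
    console.log(add());        // 0 (no arguments: the empty sum)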
Part 2: CLI build integration prep
So far so good; now we need to get set up to run in a CLI build. There are a couple of things we'd like here, most of which are already implemented in run-qunit.js:
  1. Output each test pass/fail
  2. Output log messages from tests to the console
    1. This "just works" courtesy of run-qunit.js, yay!
  3. Exit with non-zero error code if tests fail
    1. This makes it easy for the build to detect failure and do something in response; for example an Ant build could simply set failonerror
    2. This "just works" courtesy of run-qunit.js, yay!
We just have to set up output of test pass/fail information to the console. We'll add a test that fails to show what that looks like. Proceed as follows:
  1. Create a file test-support.js in \jsunit with the following content:
    //create a scope so we don't pollute global
    (function() {  
       var testName;
       
       //arg: { name }
    	QUnit.testStart = function(t) {
    	    testName = t.name;
    	};
    	
    	//arg: { name, failed, passed, total }
    	QUnit.testDone = function(t) {
    	    console.log('Test "' + t.name + '" completed: ' + (0 === t.failed ? 'pass' : 'FAIL'))
    	};
    	
    	//{ result, actual, expected, message }
    	QUnit.log = function(t) {
    	    if (!t.result) {
    	        console.log('Test "' + testName + '" assertion failed. Expected <' + t.expected + '> Actual <' + t.actual + '>' + (t.message ? ': \'' + t.message + '\'' : ''));
    	    }
    	};
    }());
    
  2. Edit testme.test.htm to pull in test-support.js and add a test that will currently fail
    • <!DOCTYPE html>
      <html>
      <head>
      	<!-- we need QUnit as a test runner -->
          <link rel="stylesheet" href="qunit.css" type="text/css" media="screen" />
          <script src="qunit.js"></script>
      	
      	<!-- where would our tests be without support! -->
      	<script src="test-support.js"></script>
      	
      	<!-- we'd like to have the file we're going to test -->
          <script src="testme.js"></script>
      	
      	<!-- and finally let's write some tests -->
      	<script>
      	
      		test("add is defined", function() {
      			equals(typeof window.add, "function", "add isn't a function :(");
      		});
      		
      		test("add 1+1", function() {
      			equals(add(1, 1), 2);
      		});		
      	</script>
          
      </head>
      <body>
      	 <h1 id="qunit-header">QUnit Tests</h1>
      	 <h2 id="qunit-banner"></h2>
      	 <div id="qunit-testrunner-toolbar"></div>
      	 <h2 id="qunit-userAgent"></h2>
      	 <ol id="qunit-tests"></ol>
      	 <div id="qunit-fixture"></div>    
      </body>
      </html>
      
  3. Open a command prompt, navigate to \jsunit and run phantomjs run-qunit.js testme.test.htm
    • Output should be similar to:
      Test "add is defined" completed: pass
      Test "add 1+1" assertion failed. Expected <2> Actual <undefined>
      Test "add 1+1" completed: FAIL
      'waitFor()' finished in 209ms.
      Tests completed in 70 milliseconds.
      1 tests of 2 passed, 1 failed.
    • If you print the exit code (echo %ERRORLEVEL% on Windows) you should get a 1, indicating we have fulfilled the 'exit with non-zero exit code on failure' requirement :)
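A compatibility note before we move on: the assignment-style hooks in test-support.js (QUnit.testDone = ..., QUnit.log = ...) and the equals assertion match the QUnit of this vintage. Later QUnit releases register the same hooks as callbacks and rename equals to equal, so a port might look roughly like this (an untested sketch against a newer QUnit, not what we use here):

    // Sketch assuming a newer QUnit where hooks are registered rather than assigned.
    QUnit.testDone(function(t) {
        console.log('Test "' + t.name + '" completed: ' + (0 === t.failed ? 'pass' : 'FAIL'));
    });

    QUnit.log(function(t) {
        if (!t.result) {
            console.log('Assertion failed. Expected <' + t.expected + '> Actual <' + t.actual + '>');
        }
    });

    // ...and in the test htm, equals becomes equal:
    // test("add is defined", function() {
    //     equal(typeof window.add, "function", "add isn't a function :(");
    // });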
Part 3: Ant integration
At long last we are ready to integrate this mess into a build. For this example I will use Ant and will assume Ant is on PATH. At time of writing I am using Ant 1.8.2.

  1. Create a phantomjs.bat file in \jsunit with the following content
    @echo off
    C:\where\you\put\phantom\phantomjs.exe %*
    
    • Alternately create phantomjs.sh with equivalent functionality if on *nix
  2. Create a build.xml file in \jsunit with the following content
    • <?xml version="1.0" encoding="UTF-8"?>
      <project name="jsunittests" basedir="." default="main">
      	<property name="builddir" location="${basedir}/target"/>
      	
      	<condition property="phantom.filename" value="phantomjs.bat"><os family="windows"/></condition>
      	<condition property="phantom.filename" value="phantomjs.sh"><os family="unix"/></condition>   
      	
      	<target name="clean">
      		<delete dir="${builddir}"/>
      	</target>
      	
      	<target name="prep">
      		<mkdir dir="${builddir}"/>
      	</target>
      	
      	<target name="jstest">
            <!--Run all tests w/phantom, fail if tests fail. Execute all files w/extension .test.htm. -->
            <apply executable="${phantom.filename}" failonerror="true" dir="${basedir}" relative="true">
               <arg value="run-qunit.js"/>
               <fileset dir="${basedir}">
                  <include name="**/*.test.htm" />
               </fileset>
            </apply>			
      	</target>
      	
      	<target name="main" depends="clean, prep, jstest">
      	</target>
      </project>
      
  3. Run 'ant'; you should get output similar to the following (yes, it's supposed to fail; remember we set up a test that fails on purpose)
    • Buildfile: build.xml
      
      clean:
         [delete] Deleting directory C:\Code\jsunit-trial\target
      
      prep:
          [mkdir] Created dir: C:\Code\jsunit-trial\target
      
      jstest:
          [apply] Test "add is defined" completed: pass
          [apply] Test "add 1+1" assertion failed. Expected <2> Actual 
          [apply] Test "add 1+1" completed: FAIL
          [apply] 'waitFor()' finished in 218ms.
          [apply] Tests completed in 58 milliseconds.
          [apply] 1 tests of 2 passed, 1 failed.
      
      BUILD FAILED
      C:\Code\jsunit-trial\build.xml:18: apply returned: 1
      
      Total time: 0 seconds
      
  4. Edit testme.js just enough to fix the test
    • /**
       * var-args; adds up all arguments and returns sum
       */
      function add() {
      	var sum = 0;
      	for (var i=0; i<arguments.length; i++)
      		sum += arguments[i];
      	return sum;
      }
  5. Run 'ant'; you should get output similar to the following
    • Buildfile: build.xml
      
      clean:
         [delete] Deleting directory C:\Code\jsunit-trial\target
      
      prep:
          [mkdir] Created dir: C:\Code\jsunit-trial\target
      
      jstest:
          [apply] Test "add is defined" completed: pass
          [apply] Test "add 1+1" completed: pass
          [apply] 'waitFor()' finished in 214ms.
          [apply] Tests completed in 59 milliseconds.
          [apply] 2 tests of 2 passed, 0 failed.
      
      main:
      
      BUILD SUCCESSFUL
      Total time: 0 seconds
Pretty sweet: we've got JavaScript tests running in an Ant build as a first-class citizen. Now if you break my JavaScript, my Continuous Integration server will let me know!

Part 4: Code coverage
Finally we are ready to add some code coverage. We are going to get it by instrumenting our js files with JSCoverage, running our QUnit tests such that the relative paths resolve to the instrumented copies, and then using the PhantomJS file system APIs to create a colorized copy of the original js file that visually displays coverage. We'll also do a quick and dirty coverage-percentage readout in the QUnit results.
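To make the later code easier to follow, here is a rough sketch of the data an instrumented page exposes and how we will read it. The exact shape is an assumption based on how the code below consumes it: JSCoverage defines a global _$jscoverage object keyed by file name, where each value is a sparse array of per-line execution counts and non-executable lines have no entry.

    // Hypothetical example of JSCoverage data for a single instrumented file.
    var _$jscoverage = { 'testme.js': [undefined, 1, 1, 3, 0, undefined] };

    // Computing a rough coverage percentage, much as QUnit.done will do later on.
    var lines = _$jscoverage['testme.js'];
    var testable = 0, tested = 0;
    for (var i = 0; i < lines.length; i++) {
        if (typeof lines[i] === 'number') {
            testable += 1;
            if (lines[i] > 0) { tested += 1; }
        }
    }
    console.log(Math.floor(100 * tested / testable) + '% coverage'); // 75% for this sample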


  1. Download JSCoverage 0.5.1
  2. Create a jscoverage.bat file in \jsunit with the following content
    @echo off
    C:\where\you\put\jscoverage\jscoverage.exe %*
    
  3. Create a template file for coverage information named coverageBase.htm in \jsunit
    • <!DOCTYPE html>
      <html>
      <head>
          <style>
              .code {
                  white-space: pre;
                  font-family: courier new;
                  width: 100%;            
              }
              
              .miss {
                  background-color: #FF0000;
              }
              
              .hit {
                  background-color: #94FF7C;
              }
              
              .undef {
                  background-color: #AFFF9E;
              }        
          </style>
      </head>
      <body>
      
      COLORIZED_LINE_HTML
      
      </body>
      </html>
      
  4. Update build.xml to perform a few new steps
    1. Create a \target\testjs\js directory and copy our js files into it
    2. Instrument our js files for code coverage, putting the instrumented version into \target\testjs\jsinstrumented
    3. Copy *.test.htm into \target\testhtm
    4. Copy base resources to run tests (run-qunit.js, qunit.js, qunit.css) into \target\testhtm
    5. Copy the instrumented js files into \target\testhtm
      1. Note that because the *.test.htm QUnit html files reference the js under test via relative paths, they will now resolve to the instrumented version when we run them out of \target\testhtm
    6. Run PhantomJS on *.test.htm in \target\testhtm
    7. The updated build.xml looks like this:
    • <?xml version="1.0" encoding="UTF-8"?>
      <project name="jsunittests" basedir="." default="main">
      	<property name="builddir" location="${basedir}/target"/>
      	<property name="jstestdir" location="${builddir}/testjs"/>
      	<property name="jsdir" location="${jstestdir}/js"/>
      	<property name="jsinstrumenteddir" location="${jstestdir}/jsinstrumented"/>
      	<property name="testhtmdir" location="${builddir}/testhtm"/>
      	
      	<condition property="phantom.filename" value="phantomjs.bat"><os family="windows"/></condition>
      	<condition property="phantom.filename" value="phantomjs.sh"><os family="unix"/></condition>   
      	
      	<property name="jscoverage.filename" value="jscoverage.bat" />
      	
      	<target name="clean">
      		<delete dir="${builddir}"/>
      	</target>
      	
      	<target name="prep">
      		<mkdir dir="${jsdir}"/>
      		<mkdir dir="${jsinstrumenteddir}"/>		
      		<mkdir dir="${testhtmdir}"/>
      		
      		<!-- copy non-test js files to target so we can mess with 'em. how we select which files may vary; for this 
      			 example just pick the one file we are testing.-->
      		<copy todir="${jsdir}">
      			<fileset dir="${basedir}">
      				<include name="testme.js" />
      			</fileset>
      		</copy>
      				
      		<!-- run jscoverage to produce a version of the file instrumented for code coverage -->
      		<exec executable="${jscoverage.filename}" failonerror="true">
      			<arg value="${jsdir}"/>
      			<arg value="${jsinstrumenteddir}"/>
      		</exec>   		
      		
      		<!-- copy our test htm files; their relative script paths will pick up the instrumented js we place alongside them -->
      		<copy todir="${testhtmdir}">
      			<fileset dir="${basedir}">
      				<include name="**/*.test.htm" />
      			</fileset>
      		</copy>		
      		
      		<!-- copy the instrumented js into testhtmdir so the test htm files resolve it via the same relative paths as when executed directly -->
      		<copy todir="${testhtmdir}">
      			<fileset dir="${jsinstrumenteddir}">
      				<include name="**/*.js" />
      				<exclude name="jscoverage.js"/>
      			</fileset>
      		</copy>				
      		<!-- copy the remaining core test resources (QUnit, the runner, test support) as well -->
      		<copy todir="${testhtmdir}">
      			<fileset dir="${basedir}">
      				<include name="test-support.js" />
      				<include name="run-qunit.js" />
      				<include name="qunit.css" />
      				<include name="qunit.js" />
      			</fileset>
      		</copy>				
      	</target>
      	
      	<target name="jstest">
            <!--Run all tests w/phantom, fail if tests fail. Execute all files w/extension .test.htm. -->
            <apply executable="${basedir}/${phantom.filename}" failonerror="true" dir="${testhtmdir}" relative="false">
               <arg value="run-qunit.js"/>
      		 <srcfile/>
      		 <arg value="${basedir}"/>
               <fileset dir="${testhtmdir}">
                  <include name="**/*.test.htm" />
               </fileset>
            </apply>			
      	</target>
      	
      	<target name="main" depends="clean, prep, jstest">
      	</target>
      </project>
      
  5. Modify our test-support.js to look for jscoverage data, tally lines hit, missed, and irrelevant (non-executable), and append the resulting coverage percentage to the QUnit results. Also expose a function that a caller outside of page context can use to access the coverage information. The new version should look like this:
    • //create a scope so we don't pollute global
      (function() {  
         var testName;
         
         //arg: { name }
      	QUnit.testStart = function(t) {
      	    testName = t.name;
      	};
      	
      	//arg: { name, failed, passed, total }
      	QUnit.testDone = function(t) {
      	    console.log('Test "' + t.name + '" completed: ' + (0 === t.failed ? 'pass' : 'FAIL'))
      	};
      	
      	//{ result, actual, expected, message }
      	QUnit.log = function(t) {
      	    if (!t.result) {
      	        console.log('Test "' + testName + '" assertion failed. Expected <' + t.expected + '> Actual <' + t.actual + '>' + (t.message ? ': \'' + t.message + '\'' : ''));
      	    }
      	};
      	
      	//we want this at global scope so outside callers can find it. In a more realistic implementation we
      	//should probably put it in a namespace.
      	window.getCoverageByLine = function() {
      		var key = null;
              var lines = null;
              //look for code coverage data    
              if (typeof _$jscoverage === 'object') {
      			//the empty-bodied loop walks the coverage object and leaves 'key' pointing at the (single) instrumented file
      			for (key in _$jscoverage) {}
      			lines = _$jscoverage[key];
              } 
      
      		if (!lines) {
                 console.log('code coverage data is NOT available');
              } 
              		
              return { 'key': key, 'lines': lines };
         };
      
         QUnit.done = function(t) {
              var cvgInfo = getCoverageByLine();
              if (!!cvgInfo.lines) {
                  var testableLines = 0;
                  var testedLines = 0;
      			var untestableLines = 0;
                for (var lineIdx in cvgInfo.lines) {
      				var cvg = cvgInfo.lines[lineIdx];
      				if (typeof cvg === 'number') {
      					testableLines += 1;
      					if (cvg > 0) {
      						testedLines += 1;
      					}					
      				} else {
      					untestableLines += 1;
      				}
                  }     
                  var coverage = '' + Math.floor(100 * testedLines / testableLines) + '%';
                  
      			var result = document.getElementById('qunit-testresult');
      			if (result != null) {
      				result.innerHTML = result.innerHTML + ' ' + coverage + ' test coverage of ' + cvgInfo.key;
      			} else {
      				console.log('can\'t find test-result element to update');
      			}			
              }
         };  	
      }());
      
  6. Finally, modify run-qunit.js to load the original js file and produce a colorized version based on the coverage data we get by running the test against the version of the js file instrumented for coverage. The new version should look like this:
    • /**
       * Wait until the test condition is true or a timeout occurs. Useful for waiting
       * on a server response or for a ui change (fadeIn, etc.) to occur.
       *
       * @param testFx javascript condition that evaluates to a boolean,
       * it can be passed in as a string (e.g.: "1 == 1" or "$('#bar').is(':visible')" or
       * as a callback function.
       * @param onReady what to do when testFx condition is fulfilled,
       * it can be passed in as a string (e.g.: "1 == 1" or "$('#bar').is(':visible')" or
       * as a callback function.
       * @param timeOutMillis the max amount of time to wait. If not specified, 3 sec is used.
       */
      function waitFor(testFx, onReady, timeOutMillis) {
          var maxtimeOutMillis = timeOutMillis ? timeOutMillis : 3001, //< Default Max Timeout is 3s
              start = new Date().getTime(),
              condition = false,
              interval = setInterval(function() {
                  if ( (new Date().getTime() - start < maxtimeOutMillis) && !condition ) {
                      // If not time-out yet and condition not yet fulfilled
                      condition = (typeof(testFx) === "string" ? eval(testFx) : testFx()); //< defensive code
                  } else {
                      if(!condition) {
                          // If condition still not fulfilled (timeout but condition is 'false')
                          console.log("'waitFor()' timeout");
                          phantom.exit(1);
                      } else {
                          // Condition fulfilled (timeout and/or condition is 'true')
                          console.log("'waitFor()' finished in " + (new Date().getTime() - start) + "ms.");
                          typeof(onReady) === "string" ? eval(onReady) : onReady(); //< Do what it's supposed to do once the condition is fulfilled
                          clearInterval(interval); //< Stop this interval
                      }
                  }
              }, 100); //< repeat check every 100ms
      };
      
      
      if (phantom.args.length !== 2) {
          console.log('Usage: run-qunit.js URL basedir');
          phantom.exit(1);
      }
      
      var fs = require('fs');
      var page = require('webpage').create();
      
      // Route "console.log()" calls from within the Page context to the main Phantom context (i.e. current "this")
      page.onConsoleMessage = function(msg) {
          console.log(msg);
      };
      
      var openPath = phantom.args[0].replace(/^.*(\\|\/)/, '');
      var basedir = phantom.args[1];
      var coverageBase = fs.read(basedir + fs.separator + 'coverageBase.htm');
      
      page.open(openPath, function(status){
          if (status !== "success") {
              console.log("Unable to access network");
              phantom.exit(1);
          } else {
              waitFor(function(){
                  return page.evaluate(function(){
                      var el = document.getElementById('qunit-testresult');
                      if (el && el.innerText.match('completed')) {
                          return true;
                      }
                      return false;
                  });
              }, function(){
      			//BEGIN MODIFIED: output colorized code coverage
      			//reach into page context and pull out coverage info. stringify to pass context boundaries.
      			var coverageInfo = JSON.parse(page.evaluate(function() { return JSON.stringify(getCoverageByLine()); }));
      			var lineCoverage = coverageInfo.lines;
      			var originalFile = basedir + fs.separator + coverageInfo.key;
      			var fileLines = readFileLines(originalFile);
      			
                  var colorized = '';
                  
      			console.log('lines=' + JSON.stringify(lineCoverage));
                  for (var idx=0; idx < lineCoverage.length; idx++) { 
                      //+1: coverage lines count from 1.
                      var cvg = lineCoverage[idx + 1];
                      var hitmiss = '';
                      if (typeof cvg === 'number') {
                          hitmiss = ' ' + (cvg>0 ? 'hit' : 'miss');
                      } else {
                          hitmiss = ' ' + 'undef';
                      }
                var htmlLine = fileLines[idx].replace(/</g, '&lt;').replace(/>/g, '&gt;');
                      colorized += '<div class="code' + hitmiss + '">' + htmlLine + '</div>\n';
                  };        
                  colorized = coverageBase.replace('COLORIZED_LINE_HTML', colorized);
                  
                  var coverageOutputFile = phantom.args[0].replace('.test.htm', '.coverage.htm');
                  fs.write(coverageOutputFile, colorized, 'w');
                  
                  console.log('Coverage for ' + coverageInfo.key + ' in ' + coverageOutputFile);			
      			//END MODIFIED
      		
                  var failedNum = page.evaluate(function(){
                      var el = document.getElementById('qunit-testresult');
                      console.log(el.innerText);
                      try {
                          return el.getElementsByClassName('failed')[0].innerHTML;
                      } catch (e) { }
                      return 10000;
                  });
                  phantom.exit((parseInt(failedNum, 10) > 0) ? 1 : 0);
              });
          }
      });
      
      //MODIFIED: add new fn
      function readFileLines(filename) {
          var stream = fs.open(filename, 'r');
          var lines = [];
          while (!stream.atEnd()) {
              lines.push(stream.readLine());
          }
          stream.close();
          
          return lines;
      }  
      
      
  7. Run 'ant'; you should see output similar to:

  8. Open \jsunit\target\testhtm\testme.test.htm in a browser; you should see something similar to this (note coverage % appears):

  9. Open \jsunit\target\testhtm\testme.coverage.htm in a browser; you should see something similar to this (red for untested, green for tested, light green for non-executable lines):

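One detail from the modified run-qunit.js worth calling out: page.evaluate runs its function inside the page and can only hand simple, serializable values back to the Phantom side, which is why getCoverageByLine's result is stringified inside the page and parsed again outside (the "stringify to pass context boundaries" comment). In isolation the pattern looks like this:

    // Passing structured data out of page.evaluate: build it in page context,
    // serialize it to a string, and parse it back on the PhantomJS side.
    var coverageInfo = JSON.parse(page.evaluate(function() {
        return JSON.stringify(getCoverageByLine());
    }));
    console.log('coverage key: ' + coverageInfo.key);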
So where does that leave us?
We have clearly demonstrated that we can accomplish some important things:

  • Write unit tests for JavaScript
  • Run unit tests for JavaScript in a command line build
  • Instrument JavaScript files for code coverage
  • Output the coverage percentage to the test runner (QUnit html file)
  • Render a colorized version of the JavaScript under test clearly indicating which lines are/aren't being tested
I think this is awesome! Bear in mind that in a real version we would of course make numerous refinements to this rather basic implementation; what we have is a proof of concept, not by any stretch of the imagination an implementation ready for a team to consume.

Comments:

Nicu Girleanu said...

have you ever tried writing the add function like this:

function add() {
    var sum = 0;
    for (var i in arguments) {
        sum += arguments[i];
    }
    return sum;
}

The test fails when I run it from the CL. In the browser the test is OK.

Eric said...

Check this out, it adds sweet colors to the pass/fail output. Totally ganked it from https://gist.github.com/1588423, but it's still nice.

/*global QUnit */

//create a scope so we don't pollute global
(function(QUnit) {
    var testName
      , displayPass = '\033[1;92mPass\033[0m'
      , displayFail = '\033[1;31mFail\033[0m';

    //arg: { name }
    QUnit.testStart = function(t) {
        testName = t.name;
    };

    //arg: { name, failed, passed, total }
    QUnit.testDone = function(t) {
        console.log((0 === t.failed ? displayPass : displayFail) + ': ' + t.name);
    };

    //{ result, actual, expected, message }
    QUnit.log = function(t) {
        if (!t.result) {
            console.log('  ' + testName + ' assertion failed. Expected <' + t.expected +
                '> Actual <' + t.actual + '>' + (t.message ? ': \'' + t.message + '\'' : ''));
        }
    };
}(QUnit));

Eric said...

Actually, this too. You want it to handle .equal and .ok both, I think:

/*global QUnit */

//create a scope so we don't pollute global
(function(QUnit) {
    var testName
      , displayPass = '\033[1;92mPass\033[0m'
      , displayFail = '\033[1;31mFail\033[0m';

    //arg: { name }
    QUnit.testStart = function(t) {
        testName = t.name;
    };

    //arg: { name, failed, passed, total }
    QUnit.testDone = function(t) {
        console.log((0 === t.failed ? displayPass : displayFail) + ': ' + t.name);
    };

    //{ result, actual, expected, message }
    QUnit.log = function(t) {
        if (!t.result) {
            if (t.expected) { //Only print this if it's there (works for 'equal' assertions)
                console.log('  ' + 'Expected "' + t.expected + '" Actual "' + t.actual + '"');
            }
            console.log('  ' + testName + ' assertion failed: ' + (t.message ? t.message : ''));
        }
    };
}(QUnit));

http://www.simpleascouldbe.com

Eric said...

Great instructions, thank you.

I ended up getting this to work in a way that is fairly elegant for our multi-platform maven system. I followed your instructions, with these enhancements to promote automation:

* I put installs of phantom-win and phantom-osx in the source tree so that run scripts could always depend on their location.
* I used maven-exec-plugin to run the .bat or .sh file, depending on the environment (developer needs to manually set the path to the batch file in ~/.m2/settings.xml, but that's a one time thing). This is attached to the integration-test phase.

This makes it so that after SCM update, the developer only needs to change the path to the executable in order for this to be automated.

http://www.simpleascouldbe.com
