Attempting to use Mocha & Chai to unit test ES6.

In this post I will cover using Mocha (a JS test framework) and Chai (for BDD-style assertions) to unit test ES6 JavaScript in VS Code.

I started working on a small side project, for no reason other than to play with ES6+. It’s a(nother) relatively simple toast library written in as much vanilla JS as possible to avoid reliance on libraries & packages.

I got the code working, but I couldn't prove that the functions worked. I have used QUnit in the past to test JavaScript, but if I am completely honest my JavaScript testing knowledge is a bit lacking.

My aim is to get some unit tests for one of my main classes, testing directly against the ES6 and not against the compiled ES5 code. I want the tests to be clear about what they are doing. What I am doing is not new at all, nor is the library! I just wanted to keep notes of how I achieved this first time around.

Disclaimer: This is by no means a comprehensive guide or walkthrough, just the results of me messing about to see if I can get the outcome I wanted whilst learning something new!

Enter, Mocha

I decided to use Mocha for my unit testing, chosen purely because it seemed to work well with ES6 code (via Babel). Later I will go into how I also used Chai alongside it to provide much nicer, fluent assertions using BDD-style syntax.

First of all, I had to install Mocha.

> npm install --save-dev mocha

Then under a new root folder of “test” I created a bread.spec.js – where “bread” here is the name of the class I am testing.

At this point it is fairly easy to create a simple test, like so.

import {Bread} from "../src/bread";
var assert = require('assert');

describe('Fluent methods', function() {
  describe('Title set is not called', function() {
    it('should set the title correctly (null)', function() {
      let options = [ ... code to get options ... ];
      let b = new Bread(0, "Foo", options);
      assert.equal(b.Title, null);
    });
  });
});

I then added the appropriate script to package.json to allow us to run the tests.

 "test": "mocha --require @babel/polyfill --require @babel/register './test/**/*.spec.js'"

Which is run with:

npm run-script test
(Screenshot: VS Code terminal showing the output of the above command – a single passing unit test.)

This script runs Mocha on all files under the test directory whose names end with “.spec.js”. The two --require flags enable Mocha to run the ES6 directly rather than the transpiled version. Failing to provide these requires means Mocha will not run at all, as it cannot parse the ES6.
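For the two requires above to work, the Babel packages need to be installed as dev dependencies, and Babel itself needs a preset configured. A minimal setup, assuming @babel/preset-env (which is what I'd expect a project like this to use):

npm install --save-dev @babel/core @babel/register @babel/polyfill @babel/preset-env

And then a .babelrc in the project root:

{
  "presets": ["@babel/preset-env"]
}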

Using Chai for BDD syntax

In the above, I import my class then create a “test set”. In this test set I then have a single test which checks if the title gets automatically set. It's fairly easy to ascertain what the test does, but it could be clearer. This is where I decided to use Chai. Chai allows me to write a BDD-style test which reads closer to plain English. Mocha does support some of this (at the time of writing) but Chai is much closer to the BDD-style syntax I was used to.

To use Chai I need to install the package:

npm install --save-dev chai

Then import “expect” from the framework, and refactor the method so it looks a little like this:

import { expect } from "chai";
import {Bread} from "../src/bread";
describe("Fluent methods", () => {
    describe("Title set is not called", () => {
        it("should set the title correctly (null).", () => {
            var options = getValidOptions();            
            let b = new Bread(0,"Foo", options);
            expect(b.Title).to.equal(null);
        });
    });  
});

Running the tests will yield the same result as before, but now it's a lot more readable (in my opinion!).
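As a side note, the expect chains go well beyond simple equality. A few hypothetical examples against the same class (the property names here are invented purely for illustration):

expect(b.Title).to.be.null;
expect(b.Options).to.deep.equal(options);
expect(b).to.have.property("Position");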

Not a lot more to add really. Mocha and Chai both have great documentation to read through. The only difficulty I had was getting Mocha to run ES6 directly, as a lot of the information online for this was out of date (at least, what I found was…).

Update: I have also posted about debugging ES6 Mocha tests here.

Recursive folder comparison with PowerShell

The Issue

This post definitely isn’t “new” or revolutionary, but I was quite surprised to find the Compare-Object cmdlet in PowerShell, and I’m bound to forget it in the future…

As part of some recent roadmap work, we moved over to a new installer technology for some of our tooling. This came with some minor headaches, such as validating that we had harvested all the correct files. The first iteration of this was a manual check, which obviously is prone to human error – aside from being mind-numbing!

I didn’t really want to use a third-party tool. WinMerge can perform fantastic comparisons, but I wanted something quick and custom – ideally without spending longer than 10 minutes writing any code!

The first iteration was a recursive loop that pulled out all the file names (note: not the paths) into 2 separate text files. The only “nicety” was that I wrapped directory names in square brackets to give the output some organisation.

The downside of this is that it only really worked for my sample folder with a few items. In production, with thousands of files and nested folders, this was plain chaos. Also, I had to compare these files in a third-party tool like WinMerge anyway – defeating the point of doing this!

The final version of my script aimed to show only the differences (avoiding noise) and, ideally, which side each difference came from, using Compare-Object in PowerShell.

The Result

  • Do a recursive loop through the directory structure.
  • Output folder names as [Folder], and recursively dive down. This is a bit dirty, as I didn’t want the full path (harder to compare) but did want to differentiate when I dug down. YMMV.
  • Output file names, excluding some files I didn’t care about (like .tmp & .xml files).
  • Do this for folder A and folder B, storing the results in variables.
  • Run Compare-Object on these variables and output the result.
function GetFiles($path, [string[]]$excludedFiles)
{
    foreach ($item in Get-ChildItem $path)
    {
        # Skip anything matching an excluded pattern (e.g. *.xml, *.tmp)
        if ($excludedFiles | Where-Object { $item -like $_ }) { continue }

        # Wrap directory names in brackets to differentiate them from files
        if ($item -is [System.IO.DirectoryInfo]) {
            '[' + $item.Name + ']'
        } else {
            $item.Name
        }

        # Recurse into sub-directories
        if (Test-Path $item.FullName -PathType Container)
        {
            GetFiles $item.FullName $excludedFiles
        }
    }
}

$env1 = GetFiles -path "C:\folderA\" -excludedFiles "*.xml","*.tmp"
$env2 = GetFiles -path "C:\folderB\" -excludedFiles "*.xml","*.tmp"

Compare-Object -DifferenceObject $env1 -ReferenceObject $env2

Which provides output like:
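To illustrate with hypothetical file names – where “=>” means the item was only found in folder A (the difference object) and “<=” means it was only found in folder B (the reference object):

InputObject     SideIndicator
-----------     -------------
[NewFolder]     =>
extrafile.dll   =>
missing.dll     <=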

This could definitely be optimised and cleaned up, and YMMV massively.

Overall, a few minutes in PowerShell and I managed to save substantial time – and that was my only real goal!

My attempt at using SonarQube for static code analysis

This post covers my attempts to use SonarQube as a stand-alone install to perform static code analysis on a regular basis. This covers purely getting the tool working; maybe I will pick up how to use the data in a later post.

I will be doing this in a very narrow focus which is for the project I am currently working on which is .NET stack, with builds running in VSTS using MSBUILD.

SonarQube runs code analysis as solutions are being built and provides a web dashboard of code smells, security vulnerabilities, duplication and more. My aim is to use it to identify technical debt, as well as track whether that debt is reducing over time.

Note you can hook this into Azure DevOps fairly easily too, with a few clicks and less setup, but I wanted to host the tool on our own infrastructure at zero cost. I also believe you can use their cloud version for free if you are open source.

My aims are:

  • To get the self-hosted version of SQ installed/setup
  • Get it running against a local solution
  • Work out how to hook this into our VSTS build process (If possible)

Getting started

First of all, I downloaded and extracted the free self-hosted version of SQ (Community edition) and placed it on one of our build servers. This package is essentially a self-hosting application and, following the 2-minute getting started guide here, it’s genuinely quite easy to get the dashboard running within that 2 minutes (providing the system requirements are met – which looks to be just a recent Java JRE/JDK).

Following the above guide, and launching the shell/batch script of your choice, you can then navigate to http://localhost:9000 and see the SonarQube dashboard asking you to create a new project.
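For reference, on my Windows install the batch script lives under the extracted folder (the exact path may vary between versions):

C:\sonarqube\bin\windows-x86-64\StartSonar.bat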

When creating a new project you are prompted for a project key and display name. The key will be used for the integration, and the display name will be the name displayed on the dashboard.

Next up is the token. The token is used for authentication purposes when uploading analysis files and can be changed and revoked later. I just used the word “sausages” as an example, but when you click “generate” it will provide your token.

Next it will tell you how to configure your project for SQ. I am doing this against a .NET project (C#, JS, etc) so will continue with this example.

For a C# project which will be built using MSBUILD, you first need the “SonarScanner for MSBUILD”. The SonarScanner is the tool that performs the analysis by wrapping the build: a “begin” step runs before MSBUILD kicks in, and an “end” step collates the results and sends them to the server.

This tool can be placed anywhere, but the folder will need adding to the PATH on your Windows environment.
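As a sketch (the scanner folder below is hypothetical – use wherever you extracted it), this can be done from an elevated PowerShell prompt:

# Append the (hypothetical) scanner folder to the machine-level PATH
$scannerPath = "C:\tools\sonar-scanner-msbuild"
$current = [Environment]::GetEnvironmentVariable("Path", "Machine")
[Environment]::SetEnvironmentVariable("Path", "$current;$scannerPath", "Machine")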

At this point, we can use some PowerShell to test that it’s all hooked up correctly! (You can use pre/post build events too, but this is much simpler for testing.) In this example we will CD into the directory the SLN is in, start the scanner tool using our project name and key from earlier, build our solution file (as a rebuild) and finally end the scanning.

cd 'C:\path to your SLN'
SonarScanner.MSBuild.exe begin /k:"My-Project" /d:sonar.host.url="http://localhost:9000" /d:sonar.login="Your Key"
msbuild MySolution.sln /t:Rebuild
SonarScanner.MSBuild.exe end /d:sonar.login="Your Key"

Note that this assumes you have the scanner and MSBUILD in your PATH. If you do not, you can call the exes directly; the MSBUILD exe is located at C:\Program Files (x86)\Microsoft Visual Studio\<version>\<edition>\MSBuild\<version>\Bin\msbuild.exe. I believe SQ requires MSBUILD 12 or above – I am currently using 15.0.
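One PowerShell quirk worth noting: a path containing spaces has to be invoked with the call operator (&). Something like this, with the version/edition values below being purely illustrative:

& "C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\MSBuild\15.0\Bin\msbuild.exe" MySolution.sln /t:Rebuild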

If we run this, it will take a few moments to start and a variable amount of time to complete (it depends heavily on the size of your solution). At the very end you will see a line reading “Execution Success”, and if you still have your dashboard open you may see it update.

If you navigate back to http://localhost:9000 you should now see your project. Note that if your sln was particularly large you may just see a “background processing” message whilst it imports the analysis file.


This is good news! However, there is a bit of an ominous warning in the footer of the dashboard, which reads:

Embedded database should be used for evaluation purposes only. The embedded database will not scale, it will not support upgrading to newer versions of SonarQube, and there is no support for migrating your data out of it into a different database engine.

This is easily solved by giving SQ a backing database. You will, however, lose all your progress so far.

Moving beyond proof of concept.

So that we can use this in our build/deploy pipeline, I want to hook it into a database, install it onto one of our servers and finally have one of our CI builds run it!

The process is the same as above in terms of placing the extracted files onto the server, except that we also have to punch a hole in Windows Firewall for TCP port 9000 so it can be accessed remotely. Running the start batch script now will bring you to the same dashboard, with the same warning as before, so we will need to hook up a database.
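For the firewall hole, a one-liner from an elevated PowerShell prompt does the job (the rule name is just my choice):

New-NetFirewallRule -DisplayName "SonarQube" -Direction Inbound -Protocol TCP -LocalPort 9000 -Action Allow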

For the database I will be using MS SQL Server 2016. You can find documentation for other database types (MySQL, Oracle etc.) in the SQ documentation.

First I created an empty database on the SQL 2016 server named “SonarQube”, and also a new SQL user named “Sonar” who is a dbo on the SonarQube database.
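Roughly, as T-SQL (the password is a placeholder; note that the SQ docs call for a case-sensitive, accent-sensitive collation on the database):

-- Case-sensitive, accent-sensitive collation, as per the SonarQube docs
CREATE DATABASE SonarQube COLLATE SQL_Latin1_General_CP1_CS_AS;
GO
CREATE LOGIN Sonar WITH PASSWORD = 'thepassword';  -- placeholder password
GO
USE SonarQube;
CREATE USER Sonar FOR LOGIN Sonar;
ALTER ROLE db_owner ADD MEMBER Sonar;
GO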

Back in the SQ install, in the \sonarqube\conf folder is a sonar.properties file. In here we need to add the following:

sonar.jdbc.url=jdbc:sqlserver://myserver;databaseName=SonarQube
sonar.jdbc.username=sonar
sonar.jdbc.password=thepassword

Note: if, like me, your SQL instance is named like “server\instancename”, you will need to escape the backslash before the instance name, i.e. “server\\instancename”. The error that is generated does not point to this as the cause of SQ failing to launch, which was a pain!
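For example, with a hypothetical named instance, the property line becomes:

sonar.jdbc.url=jdbc:sqlserver://myserver\\myinstance;databaseName=SonarQube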

Now you should be able to launch SQ again and not see the banner in the footer. A new project should be created like before.

Next up, I wanted to have my VSTS build automatically run the analysis and process it. I wanted this to be part of the department’s CI builds, but due to the size of the project it was taking upwards of 30 minutes to complete, so we moved it to the nightly builds.

As we use TFS/VSTS in house, this guide will show working with self-hosted TFS. There are far more in-depth (and more useful!) guides in the SQ docs here. The steps to take before progressing are:

  • Download the SQ VSTS extension appropriate for your version of TFS/VSTS https://github.com/SonarSource/sonar-scanner-vsts/releases
  • Install extension to TFS
  • Add the extension to the collection.

Once the extension is installed, you should see 2 new build steps of:

  • SonarQube Scanner for MSBUILD – Begin Analysis
  • SonarQube Scanner for MSBUILD – End Analysis

Before you can use these, you will need to configure the SQ endpoint. To do this, you can either go to the collection’s administration page, then the “Services” tab, or add the “SonarQube Scanner For MSBUILD – Begin Analysis” build step to your VSTS build and then click “Manage” next to the server field. In here, you can click “New Service Endpoint” and then “SonarQube”.

When you add a new SQ endpoint, you configure it with a friendly name, the URL of the dashboard, and the token you set up for your project/user (much like the earlier PowerShell script used).

Now if you go back and edit your build definition, you will be able to use SonarQube with the Begin and End tasks either side of your build steps (again, much like the earlier PowerShell).

To set it up, select the SQ endpoint as configured earlier, then under Project Settings set the Project Key and Project Name appropriately (based on however the project in SQ was set up) and finally, under “Advanced”, check “Include full analysis report in the build summary”. There is no configuration required for the End Analysis build step.

Running it now will most likely result in a failure unless the machine which is hosting the build agent has the correct software installed.

Much like we did for the local version, we need to download and set up the SonarScanner and add it to the PATH. The server which hosts the agents will also need MSBUILD v14 or v15 (at the time of writing). You can get a standalone version of MSBUILD v15 direct from Microsoft’s download pages. It took me a while to find, so below is the direct link to v14 (which I had to use due to a separate issue).

https://download.microsoft.com/download/E/E/D/EEDF18A8-4AED-4CE0-BEBE-70A83094FC5A/BuildTools_Full.exe . Hopefully this is the correct one for 14.0.25420.1

Now if you run your build it should (hopefully) produce some output. I did run into a few errors at this stage, which may just be me being a bit uninformed (and not really reading the docs…)

Some of the “gotchas” I ran into

  • As noted earlier, MS SQL server names which contain backslashes must be escaped, and the error that is thrown does not indicate that this is the problem!
  • Weird, ambiguous messages, e.g. “An instance of analyzer SonarAnalyzer.Rules.CSharp.ThreadStaticWithInitializer cannot be created from SonarAnalyzer.CSharp.dll”, which actually had nothing to do with the analyzer and was instead related to MSBUILD versions. I was pulling my hair out until I saw this post, which states you need MSBUILD v14+ from Update 3 (14.0.25420.1), whereas I was using MSBUILD v14 but at 14.0.23107.10.
  • MSBUILD Version issues. The build would work but the SQ analysis would fail with an error about supported MSBUILD versions. To get around this I made sure that the MSBUILD step in the VSTS build was using an argument of /tv:14.0 to ensure it would use a specific version.
  • Static code analysis could not be completed on CSS files due to the Node.js version (ERROR: Only Node.js v6 or later is supported, got <ver>. No CSS files will be analyzed.). Simply install the latest Node.js on the server hosting the agents.
  • Timeouts! This was only an issue for me as the project’s LOC is in the hundreds of thousands, so it generates a large log. (##[error]The analysis did not complete in the allotted time of 300 seconds. Consider setting the build variable SonarQubeAnalysisTimeoutInSeconds to a higher value.) To get around this, you add a build variable to the VSTS build named “SonarQubeAnalysisTimeoutInSeconds”. I tried setting it to zero (which usually means ‘infinite’) but then got “##[error]The analysis did not complete in the allotted time of 0 seconds”. I couldn’t find any reliable info about maximum values, so I set mine to 20 minutes (1200 seconds) to be safe.

Conclusion:

Finally (for me), after some light tinkering, it all worked. When scanning Google/Stack Overflow it seems a few people had the same issues as me, so I don’t feel too bad about it for a first try!

Our nightly builds now pump out some lovely code analysis to the dashboard which I have shared with the team and the results are feeding into our technical backlog to be resolved. I’m going to leave it running for at least a month, and see if we get any real usage out of it.

More on results later, perhaps?

This wasn’t really written as a complete step-by-step, more of a stream of consciousness as I tried to learn about something new, but hopefully it helps someone, even if that is future me when I come back to re-remember how it works.


Update: I realised this morning that the project version on the SonarQube dash never increased or changed, which made it hard to check when issues were introduced without cross-referencing the date and the check-in history. If you set the “Project Version” on the Begin Analysis step to the built-in variable $(Build.SourceVersion), you will get the changeset the build ran from as the version.