Setting up a self-hosted build agent for Azure DevOps

Azure DevOps has brilliant build pipeline options, and as easy as it is to get set up with their hosted build agents, it can get quite costly rather quickly. In this post I cover setting up a self-hosted build agent for use with Azure DevOps.

This post won’t cover setting up the build box itself, but that can be covered in a later guide if required. I actually have my build box scripted out using Choco commands to allow building of .NET projects, which makes this step easier.

Pros/Cons

  • Pro: Full control over the build
  • Pro: Your builds can use tools or run services which simply aren’t available on the hosted agents.
  • Pro: Low cost. If you already have the hardware, why pay for Azure VMs?
  • Con: Maintenance and redundancy. If the machine goes down or breaks it blocks your pipeline.
  • Con: Extra setup steps.

Prerequisites

Before starting you will need to make sure:

  • You are a collection/build admin
  • You have a server configured to build the appropriate software (e.g. correct SDKs etc., which won’t be covered in this post)

Personal Access Tokens

First of all, you will need a personal access token for your account. This is used to allow your build agent to access Azure DevOps without hard-coding your credentials into your build scripts. You can use your own account for this, or a specially created service account – just note it will need permissions to access the collections it will be building.

To get this, log in to your Azure DevOps portal, and navigate to your security page.

In here, select “Personal Access Tokens” and then “New”. A panel will be displayed to configure this PAT. Specify a friendly and unique name, select the organisation you are using this token for, and then set its security access.

For the security access, I recommend selecting Full Access under “Scopes” so you can use this PAT for general DevOps activities. You can fine-tune the control, but you must ensure it has read/execute on the build scope as an absolute minimum. For expiry I typically select the longest period, which is 1 year.

Agent download and configuration

Next up you will need to navigate to the project settings > Pipelines > Agent Pools.

Create a new Agent Pool with an appropriate name (You don’t *have* to do this and can just use the default pool if you wish, but I like the separation). When your pool is created you will see the option to add a new agent to it.


Clicking “New Agent” will give you the instructions for the OS of your choice. As per the instructions, download the agent (a ZIP of roughly 130 MB) and place it somewhere sensible on the machine that will be acting as a build server. Once extracted, run config.cmd in an elevated command window.

When running the config.cmd command you will require the following information:

  • Server URL
    • This will be https://dev.azure.com/{organisation name}
  • What type of authentication you will use (Just press return as it will default to PAT)
  • Your PAT to access the server, as set up in the first step.
  • The Pool to connect to. This will be the name of the agent pool created above.
  • The working folder. The folder to use for storing workspaces being built.
  • A name for this agent. Call it whatever you want, but I would personally always include the machine name as it makes it easier to work out which agents are running.

Providing all the above settings are specified correctly and there are no authentication issues, the agent should now attempt to start.
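If you end up setting up agents regularly, the same answers can be supplied unattended. A rough sketch, run from an elevated PowerShell prompt in the extracted agent folder – the pool and agent names are placeholders, and the flag names should be double-checked against the instructions shown for your agent version:

# Unattended agent configuration (placeholders for pool/agent names).
.\config.cmd --unattended `
    --url "https://dev.azure.com/{organisation name}" `
    --auth pat `
    --token "<your PAT>" `
    --pool "MyBuildPool" `
    --agent "$($env:COMPUTERNAME)-agent" `
    --work "_work" `
    --runAsService   # optional: registers the agent as a Windows service, so run.cmd isn't needed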

Confirming the agent is active

Going back to the Agent Pools configuration screen you should now see the agent listed in the appropriate agent pool.

If the agent is not displaying after a few minutes, something went wrong in setup.

If the agent is displaying offline, try running the “run.cmd” command in an elevated command window on your build server.

Now all you have to do is select your new agent pool when creating your next build!

👆Level up👆 your retrospectives! (and why you should run one!).

You can also view this post on: https://dev.to/wabbbit

Regardless of which Agile methodology you use to deliver your software – you are using one, right? – the retrospective is one of the most important meetings you can have. Sadly, it’s one of those meetings that can turn into an unstructured nightmare, and if you aren’t getting anything out of it, what is the point? This is a quick 5-min guide to leveling up your retrospective, or to convincing you to start if you don’t already run one.

What is the retrospective for? Why should I run one?

By definition, a retrospective is a look into the past. In this context it is a chance for the team to get together at the end of a sprint/milestone and discuss what did and didn’t go well, and more importantly what can be done to improve. It should give the entire team a chance to voice their opinions in a safe space, and provide honest feedback that can be used to improve future iterations/sprints.

TL;DR: The aim of a retrospective is to identify incremental improvements to make the next iteration/sprint better than the last.

Quick fact-sheet

Who runs it?: Arguably the meeting should be run by the team leader/Scrum master etc. However, it can be run by anybody, as it is an open discussion owned by the entire team. Ideally one person should take ownership though, to ensure the meeting stays focused and to track any outcomes.

Who attends?: The retrospective should be attended by the entire delivery team: business analysts, developers, testers, project managers, product owners etc. – anybody who was directly involved in the delivery of items. This meeting is not for the wider community such as stakeholders, as this often limits honesty.

When is it run?: The retrospective should be run at the start of the iteration, looking back at the previous iteration. Typically this would be after any sprint-review/show & tell meeting, once the stakeholders have left the room.

How long should it be?: Keep it focused! 5 minutes per attendee. In an “ideal-size” SCRUM team this shouldn’t be more than 30-45 minutes.

An effective retrospective template

We have trialed many different approaches over the years for our retrospective. The theme has always been the same, with the focus being on what went well, what didn’t, and what we could change.

Lately we introduced a new template as retrospectives were becoming slightly unfocused talking-shops and although solid actions were coming out of them, people seemed hesitant to self-judge unless prompted.

Before this we also used the “Traffic light method”, also known as “Start, Stop, Continue”. My issue with SSC is that it has the potential to derail more quickly and doesn’t feel as focused. SSC can also sometimes become dominated by one or two people, and I felt that I was excluding people at times. That is why I like the 4 question method.

The 4(ish) question method

Part 1: Questions for each team member

Go around the table, with each person answering the following 4 questions.

  • What was a success for you this iteration?
    • For example: a particular piece of work they were proud of, a bug they resolved quickly, etc. This is important as it allows the team member to show off or gloat a little. If you don’t let people discuss success, they won’t want to talk about failure!
  • What was a failure for you this iteration, and why? Could it have been avoided?
    • For example: “Not completing agreed work, because of X”, or “spending more time than they thought on Y”. 
      They should know why this happened and ideally what we could do to stop it happening again.
  • If you could have changed one thing about the last iteration, what would it be?
    • For example: “Break down work items sooner”, “Spend less time in meetings”
  • Did you learn anything?
    • Important as you want your team members to grow. Whether that is technical knowledge or product knowledge.

The answers should be simple one-liners, and may prompt discussion among the group. For example, let’s say somebody says they felt one failure of the iteration was that they couldn’t complete a piece of work as the requirements changed last minute. You could ask why that happened and what we could do to mitigate this in the future.
Keep any follow up discussions short and concise. It’s very easy for someone to “go down the rabbit hole”. Take any discussion that will take longer than a couple of minutes offline or come back to them at the end. You don’t want to chew through all your time on one person, after all!

The person running the session should capture all the failures and changes.

Example:

  • What was a success for you this iteration?
    • A success for me this sprint is that we resolved a critical priority item well within the SLA and had a happy client with positive feedback.
  • What was a failure for you this iteration, and why? Could it have been avoided?
    • What: A failure for me this sprint was that I couldn’t get my changes into the test environment quickly enough due to a lack of Change Control resource.
      Why: Change Control staff were not available at short notice.
      Action: We should schedule resource capacity at the start of a sprint.
  • If you could have changed one thing about the last iteration, what would it be?
    • n/a
  • Did you learn anything?
    • I learned how {{some component}} worked.

This may have prompted discussion about how we deploy changes via another department, with an action to do some up-front scheduling.

Part 2: Take action!

This whole process is pointless if nobody is taking any actions off the back of this session!
In part 1, the person leading the session should have taken notes of all the potential actions.

The session lead should quickly iterate over the actions captured and, as a team, decide:

  • Is this something that we should do immediately, or place in the backlog?
  • Who is taking ownership for this action?

Part 3: Coffee.

You deserved it, go you!

Although, there is no such thing as a free coffee. The actual 3rd item is to follow up on actions.

Issues you may run into:

People don’t want to talk about failure: Perhaps people feel like they are being judged. To combat this, really focus on the success of the sprint as the main talking point and promote an open/safe environment. If individuals don’t want to open up, the session lead could drive an open conversation – e.g. “Why do you think we (as a team) didn’t achieve X?”

Hostility: Make sure any feedback is constructive and blame free. Never attribute failure to another team member. Always use “we” never “you”. Promote collective ownership!

Lack of interaction: Some people are introverts and may not want to join open discussion. If this is an issue, suggest people write their answers on post-it notes and have the session primarily led by the team lead (collating and discussing). Still give them the option to share or change their answers within the session.

Turning into a moan-fest: This one is really easy to slip into. To avoid this, promote constructive discussion. Ensure actions are being followed up, or people may not take the meeting seriously and just use it as an opportunity to vent.

Retrospective your retrospective

Yo dawg. I heard you like retrospectives.

There is no “best” way to run a retrospective session. Every team is different and sometimes you may find the format doesn’t work with the people that are attending. If it isn’t working for you, don’t abandon it, adapt it! YMMV.

Using time tracking software to your advantage.

Ugh, Time tracking!

People often convulse when hearing “time tracking” in a work context. Although some companies use it to make sure their pesky workers aren’t having any downtime and remain constant little cash cows, I believe it can be used for good and, more importantly, to boost your own productivity. Why not use time tracking software to your advantage?!

Of course, this doesn’t mean the aforementioned horrid corporate time tracking doesn’t exist, but don’t let that cloud your judgement.

The tool I will introduce is not new by any means, but everybody I have introduced it to has been hooked and hasn’t looked back.

So, why should I care?

You know the life of a typical developer: constant interruptions, sporadic support emails and “quick” breakout sessions that turn into hour-long meetings. You get to the end of your iteration and get asked “Why didn’t you get around to X?”. Wouldn’t it be better to quantify it instead of sounding off excuses?

Introducing, Toggl

Toggl is a completely free (With paid options) time tracking tool which is available for web, desktop and mobile. I have been using this now for almost 4 years.

The way it works is that you can assign tasks against a particular project (e.g. “Client X support”, or “Internal Meeting”), which in turn is attributed to a specific client. Tasks are free-text but can be grouped together for reporting by using the same auto-completed task name. You can keep everything completely unassigned, or be anal retentive and go super-granular with multiple projects per client and different work streams (like me!)

For example, if I take a random day from the last few weeks I can see what I was doing, and how the day was distributed. I found it best to only create a new entry when my context changes – such as moving to a different bit of work, or having to go provide some support at someone’s desk.

Here, you can see I spent 7hr 53min on pure work activities that day. Within this day we spent 24 minutes in our daily catchup (Ouch!), and a chunk of my day was taken up with support of some kind and the rest pushing releases out the door.

What I like about Toggl is I can say something like “Wow, that is a lot of time for standup, how much time is ‘wasted’ each month?”

Toggl has a fairly decent web based dashboard, offering much more functionality than the desktop or mobile applications. You can drill down into each task, project or client to find out where your time is spent. See below.

That’s nice, but what’s the point? (aka. How I benefited)

A) Find out how much time you spend on repeating activities (& identify optimizations)

Find out what your “expensive” repetitive tasks are and optimize them! For example:

Here, you can see I spent 10.5hr last month either in daily standups, show & tell or retrospectives (All under my SCRUM umbrella [scrumbrella if you will…]). Whilst this is only ~6% of my time, it is still a considerable chunk and an area I could look at optimizing within the team. A future post will be on running effective standups, watch this space.

One major win I had from using this tool was finding out how much time I was spending pushing and prepping release files. I monitored this over a couple of months, and then implemented PowerShell scripts to automate the task. It now takes 10% of the time!

B) Track time working on QA vs. new features

The way I break down my projects is to have a core collection of projects under each client, usually:

  • <clientname> : (Used for any Change Requests and billable work)
  • <clientname> Support : (Used for client reported defects or queries)
  • <clientname> Release Activities : (Used to track testing and release cycle time)

This means that I have the ability to do the following:

  • Identify which clients are the most demanding or raise the most queries/defects (and allow us to highlight testing gaps, knowledge issues etc.)
  • Identify how much time I spend on QA vs billable work
  • Fill my time sheets in after the fact without manically scanning email trails and check-in histories!!

C) Have a valid excuse!

It’s the last day of the delivery cycle and you haven’t completed a feature. Wouldn’t it be great to say (and have the backup) that you spent 25% of your time cooped up in unplanned meetings?

Being able to justify something is good. Being able to justify it and point at a pie chart is better!

Conclusion

Toggl is a great tool (for me at least) and I really do recommend it if you want to see how your day is really spent and generate reports. After years of use, I don’t see myself stopping.

Toggl has a ton of additional options too, such as tagging up tasks, marking items as billable for better invoicing and more – features I don’t personally use, but which may be useful for someone.

What do you use, if anything, and do you have any success or failure stories? Note: this is posted to dev.to for discussion.

Remote NLOG logging with Azure Functions (Part two) – Persisting data into Azure Cosmos DB.

Last time, I got a very basic C# Azure Function hooked up to accept a request from an NLOG web service target. This time, I will be attempting to persist (insert) the incoming log information into an Azure Cosmos DB container, direct from my Azure Function in VS Code.

Disclaimer: This blog is more of a “mental notes” for me. I am nowhere near an expert in this area, and Cosmos DB is still new (hours old) to me. Drop me a line if I have missed the point massively 🙂

Setting up Cosmos DB, databases and containers.

To get started I will make a new Cosmos DB on the Azure Portal by selecting “Azure Cosmos DB” from the resources panel and selecting “Create new”.

On this page I need to specify:

  • Subscription: Your Azure subscription to create this under.
  • Resource Group: You should already have a resource group which matches the Azure Function you created earlier.
  • Instance Details
    • Account Name: This will be prefixed to the URL – i.e. blahdocuments.azure.com
    • API: For this example I will be using Core(SQL) so I can create a document database and query using SQL syntax. 
  • Location: Select the closest location to you.
  • Other options: Options like Geo-Redundancy etc. can be left disabled for now.

Select “Review and Create”, then on the next screen “Create” – Providing you are happy with the inputs.

The deployment will then run, switching to “Your deployment is complete” when it is ready to be used. It shouldn’t take longer than a couple of minutes.

Clicking on “Go to resource”, or navigating to your new Cosmos DB via the Resource manager will load up the quick-start window for this database. First however, we need a “container”. Selecting the Azure Cosmos DB account we just created, we need to select “Add container”.

Here we have a few inputs:

  • Database ID: I didn’t have a database, so needed to create one. If you already have one, specify the name here.
  • Throughput: 400 RU/s (Request Units per second) should be more than enough for basic testing and operation for my purpose.
  • Container ID: The container that lives inside the new/existing database. azlogger is the database where I want all my logging-related data, and azlogger-logs is the container for the logs I will be storing.
  • Partition key: I used “loggerName” as my partition key. See this video for info, but essentially I believe the partition key controls how items are grouped into logical partitions once the data grows beyond a single partition’s limit (~10GB). I’m not 100% sure, to be honest, without reading more; I just went with a recommended S/O post. (If you prefer the command line, there is a rough CLI sketch of the same setup just below.)
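The same database and container can, I believe, also be created with the Azure CLI instead of the portal. This is a sketch only – the account and resource group names are placeholders, and the exact flag names may differ between CLI versions:

# Sketch: create the database and container from the command line (placeholder names).
az cosmosdb sql database create `
    --account-name "my-cosmos-account" `
    --resource-group "my-resource-group" `
    --name "azlogger"

az cosmosdb sql container create `
    --account-name "my-cosmos-account" `
    --resource-group "my-resource-group" `
    --database-name "azlogger" `
    --name "azlogger-logs" `
    --partition-key-path "/loggerName" `
    --throughput 400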

Updating the Azure function to connect with Cosmos DB

We first need to use the CosmosDB package in this project, so in the terminal, run:

dotnet add package Microsoft.Azure.WebJobs.Extensions.CosmosDB

Now I need to set up the solution so it’s ready for using Cosmos DB.

In local.settings.json I added my connection string:

 {
    "IsEncrypted": false,
    "Values": {
        "AzureWebJobsStorage": "",
        "FUNCTIONS_WORKER_RUNTIME": "dotnet",
        "MyCosmosDBConnection": "<conn string>"
    }
} 

Where the connection string value comes from your Cosmos dashboard, under “Keys” -> “Primary connection string”

Now I will need a C# model to bind against. I made a simple LogDetail class with the required fields. Note that I am using the JsonProperty attributes on the fields. I read conflicting blog posts about whether anything other than the ID needs annotating, but I found no harm in leaving them in for now.

  public class LogDetail{
        [JsonProperty("id")]
        public string Id { get; set; }

        [JsonProperty("timestamp")]
        public string Timestamp;
        [JsonProperty("logName")]
        public string LogName;

        [JsonProperty("logLevel")]
        public string LogLevel;

        [JsonProperty("message")]
        public string Message;
    } 

Now time to update the main method/function! This was actually the hardest part for me (partly due to lack of experience with this tech); the documentation was a little confusing, misleading and often specific to a particular scenario.

I’m not sure how correct this is, but I ended up changing my main method so that it read:

public static class Log
    {
        [FunctionName("Log")]
        public static void AcceptLogRequest(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = "Log")] HttpRequest req,
            [CosmosDB(
                databaseName: "azlogger",
                collectionName: "azlogger-logs",
                ConnectionStringSetting = "MyCosmosDBConnection",
                Id = "{sys.randguid}",
                PartitionKey ="/loggerName"
                )]
            out LogDetail logDetail,
            ILogger log)
        {
            log.LogInformation("HTTP trigger fired for log entry.");
            string timestamp = req.Form["timestamp"]; 
            string loggerName = req.Form["loggerName"]; 
            string loggerLevel = req.Form["loggerLevel"]; 
            string message = req.Form["message"]; 

            var res = $"{timestamp} | {loggerName} | {loggerLevel.ToUpper()} | {message}";
            log.LogInformation(res);            
            logDetail = new LogDetail();
            logDetail.Timestamp = timestamp;
            logDetail.LogLevel = loggerLevel;
            logDetail.LogName = loggerName;
            logDetail.Message = message; 
        }
    }

The main changes were:

  • Making it a synchronous void method for saving data into Cosmos DB, which was recommended in the Microsoft Docs here. Could be wrong, but it works and is in line with their docs.
  • Changing LogDetail logdetail to be out LogDetail logDetail
  • Adding in the Cosmos DB annotation (see below)

The CosmosDB annotation has the following options:

  • databaseName: This is the main database name for the database set up in the previous step.
  • collectionName: This is the container name set up in the previous step
  • ConnectionStringSetting: The app setting name placed inside local.settings.json in the previous step.
  • id: The Id entry for the record. For this I used an inbuilt system parameter of {sys.randguid}
  • partitionKey: The partition key I specified in the earlier setup step.

Now, if I debug (or deploy) this Azure Function and cause my application to insert a bunch of NLOG entries, it should now create some entries in a Cosmos DB container.

To view the results, I can go to Cosmos DB on the Azure portal and select “Data Explorer”. From here, the Database and Container can be expanded to show the “Items” in the container – In my case, a load of NLOG entries.

Conclusion

It’s early days, but deploying a new Azure function is trivially easy using the most recent tooling, and the only real difficulty seems to be nailing down decent documentation in Azure (Which has always been the case from my experience!)

The next stages would be to look into Azure Function triggers on a timer to produce MI reports, or time/action based triggers to forward captured events onto the appropriate person.

Remote NLOG logging with Azure Functions (Part one).

Part of a journey I was on today to learn about Azure Functions and Cosmos DB. The final code for the Azure Functions element can be found on https://github.com/Wabbbit/AzLog

  • Part one: Setup, Creating my first function, forwarding NLOG events and deploying to Azure within VS Code.
  • Part two: persisting the incoming data using Cosmos DB.

Disclaimer: This blog is more like my mental notes, this tech is super fresh to me so take with a pinch of salt.

What I want to achieve

Logging is a necessity in any application; I can’t even count the number of times having some verbose logging has saved me many hours of debugging.

Currently, I almost exclusively use NLOG for .net projects. I typically structure my logging into discrete, separate loggers (i.e. Startup, API, Business logic failures, etc), which are usually configured to dump into .txt and/or the system event log.

This is great for our internal dev/SIT/QAT machines, and also when a client rings up about an error they encounter as they can just provide the appropriate log. The downside of this of course is that we don’t know if a client (With a self-hosted, remote installation) has a fatal error until they contact us, and with some clients the chain of reporting means the system could have been impacted for a short while before we get notified.

What if we could remotely capture major errors? As a proof of concept I will be attempting to use the NLOG web service adapter to talk to a C# Azure Function.

This assumes previous knowledge of working with NLOG and C#, but not Azure.

Creating my first Azure Function.

Prerequisites

Azure functions can be created directly within the Azure Portal, but for this demo I will be using VS Code.

First we need to make sure the system is set up to work with Azure Functions. We will need the following:

  • VS Code
  • Azure Functions Core Tools: For this we can use NPM. npm install -g azure-functions-core-tools. Note that this also exists on choco but has issues with x64 debugging in vscode.
  • Azure Functions VS Code extension.
  • C# VS Code extension.
  • and later on, an Azure account so we can deploy

Let’s make a function!

With the Azure Functions extension installed, select the Azure menu and then “Create new project”. Don’t worry about connecting to your Azure subscription yet if you have not done so.

Once a folder is specified, a language must be chosen. I chose C#.

Next, the template for the first function will need to be specified. For this demo I will be using the HttpTrigger, which means it will fire on receipt of an HTTP GET or POST (like any standard API).

The next panel will ask for a function name. For this I just chose “Log”.

A new Azure Function will be created. Let’s take a look at the files that are created:

  • .vscode: All the standard VS Code items which assist in build, debug and required extensions.
  • *.csproj: The project file for this Azure Function.
  • <function-name>.cs : This is the function that was created by providing a name in the last dialog. This is essentially like a Web API Controller.

Pressing F5 should restore any packages, start a debug session and output the temporary URL into the terminal, like so:

Navigating to that URL with a browser or postman will render something like:

Hooking up NLOG WebService target

Now that I have a base function (even if it doesn’t do anything), I can update NLOG in my project to make a web request with some information.

In my NLOG.config, I need to add a new target between the <targets></targets> tags:

<target type='WebService'
            name='azurelogger'
            url='http://localhost:7071/api/Log'
            protocol='HttpPost'
            encoding='UTF-8'   >
      <parameter name='timestamp' type='System.String' layout='${longdate}'/>
      <parameter name='loggerName' type='System.String' layout='${logger}'/>
      <parameter name='loggerLevel' type='System.String' layout='${level}'/>
      <parameter name='message' type='System.String' layout='${message}'/>
    </target>

What we have done here is:

  • Create a new NLOG target of type “Web Service”, pointing at the URL from the previous step.
  • Set up a few parameters to send across with our request, which are NLOG parameters for things like the log message, the time the entry was created, etc.

Now I need to ensure that one of the loggers is set to use the new “azurelogger”. For example:

<rules>   
  <logger name="StartupLogger" minlevel="Error" writeTo="event, azurelogger" />
</rules>

Now if I do an IIS reset on the box where my NLOG config lives, and trigger an error message manually, the new Azure Function should receive all the information it requires.

However, as our function doesn’t *do* anything, we can only prove this by debugging the function in VS Code. To do this I placed a breakpoint within the function and inspected the req object.

Here, I can see that all the fields I wanted are present!
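If you want to poke the endpoint without triggering a real NLOG error, you can also post the same form fields by hand. A quick sketch – the URL is the local debug endpoint from earlier, and the values are made up for testing:

# Simulates what the NLOG WebService target posts (HttpPost, form-encoded fields).
$body = @{
    timestamp   = (Get-Date -Format "yyyy-MM-dd HH:mm:ss.ffff")
    loggerName  = "StartupLogger"
    loggerLevel = "Error"
    message     = "Test message from PowerShell"
}
Invoke-RestMethod -Uri "http://localhost:7071/api/Log" -Method Post -Body $body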

Changing function code to accept incoming NLOG params

Fairly trivial – I altered the contents of the function to be as per below. In this code, I simply read the 4 items that my NLOG config is set to provide. I also changed the method name to something a little nicer than Run() as it is more descriptive. However this doesn’t actually control the endpoint name. To explicitly set the endpoint name I also changed the Route from null to “Log”. If I wanted to hit /api/blah instead of api/log I would simply do so by changing the route name.

  public static class Log
    {
        [FunctionName("Log")]
        public static async Task<IActionResult> AcceptLogRequest(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = "Log")] HttpRequest req,
            ILogger log)
        {
            log.LogInformation("HTTP trigger fired for log entry.");
            
            string timestamp = req.Form["timestamp"]; 
            string loggerName = req.Form["loggerName"]; 
            string loggerLevel = req.Form["loggerLevel"]; 
            string message = req.Form["message"]; 
           
            var res = $"{timestamp}   | {loggerName} | {loggerLevel.ToUpper()} | {message}";
            log.LogInformation(res);

            //TODO: Persist the data

            return (ActionResult)new OkObjectResult(res);

        }
    } 

Now, if I debug and cause NLOG to log an error, I can see the terminal window and debugger capturing the same information that gets placed in my event log.

Deploying to Azure

I will skip the step of connecting to Azure, which is as simple as just pressing “Sign in” and following the instructions.

To deploy from VS Code, simply select “Deploy to Function App” and then provide a name for the new function app to deploy to.

It takes a while to set up a new function app, but when it’s done, simply click “Deploy to function app”. The API will now be accessible via the web (using the azurewebsites URL) and the Azure dashboard.

Wrap up, until next time…

So far I have a new Azure Function, which is being contacted by the NLOG Web Service target.

Next time I will attempt to persist the incoming logs, using Cosmos DB

Unshelving TFS changes into another branch (VS 2017)

I had some pending changes recently on the wrong branch within TFS in Visual Studio 2017. Rather than clone all my changes in the other branch, I wanted to “migrate” my changes. In GIT this is fairly trivial, in TFS however…

To move changes between 2 branches, you have to ensure:

  • The changes you want to migrate are shelved on the source branch.
  • There are no pending changes in the workspace – This was rather annoying but a limitation of the tooling.
  • You do a “get-latest” on both branches.
  • You have access to Visual Studio Command Prompt.
  • If you are using lower than VS2017, you will also need the TFS Power tools.
  • The source and target branch are in the same workspace. This took me longer than I want to admit to work out as the error is not helpful!

With the above prerequisites met, you need to spin up the VS Command Prompt. This can be found via a start menu search, but you can also add it to VS (if it isn’t already there) by following the steps below.

Adding Visual Studio Command Prompt to Visual Studio

Go to “Tools” > “External Tools”, and select “Add”.

Give it an appropriate title – I chose “VS Command Prompt.”. From here we want to specify the following:

  • Command: C:\Windows\System32\cmd.exe
  • Arguments: /k “C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\Common7\Tools\VsDevCmd.bat”
  • Initial Directory: $(SolutionDir)

This means (On saving) that if you go to “Tools” you will now see an option “VS Command Prompt”.

Back to the migration…

With the console open, and using a working directory of a folder under source control (I use the source solution directory), run the command:

tfpt unshelve /migrate /source:"$/Core/MyProduct" /target:"$/Core/MyProduct-Branch" "MyShelveset"

In this command, we are saying to use the TFS Power Tool to unshelve a shelveset named “MyShelveset”. The migrate flag indicates that it will be moving between areas, and the source and target are named TFS folders.

If you get an error “An item with the same key has already been added“, ensure you do not have any pending changes in the source or target.

If you get an error “unable to determine the workspace“, make sure you are running the tool within a directory under the source folder.

Providing this command runs successfully, you will then see the “Shelveset Details” panel.

Shelveset details

In this panel you should see the files that make up the shelveset you defined in the command. Pressing “Unshelve” will start the process.

In my case I also saw a “Unshelve/Merge Shelveset” window. You should be able to “auto-merge all”.

Oddly, Auto-Merge took quite a while on my machine (You can see the progress in the cmd window). I am unsure if this is normal, or because I was remote working that day over a VPN.

“Item could not be found in your workspace, or you do not have permission to access it.”

If you get this during the merge, you may do what I did and go down a rabbit hole of getting latest, checking mappings etc. Turns out that this command does not work cross-workspace. When I branch, I map the branch to a completely new workspace as it’s cleaner.

The workaround for this (If like me, you use a new workspace per branch) is to temporarily map the branch into the same workspace.

Wrap up

So overall, it is possible, but a process that would take minutes in GIT (even for a GIT novice like me) took closer to an hour in total! Luckily this is still less effort than a manual merge, but if you only had a couple of files I would recommend just doing it manually…

Bonus round: Unshelving another user’s shelveset into another branch

If the shelveset is a colleague’s and not yours, you can simply append “;username” to the shelveset name at the end of the command above (where username is their TFS user), and it will search for that shelveset under that user.
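For example (the shelveset name and username here are placeholders):

tfpt unshelve /migrate /source:"$/Core/MyProduct" /target:"$/Core/MyProduct-Branch" "MyShelveset;TheirUsername"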

Debugging ES6 Mocha unit tests using VS Code

The world of Mocha, VS Code and Node is still fairly new to me. Typically in the past all my JS unit tests have been debuggable in-browser using DevTools, but with Mocha this is not the case (As I am not deploying my spec files). I got Mocha to load via a launch config, but it would not originally work due to using ES6 directly.

If you do not have a launch.json, start here; otherwise skip to the next section. Add a new Debug Configuration by selecting “Debug”, and then “Add Configuration”. Selecting “Node.js” automatically creates a “launch.json” under a root folder named .vscode.

Add Mocha configuration to launch.json

In the launch.json, much like the surprisingly helpful comments suggest, you can simply type “Mocha” then [ctrl]+[space] to bring up the intellisense for a Mocha configuration!

Which will insert the appropriate snippet.

Now, in theory it is as simple as clicking the play icon in debug, with “Mocha Tests” selected.

Supporting ES6.

For me however, this didn’t work.

The issue here is that I get a lot of unexpected token errors, as my tests are using ES6 and I suspect that by default it wants to use ES5. The issue of using ES6 for unit tests was resolved in another post.

Much like my previous post, I can update the launch arguments to use require to pull in the same 2 Babel modules, and will also specify a wild card file name of my tests so it doesn’t pick up any other code.

{
    "type": "node",
    "request": "launch",
    "name": "Mocha Tests",
    "program": "${workspaceFolder}/node_modules/mocha/bin/_mocha",
    "args": [
        "./test/**/*.spec.js",
        "--require", "@babel/polyfill",
        "--require", "@babel/register",
        "-u",
        "tdd",
        "--timeout",
        "999999",
        "--colors"
    ],
    "internalConsoleOptions": "openOnSessionStart"
}

Now for me, this also didn’t work as I am using Chai for my BDD test syntax.

For this I had to change “tdd” to “bdd” under the args.

Now I can attach and debug, providing a breakpoint is set!

Attempting to use Mocha & Chai to unit test ES6.

In this post I will cover using Mocha (JS test framework) and Chai (For BDD syntax) to unit test ES6 Javascript in VS Code.

I started working on a small side project, for no reason other than to play with ES6+. It’s a(nother) relatively simple toast library written in as much vanilla JS as possible to avoid reliance on libraries & packages.

I got the code working, but I couldn’t prove that the functions worked. I used qUnit in the past to test JavaScript but if I am completely honest my JavaScript testing knowledge is a bit lacking.

My aim is to get some unit tests for one of my main classes where I can test directly against the ES6 and not against the compiled ES5 code. I want the tests to be clear about what they are doing. What I am doing is not new at all, nor is the library! I just wanted to keep notes of how I achieved this first time around.

Disclaimer: This is by no means a comprehensive guide or walkthrough, just the results of me messing about to see if I can get the outcome I wanted whilst learning something new!

Enter, Mocha

I decided to use Mocha to do my unit testing, which was chosen purely as it seemed to work well with ES6 code (using Babel). Later I will go into how I also used Chai alongside it to provide much nicer, fluid assertions using BDD-style syntax.

First of all, I had to install Mocha.

> npm install --save-dev mocha

Then under a new root folder of “test” I created a bread.spec.js – where “bread” here is the name of the class I am testing.

At this point it is fairly easy to create a simple test, like so.

import {Bread} from "../src/bread";
var assert = require('assert');

describe('Fluent methods', function() {
  describe('Title set is not called', function() {
    it('should set the title correctly (null)', function() {
      let options = [ ... code to get options ... ]
      let b = new Bread(0, "Foo", options);
      assert.equal(b.Title, null);
    });
  });
});

I then added the appropriate script to package.json to allow us to run the tests.

 "test": "mocha --require @babel/polyfill --require @babel/register './test/**/*.spec.js'"

Which is run with:

npm run-script test
Output of running the above command – a single completed unit test shown in the VS Code terminal.

This script states that it will run Mocha on all files under the test directory where the file name ends with “.spec.js”. I then had to add the two requires, which enable Mocha to call the ES6 directly and not have to use the transpiled version. Failing to provide these requires will mean Mocha will not run, as it cannot parse ES6.
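One assumption worth flagging: the two requires only work if Babel itself is configured for the project. My project already had the Babel packages installed; if yours doesn’t, something roughly like the following should cover it (the preset-env preset is my assumption – use whatever your build already transpiles with):

npm install --save-dev @babel/core @babel/register @babel/polyfill @babel/preset-env

Plus a minimal .babelrc along these lines:

{
  "presets": ["@babel/preset-env"]
}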

Using Chai for BDD syntax

In the above, I import my class then create a “test set”. In this test set I then have a single test which is checking if the title gets automatically set. It’s fairly easy to ascertain what the test does, but it could be clearer. This is where I decided to use Chai. Chai allows me to have a BDD-style test written which is closer to plain English. Mocha does support some of this (at time of writing) but Chai is much closer to the BDD-style syntax I was used to.

To use Chai I need to install the package:

npm install --save-dev chai

Then import the “expect” module from the framework, and refactor the method so it looks a little like this:

import { expect } from "chai";
import {Bread} from "../src/bread";
describe("Fluent methods", () => {
    describe("Title set is not called", () => {
        it("should set the title correctly (null).", () => {
            var options = getValidOptions();            
            let b = new Bread(0,"Foo", options);
            expect(b.Title).to.equal(null);
        });
    });  
});

Running the tests will yield the same result as before, but now it’s a lot more readable (in my opinion!)

Not a lot more to add really. Mocha and Chai both have great documentation to read through. The only difficulty I had was getting Mocha to run ES6 directly, as a lot of the information online for this was out of date (that I found…)

Update: I have also posted about debugging using ES6 Mocha tests here

Recursive folder comparison with PowerShell

The Issue

This post definitely isn’t “new” or revolutionary, but I was quite surprised to find the Compare-Object helper in PowerShell, and I’m bound to forget it in the future…

As part of some recent roadmap work, we moved over to a new installer technology for some of our tooling. This came with some minor headaches, such as validating that we have harvested all the correct files. The first iteration of this was a manual check, which is obviously prone to human error – aside from being mind-numbing!

I didn’t really want to use a third party tool. WinMerge can perform fantastic comparisons, but I wanted something quick and custom. Ideally also not spending longer than 10 minutes creating any code!

The first iteration was to do a recursive loop, pull out all the file names (Note: not the path) into 2 separate text files. The only “nicety” was I wrapped directory names in square brackets to give it some organisation.

The downside of this is that it only really worked for my sample folder with a few items. In production with thousands of files and nested folders this was plain chaos. Also I had to compare these files in a third party tool like WinMerge anyway – taking away the point of doing this!

The final version of my script aimed to show only the differences (avoiding noise) and, ideally, which direction the change occurred, using Compare-Object in PowerShell.

The Result

  • Do a recursive loop through the directory structure
  • Output Folder names as [Folder], and recursively dive-down. This is a bit dirty as I didn’t want the full path (harder to compare) but wanted to differentiate when I dug down. YMMV.
  • Output File names, excluding some files I didn’t care about (Like .tmp & .XML files)
  • Do this for folder A and folder B, storing the results in variables
  • Use Compare-Object on these variables and output the result.
function GetFiles($path, [string[]]$excludedFiles)
{
    foreach ($item in Get-ChildItem $path)
    {
        if ($excludedFiles | Where {$item -like $_}) { continue }

        if( (Get-Item $item.FullName) -is [System.IO.DirectoryInfo]){
         $('['+$item.Name+']')
        }else{
          $($item.Name)
        }
        if (Test-Path $item.FullName -PathType Container)
        {
            GetFiles $item.FullName $excludedFiles
        }
    }
} 
$env1 = GetFiles -path "C:\folderA\" -excludedFiles "*.xml","*.tmp"
$env2 = GetFiles -path "C:\folderB\" -excludedFiles "*.xml","*.tmp"

Compare-Object -DifferenceObject $env1 -ReferenceObject $env2

This provides output listing each differing item, with a SideIndicator arrow showing which folder it came from.

This could definitely be optimized and cleaned up, and YMMV massively.
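One small tweak I have found handy (not part of the original script): because Compare-Object tags each result with a SideIndicator, you can filter to a single direction if you only care about files missing from one side. A sketch:

# "=>" marks items that only exist in the DifferenceObject ($env1, i.e. folder A).
Compare-Object -DifferenceObject $env1 -ReferenceObject $env2 |
    Where-Object { $_.SideIndicator -eq '=>' } |
    Select-Object -ExpandProperty InputObject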

Overall, a few minutes in PowerShell and I managed to save substantial time – and that was my only real goal!

My attempt at using SonarQube for static code analysis

This post covers my attempts to use SonarQube as a stand-alone install to perform static code analysis on a regular basis. This will cover purely getting the tool working; maybe I will pick up how to use the data in a later post.

I will be doing this with a very narrow focus: the project I am currently working on, which is a .NET stack with builds running in VSTS using MSBUILD.

SonarQube runs code analysis as solutions are being built and provides a web dashboard of code smells, security vulnerabilities, duplication and more. My aim is to use it to identify technical debt, as well as track whether that debt is reducing over time.

Note you can hook this into Azure DevOps fairly easily too with a few clicks and less setup, but I wanted to host the tool on our own infrastructure for zero cost. I also believe you can use their cloud version for free if your project is open source.

My aims are:

  • To get the self-hosted version of SQ installed/set up
  • Get it running against a local solution
  • Work out how to hook this into our VSTS build process (If possible)

Getting started

First of all, I downloaded and extracted the free self-hosted version of SQ (Community Edition) and placed it on one of our build servers. This package is essentially a self-hosting application, and following the 2-min getting started guide here, it’s genuinely quite easy to get the dashboard running within that 2 minutes (providing the system requirements are met – which looks like you just need a recent Java JRE/JDK installed).

Following the above guide, and launching the shell/batch script of your choice, you can then navigate to http://localhost:9000 and see the SonarQube dashboard asking you to create a new project.

When creating a new project you are prompted for a project key and display name. The key will be used for the integration, and the display name will be the name displayed on the dashboard.

Next up is the token. The token is used for authentication purposes when uploading analysis files and can be changed and revoked later. I just used the word “sausages” as an example, but when you click “generate” it will provide your token.

Next it will tell you how to configure your project for SQ. I am doing this against a .NET project (C#, JS, etc) so will continue with this example.

For a C# project which will be built using MSBUILD, you first need the “SonarScanner for MSBUILD”. The SonarScanner wraps the build: it starts the analysis before MSBUILD kicks in, then ends after the build and collates the results to send to the server.

This tool can be placed anywhere, but the folder will need adding to the PATH on your Windows environment.
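If you would rather script the PATH change than click through the system dialogs, something like this works from an elevated PowerShell prompt (the scanner folder below is a placeholder; only newly started consoles will pick the change up):

# Append the SonarScanner for MSBuild folder to the machine-level PATH.
$scannerPath = "C:\Tools\sonar-scanner-msbuild"   # placeholder – wherever you extracted it
$current = [Environment]::GetEnvironmentVariable("Path", "Machine")
[Environment]::SetEnvironmentVariable("Path", "$current;$scannerPath", "Machine")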

At this point, we can use some PowerShell to test that it’s all hooked up correctly! (You can use pre/post-build events too, but this is much simpler for testing.) In this example we will CD into the directory the SLN is in, start the scanner tool using our project name and key from earlier, and build our solution file (as a rebuild). Finally we will end the scanning.

cd 'C:\path to your SLN'
SonarScanner.MSBuild.exe begin /k:"My-Project" /d:sonar.host.url="http://localhost:9000" /d:sonar.login="Your Key"
msbuild MySolution.sln /t:Rebuild
SonarScanner.MSBuild.exe end /d:sonar.login="Your Key"

Note that this assumes you have the scanner and MSBUILD in your PATH variable. If you do not, you can simply call the exe directly; the MSBUILD exe is located at C:\Program Files (x86)\Microsoft Visual Studio\<version>\<edition>\MSBuild\<version>\Bin\msbuild.exe. I believe that SQ requires MSBUILD 12 and above – I am currently using 15.0.

If we run this, it will take a few moments to start and a variable amount of time to complete (depending heavily on the size of your solution). At the very end you will see a line reading “Execution Success”, and if you still have your dashboard open you may see it update.

If you navigate back to http://localhost:9000 you should now see your project. Note that if your sln was particularly large you may just see a “background processing” message whilst it imports the analysis file.


This is good news! However, there is a bit of an ominous warning in the footer of the dashboard which reads

Embedded database should be used for evaluation purposes only. The embedded database will not scale, it will not support upgrading to newer versions of SonarQube, and there is no support for migrating your data out of it into a different database engine.

This is easily solved by simply having a backing database for SQ. You will however lose all your progress so far.

Moving beyond proof of concept.

So that we can use this in our build/deploy pipeline, I want to hook it into a database, install it onto one of our servers, and finally have one of our CI builds run it!

The process is the same as above in terms of placing the extracted files onto the server, apart from also having to punch a hole in the Windows Firewall for TCP port 9000 so it can be accessed remotely. Running the start batch script now will bring you to the same dashboard and warning as before, which is what we want to avoid, so we will need to hook up a database.
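For reference, the firewall hole can be punched with a one-liner on recent Windows Server versions (adjust the display name and scope to suit your environment):

# Allow inbound TCP 9000 so the SonarQube dashboard can be reached remotely.
New-NetFirewallRule -DisplayName "SonarQube (TCP 9000)" -Direction Inbound -Protocol TCP -LocalPort 9000 -Action Allow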

For the database I will be using MS SQL Server 2016. You can find documentation for other database types (MySQL, Oracle etc.) in the SQ documentation.

First I created an empty database on the SQL 2016 server named “SonarQube”, and also a new SQL user named “Sonar” who is a dbo on the SonarQube database.

Back in the SQ install, in the \sonarqube\conf folder is a sonar.properties file. In here we need to add the following:

 sonar.jdbc.url=jdbc:sqlserver://myserver;databaseName=SonarQube
sonar.jdbc.username=sonar
sonar.jdbc.password=thepassword

Note: If, like me, your SQL instance is named like “server\instancename”, you will need to escape the slash before the instance name so it becomes “server\\instancename”. The error that is generated does not point to this as the cause of SQ not launching, which was a pain!
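For example, an instance-named server ends up looking like this in sonar.properties (server and instance names here are placeholders):

sonar.jdbc.url=jdbc:sqlserver://myserver\\myinstance;databaseName=SonarQube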

Next up, I wanted to have my VSTS build automatically run the analysis and process it. I wanted this to be part of the department’s CI builds, but due to the size of the project it was taking upwards of 30 minutes to complete, so we moved it to the nightly builds.

Now you should be able to launch SQ again and not see the banner in the footer. A new project should be created like before.

As we use TFS/VSTS in house, this guide will show working with self-hosted TFS. There are a lot more in-depth (and more useful!) guides in the SQ docs here. The steps to take before progressing are:

  • Download the SQ VSTS extension appropriate for your version of TFS/VSTS https://github.com/SonarSource/sonar-scanner-vsts/releases
  • Install extension to TFS
  • Add the extension to the collection.

Once the extension is installed, you should see 2 new build steps of:

  • SonarQube Scanner for MSBUILD – Begin Analysis
  • SonarQube Scanner for MSBUILD – End Analysis

Before you can use these, you will need to configure the SQ endpoint. To do this, you can either go to the collection’s administration page and then the “Services” tab, or add the “SonarQube Scanner For MSBUILD – Begin analysis” build step to your VSTS build and then click “Manage” on the server panel. In here, you can click “New Service Endpoint” and then “SonarQube”.

When you add a new SQ endpoint, you configure it using a friendly name, the URL of the dashboard, and the token you set up for your project/user (much like the earlier powershell script had)

Now if you go back and edit your build definition, you will be able to use SonarQube with the Begin and End tasks either side of your build actions (again, much like the earlier PowerShell).

To set it up, select the SQ end point as configured earlier, and then under Project Settings, set the Project Key and Project name appropriately (based on however the project in SQ was set up) and finally, under “Advanced”, check “Include full analysis report in the build summary”. There is no configuration required for the end analysis build step.

Running it now will most likely result in a failure unless the machine which is hosting the build agent has the correct software installed.

Much like we did for the local version, we need to download and set up the SonarScanner and add it to the build path. The server which hosts the agents will also need to be running MSBUILD v14 or v15 (at time of writing). You can get a standalone version of MSBUILD v15 direct from Microsoft’s download pages. It took me a while to find it, so this is the direct link to v14 (which I had to use due to a separate issue):

https://download.microsoft.com/download/E/E/D/EEDF18A8-4AED-4CE0-BEBE-70A83094FC5A/BuildTools_Full.exe . Hopefully this is the correct one for 14.0.25420.1

Now if you run your build it should (hopefully) produce some output. I did run into a few errors in this stage which may be just me being a bit uninformed (and not really reading docs…)

Some of the “Gotchas” I ran into

  • As noted earlier, MS SQL server names which contain slashes must be escaped, and the error that is thrown does not indicate this is the cause!
  • Weird, ambiguous messages. (An instance of analyzer SonarAnalyzer.Rules.CSharp.ThreadStaticWithInitializer cannot be created from SonarAnalyzer.CSharp.dll) which actually had nothing to do with that, and was actually related to MSBUILD versions. I was pulling my hair out until I saw this post, which states you need MSBUILD v14+ from update 3 (14.0.25420.1), whereas I was using MSBUILD v14 but 14.0.23107.10.
  • MSBUILD Version issues. The build would work but the SQ analysis would fail with an error about supported MSBUILD versions. To get around this I made sure that the MSBUILD step in the VSTS build was using an argument of /tv:14.0 to ensure it would use a specific version.
  • Static code analysis could not be completed on CSS files due to Node JS version (ERROR: Only Node.js v6 or later is supported, got <ver>. No CSS files will be analyzed.). Simply install latest nodeJS to the server hosting the agents.
  • Timeouts! This was only an issue for me as the projects LOC is in the hundreds of thousands so it generates a large log. (##[error]The analysis did not complete in the allotted time of 300 seconds. Consider setting the build variable SonarQubeAnalysisTimeoutInSeconds to a higher value.) To get around this, you add a build variable to the VSTS build named “SonarQubeAnalysisTimeoutInSeconds”. I tried setting it to zero (which is usually ‘infinite’) but then I got “##[error]The analysis did not complete in the allotted time of 0 seconds“. I couldn’t find any reliable info about max values so set mine to 20 minutes to be safe.

Conclusion:

Finally, (for me) after some light tinkering it all worked. When I was scanning google/stackoverflow it seems a few people had the same issues as me, so I don’t feel too bad about it for a first try!

Our nightly builds now pump out some lovely code analysis to the dashboard which I have shared with the team and the results are feeding into our technical backlog to be resolved. I’m going to leave it running for at least a month, and see if we get any real usage out of it.

More on results later, perhaps?

This wasn’t really written as a complete step-by-step, more of a stream of consciousness as I tried to learn about something new, but hopefully it helps someone, even if that is future me when I come back to re-remember how it works.


Update: I realised this morning that the project version on the SonarQube dash never increased or changed, which meant it was hard to tie issues back to a particular change without cross-referencing the date and the check-in history. If you set the “Project Version” on the Begin Analysis step to the in-built variable $(Build.SourceVersion), you will get the changeset the build was run from as the version.