Ensuring “dotnet test” TRX & Coverage files end up in SonarQube

Do feel free to provide any comments/feedback to @TheRichCarey on Twitter

I have written before about using SonarQube to do static analysis, but one issue I never came back to was ensuring that code coverage files generated via a build pipeline end up being picked up by the Sonar Scanner to assess code coverage.

Note that in the following I am actually using the ‘dotnet test’ build step, rather than the ‘VS Test’ one. Do let me know if you find a nice workaround for the VS Test variant, as I couldn’t get it to drop coverage files!

The issue

The issue is that:

  • When using VSTest, TRX files are deleted automatically if using version 2+ of the VS Test task, as per this Stack Overflow post.
  • When I switched back to ‘dotnet test’ the same thing appeared to be happening.
  • .coverage files are not output by default
  • TRX and coverage files are placed in a temporary folder of the build agent rather than the executing agent’s working directory.
  • Even though SonarQube could detect the tests, it would still register as 0.0% code coverage!

Getting ‘dotnet test’ to collect coverage

The first step was to get the ‘dotnet test’ build step to collect the code coverage, and not just dump TRX files.

To do this, go to the “Arguments” field of the dotnet test build step and append --collect "Code Coverage", as well as ensuring that “Publish test results and code coverage” is enabled.
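
For reference, the full Arguments field might end up looking something like the line below – the $(BuildConfiguration) variable is just the usual build configuration variable (adjust to suit your pipeline); the important part is the --collect flag:

--configuration $(BuildConfiguration) --collect "Code Coverage"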

Ensure generated files are copied to the working directory

As the coverage files will end up in the /tmp folder of the build agent, SonarQube will not be able to scan them.

We will need to add a new “Copy Files” build step with the correct filter set to get the .trx and .coverage files from the default temporary directory on the build agent into the test results folder of the workspace. To do this, add the “Copy Files” task into the build and place it after the “dotnet test” task. The source folder for the copy will be $(Agent.HomeDirectory)\_work\_temp and the target folder will be $(Common.TestResultsDirectory) – the contents can remain as ** but feel free to filter if required.

If we run a build now, we should now see files in the TestResults folder of the build agent’s working directory.

I didn’t have to make any changes to the configuration within SonarQube, as it should just pick up the coverage files. If I follow the above, the coverage figures show up against the project in SonarQube (let’s just ignore the fact the number is low 😉).

CSPROJ Changes to test projects

One thing I did notice in the console when attempting to fix this code coverage issue was that I got a lot of warnings like:

SonarQube.Integration.targets: warning : The project does not have a valid ProjectGuid. Analysis results for this project will not be uploaded to SonarQube. 

As all my projects were .NET Core or .NET Standard, the CSPROJ files do not contain a <ProjectGuid> tag by default. As also suggested in this Stack Overflow answer, I added a GUID to my test project file. I am not 100% sure if this is required, but it stopped the warnings appearing in my console and does no harm.
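
For reference, the addition is just an extra property in the test project’s CSPROJ – the GUID below is a made-up placeholder and any freshly generated GUID will do:

<PropertyGroup>
  <ProjectGuid>{3F2504E0-4F89-41D3-9A0C-0305E82C3301}</ProjectGuid>
</PropertyGroup>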

Bonus

If you have multiple builds to update and you are using Azure DevOps, you can take advantage of “Task Groups”. These allow you to create a single build step which in turn executes a series of other build steps. Using the steps above, you can create a new Task Group that runs the test step and makes sure the files are copied to the correct location for analysis.

This means I can then just call that single Task Group build step in all my builds.

Include both NuGet package references and project reference DLLs using “dotnet pack” 📦

Do feel free to provide any comments/feedback to @TheRichCarey on Twitter

Recently I have been trying to generate more NuGet packages for our .NET Core projects, utilizing the dotnet pack command. One issue I kept encountering is that the generated package would reference either the required NuGet packages or the project reference DLLs, never both.

The current problem.

If you have Project A, which has a project reference to Project B as well as including a NuGet package called Package A, you would expect the generated package to contain a link to both the required NuGet package and the DLL(s) for Project B, yes? This, however, is not how the dotnet pack command works.

This issue is widely reported on the NuGet repo (e.g. https://github.com/NuGet/Home/issues/3891) and unfortunately it seems the developers and the community are in a bit of a disagreement about what is “correct”. The official stance (as I understood it) is that the project references won’t be included as they should be their own packages. This, however, is not always practical or desired.

The workaround.

Plenty of workarounds have been suggested on Stack Overflow and GitHub, including having a separate nuspec file, using PowerShell to inject things into the generated nupkg, and so on…

The solution below worked for me, but of course, YMMV.

In the end I ditched having my own .nuspec file within my project (as per some SO posts) and instead used the CSPROJ (as recommended). Below you can see the required fields for the packaging (version, naming, etc.), a reference to a NuGet package, and a reference to another project within the solution.

Snippet of CSPROJ with basic package info filled in.

If you run dotnet pack now, it will generate an appropriately named package which will contain a NuGet dependency on SomeNugetPackage. This can be confirmed by opening the nupkg with an archive tool (7-Zip, WinRAR, WinZip…) and seeing that the only DLL in the lib folder is the DLL of the project being packed.

The fix is as follows:

  • Alter the project reference to set the ReferenceOutputAssembly flag to true, and IncludeAssets to the DLL name
<ProjectReference Include="..\ProjectB.csproj">
  <ReferenceOutputAssembly>true</ReferenceOutputAssembly>
  <IncludeAssets>ProjectB.dll</IncludeAssets>
</ProjectReference>  
  • Add the following line into the <PropertyGroup> element
<TargetsForTfmSpecificBuildOutput>$(TargetsForTfmSpecificBuildOutput);CopyProjectReferencesToPackage</TargetsForTfmSpecificBuildOutput>
  • Add a new target between the <Project> tags
<Target DependsOnTargets="ResolveReferences" Name="CopyProjectReferencesToPackage">
    <ItemGroup>
      <BuildOutputInPackage Include="@(ReferenceCopyLocalPaths->WithMetadataValue('ReferenceSourceTarget', 'ProjectReference'))"/>
    </ItemGroup>
  </Target>

So now you end up with something that looks like this

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
    <Version>1.0.9</Version>
    <Product>MyProduct</Product>
    <id>MyProduct</id>
    <PackageId>MyProduct</PackageId>
    <Authors>Your name</Authors>
    <Company>Company Name</Company>
    <Description>My library</Description>
    <Copyright>Copyright © 2019 MyCompany</Copyright>
    <TargetsForTfmSpecificBuildOutput>$(TargetsForTfmSpecificBuildOutput);CopyProjectReferencesToPackage</TargetsForTfmSpecificBuildOutput>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="SomeNugetPackage" Version="1.2.3"/>  
  </ItemGroup>
  <ItemGroup>
    <ProjectReference Include="..\ProjectB.csproj">
      <ReferenceOutputAssembly>true</ReferenceOutputAssembly>
      <IncludeAssets>ProjectB.dll</IncludeAssets>
    </ProjectReference>  
  </ItemGroup>
  <!-- The following target ensures that dependent project DLLs are copied into the package -->
  <Target DependsOnTargets="ResolveReferences" Name="CopyProjectReferencesToPackage">
    <ItemGroup>
      <BuildOutputInPackage Include="@(ReferenceCopyLocalPaths->WithMetadataValue('ReferenceSourceTarget', 'ProjectReference'))"/>
    </ItemGroup>
  </Target>
</Project>
End result CSPROJ.

Now if you run dotnet pack you should see any project reference DLLs under the lib folder of the package, and if you inspect the nuspec file inside the package (or upload it to your package repo) you should see the NuGet dependencies.
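
As a rough illustration, the dependencies section of the generated nuspec should end up looking something along these lines (element names as per the standard nuspec schema; the exact target framework group will depend on your project):

<dependencies>
  <group targetFramework=".NETStandard2.0">
    <dependency id="SomeNugetPackage" version="1.2.3" />
  </group>
</dependencies>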

Hopefully this helps someone, as there is a lot of conflicting info around. Please let me know if this would cause any issues!

Setting up a self-hosted build agent for Azure DevOps

Azure DevOps has brilliant build pipeline options, and as easy as it is to get set up with their hosted build agents, it can get quite costly rather quickly. In this post I cover setting up a self-hosted build agent for use with Azure DevOps.

This post won’t cover setting up the build box itself, but that can be covered in a later guide if required. I actually have my build box scripted out using Chocolatey commands to allow building of .NET projects, which makes this step easier.

Pros/Cons

  • Pro: Full control over the build
  • Pro: Your builds can build items or run services which simply aren’t available on the hosted agents.
  • Pro: Low cost. If you already have the hardware, why pay for Azure VMs?
  • Con: Maintenance and redundancy. If the machine goes down or breaks it blocks your pipeline.
  • Con: Extra setup steps.

Prerequisites

Before starting you will need to make sure:

  • You are a collection/build admin
  • You have a server configured to build the appropriate software (i.e. Correct SDKs etc which won’t be covered in this post)

Personal Access Tokens

First of all, you will need a personal access token for your account. This is used to allow your build agent access to Azure without hard-coding your credentials into your build scripts. You can use your own account for this, or a specially created service account – Just note it will need permissions to access the collections it will be building.

To get this, log in to your Azure Devops portal, and navigate to your security page.

In here, select “Personal Access Tokens” and then “New”. A panel will be displayed to configure this PAT. Specify a friendly and unique name, select the organisation you are using this token for, and then set its security access.

For the security access, I recommend selecting Full Access under “Scopes” so you can use this PAT for general Dev Ops activities. You can fine-tune the control, but you must ensure it has read/execute on the build scope as an absolute minimum. For expiry I typically select the longest period which is 1 year.

Agent download and configuration

Next up you will need to navigate to the project settings > Pipelines > Agent Pools.

Create a new Agent Pool with an appropriate name (You don’t *have* to do this and can just use the default pool if you wish, but I like the separation). When your pool is created you will see the option to add a new agent to it.


Clicking “New Agent” will give you the instructions for the OS of your choice. As per the instructions, download the agent (a ~130MB ZIP file) and then place it somewhere sensible on the machine that will be acting as the build server. When extracted, run config.cmd in an elevated command window.

When running the config.cmd command you will require the following information (an example unattended invocation is sketched after the list):

  • Server URL
    • This will be https://dev.azure.com/{organisation name}
  • What type of authentication you will use (Just press return as it will default to PAT)
  • Your PAT to access the server, as set up in the first step.
  • The Pool to connect to. This will be the name of the agent pool created above.
  • The working folder. The folder to use for storing workspaces being built.
  • A name for this agent. Call it whatever you want, but I would personally always include the machine name as it makes it easier to work out which agents are running.
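
For reference, a fully unattended run with all of those answers supplied up front might look roughly like this (the pool name, agent name and PAT below are placeholders):

.\config.cmd --unattended --url https://dev.azure.com/{organisation name} --auth pat --token <your PAT> --pool MyBuildPool --agent MYBUILDBOX-01 --work _work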

Providing all the above settings are specified correctly and there are no authentication issues, it should now attempt to start.

Confirming the agent is active

Going back to the Agent Pools configuration screen you should now see the agent listed in the appropriate agent pool.

If the agent is not displaying after a few minutes, something went wrong in setup.

If the agent is displaying offline, try running the “run.cmd” command in an elevated command window on your build server.

Now all you have to do is select your new agent pool when creating your next build!

👆Level up👆 your retrospectives! (and why you should run one!).

You can also view this post on: https://dev.to/wabbbit

Regardless of which Agile methodology you use to deliver your software (you are using one, right?), the retrospective is one of the most important meetings you can have. Sadly, it’s one of those meetings that can turn into an unstructured nightmare, and if you aren’t getting anything out of it, what is the point? This is a quick 5-minute guide to levelling up your retrospective, or convincing you to run one if you don’t already.

What is the retrospective for? Why should I run one?

By definition, a retrospective is a look into the past. In this context it is a chance for the team to get together at the end of a sprint/milestone and discuss what did and didn’t go well, and more importantly what can be done to improve. It should give the entire team a chance to voice their opinions in a safe space, and provide honest feedback that can be used to improve future iterations/sprints.

TL;DR: The aim of retrospective is to identify incremental improvements to make the next iteration/sprint better than the last.

Quick fact-sheet

Who runs it?: Arguably the meeting should be run by the team leader/Scrum master etc. However, it can be run by anybody, as it is an open discussion owned by the entire team. Ideally one person should take ownership though, to ensure the meeting stays focused and to track any outcomes.

Who attends: The retrospective should be attended by the entire delivery team. Business analysts, developers, testers, project managers, product owners etc. Anybody that was directly involved in the delivery of items. This meeting is not for the wider community such as stakeholders as this often limits honesty.

When is it run?: The retrospective should be run at the start of the iteration, looking back at the previous iteration. Typically this would be run after any sprint review/show & tell meeting, once the stakeholders have left the room.

How long should it be: Keep it focused! 5 minutes per attendee. In an “ideal-size” SCRUM team this shouldn’t be more than 30-45 minutes.

An effective retrospective template

We have trialed many different approaches over the years for our retrospective. The theme has always been the same, with the focus being on what went well, what didn’t, and what we could change.

Lately we introduced a new template as retrospectives were becoming slightly unfocused talking-shops and although solid actions were coming out of them, people seemed hesitant to self-judge unless prompted.

Before this we also used the “Traffic light method”, also known as “Start, Stop, Continue”. My issue with SSC is that it has the potential to derail quicker and doesn’t feel as focused. SSC can also sometimes become dominated by one or two people and I felt that I was excluding people at times. That is why I like the 4 question method.

The 4(ish) question method

Part 1: Questions for each team member

Go around the table, each person answering the following 4 focuses.

  • What was a success for you this iteration?
    • For example: a particular piece of work they were proud of, a bug they resolved quickly, etc. This is important as it allows the team member to show-off or gloat a little. If you don’t let people discuss success, they won’t want to talk about failure!
  • What was a failure for you this iteration, and why? Could it have been avoided?
    • For example: “Not completing agreed work, because of X”, or “spending more time than they thought on Y”. 
      They should know why this happened and ideally what we could do to stop it happening again.
  • If you could have changed one thing about the last iteration, what would it be?
    • For example: “Break down work items sooner”, “Spend less time in meetings”
  • Did you learn anything?
    • Important as you want your team members to grow. Whether that is technical knowledge or product knowledge.

The answers should be simple one-liners, and may prompt discussion among the group. For example, let’s say somebody says they felt one failure of the iteration was that they couldn’t complete a piece of work as the requirements changed last minute. You could ask why that happened and what we could do to mitigate this in the future.
Keep any follow-up discussions short and concise. It’s very easy for someone to “go down the rabbit hole”. Take any discussion that will take longer than a couple of minutes offline, or come back to it at the end. You don’t want to chew through all your time on one person, after all!

The person running the session should capture all the failures and changes.

Example:

  • What was a success for you this iteration?
    • A success for me this sprint is that we resolved a critical priority item well within the SLA and had a happy client with positive feedback.
  • What was a failure for you this iteration, and why? Could it have been avoided?
    • What: A failure for me this sprint was that I couldn’t get my changes into the test environment quickly enough due to a lack of Change Control resource.
      Why: Change Control staff were not available at short notice.
      Action: We should schedule in resource capacity at the start of a sprint.
  • If you could have changed one thing about the last iteration, what would it be?
    • n/a
  • Did you learn anything?
    • I learned how {{some component}} worked.

This may have prompted discussion about how we need to think about how we deploy changes via another department and an action would be that we will do some up-front scheduling.

Part 2: Take action!

This whole process is pointless if nobody is taking any actions off the back of this session!
In part 1, the person leading the session should have taken notes of all the potential actions.

The session lead should quickly iterate over the actions captured and, as a team, decide:

  • Is this something that we should do immediately, or place in the backlog?
  • Who is taking ownership for this action?

Part 3: Coffee.

You deserved it, go you!

Although, there is no such thing as a free coffee. The actual 3rd item is to follow up on actions.

Issues you may run into:

People don’t want to talk about failure: Perhaps people feel like they are being judged. To combat this, really focus on the successes of the sprint as the main talking point and promote an open/safe environment. If individuals don’t want to open up, the session lead could drive an open conversation – i.e. “Why do you think we (as a team) didn’t achieve X?”

Hostility: Make sure any feedback is constructive and blame free. Never attribute failure to another team member. Always use “we” never “you”. Promote collective ownership!

Lack of interaction: Some people are introverts and may not want to join open discussion. If this is an issue, suggest people write their answers on post-it notes and have the session primarily led by the team lead (collating and discussing). Still give them the option to share or change their answers within the session.

Turning into a moan-fest. This one is really easy to slip into. To avoid this, promote constructive discussion. Ensure actions are being followed up or people may not take the meeting seriously and just use it as an opportunity to vent.

Retrospective your retrospective

Yo dawg. I heard you like retrospectives.

There is no “best” way to run a retrospective session. Every team is different and sometimes you may find the format doesn’t work with the people that are attending. If it isn’t working for you, don’t abandon it, adapt it! YMMV.

Using time tracking software to your advantage.

Ugh, Time tracking!

People often convulse when hearing “time tracking” in a work context. Although some companies use this to ensure their pesky workers aren’t having any downtime and remain constant little cash cows, I believe it can be used for good and, more importantly, to boost your own productivity. Why not use time tracking software to your advantage?!

Of course, this doesn’t mean the horrid aforementioned, corporate time tracking doesn’t exist, but don’t let this cloud your judgement.

The tool I will introduce is not a new tool by any means, but everybody I have introduced it to since has been hooked and hasn’t looked back.

So, why should I care?

You know the life of a typical developer. Constant interruptions, sporadic support emails and “quick” breakout sessions that turn into hour-long meetings. You get to the end of your iteration and get asked “Why didn’t you get around to X?”. Wouldn’t it be better to quantify it instead of sounding off excuses?

Introducing, Toggl

Toggl is a completely free (With paid options) time tracking tool which is available for web, desktop and mobile. I have been using this now for almost 4 years.

The way it works is that you can assign Tasks against a particular project (I.e. “Client X support”, or “Internal Meeting”), which are attributed to a specific client. Tasks are free-text but can be grouped together for reporting by using the same auto-completed task name. You can keep everything completely unassigned or be anal retentive and go super-granular with multiple projects per client and different work streams (like me!)

For example, if I take a random day from the last few weeks I can see what I was doing, and how the day was distributed. I found it best to only create a new entry when my context changes – such as moving to a different bit of work, or having to go and provide some support at someone’s desk.

Here, you can see I spent 7hr 53min on pure work activities that day. Within this day we spent 24 minutes in our daily catchup (Ouch!), and a chunk of my day was taken up with support of some kind and the rest pushing releases out the door.

What I like about Toggl is I can say something like “Wow, that is a lot of time for standup, how much time is ‘wasted’ each month?”

Toggl has a fairly decent web based dashboard, offering much more functionality than the desktop or mobile applications. You can drill down into each task, project or client to find out where your time is spent. See below.

That’s nice, but what’s the point? (aka. How I benefited)

A) Find out how much time you spend on repeating activities (& identify optimizations)

Find out what your “expensive” repetitive tasks are and optimize them! For example:

Here, you can see I spent 10.5hr last month either in daily standups, show & tell or retrospectives (All under my SCRUM umbrella [scrumbrella if you will…]). Whilst this is only ~6% of my time, it is still a considerable chunk and an area I could look at optimizing within the team. A future post will be on running effective standups, watch this space.

One major win I had from using this tool was finding out how much time I was spending pushing and prepping release files. I monitored this over a couple of months, and then implemented Powershell scripts to automate the task. It now takes 10% of the time!

B) Track time working on QA vs. new features

The way I break down my projects is to have a core collection of projects under each client, usually:

  • <clientname> : (Used for any Change Requests and billable work)
  • <clientname> Support : (Used for client reported defects or queries)
  • <clientname> Release Activities : (Used to track testing and release cycle time)

This means that I have the ability to do the following:

  • Identify which clients are the most demanding or raise the most queries/defects (and allow us to highlight testing gaps, knowledge issues etc.)
  • Identify how much time I spend on QA vs billable work
  • Fill my time sheets in after the fact without manically scanning email trails and check in histories!!

C) Have a valid excuse!

It’s the last day of the delivery cycle and you haven’t completed a feature. Wouldn’t it be great to say (and have the backup) that you spent 25% of your time cooped up in unplanned meetings?

Being able to justify something is good. Being able to justify it and point at a pie chart is better!

Conclusion

Toggl is a great tool (for me at least) and I really do recommend it if you want to see how your day is really spent and generate reports. After years of use, I don’t see myself stopping.

Toggl has a ton of additional options too, such as tagging up tasks, marking items as billable for better invoicing, and more – features I don’t personally use, but they may be useful for someone.

What do you use, if anything, and have there been any success or failure stories of doing so? Note: this is posted to dev.to for discussion.

Remote NLOG logging with Azure Functions (Part two) – Persisting data into Azure Cosmos DB.

Last time, I got a very basic C# Azure Function hooked up to accept a request from an NLOG web service target. This time, I will be attempting to persist (insert) the incoming log information into an Azure Cosmos DB container, direct from my Azure Function in VS Code.

Disclaimer: This blog is more of a set of “mental notes” for me. I am nowhere near an expert in this area, and Cosmos DB is still new (hours old) to me. Drop me a line if I have missed the point massively 🙂

Setting up Cosmos DB, databases and containers.

To get started I will make a new Cosmos DB on the Azure Portal by selecting “Azure Cosmos DB” from the resources panel and selecting “Create new”.

On this page I need to specify:

  • Subscription: Your Azure subscription to create this under.
  • Resource Group: You should already have a resource group which matches the Azure Function you created earlier.
  • Instance Details
    • Account Name: This will be prefixed to the URL – i.e. blahdocuments.azure.com
    • API: For this example I will be using Core(SQL) so I can create a document database and query using SQL syntax. 
  • Location: Select the closest location to you.
  • Other options: Other options like Geo-Redundancy etc can be left as disabled for now.

Select “Review and Create”, then on the next screen “Create” – Providing you are happy with the inputs.

The status will switch to “Your deployment is complete” when it is ready to be used. It shouldn’t take longer than a couple of minutes.

Clicking on “Go to resource”, or navigating to your new Cosmos DB via the Resource manager will load up the quick-start window for this database. First however, we need a “container”. Selecting the Azure Cosmos DB account we just created, we need to select “Add container”.

Here we have a few inputs:

  • Database ID. I didn’t have a database, so needed to create one. If you already have one, specify the name here.
  • Throughput: 400 RU/s (Request Units per second) should be more than enough for basic testing and operation for my purpose.
  • Container ID: I specified a container ID that lives inside the new/existing database. azlogger is where I want all my logging related data, and a container of azlogger-logs for the logs I will be storing.
  • Partition key: I used “loggerName” as my partition key. See this video for info, but essentially I believe this is for managing partitions if the data exceeds the limit, so partitions can be grouped correctly (~10GB?). I’m not 100% sure to be honest, without reading more – I just went with a recommended Stack Overflow post.

Updating the Azure function to connect with Cosmos DB

We first need to use the CosmosDB package in this project, so in the terminal, run:

dotnet add package Microsoft.Azure.WebJobs.Extensions.CosmosDB

Now I need to set up the solution so it’s ready for using Cosmos DB.

In local.settings.json I added my connection string:

 {
    "IsEncrypted": false,
    "Values": {
        "AzureWebJobsStorage": "",
        "FUNCTIONS_WORKER_RUNTIME": "dotnet",
        "MyCosmosDBConnection": "<conn string>"
    }
} 

Where the connection string value comes from your Cosmos dashboard, under “Keys” -> “Primary connection string”

Now I will need a C# model to bind against. I made a simple LogDetail class with the required fields. Note that I am using the JsonProperty attributes on the fields. I read conflicting blog posts about the requirement for annotating everything other than the ID, but I found no harm in leaving them in for now.

  public class LogDetail{
        [JsonProperty("id")]
        public string Id { get; set; }

        [JsonProperty("timestamp")]
        public string Timestamp;
        [JsonProperty("logName")]
        public string LogName;

        [JsonProperty("logLevel")]
        public string LogLevel;

        [JsonProperty("message")]
        public string Message;
    } 
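
For illustration, each item stored in the container should end up looking roughly like the JSON below (the values are made up, and the system properties Cosmos adds – _rid, _ts and so on – are omitted):

{
  "id": "3f2504e0-4f89-41d3-9a0c-0305e82c3301",
  "timestamp": "2019-06-01 12:34:56.7890",
  "logName": "StartupLogger",
  "logLevel": "Error",
  "message": "Something went horribly wrong during startup"
}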

Now time to update the main method/function! This was actually the hardest part for me (partly due to lack of experience with this tech): the documentation was a little confusing, misleading and often specific to a particular scenario.

I’m not sure how correct this is, but I ended up changing my main method so that it read:

public static class Log
    {
        [FunctionName("Log")]
        public static void AcceptLogRequest(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = "Log")] HttpRequest req,
            [CosmosDB(
                databaseName: "azlogger",
                collectionName: "azlogger-logs",
                ConnectionStringSetting = "MyCosmosDBConnection",
                Id = "{sys.randguid}",
                PartitionKey ="/loggerName"
                )]
            out LogDetail logDetail,
            ILogger log)
        {
            log.LogInformation("HTTP trigger fired for log entry.");
            string timestamp = req.Form["timestamp"]; 
            string loggerName = req.Form["loggerName"]; 
            string loggerLevel = req.Form["loggerLevel"]; 
            string message = req.Form["message"]; 

            var res = $"{timestamp} | {loggerName} | {loggerLevel.ToUpper()} | {message}";
            log.LogInformation(res);            
            logDetail = new LogDetail();
            logDetail.Timestamp = timestamp;
            logDetail.LogLevel = loggerLevel;
            logDetail.LogName = loggerName;
            logDetail.Message = message; 
        }
    }

The main changes were:

  • Making it a synchronous void method for saving data into Cosmos DB, which was recommended in the Microsoft Docs here. Could be wrong, but it works and is in line with their docs.
  • Changing LogDetail logDetail to be out LogDetail logDetail
  • Add in the Cosmos DB annotation (See below)

The CosmosDB annotation has the following options:

  • databaseName: This is the main database name for the database set up in the previous step.
  • collectionName: This is the container name set up in the previous step
  • ConnectionStringSetting: The app setting name placed inside local.settings.json in the previous step.
  • id: The Id entry for the record. For this I used an inbuilt system parameter of {sys.randguid}
  • partitionKey: The partition key I specified in the earlier setup step.

Now, if I debug (or deploy) this Azure Function and cause my application to insert a bunch of NLOG entries, it should now create some entries in a Cosmos DB container.

To view the results, I can go to Cosmos DB on the Azure portal and select “Data Explorer”. From here, the Database and Container can be expanded to show the “Items” in the container – In my case, a load of NLOG entries.
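
Data Explorer also lets you run ad-hoc queries against the container using the Core (SQL) syntax, so a quick sanity check might be something like the query below (assuming NLOG reports the level as “Error”):

SELECT * FROM c WHERE c.logLevel = "Error"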

Conclusion

It’s early days, but deploying a new Azure Function is trivially easy using the most recent tooling, and the only real difficulty seems to be nailing down decent documentation in Azure (which has always been the case in my experience!)

The next stages would be to look into Azure Function triggers on a timer to produce MI reports, or time/action based triggers to forward captured events onto the appropriate person.

Remote NLOG logging with Azure Functions (Part one).

Part of a journey I was on today to learn about Azure Functions and Cosmos DB. The final code for the Azure Functions element can be found on https://github.com/Wabbbit/AzLog

  • Part one: Setup, Creating my first function, forwarding NLOG events and deploying to Azure within VS Code.
  • Part two: persisting the incoming data using Cosmos DB.

Disclaimer: This blog is more like my mental notes, this tech is super fresh to me so take with a pinch of salt.

What I want to achieve

Logging is a necessity in any application; I can’t even count the number of times having some verbose logging has saved me many hours of debugging.

Currently, I almost exclusively use NLOG for .net projects. I typically structure my logging into discrete, separate loggers (i.e. Startup, API, Business logic failures, etc), which are usually configured to dump into .txt and/or the system event log.

This is great for our internal dev/SIT/QAT machines, and also when a client rings up about an error they encounter as they can just provide the appropriate log. The downside of this of course is that we don’t know if a client (With a self-hosted, remote installation) has a fatal error until they contact us, and with some clients the chain of reporting means the system could have been impacted for a short while before we get notified.

What if we could remotely capture major errors? As a proof of concept I will be attempting to use the NLOG web service adapter to talk to a C# Azure Function.

This assumes previous knowledge of working with NLOG and C#, but not Azure.

Creating my first Azure Function.

Prerequisites

Azure functions can be created directly within the Azure Portal, but for this demo I will be using VS Code.

First we need to make sure the system is set up to work with Azure Functions. We will need the following:

  • VS Code
  • Azure Functions Core Tools: For this we can use NPM. npm install -g azure-functions-core-tools. Note that this also exists on choco but has issues with x64 debugging in vscode.
  • Azure Functions VS Code extension.
  • C# VS Code extension.
  • and later on, an Azure account so we can deploy

Let’s make a function!

With the Azure Functions extension installed, select the Azure menu and then “Create new project”. Don’t worry about connecting to your Azure subscription yet if you have not done so.

Once a folder is specified, a language must be chosen. I chose C#.

Next, the template for the first function will need to be specified. For this demo I will be using the HttpTrigger, which means it will fire on receipt of an HTTP GET or POST (like any standard API).

The next panel will ask for a function name. For this I just chose “Log”.

A new Azure Function will be created. Let’s take a look at the files that are created:

  • .vscode: All the standard VS Code items which assist in build, debug and required extensions.
  • *.csproj: The project file for this Azure Function.
  • <function-name>.cs : This is the function that was created by providing a name in the last dialog. This is essentially like a Web API Controller.

Pressing F5 should restore any packages, start a debug session and output the temporary URL into the terminal, like so:

Navigating to that URL with a browser or Postman will render the default response from the template.

Hooking up NLOG WebService target

Now I have a base function (Even if it doesn’t do anything), I can update NLOG in my project to make a web request with some information.

In my NLOG.config, I need to add a new target between the <targets></targets> tags:

<target type='WebService'
        name='azurelogger'
        url='http://localhost:7071/api/Log'
        protocol='HttpPost'
        encoding='UTF-8'>
  <parameter name='timestamp' type='System.String' layout='${longdate}'/>
  <parameter name='loggerName' type='System.String' layout='${logger}'/>
  <parameter name='loggerLevel' type='System.String' layout='${level}'/>
  <parameter name='message' type='System.String' layout='${message}'/>
</target>

What we have done here is:

  • Create a new NLOG target of type “Web Service”, pointing at the URL from the previous step.
  • Set up a few parameters to send across with our request, which are NLOG parameters for things like the log message, the time the entry was created, etc.

Now I need to ensure that one of the loggers is set to use the new “azurelogger”. For example:

<rules>   
  <logger name="StartupLogger" minlevel="Error" writeTo="event, azurelogger" />
</rules>

Now if I do an IIS Reset where my NLOG config lives, and trigger off an error message manually, the new Azure Function should receive all the information it requires.
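
For completeness, triggering such an entry from application code is just a case of logging at Error level (or above) against a logger covered by that rule. A minimal sketch (class and method names are made up) might be:

using System;
using NLog;

public static class StartupDiagnostics
{
    // Resolves the named logger that the <rules> entry above routes to the "azurelogger" target
    private static readonly Logger Logger = LogManager.GetLogger("StartupLogger");

    public static void ReportStartupFailure(Exception ex)
    {
        // Anything logged at Error level or above on this logger gets posted to the Azure Function
        Logger.Error(ex, "Something went horribly wrong during startup");
    }
}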

However, as our function doesn’t *do* anything, we can only prove this by debugging the function in VS Code. To do this I placed a breakpoint within the function and inspected the req object.

Here, I can see that all the fields I wanted are present!

Changing function code to accept incoming NLOG params

Fairly trivial – I altered the contents of the function to be as per below. In this code, I simply read the 4 items that my NLOG config is set to provide. I also changed the method name to something a little nicer than Run() as it is more descriptive. However this doesn’t actually control the endpoint name. To explicitly set the endpoint name I also changed the Route from null to “Log”. If I wanted to hit /api/blah instead of api/log I would simply do so by changing the route name.

  public static class Log
    {
        [FunctionName("Log")]
        public static async Task<IActionResult> AcceptLogRequest(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = "Log")] HttpRequest req,
            ILogger log)
        {
            log.LogInformation("HTTP trigger fired for log entry.");
            
            string timestamp = req.Form["timestamp"]; 
            string loggerName = req.Form["loggerName"]; 
            string loggerLevel = req.Form["loggerLevel"]; 
            string message = req.Form["message"]; 
           
            var res = $"{timestamp}   | {loggerName} | {loggerLevel.ToUpper()} | {message}";
            log.LogInformation(res);

            //TODO: Persist the data

            return (ActionResult)new OkObjectResult(res);

        }
    } 

Now, if I debug and cause NLOG to log an error, I can see the terminal window and debugger capturing the same information that gets placed in my event log.

Deploying to Azure

I will skip the step of connecting to Azure, which is as simple as just pressing “Sign in” and following the instructions.

To deploy from VS Code, simply select “Deploy to Function App” and then provide a name for the new function app to deploy to.

It takes a while to set up a new function app, but when it’s done, simply click “Deploy to function app”. The API will now be accessible via the web (using the azurewebsites.net URL) and the Azure dashboard.
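
Alternatively, since the Azure Functions Core Tools were installed via NPM earlier, I believe the same deployment can also be done from the terminal with something like the following (where the name is whatever you called the function app):

func azure functionapp publish <function app name>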

Wrap up, until next time…

So far I have a new Azure Function, which is being contacted by the NLOG Web Service target.

Next time I will attempt to persist the incoming logs using Cosmos DB.

Unshelving TFS changes into another branch (VS 2017)

Do feel free to provide any comments/feedback to @TheRichCarey on Twitter

I had some pending changes recently on the wrong branch within TFS in Visual Studio 2017. Rather than clone all my changes in the other branch, I wanted to “migrate” my changes. In Git this is fairly trivial; in TFS, however…

To move changes between 2 branches, you have to ensure:

  • The changes you want to migrate are shelved on the source branch.
  • There are no pending changes in the workspace – This was rather annoying but a limitation of the tooling.
  • You do a “get-latest” on both branches.
  • You have access to Visual Studio Command Prompt.
  • If you are using lower than VS2017, you will also need the TFS Power tools.
  • The source and target branch are in the same workspace. This took me longer than I want to admit to work out as the error is not helpful!

With the above prerequisites met, you need to spin up the VS Command Prompt. This can be found via a start menu search but you can also add it to VS (If not already), following the steps below in VS.

Adding Visual Studio Command Prompt to Visual Studio

Go to “Tools” > “External Tools”, and select “Add”.

Give it an appropriate title – I chose “VS Command Prompt.”. From here we want to specify the following:

  • Command: C:\Windows\System32\cmd.exe
  • Arguments: /k "C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\Common7\Tools\VsDevCmd.bat"
  • Initial Directory: $(SolutionDir)

This means (On saving) that if you go to “Tools” you will now see an option “VS Command Prompt”.

Back to the migration…

With the console open, and using a working directory of a folder under source control (I use the source solution directory), run the command:

tfpt unshelve /migrate /source:"$/Core/MyProduct" /target:"$/Core/MyProduct-Branch" "MyShelveset"

In this command, we are saying to use the TFS Power Tool to unshelve a shelveset named “MyShelveset”. The migrate flag indicates that it will be moving between areas, and the source and target are named TFS folders.

If you get an error “An item with the same key has already been added”, ensure you do not have any pending changes in the source or target.

If you get an error “unable to determine the workspace”, make sure you are running the tool within a directory under the source folder.

Providing this command runs successfully, you will then see the “Shelveset Details” panel.

Shelveset details

In this panel you should see the files that make up the shelveset you defined in the command. Pressing “Unshelve” will start the process.

In my case I also saw a “Unshelve/Merge Shelveset” window. You should be able to “auto-merge all”.

Oddly, Auto-Merge took quite a while on my machine (You can see the progress in the cmd window). I am unsure if this is normal, or because I was remote working that day over a VPN.

“Item could not be found in your workspace, or you do not have permission to access it.”

If you get this during the merge, you may do what I did and go down a rabbit hole of getting latest, checking mappings etc. Turns out that this command does not work cross-workspace. When I branch, I map the branch to a completely new workspace as it’s cleaner.

The workaround for this (If like me, you use a new workspace per branch) is to temporarily map the branch into the same workspace.

Wrap up

So overall, it is possible to do, but a process that would take a Git novice like me minutes to do in Git took closer to an hour in total! Luckily this is still less effort than a manual merge, but if you only had a couple of files I would recommend just doing it manually…

Bonus round: Unshelving another user’s shelveset into another branch

If the shelveset is a colleague’s and not yours, you can simply append “;username” to the shelveset name at the end of the command above (where username is their TFS user), and it will search for that shelveset under that user.
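
So, taking the earlier command, unshelving a colleague’s shelveset would look something like this (jbloggs being a placeholder username):

tfpt unshelve /migrate /source:"$/Core/MyProduct" /target:"$/Core/MyProduct-Branch" "MyShelveset;jbloggs"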

Debugging ES6 Mocha unit tests using VS Code

The world of Mocha, VS Code and Node is still fairly new to me. Typically in the past all my JS unit tests have been debuggable in-browser using DevTools, but with Mocha this is not the case (As I am not deploying my spec files). I got Mocha to load via a launch config, but it would not originally work due to using ES6 directly.

If you do not have a launch.json, start here; otherwise skip to the next section. Add a new debug configuration by selecting “Debug”, and then “Add Configuration”. Selecting “Node.js” automatically creates a launch.json under a root folder named .vscode.

Add Mocha configuration to launch.json

In the launch.json, much like the surprisingly helpful comments suggest, you can simply type “Mocha” then [ctrl]+[space] to bring up the intellisense for a Mocha configuration!

Which will insert the appropriate snippet.

Now, in theory it is as simple as clicking the play icon in debug, with “Mocha Tests” selected.

Supporting ES6.

For me however, this didn’t work.

The issue here is that I get a lot of unexpected token errors, as my tests are using ES6 and I suspect that by default it wants to use ES5. The issue of using ES6 for unit tests was resolved in another post.

Much like my previous post, I can update the launch arguments to use require to pull in the same 2 Babel modules, and will also specify a wild card file name of my tests so it doesn’t pick up any other code.

 {
            "type": "node",
            "request": "launch",
            "name": "Mocha Tests",
            "program": "${workspaceFolder}/node_modules/mocha/bin/_mocha",
            "args": [
                "./test/**/*.spec.js",
                "--require", "@babel/polyfill",
                "--require", "@babel/register",
                "-u",
                "tdd",
                "--timeout",
                "999999",
                "--colors",           
            ],
            "internalConsoleOptions": "openOnSessionStart"
        }

Now for me, this also didn’t work as I am using Chai for my BDD test syntax.

For this I had to change “tdd” to “bdd” under the args.

Now I can attach and debug, providing a breakpoint is set!

Attempting to use Mocha & Chai to unit test ES6.

In this post I will cover using Mocha (JS test framework) and Chai (For BDD syntax) to unit test ES6 Javascript in VS Code.

I started working on a small side project, for no reason other than to play with ES6+. It’s a(nother) relatively simple toast library written in as much vanilla JS as possible to avoid reliance on libraries & packages.

I got the code working, but I couldn’t prove that the functions worked. I used QUnit in the past to test JavaScript, but if I am completely honest my JavaScript testing knowledge is a bit lacking.

My aim is to get some unit tests for one of my main classes where I can test directly against ES6 and not against the compiled ES5 code. I want the tests to be clear to what they are doing. What I am doing is not new at all, nor is the library! I just wanted to keep notes of how I achieved this first time around.

Disclaimer: This is by no means a comprehensive guide or walkthrough, just the results of me messing about to see if I can get the outcome I wanted whilst learning something new!

Enter, Mocha

I decided to use Mocha to do my unit testing, which was chosen purely as it seemed to work well with ES6 code (using Babel). Later I will go into how I also used Chai alongside it to provide much nicer, fluid assertions using BDD-style syntax.

First of all, I had to install Mocha.

> npm install --save-dev mocha

Then under a new root folder of “test” I created a bread.spec.js – where “bread” here is the name of the class I am testing.

At this point it is fairly easy to create a simple test, like so.

import {Bread} from "../src/bread";
var assert = require('assert');
describe('Fluent methods', function() {
  describe('Title set is not called', function() {
    it('should set the title correctly (null)', function() {
        let options = [ ... code to get options ... ]     
        let b = new Bread(0,"Foo", options);       
      assert.equal(b.Title, null);
    });
  });
});

I then added the appropriate script to package.json to allow us to run the tests.

 "test": "mocha --require @babel/polyfill --require @babel/register './test/**/*.spec.js'"

Which is run with:

npm run-script test
Output of running the above command, showing a single completed unit test.

This script states that it will run Mocha on all files under the test directory where the JS file ends with “.spec.js”. I then had to add the 2 requires, which enable Mocha to call the ES6 directly and not have to use the transpiled version. Failing to provide these requires will mean Mocha will not run, as it cannot parse ES6.
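
Note that @babel/register picks up the project’s Babel configuration, so this assumes something like a .babelrc (or babel.config.js) with the ES6 preset is already in place from the build setup – a minimal example being:

{
  "presets": ["@babel/preset-env"]
}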

Using Chai for BDD syntax

In the above, I import my class then create a “test set”. In this test set I then have a single test which is checking if the title gets automatically set. It’s fairly easy to ascertain what the test does, but it could be clearer. This is where I decided to use Chai. Chai allows me to have a BDD-style test written in something closer to plain English. Mocha does support some of this (at time of writing), but Chai is much closer to the BDD-style syntax I was used to.

To use Chai I need to install the package:

npm install --save-dev chai

Then import the “expect” module from the framework, and refactor the method so it looks a little like this:

import { expect } from "chai";
import {Bread} from "../src/bread";
describe("Fluent methods", () => {
    describe("Title set is not called", () => {
        it("should set the title correctly (null).", () => {
            var options = getValidOptions();            
            let b = new Bread(0,"Foo", options);
            expect(b.Title).to.equal(null);
        });
    });  
});

Running the tests will yield the same result as before, but now it’s a lot more readable (in my opinion!)

Not a lot more to add really. Mocha and Chai both have great documentation to read through. The only difficulty I had was getting Mocha to run ES6 directly, as a lot of the information online for this was out of date (that I found…)

Update: I have also posted about debugging using ES6 Mocha tests here