Programmatically upgrading the .NET Framework version for all projects with PowerShell

Do feel free to provide any comments/feedback to @TheRichCarey on Twitter

We had a situation where we needed to upgrade all the CSPROJ files in a solution to .NET Framework 4.8. The issue is that some of our solutions contain almost a hundred projects, so manual intervention would be prone to error (plus we have multiple solutions to apply this against!).

Whilst there are a number of extensions on the VS Marketplace that do this, they seemed a little overkill for something that can surely be achieved in PowerShell. At the end of the day it’s a simple find-and-replace, right?

The script below will load the solution file, iterate over each CSPROJ it references, and replace the current framework version with the one specified in $versionToUse. It will then overwrite the file.
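
A minimal sketch of that approach is shown here, assuming classic (non-SDK) CSPROJ files that contain a <TargetFrameworkVersion> element – the solution path is a placeholder:

    # Minimal sketch: bump every project in a solution to the framework
    # version in $versionToUse. Assumes classic CSPROJ files containing
    # a <TargetFrameworkVersion> element. The solution path is a placeholder.
    $solutionPath = "C:\Source\MySolution.sln"
    $versionToUse = "v4.8"

    $solutionDir = Split-Path -Parent $solutionPath

    # .sln files reference projects as: Project("{GUID}") = "Name", "rel\path.csproj", "{GUID}"
    $projectPaths = Select-String -Path $solutionPath -Pattern '"([^"]+\.csproj)"' |
        ForEach-Object { $_.Matches } |
        ForEach-Object { Join-Path $solutionDir $_.Groups[1].Value }

    foreach ($projectPath in $projectPaths) {
        $content = Get-Content -Path $projectPath -Raw
        $updated = $content -replace '<TargetFrameworkVersion>[^<]+</TargetFrameworkVersion>', "<TargetFrameworkVersion>$versionToUse</TargetFrameworkVersion>"
        Set-Content -Path $projectPath -Value $updated
        Write-Host "Updated $projectPath to $versionToUse"
    }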

Potential improvements

This met my needs fine, but it could certainly be improved. Things it could do better:

  • Auto-scan for .sln files (see the sketch after this list).
  • Provide some form of report at the end.
  • Auto-checkout for TFS or Git (using the git CLI or TFS CLI).
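
For the first item, the starting point could be as simple as the sketch below, which finds every solution under the current directory so the upgrade can then be run against each result:

    # Sketch: discover every .sln file under the current directory.
    Get-ChildItem -Path . -Recurse -Filter *.sln |
        ForEach-Object { Write-Host "Found solution: $($_.FullName)" }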

Ensuring “dotnet test” TRX & Coverage files end up in SonarQube

I have written before about using SonarQube for static analysis, but one issue I never came back to was ensuring that the coverage files generated by a build pipeline actually get picked up by the Sonar Scanner to assess code coverage.

Note that in the following I am using the ‘dotnet test’ build step, rather than the ‘VS Test’ one. Do let me know if you find a nice workaround for the VS Test variant, as I couldn’t get it to drop coverage files!

The issue

The issue is that:

  • When using VS Test, TRX files are deleted automatically if using version 2+ of the VS Test task, as per this Stack Overflow post.
  • When I switched back to ‘dotnet test’, the same thing appeared to be happening.
  • .coverage files are not output by default.
  • TRX and .coverage files are placed in a temporary folder on the build agent rather than in the executing agent’s working directory.
  • Even though SonarQube could detect the tests, it would still register 0.0% code coverage!

Getting ‘dotnet test’ to collect coverage

The first step was to get the ‘dotnet test’ build step to collect the code coverage, and not just dump TRX files.

To do this, go to the “Arguments” field of the dotnet test build step and append --collect "Code Coverage", as well as ensuring that “Publish test results and code coverage” is enabled.
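
To sanity-check this outside of the pipeline, the equivalent command line is along these lines (the test project name is a placeholder, and note the “Code Coverage” collector only works on Windows):

    # Local equivalent: produces a .trx results file plus a .coverage file.
    # "MyProject.Tests" is a placeholder; the collector is Windows-only.
    dotnet test MyProject.Tests --logger trx --collect "Code Coverage"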

Ensure generated files are copied to the working directory

As the coverage files will end up in a temporary folder on the build agent, SonarQube will not be able to scan them.

We will need to add a new “Copy Files” build step, with the correct filter set, to get the .trx and .coverage files from the default temporary directory on the build agent into the test results folder of the workspace. To do this, add the “Copy Files” task into the build and place it after the test step. The source folder for the copy will be $(Agent.HomeDirectory)\_work\_temp and the target folder will be $(Common.TestResultsDirectory) – the contents can remain as ** but feel free to filter if required. Example below.
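
If you would rather use a script step than the Copy Files task, a rough PowerShell equivalent is sketched below – it relies on Azure DevOps exposing pipeline variables as environment variables (upper-cased, with dots replaced by underscores):

    # Sketch: copy .trx and .coverage files from the agent's temp folder
    # into the test results directory so the Sonar Scanner can find them.
    $source = Join-Path $env:AGENT_HOMEDIRECTORY '_work\_temp'
    $target = $env:COMMON_TESTRESULTSDIRECTORY

    Get-ChildItem -Path $source -Recurse -Include *.trx, *.coverage |
        Copy-Item -Destination $target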

If we run a build now, we should see files in the TestResults folder of the build agent’s working directory.

I didn’t have to make any changes to the configuration within SonarQube, as it should just pick up the coverage files. If I follow the above I get the following (let’s just ignore the fact that the number is low 😉).

CSPROJ Changes to test projects

One thing I did notice in the console when attempting to fix this code coverage issue was that I got a lot of warnings like:

SonarQube.Integration.targets: warning : The project does not have a valid ProjectGuid. Analysis results for this project will not be uploaded to SonarQube. 

As all my projects were .NET Core or .NET Standard, the CSPROJ files do not contain a <ProjectGuid> tag by default. As also suggested in this Stack Overflow answer, I added a GUID to my test project file. I am not 100% sure this is required, but it stopped the warnings appearing in my console and does no harm.
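
If you have several test projects to patch, a small sketch like the one below could add the tag for you (the project path is a placeholder, and it assumes a project file with a single <PropertyGroup>):

    # Sketch: add a <ProjectGuid> element to a project that lacks one.
    # Placeholder path; assumes a single <PropertyGroup> in the file.
    $projectPath = "C:\Source\MyProject.Tests\MyProject.Tests.csproj"

    [xml]$project = Get-Content -Path $projectPath
    if (-not $project.Project.PropertyGroup.ProjectGuid) {
        $guid = $project.CreateElement("ProjectGuid")
        $guid.InnerText = "{$([guid]::NewGuid().ToString().ToUpper())}"
        $project.Project.PropertyGroup.AppendChild($guid) | Out-Null
        $project.Save($projectPath)
    }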

Bonus

If you have multiple builds to update and you are using Azure DevOps, you can take advantage of “Task Groups”. These allow you to create a single build step which in turn executes a series of other build steps. Using the steps above, you can create a Task Group that runs the tests and makes sure the files are copied to the correct location for analysis. For example, I have the single build step below:

This means I can then just call this single build step in all my builds.

Setting up a self-hosted build agent for Azure DevOps

Azure DevOps has brilliant build pipeline options, and as easy as it is to get set up with their hosted build agents, it can get quite costly rather quickly. In this post I cover setting up a self-hosted build agent for use with Azure DevOps.

This post won’t cover setting up the build box itself, but that can be covered in a later guide if required. I actually have my build box scripted out using Chocolatey commands to allow building of .NET projects, which makes this step easier.

Pros/Cons

  • Pro: Full control over the build
  • Pro: Can have your builds build items or run services which simply aren’t available on the hosted agents.
  • Pro: Low cost. If you already have the hardware, why pay for Azure VMs?
  • Con: Maintenance and redundancy. If the machine goes down or breaks, it blocks your pipeline.
  • Con: Extra setup steps.

Prerequisites

Before starting you will need to make sure:

  • You are a collection/build admin
  • You have a server configured to build the appropriate software (i.e. the correct SDKs, etc., which won’t be covered in this post)

Personal Access Tokens

First of all, you will need a personal access token (PAT) for your account. This is used to allow your build agent access to Azure DevOps without hard-coding your credentials into your build scripts. You can use your own account for this, or a specially created service account – just note it will need permission to access the collections it will be building.

To get this, log in to your Azure Devops portal, and navigate to your security page.

In here, select “Personal Access Tokens” and then “New”. A panel will be displayed to configure this PAT. Specify a friendly and unique name, select the organisation you are using this token for, and then set its security access.

For the security access, I recommend selecting Full Access under “Scopes” so you can use this PAT for general DevOps activities. You can fine-tune the control, but you must ensure it has read/execute on the Build scope as an absolute minimum. For expiry I typically select the longest period, which is one year.

Agent download and configuration

Next up, you will need to navigate to Project Settings > Pipelines > Agent Pools.

Create a new Agent Pool with an appropriate name (You don’t *have* to do this and can just use the default pool if you wish, but I like the separation). When your pool is created you will see the option to add a new agent to it.

Clicking “New Agent” will give you the instructions for the OS of your choice. As per the instructions, download the agent (a ZIP file of roughly 130 MB) and place it somewhere sensible on the machine that will be acting as a build server. When extracted, run config.cmd in an elevated command window.

When running the config.cmd command you will require the following information (a scripted example follows the list):

  • Server URL
    • This will be https://dev.azure.com/{organisation name}
  • What type of authentication you will use (Just press return as it will default to PAT)
  • Your PAT to access the server, as set up in the first step.
  • The Pool to connect to. This will be the name of the agent pool created above.
  • The working folder. The folder to use for storing workspaces being built.
  • A name for this agent. Call it whatever you want, but I would personally always include the machine name as it makes it easier to work out which agents are running.
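
For reference, the same answers can be supplied non-interactively using the agent’s unattended flags – every value below is a placeholder:

    # Unattended configuration example – all values are placeholders.
    .\config.cmd --unattended `
        --url "https://dev.azure.com/{organisation name}" `
        --auth pat `
        --token "<your PAT>" `
        --pool "MyBuildPool" `
        --agent "$env:COMPUTERNAME-agent01" `
        --work "_work"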

Providing all the above settings are specified correctly and there are no authentication issues, the agent should now attempt to start.

Confirming the agent is active

Going back to the Agent Pools configuration screen you should now see the agent listed in the appropriate agent pool.

If the agent is not displaying after a few minutes, something went wrong during setup.

If the agent is displaying offline, try running the “run.cmd” command in an elevated command window on your build server.

Now all you have to do is select your new agent pool when creating your next build!