Setting up a self-hosted build agent for Azure DevOps

Azure DevOps has brilliant build pipeline options, and as easy as it is to get started with their hosted build agents, the cost can climb quite quickly. In this post I cover setting up a self-hosted build agent for use with Azure DevOps.

This post won’t cover setting up the build box itself; that can be covered in a later guide if required. I actually have my build box scripted out with Chocolatey commands that install everything needed to build .NET projects, which makes this step easier.

Pros/Cons

  • Pro: Full control over the build
  • Pro: Your builds can use tools or run services which simply aren’t available on the hosted agents.
  • Pro: Low cost. If you already have the hardware, why pay for Azure VMs?
  • Con: Maintenance and redundancy are on you. If the machine goes down or breaks, it blocks your pipeline.
  • Con: Extra setup steps.

Prerequisites

Before starting you will need to make sure:

  • You are a collection/build admin
  • You have a server configured to build the appropriate software (i.e. the correct SDKs etc., which won’t be covered in this post)

Personal Access Tokens

First of all, you will need a personal access token (PAT) for your account. This is used to allow your build agent to authenticate with Azure DevOps without hard-coding your credentials into your build scripts. You can use your own account for this, or a specially created service account – just note that it will need permission to access the collections it will be building.

To get this, log in to your Azure DevOps portal and navigate to your security page.

In here, select “Personal Access Tokens” and then “New”. A panel will be displayed to configure this PAT. Specify a friendly and unique name, select the organisation you are using this token for, and then set its security access.

For the security access, I recommend selecting Full Access under “Scopes” so you can use this PAT for general DevOps activities. You can fine-tune the control, but you must ensure it has read/execute on the build scope as an absolute minimum. For expiry I typically select the longest period available, which is one year.
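As an optional sanity check (my own habit, not a required step), you can confirm the PAT works before configuring anything by calling the Azure DevOps REST API with it. The snippet below is a rough sketch: it lists the projects in an organisation by passing the PAT as the password of a Basic auth header with a blank username. The organisation name and token are placeholders.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class PatSanityCheck
{
    static async Task Main()
    {
        const string organisation = "my-org";   // placeholder: your organisation name
        const string pat = "<your PAT>";        // placeholder: the token created above

        using (var client = new HttpClient())
        {
            // Azure DevOps accepts a PAT as the password of a Basic auth header with an empty username.
            var token = Convert.ToBase64String(Encoding.ASCII.GetBytes($":{pat}"));
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", token);

            var response = await client.GetAsync(
                $"https://dev.azure.com/{organisation}/_apis/projects?api-version=5.1");

            // A 200 means the PAT authenticated; anything else usually means the token or its scopes are wrong.
            Console.WriteLine($"{(int)response.StatusCode} {response.StatusCode}");
        }
    }
}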

Agent download and configuration

Next, navigate to Project Settings > Pipelines > Agent Pools.

Create a new Agent Pool with an appropriate name (You don’t *have* to do this and can just use the default pool if you wish, but I like the separation). When your pool is created you will see the option to add a new agent to it.


Clicking “New Agent” will give you the instructions for the OS of your choice. As per the instructions, download the agent (a ZIP of roughly 130 MB) and place it somewhere sensible on the machine that will be acting as a build server. Once extracted, run config.cmd in an elevated command window.

When running the config.cmd command you will require the following information:

  • Server URL
    • This will be https://dev.azure.com/{organisation name}
  • What type of authentication you will use (Just press return as it will default to PAT)
  • Your PAT to access the server, as set up in the first step.
  • The Pool to connect to. This will be the name of the agent pool created above.
  • The working folder. The folder to use for storing workspaces being built.
  • A name for this agent. Call it whatever you want, but I would personally always include the machine name as it makes it easier to work out which agents are running.

Providing all the above settings are specified correctly and there are no authentication issues, it should now attempt to start.
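As an aside, config.cmd can also take all of these answers as command-line flags if you would rather script the configuration than answer the prompts. The flag names below are from memory, so double-check them against the instructions page before relying on this:

config.cmd --unattended ^
  --url https://dev.azure.com/{organisation name} ^
  --auth pat --token <your PAT> ^
  --pool <your pool name> ^
  --agent %COMPUTERNAME%-agent ^
  --work _work ^
  --runAsService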

Confirming the agent is active

Going back to the Agent Pools configuration screen you should now see the agent listed in the appropriate agent pool.

If the agent is not displaying after a few minutes, something went wrong in setup.

If the agent is displaying offline, try running the “run.cmd” command in an elevated command window on your build server.

Now all you have to do is select your new agent pool when creating your next build!

Remote NLOG logging with Azure Functions (Part two) – Persisting data into Azure Cosmos DB.

Last time, I got a very basic C# Azure Function hooked up to accept a request from an NLOG web service target. This time, I will be attempting to persist (insert) the incoming log information into an Azure Cosmos DB container, directly from my Azure Function in VS Code.

Disclaimer: This post is more a set of mental notes for me. I am nowhere near an expert in this area, and Cosmos DB is still new (hours old) to me. Drop me a line if I have missed the point massively 🙂

Setting up Cosmos DB, databases and containers.

To get started I will create a new Cosmos DB account in the Azure Portal by selecting “Azure Cosmos DB” from the resources panel and selecting “Create new”.

On this page I need to specify:

  • Subscription: Your Azure subscription to create this under.
  • Resource Group: You should already have a resource group which matches the Azure Function you created earlier.
  • Instance Details
    • Account Name: This will be prefixed to the URL – i.e. blah.documents.azure.com
    • API: For this example I will be using Core (SQL) so I can create a document database and query using SQL syntax.
  • Location: Select the closest location to you.
  • Other options: Other options like Geo-Redundancy etc can be left as disabled for now.

Select “Review and Create”, then on the next screen “Create” – Providing you are happy with the inputs.

The status will switch to “Your deployment is complete” when it is ready to be used. It shouldn’t take longer than a couple of minutes.

Clicking on “Go to resource”, or navigating to your new Cosmos DB via the Resource manager, will load up the quick-start window for this database. First, however, we need a “container”. From the Azure Cosmos DB account we just created, select “Add Container”.

Here we have a few inputs:

  • Database ID. I didn’t have a database, so needed to create one. If you already have one, specify the name here.
  • Throughput: 400 RU/s (Request Units per second) should be more than enough for basic testing and operation for my purposes.
  • Container ID: The container lives inside the new/existing database. My database is azlogger, where I want all my logging-related data, with a container called azlogger-logs for the logs I will be storing.
  • Partition key: I used “loggerName” as my partition key. See this video for info, but essentially I believe the partition key controls how items are grouped into logical partitions once the data outgrows a single partition’s limit (~10GB?). I’m not 100% sure, to be honest, without reading more; I just went with a recommended Stack Overflow post.

Updating the Azure function to connect with Cosmos DB

We first need to use the CosmosDB package in this project, so in the terminal, run:

dotnet add package Microsoft.Azure.WebJobs.Extensions.CosmosDB

Now I need to set up the solution so it’s ready for using Cosmos DB.

In local.settings.json I added my connection string:

 {
    "IsEncrypted": false,
    "Values": {
        "AzureWebJobsStorage": "",
        "FUNCTIONS_WORKER_RUNTIME": "dotnet",
        "MyCosmosDBConnection": "<conn string>"
    }
} 

The connection string value comes from your Cosmos DB dashboard, under “Keys” -> “Primary connection string”.

Now I will need a C# model to bind against. I made a simple LogDetail class with the required fields. Note that I am using JsonProperty attributes on the fields. I read conflicting blog posts about whether anything other than the id actually needs annotating, but I found no harm in leaving them in for now.

    using Newtonsoft.Json;

    // Maps one log entry to the JSON document stored in the azlogger-logs container.
    public class LogDetail
    {
        [JsonProperty("id")]
        public string Id { get; set; }

        [JsonProperty("timestamp")]
        public string Timestamp { get; set; }

        [JsonProperty("logName")]
        public string LogName { get; set; }

        [JsonProperty("logLevel")]
        public string LogLevel { get; set; }

        [JsonProperty("message")]
        public string Message { get; set; }
    }

Now it’s time to update the main method/function! This was actually the hardest part for me (partly due to my lack of experience with this tech); the documentation was a little confusing, sometimes misleading, and often specific to a particular scenario.

I’m not sure how correct this is, but I ended up changing my main method so that it read:

    public static class Log
    {
        [FunctionName("Log")]
        public static void AcceptLogRequest(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = "Log")] HttpRequest req,
            [CosmosDB(
                databaseName: "azlogger",
                collectionName: "azlogger-logs",
                ConnectionStringSetting = "MyCosmosDBConnection",
                Id = "{sys.randguid}",
                PartitionKey = "/loggerName")]
            out LogDetail logDetail,
            ILogger log)
        {
            log.LogInformation("HTTP trigger fired for log entry.");

            // Read the form fields posted by the NLOG web service target.
            string timestamp = req.Form["timestamp"];
            string loggerName = req.Form["loggerName"];
            string loggerLevel = req.Form["loggerLevel"];
            string message = req.Form["message"];

            var res = $"{timestamp} | {loggerName} | {loggerLevel.ToUpper()} | {message}";
            log.LogInformation(res);

            // Assigning the out parameter is what queues the document; the binding
            // writes it to the azlogger-logs container when the function returns.
            logDetail = new LogDetail();
            logDetail.Timestamp = timestamp;
            logDetail.LogLevel = loggerLevel;
            logDetail.LogName = loggerName;
            logDetail.Message = message;
        }
    }

The main changes were:

  • Making it a synchronous void method for saving data into Cosmos DB, which was recommended in the Microsoft docs here. This could be wrong, but it works and is in line with their docs (an async alternative is sketched just after this list).
  • Changing LogDetail logDetail to be out LogDetail logDetail.
  • Adding in the Cosmos DB attribute (see below).
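If you would rather keep the method asynchronous, the output can instead be bound to an IAsyncCollector<LogDetail>. The sketch below is my own rough take, reusing the same database, container and connection setting names as above, so treat it as a starting point rather than the definitive approach:

    [FunctionName("Log")]
    public static async Task<IActionResult> AcceptLogRequest(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = "Log")] HttpRequest req,
        [CosmosDB(
            databaseName: "azlogger",
            collectionName: "azlogger-logs",
            ConnectionStringSetting = "MyCosmosDBConnection")]
        IAsyncCollector<LogDetail> logItems,
        ILogger log)
    {
        log.LogInformation("HTTP trigger fired for log entry.");

        var logDetail = new LogDetail
        {
            Timestamp = req.Form["timestamp"],
            LogName = req.Form["loggerName"],
            LogLevel = req.Form["loggerLevel"],
            Message = req.Form["message"]
        };

        // AddAsync queues the document; it is written to the container when the function completes.
        await logItems.AddAsync(logDetail);
        return new OkResult();
    }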

The CosmosDB attribute has the following options:

  • databaseName: This is the main database name for the database set up in the previous step.
  • collectionName: This is the container name set up in the previous step
  • ConnectionStringSetting: The app setting name placed inside local.settings.json in the previous step.
  • id: The Id entry for the record. For this I used an inbuilt system parameter of {sys.randguid}
  • partitionKey: The partition key I specified in the earlier setup step.

Now, if I debug (or deploy) this Azure Function and cause my application to insert a bunch of NLOG entries, it should now create some entries in a Cosmos DB container.

To view the results, I can go to Cosmos DB on the Azure portal and select “Data Explorer”. From here, the Database and Container can be expanded to show the “Items” in the container – In my case, a load of NLOG entries.

Conclusion

It’s early days, but deploying a new Azure Function is trivially easy with the most recent tooling; the only real difficulty seems to be nailing down decent Azure documentation (which has always been the case, in my experience!).

The next stages would be to look into Azure Function triggers on a timer to produce MI reports, or time/action based triggers to forward captured events onto the appropriate person.

Remote NLOG logging with Azure Functions (Part one).

This is part of a journey I was on today to learn about Azure Functions and Cosmos DB. The final code for the Azure Functions element can be found at https://github.com/Wabbbit/AzLog

  • Part one: Setup, Creating my first function, forwarding NLOG events and deploying to Azure within VS Code.
  • Part two: persisting the incoming data using Cosmos DB.

Disclaimer: This post is more like my mental notes; this tech is super fresh to me, so take it with a pinch of salt.

What I want to achieve

Logging is a necessity in any application; I can’t count the number of times having some verbose logging has saved me hours of debugging.

Currently, I almost exclusively use NLOG for .NET projects. I typically structure my logging into discrete, separate loggers (i.e. startup, API, business logic failures, etc.), which are usually configured to write to .txt files and/or the system event log.

This is great for our internal dev/SIT/QAT machines, and also when a client rings up about an error they encounter as they can just provide the appropriate log. The downside of this of course is that we don’t know if a client (With a self-hosted, remote installation) has a fatal error until they contact us, and with some clients the chain of reporting means the system could have been impacted for a short while before we get notified.

What if we could remotely capture major errors? As a proof of concept I will be attempting to use the NLOG web service target to talk to a C# Azure Function.

This assumes previous knowledge of working with NLOG and C#, but not Azure.

Creating my first Azure Function.

Prerequisites

Azure functions can be created directly within the Azure Portal, but for this demo I will be using VS Code.

First we need to make sure the system is set up to work with Azure Functions. We will need the following:

  • VS Code
  • Azure Functions Core Tools: For this we can use NPM: npm install -g azure-functions-core-tools. Note that this also exists on Chocolatey, but that version has issues with x64 debugging in VS Code.
  • Azure Functions VS Code extension.
  • C# VS Code extension.
  • and later on, an Azure account so we can deploy

Let’s make a function!

With the Azure Functions extension installed, select the Azure menu and then “Create new project”. Don’t worry about connecting to your Azure subscription yet if you have not done so.

Once a folder is specified, a language must be chosen. I chose C#.

Next, the template for the first function will need to be specified. For this demo I will be using the HttpTrigger, which means it will fire on receipt of an HTTP GET or POST (like any standard API).

The next panel will ask for a function name. For this I just chose “Log”.

A new Azure Function will be created. Let’s take a look at the files that are created:

  • .vscode: All the standard VS Code items which assist in build, debug and required extensions.
  • *.csproj: The project file for this Azure Function.
  • <function-name>.cs : This is the function that was created by providing a name in the last dialog. It is essentially like a Web API controller; a rough sketch of the generated file is shown below.
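For context, the generated <function-name>.cs looks roughly like the standard HttpTrigger template below. I am reproducing it from memory, so the exact content may vary with the tooling version and the authorisation level you pick, and the namespace is just a placeholder:

using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;

namespace MyFunctionApp // placeholder: matches whatever your project is called
{
    public static class Log
    {
        [FunctionName("Log")]
        public static async Task<IActionResult> Run(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
            ILogger log)
        {
            log.LogInformation("C# HTTP trigger function processed a request.");

            // The template echoes a "name" value taken from the query string or the request body.
            string name = req.Query["name"];

            string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
            dynamic data = JsonConvert.DeserializeObject(requestBody);
            name = name ?? data?.name;

            return name != null
                ? (ActionResult)new OkObjectResult($"Hello, {name}")
                : new BadRequestObjectResult("Please pass a name on the query string or in the request body");
        }
    }
}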

Pressing F5 should restore any packages, start a debug session and output the temporary URL (e.g. http://localhost:7071/api/Log) into the terminal.

Navigating to that URL with a browser or Postman should return the function’s default response, confirming it is running.

Hooking up NLOG WebService target

Now that I have a base function (even if it doesn’t do anything useful yet), I can update NLOG in my project to make a web request with some information.

In my NLOG.config, I need to add a new target between the <targets></targets> tags:

<target type='WebService'
        name='azurelogger'
        url='http://localhost:7071/api/Log'
        protocol='HttpPost'
        encoding='UTF-8'>
  <parameter name='timestamp' type='System.String' layout='${longdate}'/>
  <parameter name='loggerName' type='System.String' layout='${logger}'/>
  <parameter name='loggerLevel' type='System.String' layout='${level}'/>
  <parameter name='message' type='System.String' layout='${message}'/>
</target>

What we have done here is:

  • Create a new NLOG target of type “Web Service”, pointing at the URL from the previous step.
  • Set up a few parameters to send across with our request, which are NLOG parameters for things like the log message, the time the entry was created, etc.

Now I need to ensure that one of the loggers is set to use the new “azurelogger”. For example:

<rules>   
  <logger name="StartupLogger" minlevel="Error" writeTo="event, azurelogger" />
</rules>

Now, if I do an IIS reset on the machine where my NLOG config lives and manually trigger an error message, the new Azure Function should receive all the information it requires.
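By “trigger an error message manually” I just mean logging through a logger that is routed to azurelogger, something along these lines (a throwaway snippet; the logger name just needs to match one of your rules):

using NLog;

public static class AzureLoggerSmokeTest
{
    public static void Fire()
    {
        // Resolve the logger that the "StartupLogger" rule routes to the azurelogger target.
        Logger logger = LogManager.GetLogger("StartupLogger");

        // Error level satisfies the minlevel="Error" rule, so this entry is forwarded to the Azure Function.
        logger.Error("Test error to exercise the azurelogger web service target");
    }
}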

However, as our function doesn’t *do* anything, we can only prove this by debugging the function in VS Code. To do this I placed a breakpoint within the function and inspected the req object.

Here, I can see that all the fields I wanted are present!
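You can also exercise the function without NLOG or IIS at all by posting the same form fields directly. This is just a hypothetical console snippet for poking the endpoint; the field names mirror the parameters declared on the web service target:

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

class FunctionSmokeTest
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // Field names mirror the parameters on the NLOG web service target above.
            var form = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                ["timestamp"] = DateTime.UtcNow.ToString("o"),
                ["loggerName"] = "StartupLogger",
                ["loggerLevel"] = "Error",
                ["message"] = "Manual test entry"
            });

            var response = await client.PostAsync("http://localhost:7071/api/Log", form);
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}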

Changing function code to accept incoming NLOG params

This is fairly trivial – I altered the contents of the function to be as per below. In this code, I simply read the four items that my NLOG config is set to provide. I also changed the method name to something a little more descriptive than Run(); however, this doesn’t actually control the endpoint name. To explicitly set the endpoint name I also changed the Route from null to “Log”. If I wanted to hit /api/blah instead of /api/Log, I would simply change the route name.

  public static class Log
    {
        [FunctionName("Log")]
        public static async Task<IActionResult> AcceptLogRequest(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = "Log")] HttpRequest req,
            ILogger log)
        {
            log.LogInformation("HTTP trigger fired for log entry.");
            
            string timestamp = req.Form["timestamp"]; 
            string loggerName = req.Form["loggerName"]; 
            string loggerLevel = req.Form["loggerLevel"]; 
            string message = req.Form["message"]; 
           
            var res = $"{timestamp}   | {loggerName} | {loggerLevel.ToUpper()} | {message}";
            log.LogInformation(res);

            //TODO: Persist the data

            return (ActionResult)new OkObjectResult(res);

        }
    } 

Now, if I debug and cause NLOG to log an error, I can see the terminal window and debugger capturing the same information that gets placed in my event log.

Deploying to Azure

I will skip the step of connecting to Azure, which is as simple as just pressing “Sign in” and following the instructions.

To deploy from VS Code, simply select “Deploy to Function App” and then provide a name for the new Function App to deploy to.

It takes a while to set up a new Function App, but when it’s done, simply click “Deploy to Function App”. The API will then be accessible over the web (via its azurewebsites.net URL) and through the Azure dashboard.

Wrap up, until next time…

So far I have a new Azure Function, which is being contacted by the NLOG Web Service target.

Next time I will attempt to persist the incoming logs using Cosmos DB.