
1. Why wire AI agents straight into Azure DevOps?

Manual DevOps and CI/CD hand-offs slow down modern teams. By exposing Azure DevOps through the open Model Context Protocol (MCP), your LLM agents can raise work items, queue builds or approve releases on your behalf—no brittle REST glue or browser automation required.

2. What the server gives you today

The repository contains thin .NET libraries for every Azure DevOps “hub”, all surfaced as discoverable MCP tools:

  • Boards – create Epics, User Stories and Tasks
  • Repos – open PRs, add reviewers, merge or abandon
  • Pipelines – queue, cancel, retry and fetch logs
  • Artifacts – publish or yank packages
  • Test Plans – manage plans, suites and cases
  • Wiki – create pages or entire wikis

Everything runs on .NET 9 and talks to Azure DevOps via the official SDK or REST API, keeping the server implementation replaceable.
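To give a feel for what one of these tools wraps, here's a minimal sketch of a Boards-style "create a User Story" call against the official SDK. It's my own illustration, not the repo's code: the org URL, project name and field values are placeholders.


        // Illustrative sketch only: roughly what a Boards MCP tool wraps.
        // The org URL, project name and title below are placeholders.
        using Microsoft.TeamFoundation.WorkItemTracking.WebApi;
        using Microsoft.TeamFoundation.WorkItemTracking.WebApi.Models;
        using Microsoft.VisualStudio.Services.Common;
        using Microsoft.VisualStudio.Services.WebApi;
        using Microsoft.VisualStudio.Services.WebApi.Patch;
        using Microsoft.VisualStudio.Services.WebApi.Patch.Json;

        // authenticate with the same org URL and PAT the server itself uses
        var connection = new VssConnection(
            new Uri("https://dev.azure.com/contoso"),
            new VssBasicCredential(string.Empty, Environment.GetEnvironmentVariable("ADO_PAT")!));
        var client = await connection.GetClientAsync<WorkItemTrackingHttpClient>();

        // work items are created by patching fields onto an empty document
        var patch = new JsonPatchDocument
        {
            new JsonPatchOperation
            {
                Operation = Operation.Add,
                Path = "/fields/System.Title",
                Value = "Wire the MCP server into CI"
            }
        };

        WorkItem story = await client.CreateWorkItemAsync(patch, "Contoso", "User Story");
        Console.WriteLine($"Created work item #{story.Id}");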

3. Getting the bits


        # clone and build
        git clone https://github.com/Jordiag/azure-devops-mcp-server.git
        cd azure-devops-mcp-server
        dotnet build
        # (optional) run the test suites
        dotnet test
        

4. Spinning up the server locally

The MCP server is a console app under src/Dotnet.AzureDevOps.Mcp.Server. Supply a personal access token (PAT) and the URL of your Azure DevOps organisation as environment variables:


        export ADO_ORG_URL="https://dev.azure.com/contoso"
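        # paste a PAT with the scopes you need; left blank in this post on purpose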
        export ADO_PAT=""
        dotnet run --project src/Dotnet.AzureDevOps.Mcp.Server
        

By default the server listens on http://localhost:5010 and serves an SSE endpoint that agents can subscribe to.
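A quick way to confirm it's alive is to stream the endpoint with curl. The /sse path below is the conventional MCP route and an assumption on my part; adjust it if the server maps the endpoint elsewhere.


        # smoke test: subscribe to the event stream (-N disables buffering)
        # the /sse path is an assumption; adjust if the server uses another route
        curl -N http://localhost:5010/sse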

5. Talking to it from Semantic Kernel

Because each MCP tool is announced through function-calling metadata, you can let Semantic Kernel discover and auto-invoke them. The snippet below initialises an agent, registers the server's tools and hands it a plain-English request.


        // Program.cs
        using Microsoft.SemanticKernel;
        using Microsoft.SemanticKernel.Agents;
        using Microsoft.SemanticKernel.Connectors.OpenAI;
        using ModelContextProtocol.SemanticKernel.Extensions;

        var serverUrl = Environment.GetEnvironmentVariable("MCP_SERVER_URL")!;
        var openAiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY")!;
        var model = Environment.GetEnvironmentVariable("OPENAI_MODEL")!;

        // build a kernel with an OpenAI chat-completion service
        IKernelBuilder builder = Kernel.CreateBuilder();
        builder.AddOpenAIChatCompletion(model, openAiKey);
        var kernel = builder.Build();

        // register every MCP tool exposed by the server as a kernel function
        await kernel.Plugins.AddMcpFunctionsFromSseServerAsync("ado-mcp", serverUrl);

        // let the model pick the right tool and invoke it automatically
        var settings = new OpenAIPromptExecutionSettings
        {
            ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions
        };

        var agent = new ChatCompletionAgent
        {
            Name = "DevOpsBot",
            Kernel = kernel,
            Instructions = "Use available tools to answer the user.",
            Arguments = new KernelArguments(settings)
        };

        await foreach (var update in agent.InvokeAsync(
            "Queue the build definition #123 in project Contoso and return the run URL."))
        {
            Console.WriteLine(update.Message);
        }
        

The LLM interprets the plain-English instruction, discovers the queuePipeline tool and fires the build without you writing explicit API calls. Magic.
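You can also poke a single tool without the agent loop; the server's built-in echo test tool is handy for a first sanity check. The function and parameter names below are assumptions about how that tool is announced, so enumerate the plugin to see the real ones.


        // Call one discovered MCP tool directly, no agent required.
        // "echo" and its "message" parameter are assumptions about the
        // built-in echo test tool; inspect kernel.Plugins["ado-mcp"]
        // for the names the server actually announces.
        var result = await kernel.InvokeAsync("ado-mcp", "echo",
            new KernelArguments { ["message"] = "ping" });
        Console.WriteLine(result);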

6. Under the bonnet: tests and quality gates

The repo ships integration tests and full agent end-to-end tests that run against a real Azure DevOps organisation, plus SonarCloud analysis with maintainability, security and coverage badges.
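When iterating locally you'll probably want to skip the suites that need a live org. Trait filtering in dotnet test does this; the Category=Integration trait name is my guess, so check the attributes in the test projects.


        # skip tests that hit a real Azure DevOps org
        # (the Category trait name is a guess; check the repo's test attributes)
        dotnet test --filter "Category!=Integration"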

7. Final thoughts

If you’re already experimenting with agentic workflows, the Azure DevOps MCP Server hands your bots a first-class CI/CD toolbox while keeping credentials on your machine. Give it a try, star the repo and open an issue with your feedback—new ideas land here weekly.
