test
2026-01-24 03:05:12 +11:00
parent f78f2388b3
commit 539852f81c
2584 changed files with 287471 additions and 0 deletions

.github/.linkspector.yml

@@ -0,0 +1,32 @@
dirs:
- .
excludedFiles:
- ./python/CHANGELOG.md
ignorePatterns:
- pattern: "/github/"
- pattern: "./actions"
- pattern: "./blob"
- pattern: "./issues"
- pattern: "./discussions"
- pattern: "./pulls"
- pattern: "https:\/\/platform.openai.com"
- pattern: "http:\/\/localhost"
- pattern: "http:\/\/127.0.0.1"
- pattern: "https:\/\/localhost"
- pattern: "https:\/\/127.0.0.1"
- pattern: "0001-spec.md"
- pattern: "0001-madr-architecture-decisions.md"
- pattern: "https://api.powerplatform.com/.default"
- pattern: "https://your-resource.openai.azure.com/"
- pattern: "http://host.docker.internal"
- pattern: "https://openai.github.io/openai-agents-js/openai/agents/classes/"
# excludedDirs:
# Folders which include links to localhost, since those are not ignored with regular expressions
baseUrl: https://github.com/microsoft/agent-framework/
aliveStatusCodes:
- 200
- 206
- 429
- 500
- 503
useGitIgnore: true

.github/CODEOWNERS

@@ -0,0 +1,4 @@
# Code ownership assignments
# https://docs.github.com/repositories/managing-your-repositorys-settings-and-features/customizing-your-repository/about-code-owners
python/packages/azurefunctions/ @microsoft/agentframework-durabletask-developers

.github/ISSUE_TEMPLATE/config.yml

@@ -0,0 +1,8 @@
blank_issues_enabled: true
contact_links:
- name: Documentation
url: https://aka.ms/agent-framework
about: Check out the official documentation for guides and API reference.
- name: Discussions
url: https://github.com/microsoft/agent-framework/discussions
about: Ask questions about Agent Framework.

.github/ISSUE_TEMPLATE/dotnet-issue.yml

@@ -0,0 +1,70 @@
name: .NET Bug Report
description: Report a bug in the Agent Framework .NET SDK
title: ".NET: [Bug]: "
labels: ["bug", ".NET"]
type: bug
body:
- type: textarea
id: description
attributes:
label: Description
description: Please provide a clear and detailed description of the bug.
placeholder: |
- What happened?
- What did you expect to happen?
- Steps to reproduce the issue
validations:
required: true
- type: textarea
id: code-sample
attributes:
label: Code Sample
description: If applicable, provide a minimal code sample that demonstrates the issue.
placeholder: |
```csharp
// Your code here
```
render: markdown
validations:
required: false
- type: textarea
id: error-messages
attributes:
label: Error Messages / Stack Traces
description: Include any error messages or stack traces you received.
placeholder: |
```
Paste error messages or stack traces here
```
render: markdown
validations:
required: false
- type: input
id: dotnet-packages
attributes:
label: Package Versions
description: List the Microsoft.Agents.* packages and versions you are using
placeholder: "e.g., Microsoft.Agents.AI.Abstractions: 1.0.0, Microsoft.Agents.AI.OpenAI: 1.0.0"
validations:
required: true
- type: input
id: dotnet-version
attributes:
label: .NET Version
description: What version of .NET are you using?
placeholder: "e.g., .NET 8.0"
validations:
required: false
- type: textarea
id: additional-context
attributes:
label: Additional Context
description: Add any other context or screenshots that might be helpful.
placeholder: "Any additional information..."
validations:
required: false


@@ -0,0 +1,51 @@
name: Feature Request
description: Request a new feature for Microsoft Agent Framework
title: "[Feature]: "
type: feature
body:
- type: textarea
id: description
attributes:
label: Description
description: Please describe the feature you'd like and why it would be useful.
placeholder: |
Describe the feature you're requesting:
- What problem does it solve?
- What would the expected behavior be?
- Are there any alternatives you've considered?
validations:
required: true
- type: textarea
id: code-sample
attributes:
label: Code Sample
description: If applicable, provide a code sample showing how you'd like to use this feature.
placeholder: |
```python
# Your code here
```
or
```csharp
// Your code here
```
render: markdown
validations:
required: false
- type: dropdown
id: language
attributes:
label: Language/SDK
description: Which language/SDK does this feature apply to?
options:
- Both
- .NET
- Python
- Other / Not Applicable
default: 0
validations:
required: false

.github/ISSUE_TEMPLATE/python-issue.yml

@@ -0,0 +1,70 @@
name: Python Bug Report
description: Report a bug in the Agent Framework Python SDK
title: "Python: [Bug]: "
labels: ["bug", "Python"]
type: bug
body:
- type: textarea
id: description
attributes:
label: Description
description: Please provide a clear and detailed description of the bug.
placeholder: |
- What happened?
- What did you expect to happen?
- Steps to reproduce the issue
validations:
required: true
- type: textarea
id: code-sample
attributes:
label: Code Sample
description: If applicable, provide a minimal code sample that demonstrates the issue.
placeholder: |
```python
# Your code here
```
render: markdown
validations:
required: false
- type: textarea
id: error-messages
attributes:
label: Error Messages / Stack Traces
description: Include any error messages or stack traces you received.
placeholder: |
```
Paste error messages or stack traces here
```
render: markdown
validations:
required: false
- type: input
id: python-packages
attributes:
label: Package Versions
description: List the agent-framework-* packages and versions you are using
placeholder: "e.g., agent-framework-core: 1.0.0, agent-framework-azure-ai: 1.0.0"
validations:
required: true
- type: input
id: python-version
attributes:
label: Python Version
description: What version of Python are you using?
placeholder: "e.g., Python 3.11"
validations:
required: false
- type: textarea
id: additional-context
attributes:
label: Additional Context
description: Add any other context or screenshots that might be helpful.
placeholder: "Any additional information..."
validations:
required: false


@@ -0,0 +1,48 @@
name: Azure Functions Integration Test Setup
description: Prepare local emulators and tools for Azure Functions integration tests
runs:
using: "composite"
steps:
- name: Start Durable Task Scheduler Emulator
shell: bash
run: |
if [ "$(docker ps -aq -f name=dts-emulator)" ]; then
echo "Stopping and removing existing Durable Task Scheduler Emulator"
docker rm -f dts-emulator
fi
echo "Starting Durable Task Scheduler Emulator"
docker run -d --name dts-emulator -p 8080:8080 -p 8082:8082 -e DTS_USE_DYNAMIC_TASK_HUBS=true mcr.microsoft.com/dts/dts-emulator:latest
echo "Waiting for Durable Task Scheduler Emulator to be ready"
timeout 30 bash -c 'until curl --silent http://localhost:8080/healthz; do sleep 1; done'
echo "Durable Task Scheduler Emulator is ready"
- name: Start Azurite (Azure Storage emulator)
shell: bash
run: |
if [ "$(docker ps -aq -f name=azurite)" ]; then
echo "Stopping and removing existing Azurite (Azure Storage emulator)"
docker rm -f azurite
fi
echo "Starting Azurite (Azure Storage emulator)"
docker run -d --name azurite -p 10000:10000 -p 10001:10001 -p 10002:10002 mcr.microsoft.com/azure-storage/azurite
echo "Waiting for Azurite (Azure Storage emulator) to be ready"
timeout 30 bash -c 'until curl --silent http://localhost:10000/devstoreaccount1; do sleep 1; done'
echo "Azurite (Azure Storage emulator) is ready"
- name: Start Redis
shell: bash
run: |
if [ "$(docker ps -aq -f name=redis)" ]; then
echo "Stopping and removing existing Redis"
docker rm -f redis
fi
echo "Starting Redis"
docker run -d --name redis -p 6379:6379 redis:latest
echo "Waiting for Redis to be ready"
timeout 30 bash -c 'until docker exec redis redis-cli ping | grep -q PONG; do sleep 1; done'
echo "Redis is ready"
- name: Install Azure Functions Core Tools
shell: bash
run: |
echo "Installing Azure Functions Core Tools"
npm install -g azure-functions-core-tools@4 --unsafe-perm true
func --version

.github/actions/python-setup/action.yml

@@ -0,0 +1,25 @@
name: Reusable Setup UV
description: Reusable workflow to setup uv environment
inputs:
python-version:
description: The Python version to set up
required: true
os:
description: The operating system to set up
required: true
runs:
using: "composite"
steps:
- name: Set up uv
uses: astral-sh/setup-uv@v6
with:
version-file: "python/pyproject.toml"
enable-cache: true
cache-suffix: ${{ inputs.os }}-${{ inputs.python-version }}
cache-dependency-glob: "**/uv.lock"
- name: Install the project
shell: bash
run: |
cd python && uv sync --all-packages --all-extras --dev -U --prerelease=if-necessary-or-explicit

.github/copilot-instructions.md

@@ -0,0 +1,69 @@
# GitHub Copilot Instructions
This repository contains both Python and C# code.
All python code resides under the `python/` directory.
All C# code resides under the `dotnet/` directory.
The purpose of the code is to provide a framework for building AI agents.
When contributing to this repository, please follow these guidelines:
## C# Code Guidelines
Here are some general guidelines that apply to all code.
- The top of all *.cs files should have a copyright notice: `// Copyright (c) Microsoft. All rights reserved.`
- All public methods and classes should have XML documentation comments (see the short sketch after this list).
- After adding, modifying or deleting code, run `dotnet build`, and then fix any reported build errors.
- After adding or modifying code, run `dotnet format` to automatically fix any formatting errors.
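For illustration only, a minimal sketch of a file that follows these conventions (the namespace and `Greeter` class are hypothetical, not taken from this repository):
```csharp
// Copyright (c) Microsoft. All rights reserved.

namespace Microsoft.Agents.AI.Samples.Illustrative; // hypothetical namespace, used only for this sketch

/// <summary>
/// Builds greeting messages.
/// </summary>
public sealed class Greeter
{
    /// <summary>
    /// Builds a greeting for the specified name.
    /// </summary>
    /// <param name="name">The name to greet.</param>
    /// <returns>The greeting text.</returns>
    public string Greet(string name) => $"Hello, {name}!";
}
```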
### C# Sample Code Guidelines
Sample code is located in the `dotnet/samples` directory.
When adding a new sample, follow these steps:
- The sample should be a standalone .net project in one of the subdirectories of the samples directory.
- The directory name should be the same as the project name.
- The directory should contain a README.md file that explains what the sample does and how to run it.
- The README.md file should follow the same format as other samples.
- The csproj file should match the directory name.
- The csproj file should be configured in the same way as other samples.
- The project should preferably contain a single Program.cs file that contains all the sample code.
- The sample should be added to the solution file in the samples directory.
- The sample should be tested to ensure it works as expected.
- A reference to the new samples should be added to the README.md file in the parent directory of the new sample.
The sample code should follow these guidelines (a sketch of a `Program.cs` that follows them appears after this list):
- Configuration settings should be read from environment variables, e.g. `var endpoint = Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT") ?? throw new InvalidOperationException("AZURE_OPENAI_ENDPOINT is not set.");`.
- Environment variable names should use the UPPER_SNAKE_CASE naming convention.
- Secrets should not be hardcoded in the code or committed to the repository.
- The code should be well-documented with comments explaining the purpose of each step.
- The code should be simple and to the point, avoiding unnecessary complexity.
- Prefer inline literals over constants for values that are not reused. For example, use `new ChatClientAgent(chatClient, instructions: "You are a helpful assistant.")` instead of defining a constant for "instructions".
- Ensure that all private classes are sealed
- Use the Async suffix on the name of all async methods that return a Task or ValueTask.
- Prefer defining variables using types rather than var, to help users understand the types involved.
- Follow the patterns in the samples in the same directories where new samples are being added.
- The structure of the sample should be as follows:
- The top of the Program.cs should have a copyright notice: `// Copyright (c) Microsoft. All rights reserved.`
- Then add a comment describing what the sample is demonstrating.
- Then add the necessary using statements.
- Then add the main code logic.
- Finally, add any helper methods or classes at the bottom of the file.
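As a rough sketch only, a `Program.cs` following this structure might look like the example below. The `Microsoft.Agents.AI` and `Microsoft.Extensions.AI` namespaces, the `RunAsync` call, and the `CreateChatClient` helper are assumptions for illustration; the chat-client construction is deliberately left unimplemented, so follow the existing samples for the real wiring.
```csharp
// Copyright (c) Microsoft. All rights reserved.

// This sample demonstrates creating a chat agent and asking it a single question.

using System;
using Microsoft.Agents.AI;      // assumed namespace for ChatClientAgent
using Microsoft.Extensions.AI;  // assumed namespace for IChatClient

// Read configuration from environment variables; never hardcode secrets.
string endpoint = Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT")
    ?? throw new InvalidOperationException("AZURE_OPENAI_ENDPOINT is not set.");

// Main logic: prefer explicit types over var, and inline literals for values that are not reused.
IChatClient chatClient = CreateChatClient(endpoint);
ChatClientAgent agent = new(chatClient, instructions: "You are a helpful assistant.");
Console.WriteLine(await agent.RunAsync("What is the capital of France?")); // RunAsync is assumed; follow the existing samples.

// Helper methods and classes go at the bottom of the file.
static IChatClient CreateChatClient(string endpoint) =>
    // Construct the provider-specific chat client for the endpoint here (omitted in this sketch).
    throw new NotImplementedException();
```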
### C# Unit Test Guidelines
Unit tests are located in the `dotnet/tests` directory in projects with a `.UnitTests.csproj` suffix.
Unit tests should follow these guidelines (illustrated by the sketch after this list):
- Use `this.` for accessing class members
- Add Arrange, Act and Assert comments for each test
- Ensure that all private classes that are not subclassed are sealed
- Use the Async suffix on the name of all async methods
- Use the Moq library for mocking objects where possible
- Validate that each test actually exercises the target behavior; for example, we should not have tests that create a mock, call the mock, and then verify that the mock was called, without the target code being involved. We also shouldn't have tests that test language features, i.e. something the compiler would catch anyway.
- Avoid adding excessive comments to tests. Instead, favour clear, easy-to-understand code.
- Follow the patterns in the unit tests in the same project or classes to which new tests are being added
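A sketch of a unit test that follows these guidelines is shown below; `IWidgetClient` and `WidgetProcessor` are hypothetical types defined inline only so the example is self-contained.
```csharp
// Copyright (c) Microsoft. All rights reserved.

using System.Threading.Tasks;
using Moq;
using Xunit;

// Hypothetical production types, defined here only so the sketch compiles.
public interface IWidgetClient
{
    Task<int> GetValueAsync();
}

public sealed class WidgetProcessor(IWidgetClient client)
{
    public async Task<int> ProcessAsync() => await client.GetValueAsync() + 1;
}

public sealed class WidgetProcessorTests
{
    [Fact]
    public async Task ProcessAsyncAddsOneToClientValueAsync()
    {
        // Arrange
        Mock<IWidgetClient> clientMock = new();
        clientMock.Setup(c => c.GetValueAsync()).ReturnsAsync(41);
        WidgetProcessor processor = new(clientMock.Object);

        // Act
        int result = await processor.ProcessAsync();

        // Assert: the mocked value is transformed by WidgetProcessor, so the test exercises the target code rather than the mock alone.
        Assert.Equal(42, result);
    }
}
```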

.github/dependabot.yml

@@ -0,0 +1,52 @@
# To get started with Dependabot version updates, you'll need to specify which
# package ecosystems to update and where the package manifests are located.
# Please see the documentation for all configuration options:
# https://docs.github.com/github/administering-a-repository/configuration-options-for-dependency-updates
version: 2
updates:
# Maintain dependencies for nuget
- package-ecosystem: "nuget"
directory: "dotnet/"
schedule:
interval: "cron"
cronjob: "0 8 * * 4,0" # Every Thursday(4) and Sunday(0) at 8:00 UTC
ignore:
# For all System.* and Microsoft.Extensions/Bcl.* packages, ignore all major version updates
- dependency-name: "System.*"
update-types: ["version-update:semver-major"]
- dependency-name: "Microsoft.Extensions.*"
update-types: ["version-update:semver-major"]
- dependency-name: "Microsoft.Bcl.*"
update-types: ["version-update:semver-major"]
- dependency-name: "Moq"
labels:
- ".NET"
- "dependencies"
# Maintain dependencies for python
- package-ecosystem: "pip"
directory: "python/"
schedule:
interval: "weekly"
day: "monday"
labels:
- "python"
- "dependencies"
- package-ecosystem: "uv"
directory: "python/"
schedule:
interval: "weekly"
day: "monday"
labels:
- "python"
- "dependencies"
# Maintain dependencies for github-actions
- package-ecosystem: "github-actions"
# Workflow files stored in the
# default location of `.github/workflows`
directory: "/"
schedule:
interval: "weekly"
day: "sunday"


@@ -0,0 +1,17 @@
---
applyTo: "dotnet/src/Microsoft.Agents.AI.DurableTask/**,dotnet/src/Microsoft.Agents.AI.Hosting.AzureFunctions/**"
---
# Durable Task area code instructions
The following guidelines apply to pull requests that modify files under
`dotnet/src/Microsoft.Agents.AI.DurableTask/**` or
`dotnet/src/Microsoft.Agents.AI.Hosting.AzureFunctions/**`:
## CHANGELOG.md
- Each pull request that modifies code should add just one bulleted entry to the `CHANGELOG.md` file containing a change title (usually the PR title) and a link to the PR itself.
- New PRs should be added to the top of the `CHANGELOG.md` file under a "## [Unreleased]" heading.
- If the PR is the first since the last release, the existing "## [Unreleased]" heading should be renamed to a "## v[X.Y.Z]" heading for that release, and a new "## [Unreleased]" heading should be added above it to hold the PRs merged since that release.
- The style of new `CHANGELOG.md` entries should match the style of the other entries in the file.
- If the PR introduces a breaking change, the changelog entry should be prefixed with "[BREAKING]" (see the illustrative fragment after this list).
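For example, a hypothetical `CHANGELOG.md` fragment in this style (the titles, version, and PR numbers below are placeholders, not real entries):
```markdown
## [Unreleased]

- [BREAKING] Rename the durable agent client options ([#0000](https://github.com/microsoft/agent-framework/pull/0000))
- Add retry support to the Azure Functions agent host ([#0000](https://github.com/microsoft/agent-framework/pull/0000))

## v0.0.0

- ...
```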

.github/labeler.yml

@@ -0,0 +1,34 @@
# Add 'python' label to any change within the 'python' directory
python:
- changed-files:
- any-glob-to-any-file:
- python/**
# Add '.NET' label to any change within the 'dotnet' directory.
.NET:
- changed-files:
- any-glob-to-any-file:
- dotnet/**
# Add 'documentation' label to any change within the 'docs' directory, or any '.md' files
documentation:
- changed-files:
- any-glob-to-any-file:
- docs/**
- '**/*.md'
# Add 'workflows' label to any change within the dotnet or python workflows src or samples
workflows:
- changed-files:
- any-glob-to-any-file:
- dotnet/src/Microsoft.Agents.AI.Workflows/**
- dotnet/src/Microsoft.Agents.AI.Workflows.Declarative/**
- dotnet/samples/GettingStarted/Workflow/**
- python/packages/main/agent_framework/_workflow/**
- python/samples/getting_started/workflow/**
# Add 'lab' label to any change within the 'python/packages/lab' directory
lab:
- changed-files:
- any-glob-to-any-file:
- python/packages/lab/**

.github/pull_request_template.md

@@ -0,0 +1,23 @@
### Motivation and Context
<!-- Thank you for your contribution to the Agent Framework repo!
Please help reviewers and future users by providing the following information:
1. Why is this change required?
2. What problem does it solve?
3. What scenario does it contribute to?
4. If it fixes an open issue, please link to the issue here.
-->
### Description
<!-- Describe your changes, the overall approach, the underlying design.
These notes will help reviewers understand how your code works. Thanks! -->
### Contribution Checklist
<!-- Before submitting this PR, please make sure: -->
- [ ] The code builds clean without any errors or warnings
- [ ] The PR follows the [Contribution Guidelines](https://github.com/microsoft/agent-framework/blob/main/CONTRIBUTING.md)
- [ ] All unit tests pass, and I have added new tests where possible
- [ ] **Is this a breaking change?** If yes, add "[BREAKING]" prefix to the title of the PR.

File diff suppressed because it is too large.

.github/workflows/codeql-analysis.yml

@@ -0,0 +1,69 @@
# CodeQL is the code analysis engine developed by GitHub to automate security checks.
# The results are shown as code scanning alerts in GitHub. For more details, visit:
# https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/about-code-scanning-with-codeql
name: "CodeQL"
on:
workflow_dispatch:
push:
# TODO: Add "feature*" back in again, once we determine the cause of the ongoing CodeQL failures.
branches: ["main", "experimental*", "*-development"]
schedule:
- cron: "17 11 * * 2"
jobs:
analyze:
name: Analyze
runs-on: ubuntu-latest
permissions:
actions: read
contents: read
security-events: write
strategy:
fail-fast: false
matrix:
language: ["csharp", "python"]
# CodeQL supports [ 'cpp', 'csharp', 'go', 'java', 'javascript', 'python', 'ruby' ]
# Use only 'java' to analyze code written in Java, Kotlin or both
# Use only 'javascript' to analyze code written in JavaScript, TypeScript or both
# Learn more about CodeQL language support at https://aka.ms/codeql-docs/language-support
steps:
- name: Checkout repository
uses: actions/checkout@v6
with:
persist-credentials: false
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v4
with:
languages: ${{ matrix.language }}
# If you wish to specify custom queries, you can do so here or in a config file.
# By default, queries listed here will override any specified in a config file.
# Prefix the list here with "+" to use these queries and those in the config file.
# Details on CodeQL's query packs refer to : https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/configuring-code-scanning#using-queries-in-ql-packs
# queries: security-extended,security-and-quality
# Autobuild attempts to build any compiled languages (C/C++, C#, Go, or Java).
# If this step fails, then you should remove it and run the build manually (see below)
- name: Autobuild
uses: github/codeql-action/autobuild@v4
# Command-line programs to run using the OS shell.
# 📚 See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsrun
# If the Autobuild fails above, remove it and uncomment the following three lines.
# Then modify them (or add more) to build your code; if your project needs custom build steps, refer to the example below for guidance.
# - run: |
# echo "Run, Build Application using script"
# ./location_of_script_within_repo/buildscript.sh
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v4
with:
category: "/language:${{matrix.language}}"


@@ -0,0 +1,285 @@
#
# This workflow will build all .slnx files in the dotnet folder, and run all unit tests and integration tests using dotnet docker containers,
# each targeting a single version of the dotnet SDK.
#
name: dotnet-build-and-test
on:
workflow_dispatch:
pull_request:
branches: ["main", "feature*"]
merge_group:
branches: ["main", "feature*"]
push:
branches: ["main", "feature*"]
schedule:
- cron: "0 0 * * *" # Run at midnight UTC daily
env:
COVERAGE_THRESHOLD: 80
COVERAGE_FRAMEWORK: net10.0 # framework target for which we run/report code coverage
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
permissions:
contents: read
id-token: "write"
jobs:
paths-filter:
runs-on: ubuntu-latest
permissions:
contents: read
pull-requests: read
outputs:
dotnetChanges: ${{ steps.filter.outputs.dotnet }}
cosmosDbChanges: ${{ steps.filter.outputs.cosmosdb }}
steps:
- uses: actions/checkout@v6
- uses: dorny/paths-filter@v3
id: filter
with:
filters: |
dotnet:
- 'dotnet/**'
cosmosdb:
- 'dotnet/src/Microsoft.Agents.AI.CosmosNoSql/**'
# run only if 'dotnet' files were changed
- name: dotnet tests
if: steps.filter.outputs.dotnet == 'true'
run: echo "Dotnet file"
- name: dotnet CosmosDB tests
if: steps.filter.outputs.cosmosdb == 'true'
run: echo "Dotnet CosmosDB changes"
# run only if no 'dotnet' files were changed
- name: not dotnet tests
if: steps.filter.outputs.dotnet != 'true'
run: echo "NOT dotnet file"
dotnet-build-and-test:
needs: paths-filter
if: needs.paths-filter.outputs.dotnetChanges == 'true'
strategy:
fail-fast: false
matrix:
include:
- { targetFramework: "net10.0", os: "ubuntu-latest", configuration: Release, integration-tests: true, environment: "integration" }
- { targetFramework: "net9.0", os: "windows-latest", configuration: Debug }
- { targetFramework: "net8.0", os: "ubuntu-latest", configuration: Release }
- { targetFramework: "net472", os: "windows-latest", configuration: Release, integration-tests: true, environment: "integration" }
runs-on: ${{ matrix.os }}
environment: ${{ matrix.environment }}
steps:
- uses: actions/checkout@v6
with:
persist-credentials: false
sparse-checkout: |
.
.github
dotnet
python
workflow-samples
# Start the Cosmos DB Emulator for all integration tests, and for unit tests only when Cosmos DB changes were made
- name: Start Azure Cosmos DB Emulator
if: ${{ runner.os == 'Windows' && (needs.paths-filter.outputs.cosmosDbChanges == 'true' || (github.event_name != 'pull_request' && matrix.integration-tests)) }}
shell: pwsh
run: |
Write-Host "Launching Azure Cosmos DB Emulator"
Import-Module "$env:ProgramFiles\Azure Cosmos DB Emulator\PSModules\Microsoft.Azure.CosmosDB.Emulator"
Start-CosmosDbEmulator -NoUI -Key "C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw=="
echo "COSMOS_EMULATOR_AVAILABLE=true" >> $env:GITHUB_ENV
- name: Setup dotnet
uses: actions/setup-dotnet@v5.1.0
with:
global-json-file: ${{ github.workspace }}/dotnet/global.json
- name: Build dotnet solutions
shell: bash
run: |
export SOLUTIONS=$(find ./dotnet/ -type f -name "*.slnx" | tr '\n' ' ')
for solution in $SOLUTIONS; do
dotnet build $solution -c ${{ matrix.configuration }} --warnaserror
done
- name: Package install check
shell: bash
# All frameworks are only built for the release configuration, so we only run this step for the release configuration
# and dotnet new doesn't support net472
if: matrix.configuration == 'Release' && matrix.targetFramework != 'net472'
run: |
TEMP_DIR=$(mktemp -d)
export SOLUTIONS=$(find ./dotnet/ -type f -name "*.slnx" | tr '\n' ' ')
for solution in $SOLUTIONS; do
dotnet pack $solution /property:TargetFrameworks=${{ matrix.targetFramework }} -c ${{ matrix.configuration }} --no-build --no-restore --output "$TEMP_DIR/artifacts"
done
pushd "$TEMP_DIR"
# Create a new console app to test the package installation
dotnet new console -f ${{ matrix.targetFramework }} --name packcheck --output consoleapp
# Create minimal nuget.config and use only dotnet nuget commands
echo '<?xml version="1.0" encoding="utf-8"?><configuration><packageSources><clear /></packageSources></configuration>' > consoleapp/nuget.config
# Add sources with local first using dotnet nuget commands
dotnet nuget add source ../artifacts --name local --configfile consoleapp/nuget.config
dotnet nuget add source https://api.nuget.org/v3/index.json --name nuget.org --configfile consoleapp/nuget.config
# Change to project directory to ensure local nuget.config is used
pushd consoleapp
dotnet add packcheck.csproj package Microsoft.Agents.AI --prerelease
dotnet build -f ${{ matrix.targetFramework }} -c ${{ matrix.configuration }} packcheck.csproj
# Clean up
popd
popd
rm -rf "$TEMP_DIR"
- name: Run Unit Tests
shell: bash
run: |
export UT_PROJECTS=$(find ./dotnet -type f -name "*.UnitTests.csproj" | tr '\n' ' ')
for project in $UT_PROJECTS; do
# Query the project's target frameworks using MSBuild with the current configuration
target_frameworks=$(dotnet msbuild $project -getProperty:TargetFrameworks -p:Configuration=${{ matrix.configuration }} -nologo 2>/dev/null | tr -d '\r')
# Check if the project supports the target framework
if [[ "$target_frameworks" == *"${{ matrix.targetFramework }}"* ]]; then
if [[ "${{ matrix.targetFramework }}" == "${{ env.COVERAGE_FRAMEWORK }}" ]]; then
dotnet test -f ${{ matrix.targetFramework }} -c ${{ matrix.configuration }} $project --no-build -v Normal --logger trx --collect:"XPlat Code Coverage" --results-directory:"TestResults/Coverage/" -- DataCollectionRunSettings.DataCollectors.DataCollector.Configuration.ExcludeByAttribute=GeneratedCodeAttribute,CompilerGeneratedAttribute,ExcludeFromCodeCoverageAttribute
else
dotnet test -f ${{ matrix.targetFramework }} -c ${{ matrix.configuration }} $project --no-build -v Normal --logger trx
fi
else
echo "Skipping $project - does not support target framework ${{ matrix.targetFramework }} (supports: $target_frameworks)"
fi
done
env:
# Cosmos DB Emulator connection settings
COSMOSDB_ENDPOINT: https://localhost:8081
COSMOSDB_KEY: C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==
- name: Log event name and matrix integration-tests
shell: bash
run: echo "github.event_name:${{ github.event_name }} matrix.integration-tests:${{ matrix.integration-tests }} github.event.action:${{ github.event.action }} github.event.pull_request.merged:${{ github.event.pull_request.merged }}"
- name: Azure CLI Login
if: github.event_name != 'pull_request' && matrix.integration-tests
uses: azure/login@v2
with:
client-id: ${{ secrets.AZURE_CLIENT_ID }}
tenant-id: ${{ secrets.AZURE_TENANT_ID }}
subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
# This setup action is required for both Durable Task and Azure Functions integration tests.
# We only run it on Ubuntu since the Durable Task and Azure Functions features are not available
# on .NET Framework (net472) which is what we use the Windows runner for.
- name: Set up Durable Task and Azure Functions Integration Test Emulators
if: github.event_name != 'pull_request' && matrix.integration-tests && matrix.os == 'ubuntu-latest'
uses: ./.github/actions/azure-functions-integration-setup
id: azure-functions-setup
- name: Run Integration Tests
shell: bash
if: github.event_name != 'pull_request' && matrix.integration-tests
run: |
export INTEGRATION_TEST_PROJECTS=$(find ./dotnet -type f -name "*IntegrationTests.csproj" | tr '\n' ' ')
for project in $INTEGRATION_TEST_PROJECTS; do
# Query the project's target frameworks using MSBuild with the current configuration
target_frameworks=$(dotnet msbuild $project -getProperty:TargetFrameworks -p:Configuration=${{ matrix.configuration }} -nologo 2>/dev/null | tr -d '\r')
# Check if the project supports the target framework
if [[ "$target_frameworks" == *"${{ matrix.targetFramework }}"* ]]; then
dotnet test -f ${{ matrix.targetFramework }} -c ${{ matrix.configuration }} $project --no-build -v Normal --logger trx
else
echo "Skipping $project - does not support target framework ${{ matrix.targetFramework }} (supports: $target_frameworks)"
fi
done
env:
# Cosmos DB Emulator connection settings
COSMOSDB_ENDPOINT: https://localhost:8081
COSMOSDB_KEY: C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==
# OpenAI Models
OpenAI__ApiKey: ${{ secrets.OPENAI__APIKEY }}
OpenAI__ChatModelId: ${{ vars.OPENAI__CHATMODELID }}
OpenAI__ChatReasoningModelId: ${{ vars.OPENAI__CHATREASONINGMODELID }}
# Azure OpenAI Models
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME: ${{ vars.AZUREOPENAI__CHATDEPLOYMENTNAME }}
AZURE_OPENAI_ENDPOINT: ${{ vars.AZUREOPENAI__ENDPOINT }}
# Azure AI Foundry
AzureAI__Endpoint: ${{ secrets.AZUREAI__ENDPOINT }}
AzureAI__DeploymentName: ${{ vars.AZUREAI__DEPLOYMENTNAME }}
AzureAI__BingConnectionId: ${{ vars.AZUREAI__BINGCONECTIONID }}
FOUNDRY_PROJECT_ENDPOINT: ${{ vars.FOUNDRY_PROJECT_ENDPOINT }}
FOUNDRY_MEDIA_DEPLOYMENT_NAME: ${{ vars.FOUNDRY_MEDIA_DEPLOYMENT_NAME }}
FOUNDRY_MODEL_DEPLOYMENT_NAME: ${{ vars.FOUNDRY_MODEL_DEPLOYMENT_NAME }}
FOUNDRY_CONNECTION_GROUNDING_TOOL: ${{ vars.FOUNDRY_CONNECTION_GROUNDING_TOOL }}
# Generate test reports and check coverage
- name: Generate test reports
if: matrix.targetFramework == env.COVERAGE_FRAMEWORK
uses: danielpalme/ReportGenerator-GitHub-Action@5.5.1
with:
reports: "./TestResults/Coverage/**/coverage.cobertura.xml"
targetdir: "./TestResults/Reports"
reporttypes: "HtmlInline;JsonSummary"
- name: Upload coverage report artifact
if: matrix.targetFramework == env.COVERAGE_FRAMEWORK
uses: actions/upload-artifact@v6
with:
name: CoverageReport-${{ matrix.os }}-${{ matrix.targetFramework }}-${{ matrix.configuration }} # Artifact name
path: ./TestResults/Reports # Directory containing files to upload
- name: Check coverage
if: matrix.targetFramework == env.COVERAGE_FRAMEWORK
shell: pwsh
run: .github/workflows/dotnet-check-coverage.ps1 -JsonReportPath "TestResults/Reports/Summary.json" -CoverageThreshold $env:COVERAGE_THRESHOLD
# This final job is required to satisfy the merge queue. It must only run (or succeed) if no tests failed
dotnet-build-and-test-check:
if: always()
runs-on: ubuntu-latest
needs: [dotnet-build-and-test]
steps:
- name: Get Date
shell: bash
run: |
echo "date=$(date +'%m/%d/%Y %H:%M:%S')" >> "$GITHUB_ENV"
- name: Run Type is Daily
if: ${{ github.event_name == 'schedule' }}
shell: bash
run: |
echo "run_type=Daily" >> "$GITHUB_ENV"
- name: Run Type is Manual
if: ${{ github.event_name == 'workflow_dispatch' }}
shell: bash
run: |
echo "run_type=Manual" >> "$GITHUB_ENV"
- name: Run Type is ${{ github.event_name }}
if: ${{ github.event_name != 'schedule' && github.event_name != 'workflow_dispatch'}}
shell: bash
run: |
echo "run_type=${{ github.event_name }}" >> "$GITHUB_ENV"
- name: Fail workflow if tests failed
id: check_tests_failed
if: contains(join(needs.*.result, ','), 'failure')
uses: actions/github-script@v8
with:
script: core.setFailed('Integration Tests Failed!')
- name: Fail workflow if tests cancelled
id: check_tests_cancelled
if: contains(join(needs.*.result, ','), 'cancelled')
uses: actions/github-script@v8
with:
script: core.setFailed('Integration Tests Cancelled!')


@@ -0,0 +1,82 @@
param (
[string]$JsonReportPath,
[double]$CoverageThreshold
)
$jsonContent = Get-Content $JsonReportPath -Raw | ConvertFrom-Json
$coverageBelowThreshold = $false
$nonExperimentalAssemblies = [System.Collections.Generic.HashSet[string]]::new()
$assembliesCollection = @(
'Microsoft.Agents.AI.Abstractions'
'Microsoft.Agents.AI'
)
foreach ($assembly in $assembliesCollection) {
$nonExperimentalAssemblies.Add($assembly)
}
function Get-FormattedValue {
param (
[float]$Coverage,
[bool]$UseIcon = $false
)
$formattedNumber = "{0:N1}" -f $Coverage
$icon = if (-not $UseIcon) { "" } elseif ($Coverage -ge $CoverageThreshold) { '✅' } else { '❌' }
return "$formattedNumber% $icon"
}
$totallines = $jsonContent.summary.totallines
$totalbranches = $jsonContent.summary.totalbranches
$lineCoverage = $jsonContent.summary.linecoverage
$branchCoverage = $jsonContent.summary.branchcoverage
$totalTableData = [PSCustomObject]@{
'Metric' = 'Total Coverage'
'Total Lines' = $totallines
'Total Branches' = $totalbranches
'Line Coverage' = Get-FormattedValue -Coverage $lineCoverage
'Branch Coverage' = Get-FormattedValue -Coverage $branchCoverage
}
$totalTableData | Format-Table -AutoSize
$assemblyTableData = @()
foreach ($assembly in $jsonContent.coverage.assemblies) {
$assemblyName = $assembly.name
$assemblyTotallines = $assembly.totallines
$assemblyTotalbranches = $assembly.totalbranches
$assemblyLineCoverage = $assembly.coverage
$assemblyBranchCoverage = $assembly.branchcoverage
$isNonExperimentalAssembly = $nonExperimentalAssemblies -contains $assemblyName
$lineCoverageFailed = $assemblyLineCoverage -lt $CoverageThreshold -and $assemblyTotallines -gt 0
$branchCoverageFailed = $assemblyBranchCoverage -lt $CoverageThreshold -and $assemblyTotalbranches -gt 0
if ($isNonExperimentalAssembly -and ($lineCoverageFailed -or $branchCoverageFailed)) {
$coverageBelowThreshold = $true
}
$assemblyTableData += [PSCustomObject]@{
'Assembly Name' = $assemblyName
'Total Lines' = $assemblyTotallines
'Total Branches' = $assemblyTotalbranches
'Line Coverage' = Get-FormattedValue -Coverage $assemblyLineCoverage -UseIcon $isNonExperimentalAssembly
'Branch Coverage' = Get-FormattedValue -Coverage $assemblyBranchCoverage -UseIcon $isNonExperimentalAssembly
}
}
$sortedTable = $assemblyTableData | Sort-Object {
$nonExperimentalAssemblies -contains $_.'Assembly Name'
} -Descending
$sortedTable | Format-Table -AutoSize
if ($coverageBelowThreshold) {
Write-Host "Code coverage is lower than defined threshold: $CoverageThreshold. Stopping the task."
exit 1
}

.github/workflows/dotnet-format.yml

@@ -0,0 +1,96 @@
#
# This workflow runs the dotnet formatter on all C# code.
#
name: dotnet-format
on:
workflow_dispatch:
pull_request:
branches: ["main", "feature*"]
paths:
- dotnet/**
- '.github/workflows/dotnet-format.yml'
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
check-format:
strategy:
fail-fast: false
matrix:
include:
- { dotnet: "10.0", configuration: Release, os: ubuntu-latest }
runs-on: ${{ matrix.os }}
env:
NUGET_CERT_REVOCATION_MODE: offline
steps:
- name: Check out code
uses: actions/checkout@v6
with:
fetch-depth: 0
persist-credentials: false
sparse-checkout: |
.
.github
dotnet
- name: Get changed files
id: changed-files
if: github.event_name == 'pull_request'
uses: jitterbit/get-changed-files@v1
continue-on-error: true
- name: No C# files changed
id: no-csharp
if: github.event_name == 'pull_request' && steps.changed-files.outputs.added_modified == ''
run: echo "No C# files changed"
# This step will loop over the changed files and find the nearest .csproj file for each one, then store the unique csproj files in a variable
- name: Find csproj files
id: find-csproj
if: github.event_name != 'pull_request' || steps.changed-files.outputs.added_modified != '' || steps.changed-files.outcome == 'failure'
run: |
csproj_files=()
exclude_files=("Experimental.Orchestration.Flow.csproj" "Experimental.Orchestration.Flow.UnitTests.csproj" "Experimental.Orchestration.Flow.IntegrationTests.csproj")
if [[ ${{ steps.changed-files.outcome }} == 'success' ]]; then
for file in ${{ steps.changed-files.outputs.added_modified }}; do
echo "$file was changed"
dir="./$file"
while [[ $dir != "." && $dir != "/" && $dir != $GITHUB_WORKSPACE ]]; do
if find "$dir" -maxdepth 1 -name "*.csproj" -print -quit | grep -q .; then
csproj_path="$(find "$dir" -maxdepth 1 -name "*.csproj" -print -quit)"
if [[ ! "${exclude_files[@]}" =~ "${csproj_path##*/}" ]]; then
csproj_files+=("$csproj_path")
fi
break
fi
dir=$(echo ${dir%/*})
done
done
else
# if the changed-files step failed, run dotnet on the whole slnx instead of specific projects
csproj_files=$(find ./ -type f -name "*.slnx" | tr '\n' ' ');
fi
csproj_files=($(printf "%s\n" "${csproj_files[@]}" | sort -u))
echo "Found ${#csproj_files[@]} unique csproj/slnx files: ${csproj_files[*]}"
echo "csproj_files=${csproj_files[*]}" >> $GITHUB_OUTPUT
- name: Pull container dotnet/sdk:${{ matrix.dotnet }}
if: steps.find-csproj.outputs.csproj_files != ''
run: docker pull mcr.microsoft.com/dotnet/sdk:${{ matrix.dotnet }}
# This step will run dotnet format on each of the unique csproj files and fail if any changes are made
# exclude-diagnostics should be removed after fixes for IL2026 and IL3050 are out: https://github.com/dotnet/sdk/issues/51136
- name: Run dotnet format
if: steps.find-csproj.outputs.csproj_files != ''
run: |
for csproj in ${{ steps.find-csproj.outputs.csproj_files }}; do
echo "Running dotnet format on $csproj"
docker run --rm -v $(pwd):/app -w /app mcr.microsoft.com/dotnet/sdk:${{ matrix.dotnet }} /bin/sh -c "dotnet format $csproj --verify-no-changes --verbosity diagnostic --exclude-diagnostics IL2026 IL3050"
done

.github/workflows/label-issues.yml

@@ -0,0 +1,112 @@
name: Label issues
on:
issues:
types:
- reopened
- opened
jobs:
label_issues:
name: "Issue: add labels"
if: ${{ github.event.action == 'opened' || github.event.action == 'reopened' }}
runs-on: ubuntu-latest
permissions:
issues: write
steps:
- uses: actions/github-script@v8
with:
github-token: ${{ secrets.GH_ACTIONS_PR_WRITE }}
script: |
// Get the issue body and title
const body = context.payload.issue.body
let title = context.payload.issue.title
// Define the labels array
let labels = []
// Check if the issue author is in the agentframework-developers team
let isTeamMember = false
try {
const teamMembership = await github.rest.teams.getMembershipForUserInOrg({
org: context.repo.owner,
team_slug: process.env.TEAM_NAME,
username: context.payload.issue.user.login
})
console.log("Team Membership Data:", teamMembership);
isTeamMember = teamMembership.data.state === 'active'
} catch (error) {
// User is not in the team or team doesn't exist
console.error("Error fetching team membership:", error);
isTeamMember = false
}
// Only add triage label if the author is not in the team
if (!isTeamMember) {
labels.push("triage")
}
// Helper function to extract field value from issue form body
// Issue forms format fields as: ### Field Name\n\nValue
function getFormFieldValue(body, fieldName) {
if (!body) return null
const regex = new RegExp(`###\\s*${fieldName}\\s*\\n\\n([^\\n#]+)`, 'i')
const match = body.match(regex)
return match ? match[1].trim() : null
}
// Check for language from issue form dropdown first
const languageField = getFormFieldValue(body, 'Language')
let languageLabelAdded = false
if (languageField) {
if (languageField === 'Python') {
labels.push("python")
languageLabelAdded = true
} else if (languageField === '.NET') {
labels.push(".NET")
languageLabelAdded = true
}
// 'None / Not Applicable' - don't add any language label
}
// Fallback: Check if the body or the title contains the word 'python' (case-insensitive)
// Only if language wasn't already determined from the form field
if (!languageLabelAdded) {
if ((body != null && body.match(/python/i)) || (title != null && title.match(/python/i))) {
// Add the 'python' label to the array
labels.push("python")
}
// Check if the body or the title contains the words 'dotnet', '.net', 'c#' or 'csharp' (case-insensitive)
if ((body != null && body.match(/\.net/i)) || (title != null && title.match(/\.net/i)) ||
(body != null && body.match(/dotnet/i)) || (title != null && title.match(/dotnet/i)) ||
(body != null && body.match(/C#/i)) || (title != null && title.match(/C#/i)) ||
(body != null && body.match(/csharp/i)) || (title != null && title.match(/csharp/i))) {
// Add the '.NET' label to the array
labels.push(".NET")
}
}
// Check for issue type from issue form dropdown
const issueTypeField = getFormFieldValue(body, 'Type of Issue')
if (issueTypeField) {
if (issueTypeField === 'Bug') {
labels.push("bug")
} else if (issueTypeField === 'Feature Request') {
labels.push("enhancement")
} else if (issueTypeField === 'Question') {
labels.push("question")
}
}
// Add the labels to the issue (only if there are labels to add)
if (labels.length > 0) {
github.rest.issues.addLabels({
issue_number: context.issue.number,
owner: context.repo.owner,
repo: context.repo.repo,
labels: labels
});
}
env:
TEAM_NAME: ${{ secrets.DEVELOPER_TEAM }}

.github/workflows/label-pr.yml

@@ -0,0 +1,21 @@
# This workflow will triage pull requests and apply a label based on the
# paths that are modified in the pull request.
#
# To use this workflow, you will need to set up a .github/labeler.yml
# file with configuration. For more information, see:
# https://github.com/actions/labeler
name: Label pull request
on: [pull_request_target]
jobs:
add_label:
runs-on: ubuntu-latest
permissions:
contents: read
pull-requests: write
steps:
- uses: actions/labeler@v6
with:
repo-token: "${{ secrets.GH_ACTIONS_PR_WRITE }}"


@@ -0,0 +1,72 @@
name: Label title prefix
on:
issues:
types: [labeled]
pull_request_target:
types: [labeled]
jobs:
add_title_prefix:
name: "Issue/PR: add title prefix"
continue-on-error: true
runs-on: ubuntu-latest
permissions:
issues: write
pull-requests: write
steps:
- uses: actions/github-script@v8
name: "Issue/PR: update title"
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
script: |
let prefixLabels = {
"python": "Python",
".NET": ".NET"
};
function addTitlePrefix(title, prefix)
{
// Update the title based on the label and prefix
// Check if the title starts with the prefix (case-sensitive)
if (!title.startsWith(prefix + ": ")) {
// If not, check if the first word is the label (case-insensitive)
if (title.match(new RegExp(`^${prefix}`, 'i'))) {
// If yes, replace it with the prefix (case-sensitive)
title = title.replace(new RegExp(`^${prefix}`, 'i'), prefix);
} else {
// If not, prepend the prefix to the title
title = prefix + ": " + title;
}
}
return title;
}
const labelAdded = context.payload.label.name
// Check if the issue or PR has the label
if (labelAdded in prefixLabels) {
let prefix = prefixLabels[labelAdded];
switch(context.eventName) {
case 'issues':
github.rest.issues.update({
issue_number: context.issue.number,
owner: context.repo.owner,
repo: context.repo.repo,
title: addTitlePrefix(context.payload.issue.title, prefix)
});
break
case 'pull_request_target':
github.rest.pulls.update({
pull_number: context.issue.number,
owner: context.repo.owner,
repo: context.repo.repo,
title: addTitlePrefix(context.payload.pull_request.title, prefix)
});
break
default:
core.setFailed('Unrecognized eventName: ' + context.eventName);
}
}


@@ -0,0 +1,33 @@
name: Check .md links
on:
workflow_dispatch:
pull_request:
branches: ["main"]
paths:
- '**.md'
- '.github/workflows/markdown-link-check.yml'
- '.github/.linkspector.yml'
schedule:
- cron: "0 0 * * *" # Run at midnight UTC daily
permissions:
contents: read
jobs:
markdown-link-check:
runs-on: ubuntu-22.04
# check out the latest version of the code
steps:
- uses: actions/checkout@v6
with:
persist-credentials: false
# Checks the status of hyperlinks in all files
- name: Run linkspector
uses: umbrelladocs/action-linkspector@v1
with:
reporter: local
filter_mode: nofilter
fail_on_error: true
config_file: ".github/.linkspector.yml"

.github/workflows/merge-gatekeeper.yml

@@ -0,0 +1,32 @@
name: Merge Gatekeeper
on:
pull_request:
branches: [ "main", "feature*" ]
merge_group:
branches: ["main"]
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
merge-gatekeeper:
runs-on: ubuntu-latest
# Restrict permissions of the GITHUB_TOKEN.
# Docs: https://docs.github.com/en/actions/using-jobs/assigning-permissions-to-jobs
permissions:
checks: read
statuses: read
steps:
- name: Run Merge Gatekeeper
# NOTE: v1 is updated to reflect the latest v1.x.y. Please use any tag/branch that suits your needs:
# https://github.com/upsidr/merge-gatekeeper/tags
# https://github.com/upsidr/merge-gatekeeper/branches
uses: upsidr/merge-gatekeeper@v1
if: github.event_name == 'pull_request'
with:
token: ${{ secrets.GITHUB_TOKEN }}
timeout: 3600
interval: 30
ignored: CodeQL,CodeQL analysis (csharp)


@@ -0,0 +1,53 @@
name: Python - Code Quality
on:
merge_group:
workflow_dispatch:
pull_request:
branches: ["main"]
paths:
- "python/**"
env:
# Configure a constant location for the uv cache
UV_CACHE_DIR: /tmp/.uv-cache
jobs:
pre-commit:
name: Checks
if: "!cancelled()"
strategy:
fail-fast: false
matrix:
python-version: ["3.10", "3.14"]
runs-on: ubuntu-latest
continue-on-error: true
defaults:
run:
working-directory: ./python
env:
UV_PYTHON: ${{ matrix.python-version }}
steps:
- uses: actions/checkout@v6
with:
fetch-depth: 0
- name: Set up python and install the project
id: python-setup
uses: ./.github/actions/python-setup
with:
python-version: ${{ matrix.python-version }}
os: ${{ runner.os }}
env:
# Configure a constant location for the uv cache
UV_CACHE_DIR: /tmp/.uv-cache
- uses: actions/cache@v5
with:
path: ~/.cache/pre-commit
key: pre-commit|${{ matrix.python-version }}|${{ hashFiles('python/.pre-commit-config.yaml') }}
- uses: pre-commit/action@v3.0.1
name: Run Pre-Commit Hooks
with:
extra_args: --config python/.pre-commit-config.yaml --all-files
- name: Run Mypy
env:
GITHUB_BASE_REF: ${{ github.event.pull_request.base.ref || github.base_ref || 'main' }}
run: uv run poe ci-mypy

.github/workflows/python-docs.yml

@@ -0,0 +1,39 @@
name: Python - Create Docs
on:
workflow_dispatch:
release:
types: [published]
permissions:
contents: write
id-token: write
env:
# Configure a constant location for the uv cache
UV_CACHE_DIR: /tmp/.uv-cache
jobs:
python-build-docs:
if: github.event_name == 'release' && startsWith(github.event.release.tag_name, 'python-')
name: Python Build Docs
runs-on: ubuntu-latest
environment: "integration"
env:
UV_PYTHON: "3.11"
defaults:
run:
working-directory: python
steps:
- uses: actions/checkout@v6
- name: Set up uv
uses: astral-sh/setup-uv@v7
with:
version-file: "python/pyproject.toml"
enable-cache: true
cache-suffix: ${{ runner.os }}-${{ env.UV_PYTHON }}
cache-dependency-glob: "**/uv.lock"
- name: Install dependencies
run: uv sync --all-packages --dev --docs
- name: Build the docs
run: uv run poe docs-full
# Upload docs to learn gh

.github/workflows/python-lab-tests.yml

@@ -0,0 +1,99 @@
name: Python - Lab Tests
on:
workflow_dispatch:
pull_request:
branches: ["main"]
paths:
- "python/packages/lab/**"
merge_group:
branches: ["main"]
schedule:
- cron: "0 0 * * *" # Run at midnight UTC daily
env:
# Configure a constant location for the uv cache
UV_CACHE_DIR: /tmp/.uv-cache
jobs:
paths-filter:
runs-on: ubuntu-latest
permissions:
contents: read
pull-requests: read
outputs:
pythonChanges: ${{ steps.filter.outputs.python}}
steps:
- uses: actions/checkout@v6
- uses: dorny/paths-filter@v3
id: filter
with:
filters: |
python:
- 'python/**'
# run only if 'python' files were changed
- name: python tests
if: steps.filter.outputs.python == 'true'
run: echo "Python file"
# run only if no 'python' files were changed
- name: not python tests
if: steps.filter.outputs.python != 'true'
run: echo "NOT python file"
python-lab-tests:
name: Python Lab Tests
needs: paths-filter
if: needs.paths-filter.outputs.pythonChanges == 'true'
runs-on: ${{ matrix.os }}
strategy:
fail-fast: true
matrix:
python-version: ["3.10", "3.11", "3.12", "3.13", "3.14"]
# TODO(ekzhu): re-enable macos-latest when this is fixed: https://github.com/actions/runner-images/issues/11881
os: [ubuntu-latest, windows-latest]
env:
UV_PYTHON: ${{ matrix.python-version }}
permissions:
contents: read
defaults:
run:
working-directory: python
steps:
- uses: actions/checkout@v6
- name: Set up python and install the project
id: python-setup
uses: ./.github/actions/python-setup
with:
python-version: ${{ matrix.python-version }}
os: ${{ runner.os }}
env:
# Configure a constant location for the uv cache
UV_CACHE_DIR: /tmp/.uv-cache
# Lab specific tests
- name: Run lab tests
run: cd packages/lab && uv run poe test
- name: Run lab lint
run: cd packages/lab && uv run poe lint
- name: Run lab format check
run: cd packages/lab && uv run poe fmt --check
- name: Run lab type checking
run: cd packages/lab && uv run poe pyright
- name: Run lab mypy
run: cd packages/lab && uv run poe mypy
# Surface failing tests
- name: Surface failing tests
if: always()
uses: pmeier/pytest-results-action@v0.7.2
with:
path: ./python/packages/lab/**.xml
summary: true
display-options: fEX
fail-on-empty: false
title: Lab Test Results

.github/workflows/python-merge-tests.yml

@@ -0,0 +1,198 @@
name: Python - Merge - Tests
on:
workflow_dispatch:
pull_request:
branches: ["main"]
merge_group:
branches: ["main"]
schedule:
- cron: "0 0 * * *" # Run at midnight UTC daily
permissions:
contents: write
id-token: write
env:
# Configure a constant location for the uv cache
UV_CACHE_DIR: /tmp/.uv-cache
RUN_INTEGRATION_TESTS: "true"
RUN_SAMPLES_TESTS: ${{ vars.RUN_SAMPLES_TESTS }}
jobs:
paths-filter:
runs-on: ubuntu-latest
permissions:
contents: read
pull-requests: read
outputs:
pythonChanges: ${{ steps.filter.outputs.python}}
steps:
- uses: actions/checkout@v6
- uses: dorny/paths-filter@v3
id: filter
with:
filters: |
python:
- 'python/**'
# run only if 'python' files were changed
- name: python tests
if: steps.filter.outputs.python == 'true'
run: echo "Python file"
# run only if no 'python' files were changed
- name: not python tests
if: steps.filter.outputs.python != 'true'
run: echo "NOT python file"
python-tests-core:
name: Python Tests - Core
needs: paths-filter
if: github.event_name != 'pull_request' && needs.paths-filter.outputs.pythonChanges == 'true'
runs-on: ${{ matrix.os }}
environment: ${{ matrix.environment }}
strategy:
fail-fast: true
matrix:
python-version: ["3.10"]
os: [ubuntu-latest]
environment: ["integration"]
env:
UV_PYTHON: ${{ matrix.python-version }}
OPENAI_CHAT_MODEL_ID: ${{ vars.OPENAI__CHATMODELID }}
OPENAI_RESPONSES_MODEL_ID: ${{ vars.OPENAI__RESPONSESMODELID }}
OPENAI_API_KEY: ${{ secrets.OPENAI__APIKEY }}
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
ANTHROPIC_CHAT_MODEL_ID: ${{ vars.ANTHROPIC_CHAT_MODEL_ID }}
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME: ${{ vars.AZUREOPENAI__CHATDEPLOYMENTNAME }}
AZURE_OPENAI_RESPONSES_DEPLOYMENT_NAME: ${{ vars.AZUREOPENAI__RESPONSESDEPLOYMENTNAME }}
AZURE_OPENAI_ENDPOINT: ${{ vars.AZUREOPENAI__ENDPOINT }}
LOCAL_MCP_URL: ${{ vars.LOCAL_MCP__URL }}
# For Azure Functions integration tests
FUNCTIONS_WORKER_RUNTIME: "python"
DURABLE_TASK_SCHEDULER_CONNECTION_STRING: "Endpoint=http://localhost:8080;TaskHub=default;Authentication=None"
AzureWebJobsStorage: "UseDevelopmentStorage=true"
defaults:
run:
working-directory: python
steps:
- uses: actions/checkout@v6
- name: Set up python and install the project
id: python-setup
uses: ./.github/actions/python-setup
with:
python-version: ${{ matrix.python-version }}
os: ${{ runner.os }}
env:
# Configure a constant location for the uv cache
UV_CACHE_DIR: /tmp/.uv-cache
- name: Azure CLI Login
if: github.event_name != 'pull_request'
uses: azure/login@v2
with:
client-id: ${{ secrets.AZURE_CLIENT_ID }}
tenant-id: ${{ secrets.AZURE_TENANT_ID }}
subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
- name: Set up Azure Functions Integration Test Emulators
uses: ./.github/actions/azure-functions-integration-setup
id: azure-functions-setup
- name: Test with pytest
timeout-minutes: 10
run: uv run poe all-tests -n logical --dist loadfile --dist worksteal --timeout 600 --retries 3 --retry-delay 10
working-directory: ./python
- name: Test core samples
timeout-minutes: 10
if: env.RUN_SAMPLES_TESTS == 'true'
run: uv run pytest tests/samples/ -m "openai" -m "azure"
working-directory: ./python
- name: Surface failing tests
if: always()
uses: pmeier/pytest-results-action@v0.7.2
with:
path: ./python/**.xml
summary: true
display-options: fEX
fail-on-empty: false
title: Test results
python-tests-azure-ai:
name: Python Tests - Azure AI
needs: paths-filter
if: github.event_name != 'pull_request' && needs.paths-filter.outputs.pythonChanges == 'true'
runs-on: ${{ matrix.os }}
environment: ${{ matrix.environment }}
strategy:
fail-fast: true
matrix:
python-version: ["3.10"]
os: [ubuntu-latest]
environment: ["integration"]
env:
UV_PYTHON: ${{ matrix.python-version }}
AZURE_AI_PROJECT_ENDPOINT: ${{ secrets.AZUREAI__ENDPOINT }}
AZURE_AI_MODEL_DEPLOYMENT_NAME: ${{ vars.AZUREAI__DEPLOYMENTNAME }}
LOCAL_MCP_URL: ${{ vars.LOCAL_MCP__URL }}
defaults:
run:
working-directory: python
steps:
- uses: actions/checkout@v6
- name: Set up python and install the project
id: python-setup
uses: ./.github/actions/python-setup
with:
python-version: ${{ matrix.python-version }}
os: ${{ runner.os }}
env:
# Configure a constant location for the uv cache
UV_CACHE_DIR: /tmp/.uv-cache
- name: Azure CLI Login
if: github.event_name != 'pull_request'
uses: azure/login@v2
with:
client-id: ${{ secrets.AZURE_CLIENT_ID }}
tenant-id: ${{ secrets.AZURE_TENANT_ID }}
subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
- name: Test with pytest
timeout-minutes: 10
run: uv run --directory packages/azure-ai poe integration-tests -n logical --dist loadfile --dist worksteal --timeout 300 --retries 3 --retry-delay 10
working-directory: ./python
- name: Test Azure AI samples
timeout-minutes: 10
if: env.RUN_SAMPLES_TESTS == 'true'
run: uv run pytest tests/samples/ -m "azure-ai"
working-directory: ./python
- name: Surface failing tests
if: always()
uses: pmeier/pytest-results-action@v0.7.2
with:
path: ./python/**.xml
summary: true
display-options: fEX
fail-on-empty: false
title: Test results
# TODO: Add python-tests-lab
python-integration-tests-check:
if: always()
runs-on: ubuntu-latest
needs:
[
python-tests-core,
python-tests-azure-ai
]
steps:
- name: Fail workflow if tests failed
id: check_tests_failed
if: contains(join(needs.*.result, ','), 'failure')
uses: actions/github-script@v8
with:
script: core.setFailed('Integration Tests Failed!')
- name: Fail workflow if tests cancelled
id: check_tests_cancelled
if: contains(join(needs.*.result, ','), 'cancelled')
uses: actions/github-script@v8
with:
script: core.setFailed('Integration Tests Cancelled!')

.github/workflows/python-release.yml

@@ -0,0 +1,62 @@
name: Python - Build Release Assets
on:
release:
types: [published]
permissions:
contents: write
id-token: write
env:
# Configure a constant location for the uv cache
UV_CACHE_DIR: /tmp/.uv-cache
jobs:
python-build-assets:
if: github.event_name == 'release' && startsWith(github.event.release.tag_name, 'python-')
name: Python Build Assets and add to Release
runs-on: ubuntu-latest
environment: "integration"
env:
UV_PYTHON: "3.13"
defaults:
run:
working-directory: python
steps:
- uses: actions/checkout@v6
- name: Set up python and install the project
id: python-setup
uses: ./.github/actions/python-setup
with:
python-version: ${{ env.UV_PYTHON }}
os: ${{ runner.os }}
env:
# Configure a constant location for the uv cache
UV_CACHE_DIR: /tmp/.uv-cache
- name: Set environment variables
run: |
# Extract package name from tag (format: python-<package>-<version>)
TAG="${{ github.event.release.tag_name }}"
PACKAGE=$(echo "$TAG" | sed 's/^python-\([^-]*\)-.*$/\1/')
# Validate package exists
if [[ ! -d "packages/$PACKAGE" ]]; then
echo "Error: Package '$PACKAGE' not found in packages/ directory"
echo "Available packages: $(ls packages/)"
exit 1
fi
echo "PACKAGE=$PACKAGE" >> $GITHUB_ENV
echo "Building package: $PACKAGE"
- name: Check version
run: |
echo "Building and uploading Python package version: ${{ github.event.release.tag_name }}"
echo "Package directory: packages/${{ env.PACKAGE }}"
- name: Build the package
run: uv run poe --directory packages/${{ env.PACKAGE }} build
- name: Release
uses: softprops/action-gh-release@v2
with:
files: |
python/dist/*


@@ -0,0 +1,59 @@
name: Python - Test Coverage Report
on:
workflow_run:
workflows: ["Python - Test Coverage"]
types:
- completed
permissions:
contents: read
pull-requests: write
jobs:
python-test-coverage-report:
runs-on: ubuntu-latest
if: github.event.workflow_run.conclusion == 'success'
continue-on-error: false
defaults:
run:
working-directory: python
steps:
- uses: actions/checkout@v6
- name: Download coverage report
uses: actions/download-artifact@v7
with:
github-token: ${{ secrets.GH_ACTIONS_PR_WRITE }}
run-id: ${{ github.event.workflow_run.id }}
path: ./python
merge-multiple: true
- name: Display structure of downloaded files
run: ls
- name: Read and set PR number
# Need to read the PR number from the file saved in the previous workflow
# because the workflow_run event does not have access to the PR number
# The PR number is needed to post the comment on the PR
run: |
if [ ! -s pr_number ]; then
echo "PR number file 'pr_number' is missing or empty"
exit 1
fi
PR_NUMBER=$(head -1 pr_number | tr -dc '0-9')
if [ -z "$PR_NUMBER" ]; then
echo "PR number file 'pr_number' does not contain a valid PR number"
exit 1
fi
echo "PR_NUMBER=$PR_NUMBER" >> "$GITHUB_ENV"
- name: Pytest coverage comment
id: coverageComment
uses: MishaKav/pytest-coverage-comment@v1.2.0
with:
github-token: ${{ secrets.GH_ACTIONS_PR_WRITE }}
issue-number: ${{ env.PR_NUMBER }}
pytest-xml-coverage-path: python/python-coverage.xml
title: "Python Test Coverage Report"
badge-title: "Python Test Coverage"
junitxml-title: "Python Unit Test Overview"
junitxml-path: python/pytest.xml
default-branch: "main"
report-only-changed-files: true


@@ -0,0 +1,49 @@
name: Python - Test Coverage
on:
pull_request:
branches: ["main", "feature*"]
paths:
- "python/packages/**"
- "python/tests/unit/**"
env:
# Configure a constant location for the uv cache
UV_CACHE_DIR: /tmp/.uv-cache
jobs:
python-tests-coverage:
runs-on: ubuntu-latest
continue-on-error: false
defaults:
run:
working-directory: python
env:
UV_PYTHON: "3.10"
steps:
- uses: actions/checkout@v6
# Save the PR number to a file since the workflow_run event
# in the coverage report workflow does not have access to it
- name: Save PR number
run: |
echo ${{ github.event.number }} > ./pr_number
- name: Set up python and install the project
id: python-setup
uses: ./.github/actions/python-setup
with:
python-version: ${{ env.UV_PYTHON }}
os: ${{ runner.os }}
env:
# Configure a constant location for the uv cache
UV_CACHE_DIR: /tmp/.uv-cache
- name: Run all tests with coverage report
run: uv run poe all-tests-cov --cov-report=xml:python-coverage.xml -q --junitxml=pytest.xml
- name: Upload coverage report
uses: actions/upload-artifact@v6
with:
path: |
python/python-coverage.xml
python/pytest.xml
python/pr_number
overwrite: true
retention-days: 1
if-no-files-found: error

.github/workflows/python-tests.yml

@@ -0,0 +1,54 @@
name: Python - Tests
on:
pull_request:
branches: ["main", "feature*"]
paths:
- "python/**"
env:
# Configure a constant location for the uv cache
UV_CACHE_DIR: /tmp/.uv-cache
jobs:
python-tests:
name: Python Tests
runs-on: ${{ matrix.os }}
strategy:
fail-fast: true
matrix:
python-version: ["3.10", "3.11", "3.12", "3.13", "3.14"]
# todo: add macos-latest when problems are resolved
os: [ubuntu-latest, windows-latest]
env:
UV_PYTHON: ${{ matrix.python-version }}
permissions:
contents: write
defaults:
run:
working-directory: python
steps:
- uses: actions/checkout@v6
- name: Set up python and install the project
id: python-setup
uses: ./.github/actions/python-setup
with:
python-version: ${{ matrix.python-version }}
os: ${{ runner.os }}
env:
# Configure a constant location for the uv cache
UV_CACHE_DIR: /tmp/.uv-cache
# Unit tests
- name: Run all tests
run: uv run poe all-tests
working-directory: ./python
# Surface failing tests
- name: Surface failing tests
if: always()
uses: pmeier/pytest-results-action@v0.7.2
with:
path: ./python/**.xml
summary: true
display-options: fEX
fail-on-empty: false
title: Test results