Thanks for your interest in contributing! This is an open source project, so we appreciate community contributions.
Pull requests for bug fixes are welcome. Before submitting new features or changes to existing functionality, please open an issue to discuss your ideas or propose the changes you wish to make. Once a resolution is reached, a PR can be submitted for review. PRs created before a decision has been reached may be closed.
For commit messages, try to use the same conventions as most Go projects. For example:

```
contrib/database/sql: use method context on QueryContext and ExecContext

QueryContext and ExecContext were using the wrong context to create
spans. Instead of using the method's argument they were using the
Prepare context, which was wrong.

Fixes #113
```
Pull requests should follow conventional commits naming format with the following structure:
```
<type>(scope): <description>
```
Where:
- type: The type of change (`feat`, `fix`, `docs`, `style`, `refactor`, `test`, `chore`)
- scope: The package or area affected (e.g., `contrib/database/sql`, `ddtrace/tracer`)
- description: A brief description of the change
Examples:
```
feat(contrib/http): add support for custom headers
fix(ddtrace/tracer): resolve memory leak in span processor
```
All new code is expected to be covered by tests.
We expect all PR checks to pass before we merge a PR.
The code coverage report has a target of 90%. This is the goal, but is not a hard requirement. Reviewers ultimately make the decision about code coverage and quality and will merge PRs at their discretion. Any divergence from the expected 90% should be communicated by the reviewers to the PR author.
Please feel free to comment on a PR if there is any difficulty or confusion about any of the checks.
Our CI pipeline includes several automated checks:
- Copyright Check: Verifies all files have proper copyright headers
- Generate Check: Ensures generated code is up-to-date
- Module Check: Validates Go module consistency using `make fix-modules`
- Lint Check: Runs comprehensive linting using `golangci-lint`
- Lock Analysis: Runs `checklocks` to detect potential deadlocks and race conditions
- Core Tests: Tests the main library functionality
- Integration Tests: Tests against real services using Docker
- Contrib Tests: Tests all third-party integrations
- Race Detection: Tests with Go race detector enabled
- Code Generation: Ensures all generated code is current and consistent
Sometimes a pull request's checks will show failures that aren't related to its changes. When this happens, you can try the following steps:
- Look through the GitHub Action logs for an obvious cause
- Retry the test a few times to see if it flakes
- For internal contributors, ask the #dd-trace-go channel for help
- If you are not an internal contributor, open an issue or ping @DataDog/apm-go
Before submitting a PR, you can run the same checks locally using make targets:
```sh
# Show all available targets
make help

# Install tools
make tools-install

# Run all linters (same as CI)
make lint

# Format code (recommended before committing)
make format

# Check module consistency
make fix-modules

# Run all tests
make test

# Run integration tests
make test-integration
```

You can also run scripts directly for more control:
```sh
# Run specific linting options
make lint

# Format specific file types
make format/go
make format/shell

# Run specific test configurations
make test/contrib
make test/appsec
```

We try to review new PRs within a week of them being opened. If more than two weeks have passed with no reply, please feel free to comment on the PR to bubble it up.
If a PR sits open for more than a month awaiting work or replies by the author, the PR may be closed due to staleness. If you would like to work on it again in the future, feel free to open a new PR and someone will review.
We provide several utility scripts in the scripts/ directory to help with common development tasks:
Runs all linters on the codebase to ensure code quality and consistency.
```sh
# Run all linters (default behavior, install tools)
make lint
```

The script runs:

- `goimports` for import formatting
- `golangci-lint` for comprehensive Go linting
- `checklocks` for lock analysis (with error tolerance)
Formats Go and shell files in the repository.
```sh
# Format both Go and shell files and install tools (default behavior)
make format

# Format Go files and install tools
make format/go

# Format shell files and install tools
make format/shell
```

Analyzes lock usage patterns to detect potential deadlocks and race conditions.
```sh
# Run checklocks on the default target (./ddtrace/tracer)
./scripts/checklocks.sh

# Run checklocks on a specific directory
./scripts/checklocks.sh ./path/to/target

# Run checklocks and ignore errors
./scripts/checklocks.sh --ignore-errors
```

Maintains Go module consistency across the repository by running `go mod tidy` on all modules and adding missing replace directives for local imports.
```sh
make fix-modules
```

This script:

- Runs the `fixmodules` tool to add missing replace directives
- Executes `go mod tidy` on all Go modules in the repository
- Updates the `go.work.sum` file
Enhanced testing script with improved output formatting and additional options.
```sh
# Run core tests only
make test/unit

# Run integration tests
make test/integration

# Run contrib tests
make test/contrib

# Run all tests
make test

# Run with AppSec enabled
make test/appsec
```

The script provides:
- Timestamped output for better debugging
- Early failure detection with clear error messages
- Automatic Docker service management for integration tests
- Support for Apple Silicon (M1/M2) Macs
A set of style guidelines is available in our Wiki. Please spend some time browsing it; it will help tremendously in avoiding review comments and speeding up the PR process.
For local development, use make targets as the primary interface:
```sh
# Instead of running golangci-lint directly
make lint

# Instead of running formatters manually
make format

# Instead of running go mod tidy manually
make fix-modules

# Install all development tools
make tools-install
```

For more specific control, you can use scripts directly:
```sh
# Run specific linting configurations
make lint

# Format only specific file types
make format/go

# Run specific test types
make test/contrib
```

If you prefer using Docker for linting:

```sh
docker run --rm -v $(pwd):/app -w /app golangci/golangci-lint:v1.63.3 golangci-lint run -v --timeout 5m
```

`fmt.Sprintf` can introduce unnecessary overhead when building a string. Favor string builders, or simple string concatenation (`a + "b" + c`) over `fmt.Sprintf` when possible, especially in hot paths.
Sample PR: #3365
Please view our contrib README.md for information on integrations. If you need support for a new integration, please file an issue to discuss before opening a PR.
When working with environment variables, direct use of `os.Getenv` and `os.LookupEnv` is not permitted. Instead, all environment variables must be validated against an allowed list using `env.Get` and `env.Lookup` from the `internal/env` package (or `instrumentation/env` when working on contrib packages). This validation system helps us automatically detect newly introduced variables and ensures they are properly documented and tracked.
Once a new environment variable is added to the codebase, Datadog maintainers will also add it to Datadog's internal configuration registry for tracking and documentation purposes.
Upon each tracer release, new configuration keys are automatically tagged by our CI pipeline to track when they were introduced.
The configinverter tool provides a command to add new environment variable keys to the supported_configurations.json file and regenerate the corresponding Go code.
```sh
go run ./scripts/configinverter/main.go add DD_MY_NEW_KEY
```

After adding it to the codebase, the key also needs to be added to the registry by an internal contributor. If the key already exists in the registry because another language already registered it, this step can be skipped. Not adding the key to the registry will fail the CI step that checks the local file against the registry.
All environment variables should be read at least once by a test. When this happens, a helper automatically detects the usage and adds the variable to the JSON configuration file. Since the variable isn't yet present in the generated code, it won't read any actual environment values initially, but it will be recorded for code generation.
Note that CI jobs will fail if new keys are detected but not properly generated into the code.
You can check for keys that have been added to the JSON file but not yet generated into code:
```sh
go run ./scripts/configinverter/main.go check
```

After the first test run that detects your new environment variable, regenerate the code:
```sh
go run ./scripts/configinverter/main.go generate
```
The GitLab validate_supported_configurations_local_file job validates the JSON file content against Datadog's configuration registry to ensure every configuration key is properly registered and documented. If keys are missing from the registry, the job will fail and display the list of missing keys in the output. These keys must be added to the internal registry by Datadog maintainers for the check to pass, and the key will need to be documented before merging the PR onto main.
Additionally, multiple CI jobs include a step that checks for newly discovered environment variables during test execution and will fail if keys are missing from the generated list. To resolve this failure, use one of the two methods described above to add the key to the generated list.
When adding a new dependency, especially for contrib/ packages, prefer the minimum secure versions of any modules rather than the latest versions. This is to avoid forcing upgrades on downstream users for modules such as google.golang.org/grpc which often introduce breaking changes within minor versions.
This repository used to omit many dependencies from the go.mod file due to concerns around version compatibility (ref). As such, you may have configured git to ignore changes to go.mod and go.sum. To undo this, run
```sh
git update-index --no-assume-unchanged go.*
```

Please also see the section about "Adding Go modules" when it comes to selecting the minimum secure version of a module rather than the latest version.
Then start by updating the main go.mod file, e.g. by running a go get command in the root of the repository like this:
```sh
go get <import-path>@<new-version>
```
Then run the following command to update all go.mod and go.sum files in the repository:
```sh
make fix-modules
```
This is necessary because dd-trace-go is a multi-module repository.
Some benchmarks will run on any new PR commit; the results will be posted as a comment on the PR upon completion.
To add additional benchmarks that should run for every PR, go to .gitlab-ci.yml.
Add the name of your benchmark to the BENCHMARK_TARGETS variable using pipe character separators.
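For illustration, the resulting variable might look like this (a sketch only; the benchmark names are hypothetical, and the surrounding structure should be checked against the actual .gitlab-ci.yml):

```yaml
variables:
  # Pipe-separated list of benchmark names to run on every PR.
  BENCHMARK_TARGETS: "BenchmarkStartSpan|BenchmarkConcurrentTracing|BenchmarkMyNewFeature"
```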
Some core packages are using uber-go/goleak to detect goroutine leaks.
To isolate the leak to a single test, you can use the bash script from the goleak README.
If you are experiencing a leak failure in CI that doesn't seem to reproduce locally, try running a local datadog agent. Some test failures only appear when http connections to the agent are created and become idle after the test completes.
Last but not least, you might find a goroutine leak with an unhelpful stack trace:
```
Goroutine 92554 in state IO wait, with internal/poll.runtime_pollWait on top of the stack:
internal/poll.runtime_pollWait(0x7f46dcd5b368, 0x72)
	/opt/hostedtoolcache/go/1.23.11/x64/src/runtime/netpoll.go:351 +0x85
...
net/http.(*persistConn).readLoop(0xc0041c8b40)
	/opt/hostedtoolcache/go/1.23.11/x64/src/net/http/transport.go:2205 +0x354
created by net/http.(*Transport).dialConn in goroutine 92609
	/opt/hostedtoolcache/go/1.23.11/x64/src/net/http/transport.go:1874 +0x29b4
```
In this case, consider editing the stdlib code (e.g. `http/transport.go:1874`) to print a stack trace at the location where the goroutine is being created:

```go
fmt.Printf("Leak start at stack=%s\n", string(debug.Stack()))
```

In practice, leaks often go through `http.(*Client).Do`, so that can be a good place to instrument as well.
Following the advice above, most goroutine leaks should be easy to debug and fix.