Research GenAI Ecosystem #154

@jaydeluca

Description

Background

OpenTelemetry is actively working on semantic conventions for generative AI, but the
ecosystem is evolving rapidly. Frameworks and clients such as LangChain, LlamaIndex,
OpenAI, Anthropic, and others are racing to add observability, yet there is no
authoritative map of what telemetry each actually emits or how well it aligns with
the semantic conventions.
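For reference, a chat-completion span that follows the OTel GenAI semantic
conventions carries attributes along these lines. This is a minimal sketch:
the attribute names come from the semconv (which is still experimental and
may change), while the values are invented for illustration.

```python
# Illustrative attributes for a single GenAI chat span, following the
# OpenTelemetry GenAI semantic conventions. Values are invented; the
# model name is a placeholder, not a recommendation.
chat_span_attributes = {
    "gen_ai.operation.name": "chat",             # kind of GenAI operation
    "gen_ai.system": "anthropic",                # provider that handled the call
    "gen_ai.request.model": "example-model",     # model requested by the client
    "gen_ai.response.model": "example-model",    # model that actually responded
    "gen_ai.usage.input_tokens": 120,            # prompt token count
    "gen_ai.usage.output_tokens": 256,           # completion token count
}
```

Comparing which of these keys each framework actually sets is one concrete way
to measure the "alignment" this issue asks about.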

Goal

Document the GenAI/LLM instrumentation landscape so users of the ecosystem explorer
can understand what's available, what telemetry each framework emits, and how
complete the semantic convention adoption is.

Open questions for contributors

  • Which GenAI frameworks have OTel instrumentation? (native, contrib, third-party)
  • What signals do they capture? (traces, metrics, logs)
  • How complete is adoption of the GenAI semantic conventions?
  • What patterns are emerging for tracing RAG pipelines and agent tool calls?
  • How does coverage vary across languages (Python, JS, Java, .NET)?
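On the RAG/agent question above, one pattern emerging in the semconv is a
nested trace: an `invoke_agent` span wrapping `chat` and `execute_tool` child
spans. The sketch below renders that shape as plain nested dicts rather than
real OTel spans; the operation names come from the semconv, while the agent,
tool, and model names are invented.

```python
# Illustrative shape of one traced agent step. Span names follow the
# semconv pattern "{operation} {target}"; "research_assistant",
# "web_search", and "example-model" are hypothetical names.
agent_trace = {
    "name": "invoke_agent research_assistant",
    "attributes": {"gen_ai.operation.name": "invoke_agent"},
    "children": [
        {
            # The LLM call the agent made to decide on a tool.
            "name": "chat example-model",
            "attributes": {
                "gen_ai.operation.name": "chat",
                "gen_ai.request.model": "example-model",
            },
            "children": [],
        },
        {
            # The tool execution the model requested.
            "name": "execute_tool web_search",
            "attributes": {
                "gen_ai.operation.name": "execute_tool",
                "gen_ai.tool.name": "web_search",
            },
            "children": [],
        },
    ],
}
```

Surveying which frameworks emit this parent/child structure, versus flat
per-call spans, would directly answer the agent-tracing question.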

Related work

The genai-otel-conformance project
runs automated conformance tests for 40+ GenAI libraries, measuring per-attribute
coverage across span types (inference, embeddings, tool execution, agents). Results
are published to a conformance dashboard.
This is a valuable data source and potential integration point.
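The per-attribute coverage idea can be sketched in a few lines. This is a
hypothetical scoring function in the spirit of what a conformance suite
might compute for one span type; the required attribute set below is
illustrative, not the project's actual rubric.

```python
# Hypothetical per-attribute coverage score for one span type.
# The required set is an illustrative subset of the GenAI semconv
# chat attributes, not the conformance project's real checklist.
REQUIRED_CHAT_ATTRIBUTES = {
    "gen_ai.operation.name",
    "gen_ai.system",
    "gen_ai.request.model",
    "gen_ai.usage.input_tokens",
    "gen_ai.usage.output_tokens",
}

def attribute_coverage(emitted: dict, required=REQUIRED_CHAT_ATTRIBUTES) -> float:
    """Fraction of required semconv attributes present on an emitted span."""
    return len(required & emitted.keys()) / len(required)

# A library emitting only the operation name and model would score 2/5 = 0.4.
partial_span = {"gen_ai.operation.name": "chat", "gen_ai.request.model": "m"}
score = attribute_coverage(partial_span)
```

Aggregating such scores per framework and language would give the ecosystem
explorer a comparable adoption metric.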

Scope

Research should cover the major frameworks and languages:

  • Python: LangChain, LlamaIndex, OpenAI, Anthropic, LiteLLM, others
  • JS/TS: LangChain.js, OpenAI, Anthropic clients
  • Java and .NET as available

The output should feed into how GenAI instrumentation is represented in the
ecosystem explorer.
