Sets inputs, outputs, tags, metadata, and metrics as provided for a given LLM Observability span. Note that with the exception of tags, this method will override any existing values for the provided fields.
For example:
llmobs.trace({ kind: 'llm', name: 'myLLM', modelName: 'gpt-4o', modelProvider: 'openai' }, () => {
  llmobs.annotate({
    inputData: [{ content: 'system prompt', role: 'system' }, { content: 'user prompt', role: 'user' }],
    outputData: { content: 'response', role: 'ai' },
    metadata: { temperature: 0.7 },
    tags: { host: 'localhost' },
    metrics: { inputTokens: 10, outputTokens: 20, totalTokens: 30 }
  })
})
An object containing the inputs, outputs, tags, metadata, and metrics to set on the span.
Decorates a function in a JavaScript runtime that supports function decorators. Note that decorators are not supported natively in the Node.js runtime, but are available in TypeScript.
In TypeScript, this decorator is only supported in contexts where general TypeScript function decorators are supported.
Optional LLM Observability span options.
Enable LLM Observability tracing.
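For example, LLM Observability might be enabled programmatically after the tracer is initialized (a sketch; the `mlApp` name and option values here are placeholders):

```javascript
// Sketch: enabling LLM Observability at runtime; 'my-app' is a placeholder ML app name.
const tracer = require('dd-trace').init()

tracer.llmobs.enable({
  mlApp: 'my-app',        // the ML application these traces belong to
  agentlessEnabled: true  // send data directly to Datadog instead of through an agent
})
```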
Returns a representation of a span that exposes its span and trace IDs. If no span is provided, the current active LLMObs-type span will be used.
An object containing the span and trace IDs.
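A sketch of exporting the current span's IDs from inside a traced function (assuming dd-trace is installed and LLM Observability is enabled; names are placeholders):

```javascript
// Sketch, assuming dd-trace is installed and LLM Observability is enabled.
const tracer = require('dd-trace').init({ llmobs: { mlApp: 'my-app' } })
const llmobs = tracer.llmobs

llmobs.trace({ kind: 'workflow', name: 'myWorkflow' }, () => {
  // With no span argument, the current active LLMObs span is exported.
  const spanContext = llmobs.exportSpan()
  // spanContext contains the span and trace IDs of the active span
})
```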
Submits a custom evaluation metric for a given span ID and trace ID.
The span context of the span to submit the evaluation metric for.
An object containing the label, metric type, value, and tags of the evaluation metric.
Instruments a function by automatically creating a span activated on its scope.
The span will automatically be finished when one of these conditions is met:
- The function returns a promise, in which case the span will finish when the promise is resolved or rejected.
- The function takes a callback as an argument, in which case the span will finish when that callback is called.
- The function neither accepts a callback nor returns a promise, in which case the span will finish when the function returns.
The return value of the function.
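A sketch of tracing an async step (assuming the traced function receives the created span; `answerQuestion` and `callMyLLM` are placeholder names):

```javascript
// Sketch: the returned promise controls when the span finishes,
// and trace() returns the function's return value.
const answer = await llmobs.trace({ kind: 'workflow', name: 'answerQuestion' }, async (span) => {
  llmobs.annotate(span, { inputData: 'What is LLM Observability?' })
  const response = await callMyLLM() // placeholder for the real work
  llmobs.annotate(span, { outputData: response })
  return response // also becomes the return value of trace()
})
```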
Wraps a function to automatically create a span activated on its scope when the wrapped function is called.
The span will automatically be finished when one of these conditions is met:
- The function returns a promise, in which case the span will finish when the promise is resolved or rejected.
- The function takes a callback as an argument, in which case the span will finish when that callback is called.
- The function neither accepts a callback nor returns a promise, in which case the span will finish when the function returns.
Optional LLM Observability span options.
The function to instrument.
A new function that wraps the provided function with span creation.
Whether or not LLM Observability is enabled.