LLMConfigsResource

LLMConfigsResource(client: Client)

Bases: BaseResource
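
The reference shows only the constructor signature, so the following construction sketch assumes hypothetical import paths (`my_sdk`, `my_sdk.resources`); adjust them to the actual package layout.

```python
# Assumed import paths; adjust to the actual package layout.
from my_sdk import Client
from my_sdk.resources import LLMConfigsResource

client = Client()  # construct the client however your setup requires
llm_configs = LLMConfigsResource(client)
```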

create

create(
    name: str,
    llm_model_name: str,
    api_key: str,
    description: str = '',
    llm_base_url: str | None = None,
    api_version: str | None = None,
    max_connections: int = 10,
    max_retries: int | None = None,
    timeout: int | None = None,
    system_message: str | None = None,
    max_tokens: int | None = None,
    top_p: float | None = None,
    temperature: float | None = None,
    best_of: int | None = None,
    top_k: int | None = None,
    logprobs: bool | None = None,
    top_logprobs: int | None = None,
    reasoning_effort: str | None = None,
    inference_type: InferenceType = OPENAI,
) -> LLMConfig

Create a new LLM configuration.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `name` | `str` | Name for the LLM config. | *required* |
| `llm_model_name` | `str` | Name of the LLM model. | *required* |
| `api_key` | `str` | API key for the LLM service. | *required* |
| `description` | `str` | Optional description for the LLM config. | `''` |
| `llm_base_url` | `str \| None` | Optional base URL for the LLM service. | `None` |
| `api_version` | `str \| None` | Optional API version. | `None` |
| `max_connections` | `int` | Maximum number of concurrent connections to the LLM provider. | `10` |
| `max_retries` | `int \| None` | Optional maximum number of retries. | `None` |
| `timeout` | `int \| None` | Optional timeout in seconds. | `None` |
| `system_message` | `str \| None` | Optional system message for the LLM. | `None` |
| `max_tokens` | `int \| None` | Optional maximum number of tokens to generate. | `None` |
| `top_p` | `float \| None` | Optional nucleus sampling parameter. | `None` |
| `temperature` | `float \| None` | Optional temperature parameter. | `None` |
| `best_of` | `int \| None` | Optional number of completions to generate. | `None` |
| `top_k` | `int \| None` | Optional top-k sampling parameter. | `None` |
| `logprobs` | `bool \| None` | Optional flag to return log probabilities. | `None` |
| `top_logprobs` | `int \| None` | Optional number of top log probabilities to return. | `None` |
| `reasoning_effort` | `str \| None` | Optional reasoning effort parameter for o-series models. | `None` |
| `inference_type` | `InferenceType` | Type of inference provider to use. | `OPENAI` |

Returns:

| Type | Description |
| --- | --- |
| `LLMConfig` | The created LLM configuration. |

Raises:

| Type | Description |
| --- | --- |
| `HTTPStatusError` | If an LLM config with the same name already exists. |
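
A minimal usage sketch, reusing the `llm_configs` instance from the construction example above. The config name and sampling values are illustrative, and treating `HTTPStatusError` as `httpx.HTTPStatusError` is an assumption.

```python
import httpx  # assumption: HTTPStatusError refers to httpx.HTTPStatusError

try:
    config = llm_configs.create(
        name="gpt-4o-default",    # illustrative config name
        llm_model_name="gpt-4o",
        api_key="sk-...",         # your provider API key
        temperature=0.2,
        max_tokens=1024,
    )
except httpx.HTTPStatusError:
    # A config named "gpt-4o-default" already exists in this project.
    raise
```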

delete

delete(llm_config: LLMConfig) -> None

Delete an LLM configuration.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `llm_config` | `LLMConfig` | The LLM configuration to delete. | *required* |

Raises:

| Type | Description |
| --- | --- |
| `HTTPStatusError` | If the LLM config doesn't exist or belongs to a different project. |
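
For example, continuing the sketch above (the config name is illustrative):

```python
config = llm_configs.get("gpt-4o-default")
llm_configs.delete(config)
```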

get

get(name: str) -> LLMConfig

Get an LLM config by name.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `name` | `str` | Name of the LLM config. | *required* |

Returns:

| Type | Description |
| --- | --- |
| `LLMConfig` | The requested LLM config. |

Raises:

| Type | Description |
| --- | --- |
| `ValueError` | If no LLM config is found with the given name. |
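
For example, turning the lookup failure into a sentinel value (continuing the sketch above):

```python
try:
    config = llm_configs.get("gpt-4o-default")
except ValueError:
    config = None  # no LLM config with that name exists
```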

get_or_create

get_or_create(
    name: str,
    llm_model_name: str,
    api_key: str,
    description: str = '',
    llm_base_url: str | None = None,
    api_version: str | None = None,
    max_connections: int = 10,
    max_retries: int | None = None,
    timeout: int | None = None,
    system_message: str | None = None,
    max_tokens: int | None = None,
    top_p: float | None = None,
    temperature: float | None = None,
    best_of: int | None = None,
    top_k: int | None = None,
    logprobs: bool | None = None,
    top_logprobs: int | None = None,
    reasoning_effort: str | None = None,
    inference_type: InferenceType = OPENAI,
) -> tuple[LLMConfig, bool]

Get an existing LLM config or create a new one.

The existence check is based solely on the `name` parameter: if an LLM config with the given name exists, it is returned regardless of the other parameters. If no config with that name exists, a new one is created using all provided parameters.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `name` | `str` | Name for the LLM config. | *required* |
| `llm_model_name` | `str` | Name of the LLM model. | *required* |
| `api_key` | `str` | API key for the LLM service. | *required* |
| `description` | `str` | Optional description for the LLM config. | `''` |
| `llm_base_url` | `str \| None` | Optional base URL for the LLM service. | `None` |
| `api_version` | `str \| None` | Optional API version. | `None` |
| `max_connections` | `int` | Maximum number of concurrent connections to the LLM provider. | `10` |
| `max_retries` | `int \| None` | Optional maximum number of retries. | `None` |
| `timeout` | `int \| None` | Optional timeout in seconds. | `None` |
| `system_message` | `str \| None` | Optional system message for the LLM. | `None` |
| `max_tokens` | `int \| None` | Optional maximum number of tokens to generate. | `None` |
| `top_p` | `float \| None` | Optional nucleus sampling parameter. | `None` |
| `temperature` | `float \| None` | Optional temperature parameter. | `None` |
| `best_of` | `int \| None` | Optional number of completions to generate. | `None` |
| `top_k` | `int \| None` | Optional top-k sampling parameter. | `None` |
| `logprobs` | `bool \| None` | Optional flag to return log probabilities. | `None` |
| `top_logprobs` | `int \| None` | Optional number of top log probabilities to return. | `None` |
| `reasoning_effort` | `str \| None` | Optional reasoning effort parameter for o-series models. | `None` |
| `inference_type` | `InferenceType` | Type of inference provider to use. | `OPENAI` |

Returns:

| Type | Description |
| --- | --- |
| `tuple[LLMConfig, bool]` | The LLM configuration, and a boolean that is `True` if a new config was created or `False` if an existing one was returned. |
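
Because the existence check is name-only, `get_or_create` suits idempotent setup code. A sketch, continuing the assumptions above:

```python
config, created = llm_configs.get_or_create(
    name="gpt-4o-default",
    llm_model_name="gpt-4o",
    api_key="sk-...",
)
if created:
    print("Created a new LLM config.")
else:
    # The existing config was returned; the creation parameters were ignored.
    print("Reusing the existing LLM config.")
```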