Language Models

Language Models are simply the AI models you already know and use. Figurative's Language Models are a way of defining how you access a model from an AI provider once, so it can be reused multiple times.

A single Language Model can power as many Integrals as you need, with zero overhead and no further configuration.

Before proceeding, head over to the Sign up page to create an account if you don't already have one.

Prerequisites

All you need when configuring your Language Model is your choice of AI provider and model. Every other setting is optional.

Supported providers

We plan to support as many providers as possible. Currently, you can configure a Language Model using any of the available models from these providers:

  • Anthropic
  • OpenAI

Upcoming providers include Llama, Google, DeepSeek, and Grok.

Create your Language Model

Visit the Language Models page to add a Language Model.

Provide input parameters

A Language Model requires a unique name, a provider, and a model choice, as shown below. Prompts are defined as part of the Integral, which keeps the Language Model modular and reusable across different scenarios.
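As a rough sketch, the required fields could be represented as follows. The field names and values here are illustrative assumptions, not Figurative's exact schema:

```python
# Illustrative sketch only: field names are assumptions, not Figurative's actual schema.
language_model = {
    "name": "support-summarizer-model",  # must be unique across your Language Models
    "provider": "anthropic",             # one of the supported providers
    "model": "<model-name>",             # any model available from that provider
}

print(language_model["name"])
```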

Adding extra configuration

Language Models support several optional configuration parameters, including temperature and other common model settings. We've designed this interface to abstract and standardize common parameters across multiple providers, so you can use the same configuration options with different providers without worrying about the underlying values.
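To illustrate the idea of standardized parameters, the same tuning options can be reused unchanged across providers. This is only a sketch; the field names are assumptions, not Figurative's actual schema:

```python
# Sketch: one standardized set of options applied to different providers.
# Field names are illustrative assumptions, not Figurative's actual schema.
shared_options = {"temperature": 0.7, "max_tokens": 1024}

anthropic_model = {"provider": "anthropic", "model": "<model-name>", **shared_options}
openai_model = {"provider": "openai", "model": "<model-name>", **shared_options}

# Both configurations carry identical tuning values.
print(anthropic_model["temperature"] == openai_model["temperature"])
```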

Using your Language Model

Language Models can currently only be used through an Integral. Create an Integral and assign your model so you can use it in live mode. Once the instance is live, you can make queries against the Language Model.

Try out the Language Model

After deploying your Integral, you can test the Language Model against the following API endpoint using the deployment URL. Generate a new API key if you don't have one already.
import requests

url = '<your-deployment-url>/query'

body = {
    "messages": [
        {"content": "Hello world", "role": "user"},
    ]
}
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer <your-api-key>",
}

response = requests.post(url, headers=headers, json=body)

print(response.status_code)
print(response.json())

That's it. You're ready to start building awesome projects.

Request data format

The body field accepts an array of messages, supporting the same multi-turn conversations seen on other AI providers. The messages[number].content field supports text only at the moment. Plans for structured inputs are underway; in the meantime, you can pass stringified JSON to the content field as input to the prompt when needed. See the examples below:
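For example, structured input can be passed today by serializing it into the content field. The payload below is hypothetical:

```python
import json

# Hypothetical structured payload to feed into the prompt.
order = {"order_id": 1234, "action": "refund"}

body = {
    "messages": [
        {"role": "user", "content": "Process the following order."},
        # Stringified JSON as a stand-in until structured inputs are supported.
        {"role": "user", "content": json.dumps(order)},
    ]
}

print(body["messages"][1]["content"])
```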

Response format

The content field can be either a text string or a JSON object, depending on the response format configured on the Integral. Note: the response returns only the current AI message, not the full conversation thread.
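On the client side, one defensive way to handle both shapes is to attempt JSON parsing and fall back to plain text. This assumes the response exposes a content field as described above; the helper below is a sketch, not part of Figurative's API:

```python
import json

def parse_content(content):
    """Return the content as a dict when it is JSON, otherwise as-is."""
    if isinstance(content, dict):
        return content  # already a JSON object
    try:
        return json.loads(content)
    except (json.JSONDecodeError, TypeError):
        return content  # plain text

print(parse_content('{"sentiment": "positive"}'))
print(parse_content("Hello there"))
```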

Next steps

Discover how to maximize your Integrals and explore additional resources.