Ruby Quick Start Guide for /v1/chat/completions API
Last updated May 16, 2025
Our Claude chat models (Claude 3.7 Sonnet, Claude 3.5 Sonnet latest, Claude 3.5 Haiku, and Claude 3.0 Haiku) generate conversational completions for input messages. This guide walks you through how to use the /v1/chat/completions
API with Ruby.
Prerequisites
Before making requests, provision access to the model of your choice.
If it’s not already installed, install the Heroku CLI. Then install the Heroku AI plugin:
heroku plugins:install @heroku/plugin-ai
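Optionally, list the models available to provision before choosing one (this assumes the plugin's standard ai:models:list command):
heroku ai:models:list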
Create and attach a chat model to your app:
# If you don't have an app yet, you can create one with:
heroku create example-app # specify the name you want for your app, or skip this step to use an existing app you have

# Create and attach one of our chat models to your app, example-app:
heroku ai:models:create -a example-app claude-3-7-sonnet --as INFERENCE
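Attaching the model (aliased as INFERENCE above) sets the INFERENCE_KEY, INFERENCE_URL, and INFERENCE_MODEL_ID config vars on your app. The Ruby example below reads these values from environment variables, so export them into your local shell first, replacing example-app with your app's name:
export INFERENCE_KEY=$(heroku config:get -a example-app INFERENCE_KEY)
export INFERENCE_URL=$(heroku config:get -a example-app INFERENCE_URL)
export INFERENCE_MODEL_ID=$(heroku config:get -a example-app INFERENCE_MODEL_ID)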
Ruby Example Code
require 'net/http'
require 'json'
require 'uri'
# Fetch required environment variables or raise an error if missing
INFERENCE_URL = ENV.fetch('INFERENCE_URL') do
  raise <<~ERROR
    Environment variable 'INFERENCE_URL' is missing.
    Set it using:
    export INFERENCE_URL=$(heroku config:get -a $APP_NAME INFERENCE_URL)
  ERROR
end
INFERENCE_KEY = ENV.fetch('INFERENCE_KEY') do
  raise <<~ERROR
    Environment variable 'INFERENCE_KEY' is missing.
    Set it using:
    export INFERENCE_KEY=$(heroku config:get -a $APP_NAME INFERENCE_KEY)
  ERROR
end
INFERENCE_MODEL_ID = ENV.fetch('INFERENCE_MODEL_ID') do
  raise <<~ERROR
    Environment variable 'INFERENCE_MODEL_ID' is missing.
    Set it using:
    export INFERENCE_MODEL_ID=$(heroku config:get -a $APP_NAME INFERENCE_MODEL_ID)
  ERROR
end
##
# Parses and prints the API response for the chat completion request.
#
# @param response [Net::HTTPResponse] The response object from the API call.
def parse_chat_output(response)
  if response.is_a?(Net::HTTPSuccess)
    result = JSON.parse(response.body)
    content = result.dig('choices', 0, 'message', 'content')
    puts "Chat Completion: #{content}"
  else
    puts "Request failed: #{response.code}, #{response.body}"
  end
end
##
# Generates a chat completion using the attached Claude chat model.
#
# @param payload [Hash] Hash containing parameters for the chat completion request.
def generate_chat_completion(payload)
  uri = URI.join(INFERENCE_URL, '/v1/chat/completions')
  request = Net::HTTP::Post.new(uri)
  request['Authorization'] = "Bearer #{INFERENCE_KEY}"
  request['Content-Type'] = 'application/json'
  request.body = payload.to_json
  response = Net::HTTP.start(uri.hostname, uri.port, use_ssl: uri.scheme == 'https') do |http|
    http.request(request)
  end
  parse_chat_output(response)
end
# Example payload
payload = {
  model: INFERENCE_MODEL_ID,
  messages: [
    { role: 'user', content: 'Hello!' },
    { role: 'assistant', content: 'Hi there! How can I assist you today?' },
    { role: 'user', content: 'Why is Heroku so cool?' }
  ],
  temperature: 0.5,
  max_tokens: 100,
  stream: false
}
# Generate a chat completion with the given payload
generate_chat_completion(payload)
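Save the script (for example, as chat_completion.rb) and run it with ruby chat_completion.rb. On success, it prints the model's reply prefixed with "Chat Completion:"; otherwise it prints the HTTP status code and response body returned by the API.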