The Hicap API gives you access to foundation models from OpenAI, Anthropic, Google Gemini, and more — through a single OpenAI-compatible endpoint. Use any standard OpenAI SDK and start building in under 5 minutes.

1. Get your API key

Create an account and generate an API key in the Hicap Platform dashboard, then come back here to use it.
Export it as an environment variable:
export HICAP_API_KEY="your_api_key_here"
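If your first request later fails with an authentication error, the usual cause is that the exported variable is not visible to the Node process. A quick standalone check (a minimal sketch; nothing Hicap-specific is assumed):

```javascript
// Confirm the key exported above is visible to Node before making requests.
const apiKey = process.env.HICAP_API_KEY ?? "";
const keyIsSet = apiKey.length > 0;
console.log(keyIsSet ? "HICAP_API_KEY is set." : "HICAP_API_KEY is missing; export it first.");
```

Remember that environment variables only apply to shells (and their child processes) where the `export` was run.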

2. Install an SDK and make your first request

npm install openai
example.mjs
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.hicap.ai/v1",
  apiKey: process.env.HICAP_API_KEY,
});

const response = await client.chat.completions.create({
  model: "gpt-5",
  messages: [{ role: "user", content: "Write a one-sentence bedtime story about a unicorn." }],
});

console.log(response.choices[0].message.content);
Run with node example.mjs.
That’s it — you just called the Hicap API. The same code works with any supported model; just change the model parameter.

Switch between models and providers

One API, any model. Use OpenAI, Anthropic, or Gemini models through the same endpoint and SDK — no separate integrations needed.
// OpenAI
await client.chat.completions.create({ model: "gpt-5", messages: [...] });

// Anthropic Claude
await client.chat.completions.create({ model: "claude-sonnet-4.5", messages: [...] });

// Google Gemini
await client.chat.completions.create({ model: "gemini-2.5-pro", messages: [...] });

// Moonshot Kimi
await client.chat.completions.create({ model: "kimi-k2.5", messages: [...] });
See all available models, context windows, and pricing → All Models or models.json
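Because switching providers is just a different model string, a small lookup can centralize the choice in one place. A sketch; the `defaultModels` map and `modelFor` helper are illustrative, with model IDs taken from the examples above:

```javascript
// Illustrative provider-to-model map; model IDs match the examples above.
const defaultModels = {
  openai: "gpt-5",
  anthropic: "claude-sonnet-4.5",
  google: "gemini-2.5-pro",
  moonshot: "kimi-k2.5",
};

// Resolve a provider name to its default model ID.
function modelFor(provider) {
  const model = defaultModels[provider];
  if (!model) throw new Error(`Unknown provider: ${provider}`);
  return model;
}

console.log(modelFor("anthropic")); // → claude-sonnet-4.5
```

Pass `modelFor(provider)` as the `model` parameter; nothing else in the request changes.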

Analyze images

Send image URLs directly to vision-capable models:
const response = await client.chat.completions.create({
  model: "gpt-5",
  messages: [{
    role: "user",
    content: [
      { type: "text", text: "What is in this image?" },
      { type: "image_url", image_url: { url: "https://example.com/photo.jpg" } },
    ],
  }],
});

console.log(response.choices[0].message.content);
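OpenAI-compatible chat endpoints generally also accept images inline as base64 data URLs instead of public links. A helper to build one from a local file (a sketch; whether a given Hicap model accepts data URLs is an assumption to verify against the model docs):

```javascript
import { readFileSync } from "node:fs";

// Encode a local image file as a data URL, usable in place of a
// public https:// link in the image_url field above.
function toDataUrl(path, mimeType = "image/jpeg") {
  const base64 = readFileSync(path).toString("base64");
  return `data:${mimeType};base64,${base64}`;
}
```

Usage: `{ type: "image_url", image_url: { url: toDataUrl("photo.jpg") } }`.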

Stream responses

Use server-sent events to show results as they’re generated:
const stream = await client.chat.completions.create({
  model: "gpt-5",
  messages: [{ role: "user", content: "Explain quantum computing in simple terms." }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || "");
}
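If you also need the full reply as a single string (for logging or caching), the delta chunks can be folded into one value. A minimal sketch that works on any async iterable shaped like the stream above:

```javascript
// Accumulate streamed delta fragments into the complete message text.
async function collectStream(stream) {
  let text = "";
  for await (const chunk of stream) {
    text += chunk.choices[0]?.delta?.content ?? "";
  }
  return text;
}
```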

These are the most commonly used Hicap API endpoints. All follow the OpenAI API spec — if you’ve used the OpenAI API before, you already know how to use these.
| Endpoint | Description | Docs |
| --- | --- | --- |
| POST /v1/chat/completions | Generate text, analyze images, use tools | Reference → |
| POST /v1/embeddings | Create text embeddings for search and RAG | Reference → |
| POST /v1/images/generations | Generate images with DALL·E | Reference → |
| POST /v1/audio/transcriptions | Transcribe audio with Whisper | Reference → |
| POST /v1/audio/speech | Generate speech from text | Reference → |
Full API Reference →
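Embeddings returned by /v1/embeddings are plain float arrays, so ranking documents for search or RAG only needs a similarity function on your side. A sketch of the standard cosine-similarity computation:

```javascript
// Cosine similarity between two equal-length embedding vectors;
// closer to 1 means more semantically similar.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

Embed your query and your documents, then sort documents by similarity to the query vector.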

Models

Start with gpt-5 for strong general-purpose performance, or pick a model optimized for your use case.

GPT-5.4

OpenAI’s most capable model. 1M+ context window, top-tier reasoning.

Claude Opus 4.6

Anthropic’s flagship. Deep reasoning, coding, and analysis.

Gemini 3.1 Pro

Google’s latest. 1M context, strong multimodal and reasoning.

GPT-5-mini

Fast and affordable. Great balance of performance and cost.

Claude Haiku 4.5

Ultra-fast. Ideal for high-volume, latency-sensitive workloads.

Gemini 2.5 Flash

Cost-efficient with thinking capabilities. Great for real-time apps.
View all models →

Why Hicap?

Up to 25% lower cost

We bulk-reserve compute from leading cloud providers and pass the savings to you.

One API, all providers

OpenAI, Anthropic, Gemini, Moonshot, Zhipu, MiniMax — all through one OpenAI-compatible endpoint.

5-minute integration

Use the standard OpenAI SDK. Change the base URL. You’re done.

No lock-in

Start monthly, scale anytime. Switch models with a single parameter change.

Next steps

API Reference

Full endpoint docs, authentication, and request/response schemas.

All Models

Compare all models by provider, tier, context window, and pricing.

FAQ

Common questions about pricing, security, architecture, and data handling.