Axione AI Documentation

Welcome to the Axione AI API documentation. Our service provides a unified interface to access multiple AI models through a single, OpenAI-compatible API.

Base URL: https://axione.vercel.app/api/v1

Quick Start

  1. Sign up for an account
  2. Generate an API key from your dashboard
  3. Start making requests to our API endpoints (complete requests are shown in the Code Examples section below)

Authentication

Axione AI uses API keys for authentication. Include your API key in the Authorization header of every request.

Authorization Header
Authorization: Bearer sk-your-api-key-here
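
For example, here is a minimal sketch of sending this header with Python's requests library (reading the key from an AXIONE_API_KEY environment variable is just an illustration, not a requirement of the API):

import os
import requests

# Read the API key from the environment rather than hard-coding it
api_key = os.environ["AXIONE_API_KEY"]

response = requests.get(
    "https://axione.vercel.app/api/v1/models",
    headers={"Authorization": f"Bearer {api_key}"},
)
response.raise_for_status()
print(response.json())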

Chat Completions

Create chat completions using various AI models. This endpoint is compatible with OpenAI's chat completions API.

POST /chat/completions
Request Body:
{
  "model": "openai/gpt-4o",
  "messages": [
    {
      "role": "user",
      "content": "Hello, how are you?"
    }
  ],
  "max_tokens": 150,
  "temperature": 0.7,
  "stream": false
}
Response:
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "openai/gpt-4o",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! I'm doing well, thank you for asking. How can I help you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 10,
    "completion_tokens": 20,
    "total_tokens": 30
  }
}
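
Setting "stream": true returns the completion incrementally instead of as a single response. Below is a sketch using the OpenAI Python SDK, assuming streaming follows OpenAI's server-sent-events chunk format (the SDK consumes the chunks for you):

import openai

client = openai.OpenAI(
    api_key="sk-your-api-key",
    base_url="https://axione.vercel.app/api/v1"
)

# stream=True yields incremental chunks rather than one response object
stream = client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "Hello, how are you?"}],
    stream=True
)

for chunk in stream:
    # Each chunk carries a small delta of the assistant's reply
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)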

Available Models

Get a list of the available AI models. All model IDs follow the format provider/model-name.

GET /models
{
  "object": "list",
  "data": [
    {
      "id": "openai/gpt-4o",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "anthropic/claude-3.5-sonnet",
      "object": "model",
      "created": 1687882411,
      "owned_by": "anthropic"
    },
    {
      "id": "google/gemini-2.5-flash-lite",
      "object": "model",
      "created": 1687882411,
      "owned_by": "google"
    }
  ]
}
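
A short sketch of listing the available model IDs with the OpenAI Python SDK pointed at the Axione base URL:

import openai

client = openai.OpenAI(
    api_key="sk-your-api-key",
    base_url="https://axione.vercel.app/api/v1"
)

# Print the ID of every model the API currently exposes
for model in client.models.list():
    print(model.id)
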
Popular Models
OpenAI Models
  • openai/gpt-4o - Most advanced GPT-4 model
  • openai/gpt-4o-mini - Faster, cost-effective GPT-4
  • openai/gpt-3.5-turbo - Fast and efficient
Other Providers
  • anthropic/claude-3.5-sonnet - Advanced reasoning
  • google/gemini-2.5-flash-lite - Multimodal AI
  • mistral/mistral-large - Mistral's most capable model

Daily Limits

The API enforces daily usage limits, and your tier is re-evaluated automatically each day. No subscription is needed; tiers upgrade based on your usage patterns.

Tier     Requests/Day   Tokens/Day    Status
Tier 1   1,000          250,000       Free
Tier 2   4,000          1,000,000     Auto-upgrade
Tier 3   40,000         10,000,000    Auto-upgrade
Tier 4   80,000         20,000,000    Auto-upgrade
Tier 5   120,000        30,000,000    Premium
Daily Reset: All limits reset automatically at midnight UTC each day.
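
Because limits reset at midnight UTC, a 429 response usually means waiting for the next reset rather than retrying right away. Here is a rough sketch of surfacing that to the caller, assuming the API returns a standard 429 (which the OpenAI SDK raises as RateLimitError); any rate-limit response headers are not documented here:

import openai
from datetime import datetime, timedelta, timezone

client = openai.OpenAI(
    api_key="sk-your-api-key",
    base_url="https://axione.vercel.app/api/v1"
)

def hours_until_reset() -> float:
    """Hours until the next midnight UTC, when daily limits reset."""
    now = datetime.now(timezone.utc)
    next_reset = (now + timedelta(days=1)).replace(hour=0, minute=0, second=0, microsecond=0)
    return (next_reset - now).total_seconds() / 3600

try:
    client.chat.completions.create(
        model="openai/gpt-4o",
        messages=[{"role": "user", "content": "Hello!"}]
    )
except openai.RateLimitError:
    # A 429 from this API means the daily quota is used up, not a short burst limit
    print(f"Daily limit exceeded; limits reset in roughly {hours_until_reset():.1f} hours.")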

Error Handling

The API uses conventional HTTP response codes to indicate success or failure.

Error Response Format
{
  "error": {
    "type": "invalid_request_error",
    "message": "Model 'invalid/model' not found",
    "code": "model_not_found"
  }
}
Common Error Codes:
  • 400 - Bad Request (invalid model format, missing parameters)
  • 401 - Unauthorized (invalid API key)
  • 404 - Not Found (model not found)
  • 429 - Too Many Requests (daily limit exceeded)
  • 500 - Internal Server Error
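
A minimal sketch of checking for the error envelope shown above before reading a completion (the field names follow the error format documented in this section):

import requests

response = requests.post(
    "https://axione.vercel.app/api/v1/chat/completions",
    headers={"Authorization": "Bearer sk-your-api-key"},
    json={"model": "invalid/model", "messages": [{"role": "user", "content": "Hello!"}]},
)

body = response.json()
if not response.ok:
    # Error responses carry type, message, and code under an "error" key
    err = body["error"]
    print(f"{response.status_code} {err['code']}: {err['message']}")
else:
    print(body["choices"][0]["message"]["content"])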

Code Examples

cURL

curl -X POST https://axione.vercel.app/api/v1/chat/completions \
  -H "Authorization: Bearer sk-your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-4o",
    "messages": [
      {
        "role": "user", 
        "content": "Hello!"
      }
    ]
  }'

JavaScript

// Send a chat completion request with fetch and print the assistant's reply
const response = await fetch('https://axione.vercel.app/api/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer sk-your-api-key',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    model: 'anthropic/claude-3.5-sonnet',
    messages: [
      { role: 'user', content: 'Hello!' }
    ]
  })
});

const data = await response.json();
console.log(data.choices[0].message.content);

Python

import openai

# Set up the client with your API key and base URL
client = openai.OpenAI(
    api_key="sk-your-api-key",
    base_url="https://axione.vercel.app/api/v1"
)

# Create a chat completion
response = client.chat.completions.create(
    model="google/gemini-2.5-flash-lite",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)

print(response.choices[0].message.content)