API Doc-CometAPI
  1. Anthropic Compatible Endpoint

Anthropic Claude

POST
https://api.cometapi.com/v1/messages
Maintainer: Not configured

POST /v1/messages

This endpoint creates chat completions using the same format and parameters as the Anthropic Claude Messages API. It supports Claude models only. Please see all supported models in Model List and Pricing.

Request

Header Params
Authorization
string 
required
Example:
Bearer {{api-key}}
Body Params application/json
model
string 
required
ID of the model to be used. For more information on which models are available for the Chat API, see the Model Endpoint Compatibility table: https://platform.openai.com/docs/models/model-endpoint-compatibility
max_tokens
integer 
optional
The maximum number of tokens to generate in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length.
messages
array [object {2}] 
required
The messages to generate a chat completion for, in the Chat Format: https://platform.openai.com/docs/guides/text?api-mode=chat
role
string 
optional
Role of the message author (e.g. user or assistant).
content
string 
optional
Content of the message.
temperature
number 
optional
What sampling temperature to use, between 0 and 2. Higher values (e.g. 0.8) will make the output more random, while lower values (e.g. 0.2) will make the output more focused and deterministic. We usually recommend changing this or top_p but not both.
top_p
number 
optional
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We usually recommend changing either this or temperature but not both.
n
integer 
optional
How many chat completion options to generate for each input message.
stream
boolean 
optional
If set, partial message deltas will be sent, as in ChatGPT. Tokens will be sent as data-only server-sent events as they become available, and the stream is terminated by a data: [DONE] message. For sample code, see the OpenAI Cookbook.
stop
string 
optional
Up to 4 sequences where the API will stop generating further tokens.
presence_penalty
number 
optional
A number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood of talking about new topics. See more about frequency and presence penalties: https://platform.openai.com/docs/api-reference/parameter-details
frequency_penalty
number 
optional
A number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood of repeating the same line verbatim. See more about frequency and presence penalties: https://platform.openai.com/docs/api-reference/parameter-details
logit_bias
null 
optional
Modifies the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect varies per model, but values between -1 and 1 should decrease or increase the likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
user
string 
optional
A unique identifier representing your end user, which can help OpenAI monitor and detect abuse. Learn more: https://platform.openai.com/docs/guides/safety-best-practices/end-user-ids
Example
{
    "model": "cometapi-3-7-sonnet",
    "max_tokens": 1024,
    "messages": [
        {
            "role": "user",
            "content": "Hello, world"
        }
    ]
}
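Since the stream option delivers data-only server-sent events terminated by a data: [DONE] message, a minimal Python sketch of parsing such an event stream may be useful. Note the event lines below are illustrative stand-ins, not captured API output, and the delta field names are an assumption:

```python
import json

def iter_sse_events(lines):
    """Yield parsed JSON payloads from data-only SSE lines, stopping at [DONE]."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data: "):
            continue  # skip blank lines and SSE comments
        data = line[len("data: "):]
        if data == "[DONE]":
            break  # end-of-stream sentinel
        yield json.loads(data)

# Illustrative stream fragment (shape is an assumption, not a captured response):
sample = [
    'data: {"type": "content_block_delta", "delta": {"text": "Hello"}}',
    'data: {"type": "content_block_delta", "delta": {"text": ", world"}}',
    "data: [DONE]",
]
chunks = [event["delta"]["text"] for event in iter_sse_events(sample)]
```

The same generator works line by line over a live HTTP response body, so partial text can be rendered as it arrives.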

Request samples

Shell
curl --location --request POST 'https://api.cometapi.com/v1/messages' \
--header 'Authorization: Bearer {{api-key}}' \
--header 'Content-Type: application/json' \
--data-raw '{
    "model": "cometapi-3-7-sonnet",
    "max_tokens": 1024,
    "messages": [
        {
            "role": "user",
            "content": "Hello, world"
        }
    ]
}'
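An equivalent request in Python, using only the standard library. The payload mirrors the curl example above; the send step is sketched and commented out because it requires a valid API key:

```python
import json
import urllib.request

API_URL = "https://api.cometapi.com/v1/messages"

def build_payload(model, messages, max_tokens=1024):
    """Assemble the JSON body for POST /v1/messages."""
    return {"model": model, "max_tokens": max_tokens, "messages": messages}

def create_message(payload, api_key):
    """POST the payload and return the parsed JSON response."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_payload(
    "cometapi-3-7-sonnet",
    [{"role": "user", "content": "Hello, world"}],
)
# response = create_message(payload, "your-api-key")
```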

Responses

🟢 200 Successful Response
application/json
Body
id
string 
required
Message's unique identifier
type
string 
required
Type of the object
role
string 
required
Role of the message sender
content
array [object {2}] 
required
List of content blocks generated by the model
type
string 
optional
Type of content block
text
string 
optional
Actual text content
model
string 
required
Specific model name and version that generated this response
stop_reason
string 
required
Reason why the model stopped generating content
usage
object 
required
Object about API call resource usage, mainly token statistics
input_tokens
integer 
required
Number of tokens input to the model (prompts and historical messages, etc.)
output_tokens
integer 
required
Number of tokens in the output content generated by the model
cache_creation_input_tokens
integer 
required
Number of input tokens used to create the cache
cache_read_input_tokens
integer 
required
Number of input tokens read from the cache
Example
{
    "id": "chatcmpl-a1568ee22c7a4f31848f3dc16c2f3",
    "type": "message",
    "role": "assistant",
    "content": [
        {
            "type": "text",
            "text": "Hello! It's nice to meet you. I'm Claude, an AI assistant. How can I help you today? Feel free to ask me anything you'd like - I'm happy to assist with a wide range of topics and tasks."
        }
    ],
    "model": "claude-3-5-sonnet-20240620",
    "stop_reason": "end_turn",
    "stop_sequence": null,
    "usage": {
        "input_tokens": 3,
        "output_tokens": 52,
        "cache_creation_input_tokens": 0,
        "cache_read_input_tokens": 0
    }
}
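The assistant text and token usage can be pulled out of a response like the example above with a couple of small helpers. This is a sketch based on the response fields documented here; the inline message dict is an abbreviated copy of the example:

```python
def extract_text(message):
    """Concatenate the text of all text-type content blocks."""
    return "".join(
        block["text"] for block in message["content"] if block.get("type") == "text"
    )

def total_tokens(message):
    """Sum of input and output tokens reported in usage."""
    usage = message["usage"]
    return usage["input_tokens"] + usage["output_tokens"]

# Abbreviated version of the example response above:
message = {
    "content": [{"type": "text", "text": "Hello! It's nice to meet you."}],
    "usage": {"input_tokens": 3, "output_tokens": 52},
}
```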