API Reference
OpenAI-compatible chat completions and models endpoints. SSE streaming, hard-pinned to deepseek-v3.2.
Mikan Cloud speaks the OpenAI Chat Completions wire format. Anything you'd send to https://api.openai.com/v1 works against https://api.mikancloud.com/v1, subject to the pins below.
POST /v1/chat/completions
Mirrors the OpenAI Chat Completions spec — same request body, same response, same SSE event format.
Pins
| Field | Value | Notes |
|---|---|---|
| `model` | `deepseek-v3.2` | Any other value returns `400 model_not_found`. |
| `stream` | `true` recommended | Non-streaming works, but Cursor expects SSE. |
| `Authorization` | `Bearer sk-mk-…` | Required on every request. |
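A minimal pre-flight sketch that enforces the pins above before sending a request. `build_request` is a hypothetical helper, not part of any SDK; the key value is a placeholder.

```python
def build_request(messages, api_key, model="deepseek-v3.2", stream=True):
    """Return (headers, body) for POST /v1/chat/completions, enforcing the pins."""
    if model != "deepseek-v3.2":
        # Any other model value would come back as 400 model_not_found anyway,
        # so fail fast on the client side.
        raise ValueError(f"unsupported model: {model!r}")
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {"model": model, "stream": stream, "messages": messages}
    return headers, body

headers, body = build_request(
    [{"role": "user", "content": "ping"}], api_key="sk-mk-example"
)
```

Pass the tuple to whatever HTTP client you already use; the wire format is plain JSON over HTTPS.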
Request
```json
{
  "model": "deepseek-v3.2",
  "stream": true,
  "messages": [
    {"role": "system", "content": "You are helpful."},
    {"role": "user", "content": "ping"}
  ],
  "temperature": 0.7,
  "max_tokens": 1024
}
```
Response
200 OK returns the standard OpenAI shape: `id`, `object: "chat.completion"`, `choices[]`, `usage{}`. With `stream: true`, you receive `text/event-stream` chunks (`data: {…}`) terminated by `data: [DONE]`.
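The streaming format above can be consumed with a few lines of parsing. A sketch, fed a canned list of event lines standing in for a live HTTP response body:

```python
import json

def iter_deltas(sse_lines):
    """Yield content fragments from an OpenAI-style SSE stream.

    Each event line looks like `data: {...}`; the stream terminates
    with `data: [DONE]`. Real code would iterate over the HTTP
    response body line by line instead of a Python list.
    """
    for line in sse_lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank separators / keep-alives
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:
            yield delta["content"]

# Canned stream standing in for a live response:
stream = [
    'data: {"choices": [{"delta": {"role": "assistant"}}]}',
    'data: {"choices": [{"delta": {"content": "pong"}}]}',
    "data: [DONE]",
]
print("".join(iter_deltas(stream)))  # → pong
```

The first chunk typically carries only the role; content arrives in subsequent deltas, which is why the parser checks for the `content` key before yielding.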
GET /v1/models
Returns the single supported model.
```json
{
  "object": "list",
  "data": [
    {
      "id": "deepseek-v3.2",
      "object": "model",
      "owned_by": "deepseek"
    }
  ]
}
```
Rate limits
Default: 60 requests per minute per API key. Exceeding the cap returns `429 rate_limit_exceeded`. To raise it, email support; verified accounts can be lifted to 600 rpm.
What we don't expose
- Embeddings, images, audio: DeepSeek V3.2 is text-only.
- Function calling / tool use: supported by upstream; passthrough lands once Cursor ships first-class support.
- File uploads, threads, assistants — out of scope at MVP.