How to create an API key and run inference using the Minibase Inference Cloud.

Retrieving an API key and running your first inference request using cURL, Python, or JavaScript.

Written by Michael McCarty
Updated over a week ago

Overview

API keys allow you to authenticate and interact with Minibase.ai programmatically, without needing to log in through the web interface. Once you’ve created an API key, you can use it to run inference requests on any of your models. This article explains how to retrieve an API key from the dashboard and make your first inference request using cURL, Python, or JavaScript.


How to Use

Step 1: Retrieve your API key

  1. Go to Settings → API Keys in your Minibase dashboard.

  2. Click Create API Key. You can optionally give it a name (e.g., “Production Server”) and set an expiration date.

  3. Copy the API key when it’s shown; this is the only time you’ll see the full key. Store it securely (see the sketch after this list).

  4. Your new API key will now appear in the list of active keys.
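
To avoid hard-coding the key in scripts, consider storing it in an environment variable and reading it at runtime. A minimal sketch in Python, assuming you export the key as MINIBASE_API_KEY (the variable name is our example, not a Minibase convention):

# Hypothetical: load the API key from an environment variable
# (set beforehand with: export MINIBASE_API_KEY="YOUR_API_KEY")
import os

API_KEY = os.environ["MINIBASE_API_KEY"]  # raises KeyError if the variable is unset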


Step 2: Make your first inference request

Every inference request is sent to the Minibase API endpoint with:

  • Your API key in the Authorization header.

  • The model ID you want to use.

  • A prompt and inference parameters (e.g., max_tokens, temperature).

Endpoint:

https://staging.minibase.ai/api.php

Example model ID:

micro_base_t11_1755574721
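
Put together, an inference call is an ordinary form-encoded POST. Schematically (abbreviated, and assuming the form encoding shown in the examples below):

POST /api.php HTTP/1.1
Host: staging.minibase.ai
Authorization: Bearer YOUR_API_KEY
Content-Type: application/x-www-form-urlencoded

action=mt_inference&model_id=micro_base_t11_1755574721&prompt=...&max_tokens=128&temperature=0.7&format=json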

Details

cURL example

# Use API key authentication via Authorization header.
API_KEY="YOUR_API_KEY"
BASE="https://staging.minibase.ai/api.php"

# Call inference directly with API key
curl -s -X POST "$BASE" \
  -H "Authorization: Bearer $API_KEY" \
  --data-urlencode "action=mt_inference" \
  --data-urlencode "model_id=micro_base_t11_1755574721" \
  --data-urlencode "prompt=Explain quantum entanglement in one sentence." \
  --data-urlencode "max_tokens=128" \
  --data-urlencode "temperature=0.7" \
  --data-urlencode "format=json"

Python example

# Use API key authentication via Authorization header.
import requests

API_KEY = "YOUR_API_KEY"
BASE = "https://staging.minibase.ai/api.php"

# Call inference directly with API key
r = requests.post(
    BASE,
    headers={'Authorization': f'Bearer {API_KEY}'},
    data={
        'action': 'mt_inference',
        'model_id': 'micro_base_t11_1755574721',
        'prompt': 'Explain quantum entanglement in one sentence.',
        'max_tokens': '128',
        'temperature': '0.7',
        'format': 'json',
    },
)
print(r.json())
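
If the key is invalid or revoked, the server returns an HTTP error (e.g., 401 Unauthorized) rather than a result. An optional line you can add before print(r.json()) so the script fails fast instead of trying to parse an error response as JSON:

# Optional: raise an exception on 4xx/5xx responses (e.g., 401 Unauthorized)
# before parsing the body as JSON.
r.raise_for_status()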

JavaScript example

// Use API key authentication via Authorization header.
const API_KEY = "YOUR_API_KEY";
const BASE = 'https://staging.minibase.ai/api.php';

async function infer(prompt) {
  const res = await fetch(BASE, {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${API_KEY}`,
      'Content-Type': 'application/x-www-form-urlencoded'
    },
    body: new URLSearchParams({
      action: 'mt_inference',
      model_id: 'micro_base_t11_1755574721',
      prompt,
      max_tokens: '128',
      temperature: '0.7',
      format: 'json'
    })
  });
  const data = await res.json();
  console.log(data);
}

// Usage:
// await infer('Explain quantum entanglement in one sentence.');

Tips & Best Practices

  • Copy your API key once: You’ll only see the full key at creation time. Store it somewhere secure (e.g., secret manager).

  • Rotate keys: If you suspect a key has leaked, revoke it immediately and create a new one.

  • Use separate keys per environment: For example, create one for “Development” and one for “Production” to simplify debugging and auditing (see the sketch after this list).

  • Test with cURL first: This makes it easy to confirm your API key and endpoint work before integrating into an application.
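
One way to implement the per-environment tip above, as a hedged sketch in Python (the APP_ENV convention and variable names are our own, not a Minibase requirement):

# Hypothetical: choose an API key based on the current environment.
import os

env = os.environ.get("APP_ENV", "development")           # e.g., "development" or "production"
API_KEY = os.environ[f"MINIBASE_API_KEY_{env.upper()}"]  # e.g., MINIBASE_API_KEY_PRODUCTION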


Troubleshooting

Q: My request returns 401 Unauthorized.

A: Double-check that your Authorization header is set correctly:

-H "Authorization: Bearer YOUR_API_KEY"

Also confirm that the key is active and not revoked.

Q: My request returns an error about model_id.

A: Ensure the model_id matches exactly what’s listed on your model’s page (e.g., micro_base_t11_1755574721).

Q: Why do I see “Invalid action”?

A: Make sure you are using action=mt_inference in your request body.

Q: Why does it say I need to deploy?

A: Before you can run inference on a model, you must first deploy it by selecting “Deploy” on the models page. If the model has not been deployed, inference requests will be rejected.


Need More Help?

Join our Discord support server to chat with our team and get real-time assistance.
