Get up and running with your first Minibase model in 5 minutes.
## Prerequisites
- Ollama installed, with the server running (`ollama serve`)
- A trained model on minibase.ai
## Step 1: Get Your Model ID
1. In your minibase.ai dashboard, click on a trained model
2. Copy the Model ID (it looks like `my_model_name_1234567890_abcd1234`)
## Step 2: Download the Model
```bash
ollama pull YOUR_MODEL_ID
```
Example:
```bash
ollama pull toxic_classifier_1760421525_9afae726
```
You'll see download progress. The download only happens once; after that, the model is cached locally.
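If you'd rather script the download, recent Ollama versions expose the same pull through the standard REST API (older versions use `name` instead of `model` in the body):

```bash
# Pull the model through the Ollama REST API instead of the CLI
curl http://localhost:11434/api/pull -d '{
  "model": "toxic_classifier_1760421525_9afae726"
}'
```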
## Step 3: Run Inference
### Basic Usage
```bash
echo "input text" | ollama run YOUR_MODEL_ID
```
Example:
```bash
echo "screw you" | ollama run toxic_classifier_1760421525_9afae726
# Output: "You're being rude."
```
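For scripts and applications, the same inference is available over Ollama's standard REST API; the model's prompt template is applied exactly as in the CLI:

```bash
# Same inference via the Ollama REST API (stream disabled for a single JSON reply)
curl http://localhost:11434/api/generate -d '{
  "model": "toxic_classifier_1760421525_9afae726",
  "prompt": "screw you",
  "stream": false
}'
```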
### Interactive Mode
```bash
ollama run YOUR_MODEL_ID
```
Type your input, press Enter. Type `/bye` to exit.
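A session looks like this (the reply shown is from the toxicity-classifier example above):

```bash
ollama run toxic_classifier_1760421525_9afae726
>>> screw you
You're being rude.
>>> /bye
```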
### With Custom Instruction
`ollama run` has no flag for passing a custom instruction, but in interactive mode you can replace the system prompt, which carries the default instruction (see How It Works below):

```bash
ollama run YOUR_MODEL_ID
>>> /set system "Translate to Spanish"
>>> Hello world
Hola mundo
```
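For non-interactive use, the `system` field on Ollama's generate API performs the same override:

```bash
# Override the default instruction for one request via the system field
curl http://localhost:11434/api/generate -d '{
  "model": "YOUR_MODEL_ID",
  "prompt": "Hello world",
  "system": "Translate to Spanish",
  "stream": false
}'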
## Essential Commands
```bash
# List downloaded models
ollama list
# View model details
ollama show YOUR_MODEL_ID
# Delete a model
ollama rm YOUR_MODEL_ID
# Stop server
pkill ollama
```
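Recent Ollama releases also include two commands worth knowing; check `ollama --help` if your version predates them:

```bash
# Show models currently loaded in memory
ollama ps
# Unload a running model without killing the server
ollama stop YOUR_MODEL_ID
```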
## How It Works
- Your model has a default instruction from training
- Input is formatted as: `Instruction: [default]\n\nInput: [your text]\n\nResponse:` (see the sketch below)
- The model generates output based on this formatted prompt
- To override the default instruction, set a new system prompt: `/set system` interactively, or the `system` field in the API
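As a rough sketch of that formatting, here is the templated prompt spelled out in shell; the instruction text is a made-up placeholder, not your model's actual default:

```bash
# Illustrative only: build the prompt the way the template describes.
# "Classify the input as toxic or non-toxic" is a hypothetical default instruction.
printf 'Instruction: %s\n\nInput: %s\n\nResponse:' \
  "Classify the input as toxic or non-toxic" \
  "screw you"
```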
## Process Files
```bash
# Single file
cat input.txt | ollama run YOUR_MODEL_ID > output.txt
# Batch processing
for file in *.txt; do
  cat "$file" | ollama run YOUR_MODEL_ID > "processed_$file"
done
```
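Note that piping a file sends its entire contents as a single input. If you want one prediction per line (typical for a classifier), make one request per line:

```bash
# One inference per line of the input file
while IFS= read -r line; do
  printf '%s\n' "$line" | ollama run YOUR_MODEL_ID
done < inputs.txt > labels.txt
```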
## Troubleshooting
**"Model not found"**
- Check that the Model ID is correct (copy it from minibase.ai)
- Run `ollama pull YOUR_MODEL_ID` first

**"Could not connect to ollama server"**
- Run `ollama serve` in a terminal
- Check that the server is running: `curl http://localhost:11434/api/version`

**Slow first response**
- Normal! The first inference loads the model into memory (~2-5 seconds)
- Subsequent requests are much faster (a preload trick is shown below)
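If the startup delay matters, you can preload the model and keep it resident using an empty generate request with a `keep_alive` value (a standard Ollama API feature):

```bash
# Preload the model and keep it in memory for 30 minutes
curl http://localhost:11434/api/generate -d '{
  "model": "YOUR_MODEL_ID",
  "keep_alive": "30m"
}'
```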
## More Information