Written by Michael McCarty
Updated over a month ago

# Troubleshooting Guide


Common issues and quick solutions for Minibase Ollama.


## Installation Issues


### "Command not found: ollama"


**Solution:**

```bash

# Add to PATH for the current shell

export PATH="$HOME/.minibase/bin:$PATH"

# Persist it for new shells (zsh shown; use ~/.bashrc for bash)

echo 'export PATH="$HOME/.minibase/bin:$PATH"' >> ~/.zshrc

source ~/.zshrc

```


### "Permission denied"


**Solution:**

```bash

chmod +x ~/.minibase/bin/ollama

```


### macOS Security Warning


**Solution:**

1. System Settings → Privacy & Security

2. Click "Allow Anyway"

3. Run ollama again, click "Open"
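If the warning keeps reappearing, clearing the quarantine attribute from the binary also works. This is a sketch that assumes the default install path from above and guards for non-macOS systems:

```bash
# Remove the Gatekeeper quarantine flag (macOS only)
BIN="$HOME/.minibase/bin/ollama"

if command -v xattr >/dev/null 2>&1; then
  # -d may warn if the attribute is already absent; that is harmless
  xattr -d com.apple.quarantine "$BIN" 2>/dev/null || true
  echo "quarantine flag cleared (if present)"
else
  echo "xattr not available (not macOS?)"
fi
```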


## Connection Issues


### "Could not connect to ollama server"


**Solution:**

```bash

# Start the server

ollama serve

```


### Port Already in Use


**Solution:**

```bash

# Use different port

export OLLAMA_HOST="127.0.0.1:11435"

ollama serve

```
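To find out what is already holding the default port (11434), a quick check. `lsof` ships with macOS; on Linux it may need installing, and `ss -ltnp` is an alternative:

```bash
# Show any process listening on Ollama's default port
lsof -nP -iTCP:11434 -sTCP:LISTEN 2>/dev/null || echo "nothing found listening on port 11434"
```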


## Model Issues


### "Model not found"


**Causes:**

* Model ID incorrect

* Model not pulled yet

* No access to model


**Solution:**

```bash

# Verify model ID from minibase.ai

# Pull the model

ollama pull YOUR_CORRECT_MODEL_ID


# List what's downloaded

ollama list

```


### Download Fails


**Solution:**

* Check internet connection

* Verify API key: `cat ~/.minibase/config.json`

* Re-download Ollama package from minibase.ai

* Check model exists in your account
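The first two checks can be run from the command line. This sketch assumes `curl` is installed and uses the config path from the install steps above:

```bash
# Can we reach the internet / minibase.ai?
curl -fsSI https://minibase.ai >/dev/null 2>&1 && echo "network OK" || echo "network unreachable"

# Is the API key config in place?
[ -f ~/.minibase/config.json ] && echo "config present" || echo "config missing"
```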


## Performance Issues


### Slow First Response


**Explanation:** This is normal. The first inference loads the model into memory, which typically takes 2-5 seconds.


**Solution:**

```bash

# Keep model loaded longer

export OLLAMA_KEEP_ALIVE=1h

ollama serve

```


### Slow All Responses


**Solutions:**

* Check whether the GPU is being used: `ollama ps` (look for "GPU" in the output)

* Close other applications to free RAM

* Use a smaller quantization (select "low" in Minibase)


### High Memory Usage


**Solution:**

```bash

# Unload models faster

export OLLAMA_KEEP_ALIVE=5m

ollama serve


# Delete unused models

ollama rm OLD_MODEL

```


## Common Errors


### "Unknown capabilities for model"


**Solution:**

* Contact [email protected] with Model ID

* Try re-pulling: `ollama rm MODEL && ollama pull MODEL`


### "Context deadline exceeded"


**Cause:** Network timeout during the download


**Solution:**

* Check internet connection

* Try again (downloads resume automatically)
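If the timeout keeps happening, a small retry loop can help, since interrupted downloads resume where they left off. `YOUR_MODEL_ID` is a placeholder for your actual Model ID:

```bash
# Retry the pull up to 3 times; partial downloads resume automatically
for attempt in 1 2 3; do
  ollama pull YOUR_MODEL_ID && break
  echo "attempt $attempt failed; retrying in 5s..."
  sleep 5
done
```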


## FAQ


### Can I use public Ollama models?


No. Minibase Ollama only works with models from your Minibase account.


### Do I need internet?


* **First time**: Yes, to download model

* **After that**: No, fully offline


### How much disk space do I need?


* Small models (135M): ~100-200 MB

* Medium models (1B): ~500 MB - 2 GB

* Large models (7B+): 4-40 GB


Check usage: `du -sh ~/.ollama/models`


### Can I use multiple models?


Yes! Download multiple models with `ollama pull`. Only one is loaded into memory at a time by default.


### How do I update Minibase Ollama?


1. Download latest from minibase.ai → API Keys

2. Run installer (overwrites binary)

3. Your models and config are preserved


## Still Need Help?


1. Check [Ollama Documentation](https://github.com/ollama/ollama)

2. Email [email protected] with:

   * Operating system

   * `ollama --version` output

   * Complete error message

   * Model ID
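The details listed above can be gathered in one go. This sketch writes them into a single file you can attach to your email (the output filename is arbitrary):

```bash
# Collect diagnostics for a support email
{
  echo "OS: $(uname -a)"
  echo "Version: $(ollama --version 2>/dev/null || echo 'ollama not found')"
  echo "Models:"
  ollama list 2>/dev/null || echo "(server not running)"
} > ollama-diagnostics.txt

cat ollama-diagnostics.txt
```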


## Diagnostic Commands


```bash

# Check version

ollama --version


# Check server

curl http://localhost:11434/api/version


# Check config (redact API key!)

cat ~/.minibase/config.json


# List models

ollama list


# Check what's loaded

ollama ps


# Check disk usage

du -sh ~/.ollama/models

```
