
Micro-Based Model

Our smallest model, designed for simple and repeatable tasks. What it’s good at, how to structure datasets, and troubleshooting.

Written by Niko McCarty
Updated this week

The Micro-Based model is the smallest model available in Minibase. It is designed for lightweight tasks and optimized for mobile and edge deployment. While it does not provide deep reasoning or long, dynamic conversations, it performs well on simple, repeatable tasks where responses are short and predictable.

Typical use cases include:

  • Classification: labeling text or short inputs with predefined categories.

  • Extraction: pulling out keywords or structured values from text.

  • Simple Q&A: answering direct factual questions with short responses.

  • Instruction Following (basic): handling narrow, repeatable tasks reliably.

This model is best when efficiency and a small memory footprint are more important than conversational depth or creativity.

Model Specs

  • # of Parameters: ~135M

  • Format Type: causal_lm

  • Download Size: ~135 MB

  • Max Sequence Length: 1,024 tokens

  • Context Window: 1,024 tokens

The compact size makes it fast and efficient, ideal for edge devices and resource-limited environments.
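
Note that the 1,024-token context window covers both your prompt and the generated output, so prompts need to leave room for the response. A minimal sketch of that check in Python, assuming a Hugging Face-compatible tokenizer; "your-micro-model" is a placeholder name:

```python
# Minimal sketch: confirm a prompt leaves room for the response inside the
# 1,024-token context window. "your-micro-model" is a placeholder; use the
# tokenizer that matches your downloaded model.
from transformers import AutoTokenizer

CONTEXT_WINDOW = 1024
MAX_NEW_TOKENS = 16  # short outputs are this model's sweet spot

tokenizer = AutoTokenizer.from_pretrained("your-micro-model")  # hypothetical name

prompt = "Classify sentiment as Positive or Negative\nInput: I love this product!"
prompt_tokens = len(tokenizer.encode(prompt))

if prompt_tokens + MAX_NEW_TOKENS > CONTEXT_WINDOW:
    print(f"Too long: {prompt_tokens} prompt tokens leave no room for output")
else:
    print(f"OK: {prompt_tokens} prompt + {MAX_NEW_TOKENS} output tokens fit")
```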

How to Use

Step 1: Choose the model

  • From the Models tab, select the Micro-Based model when creating a new fine-tuned model.

Step 2: Prepare your dataset

  • Use Instruction, Input, and Response fields, keeping examples short and consistent.

  • Example (see the JSON-L sketch after this list):

    • Instruction: “Classify sentiment as Positive or Negative”

    • Input: “I love this product!”

    • Response: “Positive”
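
Since every upload is standardized to JSON-L (see Step 3), it can help to see what examples look like on disk. A minimal Python sketch, assuming lowercase instruction/input/response keys; the exact field names Minibase uses after standardization may differ:

```python
# Minimal sketch: write Instruction/Input/Response examples as JSON-L
# (one JSON object per line). The lowercase key names are an assumption;
# Minibase standardizes uploads, so its exact field names may differ.
import json

examples = [
    {"instruction": "Classify sentiment as Positive or Negative",
     "input": "I love this product!",
     "response": "Positive"},
    {"instruction": "Classify sentiment as Positive or Negative",
     "input": "This broke after two days.",
     "response": "Negative"},
]

with open("sentiment.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```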

Step 3: Train your model

  • Upload in CSV, Excel, JSON, or JSON-L format. All files are standardized to JSON-L.

  • Start fine-tuning; training uses LoRA for efficient adaptation.
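
Minibase runs the fine-tuning for you, so no code is required. For the curious, the sketch below shows what LoRA adaptation looks like in general, using Hugging Face's peft library; this is illustrative rather than Minibase's actual training code, and "your-base-model" is a placeholder:

```python
# Illustrative only: what LoRA adaptation looks like with Hugging Face's
# peft library. Minibase handles training internally; this is not its code,
# and "your-base-model" is a placeholder.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("your-base-model")

lora = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor for the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # module names vary by architecture
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora)
model.print_trainable_parameters()  # only a small fraction of weights train
```

The key idea is that LoRA trains small low-rank update matrices instead of the full weights, which is what keeps fine-tuning fast and cheap even on modest hardware.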

Step 4: Test in the browser

  • On the Model Details page, you can chat with your model directly.

  • Adjust Temperature (randomness) and Max Tokens (length) to evaluate its behavior.
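
If you later script against a downloaded copy of the model, the same two knobs map directly onto standard generation settings. A hedged sketch using Hugging Face transformers, with "your-micro-model" again as a placeholder:

```python
# Sketch: how Temperature and Max Tokens map onto standard generation
# settings when scripting against a downloaded copy of the model.
# "your-micro-model" is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("your-micro-model")
model = AutoModelForCausalLM.from_pretrained("your-micro-model")

prompt = "Classify sentiment as Positive or Negative\nInput: I love this product!\nResponse:"
inputs = tokenizer(prompt, return_tensors="pt")

output = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.2,   # low randomness suits label-style tasks
    max_new_tokens=5,  # a tight cap keeps answers short
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```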

Step 5: Deploy or download

  • Download in GGUF format for local or mobile use (see the run example after this list).

  • Or deploy instantly with Minibase Cloud to access it via APIs.
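
For the GGUF route, one common way to run the file locally is llama-cpp-python; this is just one option among GGUF-compatible runtimes, and the file name below is an example:

```python
# One option for running the downloaded GGUF file locally:
# llama-cpp-python (pip install llama-cpp-python). Any GGUF-compatible
# runtime works; the file name here is an example.
from llama_cpp import Llama

llm = Llama(model_path="micro-model.Q4_K_M.gguf", n_ctx=1024)

result = llm(
    "Classify sentiment as Positive or Negative\nInput: I love this product!\nResponse:",
    max_tokens=5,
    temperature=0.2,
)
print(result["choices"][0]["text"].strip())
```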

Tips & Best Practices

  1. Keep tasks narrow: This model works best when it only has to do one specific job repeatedly.

  2. Use small, clean datasets: Consistency matters more than size for micro tasks.

  3. Short outputs are ideal: Design tasks around labels, keywords, or short responses rather than long-form text.

  4. Quantization options (rough size estimates after this list):

    1. High (Q8_0): best quality, near-lossless.

    2. Medium (Q4_K_M): balanced, recommended for most users.

    3. Low (Q4_K_S): smallest, fastest, optimized for mobile/edge.
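
As a rough rule of thumb, file size scales with bits per weight. A back-of-envelope estimate for a ~135M-parameter model, using approximate bit widths; real GGUF files also include metadata, so treat these as rough guides rather than exact download sizes:

```python
# Back-of-envelope size estimates for a ~135M-parameter model. Bit widths
# are approximate, and real GGUF files add metadata, so these are rough
# guides rather than exact download sizes.
params = 135_000_000

bits_per_weight = {"Q8_0": 8.5, "Q4_K_M": 4.9, "Q4_K_S": 4.6}  # approximate

for name, bits in bits_per_weight.items():
    size_mb = params * bits / 8 / 1_000_000
    print(f"{name}: ~{size_mb:.0f} MB")
```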

Troubleshooting

Responses are inconsistent

Ensure training examples are uniform and tightly scoped. Variability can confuse smaller models.
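
A quick scripted check can catch stray labels before training. A minimal sketch, assuming the JSON-L layout from Step 2:

```python
# Quick consistency check before training: flag any response outside the
# expected label set. Assumes the JSON-L layout sketched in Step 2.
import json

ALLOWED_LABELS = {"Positive", "Negative"}

with open("sentiment.jsonl", encoding="utf-8") as f:
    for line_number, line in enumerate(f, start=1):
        response = json.loads(line)["response"]
        if response not in ALLOWED_LABELS:
            print(f"Line {line_number}: unexpected label {response!r}")
```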

The model doesn’t give long answers

The Micro-Based model isn't designed for long-form answers. Fix: use the Task-Based model or a larger model better suited for instruction following and reasoning.

Training still feels slow

While smaller models train faster, dataset size matters. Try a smaller dataset for testing, then scale up once satisfied with the model's performance.
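
One way to do this is to sample a subset of your JSON-L file for a test run, then train on the full dataset once the results look right. A minimal sketch:

```python
# Sketch: sample a small subset of a JSON-L dataset for a quick test run,
# then train on the full file once the results look right.
import random

with open("sentiment.jsonl", encoding="utf-8") as f:
    lines = f.readlines()

random.seed(0)  # make the subset reproducible
subset = random.sample(lines, k=min(100, len(lines)))

with open("sentiment_sample.jsonl", "w", encoding="utf-8") as f:
    f.writelines(subset)
```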
