
Qwen3.5-2B Fine-tuned

Role: ML Engineer
Year: 2025
Tech: Unsloth, Transformers, TRL

The Challenge

Create a fine-tuned version of Qwen3.5-2B that delivers high-quality conversational AI capabilities while training faster and more efficiently than standard fine-tuning workflows.

Solution

I used Unsloth, a fine-tuning framework that delivers roughly 2x faster training than standard Transformers-based workflows, paired with TRL for supervised fine-tuning. The model was trained on custom datasets to enhance its conversational abilities and domain-specific knowledge.
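Conversational fine-tuning starts with rendering chat records into the model's template. Qwen-family chat models use ChatML-style `<|im_start|>`/`<|im_end|>` delimiters; the sketch below shows how records might be flattened into training text. The record layout and helper name are illustrative assumptions, not the actual dataset used here.

```python
# Minimal sketch: flatten chat records into ChatML-style training text.
# The {"role": ..., "content": ...} record layout is an assumption about
# the custom dataset; in practice tokenizer.apply_chat_template handles this.

def format_chatml(messages):
    """Render a list of {role, content} dicts as one ChatML string."""
    parts = []
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>")
    return "\n".join(parts)

example = [
    {"role": "user", "content": "What is Unsloth?"},
    {"role": "assistant", "content": "A library for faster LLM fine-tuning."},
]
print(format_chatml(example).splitlines()[0])  # <|im_start|>user
```

In a real pipeline this formatted text feeds TRL's supervised fine-tuning trainer, which masks loss appropriately over the prompt tokens.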

Technical Details

  • Base Model: Qwen/Qwen3.5-2B
  • Training Framework: Unsloth (2x faster training)
  • Parameters: 2B
  • Quantization: Q8_0 (8-bit)
  • Model Size: 2.01 GB
  • License: Apache 2.0

Key Features

  • Optimized for conversational text generation
  • Supports GGUF format for efficient inference
  • Compatible with Transformers library
  • Runs on modest hardware thanks to 8-bit quantization
  • Ready for deployment with text-generation-inference
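One practical perk of the GGUF format is that files are trivial to validate before deployment: every GGUF file begins with the 4-byte magic `GGUF`. A minimal check (the filename shown is hypothetical):

```python
GGUF_MAGIC = b"GGUF"  # first four bytes of every GGUF file, per the spec

def looks_like_gguf(path):
    """Return True if the file starts with the GGUF magic bytes."""
    with open(path, "rb") as f:
        return f.read(4) == GGUF_MAGIC

# Usage (hypothetical filename):
# looks_like_gguf("qwen3.5-2b-finetuned.Q8_0.gguf")
```

Runtimes such as llama.cpp perform the same check when loading, so this catches truncated or mislabeled downloads early.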

Results

The fine-tuned model achieves improved conversational performance while being significantly more efficient to train. It is available on Hugging Face in multiple quantization formats for flexible deployment.

Interested in AI/ML projects?

Let's discuss how I can help with your machine learning needs.

Let's build something refreshing.