🧠 ZeroXClem/Qwen-4B-Valiant-Polaris

Overview

ZeroXClem/Qwen-4B-Valiant-Polaris is a thoughtfully blended model crafted using Model Stock merging via MergeKit. It fuses the structured reasoning of Polaris, the creative expressiveness of Dot-Goat and RP-V3, and the scientific depth of ShiningValiant3 into a powerful 4B architecture built atop the official Qwen/Qwen3-4B.

Designed for enhanced reasoning, uncensored creativity, deep roleplay, and advanced agentic performance, this model is both lightweight and intellectually formidable.


🔧 Merge Details

  • Merge Method: model_stock
  • Base Model: Qwen/Qwen3-4B
  • Dtype: bfloat16
  • int8_mask: true
  • normalize: false
  • Tokenizer Source: Qwen/Qwen3-4B

Merge Configuration

models:
  - model: bunnycore/Qwen3-4B-Dot-Goat
  - model: bunnycore/Qwen3-4B-RP-V3
  - model: POLARIS-Project/Polaris-4B-Preview
  - model: ValiantLabs/Qwen3-4B-ShiningValiant3
  - model: Qwen/Qwen3-4B
merge_method: model_stock
base_model: Qwen/Qwen3-4B
normalize: false
int8_mask: true
dtype: bfloat16
tokenizer_source: Qwen/Qwen3-4B
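
To reproduce the merge, the configuration above can be saved to a YAML file and passed to MergeKit. Below is a minimal sketch, assuming mergekit is installed (pip install mergekit); the filename valiant-polaris.yaml and output directory are hypothetical, and exact CLI flags may differ between mergekit versions.

import subprocess

# Run MergeKit's YAML-driven merge via its CLI entry point.
# "valiant-polaris.yaml" is a hypothetical filename containing the config above;
# the output directory name is likewise illustrative.
subprocess.run(
    [
        "mergekit-yaml",
        "valiant-polaris.yaml",
        "./Qwen-4B-Valiant-Polaris",
        "--cuda",  # drop this flag to merge on CPU
    ],
    check=True,
)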

🧬 Models Merged

🐐 bunnycore/Qwen3-4B-Dot-Goat

Uncensored, multi-domain, LoRA-infused Qwen model focusing on creativity, tool-use, and deep chat alignment.

🎭 bunnycore/Qwen3-4B-RP-V3

Character-rich roleplay personality fusion from the amoral, mixture-of-thought, and SuperbEmphasis trees.

🌌 POLARIS-Project/Polaris-4B-Preview

Post-trained with reinforcement learning on reasoning-heavy datasets; reported to surpass much larger models such as Claude Opus and Grok on select math and logic benchmarks.

✨ ValiantLabs/Qwen3-4B-ShiningValiant3

Expertly aligned to scientific reasoning, agentic workflows, and multi-domain creative logic.

🔧 Qwen/Qwen3-4B

The official Qwen3 model, with support for thinking / non-thinking modes, multilingual reasoning, and tool-calling capabilities.


✨ Features & Highlights

🔹 Advanced Reasoning: Polaris post-training brings strong chain-of-thought, math, and symbolic-logic performance.

🔹 Roleplay & Uncensored Expressiveness: RP-V3 and Dot-Goat contribute dynamic personas and emotion-rich conversational modeling.

🔹 Scientific & Engineering Alignment: ShiningValiant3 ensures solid handling of complex scientific and analytical queries.

🔹 Tool-Aware & Agent-Ready: Qwen3's native tool-calling support enables external tool use and seamless task execution (see the sketch after this list).

🔹 Lightweight Excellence: At just 4B parameters, the model performs impressively for its size, with long context (32K native) and efficient inference.
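
Because the tokenizer and chat template are inherited from Qwen/Qwen3-4B, tool definitions can be passed through transformers' apply_chat_template. The following is a minimal sketch: the get_weather schema is hypothetical, and whether the model emits a well-formed tool call depends on the prompt and the inherited template.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ZeroXClem/Qwen-4B-Valiant-Polaris"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

# Hypothetical tool schema in the JSON-schema style accepted by apply_chat_template(tools=...)
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string", "description": "City name"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Tokyo right now?"}]

# The chat template renders the tool schema into the prompt;
# the model may answer with a structured tool call for the application to execute.
text = tokenizer.apply_chat_template(
    messages, tools=tools, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer([text], return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))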


🎯 Use Cases

  • 💬 Conversational & RP Agents
  • 📚 Scientific Reasoning & Educational Tutoring
  • 🔍 Advanced Math & Logic Problem Solving
  • ✍️ Creative Writing & Storyworld Simulation
  • 🧠 Tool-Integrated Autonomous Agents

🚀 Usage Instructions

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ZeroXClem/Qwen-4B-Valiant-Polaris"

# Load the tokenizer and model; device_map="auto" places weights on available GPUs/CPU
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

prompt = "Solve: What is the smallest prime greater than 100?"
messages = [{"role": "user", "content": prompt}]

# Build the chat prompt; enable_thinking=True activates Qwen3's <think> reasoning mode
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True
)

inputs = tokenizer([text], return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the echoed prompt
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
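
When enable_thinking=True, the reply contains a reasoning trace terminated by a </think> token before the final answer. The sketch below, continuing from the example above, separates the two by locating that token in the generated ids; it assumes the tokenizer inherited from Qwen3-4B exposes </think> as a single token. The upstream Qwen3 card also recommends sampled decoding (not greedy) when thinking mode is enabled.

# Continuing from the example above: split the reasoning trace from the final answer.
output_ids = outputs[0][inputs["input_ids"].shape[-1]:].tolist()
end_think_id = tokenizer.convert_tokens_to_ids("</think>")

try:
    # Index just past the last </think> token in the generated sequence
    split = len(output_ids) - output_ids[::-1].index(end_think_id)
except ValueError:
    split = 0  # no thinking section was emitted

thinking = tokenizer.decode(output_ids[:split], skip_special_tokens=True).strip()
answer = tokenizer.decode(output_ids[split:], skip_special_tokens=True).strip()

print("Reasoning trace:\n", thinking)
print("Final answer:\n", answer)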

🧭 Alignment & Ethics

⚠️ Unfiltered Behavior: Some source models are uncensored and may produce unmoderated content. Add moderation or safety layers before deploying in public-facing applications.

⚠️ Responsible Use: Output quality and safety depend heavily on your prompts and context. Always review critical outputs for bias, hallucination, or ethical misalignment.

📜 License: Apache 2.0, subject to the licenses of the respective source models (see the individual repos).


💌 Feedback & Contributions

Got thoughts, benchmarks, or new merge suggestions? We'd love to hear from you! Feel free to:

  • Submit issues or pull requests 💡
  • Tag us in your Hugging Face projects ❤️
  • Join the discussion around merging and alignment at @ZeroXClem on Hugging Face and GitHub!

ZeroXClem Team | 2025 ✨
