# ZeroXClem/Qwen-4B-Valiant-Polaris

## Overview
ZeroXClem/Qwen-4B-Valiant-Polaris is a thoughtfully blended model crafted using Model Stock merging via MergeKit. It fuses the structured reasoning of Polaris, the creative expressiveness of Dot-Goat and RP-V3, and the scientific depth of ShiningValiant3 into a powerful 4B architecture built atop the official Qwen/Qwen3-4B.
Designed for enhanced reasoning, uncensored creativity, deep roleplay, and advanced agentic performance, this model is both lightweight and intellectually formidable.
## Merge Details
- **Merge Method:** model_stock
- **Base Model:** Qwen/Qwen3-4B
- **Dtype:** bfloat16
- **int8_mask:** true
- **normalize:** false
- **Tokenizer Source:** Qwen/Qwen3-4B
### Merge Configuration

```yaml
models:
  - model: bunnycore/Qwen3-4B-Dot-Goat
  - model: bunnycore/Qwen3-4B-RP-V3
  - model: POLARIS-Project/Polaris-4B-Preview
  - model: ValiantLabs/Qwen3-4B-ShiningValiant3
  - model: Qwen/Qwen3-4B
merge_method: model_stock
base_model: Qwen/Qwen3-4B
normalize: false
int8_mask: true
dtype: bfloat16
tokenizer_source: Qwen/Qwen3-4B
```
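Conceptually, the model_stock method interpolates each tensor between the base model and the average of the fine-tuned checkpoints, with a ratio derived from the angle between the fine-tuned weight deviations (following the Model Stock paper's formulation). The sketch below is illustrative only, assuming that formulation; the helper name and toy arrays are hypothetical and this is not MergeKit's actual API:

```python
import numpy as np

def model_stock_layer(base: np.ndarray, finetuned: list[np.ndarray]) -> np.ndarray:
    """Per-tensor sketch of Model Stock merging: move from the base weight
    toward the average of the fine-tuned weights by a ratio t derived from
    the average angle between the fine-tuned deviation vectors."""
    k = len(finetuned)
    deltas = [w - base for w in finetuned]
    # Average pairwise cosine similarity between the deviations from the base.
    cosines = [
        float(np.dot(a.ravel(), b.ravel()) / (np.linalg.norm(a) * np.linalg.norm(b)))
        for i, a in enumerate(deltas)
        for b in deltas[i + 1:]
    ]
    cos_theta = float(np.mean(cosines))
    # Interpolation ratio from the Model Stock paper: t = k*cos / (1 + (k-1)*cos).
    t = k * cos_theta / (1 + (k - 1) * cos_theta)
    w_avg = np.mean(finetuned, axis=0)
    return t * w_avg + (1 - t) * base

# Toy example: orthogonal deviations (cos = 0) keep the merged tensor at the base.
base = np.zeros(4)
merged = model_stock_layer(base, [np.array([1.0, 0, 0, 0]), np.array([0, 1.0, 0, 0])])
```

The intuition: the more the fine-tuned models agree in direction (higher cosine), the further the merge moves from the base toward their average.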
## Models Merged
### bunnycore/Qwen3-4B-Dot-Goat
Uncensored, multi-domain, LoRA-infused Qwen model focusing on creativity, tool-use, and deep chat alignment.
### bunnycore/Qwen3-4B-RP-V3
Character-rich roleplay personality fusion from the amoral, mixture-of-thought, and SuperbEmphasis trees.
### POLARIS-Project/Polaris-4B-Preview
Post-trained with advanced reinforcement learning on reasoning-heavy datasets; reported to surpass Claude Opus and Grok on math and logic benchmarks.
### ValiantLabs/Qwen3-4B-ShiningValiant3
Expertly aligned to scientific reasoning, agentic workflows, and multi-domain creative logic.
### Qwen/Qwen3-4B
Official pretrained Qwen3 model with support for thinking / non-thinking modes, multilingual reasoning, and tool-calling capabilities.
## Features & Highlights
- **Advanced Reasoning:** Polaris post-training brings state-of-the-art performance in chain-of-thought, math, and symbolic logic.
- **Roleplay & Uncensored Expressiveness:** RP-V3 and Dot-Goat contribute dynamic personas and emotion-rich conversational modeling.
- **Scientific & Engineering Alignment:** ShiningValiant3 ensures excellent handling of complex scientific and analytical queries.
- **Multimodal-Friendly & Tool-Aware:** Qwen's native agentic design enables external tool use and seamless task execution.
- **Lightweight Excellence:** At just 4B parameters, the model performs impressively for its size, with long context (32k+) and efficient inference.
## Use Cases

- Conversational & RP Agents
- Scientific Reasoning & Educational Tutoring
- Advanced Math & Logic Problem Solving
- Creative Writing & Storyworld Simulation
- Tool-Integrated Autonomous Agents
## Usage Instructions
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ZeroXClem/Qwen-4B-Valiant-Polaris"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

prompt = "Solve: What is the smallest prime greater than 100?"
messages = [{"role": "user", "content": prompt}]

# enable_thinking=True lets the model emit its reasoning before the answer.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True
)

inputs = tokenizer([text], return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
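With `enable_thinking=True`, Qwen3-family models emit their reasoning inside `<think>…</think>` tags ahead of the final answer. A minimal, hypothetical helper for separating the two from decoded text (plain string handling, not part of the transformers API; the demo string stands in for real model output):

```python
def split_thinking(decoded: str) -> tuple[str, str]:
    """Separate <think>...</think> reasoning from the final answer text."""
    open_tag, close_tag = "<think>", "</think>"
    if close_tag in decoded:
        thinking, _, answer = decoded.partition(close_tag)
        return thinking.replace(open_tag, "").strip(), answer.strip()
    return "", decoded.strip()  # no thinking block present

# Illustrative decoded output, not a captured model response.
demo = ("<think>101 is prime: not divisible by 2, 3, 5, or 7.</think>"
        "The smallest prime greater than 100 is 101.")
reasoning, answer = split_thinking(demo)
```

Keeping the reasoning separate is useful when you only want to display or log the final answer.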
## Alignment & Ethics
- **Unfiltered Behavior:** Some sub-models are uncensored and may produce unmoderated content. Please implement safety layers when deploying in public-facing apps.
- **Responsible Use:** Outputs are governed by their inputs. Always review critical output for bias, hallucination, or ethical misalignment.
- **License:** Apache 2.0, additionally governed by the respective base model licenses (see the individual repos).
## Feedback & Contributions

Got thoughts, benchmarks, or new merge suggestions? We'd love to hear from you! Feel free to:

- Submit issues or pull requests
- Tag us in your Hugging Face projects
- Join the discussion around merging and alignment at @ZeroXClem on HF and GitHub!
ZeroXClem Team | 2025