qqWen-72B-RL: Reasoning-Enhanced Q Programming Language Model

Model Overview

qqWen-72B-RL is a 72-billion-parameter reasoning model designed for advanced reasoning and code generation in the Q programming language. Built on the robust Qwen 2.5 architecture, it has undergone a two-stage training process for Q: pretraining followed by reinforcement learning (RL).

This model differs from our previous models in that its pretraining accuracy is high enough to skip the supervised fine-tuning (SFT) phase and proceed directly to RL, with the hope that this biases the model less toward "pythonic" Q.

Associated Technical Report: https://arxiv.org/abs/2508.06813

πŸ”€ About Q Programming Language

Q is a high-performance, vector-oriented programming language developed by Kx Systems, primarily used in:

  • Financial Markets: High-frequency trading, risk management, and market data analysis
  • Time-Series Analytics: Real-time processing of large-scale temporal data
  • Data Science: Efficient manipulation of large datasets with concise syntax
  • Quantitative Research: Mathematical modeling and statistical analysis

Key Q Language Features:

  • Vector Operations: Built-in support for element-wise operations on arrays
  • Functional Programming: First-class functions and powerful combinators
  • Memory Efficiency: Optimized for handling large datasets in minimal memory
  • Speed: Exceptional performance for numerical computations
  • Concise Syntax: Expressive code that can accomplish complex tasks in few lines
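To make these features concrete, here is a short illustrative sketch of idiomatic Q (example snippets for orientation only, not output from the model):

```q
/ vector operations: arithmetic applies element-wise to whole arrays
nums: til 10            / 0 1 2 3 4 5 6 7 8 9
2 * nums                / 0 2 4 6 8 10 12 14 16 18

/ functional programming: first-class functions and combinators
sq: {x*x}               / anonymous function (lambda)
sq each til 5           / 0 1 4 9 16
sum sq each til 5       / 30

/ concise syntax: define a table and query it, one line each
t: ([] sym:`a`b`a`b; px:1.0 2.0 3.0 4.0)
select avg px by sym from t
```

The final qSQL query groups rows by `sym` and averages `px` per group, the kind of one-line aggregation that makes Q well suited to market-data analysis.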

πŸ“ Citation

If you use this model in your research or applications, please cite our technical report.

@misc{hogan2025technicalreportfullstackfinetuning,
      title={Technical Report: Full-Stack Fine-Tuning for the Q Programming Language}, 
      author={Brendan R. Hogan and Will Brown and Adel Boyarsky and Anderson Schneider and Yuriy Nevmyvaka},
      year={2025},
      eprint={2508.06813},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2508.06813}, 
}
Model Details

  • Model size: 73B params (Safetensors, F32)
  • Base model: Qwen/Qwen2.5-72B (fine-tuned)
  • Quantizations: 2 models available