Add library_name, link to code and paper

#5
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +7 -2
README.md CHANGED
@@ -1,14 +1,17 @@
  ---
+ language:
+ - en
  license: other
+ library_name: transformers
  license_name: qwen-research
  license_link: https://huggingface.co/Qwen/Qwen2.5-3B/blob/main/LICENSE
- language:
- - en
  pipeline_tag: text-generation
  ---

  # Qwen2.5-3B

+ This repository contains the 3B Qwen2.5 checkpoint described in the paper [Making LLMs Better Many-to-Many Speech-to-Text Translators with Curriculum Learning](https://huggingface.co/papers/2409.19510).
+
  ## Introduction

  Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
@@ -47,6 +50,8 @@ Detailed evaluation results are reported in this [📑 blog](https://qwenlm.gith
 
  For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
 
+ The Github repository for the paper is https://github.com/yxduir/LLM-SRT
+
  ## Citation

  If you find our work helpful, feel free to give us a cite.
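After this PR, the model card's YAML frontmatter carries five metadata keys (`language`, `license`, `library_name`, `license_name`/`license_link`, `pipeline_tag`). A minimal sketch of the merged frontmatter and a hand-rolled check of it; the tiny parser below handles only this flat key/value-plus-list shape and is an illustration, not a full YAML parser:

```python
# Merged README frontmatter after this PR (reconstructed from the diff above).
FRONTMATTER = """\
---
language:
- en
license: other
library_name: transformers
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen2.5-3B/blob/main/LICENSE
pipeline_tag: text-generation
---
"""

def parse_frontmatter(text):
    """Parse the flat 'key: value' and single-level list lines above.

    Not a general YAML parser -- just enough to inspect this card's metadata.
    """
    meta, current_key = {}, None
    for line in text.splitlines():
        if line == "---":
            continue
        if line.startswith("- ") and current_key:
            # List item belonging to the most recent key (e.g. language).
            meta.setdefault(current_key, []).append(line[2:])
        else:
            key, _, value = line.partition(":")
            current_key = key
            value = value.strip()
            meta[key] = value if value else []
    return meta

meta = parse_frontmatter(FRONTMATTER)
print(meta["library_name"])  # -> transformers
```

The `library_name: transformers` field is what tells the Hub which library's loading snippet to show on the model page; the `language` block was moved to the top of the frontmatter rather than removed.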