Update README.md
README.md CHANGED
@@ -22,6 +22,7 @@ Techniques:
 - Minitron: Compact Language Models via Pruning & Knowledge Distillation
 - DistiLLM: Towards Streamlined Distillation for Large Language Models
 - Quantization
+- KV Cache Compression
 - Fine-Tuning | [GitHub](https://github.com/rsk2327/DistAya/tree/track/fine-tuning)

 Dataset:
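The line this commit adds, KV Cache Compression, refers to shrinking the per-token key/value tensors a transformer caches during autoregressive decoding. As a minimal sketch of the idea (illustrative only, not code from the DistAya repository), the example below combines two common strategies, sliding-window eviction and symmetric int8 quantization; the function names, tensor shapes, and the `keep_last` parameter are all assumptions.

```python
import numpy as np

def compress_kv_cache(keys, values, keep_last=256):
    """Sliding-window eviction plus int8 quantization of a KV cache.

    keys, values: float32 arrays of shape (seq_len, num_heads, head_dim).
    Returns (int8_tensor, scale) pairs covering only the most recent
    `keep_last` positions; older positions are evicted outright.
    """
    keys, values = keys[-keep_last:], values[-keep_last:]  # evict old positions

    def quantize(t):
        # Symmetric per-tensor int8 scale; the floor avoids divide-by-zero.
        scale = max(float(np.abs(t).max()) / 127.0, 1e-8)
        return np.round(t / scale).astype(np.int8), scale

    return quantize(keys), quantize(values)

def dequantize(q, scale):
    """Recover an approximate float32 tensor from its int8 form."""
    return q.astype(np.float32) * scale
```

Production schemes (attention-score-based eviction, per-channel scales) are more involved, but the memory arithmetic is the same: int8 storage quarters the cache footprint and eviction caps its growth with sequence length.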