RWKV Language Model
RWKV (pronounced as RWaKuV) is an RNN with GPT-level LLM performance, which can also be trained directly like a GPT transformer (parallelizable).
RWKV is an open-source, non-profit group under the Linux Foundation, supported by our sponsors.
It combines the best of RNNs and transformers: great performance, fast inference, fast training, low VRAM usage, "infinite" context length, and free sentence embeddings. Moreover, it is 100% attention-free.
RWKV architecture paper
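The attention-free, constant-memory claim is easiest to see in code. Below is a minimal, numerically naive numpy sketch in the spirit of the RWKV-4 WKV recurrence; the variable names and toy inputs are illustrative only, and the real implementation in RWKV-LM uses a numerically stable (max-subtraction) formulation:

```python
import numpy as np

def wkv_recurrence(k, v, w, u):
    """Toy sketch of an RWKV-4 style WKV recurrence (not numerically stable).

    k, v : (T, D) key/value sequences for one layer
    w, u : (D,) learned per-channel decay and current-token "bonus"
    Returns the (T, D) weighted outputs.
    """
    T, D = k.shape
    num = np.zeros(D)          # running decayed sum of exp(k_i) * v_i
    den = np.zeros(D)          # running decayed sum of exp(k_i)
    out = np.zeros((T, D))
    for t in range(T):
        # the current token gets an extra "bonus" weight exp(u + k_t)
        cur = np.exp(u + k[t])
        out[t] = (num + cur * v[t]) / (den + cur)
        # fold token t into the fixed-size state; older entries decay by exp(-w)
        num = np.exp(-w) * num + np.exp(k[t]) * v[t]
        den = np.exp(-w) * den + np.exp(k[t])
    return out

# toy usage: the state (num, den) stays O(D) no matter how long the sequence is
out = wkv_recurrence(np.random.randn(16, 8), np.random.randn(16, 8),
                     w=np.ones(8) * 0.5, u=np.zeros(8))
```

Because the state has a fixed size, generating the next token costs the same whether the context so far is 100 tokens or 100,000, which is where the "infinite" context length and low-VRAM inference claims come from.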
Current Version Status
Version | v4 - Raven | v4 - Dove | v5 - Eagle | v6 - Finch |
---|---|---|---|---|
Paper | 🎓Paper Accepted @ EMNLP 2023 | (no architecture change) | 🔧 stable | 🔧 stable |
Overall Status | 🌚 EOL - Recommended to use v6 instead | 🌚 EOL - Recommended to use v6 instead | ✅ General Availability | ✅ General Availability |
0.4B model | Fully Trained : rwkv-pile-430m | Fully Trained | ✅ Fully Trained | ... |
1.5B model | Fully Trained : rwkv-raven-1b5 | Fully Trained | ✅ Fully Trained | ✅ Fully Trained |
3B model | Fully Trained : rwkv-raven-3b | Fully Trained | ✅ Fully Trained | ✅ Fully Trained |
7B model | Fully Trained : rwkv-raven-7b | Fully Trained | ✅ Fully Trained | ✅ Fully Trained |
14B model / 7B 2T model | Fully Trained : rwkv-raven-14b | not-planned | not-planned | ✅ Fully Trained |
8x7B MoE model | not-planned | not-planned | scheduled | ... |
TL;DR vs existing transformer models
Good
- Lower resource usage (VRAM, CPU, GPU, etc.) when running and training.
- 10x to 100x lower compute requirements than transformers at large context sizes.
- Scales linearly with context length (transformer attention scales quadratically; see the sketch after this list).
- Performs just as well in terms of answer quality and capability.
- RWKV models are generally better trained on other languages (e.g. Chinese, Japanese) than most existing OSS models.
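As a rough illustration of the linear-vs-quadratic point above, the toy estimate below counts only the attention / state-update term; the constants are made-up placeholders, and real costs also include the MLP and depend on kernels and hardware:

```python
# Back-of-envelope growth of cost with context length T (illustrative numbers only).
def attention_term_flops(T, d=4096):
    # full self-attention mixes every position with all T positions: ~O(T^2 * d)
    return 2 * T * T * d

def recurrence_term_flops(T, d=4096):
    # a fixed-size recurrent state is updated once per token: ~O(T * d)
    return 2 * T * d

for T in (2_048, 32_768, 1_000_000):
    print(f"T={T:>9,}  attention-term ~{attention_term_flops(T):.2e} FLOPs  "
          f"recurrence-term ~{recurrence_term_flops(T):.2e} FLOPs")
```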
Bad
- Is sensitive to prompt formatting; you may need to change how you prompt the model.
- Is weaker at tasks that require lookback, so reorder your prompt accordingly (see the example after this list).
- (e.g. instead of saying "For the document above, do X", which requires a lookback, say "For the document below, do X")
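For example, a hypothetical prompt reordering (the exact template depends on the model and your pipeline):

```python
# Illustrative only: state the task before the document, so the model never has
# to "look back" at text it has already consumed.
document = "..."  # your long input text

# weaker for RWKV: the instruction arrives after the document
prompt_lookback = f"{document}\n\nFor the document above, summarise the key points."

# better for RWKV: the task comes first, then the document is streamed in
prompt_forward = f"For the document below, summarise the key points.\n\nDocument:\n{document}"
```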
Who sponsors the compute for RWKV?
RWKV is made possible, as an open-source project, thanks to the large amount of GPU compute and researcher time contributed by our sponsors.
Without their invaluable support, we would not have been able to develop the core RWKV foundation models that you see today.
In addition, we would like to thank:
- alpin @ pygmalionAI
- AutoMeta @ AlignmentLab
- Recursal.AI
- Various other folks who donated slices of GPU time / preferred not to be named
For helping with GPU time on smaller experiments, finetunes, and various models, especially those models that never get publicly released due to failed runs.
Quick RWKV community terminology
- RWKV - The model architecture itself, code found at https://github.com/BlinkDL/RWKV-LM
- RWKV World - New base model trained on a larger, more diverse mix of datasets, including samples from over 100 languages. Partially instruction-trained.
- Raven - Official finetuned version of the base model, with instruction training
- Base model / Pile Plus Model - The RWKV base model is currently trained on "The Pile", with an additional mix of other datasets. This model is not instruction-trained.
Which RWKV models should I be using?
- For the majority of use cases, you should be using the pretrained, finetuned 7B World model (see the loading sketch after this list).
- On a case-by-case basis, you may find the older (smaller dataset) but larger Raven model to be better on certain specific benchmarks. When the 14B World model is ready, it is expected to replace the Raven model in all use cases.
- If you want to finetune a model for a very specific use case, without any existing instruction tuning, you may find the Pile model more useful (this is rare; in most use cases it is better to finetune the World or Raven model).
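As a minimal sketch of running a World model with the official `rwkv` pip package (the checkpoint path, strategy string, and sampling settings below are placeholders; check the RWKV-LM / ChatRWKV repos for the exact names for your checkpoint):

```python
# pip install rwkv   (BlinkDL's official inference package)
import os
os.environ["RWKV_JIT_ON"] = "1"   # enable the TorchScript path
os.environ["RWKV_CUDA_ON"] = "0"  # set to "1" to compile the custom CUDA kernel

from rwkv.model import RWKV
from rwkv.utils import PIPELINE, PIPELINE_ARGS

# Placeholder path to a downloaded World checkpoint, plus a device/precision
# "strategy" string such as "cuda fp16" or "cpu fp32".
model = RWKV(model="path/to/RWKV-World-7B-checkpoint", strategy="cuda fp16")

# World models use the RWKV world vocabulary rather than the 20B tokenizer.
pipeline = PIPELINE(model, "rwkv_vocab_v20230424")

args = PIPELINE_ARGS(temperature=1.0, top_p=0.7)
print(pipeline.generate("Instruction: ...\n\nResponse:", token_count=200, args=args))
```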