Amazingly, Alpaca-LoRA can fine-tune LLaMA (7B) in about 20 minutes, with results comparable to Stanford Alpaca.
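Alpaca-LoRA gets its speed from low-rank adaptation (LoRA): the 7B base weights stay frozen, and only small low-rank adapter matrices injected into the attention layers are trained. Below is a minimal sketch of this style of fine-tuning with Hugging Face transformers and peft; the checkpoint name, dataset, and hyperparameters are illustrative assumptions, not the article's exact configuration.

```python
# Minimal LoRA fine-tuning sketch in the style of Alpaca-LoRA.
# Assumptions (not from the article): checkpoint "huggyllama/llama-7b",
# dataset "yahma/alpaca-cleaned", and the hyperparameters below.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "huggyllama/llama-7b"  # assumption: any LLaMA-7B checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA has no pad token by default

model = AutoModelForCausalLM.from_pretrained(
    base_model, torch_dtype=torch.float16, device_map="auto"
)

# LoRA: freeze the base model and train only small rank-r adapters
# added to the query and value projections of each attention block.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the 7B weights

# Alpaca-style instruction data, formatted into a single prompt string.
data = load_dataset("yahma/alpaca-cleaned", split="train")

def tokenize(example):
    text = (
        f"### Instruction:\n{example['instruction']}\n\n"
        f"### Input:\n{example['input']}\n\n"
        f"### Response:\n{example['output']}"
    )
    return tokenizer(text, truncation=True, max_length=512)

data = data.map(tokenize)

trainer = Trainer(
    model=model,
    train_dataset=data,
    args=TrainingArguments(
        per_device_train_batch_size=4,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=3e-4,
        fp16=True,
        output_dir="lora-alpaca",
    ),
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-alpaca")  # saves only the small adapter weights
```

Because only the adapter matrices receive gradients, both the optimizer state and the saved checkpoint are tiny compared with full fine-tuning, which is why a run like this fits on a single consumer GPU and finishes quickly.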
Source: blog.csdn.net/sinat_37574187/article/details/131441733