SOLAR 10.7B: Combining LLMs to Scale Performance - Beats Mixtral, Llama 2, and More!
Published 10 months ago • 6.9K plays • Length 8:21
Similar videos
- 9:32 · SOLAR-10.7B LLM Beats Mixtral MoE Thanks to Model Merging and Depth Up-Scaling (DUS)
- 17:04 · Mistral 7B: The Best Tiny Model Ever! Beats Llama 2 (Installation Tutorial)
- 5:07 · Ollama - Loading Custom Models
- 6:36 · What Is Retrieval-Augmented Generation (RAG)?
- 19:21 · Train Mistral 7B to Outperform Llama 2 70B (Zephyr 7B Alpha)
- 4:51 · How to Use the Llama 2 LLM in Python
- 17:17 · Build a Talking AI with Llama 3 (Python Tutorial)
- 17:51 · I Analyzed My Finances with Local LLMs
- 6:25 · Running Mistral AI on Your Machine with Ollama
- 18:37 · Mistral 7B - Better Than Llama 2? | Getting Started, Prompt Template & Comparison with Llama 2
- 0:17 · Private LLM vs Ollama with Mistral-7B-Instruct-v0.2 Model Performance Comparison
- 8:35 · Zephyr 7B Alpha - A New Recipe for Fine-Tuning
- 0:10 · I Tricked ChatGPT into Thinking 9 + 10 = 21