778: Mixtral 8x22B: SOTA Open-Source LLM Capabilities at a Fraction of the Compute — with Jon Krohn
Published 2 months ago • 436 plays • Length 6:31
Similar videos
- The Mixture of Experts Approach: A Promising Way for Creating Smaller LLMs (0:57)
- RLHF Reimagined (0:56)
- New AI Mixtral 8x7B Beats Llama 2 and GPT-3.5 (8:16)
- Mistral 8x7B Part 1: So What Is a Mixture of Experts Model? (12:33)
- Open- vs. Closed-Source LLMs: Which Is Better? (10:19)
- 684: Get More Language Context Out of Your LLM — with Jon Krohn (@jonkrohnlearns) (5:37)
- Vicuña: How the Revolutionary LLM Came to Be (5:06)
- Are LLMs the Future or Something Else? (Large Language Models) (3:27)
- 702: Llama 2 — It's Time to Upgrade Your Open-Source LLM — with Jon Krohn (@jonkrohnlearns) (10:44)
- Llama 2: Behind the Scenes of Today's Top Open-Source LLM (9:55)
- Resolving the LLM User-Privacy Issue: How to Train Generative A.I. While Keeping Data Confidential (21:02)
- Galactica: The Science-Specific LLM and Why It Was Brought Down (4:20)
- The Trainium Ultra Cluster (for Large Language Models) (2:55)
- Toolformer: LLM That Learns How to Use External Tools (7:25)
- LitGPT: The LLM Library for Minimalists (5:05)
- How Lightning AI Makes LLMs Easy (5:35)
- 713: Llama 2, Toolformer and BLOOM: Open-Source LLMs — with Meta's Dr. Thomas Scialom (1:23:38)