Democratizing Foundation Models via k-bit Quantization - Tim Dettmers | Stanford MLSys #82
Streamed 8 months ago • 3.4K plays • Length 58:25
Similar videos:

- Foundation Models on Consumer Devices - Tianqi Chen | Stanford MLSys #85 (47:35)
- ML for ML Compilers - Mangpo Phothilimthana | Stanford MLSys #80 (58:07)
- Training LLMs at Scale - Deepak Narayanan | Stanford MLSys #83 (55:59)
- A Taxonomy of ML for Systems Problems - Martin Maas | Stanford MLSys #81 (58:29)
- QLoRA: Efficient Finetuning of Quantized LLMs | Tim Dettmers (30:48)
- Lecture 05 - Quantization (Part I) | MIT 6.S965 (1:11:43)
- Stanford CS229 Machine Learning | Model-Based RL, Value Function Approximator | 2022 | Lecture 20 (1:20:22)
- Tim Dettmers | QLoRA: Efficient Finetuning of Quantized Large Language Models (1:01:53)
- Tim Dettmers - k-bit Inference Scaling Laws (6:46)
- Multimodal Reasoning: PaLM-E & Gemini - Aakanksha Chowdhery | Stanford MLSys #90 (52:56)
- 8-bit Methods for Efficient Deep Learning with Tim Dettmers (58:41)
- Stanford Webinar - Democratizing Model Discovery with Neural Networks (29:11)
- A Data-Centric View on Reliable Generalization - Ludwig Schmidt | Stanford MLSys #71 (58:41)
- Large Language Models for Health 101 (16:45)