evallm: interactive evaluation of large language model prompts on user-defined criteria
Published 3 months ago • 138 plays • Length 11:40
Similar videos
- 0:31 · evallm: interactive evaluation of large language model prompts on user-defined criteria
- 15:02 · llmr: real-time prompting of interactive worlds using large language models
- 2:57 · llm comparator: visual analytics for side-by-side evaluation of large language models
- 3:01 · [de4903] towards personalized evaluation of large language models with an anonymous crowd-sourcing p
- 18:22 · query, key and value matrix for attention mechanisms in large language models
- 14:20 · large language models can self-improve at web agent tasks
- 7:08 · re-evaluate your language learning methods
- 6:46 · language model evaluation and perplexity
- 2:50 · evaluation approaches for your llm (large language model): insights from microsoft & langchain
- 0:31 · sensecape: enabling multilevel exploration and sensemaking with large language models
- 0:31 · generating automatic feedback on ui mockups with large language models
- 22:14 · giraffe: adventures in expanding context lengths in llms
- 1:43 · generative ai weekly research highlights | july 3 - july 9
- 1:50 · [short] long-form factuality in large language models
- 2:03 · [w4g0150] item-side fairness of large language model-based recommendation system
- 2:40 · [short] moe-llava: mixture of experts for large vision-language models
- 7:00 · large language models as tool makers
- 3:04 · ep-alm: efficient perceptual augmentation of language models
- 1:35 · [short] human alignment of large language models through online preference optimisation