chat with any llm (even llama 3.1 405b!), mobile support - copilot for obsidian v2.6.0
Published 4 weeks ago • 2K plays • Length 8:41
Similar videos
- 13:47 • biggest release - chat with your entire obsidian vault offline! claude 3 integration!
- 13:09 • llama 3.2 goes multimodal and to the edge
- 13:55 • how did llama-3 beat models x200 its size?
- 7:35 • llama 3.2: llama goes multimodal! what happened inference code
- 3:00 • meta ai llama 3 explained (in 3 minutes!)
- 15:50 • obsidian for beginners 2024
- 35:07 • llms for advanced question-answering over tabular/csv/sql data (building advanced rag, part 2)
- 10:11 • llama 101
- 15:02 • llama 3 tested!! yes, it's really that great
- 7:21 • finally! open-source "llama code" coding assistant (tutorial)
- 8:15 • llama 3 chatbox and ollama: your ultimate local ai assistant
- 5:39 • boost productivity with free ai in vscode (llama 3 copilot)
- 18:05 • how i built a multiple csv chat app using llama 3, ollama, pandasai | fully local rag #ai #llm
- 12:23 • build anything with llama 3 agents, here's how
- 9:15 • llama 3.2 is here and has vision 👀
- 8:02 • use gpt-4, claude 3 and llama 3 in the same chat
- 8:48 • llama 3 uncensored 🥸 it answers any question
- 10:21 • llama-3.1 (405b, 70b, & 8b) continuedev free copilot! fully local and open source!
- 24:20 • "okay, but i want llama 3 for my specific use case" - here's how
- 21:17 • chat with docs using llama3 & ollama | fully local | ollama rag | chainlit #ai #llm #localllms
- 16:31 • extending llama-3 to 1m tokens - does it impact the performance?
- 21:42 • breaking down meta's billion dollar llm blueprint [llama-3.1 full breakdown]