A pure Rust chat bot using Mistral-7B, huggingface/candle, Axum, WebSockets, and Leptos (WASM).
Published 10 months ago • 2.6K plays • Length 4:05
Similar videos
- 6:05 • create safetensors and quantized files for candle chat (pure rust ai chat bot with a wasm frontend).
- 1:59 • pure rust serverless ai chat bot with a wasm frontend hosted statically on github pages.
- 3:43 • config new models, tokenizer.json, and load .pdf and .txt for fireside chat (pure rust ai chat bot).
- 3:31 • how to run local ai on ubuntu without a gpu - linux ai tutorial
- 12:01 • full stack auto coder builds fastapi webapps using o1-mini and claude sonnet
- 15:36 • mistral 7b llm ai leaderboard: the king of the leaderboard? nvidia rtx 3090 vision 24gb throw down!
- 1:03 • addition of leptonic user interface to candle chat frontend, and rest api to change inference args.
- 0:32 • getting mistral-7b into huggingface inference endpoint!
- 4:53 • how to run your own uncensored ai on ubuntu - mistral 7b llm
- 21:19 • mistral moe - better than chatgpt?
- 8:27 • codestral ai tutorial: getting started with mistral coding llm
- 2:57 • rust hugging face candle hello world with cuda
- 22:07 • build an ai job interview prep chatbot using mistral-7b model | langchain | all open source #llm #ai #apps
- 8:14 • mistral-7b with localgpt: chat with your documents
- 1:59 • how to use mistral ai to chat | mistral ai tutorial
- 18:43 • using codestral in vs code locally for free but there is a big problem
- 11:12 • mistral ai: the gen ai start-up you did not know existed
- 24:13 • mistral-7b-instruct multiple-pdf chatbot with langchain & streamlit | free colab | all open source #ai
- 43:39 • how to build a full-stack rag powered smart web searching ai tool using tavily, langchain & mistral
- 6:43 • get started with mistral 7b locally in 6 minutes