Lightning Talk: Accelerating Inference on CPU with torch.compile - Jiong Gong, Intel
Published 1 year ago • 1.3K plays • Length 11:43
Similar videos
- 12:58 • Lightning Talk: Accelerated Inference in PyTorch 2.X with Torch... - George Stefanakis & Dheeraj Peri
- 10:33 • Lightning Talk: Accelerating PyTorch Performance with OpenVINO - Yamini, Devang & Mustafa
- 13:34 • Lightning Talk: The Fastest Path to Production: PyTorch Inference in Python - Mark Saroufim, Meta
- 1:30 • ASPLOS'24 - Lightning Talks - Session 6B - SpecPIM: Accelerating Speculative Inference on PIM-Enable...
- 10:03 • Scaling Inference on CPUs with TorchServe
- 8:18 • Lightning Talk: Standardizing CPU Benchmarking with TorchBench for PyTorch... - Xu Zhao & Mingfei Ma
- 15:51 • Lightning Talk: Efficient Inference at the Edge: Performance You Need at the Lowest... - Felix Baum
- 16:32 • Accelerate Transformer Inference on CPU with Optimum and ONNX
- 11:11 • Lightning Talk: What's New for PyTorch Developer Infrastructure - Sahan Paliskara & Catherine Lee
- 34:14 • Understanding the LLM Inference Workload - Mark Moyou, NVIDIA
- 12:45 • Lightning Talk: Introduction to torch.distributed.pipelining - Howard Huang & Ke Wen, Meta
- 9:24 • Lightning Talk: Hieroglyph2Text: A PyTorch-Powered Pipeline for Automated Egyptian H... Susi Gentsch
- 15:23 • PyTorch 2.0: Unlocking the Power of Deep Learning with the torch.compile API - Christian Keller
- 1:25 • ASPLOS'24 - Lightning Talks - Session 6C - PyTorch 2: Faster Machine Learning Through Dynamic Python
- 10:59 • Lightning Talk: Lessons from Using PyTorch 2.0 Compile in IBM's watsonx.ai Inference - Antoni Martin
- 13:19 • Lightning Talk: Adding Backends for TorchInductor: Case Study with Intel GPU - Eikan Wang, Intel