MobiSys 2022 - CoDL: Efficient CPU-GPU Co-execution for Deep Learning Inference on Mobile Devices
Published 1 year ago • 267 plays • Length 15:59
Similar videos
- MobiSys 2022 - Teaser - CoDL: Efficient CPU-GPU Co-execution for Deep Learning Inference on Mobile Devices (1:30)
- MobiSys 2022 - Band: Coordinated Multi-DNN Inference on Heterogeneous Mobile Processors (18:12)
- MobiSys 2022 - Teaser - Band: Coordinated Multi-DNN Inference on Heterogeneous Mobile Processors (1:30)
- MobiSys 2022 - Memory-Efficient DNN Training on Mobile Devices (16:59)
- MobiSys 2022 - mGEMM: Low-Latency Convolution with Minimal Memory Overhead Optimized for Mobile (15:33)
- MobiSys 2022 - Teaser - Memory-Efficient DNN Training on Mobile Devices (1:26)
- MobiCom 2021 - AsyMo: Scalable and Efficient Deep-Learning Inference on Asymmetric Mobile CPUs (14:27)
- MobiSys 2022 - Teaser - mGEMM: Low-Latency Convolution with Minimal Memory Overhead for Mobile Devices (1:30)
- MobiSys 2021 - nn-Meter: Towards Accurate Latency Prediction of DL Model Inference on Edge Devices (13:23)
- MobiSys 2022 - Teaser - Enabling Software-Defined PHY for Backscatter Networks (1:31)
- EMDL 2021 - ParallelFusion: Towards Maximum Utilization of Mobile GPU for DNN Inference (19:41)
- MobiSys 2022 - DeepMix: Mobility-Aware, Lightweight, and Hybrid 3D Object Detection for Headsets (15:36)
- MobiSys 2022 - Teaser - MagSnoop: Listening to Sounds Induced by Magnetic Field Fluctuations.... (1:30)