Conference paper Open Access

Inplace knowledge distillation with teacher assistant for improved training of flexible deep neural networks

Alexey Ozerov; Ngoc Q. K. Duong

Deep neural networks (DNNs) have achieved great success in various machine learning tasks. However, most existing powerful DNN models are computationally expensive and memory demanding, hindering their deployment in devices with low memory and computational resources or in applications with strict latency requirements. Thus, several resource-adaptable or flexible approaches were recently proposed that simultaneously train a big model and several resource-specific sub-models. Inplace knowledge distillation (IPKD) became a popular method to train those models and consists of distilling the knowledge from a larger model (teacher) to all other sub-models (students). In this work, a novel generic training method called IPKD with teacher assistant (IPKD-TA) is introduced, where sub-models themselves become teacher assistants teaching smaller sub-models. We evaluated the proposed IPKD-TA training method using two state-of-the-art flexible models (MSDNet and Slimmable MobileNet-V1) on two popular image classification benchmarks (CIFAR-10 and CIFAR-100). Our results demonstrate that IPKD-TA is on par with the existing state of the art while improving upon it in most cases.
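To make the distinction concrete, the following is a minimal sketch (not the authors' code) contrasting a plain IPKD loss with the IPKD-TA variant described in the abstract. It assumes a flexible model whose forward pass returns a list of logits ordered from the largest sub-model to the smallest; the function names (`ipkd_loss`, `ipkd_ta_loss`), the temperature value, and the loss weighting are hypothetical choices.

```python
# Hypothetical sketch of IPKD vs. IPKD-TA losses for a flexible model
# whose forward pass yields logits for each sub-model, largest first.
import torch
import torch.nn.functional as F

def distill(student_logits, teacher_logits, T=3.0):
    """KL divergence between softened teacher and student distributions."""
    p_teacher = F.softmax(teacher_logits.detach() / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

def ipkd_loss(logits_list, labels):
    # Plain IPKD: the largest sub-model (index 0) is trained on the labels
    # and serves as the single teacher for every smaller sub-model.
    loss = F.cross_entropy(logits_list[0], labels)
    for student_logits in logits_list[1:]:
        loss = loss + distill(student_logits, logits_list[0])
    return loss

def ipkd_ta_loss(logits_list, labels):
    # IPKD-TA: each sub-model is distilled from the next-larger sub-model,
    # which acts as its teacher assistant, rather than from the largest one.
    loss = F.cross_entropy(logits_list[0], labels)
    for i in range(1, len(logits_list)):
        loss = loss + distill(logits_list[i], logits_list[i - 1])
    return loss
```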

https://hal.archives-ouvertes.fr/hal-03222599/document
File: EUSIPCO2021_IPKD_TA.pdf (364.1 kB, md5:127c97413d89321aa67ed8c393c8895c)
