Published August 24, 2021 | Version v1
Conference paper | Open Access

Inplace knowledge distillation with teacher assistant for improved training of flexible deep neural networks

  • InterDigital

Description

Deep neural networks (DNNs) have achieved great success in various machine learning tasks. However, most existing powerful DNN models are computationally expensive and memory demanding, hindering their deployment on devices with low memory and computational resources or in applications with strict latency requirements. Thus, several resource-adaptable or flexible approaches have recently been proposed that simultaneously train a big model and several resource-specific sub-models. Inplace knowledge distillation (IPKD) has become a popular method to train these models and consists of distilling the knowledge from the largest model (teacher) to all other sub-models (students). In this work, a novel generic training method called IPKD with teacher assistant (IPKD-TA) is introduced, in which the sub-models themselves become teacher assistants teaching smaller sub-models. We evaluated the proposed IPKD-TA training method using two state-of-the-art flexible models (MSDNet and Slimmable MobileNet-V1) on two popular image classification benchmarks (CIFAR-10 and CIFAR-100). Our results demonstrate that IPKD-TA is on par with the existing state of the art while improving on it in most cases.
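To make the training scheme concrete, below is a minimal sketch of an IPKD-TA style loss in PyTorch. It assumes the flexible network returns a list of logits ordered from the largest (full) model to the smallest sub-model, and that each smaller sub-model distils from the next-larger sub-model acting as its teacher assistant. The function name, the alpha/temperature weighting, and the exact combination of cross-entropy and distillation terms are illustrative assumptions, not the paper's precise formulation.

```python
import torch
import torch.nn.functional as F


def ipkd_ta_loss(logits_per_submodel, targets, temperature=4.0, alpha=0.5):
    """Illustrative IPKD-TA style loss.

    logits_per_submodel: list of logits from the sub-models, ordered from
    the largest (full) model to the smallest sub-model.
    """
    # Largest sub-model: plain cross-entropy against the hard labels.
    total = F.cross_entropy(logits_per_submodel[0], targets)

    for i in range(1, len(logits_per_submodel)):
        student = logits_per_submodel[i]
        # Teacher assistant = next-larger sub-model; detach so no gradient
        # flows back into the teacher through the distillation term.
        teacher = logits_per_submodel[i - 1].detach()

        ce = F.cross_entropy(student, targets)
        kd = F.kl_div(
            F.log_softmax(student / temperature, dim=1),
            F.softmax(teacher / temperature, dim=1),
            reduction="batchmean",
        ) * temperature ** 2  # standard temperature scaling of the KD term

        total = total + alpha * ce + (1.0 - alpha) * kd
    return total
```

In plain IPKD, every student would instead distil directly from the largest model; the only change in this sketch is that the teacher for sub-model i is sub-model i-1 rather than sub-model 0.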

Notes

https://hal.archives-ouvertes.fr/hal-03222599/document

Files

EUSIPCO2021_IPKD_TA.pdf (364.1 kB)
md5:127c97413d89321aa67ed8c393c8895c

Additional details

Related works

Is referenced by
Preprint: https://arxiv.org/pdf/2105.08369.pdf (URL)

Funding

European Commission – AI4Media: A European Excellence Centre for Media, Society and Democracy (Grant No. 951911)