Machine Learning Techniques for Understanding and Predicting Memory Interference in CPU-GPU Embedded Systems
Creators
- Unimore
Description
Nowadays, heterogeneous embedded platforms are extensively used in various low-latency applications, including the automotive industry, real-time IoT systems, and automated factories. These platforms combine components such as CPUs, GPUs, and neural-network accelerators to process tasks efficiently and to solve domain-specific problems at a lower power consumption than more traditional systems. However, since these accelerators share resources such as the global memory, it is crucial to understand how workloads behave under high computational loads, and how parallel computational engines on modern platforms can interfere with one another and adversely affect the system's predictability and performance. One area that remains unclear is the interference effect on shared memory resources between the CPU and GPU: more specifically, the latency degradation experienced by GPU kernels when memory-intensive CPU applications run concurrently. In this work, we first analyze the metrics that characterize the behavior of different kernels under various board conditions caused by CPU memory-intensive workloads on an NVIDIA Jetson Xavier. Then, we apply various machine learning methodologies to estimate the latency degradation of kernels from their metrics. As a result, we are able to identify the metrics that potentially have the most significant impact when predicting the kernels' completion-latency degradation.
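To illustrate the general idea of estimating latency degradation from profiled kernel metrics, the following is a minimal sketch in pure Python. It is not the paper's method (the authors evaluate models such as SVR and random forests): it fits an ordinary-least-squares model on entirely synthetic data and uses coefficient magnitude as a crude proxy for metric importance. The metric names (`dram_gbps`, `l2_hit_rate`) and all numbers are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch: predict a GPU kernel's slowdown factor from two
# profiled metrics (DRAM read throughput, L2 hit rate) via ordinary
# least squares. All data is synthetic.

def fit_ols(X, y):
    """Fit y ~ intercept + X by solving the normal equations
    (A^T A) beta = A^T y with Gaussian elimination."""
    A = [[1.0] + list(row) for row in X]      # design matrix with intercept
    n, p = len(A), len(A[0])
    ATA = [[sum(A[k][i] * A[k][j] for k in range(n)) for j in range(p)]
           for i in range(p)]
    ATy = [sum(A[k][i] * y[k] for k in range(n)) for i in range(p)]
    # Augmented matrix [ATA | ATy], eliminated with partial pivoting.
    M = [row[:] + [ATy[i]] for i, row in enumerate(ATA)]
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, p):
            f = M[r][col] / M[col][col]
            for c in range(col, p + 1):
                M[r][c] -= f * M[col][c]
    beta = [0.0] * p
    for r in range(p - 1, -1, -1):
        beta[r] = (M[r][p]
                   - sum(M[r][c] * beta[c] for c in range(r + 1, p))) / M[r][r]
    return beta  # [intercept, coef_dram_gbps, coef_l2_hit_rate]

# Synthetic training set: (dram_gbps, l2_hit_rate) -> observed slowdown
# under a memory-intensive CPU co-runner.
X = [(10.0, 0.9), (20.0, 0.8), (30.0, 0.6), (40.0, 0.5), (50.0, 0.3)]
y = [1.05, 1.20, 1.45, 1.60, 1.90]

beta = fit_ols(X, y)
# Predict the slowdown of an unseen kernel from its metrics.
pred = beta[0] + beta[1] * 35.0 + beta[2] * 0.55
```

In this toy setup, comparing the magnitudes of the fitted coefficients (after normalizing the features) gives a rough ranking of which metric drives the predicted degradation, which is the same question the paper's feature-importance analysis addresses with more capable models.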
Files
- IEEE_RTCSA_2023_Embedded_System___SVR_RF_DEPPNN.pdf (964.3 kB)
  md5:5e5f2482f7dfc30ea1674048b7c317a6