Published April 7, 2023 | Version v1

Large Deviations for Products of Non-Identically Distributed Network Matrices With Applications to Communication-Efficient Distributed Learning and Inference

  • 1. Department of Fundamental Sciences, Faculty of Technical Sciences, University of Novi Sad, 21000 Novi Sad, Serbia
  • 2. Department of Power, Electronics, and Communications Engineering, Faculty of Technical Sciences, University of Novi Sad, 21000 Novi Sad, Serbia
  • 3. Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213 USA
  • 4. Department of Mathematics and Informatics, Faculty of Sciences, University of Novi Sad, 21000 Novi Sad, Serbia
  • 5. Amazon.com, Inc., Alexa AI, Pittsburgh, PA 15232 USA

Description

This paper studies products of independent but non-identically distributed random network matrices that arise as weight matrices in distributed consensus-type computation and inference procedures over peer-to-peer multi-agent networks. The non-identically distributed matrices model application scenarios in which the agent communication network is time-varying, either naturally or by design, e.g., to achieve communication efficiency in computational procedures.

First, under broad conditions on the statistics of the network matrix sequence, the product of the sequence is shown to converge almost surely to the consensus matrix, and an explicit large deviations rate of convergence is obtained. Specifically, given the admissible graph of interconnections modeling the base network topology, the large deviations rate of consensus is shown to equal the minimum limiting value over the fluctuating graph cuts, where the edge costs are assigned through the current probabilities of inter-agent communications.

Second, the above large deviations principle is applied to distributed detection in time-varying networks with sequential observations. By adopting a consensus+innovations type distributed detection algorithm, error exponents for the distributed detection performance are obtained as a by-product of this result. It is shown that slow starts (slow increases) of the inter-agent communication probabilities yield the same asymptotic error rate, and hence the same distributed detection performance, as if the communications were at their nominal levels from the beginning. As an important special case, it is shown that when every intermittent graph cut contains a link whose activation probability increases to one, the performance of distributed detection is asymptotically optimal, i.e., equivalent to a centralized setup with access to all network data at all times.
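The convergence phenomenon underlying the first result, that a running product of random doubly stochastic network matrices drives all agent values to the network average, can be illustrated numerically. The sketch below is not the paper's algorithm or network model; it uses hypothetical randomized pairwise gossip on a ring of n agents purely to show the product of random weight matrices reaching consensus.

```python
import numpy as np

# Minimal numerical sketch (illustrative only, not the paper's model):
# products of random doubly stochastic network matrices W_k ... W_1 drive
# all agent values toward the consensus matrix (1/n) * ones * ones^T.
# Each W_k here averages one randomly activated edge (i, i+1) on a ring.

rng = np.random.default_rng(0)
n = 6
x = rng.normal(size=n)       # initial agent values
mean0 = x.mean()             # doubly stochastic W_k preserve the average

for k in range(2000):
    i = int(rng.integers(n)) # random edge (i, i+1) on the ring
    j = (i + 1) % n
    W = np.eye(n)
    W[i, i] = W[j, j] = 0.5  # pairwise averaging step: doubly stochastic
    W[i, j] = W[j, i] = 0.5
    x = W @ x                # one more factor in the running matrix product

# After many steps, every agent holds (approximately) the initial average.
assert np.allclose(x, mean0)
```

Making the edge activation probabilities time-varying, as in the paper, changes how fast the disagreement decays; the large deviations rate of that decay is what the minimum-cut characterization quantifies.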

Files

Petrovic_et_al_IEEETSP2023.pdf (545.3 kB)
md5:8c536925acad53f0bb7ad1a72ffab5ef

Additional details

Funding

MARVEL – Multimodal Extreme Scale Data Analytics for Smart Cities Environments (grant 957337), European Commission