Abstract
Applications that analyze data from distributed, networked data sources typically perform computation centrally in a datacenter or cloud environment, with at most minor pre-processing at the data sources. As these applications grow in scale, this centralized approach leads to potentially impractical bandwidth requirements and computational latencies. This has led to interest in edge computing, where processing is moved closer to the data sources, and, more recently, in-network computing, where processing is performed as data traverses the network. This paper presents a model for reasoning about distributed computing at the edge and in the network, with support for heterogeneous hardware and alternative software and hardware-accelerator implementations. Unlike previous distributed computing models, it accounts for the cost of computation in compute-intensive applications, supports a variety of hardware platforms, and considers a heterogeneous network. The model is flexible, easily extensible to a range of applications and scales, and considers a variety of metrics. We use the model to explore the key factors that influence where computational capability should be placed and which platforms should be considered for distributed applications.
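As a purely illustrative aside (not drawn from the paper's model), the placement trade-off the abstract describes can be sketched as a toy latency comparison: moving processing toward the data sources trades slower per-node compute for less data crossing constrained links. All names, throughput and bandwidth figures, and the latency formula below are assumptions made for this sketch.

```python
# Purely illustrative sketch, not the model described in the paper: a toy
# comparison of end-to-end latency for placing one processing step at
# different points between the data sources and the cloud. Every name,
# number, and the latency formula here is an assumption for this example.

from dataclasses import dataclass


@dataclass
class Placement:
    name: str
    compute_mb_per_s: float   # assumed processing throughput at this node
    link_mb_per_s: float      # assumed bandwidth on the path to this node
    reduction: float          # assumed fraction of data left after processing


def end_to_end_latency(data_mb: float, p: Placement) -> float:
    """Move the raw data to the node, process it, forward the reduced result."""
    transfer_in = data_mb / p.link_mb_per_s
    compute = data_mb / p.compute_mb_per_s
    transfer_out = (data_mb * p.reduction) / p.link_mb_per_s
    return transfer_in + compute + transfer_out


# Hypothetical candidate placements with made-up throughput/bandwidth figures.
candidates = [
    Placement("cloud (centralized)", compute_mb_per_s=2000, link_mb_per_s=50,   reduction=0.0),
    Placement("edge node",           compute_mb_per_s=200,  link_mb_per_s=500,  reduction=0.05),
    Placement("in-network device",   compute_mb_per_s=800,  link_mb_per_s=1000, reduction=0.05),
]

data_mb = 1000.0  # 1 GB of raw source data
for c in candidates:
    print(f"{c.name:22s}  {end_to_end_latency(data_mb, c):7.1f} s")
```

With these made-up numbers the constrained wide-area link dominates the centralized option, which is the intuition behind placing compute at the edge or in the network; the paper's actual model considers richer metrics and heterogeneous platforms.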
| Field | Value |
|---|---|
| Original language | English (US) |
| Pages (from-to) | 395-409 |
| Number of pages | 15 |
| Journal | Future Generation Computer Systems |
| Volume | 105 |
| DOIs | |
| State | Published - Apr 1 2020 |
| Externally published | Yes |
Bibliographical note
Generated from Scopus record by KAUST IRTS on 2021-03-16

ASJC Scopus subject areas
- Hardware and Architecture
- Software
- Computer Networks and Communications