Exploring hardware accelerator offload for the Internet of Things

Ryan A. Cooke, Suhaib A. Fahmy

Research output: Contribution to journal › Article › peer-review

Abstract

The Internet of Things is manifested through a large number of low-capability connected devices. This means that for many applications, computation must be offloaded to more capable platforms. While such offload has typically targeted cloud datacenters accessed over the Internet, this is not feasible for latency-sensitive applications. In this paper we investigate the interplay between three factors that contribute to overall application latency when offloading computations in IoT applications. First, different platforms can reduce computation latency by differing amounts. Second, these platforms can be traditional server-based or emerging network-attached, which exhibit differing data ingestion latencies. Finally, where these platforms are deployed in the network has a significant impact on the network traversal latency. All these factors contribute to overall application latency, and hence to the efficacy of computational offload. We show that network-attached acceleration scales better to more distant network locations and smaller base computation times than traditional server-based approaches.
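As a rough reading of the abstract's argument (a sketch introduced here for illustration, not the authors' model; L_network, L_ingest, T_base and S are labels chosen for clarity), the end-to-end application latency can be viewed as

L_app ≈ L_network(location) + L_ingest(platform) + T_base / S(platform)

where L_network depends on where the offload platform sits in the network, L_ingest on whether it is server-based or network-attached, and the final term on how much the platform accelerates the base computation time T_base.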
Original language: English (US)
Pages (from-to): 207-214
Number of pages: 8
Journal: IT - Information Technology
Volume: 62
Issue number: 5-6
DOIs
State: Published - Dec 16 2020
Externally published: Yes

Bibliographical note

Generated from Scopus record by KAUST IRTS on 2021-03-16
