LSDDL: Layer-Wise Sparsification for Distributed Deep Learning
Yuxi Hong, Peng Han
Computer Science
Computer, Electrical and Mathematical Sciences and Engineering
King Abdullah University of Science and Technology
Research output: Contribution to journal › Article › peer-review
3 Scopus citations
Fingerprint
Dive into the research topics of 'LSDDL: Layer-Wise Sparsification for Distributed Deep Learning'. Together they form a unique fingerprint.
Keyphrases
Stochastic Gradient Descent 100%
Communication Time 100%
Deep Neural Network 100%
Distributed Deep Learning 100%
Sparsification 100%
Machine Learning 50%
Structural Information 50%
Communication Overhead 50%
Distributed Learning Algorithm 50%
Compress 50%
Communication Cost 50%
Prediction Accuracy 50%
Large-scale Cluster 50%
Weighting Method 50%
Network Communication 50%
Neural Network 50%
Training Process 50%
Performance Bottleneck 50%
Deep Learning Model 50%
Diverse Applications 50%
Compression Techniques 50%
Machine Learning Models 50%
Arms Race 50%
Decompression 50%
Application Domain 50%
Light-weighted 50%
Shared-nothing 50%
Real Model 50%
Training Machine 50%
PyTorch 50%
Distributed Machine Learning 50%

Computer Science
Gradient Descent 100%
Machine Learning 100%
Deep Neural Network 100%
Distributed Machine Learning 100%
Deep Learning 100%
Communication Overhead 50%
Communication Cost 50%
Structure Model 50%
Performance Bottleneck 50%
Prediction Accuracy 50%
Machine Learning Algorithm 50%
Deep Learning Model 50%
Communication Networks 50%
Experimental Result 50%
Training Process 50%
Neural Network 50%
Compression Technique 50%
Application Domain 50%

Engineering
Deep Neural Network 100%
Gradient Descent 100%
Deep Learning 100%
Experimental Result 50%
Machine Learning Algorithm 50%
Application Domain 50%
Decompression 50%
Compression Technique 50%
Supports 50%
Communication Network 50%
Model Structure 50%

Chemical Engineering
Learning System 100%
Deep Learning 100%
Deep Neural Network 50%
Neural Network 25%
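
Taken together, these fingerprint terms point at reducing the communication time of distributed stochastic gradient descent by compressing gradients on a per-layer basis before they are exchanged. As a rough, illustrative sketch of that general idea only (not the published LSDDL algorithm; the compression ratio, function names, and toy model below are assumptions), the following PyTorch snippet applies top-k sparsification to each layer's gradient and shows how the receiving side could decompress it back into a dense tensor:

# Illustrative sketch of layer-wise top-k gradient sparsification.
# NOT the paper's LSDDL method; ratio, helper names, and the toy model
# are assumptions made for demonstration.
import torch

def sparsify_layer(grad: torch.Tensor, ratio: float = 0.01):
    """Keep only the largest-magnitude `ratio` fraction of one layer's gradient.

    Returns the kept values, their flat indices, and the original shape so the
    receiver can rebuild a dense tensor.
    """
    flat = grad.flatten()
    k = max(1, int(flat.numel() * ratio))
    _, indices = torch.topk(flat.abs(), k)
    return flat[indices], indices, grad.shape

def desparsify_layer(values, indices, shape):
    """Rebuild a dense gradient tensor from the sparsified representation."""
    dense = torch.zeros(torch.Size(shape).numel(),
                        dtype=values.dtype, device=values.device)
    dense[indices] = values
    return dense.view(shape)

# Toy usage: compress and decompress each layer's gradient after backward().
model = torch.nn.Sequential(torch.nn.Linear(128, 64),
                            torch.nn.ReLU(),
                            torch.nn.Linear(64, 10))
loss = model(torch.randn(32, 128)).sum()
loss.backward()

for name, param in model.named_parameters():
    values, indices, shape = sparsify_layer(param.grad)    # would be sent over the network
    param.grad = desparsify_layer(values, indices, shape)  # receiver-side reconstruction

In a data-parallel setting, only the (values, indices) pairs would be communicated between workers, which is where the communication-time savings suggested by the keyphrases above would come from.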