[1] YU J, TAN M, ZHANG H Y, et al. Hierarchical deep click feature prediction for fine-grained image recognition [J]. IEEE transactions on pattern analysis and machine intelligence, 2022, 44(2): 563–578. DOI: 10.1109/TPAMI.2019.2932058
[2] KRIMAN S, BELIAEV S, GINSBURG B, et al. QuartzNet: deep automatic speech recognition with 1D time-channel separable convolutions [C]//2020 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2020: 6124–6128. DOI: 10.1109/ICASSP40776.2020.9053889
[3] AHMAD F, ABBASI A, LI J J, et al. A deep learning architecture for psychometric natural language processing [J]. ACM transactions on information systems, 2020, 38(1): 1–29. DOI: 10.1145/3365211
[4] HONG R, CHANDRA A. DLion: decentralized distributed deep learning in micro-clouds [C]//Proceedings of the 30th International Symposium on High-Performance Parallel and Distributed Computing. ACM, 2021: 227–238. DOI: 10.1145/3431379.3460643
[5] TIAN L, YANG M Z, WANG S G. An overview of compute first networking [J]. International journal of web and grid services, 2021, 17(2): 81–97. DOI: 10.1504/IJWGS.2021.114566
[6] KRÓL M, MASTORAKIS S, ORAN D, et al. Compute first networking: distributed computing meets ICN [C]//The 6th ACM Conference on Information-Centric Networking. ACM, 2019: 67–77. DOI: 10.1145/3357150.3357395
[7] AWAN A A, HAMIDOUCHE K, HASHMI J M, et al. S-Caffe: co-designing MPI runtimes and Caffe for scalable deep learning on modern GPU clusters [J]. ACM SIGPLAN notices, 2017, 52(8): 193–205
[8] WANG S, LI D, GENG J K, et al. Impact of network topology on the performance of DML: theoretical analysis and practical factors [C]//IEEE Conference on Computer Communications. IEEE, 2019: 1729–1737. DOI: 10.1109/INFOCOM.2019.8737595
[9] LI M, ZHOU L, YANG Z, et al. Parameter server for distributed machine learning [J]. Big learning NIPS workshop, 2013, 6: 2–12
[10] LI M, ANDERSEN D G, PARK J W, et al. Scaling distributed machine learning with the parameter server [C]//The 11th USENIX Conference on Operating Systems Design and Implementation. ACM, 2014: 583–598. DOI: 10.5555/2685048.2685095
[11] LI M, ANDERSEN D G, SMOLA A, et al. Communication efficient distributed machine learning with the parameter server [C]//The 27th International Conference on Neural Information Processing Systems. ACM, 2014: 19–27
[12] ZHANG S, CHOROMANSKA A, LECUN Y. Deep learning with elastic averaging SGD [C]//The 28th International Conference on Neural Information Processing Systems. ACM, 2015: 685–693
[13] DEAN J, CORRADO G S, MONGA R, et al. Large scale distributed deep networks [J]. Advances in neural information processing systems, 2012, 1: 1223–1231
[14] CHEN T Q, LI M, LI Y T, et al. MXNet: a flexible and efficient machine learning library for heterogeneous distributed systems [EB/OL]. [2022-10-10].
[15] ABADI M, AGARWAL A, BARHAM P, et al. TensorFlow: large-scale machine learning on heterogeneous distributed systems [EB/OL]. [2022-10-10].
[16] XING E P, HO Q, DAI W, et al. Petuum: a new platform for distributed machine learning on big data [J]. IEEE transactions on big data, 2015, 1(2): 49–67. DOI: 10.1109/TBDATA.2015.2472014
[17] MXNet. Distributed training in MXNet [EB/OL]. [2022-10-10].
[18] CHEN Y R, PENG Y H, BAO Y X, et al. Elastic parameter server load distribution in deep learning clusters [C]//Proceedings of the 11th ACM Symposium on Cloud Computing. ACM, 2020: 507–521. DOI: 10.1145/3419111.3421307
[19] MOHAN V, REDDY Y J, KALPANA K. Active and passive network measurements: a survey [J]. International journal of computer science and information technologies, 2011, 2(4): 1372–1385
[20] GOEL U, WITTIE M P, CLAFFY K C, et al. Survey of end-to-end mobile network measurement testbeds, tools, and services [J]. IEEE communications surveys & tutorials, 2016, 18(1): 105–123. DOI: 10.1109/COMST.2015.2485979
[21] TC(8). Linux tc [EB/OL]. [2022-10-10].