dc.description.abstract |
Current IoT applications generate huge volumes of complex data that require agile analysis to obtain deep insights, often by applying Machine Learning (ML) techniques. The support vector machine (SVM) is one such ML technique that has been used in object detection, image classification, text categorization and pattern recognition. However, training even a simple SVM model on big data takes a significant amount of computational time, and the model is therefore unable to react and adapt in real time. There is an urgent need to speed up the training process. Since organizations typically use the cloud for this data processing, accelerating training also brings down costs. In this paper, we propose a model partitioning approach that distributes the tasks of Stochastic Gradient Descent based Support Vector Machines (SGD-SVM) across edge devices for concurrent computation, thus reducing the training time significantly. The proposed partitioning mechanism not only brings down the training time but also maintains accuracy close to that of the centralized cloud approach. With the goal of developing a smart object detection system, we conduct experiments to evaluate the performance of the proposed method using SGD-SVM on an edge-based architecture. The results show that the proposed approach reduces the training time by 47% while decreasing accuracy by only 2%, and identifies an optimal number of partitions. |
en_US |