Please use this identifier to cite or link to this item: http://dspace.iitrpr.ac.in:8080/xmlui/handle/123456789/3196
Title: A framework for partitioning support vector machine models on edge architectures
Authors: Sahi, M.
Maruf, M. A.
Azim, A.
Auluck, N.
Keywords: edge computing
partitioning
SGD-SVM
Issue Date: 18-Oct-2021
Abstract: Current IoT applications generate huge volumes of complex data that require agile analysis to obtain deep insights, often by applying Machine Learning (ML) techniques. The support vector machine (SVM) is one such ML technique, used in object detection, image classification, text categorization, and pattern recognition. However, training even a simple SVM model on big data takes a significant amount of computational time, so the model is unable to react and adapt in real time, and there is an urgent need to speed up the training process. Since organizations typically use the cloud for this data processing, accelerating training also has the advantage of bringing down costs. In this paper, we propose a model partitioning approach that distributes the tasks of Stochastic Gradient Descent based Support Vector Machines (SGD-SVM) across various edge devices for concurrent computation, thus reducing the training time significantly. The proposed partitioning mechanism not only brings down the training time but also approximately maintains the accuracy of the centralized cloud approach. With the goal of developing a smart object detection system, we conduct experiments to evaluate the performance of the proposed method using SGD-SVM on an edge-based architecture. The results show that the proposed approach reduces the training time by 47% while decreasing the accuracy by only 2%, and yields an optimal number of partitions.
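The abstract's core idea, partitioning SGD-SVM training across edge devices so that partitions train concurrently, can be illustrated with a minimal data-parallel sketch. This is an assumption-laden toy, not the paper's method: it splits the data into partitions (standing in for edge devices), trains a linear hinge-loss SVM with plain SGD on each, and combines the local models by simple weight averaging, since the paper's exact partitioning and aggregation scheme is not given here.

```python
import numpy as np

def sgd_svm(X, y, epochs=20, lr=0.01, lam=0.01):
    """Train a linear SVM (hinge loss + L2 regularization) with plain SGD."""
    rng = np.random.default_rng(0)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:  # sample violates the margin: hinge gradient is active
                w -= lr * (lam * w - y[i] * X[i])
                b += lr * y[i]
            else:           # only the regularizer contributes
                w -= lr * lam * w
    return w, b

def partitioned_train(X, y, n_partitions=4):
    """Split data across hypothetical edge devices, train locally, average models."""
    models = [sgd_svm(Xp, yp)
              for Xp, yp in zip(np.array_split(X, n_partitions),
                                np.array_split(y, n_partitions))]
    w = np.mean([m[0] for m in models], axis=0)
    b = np.mean([m[1] for m in models])
    return w, b

# Toy linearly separable data: the label is the sign of the first feature.
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 2))
y = np.where(X[:, 0] > 0, 1, -1)

w, b = partitioned_train(X, y)
acc = np.mean(np.sign(X @ w + b) == y)
print(f"accuracy: {acc:.2f}")
```

Because each partition trains independently, the local fits can run in parallel on separate devices; the averaging step mirrors the trade-off the abstract reports, where concurrency cuts training time at the cost of a small accuracy drop relative to centralized training.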
URI: http://localhost:8080/xmlui/handle/123456789/3196
Appears in Collections:Year-2021

Files in This Item:
File           Size     Format
Full Text.pdf  9.26 MB  Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.