Please use this identifier to cite or link to this item:
http://dspace.iitrpr.ac.in:8080/xmlui/handle/123456789/3901
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kulshrestha, S. | - |
dc.contributor.author | Jain, A. | - |
dc.contributor.author | Sahani, A. | - |
dc.date.accessioned | 2022-08-25T15:18:43Z | - |
dc.date.available | 2022-08-25T15:18:43Z | - |
dc.date.issued | 2022-08-25 | - |
dc.identifier.uri | http://localhost:8080/xmlui/handle/123456789/3901 | - |
dc.description.abstract | Data lag and distortion have been common issues for most of us since online video conferencing platforms became part of our daily routine. In this paper, we attempt to address this problem, specifically for the image component of the video, by building a convolutional neural network based autoencoder that compresses the images sent from one end in batches of 5 frames and reconstructs them to their original size at the receiver end. We report the accuracy and loss obtained for comparison purposes. (An illustrative sketch of such an autoencoder appears after the metadata record below.) | en_US |
dc.language.iso | en_US | en_US |
dc.subject | Autoencoders | en_US |
dc.subject | Data compression | en_US |
dc.subject | Image-Similarity-Measures | en_US |
dc.subject | Keras | en_US |
dc.subject | Machine learning | en_US |
dc.subject | Neural networks | en_US |
dc.subject | OpenCV | en_US |
dc.subject | Python | en_US |
dc.subject | Tensorflow | en_US |
dc.title | An autoencoder based approach to enable high fidelity video conferencing over low bandwidth networks | en_US |
dc.type | Article | en_US |
Appears in Collections: | Year-2021 |
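The abstract describes a convolutional autoencoder that compresses frames in batches of 5 on the sender side and reconstructs them at the receiver. The paper's actual architecture is not included in this record, so the following is only a minimal Keras/TensorFlow sketch under assumed settings: 64x64 RGB frames, illustrative layer widths, and MSE loss with an accuracy metric (mirroring the accuracy/loss comparison mentioned in the abstract). The `build_autoencoder` helper and all shapes are hypothetical, not the authors' design.

```python
# Minimal sketch of a convolutional autoencoder for frame compression.
# Assumptions (not from the paper): 64x64 RGB frames, two downsampling
# stages, MSE reconstruction loss with an accuracy metric.
import numpy as np
from tensorflow.keras import layers, models


def build_autoencoder(frame_shape=(64, 64, 3)):
    # Encoder: downsample each frame into a compact latent representation
    # that would be transmitted over the low-bandwidth link.
    encoder = models.Sequential([
        layers.Input(shape=frame_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(2),
        layers.Conv2D(16, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(2),
    ], name="encoder")

    # Decoder: reconstruct ("stretch back") the frame to its original
    # size on the receiver end.
    decoder = models.Sequential([
        layers.Input(shape=encoder.output_shape[1:]),
        layers.Conv2DTranspose(16, 3, strides=2, activation="relu", padding="same"),
        layers.Conv2DTranspose(32, 3, strides=2, activation="relu", padding="same"),
        layers.Conv2D(frame_shape[-1], 3, activation="sigmoid", padding="same"),
    ], name="decoder")

    autoencoder = models.Sequential([encoder, decoder], name="autoencoder")
    autoencoder.compile(optimizer="adam", loss="mse", metrics=["accuracy"])
    return autoencoder


if __name__ == "__main__":
    model = build_autoencoder()
    # Dummy batch of 5 frames, mirroring the batch size of 5 mentioned in
    # the abstract; real frames would be read with OpenCV and scaled to [0, 1].
    frames = np.random.rand(5, 64, 64, 3).astype("float32")
    model.fit(frames, frames, epochs=1, verbose=0)
    reconstructed = model.predict(frames, verbose=0)
    print("reconstruction shape:", reconstructed.shape)
```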
Files in This Item:
File | Description | Size | Format |
---|---|---|---|
Full Text.pdf | | 323.51 kB | Adobe PDF |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.