Year-2020
http://dspace.iitrpr.ac.in:8080/xmlui/handle/123456789/1566

Design of porphyrin-based porous metal-organic frameworks (MOFs) for catalytic applications
Sharma, N.
http://dspace.iitrpr.ac.in:8080/xmlui/handle/123456789/2335
2021-08-02
Towards a knowledge warehouse and expert system for the automation of SDLC tasks
Kapur, R.
http://dspace.iitrpr.ac.in:8080/xmlui/handle/123456789/2333
2021-08-02
The cost of a skilled and competent software engineer is high, and it is desirable to
minimize dependency on such costly human resources. One of the ways to reduce such
costs is via automation of various software development tasks.
Recent advances in Artificial Intelligence (AI) and the availability of a large volume
of knowledge-bearing data at various software-development venues present a ripe
opportunity for developing tools that can automate software development tasks. For instance,
there is significant latent knowledge present in raw or unstructured data associated
with Version Control Systems (VCS) artifacts such as source files, code commit logs, and
defect reports, available in the Open Source Software (OSS) repositories.
We leverage such knowledge-bearing data, together with the latest advances in AI and
hardware, to create knowledge warehouses and expert systems for the software development
domain. Such tools can help develop applications for performing various software development
tasks such as defect prediction, effort estimation, and code review.
Contributions
We have proposed novel approaches and tools to address the following software development
tasks:
1. Automating the Software Development Effort Estimation (SDEE): We propose
an efficient SDEE method for open-source software that provides fast and accurate
effort estimates. Given the description of a newly envisioned software product, our
tool yields an estimate of the effort required to develop it, along with information
about existing functionally similar software. To derive the effort estimates, we leverage
the developer activity information of software developed in the past. A software
similarity detection model is trained using the Paragraph Vectors Algorithm (PVA)
on various software product descriptions to detect the existing software with similar
functionality. For this method, we develop the SDEE dataset, which comprises the SDEE metrics’
values derived from more than 13000 GitHub software repositories belonging
to 150 different software categories and the PVA vector representations of software
product descriptions for the considered set of GitHub repositories.
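The similarity-based estimation step can be sketched as follows. This is a minimal illustration, not the thesis implementation: the PVA vectors, project names, effort values, and the top-k averaging scheme below are all hypothetical stand-ins.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical PVA vectors and observed effort (person-months) of past projects.
past = {
    "json-parser":  ([0.9, 0.1, 0.0], 14.0),
    "yaml-parser":  ([0.8, 0.2, 0.1], 12.0),
    "image-editor": ([0.1, 0.9, 0.3], 40.0),
}

def estimate_effort(query_vec, k=2):
    """Find the k most similar past projects and average their effort."""
    ranked = sorted(past.items(),
                    key=lambda item: cosine(query_vec, item[1][0]),
                    reverse=True)
    top = ranked[:k]
    estimate = sum(effort for _, (_, effort) in top) / k
    return estimate, [name for name, _ in top]
```

A query vector close to the two parser projects would thus yield an estimate near their mean effort, with their names returned as functionally similar evidence.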
2. Detecting source code defectiveness: We present a novel system to detect
defects in source code and estimate the attributes of the detected defects, such as
their severity. We develop models using 12 different state-of-the-art ML algorithms with
50+ different combinations of their key parameters to perform the source code’s
defect estimation in various scenarios. PROCON dataset (see below) was used to
train these models. The best performing model for each of the considered set of
defect estimation scenarios and the considered set of programming languages was
identified and chosen to perform the task.
For this method, we define a set of PROgramming CONstruct (PROCON) metrics
and develop a dataset of their values, extracted by processing
more than 30000 source files taken from 20+ OSS repositories
at GitHub. These source files were written in four major programming languages,
viz., C, C++, Java, and Python.
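The model-selection loop over algorithms and parameter combinations can be illustrated roughly as below. The thesis evaluates 12 ML algorithms with 50+ parameter combinations on PROCON data; here a single toy threshold classifier and an invented validation set stand in for that machinery.

```python
# Toy stand-in for the defect-model selection loop: every candidate is a
# (name, params, classifier) triple; the best scorer on validation data wins.

def make_threshold_clf(t):
    # Flag a file as defective when its (toy) complexity metric exceeds t.
    return lambda metric: metric > t

candidates = [
    ("threshold", {"t": 5},  make_threshold_clf(5)),
    ("threshold", {"t": 10}, make_threshold_clf(10)),
    ("threshold", {"t": 20}, make_threshold_clf(20)),
]

# Invented validation set: (complexity metric of a file, is_defective).
val = [(3, False), (8, False), (12, True), (25, True), (30, True)]

def accuracy(clf):
    """Fraction of validation files the classifier labels correctly."""
    return sum(clf(x) == y for x, y in val) / len(val)

# Pick the best-performing candidate for this (hypothetical) scenario.
best = max(candidates, key=lambda c: accuracy(c[2]))
```

In the thesis, this selection is repeated per defect-estimation scenario and per programming language, keeping the winner for each combination.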
3. Detecting bloat libraries in a software distributable: We present an obfuscation
resilient method to detect bloat libraries present in a software distributable. Our
approach’s novel aspects are: i) computing a vector representation of a .class file
using a model that we call Jar2Vec, trained using the well-known PVA; and
ii) converting each .class file to a normalized form via semantics-preserving
transformations before using it to train the Jar2Vec models.
To perform this task, we trained 27 different models using different PVA parameter
combinations on > 30000 .class files taken from > 100 different Java libraries
available at MavenCentral.
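The normalization idea can be sketched as follows. This is a hypothetical illustration only: real .class normalization operates on bytecode, whereas here a disassembled token stream and a tiny opcode whitelist stand in, with identifiers canonically renamed so that obfuscated renamings collapse to the same form.

```python
# Sketch: canonical renaming of identifiers in a (hypothetical) disassembled
# .class token stream. Opcodes are kept; every other token is replaced by a
# placeholder assigned in order of first appearance, so two classes that
# differ only in identifier names normalize to identical streams.
OPCODES = {"aload_0", "getfield", "invokevirtual", "return"}

def normalize(tokens):
    mapping, out = {}, []
    for tok in tokens:
        if tok in OPCODES:
            out.append(tok)
        else:
            mapping.setdefault(tok, f"ID{len(mapping)}")
            out.append(mapping[tok])
    return out
```

An obfuscator that renames `secretName` to `a` leaves the normalized stream unchanged, which is what makes the downstream Jar2Vec matching obfuscation resilient.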
4. Automation of source code review: We propose a code-review assistance tool
that helps a programmer generate better-informed code reviews, backed by StackOverflow (SO) posts as evidence. To detect the similarity present in source
code, 57 PVA models were trained for each of the considered programming languages
on source code present in 188200+ GitHub source files. The best performing
model for each of the considered programming languages was identified and
chosen to perform the source code’s similarity detection. To perform similarity
detection for a given source code sample c, we compare c against the code
samples present in the SOposts dataset (see below).
For this tool, we created the SOposts dataset, which comprises the code, text, and metadata
portions extracted from > 3 million SO posts. It also contains the PVA
vector representations of the source code collected from the SO posts and the sentiment
analysis information of narrative text present in the posts. We considered
the source code written in five popular programming languages, viz., C, C#, Java,
JavaScript, and Python to develop our dataset.
Each of the proposed methods described above has been implemented
as a web-based tool.
Assessing and improving sustainability in supply networks of manufacturing organizations
Sharma, R.K.
http://dspace.iitrpr.ac.in:8080/xmlui/handle/123456789/2331
2021-08-02
Sustainability is a critical issue in supply networks, affecting both the environment
and the economy. Jobs in conventional manufacturing are at risk of being lost
owing to stricter environmental regulations the world over. India needs to promote
sustainability in manufacturing all the more if it is to reap its demographic dividend.
Adopting sustainability in the supply networks of manufacturing organizations will lead to
the implementation of state-of-the-art technologies with lower carbon footprints in
manufacturing, leading to products that are globally competitive and meet strict
environmental standards. Current supply networks of manufacturing organizations are not
sustainable. Life cycle assessment (LCA) has been widely used for the design of sustainable
supply networks by environmental managers. In this research, our aim is to assess and
improve sustainability in the supply networks of manufacturing organizations. In the first
objective, we developed an LCA-based framework to assess sustainability in the supply
networks of manufacturing organizations. Using this framework, we identified
environmental impact hotspots and then suggested green interventions to remove them.
We performed four case studies to test the framework. The products chosen for the
case studies were a paracetamol tablet, a rice-based pudding (kheer), a bio-implant,
and a hair dryer. The products were chosen based on their
widespread use and environmental impact potential. The results of the study validated the
effectiveness of the proposed sustainability assessment framework in the four case studies.
The successful implementation of this framework in these four case studies builds a case
for its wider application in manufacturing organizations belonging to various industries
and sectors. We also present a reconfiguration case in the supply network of each of
these four products, simulating the implementation of a suggested green intervention.
Green interventions are scattered in the literature, and there is a need for a
single trustworthy information source. Therefore, in our second objective, we developed a
software tool to suggest suitable green interventions that could be used to remove a given
environmental impact hotspot in a supply network. Subsequent to finding the green
interventions, there is also a need to rank the green interventions depending on how
dominant they are. Therefore, in the third objective, we analyzed a set of green
interventions implemented in supply networks and found the dominant green interventions.
In the second part of this objective, we analyzed a green intervention for its implementation. The green intervention chosen for the analysis was ‘green supplier selection.’ The process
of sustainability assessment and improvement in supply networks of manufacturing
organizations is dependent on various factors. Therefore in the fourth objective, we
identified and analyzed the factors hindering the implementation of sustainability (barriers)
in supply networks as well as the factors which can overcome these barriers (drivers).
This research has developed a comprehensive package for manufacturing organizations to
assess and improve sustainability in their supply networks. We developed a holistic,
systematic approach through the adoption of multiple methodologies. The holistic
approach developed is appropriate for varied manufacturing industries. Small industries
will find the framework more useful, as it will help to create a sustainability infrastructure
at their plants without a lot of investment. Besides creating a life cycle inventory of data,
the framework will also help create a sustainability ecosystem in the manufacturing
organizations. The developed framework also enhances the scope of the analyses that can
be done using life cycle analysis technique. The LCA based sustainability assessment
framework is a key theoretical contribution of this research work, as it develops an analytical
process framework to assess and improve sustainability in supply networks of
manufacturing organizations. It presents a methodology that can be adopted by the
manufacturing organizations with ease to assess and improve sustainability in their supply
networks. The first part (objective 1) of this work provides an LCA based sustainability
assessment framework. The manufacturing organizations can use it to find the
environmental impact hotspots in the supply networks of their products. The data
generation done in the course of application of the LCA based sustainability assessment
framework is a significant research contribution of this work, as the data generated is novel
and will assist in future environmental sustainability assessment studies. The second part
(objective 2) provides software support to search and find green interventions to remove
the environmental impact hotspots. In the third part (objective 3) of the work, we developed
an interpretive structure modelling method based framework to prioritize the green
interventions. The companies can use the framework to find the most effective green
interventions among a set of interventions. The fourth part (objective 4) of the work
provides a hybrid framework combining the fuzzy analytical hierarchy process (FAHP)
with the technique for order preference by similarity to ideal solution (TOPSIS), which
companies can use to rank the top drivers of sustainability implementation. The FAHP
approach alone can only provide the weights of different criteria and sub-criteria but
cannot rank the alternatives; it therefore requires an additional approach for ranking.
The reason behind using a combination of FAHP and TOPSIS is that it is an established
approach for analyzing different criteria and alternatives to obtain concrete results.
However, the novelty of
this study lies in the fact that this approach has not been used so far to analyze the supply
chain barriers and the drivers to eliminate these barriers. The sustainability managers can
use the identified drivers to their advantage to accelerate the implementation of
sustainability.
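As a rough illustration of the ranking step (not the thesis computation), a minimal TOPSIS implementation is sketched below; the criterion weights would come from FAHP, and all numbers used with it are hypothetical.

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.
    matrix[i][j]: score of alternative i on criterion j
    weights[j]:   criterion weight (e.g. from FAHP), summing to 1
    benefit[j]:   True if higher is better on criterion j
    Returns closeness coefficients (higher = better)."""
    m, n = len(matrix), len(matrix[0])
    # Vector-normalize each column, then apply the criterion weight.
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # Ideal and anti-ideal solutions, per criterion direction.
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    cc = []
    for row in v:
        d_pos = math.sqrt(sum((x - ideal[j]) ** 2 for j, x in enumerate(row)))
        d_neg = math.sqrt(sum((x - anti[j]) ** 2 for j, x in enumerate(row)))
        cc.append(d_neg / (d_pos + d_neg))
    return cc
```

Each alternative's closeness coefficient measures its relative distance from the ideal and anti-ideal solutions, so a heavier weight on a criterion pulls the ranking toward alternatives that excel on it.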
Learning-based methods for single image haze removal
Dudhane, A.
http://dspace.iitrpr.ac.in:8080/xmlui/handle/123456789/2196
2021-07-23
Haze is an atmospheric phenomenon in which turbid media obscure the scene. Haze
reduces the visibility of scenes and the reliability of outdoor surveillance
systems. Under severe hazy weather conditions, aerosols scatter or sometimes
completely block light rays from reaching the camera sensor. Thus, photos captured
outdoors in inclement weather tend to be hazy and have low visibility. The color cast
of photos captured in such weather also depends on the size of the
aerosols and their properties. The major challenges to be tackled in image
de-hazing are low visibility, color imbalance, the image-capturing medium, and the
unavailability of real-world training data. This work focuses on analyzing and designing
different modalities for image de-hazing that address the
above-mentioned challenges.
The significant contributions of this work are: 1) a novel scene transmission
map estimation method, 2) a dense haze removal approach, 3) a novel varicolored
image de-hazing approach applicable to hazy images captured in different weather
conditions, 4) an underwater image de-hazing approach, and 5) an unpaired training
network for image de-hazing.
Accurate estimation of the scene transmission map is key to recovering the haze-free
image from an input hazy image. In this work, a convolutional neural network based
approach is proposed for scene transmission map estimation. The contribution of this
work lies in the haze-relevant feature extraction from the RGB and YCbCr color spaces
of the input hazy image and a novel feature fusion approach. Another contribution
towards image de-hazing is made by proposing an end-to-end deep network that
is trained adversarially for dense haze removal.
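The role of the transmission map can be seen from the standard atmospheric scattering model, I(x) = J(x)t(x) + A(1 - t(x)), where I is the hazy image, J the scene radiance, A the atmospheric light, and t the transmission. A minimal per-channel inversion sketch follows; the lower bound t0 is a common stabilization device, not a thesis-specific value.

```python
def dehaze_pixel(i, a, t, t0=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t)
    for a single channel value: J = (I - A) / max(t, t0) + A.
    The floor t0 keeps the division stable where transmission is near zero."""
    return (i - a) / max(t, t0) + a
```

This inversion is why transmission-map accuracy dominates restoration quality: any error in t is amplified directly into the recovered radiance J.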
Along with visibility improvement, restoring color balance is an equally
challenging problem in image de-hazing. In this work, we propose a varicolored image
de-hazing network that restores the color balance in a given varicolored hazy image
and recovers the haze-free image. A large-scale synthetic varicolored hazy image
database is generated to train the network. We have also
proposed an underwater image de-hazing approach that recovers perceptually
pleasant images by improving the visibility and color balance of the input underwater
image.
In general, a major hurdle in training a convolutional neural network for image
de-hazing is the unavailability of large-scale real-world hazy images and their
corresponding haze-free counterparts (i.e., paired data). Thus, in this work, an
end-to-end network is proposed that is trained in an unpaired manner to resolve
the unavailability of paired training data.
The proposed image de-hazing approaches are evaluated on current state-of-the-art
databases such as D-Hazy, SOTS, HazeRD, NTIRE-2018, NTIRE-2019, RESIDE,
and a set of real-world hazy images. Two new datasets are also introduced in this
work, namely the outdoor hazy image (OHI) dataset and the synthetic varicolored
hazy image (VHI) dataset. Standard quantitative evaluation metrics such as SSIM,
PSNR, and CIEDE2000 are used to evaluate the proposed de-hazing approaches.
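Of the listed metrics, PSNR is the simplest to state; a minimal reference implementation over flat pixel lists (8-bit peak assumed) is:

```python
import math

def psnr(ref, out, peak=255.0):
    """Peak signal-to-noise ratio between a reference and a restored image,
    both given as flat lists of pixel values in [0, peak]."""
    mse = sum((r - o) ** 2 for r, o in zip(ref, out)) / len(ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak ** 2 / mse)
```

SSIM and CIEDE2000 capture structural and perceptual color differences respectively, which pure mean-squared-error metrics like PSNR miss.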