Abstract:
The phenomenal success of certain crowdsourced
online platforms, such as Wikipedia, is attributed to their ability
to tap the crowd's potential to collaboratively build knowledge.
While it is well known that the crowd's collective wisdom
surpasses cumulative individual expertise, little is understood
about the dynamics of knowledge building in a crowdsourced environment.
A proper understanding of these dynamics would enable
the better design of such environments for soliciting knowledge
from the crowd. Our experiment on annotation-based crowdsourced
systems shows that an important reason for the rapid
knowledge building in such environments is the variance in
expertise among users. As our test bed, we use a customized Crowdsourced
Annotation System (CAS) that allows a group of users
to annotate a given document while trying to understand
it. Our results show the presence of different genres of proficiency
among the users of an annotation system. We observe that
the crowdsourced knowledge ecosystem comprises mainly four
categories of contributors, namely Probers, Solvers, Articulators,
and Explorers. We infer from our experiment that knowledge
is garnered mainly through the synergistic interaction across
these categories.