User Behavior To Identify Malicious Activities In Large-Scale Social Networks

Because of the data-intensive nature of emerging tasks, which require the validation, evaluation, and annotation of large volumes of data, crowdsourcing has quickly gained popularity. While developing a sound definition of crowdsourcing, Estellés-Arolas and González-Ladrón-de-Guevara suggest that micro tasks are of variable complexity and modularity and entail mutual advantage to the worker and the requester. Gathering small contributions through such micro tasks facilitates the accomplishment of work that is not easily automatable, through rather minor contributions from each individual worker. With the universality of the internet, it became possible to distribute tasks at global scales, leading to the recent success of crowdsourcing, later defined as an online, distributed problem-solving and production model.

In the recent past, there has been a considerable amount of work towards developing appropriate platforms and proposing frameworks for efficient crowdsourcing. An increasingly large number of research communities benefit from crowdsourcing platforms to gather distributed and unbiased data, validate results, evaluate systems, or build ground truths. While the demand for using crowdsourcing to solve various problems is on an upward climb, some obstacles hinder requesters from attaining reliable, transparent, and non-skewed results. To improve task performance, gold standards are the typically adopted solution. Generally, gold standards are questions whose answers are known a priori to task administrators. Hence, if a worker fails to provide the correct answer to such a question, that worker is flagged as untrustworthy. However, with the success of the crowdsourcing market, we believe that malicious activities and adversarial approaches will also become more advanced and popular, overcoming common gold standards.
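To make the gold-standard mechanism concrete, the following is a minimal sketch of how such a check might operate. The question ids, answers, and the 0.7 accuracy threshold are illustrative assumptions on our part, not taken from any particular platform.

```python
# Minimal sketch of a gold-standard quality check.
# GOLD_ANSWERS and ACCURACY_THRESHOLD are illustrative assumptions,
# not values from any specific crowdsourcing platform.

# Gold questions whose answers are known a priori to the task administrator.
GOLD_ANSWERS = {
    "q17": "cat",
    "q42": "blue",
    "q88": "yes",
}

ACCURACY_THRESHOLD = 0.7  # minimum fraction of gold questions answered correctly


def is_trustworthy(worker_responses: dict) -> bool:
    """Flag a worker as untrustworthy if they fail too many gold questions.

    worker_responses maps question ids to the worker's submitted answers;
    only responses to gold questions are considered here.
    """
    gold_seen = [q for q in worker_responses if q in GOLD_ANSWERS]
    if not gold_seen:
        return True  # no gold questions answered yet; nothing to judge
    correct = sum(worker_responses[q] == GOLD_ANSWERS[q] for q in gold_seen)
    return correct / len(gold_seen) >= ACCURACY_THRESHOLD


# Example: a worker who misses two of three gold questions is flagged.
responses = {"q17": "cat", "q42": "red", "q88": "no", "q99": "free text"}
print(is_trustworthy(responses))  # False
```

A fixed check like this is exactly what a sophisticated adversary can learn to satisfy, which is why we argue that gold standards alone will be overcome as the market matures.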

Quality control mechanisms should thereby account for a diverse population of workers who exhibit a wide range of behavioral patterns. Such methods have been considered and used in the past to tackle poor worker performance. However, there is a need to understand the behavior of these workers and the kinds of malicious activity they bring about on crowdsourcing platforms. In this project, we present our work towards analyzing the behavior of malicious micro task workers, and reflect on guidelines to overcome such workers in the context of online surveys.

Literature Review

T. S. Behrend, D. J. Sharek, A. W. Meade, and E. N. Wiebe presented the suitability of crowdsourcing as an alternative data source for organizational psychology research. Subsequent work promoted the suitability of crowdsourcing for user studies, while cautioning that special attention should be given to task formulation. Even though these works outline shortcomings of using crowdsourcing, they do not consider the impact of malicious activity, which can emerge in differing ways.

The work of A. Kittur, E. H. Chi, and B. Suh shows that varying types of malicious activity are prevalent in crowdsourced surveys, and proposes measures to curtail such behavior. They also conducted surveys and examined the characteristics of surveys that may determine the reliability of the resulting data.

The work presented by C. C. Marshall and F. M. Shipman includes algorithms that improve on existing techniques to enable the separation of a worker's bias from their error rate. They also reported on their study of methods to automatically detect improper tasks on crowdsourcing platforms.

Y. Baba, H. Kashima, K. Kinoshita, G. Yamaguchi, and Y. Akiyoshi reflected on the importance of controlling the quality of tasks in crowdsourcing marketplaces. Complementing these existing works, our work advances the consideration of both aspects (task design as well as worker behavior) for effective crowdsourcing. Dow et al. introduced a feedback system for improving the quality of work in the crowd.

S. Dow, A. Kulkarni, B. Bunge, T. Nguyen, S. Klemmer, and B. Hartmann presented a method to achieve quality control for crowdsourcing by providing training feedback to workers while relying on the programmatic creation of gold data. However, for gold-based quality assurance, task administrators need to understand the behavior of malicious workers and anticipate the likely types of worker errors with respect to different types of tasks.

In the realm of studying the reliability and performance of crowd workers with respect to the incentives offered, W. Mason and D. J. Watts investigated the relationship between financial incentives and worker performance. They found that higher monetary incentives increase the quantity of work performed but not its quality. A large part of their results aligns with our findings presented in the following sections. Related to their work, we adopt the approach of collecting data through crowdsourced surveys in order to draw meaningful insights.

P. G. Ipeirotis, F. Provost, and J. Wang proposed a quantitative and qualitative analysis of crowd work. Our work extends theirs by additionally providing a sustainable classification of malicious workers that sets a precedent for extension to different categories of micro tasks. Through their work, Ipeirotis et al. highlighted the need for techniques that can accurately estimate the quality of workers, allowing for the rejection or blocking of low-performing workers and spammers.
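As a rough sketch of the kind of worker-quality estimation this line of work calls for, one could score each worker by how often they agree with the per-question majority answer and block those whose agreement rate falls below a cutoff. The heuristic below (majority vote as a proxy for truth, a fixed blocking cutoff of 0.5) is our own simplification for illustration, not the actual algorithm of Ipeirotis et al.

```python
# Simplified sketch of agreement-based worker quality estimation.
# Majority vote as a truth proxy and the BLOCK_CUTOFF value are
# illustrative assumptions, not the algorithm of Ipeirotis et al.
from collections import Counter, defaultdict

BLOCK_CUTOFF = 0.5  # workers agreeing with the majority less often are blocked


def estimate_worker_quality(labels):
    """labels: list of (worker_id, question_id, answer) tuples.

    Returns a dict mapping worker_id -> fraction of that worker's answers
    that agree with the per-question majority vote.
    """
    by_question = defaultdict(list)
    for worker, question, answer in labels:
        by_question[question].append((worker, answer))

    agreements = defaultdict(lambda: [0, 0])  # worker -> [agreed, total]
    for question, votes in by_question.items():
        majority, _ = Counter(a for _, a in votes).most_common(1)[0]
        for worker, answer in votes:
            agreements[worker][1] += 1
            agreements[worker][0] += int(answer == majority)

    return {w: agreed / total for w, (agreed, total) in agreements.items()}


labels = [
    ("w1", "q1", "yes"), ("w2", "q1", "yes"), ("w3", "q1", "no"),
    ("w1", "q2", "cat"), ("w2", "q2", "cat"), ("w3", "q2", "dog"),
]
quality = estimate_worker_quality(labels)
blocked = {w for w, q in quality.items() if q < BLOCK_CUTOFF}
print(quality, blocked)  # w3 falls below the cutoff and would be blocked
```

Agreement with the majority is a crude proxy: it penalizes honest workers on genuinely ambiguous questions and rewards colluding spammers who dominate a question, which is precisely why a behavioral classification of malicious workers, as we pursue here, is a useful complement.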
