Can the Wisdom of Crowds Help Cultivate Trust in Social Media?


The study found that with a group of just eight laypeople, there was no statistically significant difference between the crowd's performance and the professional fact-checkers'. Once the groups grew to 22 people, they actually began to outperform the fact-checkers. (These numbers describe what happened when the laypeople were told the source of the story; when they did not know the source, the crowd did slightly worse.) Notably, the fact-checkers themselves didn't always agree with each other, either. Fact-checking research is genuinely hard.

It might seem unlikely that randomly selected groups of ordinary people could keep up with the work of trained fact-checkers, especially given that they saw only a story's headline, first sentence, and publication. But that is the whole idea behind the wisdom of crowds: gather enough people acting independently, and their aggregate judgments will rival the experts'.
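To make that dynamic concrete, here is a minimal simulation sketch in Python. The noise levels, group sizes, and 0-to-1 accuracy scale are illustrative assumptions, not parameters from the study; the point is only that averaging many independent, noisy judgments drives the error below that of a single, far less noisy "expert."

```python
# A minimal wisdom-of-crowds simulation. All numbers (noise levels,
# group sizes, the true score) are illustrative assumptions, not
# figures from the study.
import random
import statistics

random.seed(42)

TRUE_SCORE = 0.3        # hypothetical "true" accuracy of one article (0-1 scale)
LAYPERSON_NOISE = 0.25  # assumed: individual laypeople are noisy judges
EXPERT_NOISE = 0.08     # assumed: a single fact-checker is much less noisy

def layperson_rating():
    """One independent, noisy judgment, clamped to the 0-1 scale."""
    return min(1.0, max(0.0, random.gauss(TRUE_SCORE, LAYPERSON_NOISE)))

def crowd_error(group_size, trials=5000):
    """Average absolute error of the crowd's mean rating."""
    errors = []
    for _ in range(trials):
        ratings = [layperson_rating() for _ in range(group_size)]
        errors.append(abs(statistics.mean(ratings) - TRUE_SCORE))
    return statistics.mean(errors)

expert_error = statistics.mean(
    abs(random.gauss(TRUE_SCORE, EXPERT_NOISE) - TRUE_SCORE) for _ in range(5000)
)

for n in (1, 8, 22, 50):
    print(f"crowd of {n:2d}: mean error = {crowd_error(n):.3f}")
print(f"single expert: mean error = {expert_error:.3f}")
```

With these assumed numbers, a crowd of eight lands roughly even with the lone expert and a crowd of 22 beats it, mirroring the shape of the study's result.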

“Our sense of what is going on is that people are reading this and asking themselves, ‘How does this fit with everything else I know?'” Rand said. “Aggregate those judgments and you get a much better marker of accuracy than any one person.”

This is not the same as Reddit-style upvoting and downvoting, nor the Wikipedia model of citizen editors. In those cases, small, non-representative subsets of users self-select to curate material, and everyone can see what the others are doing. The wisdom of crowds materializes only when groups are diverse and individuals make their judgments independently. And relying on randomly assembled, politically balanced groups, rather than a corps of volunteers, makes the researchers' approach much harder to game. (This also explains why the approach differs from Twitter's Birdwatch, a pilot program that enlists users to write notes explaining why a given tweet is misleading.)
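As a rough illustration of what "randomly assembled and politically balanced" could mean in practice, here is a hypothetical Python sketch. The pool structure, affiliation labels, and function name are invented for illustration; nothing here comes from the paper or any platform's implementation.

```python
# A sketch of assembling a politically balanced rating group under the
# independence constraint described above. The data layout is assumed.
import random

def assemble_balanced_crowd(pool, group_size):
    """Draw an equal number of raters from each affiliation.

    `pool` maps an affiliation label to a list of candidate rater IDs.
    Raters are sampled at random (not self-selected), and each rates
    independently: no rater ever sees another rater's judgment.
    """
    per_group, remainder = divmod(group_size, len(pool))
    if remainder:
        raise ValueError("group_size must divide evenly across affiliations")
    crowd = []
    for affiliation, candidates in pool.items():
        crowd.extend(random.sample(candidates, per_group))
    random.shuffle(crowd)
    return crowd

pool = {
    "democrat":   [f"D{i}" for i in range(1000)],
    "republican": [f"R{i}" for i in range(1000)],
}
print(assemble_balanced_crowd(pool, 22))
```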

The headline implication of the paper is simple: social media platforms such as Facebook and Twitter could use a crowd-based approach to dramatically scale up their fact-checking operations without sacrificing accuracy. (The laypeople in the study were paid $9 an hour, which translates to about $0.90 per article.) Drawing on politically balanced crowds could also help platforms defuse the charge that fact-checking favors one side. (According to 2019 Pew research, Republicans overwhelmingly believe that fact-checkers “tend to favor one side.”) Facebook has already started something similar, paying groups of users to “work as researchers to find information that can contradict online falsehoods or corroborate other claims.” But that effort is designed to inform the work of official fact-checkers, not to add to it.

Scaling up fact-checking is one thing. The more interesting question is how the platforms should use it. Should false stories be banned? What about stories that may not contain anything outright false, but that are nonetheless misleading or manipulative?

The researchers suggest that platforms move away from the true/false binary and the leave-it-alone/flag-it binary. Instead, they propose that platforms incorporate continuous crowdsourced accuracy ratings into their ranking algorithms. Rather than having a single true/false cutoff, with everything above it treated one way and everything below it another, platforms should weight how prominently a given link appears in users' feeds by the scores the crowd assigns. In other words, the less accurate the crowd judges a story to be, the more the algorithm downranks it.
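Here is a minimal sketch in Python of what folding a continuous crowd score into feed ranking might look like. The multiplicative formula, field names, and numbers are assumptions for illustration, not the paper's or any platform's actual algorithm.

```python
# Continuous ratings instead of a binary flag: the crowd's aggregate
# accuracy score scales a link's ranking weight, rather than triggering
# a remove/keep decision at a cutoff. The formula is an assumption.
from dataclasses import dataclass

@dataclass
class Link:
    url: str
    engagement: float      # whatever base score the feed already computes
    crowd_accuracy: float  # aggregate crowd rating, normalized to 0..1

def feed_score(link: Link) -> float:
    """Scale the base score by crowd-judged accuracy: the lower the
    crowd rates a story's accuracy, the further down the feed it goes."""
    return link.engagement * link.crowd_accuracy

links = [
    Link("https://example.com/solid-reporting", engagement=80, crowd_accuracy=0.9),
    Link("https://example.com/dubious-claim", engagement=95, crowd_accuracy=0.2),
]
for link in sorted(links, key=feed_score, reverse=True):
    print(f"{feed_score(link):6.1f}  {link.url}")
```

Note that in this scheme a highly engaging but low-accuracy story still appears, just far less prominently, which is exactly the graded treatment the researchers argue for.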


