The work of integrity teams gives a different answer. We may be in the spotlight now, but this work has a long history. We have learned a great deal from anti-spam efforts in email and search engines, and we borrow many ideas from computer security.
One of the best ways to earn this trust is to start at the bottom of the list, eliminating the fights that aren't worth having. I will look at two examples to illustrate the point, but there are many others in the same family: limits on group size, karma or reputation history (in the spirit of Google's PageRank), a "region of origin," prompts toward positive dialogue, and a less prominent share button. For now, let us consider two tools that integrity workers have built: we call them driving tests and speed bumps.
First, we must make it costly to run fake accounts. Imagine if, after being caught committing a crime, anyone could walk out of prison and pass as a stranger with a clean record. Imagine if it were impossible to know whether you were talking to a group of people or to one person rapidly switching masks. That kind of distrust is corrosive. At the same time, we must remember that pseudonymous accounts are not always bad. Perhaps the person behind an assumed name is a young adult who cannot yet safely use their real one, or a human rights activist living under a dictatorial regime. We should not ban all fake accounts. But we can raise their cost.
One approach mirrors how, in most countries, you cannot drive a car until you have practiced under supervision and passed a driving test. Similarly, new accounts need not get access to every feature of the app. To unlock the features most prone to abuse (for spam, harassment, etc.), an account might have to pay a price in time and effort. Maybe it simply needs time to "ripen." Maybe it needs to build up standing in a karma system. Maybe it has to complete a few practical steps. Only once the account qualifies through this "driving test" is it trusted with access to the whole platform.
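As a rough sketch, such a "driving test" could be a gate that checks account age, karma, and completed onboarding steps before unlocking abuse-prone features. Every name and threshold here (`Account`, `MIN_ACCOUNT_AGE`, `passes_driving_test`, and so on) is hypothetical, not any platform's real API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Hypothetical thresholds; a real platform would tune these empirically.
MIN_ACCOUNT_AGE = timedelta(days=7)   # time for the account to "ripen"
MIN_KARMA = 50                        # standing in a karma system
REQUIRED_STEPS = {"verified_email", "completed_profile"}

@dataclass
class Account:
    created_at: datetime
    karma: int = 0
    completed_steps: set = field(default_factory=set)

def passes_driving_test(account: Account, now: datetime) -> bool:
    """Gate abuse-prone features (mass invites, messaging strangers, etc.)
    behind a cost in time and effort."""
    old_enough = now - account.created_at >= MIN_ACCOUNT_AGE
    trusted = account.karma >= MIN_KARMA
    onboarded = REQUIRED_STEPS <= account.completed_steps
    return old_enough and trusted and onboarded

# A brand-new account fails; an older, reputable one passes.
now = datetime(2024, 1, 8)
newbie = Account(created_at=now)
veteran = Account(created_at=now - timedelta(days=10), karma=120,
                  completed_steps={"verified_email", "completed_profile"})
print(passes_driving_test(newbie, now))   # False
print(passes_driving_test(veteran, now))  # True
```

The point of the design is that each check is cheap for a patient, legitimate newcomer but expensive to repeat at scale.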
Spammers will, of course, slip through the cracks. In fact, we expect them to. And we do not want to make things too difficult for legitimate users of fake accounts. But by attaching a cost to every new "disguise," we introduce a different economics into the equation. Anyone can run three fake accounts. Running hundreds or even thousands becomes very hard to pull off.
On the Internet, the most harmful behavior comes almost entirely from power users. This is understandable: social networks encourage their members to post as much as possible, and power users can act more often, reach more audiences, and do more at once than anyone could in real life. In old cities, the influence of any one person was limited by the physical need to be in one place, or to talk to one group, at a time. That is not true online.
On the Internet, some actions are harmless when done occasionally yet dangerous when done at high volume. Consider founding twelve groups at once, or commenting on a thousand videos in an hour, or posting every minute for an entire day. When we see someone using a feature that heavily, we suspect they are up to something, much like a driver moving at an unsafe speed. We have an answer: speed bumps. Slow them down, or block them from that action for a while. There is no value judgment here: it is not punishment, it is safety. Tactics like these are an easy way to make things safer for everyone while inconveniencing only a tiny fraction of the community.