In May 2021, Twitter user @capohai posted a screenshot of a Google search for "what do terrorists wear on their head," which returned, as a top result, an image of a Palestinian keffiyeh scarf. The timing was charged: the French Senate had just voted to ban women under the age of 18 from wearing the hijab in public, and President Macron's LREM party had withdrawn support from a candidate, Sara Zemmahi, for wearing a hijab in a campaign ad. How many people have asked Google the same question and taken the answer as confirmation of their bias, or as fact? How many others were dismayed by the results?
Outrage over Google's implied association between Palestinians and terrorists surfaced in the media, but run the same search today and the keffiyeh still appears as a prominent result.
It is understandable that @capohai turned to Twitter, because one of the few options available to people who spot ethical failures in technology (privacy violations, amplified hate speech, discriminatory algorithms, and more) is to take them public. But as this example shows, the outrage model does not reliably fix violations.
Looking at the bigger picture, there have been calls for greater regulation of the tech industry, and that matters, but legislation takes a long time to pass and is often not enough to prevent the unexpected ethical failures technology produces. Because algorithms tend to express our (biased) assumptions in surprising ways that require constant adjustment and refinement to correct, rules, no matter how well written and thorough, cannot anticipate every future harm.
But there is an approach that does not depend on media outrage or new legislation. Tech companies are already well positioned to address ethical issues at scale. They just need to extend their existing bug bounty systems.
Currently, hundreds of companies and organizations, large and small, pay rewards ranging from a few hundred dollars to millions to people who find flaws in their code that bad actors could exploit. Google's program also covers apps sold through the Play Store. Apple, which recently launched a similar program (with rewards of up to $1 million for the most dangerous flaws), does the same. In its program notes, the company says it "will reward researchers who share with us critical issues and the techniques used to exploit them," and it offers public recognition and matching charitable donations.
Imagine how much better off Silicon Valley, and the people who use its products, could be if these companies classified ethical violations under the heading of "critical issues and the techniques used to exploit them" and started making similar payouts. After all, an ethical lapse can cause just as much trouble for a company and its users as a buggy line of code. The language above would not even need to change. And an ethics bounty could follow the same rules Apple already applies: 1) you must be the first to report the issue, 2) your report must be clear and conclusive, 3) you cannot disclose the issue publicly before Apple has a chance to fix it, and 4) you can earn a bonus if the company unknowingly reintroduces a known problem in a new release.
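To make the four rules above concrete, here is a minimal sketch of how a bounty program might triage an incoming ethics report against them. All names and fields here are hypothetical illustrations, not any company's actual policy or API:

```python
# Hypothetical triage logic for the four eligibility rules described above.
# The Report fields and triage() function are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Report:
    reporter: str
    is_first: bool                    # rule 1: first to report this issue
    is_conclusive: bool               # rule 2: clear, reproducible evidence
    disclosed_publicly: bool          # rule 3: must stay private until fixed
    regression_of_known_issue: bool   # rule 4: a fixed problem resurfaced

def triage(report: Report) -> tuple[bool, bool]:
    """Return (eligible_for_bounty, eligible_for_bonus)."""
    eligible = (
        report.is_first
        and report.is_conclusive
        and not report.disclosed_publicly
    )
    # Rule 4: a bonus applies when a previously fixed issue was
    # unknowingly reintroduced in a new release.
    bonus = eligible and report.regression_of_known_issue
    return eligible, bonus
```

The point of the sketch is that these eligibility criteria are mechanical: nothing about them is specific to memory-safety bugs, so the same gate could accept reports of biased search results as readily as reports of buffer overflows.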
For users, such a system would encourage people to investigate harms and report them quickly. For companies, it would help them find and address problems before they damage customers, generate bad press, and provoke regulators. Granted, some companies may not be moved by the prospect of public pressure, customer losses, and regulatory scrutiny, but they may still be motivated by the long-term stability and goodwill such a program can build. A public track record of responding to ethical issues can also help a company recruit skilled workers and expand into new markets and industries.