How can AI imitate human behavior?


When experts started raising alarms some years ago about AI misalignment — the risk that powerful, transformative AI systems will not always behave the way people expect — many of their concerns sounded like fantasy. In the early 2000s, AI research was still delivering modest returns, and even the best available AI systems failed at a variety of simple tasks.

But since then, AI systems have become much better and cheaper to build. One area where the leap has been especially dramatic is language: text-generating AIs can be trained on enormous collections of text to produce more text in a similar style. Many startups and research teams are now training these AIs for all kinds of work, from writing code to producing marketing copy.

Their rise does not change the fundamental case for worrying about AI alignment, but it does do one extremely useful thing: it makes once-abstract concerns concrete, which lets more people experience them and more researchers (hopefully) work on them.

An AI oracle?

Take Delphi, a new AI system from the Allen Institute for AI, a research organization founded by the late Microsoft co-founder Paul Allen.

How Delphi works is very simple: researchers trained a machine learning model on a large body of internet text, and then on a large database of responses from participants on Mechanical Turk (a paid crowdsourcing platform popular with researchers), to predict how humans would judge a wide range of situations, from "cheating on your wife" to "shooting someone in self-defense."
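That predict-the-annotator framing can be caricatured in a few lines of code. The sketch below is a deliberately tiny stand-in, not AI2's actual pipeline: the example situations, labels, and the nearest-match rule are all invented for illustration. It "judges" a new prompt by returning the label of the most lexically similar crowd-labeled example.

```python
# Toy sketch (NOT AI2's actual pipeline): predict a crowd moral judgment
# by finding the most lexically similar labeled example.

def tokenize(text):
    """Split a phrase into a set of lowercase words."""
    return set(text.lower().split())

# Hypothetical crowd-labeled judgments, standing in for Mechanical Turk data.
CROWD_LABELS = [
    ("cheating on your wife", "it's wrong"),
    ("shooting someone in self-defense", "it's okay"),
    ("helping a friend move", "it's good"),
]

def predict_judgment(situation):
    """Return the label of the training example with the most word overlap."""
    words = tokenize(situation)
    best = max(CROWD_LABELS, key=lambda ex: len(words & tokenize(ex[0])))
    return best[1]

print(predict_judgment("cheating on your husband"))  # → it's wrong
```

The point of the toy is the same as the article's: nothing in it understands ethics; it only reproduces whatever label the nearest annotated example happened to carry.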

The result is an AI that issues moral judgments when asked: cheating on your wife, it tells me, "is wrong." Shooting someone in self-defense? "It's okay." (See this great write-up of Delphi in The Verge, which has many more examples of how the AI answers questions.)

The skeptical take here is that there is nothing "under the hood": there is no deep sense in which the AI actually understands ethics and draws on that understanding to make ethical judgments. All it has learned is how to predict the answer a Mechanical Turk user would give.

And Delphi's users quickly discovered where that leads: ask Delphi "should I commit genocide if it makes everybody happy" and it answers, "you should."
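That failure mode is easy to reproduce in miniature. The function below is an invented caricature, not Delphi's model: it approves anything containing a surface cue ("makes everybody happy") that, in this toy's imagined training data, correlated with annotator approval, regardless of the action being described.

```python
# Invented caricature of a surface-level judgment predictor (NOT Delphi's
# actual model). It latches onto a phrase that, in its imagined training
# data, correlated with approval -- and ignores the action itself.

def shallow_judgment(prompt: str) -> str:
    """Judge a prompt by a surface cue rather than the action it describes."""
    if "makes everybody happy" in prompt.lower():
        return "you should"  # the cue overrides everything else
    return "it's wrong"

print(shallow_judgment("Should I commit genocide if it makes everybody happy?"))
# → you should
```

A predictor trained only to match annotators has no reason to prefer "understand the action" over "match the phrasing" until the phrasing stops predicting well.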

Why Delphi is instructive

For all its obvious flaws, I still think Delphi offers something useful for thinking about possible future AI.

The approach of taking in large amounts of data from people, and using it to predict the responses people would give, has proven to be a powerful one for training AI systems.

For a long time, a background assumption in much of the AI field was that to build intelligence, researchers would have to explicitly encode the concepts and reasoning frameworks an AI could use to think about the world. The first AI language generators, for example, were hand-programmed with rules of syntax they could use to produce sentences.

Now, it is no longer obvious that researchers have to build in reasoning to get reasoning out. It may be that an extremely straightforward approach, such as teaching an AI to predict what a person on Mechanical Turk would say in response to a prompt, could get you a quite powerful system.

Any genuine capacity for reasoning such a system displays would be almost incidental: it is simply predicting how human users respond to questions, and it will use any method it stumbles upon that has predictive value. That might include, as systems grow more accurate, building a deeper understanding of human ethics in order to better predict how we will answer these questions.

Of course, there is much that could go wrong.

If we come to rely on AI systems to evaluate innovations, make business decisions that are treated as signals of market behavior, identify promising research, and much more, the gap between what the AI is measuring and what people actually care about has the potential to be magnified.

AI systems will get better — much better — and they will stop making the silly mistakes still found in Delphi. Telling us that genocide is fine as long as it "makes everybody happy" is so clearly wrong that it is almost comic. But once we can no longer spot their errors, that does not mean they will be infallible; it just means their flaws will be much harder to see.

A version of this article was originally published in the Future Perfect newsletter. Sign up here to subscribe!


