The Department of Defense is issuing AI ethics guidance to its contractors


The purpose of the guidelines is to ensure that contractors adhere to the DoD's existing ethical principles for AI, says Goodman. The DoD announced those principles last year, following a two-year study commissioned by the Defense Innovation Board, an advisory panel of leading technology researchers and businesspeople set up in 2016 to bring Silicon Valley-style innovation to the US military. The board was chaired by former Google CEO Eric Schmidt until September 2020, and its current members include Daniela Rus, director of MIT's Computer Science and Artificial Intelligence Lab.

Some critics, however, question whether the work promises any meaningful change.

During the study, the board consulted a range of experts, including vocal critics of the military's use of AI, such as members of the Campaign to Stop Killer Robots and Meredith Whittaker, a former Google researcher who helped organize the protests against Project Maven.

Whittaker, now faculty director at New York University’s AI Now Institute, was not available for comment. But according to Courtney Holsworth, a spokesperson for the institute, she attended one meeting, where she argued with senior members of the board, including Schmidt, about the direction it was taking. “She was not meaningfully consulted,” says Holsworth. “Claiming that she was could be read as a form of ethics-washing, in which the presence of dissenting voices during a small part of a long process is used to suggest that the outcome has broad buy-in from the parties involved.”

If the DoD does not have broad buy-in, can its guidelines still help build trust? “There are going to be people who will never be satisfied with any set of ethics guidelines the DoD produces, because they find the very idea paradoxical,” Goodman says. “It’s important to be realistic about what guidelines can and cannot do.”

For example, the guidelines say nothing about the use of lethal autonomous weapons, a technology that some campaigners argue should be banned. But Goodman points out that regulations governing such technology are decided higher up the chain of command. The aim of the guidelines is to make it easier to build AI that complies with those regulations. And part of that process is to make explicit any concerns that third-party developers have. “A valid application of these guidelines is to decide not to pursue a particular system,” says Jared Dunnmon of the DIU, who co-authored them. “You can decide it’s not a good idea.”


