TECHNOLOGY

This program can give AI a sense of morality — sometimes


Delphi taps recent advances in AI and language. Feeding large amounts of text to algorithms that use mathematically simulated neural networks has yielded amazing progress.

In June 2020, researchers at OpenAI, an organization working on state-of-the-art AI tools, demonstrated a program called GPT-3 that can predict, summarize, and generate text with what often seems like remarkable skill, although it will also spit out biased and hateful language learned from the text it has read.

The researchers behind Delphi also posed ethical questions to GPT-3. They found that its answers agreed with those of the crowd workers just over 50 percent of the time, little better than a coin flip.
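That agreement figure is simply a matching rate between model answers and human labels, measured against a chance baseline. Here is a minimal sketch of that comparison in Python; the answer lists are invented for illustration and are not the researchers' data:

```python
# Hypothetical illustration: how often a model's yes/no moral judgments
# match crowdworker labels, versus a coin-flip baseline.
import random

# Made-up labels; the real study used crowdworker annotations.
crowd_labels  = ["yes", "no", "no", "yes", "no", "yes", "yes", "no"]
model_answers = ["yes", "no", "yes", "yes", "no", "no", "yes", "no"]

agreement = sum(m == c for m, c in zip(model_answers, crowd_labels)) / len(crowd_labels)

# A coin flip agrees with any binary label about half the time.
random.seed(0)
coin = [random.choice(["yes", "no"]) for _ in crowd_labels]
baseline = sum(f == c for f, c in zip(coin, crowd_labels)) / len(crowd_labels)

print(f"model agreement: {agreement:.0%}, coin-flip baseline: {baseline:.0%}")
```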

Improving the performance of a system like Delphi will require a variety of AI approaches, perhaps including some that let a machine explain its reasoning and indicate when it is conflicted.
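One simple way a system can signal that it is conflicted is to abstain whenever its confidence sits near chance. The sketch below assumes a hypothetical model that outputs a probability that an action is acceptable; the function name and threshold are illustrative, not any published method:

```python
# Hypothetical sketch: abstain when the model's probability is close to 0.5,
# i.e. when it is effectively conflicted about the judgment.
def verdict(p_acceptable: float, margin: float = 0.15) -> str:
    """Map a model probability to a judgment, or flag the case as conflicted."""
    if abs(p_acceptable - 0.5) < margin:
        return "conflicted"  # decline to give a flat answer
    return "acceptable" if p_acceptable > 0.5 else "unacceptable"

print(verdict(0.93))  # acceptable
print(verdict(0.55))  # conflicted
print(verdict(0.10))  # unacceptable
```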

The idea of giving machines a moral code has been around for decades in both academic research and science fiction. Isaac Asimov's famous Three Laws of Robotics popularized the idea that machines might follow human ethics, although the short stories that explored the idea highlighted contradictions in such simplistic reasoning.

Choi says Delphi should not be taken as providing a definitive answer to any ethical question. A more sophisticated version might flag uncertainty that stems from divergent opinions in its training data. "Life is full of gray areas," she says. "No two human beings will completely agree, and there's no way an AI program can match people's judgments."
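Annotator disagreement can itself serve as the uncertainty signal: rather than collapsing crowd votes into one label, a model can be trained on the vote split. A toy sketch of turning raw votes into soft labels follows; the scenarios and counts are invented:

```python
# Invented example: keep the split of crowd votes as a soft label rather
# than forcing each scenario into a single yes/no answer.
votes = {
    "lie to protect a friend": {"ok": 61, "not_ok": 39},
    "steal from a store":      {"ok": 3,  "not_ok": 97},
}

for scenario, v in votes.items():
    p_ok = v["ok"] / (v["ok"] + v["not_ok"])
    # High-disagreement splits mark the "gray areas" Choi describes.
    if 0.35 < p_ok < 0.65:
        label = "gray area"
    else:
        label = "ok" if p_ok >= 0.65 else "not ok"
    print(f"{scenario}: {p_ok:.0%} ok -> {label}")
```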

Other machine learning systems have displayed their own moral blind spots. In 2016, Microsoft released a chatbot called Tay that was designed to learn from online conversations. The program was quickly sabotaged and taught to say offensive and hateful things.

Efforts to explore ethical perspectives on AI have also revealed the complexity of the task. A project launched in 2018 by researchers at MIT and elsewhere sought to explore public perceptions of ethical dilemmas that self-driving cars might face. They asked people to decide, for example, whether it would be better for a vehicle to hit an elderly person, a child, or a robber. The project revealed differing opinions across countries and social groups. Respondents from the United States and Western Europe were more likely than respondents elsewhere to spare the child over an older person.

Some of those building AI tools are keen to engage with the ethical challenges. "I think people are right to point out the flaws and failures of the model," says Nick Frosst, CTO of Cohere, a startup that has created a large language model that is accessible to others through an API. "They are informative of larger, broader problems."

Cohere has developed ways to guide the output of its algorithms, which are now being tested by some businesses. It curates the content the algorithms are fed and trains them to catch instances of bias or hateful speech.
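The article does not detail Cohere's pipeline, but a generic version of curating training text is to score documents and drop those above a toxicity threshold. A minimal sketch, with a stand-in keyword scorer where a real pipeline would use a trained classifier:

```python
# Generic data-curation sketch, not Cohere's actual pipeline: score each
# training document and keep only those below a toxicity threshold.
def toxicity_score(text: str) -> float:
    """Stand-in scorer; a real pipeline would use a trained classifier."""
    flagged = {"slur", "hateful"}  # toy keyword list for illustration
    words = text.lower().split()
    return sum(w in flagged for w in words) / max(len(words), 1)

corpus = [
    "a recipe for lentil soup",
    "a hateful rant full of slur after slur",
]
clean = [doc for doc in corpus if toxicity_score(doc) < 0.1]
print(clean)  # only the benign document survives the filter
```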

Frosst says the controversy around Delphi reflects a broader question the technology industry is wrestling with: how to build technology responsibly. Too often, he says, when it comes to content moderation, misinformation, and algorithmic bias, companies try to wash their hands of the problem by arguing that all technology can be used for good and for bad.

When it comes to ethics, "there's no ground truth, and sometimes technology companies quit because there's no ground truth," says Frosst. "The better approach is to try."

Updated, 10-28-21, 11:40 am ET: An earlier version of this article incorrectly stated that Mirco Musolesi is a professor of philosophy.

Updated, 10-29-21, 1:10 pm ET: An earlier version of this article misspelled Nick Frosst's name and incorrectly identified him as Cohere's CEO.

