Last month, Stanford researchers declared that a new era of artificial intelligence had arrived, one built atop colossal neural networks and oceans of data. They said a new research center at Stanford would build and study these so-called “foundation models” of AI.
Critics of the idea were quick to speak up at a workshop organized to mark the launch of the new center. Some objected to the limited capabilities and sometimes bizarre behavior of these models; others warned against focusing too heavily on one particular way of making machines smarter.
“I think the term ‘foundation’ is terribly wrong,” Jitendra Malik, a professor at UC Berkeley who studies AI, told attendees at the video conference.
Malik acknowledged that one type of model identified by the Stanford researchers – large language models that can answer questions or generate text from a prompt – has great practical use. But he said evolutionary biology suggests that language builds on other aspects of intelligence, such as interaction with the physical world.
“These models are really castles in the air; they have no foundation whatsoever,” Malik said. “The language we have in these models is not grounded; there is this fakeness, there is no real understanding.” He declined an interview request.
A research paper coauthored by dozens of Stanford researchers describes “an emerging paradigm for building artificial intelligence systems,” which it labels “foundation models.” Ever-larger AI models have produced some impressive advances in recent years, in areas such as perception and robotics as well as language.
Large language models are also foundational to big technology companies such as Google and Facebook, which use them in areas such as search, advertising, and content moderation. Building and training large language models can require millions of dollars’ worth of cloud computing power; so far, that has limited their development and use to a handful of well-heeled tech companies.
But big models are problematic, too. Language models inherit bias and offensive text from the data they are trained on, and they have zero grasp of common sense or what is true. Given a prompt, a large language model may spit out unpleasant language or misinformation. There is also no guarantee that these larger models will keep producing advances in machine intelligence.
The Stanford proposal has divided the research community. “Calling them ‘foundation models’ completely messes up the discourse,” said Subbarao Kambhampati, a professor at Arizona State University. There is no clear path from these models to more general forms of AI, Kambhampati said.
Thomas Dietterich, a professor at Oregon State University and former president of the Association for the Advancement of Artificial Intelligence, said he has “huge respect” for the researchers behind the new Stanford center and believes they are genuinely concerned about the problems these models raise.
But Dietterich wonders whether the idea of foundation models isn’t partly about securing funding for the resources needed to build and operate them. “I’m surprised they gave these models a fancy name and created a center,” he said. “That does smack of flag planting, which can have several benefits on the fundraising side.”
Stanford has also proposed the creation of a national AI cloud to make industry-scale computing resources available to academics working on AI research projects.
Emily M. Bender, a professor of linguistics at the University of Washington, said she worries that the idea of foundation models reflects a bias toward investing in the data-centric approach to AI favored by industry.
Bender said it is especially important to study the risks posed by big AI models. She coauthored a research paper, published in March, that drew attention to problems with large language models and contributed to the departure of two Google researchers. But, she said, scrutiny should come from multiple disciplines.
“There are all these adjacent, really important fields that are just starved for funding,” she said. “Before we throw money at the cloud, I would like to see money going into other disciplines.”