Sakana claims its AI-generated paper passed peer review, but it's a bit more nuanced than that

Japanese AI startup Sakana said its AI produced one of the first peer-reviewed scientific publications. But while the claim isn't necessarily untrue, there are caveats worth noting.

The debate around AI and its role in the scientific process grows fiercer by the day. Many researchers don't believe AI is ready to serve as a "co-scientist," while others see potential but acknowledge it's early days.


Sakana falls into the latter camp.

The company said it used an AI system called The AI Scientist-v2 to generate a paper, which Sakana then submitted to a workshop at ICLR, a long-running and reputable AI conference. Sakana claims the workshop's organizers, as well as ICLR's leadership, had agreed to work with the company to conduct an experiment to double-blind review AI-generated manuscripts.

Sakana said it collaborated with researchers at the University of British Columbia and the University of Oxford to submit three AI-generated papers to the aforementioned workshop for peer review. The AI Scientist-v2 generated the papers "end to end," Sakana claims, including the scientific hypotheses, experiments and experimental code, data analyses, visualizations, text, and titles.

"We generated research ideas by providing the AI with the workshop abstract and description," Robert Lange, a research scientist and founding member at Sakana, told TechCrunch via email. "This ensured that the generated papers were on topic and suitable submissions."

One of the three papers was accepted to the ICLR workshop, a paper that casts a critical lens on training techniques for AI models. Sakana said it withdrew the paper before it could be published in the interest of transparency and respect for ICLR conventions.

A snippet of Sakana's AI-generated paper. Image credits: Sakana

"The accepted paper introduces a new, promising method for training neural networks and shows that there are remaining empirical challenges," said Lange. "It provides an interesting data point to spark further scientific investigation."

But the achievement isn't as impressive as it might seem at first glance.

In a blog post, Sakana admits that its AI made "embarrassing" citation errors, for example, incorrectly attributing a method to a 2016 paper instead of the original 1997 work.

Sakana's paper also didn't undergo as much scrutiny as other peer-reviewed publications. Because the company withdrew it after the initial review, the paper didn't receive an additional "meta-review," during which the workshop organizers could in theory have rejected it.

Then there's the fact that acceptance rates for conference workshops tend to be higher than acceptance rates for the main "conference track," a fact Sakana candidly mentions in its blog post. The company said that none of its AI-generated studies passed its internal bar for ICLR conference-track publication.

Matthew Guzdial, an AI researcher and assistant professor at the University of Alberta, called Sakana's results "a bit misleading."

"The Sakana folks selected the papers from some number of generated papers, meaning they were using human judgment in terms of picking outputs they thought might get in," he said via email. "What I think this shows is that humans plus AI can be effective, not that AI alone can create scientific progress."

Mike Cook, a researcher at King's College London specializing in AI, questioned the rigor of the peer reviewers and the workshop.

"New workshops like this one are often reviewed by more junior researchers," he told TechCrunch. "It's also worth noting that this workshop is about negative results and difficulties, which is great, I've run a similar workshop before, but it's arguably easier to get an AI to write about a failure convincingly."

Cook added that he wasn't surprised an AI could pass peer review, given that AI excels at writing human-sounding prose. Partly AI-generated papers passing journal review aren't even new, Cook noted, nor are the ethical dilemmas this poses for science.

AI's technical shortcomings, such as its tendency to hallucinate, make many scientists wary of endorsing it for serious work. Moreover, experts fear that AI could simply end up generating noise in the scientific literature rather than elevating progress.

"We need to ask ourselves whether [Sakana's] result is about how good AI is at designing and running experiments, or whether it's about how good it is at selling ideas to humans, which we know AI is great at already," Cook said. "There's a difference between passing peer review and contributing knowledge to a field."

Sakana, to its credit, makes no claim that its AI can produce groundbreaking, or even especially novel, scientific work. Rather, the goal of the experiment was to "study the quality of AI-generated research," the company said, and to highlight the urgent need for "norms regarding AI-generated science."

"[T]here are difficult questions about whether [AI-generated] science should be judged first on its own merits to avoid bias against it," the company wrote. "Going forward, we will continue to exchange opinions with the research community on the state of this technology to ensure that it does not develop into a situation where its only purpose is to pass peer review, thereby substantially undermining the meaning of the scientific peer-review process."
