What do philosophers think about "GPT-3", the AI that generates highly accurate sentences indistinguishable from those written by humans?



GPT-3, a language model developed by OpenAI, a nonprofit organization that researches artificial intelligence, is attracting a great deal of attention because it can generate highly accurate sentences that are indistinguishable from those written by humans. Nine philosophers give their opinions on the various issues and debates raised by GPT-3.

Philosophers On GPT-3 - Daily Nous

◆1: David Chalmers, Professor, New York University

Chalmers pointed out that GPT-3 is basically a scaled-up version of the previous-generation GPT-2 and does not include major new techniques. On the other hand, GPT-3 contains 175 billion parameters and was trained on far more data, which makes it one of the most interesting AIs ever made.

Chalmers also mentioned that GPT-3 is closer to passing the Turing test, which distinguishes machines from humans, than any AI ever created, and he thinks GPT-3 offers a hint of artificial general intelligence (AGI), a general intelligence rather than an AI specialized in only one field.

On the other hand, Chalmers said, GPT-3 poses a number of philosophical challenges: it can be biased by the data used for training, it may deprive human workers of their jobs, and it carries risks of misuse and fraud, among many other ethical issues. Also, because GPT-3 is a pure language system with no goal beyond "completing the text," Chalmers thinks it may be unable to truly understand concepts such as "happiness" and "anger."
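As a rough illustration of what "completing the text" means, here is a minimal toy sketch in Python: a bigram model that extends a prompt with the most frequent next word seen in its training text. GPT-3 does this with a neural network over 175 billion parameters rather than word counts, but the training objective is the same kind of next-word prediction; the corpus and function names here are purely illustrative.

```python
from collections import Counter, defaultdict

def train_bigram(corpus: str) -> dict:
    """Count, for each word, which words follow it in the corpus."""
    words = corpus.split()
    follows = defaultdict(Counter)
    for cur, nxt in zip(words, words[1:]):
        follows[cur][nxt] += 1
    return follows

def complete(follows: dict, prompt: str, n_words: int = 3) -> str:
    """Greedily extend the prompt with the most frequent next word."""
    words = prompt.split()
    for _ in range(n_words):
        cur = words[-1]
        if cur not in follows:
            break  # never seen this word: nothing to predict
        words.append(follows[cur].most_common(1)[0][0])
    return " ".join(words)

model = train_bigram("the cat sat on the mat and the dog sat on the mat")
print(complete(model, "the cat", n_words=3))  # "the cat sat on the"
```

The point of the toy is that the model has no goals, preferences, or grasp of meaning: it only continues text in the statistically likeliest way, which is Chalmers's sense in which GPT-3 is a "pure language system."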

◆2: Amanda Askell, Researcher, OpenAI Policy Team

Askell notes that GPT-3 is interesting because, having been trained on a large amount of data, it can capture complexity and complete a task from just a few instructions, without fine-tuning. On the other hand, she pointed out that on most tasks GPT-3 is far from human level and cannot maintain a consistent identity and consistent beliefs across a context.
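Completing a task "from just a few instructions, without fine-tuning" is usually called few-shot prompting: the task is demonstrated inside the prompt itself, and the model's weights are never updated. A minimal sketch of how such a prompt is assembled (the format and helper function are illustrative, not OpenAI's actual API):

```python
def few_shot_prompt(examples: list, query: str) -> str:
    """Format input/output demonstrations plus a new query as one prompt."""
    lines = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    lines.append(f"Input: {query}\nOutput:")  # the model continues from here
    return "\n\n".join(lines)

# English-to-French translation demonstrated with two examples.
prompt = few_shot_prompt([("cheese", "fromage"), ("dog", "chien")], "cat")
print(prompt)
```

The model is then simply asked to continue this text; because the likeliest continuation follows the demonstrated pattern, the task gets "learned" at inference time without any gradient update.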

Askell said that many philosophers are excited to think about and make predictions for artificial-intelligence models such as GPT-3, and she hopes philosophers will help clarify the debate about the limitations of language models. Should language models eventually be trained on a wider range of data that touches the world beyond language? And should we already be thinking about the moral status of machine-learning models and about indicators of "sentience" in AI? These, Askell argued, are questions worth asking now.

◆3: Carlos Montemayor, Professor, San Francisco State University

"Interacting with GPT-3 is uncanny," Montemayor commented. Humans have long been considered superior to other animals and to machines above all in verbal ability, but if a machine can answer better than the average human, that premise falters. GPT-3's linguistic skill may be unwelcome to anthropocentric values, but it may also be a step toward an accurate understanding of the relationship between human intelligence and language.

On the other hand, Montemayor points out that AI still has a long way to go before it passes the Turing test, and that "what humans use language for" will become the important question. In social life, language is not just a means of systematically encoding semantic information: speakers must attend to the circumstances of an exchange, to mutual expectations, and to patterns of behavior. For that reason, GPT-3 is far from an AI capable of true communication.

◆4: Annette Zimmermann, Ph.D., Princeton University

Zimmermann, while acknowledging that GPT-3's stunning results have astounded the AI community, pointed out its downside: like any other AI, it inherits patterns of historical prejudice and inequality. Prejudice and discrimination embedded in datasets are a major problem in the development of machine-learning algorithms, and she suggests they open up a complicated moral, social, and political problem space.

Zimmermann says that social meaning and linguistic context are so important in the design of AI that researchers need to carefully consider and debate how to keep the technology from entrenching injustice and discrimination. She also argued that in designing AI that helps produce a fairer world, "the human" need not be the standard, as it is in the Turing test; to make more desirable AI, we should not take ourselves, or society as it currently is, as the benchmark.

◆5: Justin Khoo, Associate Professor, Massachusetts Institute of Technology

Khoo argues that the spread of AI-generated speech makes it necessary to address restrictions on bots that impede rational debate. Khoo believes bot speech is like a parrot repeating human language, not "free speech to be protected."

Khoo concedes that "there is an argument that regulating bots deprives the users who operate them of freedom of speech." However, he pointed out that the purpose of protecting free speech is to protect people's attempts to discover the truth by freely sharing opinions and debating. Since bots interfere with people's rational engagement, they should be regulated in the same way as incitement to violence and criminal speech. "Regulating speech bots is necessary precisely in order to protect freedom of speech," Khoo argued.

◆6: Regina Rini, Associate Professor, York University

Rini pointed out that GPT-3 and modern AI are neither the same as human beings nor mere machines. Rini sees GPT-3 as a statistical abstraction of millions of minds, distilled from Reddit posts, Wikipedia articles, news stories, and more.

Most of the interactions people have on the Internet are simple and task-specific, and since bots and chat services already operate online, it is foreseeable that successors to GPT-3 will be used as conversational simulation bots on the Internet. Rini points out that AI indistinguishable from humans may eventually be operating online, making the presence of an individual on the other side of a connection uncertain. It is worth thinking now about what to make of an era in which no one can be recognized as human over the Internet.

◆7: C. Thi Nguyen, Associate Professor, University of Utah

Nguyen grants that GPT-3 is a step toward the dream of building "a truly creative AI that makes art and games." However, while he has no objection to the idea of "AI that creates art" itself, he has concerns about how companies and institutions seeking economic profit from AI-created art and games, including products aimed at children, will use such AI, and about bias in the training data.

Only what is "measurable" can be managed by companies and institutions as a dataset for generative AI. So when they try to produce "good art," the evaluation criteria in the dataset become things like hand-applied "good" and "bad" tags, the number of stars reviewers post on review sites, and view counts. Nguyen points out that grounding art in such measures risks losing its delicate and subtle value. He believes "addictiveness" is already what gets rewarded in the game industry, and similar problems may arise in games created by AI such as GPT-3.

◆8: Henry Shevlin, Ph.D., Cambridge University

Shevlin used GPT-3 to recreate a mock interview (PDF) with Terry Pratchett, the writer who died in 2015. However, the mock interview was not a pleasant conversation about Pratchett's work; it turned horribly existential, and Shevlin says he felt unnerved even while knowing that the other party was not human.

The advent of GPT-3 poses major challenges for the field of AI ethics, such as the creation of fake news, the movement to replace human workers with AI, and the problem of bias in training data. As a result, even scholars in the humanities are now required to acquire basic technical knowledge and understanding, and to engage with the new tools created by technology companies.

◆9: Shannon Vallor, Professor, University of Edinburgh

Vallor, who studies the philosophy of emerging technology, acknowledges that GPT-3 can produce not only interesting short stories and poems but also the occasional piece of executable HTML code. On the other hand, she says it goes too far to link GPT-3 directly to artificial general intelligence; there is a side of hype to it as well.

Vallor points out that the hurdle AI cannot overcome is "understanding" rather than raw performance, arguing that understanding is not a momentary act but sustained, lifelong social labor. The daily work of understanding, which builds, repairs, and strengthens our ever-changing grasp of the other people, things, times, and places that make up the world, is a fundamental component of intelligence, and a predictive, generative model like GPT-3 cannot achieve it, Vallor said.
