Of course, AI makes mistakes. Because the data an AI learns from is produced by its human environment, the AI inherits the same biases that humans have.
What is especially frightening about AI is that it keeps repeating the same mistakes. If biased material continues to be used as the AI's foundation, the bias becomes entrenched. Hiring is one such case. Even more frightening would be a "legal AI" that learns from Japanese court rulings in order to render judgments: there is a real fear that its judgments could never be updated.
In human society, however, judges' decisions change as society's perceptions change. That is why there are cases where a court declines to follow past precedent in a similar case, ruling that the earlier judgment was wrong.
This means AI needs to learn with an awareness of social conditions and common sense. Otherwise, people will end up saying "that AI's thinking is outdated." For current AI, the only way to avoid this is to keep quantitatively increasing its data over the long term.
■Google's image-recognition development: Black people labeled as "gorillas"
Google is known for developing image-recognition AI, training it on enormous amounts of data from the Internet. Among that data were images of Black people associated with the label "gorilla," and a serious problem came to light when the system tagged photos of Black people as "gorilla."
Because there was radically less data on Black people, the AI accepted the "gorilla" label at face value. Such prejudice exists in human society, and the AI simply learned it as it was.
Even now, most of the world's data comes from a "white society"; data on Black and Asian people is comparatively scarce, so conclusions favorable to white people tend to be drawn. The same danger applies to all data used in AI learning if development proceeds this way.
For example, whether medical data gathered from Caucasians applies to Japanese patients is questionable. Likewise, a legal AI built on one country's legislation will carry that country's biases.
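The dynamic described above can be sketched with a toy classifier (a hypothetical illustration with made-up data, not Google's actual system): when one group dominates the training set, the model's coverage of the underrepresented group is sparse, and ambiguous inputs default to the majority group.

```python
# Toy 1-nearest-neighbour classifier on a skewed training set.
# All data here is invented for illustration only.

# 95 examples of group "A", only 5 of group "B" -- a heavily imbalanced set.
train = [((i, 0), "A") for i in range(95)] + [((i, 10), "B") for i in range(5)]

def classify(point):
    """Label a point by its nearest training example (squared Euclidean distance)."""
    def dist(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return min(train, key=lambda ex: dist(ex[0], point))[1]

# A point close to B's few examples is still classified correctly...
print(classify((2, 9)))   # "B"
# ...but with so little B data, inputs outside that narrow region
# fall back to the majority label, even when they sit between the groups.
print(classify((50, 5)))  # "A"
```

The classifier itself has no prejudice; the skew in its data alone is enough to pull ambiguous cases toward the overrepresented group.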
■Legal data and precedent: the "independence of judges"
Law is a field where AI is being introduced relatively quickly, and I think it would be easy to build an AI that learns from past precedents. However, an innovative ruling that overturns past precedent would, to put it mildly, be hard for such an AI.
That is because difficult problems such as racism and the gap between rich and poor are involved. An AI that judges from a foundation of the wealthy's common sense would not produce innovative rulings that eliminate discrimination.
Innovative rulings remain the role of human judges. Here lies democracy's great principle: the "independence of judges." [Article by kenzoogata]