How researchers are trying to unlock the secret of artificial intelligence algorithms
Examining a bank loan application, making a medical diagnosis, predicting which product or service a customer will buy based on their tastes: algorithms and artificial intelligence (AI) play an increasingly important role in daily life. Yet despite the impact these decisions can have, a mystery remains: what data do these algorithms rely on to determine whether a client is solvent, whether a patient is at risk of contracting an illness, or whether a customer is interested in a product? It is this unknown that three researchers from Carnegie Mellon University in the United States set out to explore.
Anupam Datta, Shayak Sen and Yair Zick are interested in how the algorithms underpinning AI actually work, and have just published their results. Using a measurement system of their own design, Quantitative Input Influence (QII), they sought to determine the weight that each piece of information fed to an algorithm carries in its decision making.
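The core idea behind QII, measuring a feature's influence by intervening on it, can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the toy loan classifier, the feature names, and the simple "fraction of flipped decisions" influence measure are all assumptions made for the example.

```python
import random

def unary_qii(classify, dataset, individual, feature, samples=1000, seed=0):
    """Estimate the influence of one feature on classify(individual):
    the fraction of random interventions on that feature that change
    the classifier's decision for this individual."""
    rng = random.Random(seed)
    original = classify(individual)
    changed = 0
    for _ in range(samples):
        intervened = dict(individual)
        # intervene: replace the feature with a value drawn from the dataset
        intervened[feature] = rng.choice(dataset)[feature]
        if classify(intervened) != original:
            changed += 1
    return changed / samples

# Toy example: a "loan" classifier that only looks at income.
dataset = [{"income": 30, "married": 0}, {"income": 70, "married": 1},
           {"income": 40, "married": 1}, {"income": 90, "married": 0}]
approve = lambda p: p["income"] > 50
mr_x = {"income": 80, "married": 0}
print(unary_qii(approve, dataset, mr_x, "income"))   # income drives the decision
print(unary_qii(approve, dataset, mr_x, "married"))  # prints 0.0
```

Because the classifier above never reads the marital-status field, intervening on it never flips the decision, so its measured influence is exactly zero; income, which the classifier does use, gets a large influence score.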
The opacity of algorithms under scrutiny
"The transparency of algorithms is the subject of increasing demand, as private and public organizations use increasingly large volumes of personal data, and increasingly complex systems for analyzing this data to inform their decisions," they explain in an article presenting the first results of their research, published at the end of May. Greater transparency, they continue, would make it possible to identify discriminatory decisions and prediction errors, and to hold organizations accountable for their decision-making processes.
The researchers carried out a first phase of tests of their QII measurement system on standard machine-learning algorithms. Their conclusion: beyond transparency reports produced at the level of an organization, it is necessary to build individualized transparency reports for each person analyzed (client, patient, etc.).
Understanding the classification
These reports make it possible to explain how, in a given context (a decision to be made), each element of an individual's profile (income, level of education, marital status, etc.) affects his relative "classification" compared with other profiles.
To illustrate their point, the authors of the study detail three typical profiles. The first, that of Mr. X, shows the positive impact that his level of income has on his positioning. The fact that he is unmarried with a dependent child, on the other hand, has a negative impact.
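An individualized report of this kind can be sketched as a table of signed, per-feature contributions. Again a hypothetical sketch rather than the study's code: it assumes a numeric scoring function and measures each feature's contribution as the score lost, on average, when that feature is randomized.

```python
import random

def transparency_report(score, dataset, individual, samples=500, seed=0):
    """For each feature of `individual`, estimate its signed contribution:
    the individual's score minus the average score after replacing that
    feature with values drawn from the dataset. Positive means the feature
    pushes this individual's score up; negative, down."""
    rng = random.Random(seed)
    base = score(individual)
    report = {}
    for feature in individual:
        total = 0.0
        for _ in range(samples):
            intervened = dict(individual)
            intervened[feature] = rng.choice(dataset)[feature]
            total += score(intervened)
        report[feature] = base - total / samples
    # largest influences first, as in a per-individual chart
    return dict(sorted(report.items(), key=lambda kv: -abs(kv[1])))

# Toy scoring rule echoing Mr. X: income helps, being unmarried hurts.
score = lambda p: 0.5 * p["income"] - 2 * (1 - p["married"])
dataset = [{"income": 2, "married": 1}, {"income": 4, "married": 1},
           {"income": 6, "married": 0}, {"income": 8, "married": 1}]
mr_x = {"income": 10, "married": 0}
report = transparency_report(score, dataset, mr_x)
```

With this toy rule, the report shows a large positive contribution for income (Mr. X earns more than the dataset average) and a negative one for his unmarried status, mirroring the pattern the researchers describe for their Mr. X.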
More contextualized, the profile of Mr. Z comes from what the researchers call their "arrests" database (profiles of individuals arrested by the police over a given period). Analysis of this profile shows that his origin and year of birth weighed more heavily toward his arrest by the police than, for example, the fact that he has no drug history.
It is easy to see the benefit of deploying the QII system on a larger scale. Many sectors, such as insurance, banking, health, sales, and even IT security, already use algorithms to deepen their knowledge of their targets. Indeed, the three researchers announce that they are looking for industrial partners to deploy their QII system on operational machine-learning algorithms. While this research serves the transparency of AI algorithms, another unknown remains: whether organizations will play along and publish this data.
SEE: the full research published by the U.S. academics
Photo credit: Fotolia