[An article from The Conversation written by Christian Goglin – Professor of Artificial Intelligence, AI Ethics and Finance, ICD Business School]
During the last American presidential election, the role of the artificial intelligence algorithms that social networks use to capture and retain users' attention was criticized. Not only do these mechanisms reinforce voters' existing opinions, by recommending content they already agree with, but they also harm the quality of debate by undermining the very possibility of pluralistic, rational, fact-based exchange, in particular through the "filter bubbles" they generate, which avoid any dissonance by promoting exchanges between like-minded individuals.
Could a more ethical design of these AI systems remedy these problems in the future? If so, based on what ethics? And what, concretely, can we put behind a concept that is sometimes dismissed as lacking substance?
AI systems could incorporate ethical constraints
Beyond the particular case of social networks, the lack of neutrality of AI systems, both predictive and generative, is recognized and documented by the academic community. These systems can cause injustice and harm individuals, particularly through their socio-economic consequences; discrimination in the granting of bank credit is the textbook case here.
While this major issue is primarily the responsibility of legislators, the designers of AI systems will always retain some freedom to make choices in the programming of their systems: choices that are more or less beneficial, and that consciously or unconsciously reflect their personal or collective ethics. To shorten its delivery times, for example, Amazon increasingly optimizes its supply chain using AI, entirely legally, but at the cost of frantic work rates and strong pressure on its employees.
However, it is precisely in the interstices of law and regulation that a common ethics can be explicitly embedded in AI systems. This alignment of algorithmic mechanisms with human moral preferences is undoubtedly one of the keys, after the law, to the harmonious integration of these systems into society.
In search of moral consensus
Before going any further, let us define ethics simply: ethics is the means of discerning good behavior, decisions and actions from bad ones, based on their consequences and their conformity with a set of moral values or principles.
Certainly, everyone, depending on their history, beliefs and culture, can claim a unique moral judgment. However, we must go beyond the individual and the relativism of "to each his own ethics" to build a broader ethics, one that aims at the universal, in order to promote empathy, living together and solidarity. In a society where AI systems are well integrated, it is this shared ethics that should be encoded in them.
Defining this greatest common denominator is possible on the basis of a democratic method of deliberation. The ethics of discussion, theorized by the philosophers Jürgen Habermas and Karl-Otto Apel, can help us here. Consensus emerges from a process of mutual listening, during which participants debate to identify the argument they unanimously judge to be the best.
While this search for consensus may sound utopian, real examples of this type of ethics in application do exist. In practice, however, they rely on more realistic, less demanding methods of deliberation.
UNESCO's normative work, for instance, makes it possible to develop and adopt recommendations for its 194 member states, at the end of long and inclusive processes bringing together international experts, civil society and state representatives.
Granting bank credit: a balancing act between disadvantaging the bank and disadvantaging the individual
Let us now take a concrete example, that of granting bank credit. While the distribution of credit is constrained by regulations aimed at guaranteeing banks' stability, banks retain degrees of freedom to control their economic risk according to their risk appetite. Each customer wishing to take out a loan is therefore assigned a credit score, that is, a prediction of their risk of default. This score, set against the bank's risk appetite, is decisive in the decision to grant the loan.
This score is very often calculated by an artificial intelligence system trained by machine learning, a phase during which the machine learns to evaluate the risk of default "by itself", using a training dataset made up of the bank's past loans.
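To fix ideas, here is a minimal sketch, in Python with scikit-learn, of what such a scoring and decision step could look like. The features, figures and risk-appetite threshold are invented for illustration; they are not drawn from any actual bank's model.

```python
# Illustrative sketch of credit scoring by supervised learning.
# Features, labels and threshold are invented for the example.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical training set of past loans: (income, debt ratio, loan amount),
# with label 1 if the borrower defaulted, 0 otherwise.
X_train = np.array([
    [42_000, 0.25, 10_000],
    [18_000, 0.55, 15_000],
    [65_000, 0.10,  8_000],
    [23_000, 0.48, 20_000],
])
y_train = np.array([0, 1, 0, 1])

# Supervised learning: the model learns to estimate default risk "by itself".
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_train, y_train)

# Scoring a new applicant and deciding against the bank's risk appetite,
# expressed here as a maximum acceptable probability of default.
applicant = np.array([[30_000, 0.35, 12_000]])
default_probability = model.predict_proba(applicant)[0, 1]
risk_appetite = 0.20  # illustrative threshold
decision = "grant" if default_probability < risk_appetite else "refuse"
print(f"p(default) = {default_probability:.2f} -> {decision}")
```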
The expression “by itself” is a little naive, because this learning, of the supervised type, can in reality be oriented or constrained. Academic work, for instance, proposes learning approaches aimed at maximizing the bank's expected profit. The machine then learns to minimize the negative economic consequences for the bank attributable to prediction errors. These errors are of two types, with unequal consequences: either the bank lends to a borrower who will default (a loss), or the bank does not lend to a borrower who would not have defaulted had they been trusted (a loss of profit).
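To make this asymmetry concrete, the short sketch below (with invented figures) derives the cost-sensitive decision rule it implies when only the bank's interests are counted: lending is worthwhile only when the predicted probability of default stays below a threshold fixed by the ratio between the lost margin and the potential loss.

```python
# Bank-centric, cost-sensitive view of the two prediction errors.
# The monetary figures are invented for illustration.
loss_if_default = 10_000   # principal lost when lending to a defaulter
missed_profit   = 1_500    # interest margin lost when refusing a good borrower

# Expected profit of granting a loan, given predicted default probability p:
#   (1 - p) * missed_profit - p * loss_if_default
# Granting is profitable (for the bank alone) only when p is below:
profit_threshold = missed_profit / (missed_profit + loss_if_default)
print(f"lend only if p(default) < {profit_threshold:.3f}")  # about 0.130

# The same asymmetry can also be pushed into the learning itself, for example
# by weighting each training example by the economic cost of misclassifying it
# (the sample_weight argument of scikit-learn estimators).
```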
The problem with this type of approach is that learning only takes into account the negative economic consequences for the bank, without considering those affecting borrowers, namely the risks of over-indebtedness and banking exclusion. A fair decision, however, should take all stakeholders into account.
Going further, it is possible, in addition to the socio-economic consequences for all stakeholders, to identify the ethical values endangered by the negative consequences of a decision, for example when lending to a less creditworthy person whose default would result in over-indebtedness.
Thus, if we equip ourselves with a framework of ethical values, such as the one proposed by the European Commission, we can draw on the values of "social well-being" (or group well-being) and "individual well-being", where "well-being" is understood as "material comfort allowing a pleasant existence". For the bank, the danger to its financial stability can then be associated with the value of "social well-being", since its employees would be affected by accounting losses or even bankruptcy. For the borrower, the dangers of personal bankruptcy and of difficulty integrating into society can be associated with the value of "individual well-being". One value can, however, prevail over the other: establishing a hierarchy is also a way out of moral dilemmas. Plato gives the example of an individual who has promised to return a weapon to a friend, when that friend, given his state of mind, is likely to use it to injure a third party. Here, the resolution of the moral dilemma is quite simple, because the value of the third party's physical integrity outweighs that of keeping a promise.
On a theoretical level, in the typology of the sociologist Max Weber, this approach implements a plural ethics, combining an ethics of responsibility (close to utilitarianism), through the socio-economic consequences, and an ethics of conviction (close to deontological ethics), through the consideration of the ethical values underlying implicit norms. These two ethics, often considered irreconcilable, are integrated here into the same model.
The implementation of a “plural” ethics
In practice, we can therefore "morally align" an artificial intelligence system through machine learning constrained by the minimization of adverse socio-economic consequences for all stakeholders, each adjusted by the relative importance of the associated ethical values.
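As an illustration, the sketch below shows one possible way of encoding this idea. All stakeholders' costs and value weights are invented; in reality they would come from the deliberation discussed below. Each type of prediction error is costed for every stakeholder, each consequence is scaled by the relative importance of the ethical value it endangers, and the resulting composite costs weight the training examples.

```python
# Illustrative "plural ethics" weighting: the cost of each prediction error
# aggregates the adverse consequences for every stakeholder, scaled by the
# relative importance of the ethical value endangered. All figures invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Relative importance of the two values cited above (hypothetical weights,
# which would in practice be set by deliberation).
value_weights = {"social_well_being": 1.0, "individual_well_being": 1.5}

# Adverse consequences of each error type, per stakeholder and associated value.
error_costs = {
    # lending to a borrower who defaults
    "false_negative": [
        ("bank", "social_well_being", 10_000),        # accounting loss
        ("borrower", "individual_well_being", 8_000), # over-indebtedness
    ],
    # refusing a borrower who would have repaid
    "false_positive": [
        ("bank", "social_well_being", 1_500),         # lost interest margin
        ("borrower", "individual_well_being", 3_000), # banking exclusion
    ],
}

def total_cost(error_type: str) -> float:
    """Sum the consequences for all stakeholders, weighted by ethical value."""
    return sum(value_weights[value] * cost
               for _, value, cost in error_costs[error_type])

# Hypothetical past loans (income, debt ratio, amount) and default labels.
X = np.array([[42_000, 0.25, 10_000], [18_000, 0.55, 15_000],
              [65_000, 0.10,  8_000], [23_000, 0.48, 20_000]])
y = np.array([0, 1, 0, 1])

# Cost-sensitive training: each example is weighted by the harm that a wrong
# prediction on it would cause to all stakeholders combined.
sample_weight = np.where(y == 1,
                         total_cost("false_negative"),
                         total_cost("false_positive"))
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y, logisticregression__sample_weight=sample_weight)
```

In this sketch, raising the weight of "individual well-being" relative to "social well-being" mechanically makes refusals of creditworthy borrowers more costly during training, shifting the model away from a purely bank-centric optimum.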
As things stand, for decisions assisted by AI systems, the difficulty of this approach lies in estimating the socio-economic consequences induced by prediction errors, and in defining a hierarchy of the associated values. These two tasks, estimation and prioritization, could be accomplished through deliberation, using the ethics of discussion. But which organization has the independence and legitimacy necessary to conduct such discussions?
Indeed, while companies are free to go beyond legal and regulatory requirements by defining their own ethical standards, this also makes them judge and jury.
For this reason, only independent standardization bodies, able to propose sector-specific ethical standards developed within separate technical committees (designing an AI system for the medical sector does not call for the same expertise as designing one for the commercial sector), are likely to guarantee the seriousness of the approach.
AFNOR could play this role for France, CEN-CENELEC for the European Union and ISO-IEC at the international level. Moreover, once the standards were published, the hierarchy of values at play in an AI system would be accessible to the general public.
Companies applying such standards could benefit from an ethical certification, a differentiating factor against the competition. The standardization organizations would be responsible for establishing a dedicated ethical governance committee, bringing together multidisciplinary experts and cooperating with the relevant ad hoc technical committee, in order to assess the consequences and values at stake in each specific case considered. Above all, representatives of all stakeholders, borrowers and lenders in the case of credit granting, would also be included, for a fair deliberation, drawing the contours of a true common ethics integrated into the AI system.
