ESDES Blog for a sustainable development

Artificial Intelligence (AI), machine rationality, and CSR: from utopian morganatic polygamy to a peaceful household

Written by Landry Simo | Mar 9, 2022 8:30:00 AM

Having whittled away significantly at the ambient protectionism of yesteryear through market deregulation, globalisation can now justifiably be held responsible for encouraging the creation of new virtual borders linked to exclusive data ownership and the emergence of Big Data. Market wars have thus made way for a data and computer algorithm war.


AI is gaining ground relentlessly, which frequently means outwitting human beings. In the determined, but possibly unhealthy, pursuit of ever-higher performance and ever-lower costs, organisations are, therefore, increasingly deciding to automate tasks that used to be performed by people. In a social era that has brought an inspiring, if uneven, consensus about the (formerly disputed) multidimensional character of organisational performance, we are now seeing the emergence of the first questions about the socially responsible nature of AI and the resultant reliance on computer algorithms.


CSR certifications take inspiration from ISO 26000, which highlights the various pillars of CSR, including sustainable development, stakeholder well-being, and organisational ethics.
It is now widely acknowledged that - all things considered, and all criteria combined - algorithms outperform human beings in rationality and decision-making.


However, it is worth noting that AI’s undeniable strength in taking rational decisions, while optimising profitability and minimising costs, does not actually guarantee that CSR certification criteria and good performance will be upheld. This is due to at least three reasons, to which we might add a fourth:

  • In ethical and social terms, AI destroys countless human jobs, even though it also creates new ones via Schumpeter’s mechanism of creative destruction. There is no guarantee that having machines as work colleagues will contribute as much to the organisation’s well-being as the opportunities for socialisation created by interpersonal relationships. AI takes decisions based on statistics that are likely to be biased and may produce inexact, discriminatory, opaque and unjust results. As it also undeniably lacks the ability to put decisions in perspective, AI may prove to be amoral. On the other hand, algorithmic performance could overtake human performance if AI’s greater profitability leads mutatis mutandis to a greater propensity and capacity for investing in social and philanthropic fields. This reasoning is not, however, without its limitations, since it depends on the managerial strategy of each relevant organisation.
  • Environmentally, AI is more energy-intensive than people because it runs on electrical energy, whereas people are powered by their own metabolism. Depending on the energy supply, therefore, a machine workforce may actually turn out to be less ecologically viable than manpower.
  • In economic terms, automation certainly contributes to higher revenues and lower costs, but it does not necessarily optimise well-being for all stakeholders. What is more, AI leads to massive investments in the construction of databases and computer algorithms and, above all, in the protection of the data collected and thereby made vulnerable. These costs are therefore likely to dilute the economic advantages of AI and influence value creation for stakeholders.
  • In terms of security, AI signals a transition from classic ‘terrorist attacks’ to a new form of terrorism: ‘algorithmic terrorism’. Our era is increasingly marked by cybercrime, which is one of the principal issues with developing AI. To illustrate this point, the most sophisticated attempts to manipulate and defraud our financial markets and health systems now take place exclusively in cyberspace, by attacking AI systems. This omnipresent threat should never be underestimated.

Whatever the undeniable advantages of AI in terms of automating tasks, speeding up execution and compensating for the fallibility of human rationality, it is important to note that human intervention (at least for the present) remains the philosopher’s stone of successful digitalisation. The precision and reliability of computer algorithms depend intrinsically on the quality and reliability of the ‘input’ (interface, computer code and data), which is still produced by people.


To sum up, the triptych - AI, algorithmic decision-making and CSR - may yet co-exist peaceably if an appropriate legal framework and a well-trained algorithmic army are put in place to mitigate the limitations discussed above and the risks inherent in AI: an essential condition for achieving organisational performance in the various CSR areas.


However, will it be easy to implement a legal framework capable of integrating the diversity and complexity of AI’s idiosyncratic characteristics, thereby limiting its drawbacks and ultimately promoting CSR? It seems highly doubtful!


It nevertheless seems inevitable that using AI in this way will enhance it, making Artificial Intelligence more intelligent and less artificial. This will be the subject of a future publication.