Having whittled away significantly at the ambient protectionism of yesteryear through market deregulation, globalisation can now justifiably be held responsible for encouraging the creation of new virtual borders linked to exclusive data ownership and the emergence of Big Data. Market wars have thus made way for a data and computer algorithm war.
AI is gaining ground relentlessly, frequently outwitting human beings. In the determined, but possibly unhealthy, pursuit of ever-higher performance and ever-lower costs, organisations are therefore increasingly deciding to automate tasks that used to be performed by people. In a social era that has produced an inspiring, if uneven, consensus about the (formerly disputed) multidimensional character of organisational performance, the first questions are now emerging about the socially responsible nature of AI and the resultant reliance on computer algorithms.
CSR certifications take inspiration from ISO 26000, which highlights the various pillars of CSR, including sustainable development, stakeholder well-being, and organisational ethics.
It is now more acceptable to acknowledge that - all things considered, and all criteria combined - algorithms perform better and score higher in rationality and decision-making than human beings.
However, it is worth noting that AI’s undeniable strength in taking rational decisions, while optimising profitability and minimising costs, does not actually guarantee that CSR certification criteria will be met and good performance upheld. There are at least three reasons for this, to which a fourth might be added:
In terms of security, AI signals a transition from classic ‘terrorist attacks’ to a new form of terrorism: ‘algorithmic terrorism’. Our era is increasingly marked by cybercrime, which is one of the principal issues raised by the development of AI. To illustrate this point, the most sophisticated attempts to manipulate and defraud our financial markets and health systems now take place exclusively in cyberspace, by attacking AI systems. This omnipresent threat should never be underestimated.
Whatever the undeniable advantages of AI in terms of automating tasks, speeding up execution and compensating for the fallibility of human rationality, it is important to note that human intervention (at least for the present) remains the philosopher’s stone of successful digitalisation. The precision and reliability of computer algorithms are intrinsically dependent on the quality and reliability of the ‘input’ (interface, computer code and data), which is still provided by people.
To sum up, the triptych - AI, algorithmic decision-making and CSR - may yet co-exist peaceably if an appropriate legal framework and a well-trained algorithmic army are put in place to mitigate the limitations discussed above and the inherent risks of AI: an essential condition for achieving organisational performance in the various CSR areas.
However, will it be easy to implement a legal framework capable of integrating the diversity and complexity of AI’s idiosyncratic characteristics, thereby limiting its drawbacks and ultimately promoting CSR? It seems highly doubtful!
It nonetheless seems inevitable that using AI in this way will enhance it, making Artificial Intelligence more intelligent and less artificial. This will be the subject of a future publication.