Creating human impact with artificial intelligence
By Ritin Mathur

The rising prominence of artificial intelligence (AI) in our conversations, in the way we work and do business, in how we protect communities against crime and even in the way news is reported to us on television is unprecedented. Its ability to learn from and process massive amounts of data far outstrips any other current technology, and it will only continue to grow as we make greater strides in computer processing power and deep learning algorithms.
In Singapore, the AI ecosystem is thriving with the emergence of start-ups, growing collaboration across sectors and the drive to develop capabilities and innovative solutions. A recent study by Microsoft and IDC Asia/Pacific estimated that AI will nearly double the rate of innovation and employee productivity improvements in Singapore by 2021, and the majority of people in Singapore believe that AI will either help to do their existing jobs better or reduce repetitive tasks.
From manufacturing to healthcare, financial services and the public sector, the adoption of AI is pervasive, promising powerful capabilities beyond our imagination.
Should we be afraid of AI?
For all the benefits of AI, there are three areas where it provokes some degree of alarm, or even fear.
1. Rise of machines: There is a fear that, with the rise of AI, intelligent machines may not share the ethical and moral dimensions of humanity. This facet of AI leaves us wondering what the world would be like with the rise of non-empathetic machines. The truth is that today's AI is at its most reliable in task-specific functions, such as piloting a self-driving vehicle.
2. Replacing jobs: As organisations continue to adopt AI to handle lower-level and repetitive tasks, employees are freed to focus on complex work that requires human empathy, such as difficult conversations, people management and projects requiring creativity. Using AI in ways that make people more successful and boost customer satisfaction is what we call “human-centred AI”. This puts employees and customers first, ahead of the technology that supports them.
3. Digital ethics: As algorithms become more advanced and bring increasing levels of accuracy to specific tasks, it becomes more difficult to make sense of how they work. This is largely because of how the models learn and the increasing complexity of their learning architecture. When we combine the inherent opacity of these algorithms with datasets that may be limited, unrepresentative or laced with undesirable real-world biases, scenarios arise in which the AI's decisions are discriminatory and undesirable.
What AI does brilliantly today is to take real-world examples and experiences, captured as data, and match patterns to yield a prediction. A leap from this point to sentience, however, is a tad far-fetched at this juncture. A far more important emphasis now is to ensure there is fairness in AI and an ethical dimension in the decision-making that AI supports.
Putting heart in AI
According to our recent Trendlines research, 89% of global executives say they have encountered an ethical dilemma at work caused by the increased use of smart technologies and digital automation, whilst 87% admit they are not fully prepared to address the ethical concerns that exist today.
An effective approach to address these concerns is to develop a digital ethics framework that has a comprehensive definition of ethics and an accompanying system of values and moral principles for the digital interactions between people, business and things.
A digital ethics framework should also include a process to identify and correct areas where bias may creep in, whether in the way the objective is defined, how data is collected, or how data is prepared for training the AI algorithm. The framework should also define how potential biases across the entire value chain should be corrected.
For example, one way to protect against potential biases at the data collection stage is to identify ‘protected groups’ and ensure statistical parity for them, by setting different decision thresholds in favour of the protected groups so that selection rates are comparable. Another way to look at the fairness of an algorithm is to ensure that its error rate is the same across sub-groups. The sketch below makes both notions concrete.
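To illustrate, here is a minimal sketch in Python of how these two checks might look, assuming a binary classifier that outputs a score per record alongside a group label. The function names (selection_rates, error_rates), the thresholds and the data are hypothetical illustrations for this article, not any particular library's API.

```python
# Hypothetical sketch of the two fairness checks described above: statistical
# parity via group-specific thresholds, and equal error rates across sub-groups.
# All names and data here are illustrative assumptions, not a real system.
import numpy as np

def selection_rates(scores, groups, thresholds):
    """Approve each record using its group's threshold, then report the
    approval (positive-decision) rate per group. Statistical parity asks
    these rates to be roughly equal across groups."""
    cutoffs = np.array([thresholds[g] for g in groups])
    decisions = scores >= cutoffs
    return {g: decisions[groups == g].mean() for g in np.unique(groups)}

def error_rates(scores, groups, labels, threshold=0.5):
    """Report the misclassification rate per group at a shared threshold.
    The second notion of fairness asks these rates to be equal."""
    decisions = scores >= threshold
    return {g: (decisions[groups == g] != labels[groups == g]).mean()
            for g in np.unique(groups)}

# Illustrative data: a model score, a group tag and a true outcome per record.
rng = np.random.default_rng(0)
scores = rng.uniform(size=1000)
groups = rng.choice(["A", "B"], size=1000)
labels = (scores + rng.normal(0, 0.2, size=1000)) > 0.5

# Setting a lower threshold for protected group "B" raises its approval
# rate towards parity with group "A".
print(selection_rates(scores, groups, {"A": 0.6, "B": 0.5}))
print(error_rates(scores, groups, labels))
```

In practice, such checks would run on a model's real scores and demographic data; the point is that both parity of selection rates and parity of error rates can be measured directly, and a digital ethics framework can set thresholds or retraining triggers against them.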
AI may be a long game, but regarding it only as a long game isn’t enough. Think of AI as a long game that needs a short plan. That short plan can include identifying potential quick wins, keeping pilots simple and educating people as we go.