Building a foundation for ethical AI in government
By Deepak Ramanathan

The wave of technological advancement is sweeping through the world, and the public sector is no exception.
With all the hype around artificial intelligence (AI), and generative AI (GenAI) in particular, governments arguably have the greatest potential to harness this technology, given their access to vast amounts of data and extensive resources.
Yet, amongst government entities, the adoption of AI so far appears uneven and typically trails the private sector. NTUC LearningHub’s Industry Insights Report 2024 found that 87% of Singapore business leaders have adopted GenAI technologies to some extent for day-to-day work, whilst more than 80% believe that GenAI skills will soon be essential for most job roles.
What’s certain is that the allure of AI is becoming too compelling to overlook, and in the next few years we will likely see its adoption accelerate amongst government organisations. Some governments, such as Singapore’s, are taking significant strides with AI initiatives, and AI looks set to become a key enabler of their strategies moving forward. Singapore’s 2024 Budget announcement allocating $1b to accelerating AI development and adoption over the next five years demonstrates this strong commitment.
Learning from early adopters
As with other technologies, rushing to be the first to adopt isn’t always the wisest choice; a more strategic, deliberate approach to adoption will pay dividends. For government divisions that are “dipping their toes” in the technology, it may be better to focus on decision-making support and small-scale deployments in the early stages. They don’t necessarily have to go all-in on AI, nor do they have to go in blind.
By now, many first-movers in other industries have already experimented with AI, failed, learned and made progress in their efforts. Government leaders can and should heed the lessons and best practices derived from these experiences. They not only present a significant opportunity for government agencies but can also pave the way for eventual broader adoption.
One advantage is that governments are usually adept at making long-term investments and managing risks, unlike many private-sector companies. On the flip side, they are often “set in their ways” and face cultural barriers as well as skills gaps that may hinder the adoption of new and advanced technologies like AI.
Thus, taking a balanced approach to AI adoption, as Singapore has done so far, may be the way to go, as it can better facilitate innovation whilst safeguarding consumer interests.
The potential of AI for government
Ultimately, AI shouldn’t be viewed as a replacement for the existing workforce, but as a tool to augment human intelligence in government services. There are numerous ways the public sector can use AI to greatly benefit the well-being of citizens, such as:
Improving operational efficiency through the automation of repetitive tasks, even relatively complex ones.
Making smart cities smarter, especially when AI is used in tandem with other technologies like big data analytics, the Internet of Things (IoT) and cognitive computing.
Protecting citizen identities and personal data by using synthetic data to build and train systems.
Preventing costly breakdowns and disruptions that could jeopardise public safety through predictive maintenance of infrastructure (a minimal illustrative sketch follows this list).
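To make the predictive maintenance point concrete, here is a minimal sketch of how a simple anomaly check on infrastructure sensor data might look. It is purely illustrative: the readings are simulated, and the window size and threshold are assumptions for the sake of the example rather than recommendations; real deployments would use richer models and live telemetry.

```python
import numpy as np

# Purely illustrative: simulated vibration readings from a piece of public
# infrastructure (e.g. a pumping station). All values and thresholds here
# are assumptions for this sketch, not a reference implementation.
rng = np.random.default_rng(42)
readings = rng.normal(loc=1.0, scale=0.05, size=500)   # normal operation
readings[450:] += np.linspace(0, 0.6, 50)               # gradual drift towards failure

WINDOW = 50        # size of the rolling baseline window
THRESHOLD = 3.0    # flag readings more than 3 standard deviations above baseline

alerts = []
for i in range(WINDOW, len(readings)):
    baseline = readings[i - WINDOW:i]
    z_score = (readings[i] - baseline.mean()) / baseline.std()
    if z_score > THRESHOLD:
        alerts.append(i)   # candidate point for scheduling an inspection

print(f"First anomalous reading at index {alerts[0]}" if alerts else "No anomalies detected")
```

The idea is simply that readings drifting well outside their recent baseline can trigger an inspection before a costly failure occurs.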
These are but a glimpse of what AI can deliver to governments, and its potential is truly immense. At the same time, however, overreliance on, or overconfidence in, AI’s “magical abilities” can create unrealistic expectations and set the stage for disappointment and failure. When AI falls short of these expectations, it will not only hinder future adoption but also erode public trust.
Laying the groundwork for trust
Trust is a huge piece of the puzzle because, on the current trajectory, AI algorithms will play a growing role in supporting government decisions that have real and lasting impacts on citizens’ daily lives. Governments therefore have to take the necessary steps early on to ensure the transparent, ethical and responsible use of AI, as this is essential to building public trust. Regardless of the scale, any decision made with AI must not only be fair and ethical but also, crucially, be seen as fair and ethical.
For governments that are still in the early days of their AI journey, there are inevitable challenges to navigate and a multitude of factors to consider. Building a strong foundation for AI begins with establishing trust and ensuring the ethical and responsible use of AI.