COTD: AI integration drives Singapore businesses to boost data security
Businesses are focusing on upskilling, regulatory readiness, and data management to mitigate the risks and challenges of AI implementation.
In Singapore, 76% of businesses tapping third-party AI solutions to integrate AI into their processes for increased revenue and productivity are concerned about data privacy and security risks, KPMG reported.
Businesses are also concerned about dependence on a partner's expertise and resources (58%) and the risk of non-compliance with regulations (52%).
To bridge the AI skills gap, 55% of business leaders are investing in upskilling employees for GenAI, whilst 69% are focusing on training and 61% are hiring new talent.
KPMG revealed that only 16% of organisations have a workforce highly equipped in all areas needed for GenAI utilisation, whilst 78% are moderately equipped.
To mitigate the risks of implementing GenAI, particularly around cybersecurity (79%) and data quality (66%), business leaders aim to deploy ethical AI frameworks (17%), implement stringent data privacy measures (17%), and conduct regular internal compliance audits (11%).
To manage partner-related risks, business leaders seek to incorporate stringent data security protocols in partner agreements (69%), enforce two-factor authentication and other security practices (65%), and conduct regular security audits (64%).
Meanwhile, business leaders expect their partners to implement data privacy and security protocols (27%), use ethical AI guidelines and principles (23%), and integrate risk mitigation and management practices (17%) as safeguards.
In addition, KPMG said that 39% of businesses anticipate that AI regulations will have a high impact on GenAI implementation.
To prepare for AI regulations, businesses are reviewing and updating data handling practices (60%), implementing technical measures for AI transparency and fairness (51%), and updating protocols and procedures to align with new regulations (50%).
Under AI regulations, businesses expect more stringent data privacy and security measures (63%), increased costs from compliance requirements (54%), and a greater focus on transparency and explainability in AI models (52%).