Tackling AI’s problems for the good of humanity

Healthcare, financial services and tech industry experts debated ‘Ethical Considerations in Gen AI and Data Science’. RAID Director Ben Avison reports from the AI & Big Data Expo at TechEx

The development and deployment of generative AI have boomed over the last year or two – but scratch below the surface and the truth is that most products never make it to market. A major pitfall in many systems, according to a study by Snowflake, is a propensity to hallucinate.

For example, large language models (LLMs) have been known to cite legal cases that simply do not exist. “We don’t have a mechanism to know when they are hallucinating,” said Sanjay Puri, Founder, Knowledge Networks Group.

Chandrashekhar Kachole pointed out that humans, like AI, are less likely to make things up as their knowledge of a subject increases. Does this mean that generative AI’s propensity to hallucinate will also decrease as its knowledge grows? Shairil Yahya, Legal Compliance Technology & Solution Director at Philips, thinks so. “Eventually hallucination will go down, but it will still exist,” he said.

Opening the black box

Another concern with AI is transparency. “Are we making data open so people can see that we followed standards when developing the AI?” asked Larry Orimoloye, Principal Architect AI/ML – Field CTO, Snowflake. “Machine learning must be explainable to regulators.”

“It’s a black box in many cases so we don’t know how LLMs are making decisions,” said Puri. “And we don’t know where data is coming from, so there’s inherent bias.”

“Bias is a challenge,” Kachole agreed. “All the data we have behind foundation models can lead to misleading information.”

“Data is the bloodline of AI systems,” said Emily Yang, Head of Human-Centred AI and Innovation, Standard Chartered. “There are a lot of social and cultural dimensions we need to consider, such as training data – whose values are we abiding by?”

Mind the gap

Another issue in this field of cutting-edge technology is regulatory lag.

“You must have regulation – it’s a very powerful tool. But it takes a long time,” said Yahya. “The EU AI Act was introduced this year, but ChatGPT came out four years ago. Regulation is always playing catch-up.”

Regulation might be late, but when it comes it has an impact. “In the EU, due to strict regulations, a lot of companies might move their development operations elsewhere,” said Yahya.

However, regulation is not necessarily bad for business. “I think you can have your cake and eat it,” said Orimoloye. “There are techniques and tools to mitigate against loss of privacy and security. There are guardrails and also ways to embrace innovation.”

Regulations can also create new business opportunities, such as the development of tools to detect and counter threats. The financial sector is particularly prone to cyberattacks and fraudulent transactions.

“That’s the biggest challenge,” said Kachole. “Fraud detection is not new, but AI has accelerated it. With AI you have the ability to process many more transactions. We used to process 6-7% of transactions – now we can process them all. We are talking about billions of transactions happening every hour.

“In fintech, we’ve seen companies providing fraud detection-by-AI as a service.”

Harmonising a fragmented world

Another challenge for international companies complying with regulations is the lack of standardisation worldwide. “We don’t have global standards,” said Puri. “There’s the EU AI Act, but no policy in the US and other countries.”

“We are living in a fragmented world,” said Puri. “Different states are putting out policies. There are frameworks; you can extrapolate best practices the OECD has put out. If you are a large company, you have internal resources to do this.”

Responding to a question from RAID about the impact of horizontal legislation on vertical industries, Yahya highlighted that companies are responsible for complying with regulations. “When you put a regulatory framework in place, someone has to be held responsible,” he said. “Everyone must follow the rules.”

And the rules are there for the good of customers. “In the fintech sector, regulations are there to protect the end consumer – if you’re making an online purchase, for example,” said Kachole.

“Responsible usage and adoption of AI is tied into our code of conduct,” said Yang.

Human-centred AI

People also play a vital role in the development of responsible AI.

Yang highlighted the growing use of the term “human-in-the-loop” to discuss oversight of AI deployment. “Human-in-the-loop suggests that humans are an afterthought,” she said, advocating for putting people at the centre of AI much sooner, for example in the development phase.

The kind of people you include matters too. “The more diverse thoughts you have in the room when you’re creating AI, the more comprehensive a tool you can come up with. It’s not just demographic diversity that’s important – bring in psychologists, designers and users too,” said Yang.

“The training and application of people developing AI is as important as regulation,” said Yahya. “You can have all the regulation in the world, but you also need to have the right people developing AI.”

The speakers on the panel at TechEx were: Shairil Yahya, Legal Compliance Technology & Solution Director, Philips; Emily Yang, Head of Human-Centred AI and Innovation, Standard Chartered; Larry Orimoloye, Principal Architect AI/ML – Field CTO, Snowflake; Sanjay Puri, Founder, Knowledge Networks Group; and Chandrashekhar Kachole, CTO. The conversation was led by expert moderator Saber Fallah, Professor of Safe AI and Autonomy, University of Surrey.