How To Use AI, Responsibly

In 2021, a group of researchers set out to quantify just how hot the topic of artificial intelligence ethics had become. They searched Google Scholar for references to AI and ethics, and what they found showed a remarkable uptick in the field. In the more than three decades from 1985 through 2018, they found 275 scholarly articles focusing on the ethics of artificial intelligence. In 2019 alone, 334 such articles were published, more than in the previous 34 years combined. In 2020, an additional 342 articles appeared.

Research into AI ethics has exploded, and much of it has focused on guidelines for building AI models. Now, AI-based tools are widely available to the public. That's left schools, businesses, and individuals to figure out how to use AI ethically, in a way that is safe, accurate, and free of bias.

“Much of the public is not yet sufficiently informed or prepared to use AI tools in a fully responsible manner,” said IEEE Member Sukanya Mandal. “Many people are excited to experiment with AI but lack awareness of potential pitfalls around privacy, bias, transparency and accountability.”

HALLUCINATIONS AND INACCURACIES: THE BIGGEST PITFALLS FOR AI USERS

Because of the way they are built, most generative AI models are prone to hallucinations. They simply make things up, and their seemingly authoritative tone lends false information the appearance of confidence. That is a risk for users, who may pass on the falsehoods. In the U.S., lawyers using generative AI learned this lesson the hard way when they used chatbots to draft legal documents, only to discover that the AI had fabricated nonexistent cases, which they then cited as precedent in their arguments.

“AI may not always be accurate, so its information needs to be checked,” said IEEE President Tom Coughlin.
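To make that checking step concrete, here is a minimal, hypothetical sketch in Python: before passing along citations produced by a chatbot, verify each one against a trusted source. The KNOWN_CASES set and all the case names are made up for illustration; in practice the trusted source would be an authoritative database such as an official reporter or library catalog.

```python
# Hypothetical sketch: flag AI-generated citations that cannot be
# verified against a trusted source. KNOWN_CASES stands in for an
# authoritative database; the case names below are invented.
KNOWN_CASES = {
    "Alpha v. Beta, 123 F.2d 456 (1990)",
    "Gamma v. Delta, 789 F.3d 12 (2015)",
}

def verify_citations(ai_citations):
    """Map each citation to whether it appears in the trusted source."""
    return {cite: cite in KNOWN_CASES for cite in ai_citations}

# A chatbot's draft might mix a verifiable citation with a fabricated one:
draft = [
    "Alpha v. Beta, 123 F.2d 456 (1990)",    # verifiable
    "Epsilon v. Zeta, 555 F.3d 999 (2021)",  # hallucinated
]

for cite, ok in verify_citations(draft).items():
    print("OK        " if ok else "UNVERIFIED", cite)
```

The point of the sketch is the workflow, not the code: AI output is a draft to be verified, never a finished product to be forwarded as-is.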

CAN WE TRUST THE DECISIONS AI MAKES?

Artificial intelligence models are trained on massive amounts of data, and sometimes they make decisions based on extremely complex mathematical functions that are difficult for humans to understand. Users often don’t know why an AI has made a decision.

“Many AI algorithms are ‘black boxes’ whose decision-making is opaque,” Mandal said. “But particularly for high-stakes domains like healthcare, legal decisions, finance and hiring, unexplainable AI decisions are unacceptable and erode accountability. If an AI denies someone a loan or a job, there must be an understandable reason.”
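One common response to the black-box problem is to prefer interpretable models in high-stakes settings. As a minimal sketch, assuming scikit-learn and a made-up loan dataset (the feature names and numbers are illustrative, not from the article), a linear model lets you state an understandable reason for each decision, because every feature's contribution is just its coefficient times its value:

```python
# A minimal sketch of an interpretable loan-decision model,
# assuming scikit-learn and an invented toy dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "debt_ratio", "years_employed"]
# Toy training data: each row is an applicant; label 1 = loan approved.
X = np.array([[55, 0.2, 6], [30, 0.6, 1], [80, 0.1, 10],
              [25, 0.7, 0], [60, 0.3, 4], [35, 0.5, 2]])
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression(max_iter=1000).fit(X, y)

applicant = np.array([[40, 0.55, 2]])
decision = model.predict(applicant)[0]

# With a linear model, each feature's pull on the decision is
# coefficient * value, so a denial can be explained in plain terms.
for name, coef, value in zip(features, model.coef_[0], applicant[0]):
    print(f"{name}: coefficient={coef:+.3f}, contribution={coef * value:+.2f}")
print("decision:", "approved" if decision == 1 else "denied")
```

A deep neural network might score higher on some benchmark, but it cannot produce this kind of per-feature accounting, which is the trade-off Mandal's comment points at.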

WHAT HAPPENS IF WE TRUST AI TOO MUCH?

Because AI models are trained on such large datasets, and because their output sounds so authoritative, they can lull users into a false sense of confidence, causing them to accept decisions without question.

In “The Impact of Technology in 2024 and Beyond: an IEEE Global Study,” a recent survey of global technology leaders, 59% of respondents identified “inaccuracies and an overreliance on AI” as one of their organization’s biggest concerns when it came to the use of generative AI.

WHY IS IT IMPORTANT TO KNOW WHAT DATA WAS USED TO TRAIN AN AI MODEL?

Imagine this: An AI model is trained to screen applicants for a job. Based on data collected over prior years, it forwards to hiring managers the resumes of the people it identifies as most likely to be hired. Except the industry has traditionally been male-dominated. The AI could learn to recognize women's names and automatically exclude those applicants, based not on their ability to do the job but on their gender.

Such algorithmic biases can and do exist in AI training data, making it especially important for users to understand how models were trained.

“Ensuring unbiased data is a shared responsibility across the AI development lifecycle and an ongoing process,” Mandal said. “It starts with those sourcing data being cognizant of the risk of bias and using diverse, representative datasets. AI developers should proactively analyze datasets for bias. AI deployers should monitor real-world performance for bias. Ongoing testing and adjustment are needed as AI encounters new data. Independent audits are also valuable. No one can abdicate bias mitigation solely to others in the chain.”
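What "proactively analyzing for bias" can look like in practice: one widely used check compares selection rates across groups, with ratios below four-fifths (the 80% rule of thumb from U.S. hiring guidance) treated as a red flag. Below is a minimal sketch, assuming a toy list of screening outcomes; real audits would run on a model's actual decisions, on an ongoing basis:

```python
# A minimal sketch of a disparate-impact check on screening outcomes.
# The records are invented; in practice they would come from the
# model's real-world decisions.
records = [
    {"group": "A", "advanced": True},  {"group": "A", "advanced": True},
    {"group": "A", "advanced": True},  {"group": "A", "advanced": False},
    {"group": "B", "advanced": True},  {"group": "B", "advanced": False},
    {"group": "B", "advanced": False}, {"group": "B", "advanced": False},
]

def selection_rate(group):
    rows = [r for r in records if r["group"] == group]
    return sum(r["advanced"] for r in rows) / len(rows)

rate_a, rate_b = selection_rate("A"), selection_rate("B")
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"group A: {rate_a:.0%}, group B: {rate_b:.0%}, ratio: {ratio:.2f}")
# The four-fifths rule flags ratios below 0.8 as possible adverse
# impact that warrants investigating the model and its training data.
if ratio < 0.8:
    print("Warning: possible disparate impact; investigate further.")
```

A single check like this is a starting point, not a clearance: as Mandal notes, bias testing has to continue as the model encounters new data.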

SHOULD YOU TELL PEOPLE WHEN ARTIFICIAL INTELLIGENCE IS USED?

Disclosure is emerging as a key tenet of AI use. When an AI makes a decision in healthcare, for example, patients should be told. Many social media platforms likewise require creators to disclose when AI was used to make or alter a video.
