Artificial General Intelligence: What Is It?

You’ve heard of artificial intelligence, now more commonly known as AI. But what about artificial general intelligence? Sometimes referred to as AGI, this decades-old mainstay of science fiction and philosophy embodies the idea of an AI capable of human-like reasoning.

When chatbots and generative AI first reached a mass audience, that milestone seemed tantalizingly close. Yet once the shine of these tools wore off, most experts concluded that they are not capable of reasoning the way humans do.

“Despite numerous breakthroughs, generative AI lacks human-like reasoning because it operates based on patterns and data without grasping their meaning,” said Karthik K., IEEE member. “It cannot yet generalize, going from situations with very little information and applying it to a broader context. It cannot extend already-learned concepts to new situations.”

So, what is the difference between AI and AGI? And how would we know when it has reached the level of human decision-making? Those questions are hard to answer, in part because there is no universally accepted definition of AI, let alone AGI.

But those questions are important to ask nonetheless, because they illuminate both the capabilities and the limitations of today’s technology.

NO CLEAR DEFINITION OF AI

The definition of AI is squishy and touches on multiple disciplines. One editorial in IEEE Transactions on Artificial Intelligence noted that scholars from several fields, including computer science, psychology, biology, math, and physics, have tried to define it.

“Every definition has been criticized and has failed to obtain consensus within each of these disciplines, let alone universal consensus,” the editorial noted.

TESTS FOR AGI

In the absence of a clear definition, theorists have proposed a variety of tests for artificial general intelligence. The idea is that we might not be able to define it, but we would know it when we see it.

The most famous is the Turing Test, proposed by Alan Turing in 1950. It envisions a group of experts convened to ask questions of an “oracle.” If the experts cannot tell whether the answers come from a human or a machine, then, according to the test, AGI has been reached.
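For readers who want a concrete picture of that setup, here is a minimal sketch in Python of a blind-judging trial in the spirit of the test. It is an illustration only: the helper names ask_human, ask_machine and ask_judge are hypothetical placeholders for however the conversations and verdicts would actually be collected, not part of any real benchmark.

import random

# Minimal conceptual sketch of a Turing-style blind trial (illustration only).
# ask_human(question) and ask_machine(question) return answer strings;
# ask_judge(transcripts) returns the label the judge believes is the machine.
# All three are hypothetical placeholders, not a real API.

def run_trial(ask_human, ask_machine, ask_judge, questions):
    transcripts = {
        "A": [(q, ask_human(q)) for q in questions],
        "B": [(q, ask_machine(q)) for q in questions],
    }
    # Present the two anonymous transcripts in random order so the judge
    # cannot rely on position to spot the machine.
    order = list(transcripts)
    random.shuffle(order)
    guess = ask_judge({label: transcripts[label] for label in order})
    return guess == "B"  # True if the judge correctly identified the machine

def machine_passes(trial_results):
    # In the spirit of the test, the machine "passes" if judges do no better
    # than chance at telling it apart from the human.
    return sum(trial_results) / len(trial_results) <= 0.5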

Another, the Wozniak Coffee Test, is more whimsical. It is based on a purported claim by IEEE Fellow and Apple co-founder Steve Wozniak, who said that no robot would be able to enter a random house and make a cup of coffee. The idea is that, while making coffee seems simple for a person, it is quite difficult for a machine, given how houses differ in layout, how people organize their cupboards, the varied methods of making coffee, and so on.

ARE CHATBOTS ON THE CUSP OF AGI?

The difference between artificial intelligence and its human-like counterpart exists on a continuum, according to an in-depth discussion from Computer magazine, published by the IEEE Computer Society. Early artificial intelligence developed expertise in a single domain, like the games of chess and Go. Advanced AI systems today can understand language, turn text into images, and analyze medical images for signs of cancer, among other things.

And while those achievements are impressive, these systems don’t exactly think and reason the way humans do.

So how far away are the chatbots from achieving AGI?

At first glance, chatbots like ChatGPT come awfully close to passing the Turing Test. They can confidently draft plausible-sounding essays in a variety of disciplines.

Chatbots can even pass professional licensing exams in fields like law, yet they also fail at the basics. They frequently get math problems wrong, misattribute quotes and don’t always understand cause and effect, arriving at wrong answers on reasoning problems.

“The next step in achieving AGI might involve developing AI systems that can demonstrate more advanced reasoning, problem-solving and learning capabilities across a wide range of domains,” said IEEE Member Sukanya Mandal. “This could include the ability to transfer knowledge from one context to another, to learn from limited data or examples, and to exhibit creativity and adaptability in novel situations.”
