Artificial General Intelligence (AGI) and the Real Existential Threat Hypothesis

Anthropological evidence suggests that the human lineage has been around for about six million years. To put that in perspective: if the Earth had been created a year ago, the human species would have existed for about ten minutes, and the industrial age would have begun two seconds ago. We, the most evolved and intelligent species, are in fact recent guests on this planet.

From the mastery of fire onwards, human intelligence has been the primary catalyst behind everything we have accomplished and everything we hold dear. The evolution of human beings, and of the world around them, has turned on relatively minor changes to the human mind. It follows that any further change that significantly alters the substrate of thought could have enormous consequences.

Well, we have already seen another big bang that profoundly changed this substrate, and right now we are in the midst of a revolution. There has been a major paradigm shift from rule-based AI to machine learning, and from human-machine interfaces to machine-to-machine interfaces. Machines can now learn patterns from data and continually improve, much like a human infant. Reinforcement learning, for example, is the process by which a machine trains itself through trial and error.
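
To make the trial-and-error idea concrete, here is a minimal sketch of tabular Q-learning on a toy one-dimensional world. The environment, states, and reward values are invented purely for illustration; they do not describe any system mentioned in this article.

```python
import random

# Hypothetical toy environment: states 0..4 on a line, start at state 0,
# reward of +1 for reaching state 4 (the goal), 0 otherwise.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # step left or step right

def step(state, action):
    next_state = min(max(state + action, 0), GOAL)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

# Q-table: estimated return for each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(500):
    state, done = 0, False
    while not done:
        # Explore occasionally, otherwise exploit the best known action.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward, done = step(state, action)
        # Trial-and-error update: nudge Q toward reward plus best future value.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# After training, the learned policy heads toward the goal from every state.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
```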

We encounter AI every day, and the technology is advancing rapidly. It could be the next evolutionary leap, one that fundamentally changes life on our planet. Artificial intelligence has the potential to revolutionize every aspect of everyday life: work, mobility, medicine, the economy, and communication.

Checkmate on humanity?

Think about it: you watch a video on Facebook or a reel on Instagram, and little by little you start getting suggestions for new videos in the same category as the one you watched earlier. Then something unsettling happens: a chain of automated suggestions starts pouring into your feed, and you find yourself trapped in a cycle that quietly consumes hours of your time.

You watch and scroll continuously without realizing that somewhere an algorithm is reading and analyzing your scrolling pattern and viewing time, and serving up exactly what will keep you on that particular app for longer and longer. In a way, the machine's intelligence is telling the user what to watch. And it happens all the time: you get the same kind of suggestions when you shop online or browse YouTube. Left unregulated, it could amount to a checkmate on human intelligence.
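
As a rough illustration of how engagement-driven ranking can work, here is a hedged sketch that scores candidate videos by their overlap with a user's recent watch history, weighted by watch time. The tags, weights, and scoring rule are invented for illustration; they are not a description of any real platform's algorithm.

```python
from collections import Counter

# Hypothetical watch history: (video tags, seconds watched).
history = [
    ({"cats", "funny"}, 120),
    ({"cats", "kittens"}, 300),
    ({"news", "politics"}, 10),
]

# Candidate videos the platform could surface next.
candidates = {
    "video_a": {"cats", "compilation"},
    "video_b": {"politics", "debate"},
    "video_c": {"cooking", "recipes"},
}

# Build an interest profile: each tag weighted by time spent watching it.
profile = Counter()
for tags, seconds in history:
    for tag in tags:
        profile[tag] += seconds

def score(tags):
    # Higher score = more overlap with whatever already held the user's attention.
    return sum(profile[tag] for tag in tags)

ranked = sorted(candidates, key=lambda v: score(candidates[v]), reverse=True)
print(ranked)  # the feed shows the highest-scoring items first
```

The key point is the feedback loop: whatever you watch longest dominates the profile, which pushes more of the same into the feed, which further skews the profile.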

GPT-3 (Generative Pre-Trained Transformer 3) shakes Silicon Valley

Imagine an AI capable of writing almost anything. Feed it poems by a particular poet and it will write a new one in the same rhythm and genre. It can write a newspaper article (this has already been done). It can read an article, answer questions about the information in it, and even summarize it for you, not to mention that it can generate images from text.

GPT-3 is a deep learning model that produces human-like text. It is the third-generation language prediction model created by the San Francisco start-up OpenAI, which was co-founded by Elon Musk. The program is better than any previous one at producing text that could plausibly have been written by a human. It is a quantum leap because it can prove useful for many businesses and has great potential for automating tasks.

It can write Java code from nothing more than a plain-text description. Or it can mock up a fictitious website from just a URL and a short description.
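
As a hedged sketch of what prompting a model like GPT-3 looks like in practice, the snippet below uses the completions-style interface of the early OpenAI Python client (pre-1.0 versions of the `openai` package); the engine name, prompt, and parameters are assumptions for illustration, and newer versions of the library use a different interface.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: you have API access

# Ask the model to turn a plain-text description into Java code.
prompt = (
    "Write a Java method that takes a list of integers "
    "and returns the sum of the even numbers.\n\n"
)

response = openai.Completion.create(
    engine="davinci",   # assumption: a GPT-3-era completion engine
    prompt=prompt,
    max_tokens=150,
    temperature=0.2,    # low temperature for more predictable code output
)

print(response.choices[0].text)
```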

It all started with a game.

“Go” is arguably one of the most complex games in existence, yet its objective is simple: surround more territory than your opponent. The game has been played for some 2,500 years and is considered the oldest board game still played today. Its complexity goes far beyond the oft-quoted fact that there are more possible board configurations in Go than there are atoms in the universe.

However, humans are no longer the only ones playing. In 2016, Google DeepMind's AlphaGo defeated 18-time world champion Lee Sedol in four out of five matches. Normally a computer beating a human at a game like chess or checkers would not be that remarkable, but Go is different. In countries where the game is hugely popular, such as China, Japan, and South Korea, it is not just a game: it is how you learn strategy, and it carries an almost spiritual component.

The game is far beyond the reach of prediction and cannot be solved by brute force. There are more than 10^170 possible board positions; to put that in perspective, there are only about 10^80 atoms in the observable universe. AlphaGo was trained on data from real human Go games. It worked through millions of games, learned the techniques people used, and even invented new ones that no one had ever seen.

However, a year after AlphaGo's victory over Lee Sedol, a brand-new AI called AlphaGo Zero beat the original AlphaGo 100 games to 0. One hundred games in a row. The most impressive part is that it learned to play without any human game data: it trained purely by playing against itself. It was a huge victory for DeepMind and for AI, and the clearest example yet of one kind of intelligence beating another.
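
To give a loose sense of the self-play idea (this is not AlphaGo Zero's actual method, which combines deep neural networks with Monte Carlo tree search), here is a hedged sketch of a self-play learner for a deliberately tiny invented game: players alternately take 1 or 2 stones, and whoever takes the last stone wins. The same learner plays both sides and improves only from the outcomes of its own games.

```python
import random

START_STONES = 10
V = {}  # learned value of a position for the player about to move

def value(stones):
    return V.setdefault(stones, 0.5)  # unknown positions start at 0.5

def choose(stones, epsilon=0.2):
    moves = [m for m in (1, 2) if m <= stones]
    if random.random() < epsilon:
        return random.choice(moves)       # explore
    # A move is good for me if it leaves a bad position for my opponent.
    return min(moves, key=lambda m: value(stones - m))

def self_play_episode(alpha=0.1):
    stones, trajectory = START_STONES, []
    while stones > 0:
        trajectory.append(stones)         # position faced by the player to move
        stones -= choose(stones)
    # The player who took the last stone won; walking backwards through the
    # game, positions alternate between the winner's and the loser's turns.
    outcome = 1.0
    for s in reversed(trajectory):
        V[s] = value(s) + alpha * (outcome - value(s))
        outcome = 1.0 - outcome

for _ in range(20000):
    self_play_episode()

print({s: round(value(s), 2) for s in sorted(V)})
```

After enough self-play games, the table converges on the right strategy (positions that are multiples of three are losing for the player to move) with no human examples involved, which is the essence of what made AlphaGo Zero striking at a vastly larger scale.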

Artificial intelligence had proven that it could pull together a large amount of data, beyond anything a human could handle, and use it to learn how to predict an outcome. The business implications are enormous.

The case for the imminent arrival of human-level AI generally appeals to the advances we have seen in machine learning to date and assumes that they will inevitably lead to superintelligence. In other words: scale up the current models, give them more data, and voila, AGI.
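
One way the "just scale it up" argument is often formalized is through the empirical scaling laws reported for large language models, in which test loss falls smoothly as a power law in model size. The schematic form is shown below; the constants are empirically fitted quantities, and the relationship is an observation about current models, not a guarantee that it extrapolates all the way to general intelligence.

```latex
% Schematic power-law scaling of language-model loss L with parameter count N;
% N_c and \alpha are empirically fitted constants.
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha}
```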

Will advanced AI transform into Terminators and take control of human civilization?

During the Ford Distinguished Lectures in 1960, economist Herbert Simon proclaimed that within 20 years machines would be capable of any task humans can perform. In 1961, Claude Shannon, the founder of information theory, predicted that science-fiction-style robots would emerge within 15 years.

Mathematician I. J. Good described a runaway "intelligence explosion," a process by which machines smarter than humans iteratively improve their own intelligence. Writing in 1965, Good predicted that the explosion would happen before the end of the twentieth century. In 1993, Vernor Vinge coined the term "singularity" for this threshold and said it would be reached within 30 years.

Ray Kurzweil then declared a law of history, the Law of Accelerating Returns, which predicts the singularity will arrive by 2045. More recently, Elon Musk claimed that superintelligence is less than five years away, and academics from Stephen Hawking to Nick Bostrom have raised concerns about the dangers of rogue AI.

The hype isn't limited to a handful of public figures. Every few years, researchers working in the field of AI are surveyed for their predictions of when we will reach Artificial General Intelligence (AGI), machines as versatile as and at least as intelligent as humans. The median estimates from these surveys give a 10% chance of AGI in the 2020s and a one-in-two chance of AGI between 2035 and 2050. Leading figures in the field have also made striking predictions. The CEO of OpenAI writes that in the decades to come, computers "will do almost everything, including making new scientific discoveries that will expand our concept of 'everything'," and a co-founder of Google's DeepMind has predicted that human-level AI "will have passed in the mid-2020s."

These predictions have consequences. Some have called AGI's arrival an existential threat and wonder whether we should halt technological progress to avoid catastrophe. Others are pouring millions of dollars of philanthropic funding into preventing AI disasters.

Mayank Vashisht | Sub-editor | ELE Times

