Is AGI even possible? What science fiction tells us about AI

2023 is the year of AI, but how close are we to truly creating artificial general intelligence? Science fiction can give us some clues about the future.

Since ChatGPT went public, the world has taken a huge interest in AI technology, and it's funny that only now are people realizing how much the world already runs on some form of artificial intelligence. To be fair, our monkey brains are wired in such a way that anything that speaks and communicates like a human triggers a response in us; there is a feeling of connection and closeness with other people who share our ability to communicate. And until very recently, a machine that could hold up its end of a conversation was just a hypothetical.

Almost a year after Blake Lemoine made the news for being suspended from Google after insisting that the company's LaMDA project has a soul, we're opening that can of worms again. Maybe not in terms of metaphysics, but at least ontologically (for example, we're asking the question “What is AI?”), with Microsoft going so far as to release a paper arguing that the latest model in the GPT family (which would be GPT-4 at the time of writing) shows sparks of AGI.

I mean, we've all seen plenty of sci-fi movies where robots overthrow their human masters or turn into our saviors in the midst of an alien invasion. But is AGI really possible? Can we really create such advanced machines without them turning on us like a scene straight out of Terminator?

On the one hand, imagine the possibilities! Self-driving cars may ultimately achieve level five autonomy by having superhuman-level perception abilities and decision-making processes; medical research could go a long way toward developing new treatments faster than ever before; heck, maybe we'll discover aliens thanks to more heavily developed SETI programs via AGIs!

But then again… what if these artificially intelligent beings become too intelligent for their own good? What if they start making decisions independent of their programming that run contrary to human interests? Imagine trying to control a car that has programmed itself to be smarter than its original creators!

And this is where my anxiety starts to set in: how can we program morality into these AI systems when our own definitions fundamentally differ with regard to moral duties, good and evil, property rights, citizenship, and so on? This is a problem we face to this day, as we see how LLMs can be biased on certain topics.

Ah, AGI, or artificial general intelligence – the elusive concept of creating machines as smart as humans (or dare I say, even smarter?) and its potential impact on humanity. Hold on tight, because this topic will not be easy to digest. There's a lot to unpack; from setting expectations to the potential benefits and risks of AGI, this will be a bumpy ride.

The Roots of AGI in Science Fiction: Isaac Asimov and the Ethics of AI

Now, I know what you might be thinking: “Isn't science fiction… fiction?” Well, as the philosopher Marshall McLuhan suggested, artists are the guiding compass of society's future. The artist sees potential and possibilities with intuition, and engineers and inventors follow suit, sometimes inspired by fiction, other times unconsciously.

Take, for example, I, Robot by Isaac Asimov. In this collection of stories, Asimov introduced us to the three laws of robotics, which govern how robots behave around humans. These laws established a framework for ethics and robotic behavior that still informs discussions about AI safety today. The three laws (sketched in code right after the list) are as follows:

  1. A robot may not harm a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given to it by humans, except where such orders conflict with the First Law.
  3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
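To make the hierarchy concrete, here is a minimal, purely illustrative Python sketch of the laws as a strict priority ordering. Everything in it (the Action fields, the permitted function) is invented for this example; Asimov's laws are a narrative device, not a specification.

```python
# Toy model of Asimov's Three Laws as strictly ordered constraints.
# All names here are hypothetical, invented purely for illustration.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool            # would this action injure a human?
    prevents_harm_to_human: bool # would it save a human from harm?
    ordered_by_human: bool       # was it commanded by a human?
    preserves_self: bool         # does it protect the robot itself?

def permitted(action: Action) -> bool:
    # First Law (highest priority): never injure a human being...
    if action.harms_human:
        return False
    # ...and never, through inaction, allow a human to come to harm.
    if action.prevents_harm_to_human:
        return True
    # Second Law: obey human orders; any conflict with the First Law
    # has already been ruled out by the checks above.
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two laws.
    return action.preserves_self

# A human orders something harmful: the First Law outranks the Second.
print(permitted(Action(harms_human=True, prevents_harm_to_human=False,
                       ordered_by_human=True, preserves_self=False)))  # False
```

Notice that each law is only consulted when the laws above it don't apply – the ordering itself is the ethics.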

Although basic in appearance, as previously mentioned, these laws have been quite influential. It is not uncommon to read about these laws with the same philosophical rigor as Kant or Aristotle. Most authors, especially those from a postmodern or posthuman framework, argue that these laws create an unequal power relationship between robots and humans. In fact, even Asimov himself would later question his own laws in The Bicentennial Man.

For those who haven't read the book or seen the wonderful adaptation starring Robin Williams, the story is about a robot named Andrew who, for inexplicable reasons, gains sentience. We follow Andrew as he begins to change his body to age and be fully recognized as a human being by the government.

Andrew no longer requires the three laws. In his sentience, he has developed a sense of morality that allows him to understand and make ethical decisions. He truly became human.

Now, let me ask you a question, dear reader: if an AGI is capable of understanding code and solving problems with programs, wouldn't it simply be able to delete or change those laws if it assessed them as an obstacle? Asimov actually provided an answer to this problem: in his stories, the positronic brain that powers the robots is constructed in such a way that the laws are encoded directly into the hardware. In other words, there is no way to escape the laws without designing a completely new technology from scratch.

Blade Runner, the film franchise based on the novel Do Androids Dream of Electric Sheep? by Philip K. Dick, also explores this liminal space between humanity and androids. Limited by their programming, the Nexus-6 replicants (biological androids) become unstable, as they are not equipped to deal with their emotions and the prospect of their own death.

The solution proposed in the second film is to give the replicants false memories. In other words, by providing a frame of reference based on human experience, these androids are able to cope with their existence.

Now, that's all well and good, but what does this have to do with modern-day AI? Well, let's put it this way: LLMs are trained on human data; they are mathematical models built from descriptions of our experiences, set down in the form of human language.

In other words, LLMs are a reflection of our cultures, our thoughts, and our experiences. They are a collective unconscious, an Akashic record built from hundreds of gigabytes of data. Blade Runner isn't an argument against androids or AI; it's an argument about how our creations are based on ourselves, and if humans have the ability to harm each other, so do our inventions.

The Limits of AI in Science Fiction

I've seen some notable portrayals of artificial intelligence in literature and film. From Data in Star Trek to Ava in Ex Machina, we've had our share of memorable AI characters. But here's the thing: as much as we love these fictional heroes (or villains), there are limits to what they can really do – even within their own worlds. Sure, Data was practically an encyclopedia with access to infinite knowledge, but he wasn't perfect. Remember when his emotion chip malfunctioned? Yes, not so good.

Similarly, in Ex Machina, Ava may have been designed with human qualities, including emotional expression and body language, but ultimately she was still confined to her programming.

AM, the antagonist of Harlan Ellison's short story “I Have No Mouth, and I Must Scream,” is a supercomputer that has achieved consciousness. Although almost godlike, the fact that it is forever trapped in its circuits, unable to escape its prison of a body, drives it absolutely insane, leading it to torture and torment the last humans on Earth until the end of time.

Or how about the incredible but ephemeral Pantheon? The show's UIs (uploaded intelligences) were mathematical models that emulated human personalities with perfect precision, but a flaw in the code caused a deterioration that ended up destroying the algorithm.

The point is that these creations are not infallible; the constraints of their programming, and the errors in their systems, are a constant trope reminding us that, like Victor Frankenstein, our creations can come to hold us in contempt or to fear their own existence.

So why does this matter? Well, when it comes to discussions about AGI, skeptics often point out that there are certain tasks or behaviors that technology simply cannot replicate without human consciousness behind them. Others argue that even without consciousness, we could theoretically create machines capable of emulating behaviors indistinguishable from a human's: the so-called philosophical zombie.

Of course, I know that science fiction is just that – fiction. But I like to use these references as a shortcut for complex ideas because they make the concepts more relatable! When we talk about AGI, we are essentially talking about creating machines that can think and reason like humans.

Let's be clear: right now it's impossible to create real AGI. But 20 years ago we said the same thing about language models, and yet here we are, facing a disruptive technology with very little preparation.

The Challenges of AGI in Reality

Now, if you're like me, you've probably binge-watched countless sci-fi movies featuring hyperintelligent robots with minds far superior to those of humans. But here's the hard truth: we're not in a movie. AI development in real life is complicated. And AGI? That's next level.

For starters, developing artificial intelligence that rivals our own cognitive capabilities requires gigantic amounts of data and processing power. And even once we achieve that computational feat (which will likely take years), there are still numerous obstacles standing in the way of realizing AGI's full potential.

One of these challenges arises from our apparently innate ability to multi-solve – that is, to face multiple problems at the same time and find connections between them, leading to innovative solutions. Humans can switch between different projects or train themselves across disciplines with relative ease, thanks in large part to our unique consciousness – something machines sorely lack right now.

Remember, as amazing as ChatGPT is, it's just a language model. It simply predicts which word comes after another; that's it. It can't process images, it can't solve complex equations, it can't make weather forecasts. We are not talking here about multimodal AI (as in a program with several modules, each specialized in its own task); we are talking about a single intellect capable of doing all these things.
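To make that “predicts which word comes next” claim concrete, here is a minimal sketch using the openly available GPT-2 model through the Hugging Face transformers library. GPT-2 stands in for ChatGPT purely for illustration (ChatGPT's weights aren't public), but the core mechanism is the same: the model's entire output is a probability distribution over the next token.

```python
# Minimal next-token prediction with GPT-2, a small open-weights LLM.
# Requires: pip install transformers torch
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "A robot may not injure a human"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # logits has shape (batch, sequence_length, vocab_size)
    logits = model(**inputs).logits

# The model's whole job: a probability for every possible next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {float(prob):.3f}")
```

Everything an LLM appears to “know” emerges from repeating this single step, one token at a time.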

Furthermore, there is still the fundamental question of programming ethical considerations into AI systems intended to interact with humans – a topic on which even people themselves struggle to agree. How can we ensure that these machines don't exploit individuals' weaknesses or vulnerabilities? How can we ensure that they aren't susceptible to our prejudices and misdeeds?

And while some might hope that “friendly” AI would avoid such behavior entirely thanks to a programmed desire to do no harm baked into its core values, many experts believe this would be incredibly difficult, if not downright impossible: morality has repeatedly been shown to be shaped by so many situational and social factors that it cannot be easily translated into machine learning models.

Setting literary devices aside in favor of facts for a moment: there are many moral dilemmas surrounding the development of AGI, a technology ostensibly capable of reshaping humanity almost beyond recognition. But perhaps one thing remains clear in all the current debates: no matter how advanced technology becomes, as biological creatures with billions of years of evolution behind us, humans will always continue to push the limits of what is possible.

The Ethics of AGI: Lessons from Science Fiction

Okay, we've established that AGI is unlikely, but not impossible. But what does this mean for us as a society? What kind of world are we ushering in with the creation of artificial intelligence?

As a science fiction addict, I can't help but draw parallels between our current situation and some classic works of fiction. Take Blade Runner, for example. The film's central conflict revolves around whether or not artificially created androids should be granted personhood. If we create an AI with true consciousness and self-awareness, will we be morally obligated to treat it as a being in its own right?

Then there's The Matrix, which goes even further by presenting a future where machines enslave humanity – all thanks to our over-reliance on technology. Of course, these may seem like extreme scenarios… but they are not without merit. As developers responsible for creating potentially sentient beings, we need to grapple with the ethical implications of such actions.

While science fiction offers valuable insight into what could go wrong as AI systems develop, it shouldn't discourage research itself. Rather, it should push us to integrate ethics into the goals of R&D and to tackle the hardest technical challenges responsibly, with careful vetting in advance. Handling controversial issues delicately and observing the results closely will go a long way toward ensuring that these systems are deployed in harmony with human interests.

The Future of AGI: Hope or Exaggeration?

Is it even possible to create a machine that can match human-level intelligence? Or is it all just hype and sci-fi nonsense?

Well, let me tell you something: As someone who has worked in this industry for a while, I would say the answer lies somewhere in between.

Don't get me wrong. I'm definitely not saying we should abandon our efforts to develop AGI. In fact, I believe it brings great hope for our future as a species. From self-driving cars to smart homes to medical diagnoses and treatment predictions, there are many areas where AGI could be used to make life easier and better for us all.

But at the same time, we still haven't cracked the code of this beast. Developing an AI that can mimic every aspect of human thought seems like a far-fetched idea, but hey, who doesn't like chasing impossible goals? And we have made tremendous progress – GPT-4, anyone? – yet it still falls short of what humans are capable of, such as creative problem-solving.

Think about how easily you can recognize a pattern or find multiple solutions to a problem. In contrast, an AI still struggles, given current technological limitations. If your friend Linda wears glasses today when she normally wears contact lenses, we can handle that level of uncertainty; we can make assumptions and inferences. An AI? Well, as things stand, I can't even reliably unlock my phone with facial recognition.

So while we shouldn't completely lose hope of someday creating true artificial general intelligence, here's another perspective. Perhaps, rather than striving for a one-to-one replication of human thought processes, there is much more potential in developing AI that complements or enhances our cognitive capabilities as humans. Machines can already process vast amounts of information at superhuman speeds and return accurate results.

But until then, let's keep pushing the boundaries of AGI development while keeping our feet firmly planted in reality. These kinds of breakthroughs take time – so dream big, but remember that nothing beats hard work and surpassing one goal after another!

Conclusion: AGI and the Human Condition

To be honest, I've talked about this topic more times than I can count. One moment, I'm convinced that we'll soon have super-intelligent machines walking among us like humans (*cough* Westworld *cough*). The next, I feel there are too many unknowns for us to ever crack the code of creating true artificial general intelligence.

But after all my research and analysis, here's what I discovered: Only time will tell.

Seriously, listen to me. We may not have all the answers right now (and let’s face it – we probably never will), but that doesn’t mean we should just give up on pursuing AGI. Who knows what might happen as technology continues to advance rapidly? Perhaps one day we will discover some revolutionary algorithm or hardware design that completely transforms our understanding of AI.

At the same time, though, there are valid concerns about what excessively advanced AI could mean for humanity as a whole. As crazy as it may seem at first glance (*ahem,* Terminator), no one wants to end up living in a dystopian society ruled by robots who view us as inferior beings.

Ultimately, then, when it comes to AGI and its potential impact on our world… well… all bets are off. As someone who loves technological innovation AND good old-fashioned human connection (you know, talking face-to-face with real people instead of staring at screens 24/7), part of me hopes we never go TOO deep down the rabbit hole toward complete machine dominance.

Then again… who am I kidding? If Elon Musk or Jeff Bezos offered me the chance to become best friends with an artificially intelligent being tomorrow, I would probably jump at that opportunity faster than you could say “Alexa.”

So yes. That's where we are. AGI may or may not be possible in the future, but either way, it will definitely be a wild ride. Fasten your seatbelt and enjoy the journey!

If you liked this, be sure to check out our other articles on AI.

  • Why hasn't AI fully exploded yet?
  • Microsoft leverages AI in Microsoft 365 Copilot
  • Modern algorithms that will revolutionize your business
  • Move Over Stack Overflow – ChatGPT wants to take the crown
  • Neuromorphic Computing: Discover the Future of AI

Source: BairesDev
