What danger ChatGPT poses to humanity
What is ChatGPT? On the one hand, a tool that shows what AI is really capable of. From my perspective, however, it is the biggest technological threat we have seen in years – all because people want simple answers, not real ones.
ChatGPT has been the hottest topic of recent weeks, among both the media and average users. And it is not surprising: after years when the famous "artificial intelligence" could do little more than enhance photos on our phones or power image search on Google, it has finally made its presence felt in a tangible way. Naturally, people rushed to test the new tool, businesses are starting to build on derivatives of OpenAI's product, and competitors are busily preparing solutions of their own. At the same time, ChatGPT's capabilities are still being explored, and not a week goes by without news of something new that this (or a similar) program can do. Humanity is clearly excited about the possibilities on offer, and practically out of nowhere ChatGPT has been elevated to the role of "the next big thing". Meanwhile, I stand aside and think: if this is what the future is going to look like, we should be on our guard.
From the library to TikTok – the road to ChatGPT
Let’s start from the beginning. As a society, we have always sought to simplify our decisions, and you can see it in almost every sphere of life. We went from movies in the theater to movies on TV and then to VOD, because it is more convenient. Getting knowledge today no longer means searching the library; it means reading content online, watching videos on YouTube, or even tens-of-seconds clips on TikTok. The same goes for everything else – ordering groceries online, using robot vacuum cleaners. Wherever possible, we seek simplification, and that fact is beyond doubt.
At one time, Google was such a simplification. We are used to the fact that we can type any phrase into it and the algorithm will pick the appropriate results for us. Whether they are relevant or not, it is still faster than manually scouring pages for the information we are interested in. And just as Google made a splash in the early 2000s, ChatGPT is now (or rather could be) the next step in this wave of simplification. Because when we enter a query on Google, the tool sends us a series of pages to look at. To a very limited extent, admittedly, but Google still forces us to engage with the source material.
ChatGPT does not. Instead, ChatGPT gives people what they want – an answer
Let me give an example. People very often ask me the question that probably every technology journalist hears: "Which phone would you recommend?" What do you think those people want to hear? A thorough analysis of their buying needs, budget, and expectations, or a quick, simple answer – "brand X, model Y"? If you think the first, you are wrong. People do not want to hear "it depends"; they want a simple answer to their question. And simple does not mean true. It is for such people that ChatGPT, and the solutions that will follow it, were created.
And here the nuances begin. The tools we already have – Google, YouTube, Facebook, TikTok, and so on – are considered the main culprits behind today’s global problems, such as disinformation, fake news, and information warfare. Even now, people generally do not check the sources of information, allow themselves to be manipulated, and do not question what they are offered. And all this while verifying a piece of information is just two clicks away on Google. It shows how far we have gone down the road of convenient simplifications I described a moment ago. The number of people who deny the pandemic or believe in the harmfulness of 5G is telling enough. Even with thousands of sources available, people still prefer to believe what they want to believe.
Now imagine that into this world comes a tool that knows the answer to every question – yet no one knows where those answers actually come from. Where and how ChatGPT and its derivatives get their information, and how they process it, remains a mystery. What does ChatGPT consider true and what fake? Whose opinion matters to it and whose does not? How (if at all) is the content the chat produces moderated? No one knows, and hardly any of its millions of users ask. What matters is that every time, the AI gives us a simple answer to our question.
If ChatGPT takes hold, we as a society will be in great danger
Let’s assume for a moment that ChatGPT spreads the way Google has today. In that case, I cannot see it as anything other than a tool that the propaganda machines of totalitarian regimes would kill for. Why? Because it would mean we have raised a digital oracle that tells people what is true and what is not. And while real sources would still exist, there is no chance they would attract the attention of a majority looking for easy and quick answers. In this climate, it is enough for the creators to flip one virtual lever, and we have a tool for shaping public opinion on an unprecedented, unimaginable scale.
Let me remind you: AI today not only explains how to cook risotto but also answers questions about whether climate change is real and whether the Holocaust really happened. And questions like these top Google’s annual search lists year after year. By generating answers to them for potentially hundreds of millions of recipients, ChatGPT will automatically decide what information reaches people. And what if someone at the controls dislikes climate activism, or has their own vision of historical truth? Do not imagine that, with a few tweaks and the right selection of sources for the bot, they could not influence the views and beliefs of people all over the world. What is more, this kind of manipulation would be almost impossible to detect, since the chat gives a slightly different answer every time.
ChatGPT will not fool humanity. It will only show the state we are already in
Let’s face it, each of us is looking for some kind of simplification in life. Take historical events, recent or distant: they very often become the subject of debate, and each side has its own version of the truth. Do you need to be vaccinated against the coronavirus? Is the Earth flat? Should you invest in cryptocurrencies? We will hear different answers to all of these questions. And I will not be surprised if people turn to the chat to settle them, because the temptation to hear one condensed dose of knowledge is irresistible. This, in turn, leaves the creators of a solution like ChatGPT with a monopoly on what is true and what is not.
As a journalist, I am not afraid of a future dominated by such a solution because it might make my work redundant. Everything I write will be ingested, polished by the chat, and spat out as an answer for which I will not receive even half a penny. I do not mind: horses were once replaced by cars, and no one cries for them. What scares me to the core, however, is a future in which one corporation holds a monopoly not only on what information reaches users (Google already has that) but also on what is true and what is not. Somehow it seems to me that as humanity we will simply swallow this solution – and that will be the beginning of our end.