Eticas

High-performance AI without the risks. 

Demystifying AI: What is it, actually?

Much has been said lately about artificial intelligence (AI). Incredible applications are at our fingertips: we can talk to a machine just as we do with friends who are miles away. We can ask machines to write poetry, create images out of nothing (well, out of a huge database of information) and even give quick orders to some being on our mobile device. And some fears have been rising, just as they did back in the ‘90s, when cyberpunk movies prophesied the worst futures imaginable.

But let’s demystify it a bit and explain, in plain human words, what this AI thing is and how it works.

AI is the term we use to describe the simulation of human intelligence processes by machines. But that definition does not tell us much about how it works internally. In fact, human intelligence and human reasoning are themselves often biased. And the definition does not explain how a machine actually does it.

So, what exactly is AI? Simply put, it is a combination of algorithms designed to solve specific problems that would be difficult to solve with traditional programming.

When someone writes a program, they write a series of instructions (code) that a machine follows step by step. This can get very complex for big projects, but modern techniques and good management make it possible to handle large amounts of code and maintain very complex programs. There are many familiar examples of this, such as a web browser or an operating system: a user clicks in one area, which triggers a certain action; a user presses somewhere else, which launches a certain program.
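To make that concrete, here is a deliberately tiny, hypothetical sketch (in Python, with invented button names) of this traditional style of programming, where every behaviour is a rule the programmer wrote down in advance:

# Traditional programming: every behaviour is an explicit, hand-written rule.
def handle_click(button: str) -> str:
    if button == "save":
        return "Document saved"
    if button == "open":
        return "File dialog opened"
    return "Nothing happens"  # the programmer must anticipate every case

print(handle_click("save"))  # -> Document saved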

However, this approach is not well suited when there is uncertainty about the input, when the goal is to predict the statistically best output or to offer suggestions, or when the amounts of data involved would require changing the code so often that it becomes unmaintainable. This is where AI algorithms come in handy.
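By contrast, here is a minimal sketch of the statistical approach, using the scikit-learn library and a made-up four-message training set (the messages and labels are purely illustrative). Nobody writes a rule saying that messages containing “prize” are spam; the algorithm infers the statistics from examples:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny, made-up training data: example messages and their labels.
messages = ["win a free prize now", "cheap loans click here",
            "meeting at ten tomorrow", "lunch with the team today"]
labels = ["spam", "spam", "not spam", "not spam"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(messages)    # turn text into word counts
model = MultinomialNB().fit(X, labels)    # "learn" word/label statistics

# For new input, the model predicts the statistically most likely label.
print(model.predict(vectorizer.transform(["claim your free prize"])))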

A bit of history of AI 

Mankind has long sought to replicate human behavior through technology. We will not list all of the cases here, but we will mention some notable examples in modern computing that can show how the concept of AI began.  

Probably the first example was a program that learned to play checkers in 1951. In the same year, another program was written to play chess. One can imagine what it was like to see a huge machine “playing”, an activity until then reserved for intelligent beings, and playing such complex games involving reasoning at that. But these were still programs following instructions.

Then mathematical theory and Bayesian methods were developed to perform inductive inference and prediction. Combine this with computers that were designed to be faster and faster, and with programs that translated this mathematical theory into practice, and we are on our way to modern AI.

Computers are good at doing calculations, so this new paradigm was an excellent fit, especially when those calculations are complex and many of them are needed to get a good result. These new programs executed rules grounded in solid mathematics and began to solve more and more complex problems, gaining users’ trust.

Some well-known early applications of AI include MYCIN, an expert system that helped identify bacteria causing blood infections and saved doctors time, and ELIZA, the first chatbot to explore human-computer communication, which included a famous script called DOCTOR that simulated a psychotherapist.

Moving forward 

After that, the concept of “AI agents” appeared: a group of everyday applications of AI, along with older systems that have been refined and perfected.

Examples include Deep Blue, the chess-playing computer that defeated world chess champion Garry Kasparov, the first Roomba, speech recognition software included in some operating systems, and the rovers NASA sent to Mars that navigated without human intervention.

Not to mention the great advances made in medicine with image recognition, or in finance in multiple areas such as fraud detection. 

This has made AI one of the most important fields of research, given the potential it offers and the number of complex problems it solves.  

Returning to the present, and to our question of what AI is: it is a computational way of solving problems, mainly statistical, with a “training” phase that relies on large amounts of data prepared so that these algorithms can “learn” how to solve a problem.

The particularity is that while an ordinary program can be controlled by modifying its code, AI algorithms are based on mathematical calculations, and sometimes we end up with black boxes of complex calculations that cannot be modified directly, only discarded and trained again. What we call training is the process by which we teach an AI system to solve a problem or perform a specific task by exposing it to a large amount of data. So, if you ask an AI model to paint a car like Rembrandt, it statistically builds a representation of a car, works out what it would look like based on the data it has about Rembrandt’s paintings, and generates an image that statistically should match the request.
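To see what “training” boils down to in the simplest possible case, here is a deliberately tiny sketch with made-up numbers (the flat sizes and prices are invented for illustration). The point is that what the model “learns” is a handful of parameters, not human-readable rules, which is why a badly trained model is usually discarded and retrained rather than edited:

import numpy as np

# Made-up training data: size of a flat (m2) and its price (in thousands).
sizes  = np.array([30, 50, 70, 90, 110], dtype=float)
prices = np.array([90, 150, 200, 260, 320], dtype=float)

# "Training": find the straight line that best fits the examples.
slope, intercept = np.polyfit(sizes, prices, deg=1)
print(f"learned parameters: slope={slope:.2f}, intercept={intercept:.2f}")

# Prediction: a statistically plausible price for a flat it has never seen.
print(f"estimated price for 80 m2: {slope * 80 + intercept:.0f} thousand")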

It is widely recognized that AI has become a powerful tool, but it must be used carefully. It can give incorrect answers if trained on inaccurate sources. It can lead to wrong decisions if trained on biased data. But it is still an extraordinary tool with countless applications to help us in our daily lives (when it’s audited).