
Artificial intelligence: what it has to do with you and cybersecurity

With the popularization of artificial intelligence chatbots, tech giants have kicked off a race to offer tools equipped with this type of technology. We now have tools capable of creating a wide variety of content from just a few prompts.

How does AI work?

In practice, a conversation with AI tends to involve a sequence of prompts through which the user requests an article, scripts, lyrics, or even programming code. Everything that is sent can be absorbed by the artificial intelligence, building a kind of creative repertoire. That's why you need to be careful about what you share: the service can use what it learns from you to provide more precise answers to other users.

The mechanism behind these artificial intelligence tools is called machine learning. Unlike ordinary applications, which follow specific, human-designed rules, a machine learning algorithm is designed to find associations and probabilities in large volumes of data. The functionality of AI is a result of these associations and of the human corrections applied to its errors. A minimal code sketch of this idea follows the examples below.

Here are some examples:

  • Object recognition: AI can "learn" what a cat is by processing millions of photos of cats (and dogs, humans, cars...);

  • Speech recognition: AI doesn't have hand-written rules that interpret a language; instead, it compares countless hours of recorded voices with their transcriptions;

  • Programming assistant: AI can "learn" to program by imitating code found in online repositories.
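As a rough illustration of "finding associations in data", here is a minimal, hypothetical sketch in Python using the scikit-learn library (the article names no specific tool): a tiny text classifier is given no rules about its two topics; it only estimates word-to-label probabilities from labeled examples.

    # Minimal machine-learning sketch: the model has no hand-written rules;
    # it only estimates statistical associations from labeled examples.
    # Assumes scikit-learn is installed (pip install scikit-learn).
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Tiny, hypothetical training set: sentences labeled by topic.
    texts = [
        "the cat sat on the mat",
        "my cat chases the neighbor's dog",
        "compile the code and run the tests",
        "this function returns an empty list",
    ]
    labels = ["animals", "animals", "programming", "programming"]

    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(texts, labels)  # "learning" = counting word/label co-occurrences

    # The trained model assigns probabilities to sentences it has never seen.
    print(model.predict(["the dog sat on the mat"]))        # -> ['animals']
    print(model.predict_proba(["refactor this function"]))  # probability per label

Real systems work at a vastly larger scale, but the principle is the same: the behavior emerges from the data, not from explicit programming.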

How is AI used in cybercrime?

Due to the recent and astronomical growth of artificial intelligence, some cybercriminals have already started using AI to create scams. Examples include:

  • Deepfakes: AI creates convincing probabilistic imitations, making it easier to produce fake content using people's voices and faces. Entirely fake nude or explicit images can be generated featuring the face of someone who has never been in that situation.

  • Phishing: phishing emails usually use identical text for thousands or millions of recipients, which makes the message easier to block. With AI, varied and even more personalized messages can be created, as the sketch after this list illustrates.

  • AI poisoning: since AI needs to process large volumes of data, a criminal could deliberately feed malicious data into an AI's training to impair its use.
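To see why identical mass mailings are easier to block than AI-personalized ones, here is a purely illustrative Python sketch (real spam filters are far more sophisticated): it fingerprints each message body, so exact copies of a mass campaign are caught, while a unique, personalized message slips through.

    # Illustrative only: fingerprint message bodies and flag repeats.
    import hashlib

    def fingerprint(body: str) -> str:
        # Normalize case and whitespace so trivial variations still match.
        normalized = " ".join(body.lower().split())
        return hashlib.sha256(normalized.encode()).hexdigest()

    seen = {}
    inbox = [
        "Your account is blocked, click here now!",
        "Your account is blocked, click here now!",  # identical mass copy
        "Hi Maria, the invoice you mentioned on Tuesday seems overdue...",  # personalized
    ]
    for body in inbox:
        fp = fingerprint(body)
        seen[fp] = seen.get(fp, 0) + 1
        status = "BLOCKED (repeated text)" if seen[fp] > 1 else "delivered"
        print(status, "->", body[:45])

A filter that relies on exact repetition has nothing to match against when every message is different, which is precisely what generative AI makes cheap.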

When is it dangerous to use AI?

"Generative AIs" are capable of creating apparently original content, including images, texts, emails, or summaries. They also tend to be the most unpredictable.

AI works with probabilities, which means it can make mistakes, or "hallucinate". There is no guarantee that the facts stated in an AI-generated article are correct, for example. Everything must be checked.

AI-generated images, on the other hand, can contain copyrighted content. It's difficult to know for sure how an AI was trained, which means it may have been fed protected images and reproduced excerpts from this material. The images may also contain unexpected deformations.

The biggest risk is that many AIs are offered in the form of connected websites or applications and share all the data from the usage session. In practice, the use of this type of AI may be incompatible with privacy protection laws or confidentiality agreements, especially if no privacy settings are available.

During AI training, chats from usage sessions may even be reviewed by human workers.

Some organizations have been contracting dedicated, private AI services, while other companies already restrict the use of remote generative AI altogether to avoid data leaks.

When is it safe to use AI?

Generative AI should be used with care and with full attention to the privacy settings of each service.

Non-generative AI, on the other hand, can easily be used safely in other contexts. Some tools recognize text, which can be very useful when scanning documents (see the sketch below). There are also artificial intelligence tools that improve video quality and restore images, which can help recover memories recorded on tapes or in photos damaged by time. Many smartphones today automatically enhance their photos with AI.
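As one concrete non-generative example, the sketch below extracts text from a scanned page with optical character recognition (OCR). It assumes the Tesseract engine plus the pytesseract and Pillow packages are installed; the file name is hypothetical.

    # Non-generative AI in practice: OCR on a scanned document.
    # Requires the Tesseract engine, plus: pip install pytesseract pillow
    from PIL import Image
    import pytesseract

    page = Image.open("scanned_page.png")  # hypothetical scanned document
    text = pytesseract.image_to_string(page)
    print(text)

Nothing here is invented by the model: it only transcribes what is already on the page, which is why this kind of use carries far less risk than generative AI.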

Many AI services are putting up barriers to prevent misuse, and some governments are considering laws that would require watermarks in videos produced with AI to inhibit the use of deepfakes. At the moment, however, the scenario is still highly uncertain.

Article originally written in Portuguese by Perallis Security Content Team: Inteligência artificial: o que ela tem a ver com você e com a cibersegurança? — Perallis Security