We are in an election campaign, and it is public knowledge that these days part of the outcome is decided on social networks. Fake news, bots and party tricks are all over the media, and we know well that the pressure these techniques exert can influence the destiny of a country, even of the whole world. If you don't believe me you can watch the movie "Brexit", or ask yourself how someone like Donald Trump became the most powerful man on the planet.
The media now has a responsibility to filter the news and to mitigate the impact of these fraudulent techniques, especially on those networks where real time matters. But before we start…
What is a bot?
A bot is a computer program that automates tasks normally associated with a person. Like all technology, it can be used for good or for evil. In this sense, a bot can simulate a conversation with a user to guide them on their next trip or advise them on their next purchase. But it can also pose as a real person on social networks to influence the ideas and feelings of others for the benefit of its creator.
How to identify a bot?
There are several methods for detecting whether we are dealing with a bot. None of them is 100% reliable, but by combining them we can reach a fairly reliable estimate. Let's get started:
1. Note the username.
Social accounts for bots are usually created automatically. That’s why these accounts usually have numbers at the end of their username.
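As a rough illustration (and not a rule from any particular platform), a minimal Python sketch for this check could flag usernames that end in a long run of digits; the example usernames are made up:

```python
import re

def username_looks_autogenerated(username: str) -> bool:
    """Flag usernames ending in a long run of digits, a common pattern
    for automatically created accounts."""
    return re.search(r"\d{4,}$", username) is not None

# Hypothetical usernames, for illustration only
print(username_looks_autogenerated("maria_lopez"))      # False
print(username_looks_autogenerated("patriot74219380"))  # True
```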
2. Check the time of publication.
Bots are programmed to answer or retweet a user or a hashtag, and usually do so automatically. Check how much time elapses between a real tweet being posted and the suspicious account responding. If it is very short or almost instantaneous, it is probably a bot.
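A minimal sketch of this check, assuming you already have the two timestamps (for example, from the Twitter API); the near-instant threshold mentioned in the comment is my own assumption, not an official figure:

```python
from datetime import datetime

def response_delay_seconds(original_posted_at: datetime, reply_posted_at: datetime) -> float:
    """Seconds elapsed between the original tweet and the suspicious reply."""
    return (reply_posted_at - original_posted_at).total_seconds()

# Hypothetical timestamps, for illustration only
original = datetime(2019, 11, 4, 21, 0, 3)
reply = datetime(2019, 11, 4, 21, 0, 5)

delay = response_delay_seconds(original, reply)
# An almost instantaneous reply, repeated over many tweets, is suspicious.
print(f"Replied after {delay:.0f} seconds")
```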
3. Account data.
There are several indicators in a user's account that can be relevant when detecting a bot (combined in the sketch after this list):
- Divide the number of published tweets by the time since the account was registered to get an average. An account posting more than 100 tweets a day is very likely to be a bot.
- An account with very few followers but following many accounts can be a bot.
- If the account is newly created, that is another warning sign.
- If most of its publications are retweets, it is even more likely to be a bot.
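A minimal sketch combining these indicators, assuming you already have the basic account data (tweet count, registration date, follower and following counts, retweet ratio); all thresholds are illustrative assumptions:

```python
from datetime import datetime, timezone

def account_signals(tweet_count: int, created_at: datetime,
                    followers: int, following: int,
                    retweet_ratio: float) -> dict:
    """Derive the indicators described above from basic account data."""
    age_days = max((datetime.now(timezone.utc) - created_at).days, 1)
    tweets_per_day = tweet_count / age_days
    return {
        "high_volume": tweets_per_day > 100,                 # more than 100 tweets a day
        "skewed_ratio": followers < 10 and following > 500,  # few followers, follows many
        "new_account": age_days < 30,                        # recently created
        "mostly_retweets": retweet_ratio > 0.9,              # almost everything is a retweet
    }

# Hypothetical account data, for illustration only
signals = account_signals(
    tweet_count=45000,
    created_at=datetime(2019, 10, 1, tzinfo=timezone.utc),
    followers=3,
    following=1200,
    retweet_ratio=0.95,
)
print(signals, "->", sum(signals.values()), "suspicious signals")
```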
4. Profile picture.
We can extract a lot of information from this element. First of all, be wary of user accounts whose profile picture is not a person. Bot makers know this, so they have gone a step further by using profile photos of deceased people, or of people who simply do not exist, generated by artificial intelligence from the features of many faces. To detect stolen profile photos we can run a reverse search on Google Images. This way you can find out whether that photo really belongs to who it claims to be.
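Google's reverse image search has no official public API, but as a rough programmatic substitute you can compare perceptual hashes of a suspicious profile photo against photos you already know. A minimal sketch, assuming the Pillow and ImageHash libraries and hypothetical file names:

```python
from PIL import Image
import imagehash  # pip install ImageHash

def looks_like_same_photo(path_a: str, path_b: str, max_distance: int = 8) -> bool:
    """Compare two images with a perceptual hash; a small Hamming
    distance means the images are identical or near-identical."""
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return (hash_a - hash_b) <= max_distance

# Hypothetical file names, for illustration only
print(looks_like_same_photo("suspect_profile.jpg", "my_real_photo.jpg"))
```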

In my case I have checked that my profile picture is not being used by any bot
As for images generated by artificial intelligence, they share common features that can help you identify profile photos of people who do not exist. Basically, you have to look at the background, the noise and the inconsistencies. In this example we have generated the face of a person who does not exist:

In this photograph we can spot an inconsistency: the subject has eyes of different colors, the background is out of focus, and visible noise appears. Any one of these conditions can occur on its own, but all three at the same time is very suspicious 🙂
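Checking those cues by eye works well, but a very rough heuristic can also be scripted. The sketch below is my own assumption rather than a proven detector: it uses OpenCV to compare the sharpness (variance of the Laplacian) of the center of the photo, where the face usually sits, against the photo as a whole, since generated faces often appear sharp over a blurred, noisy background:

```python
import cv2

def center_vs_overall_sharpness(path: str):
    """Variance of the Laplacian as a sharpness measure for the center
    region (where the face usually is) and for the whole image."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    h, w = gray.shape
    center = gray[h // 4: 3 * h // 4, w // 4: 3 * w // 4]
    sharp_center = cv2.Laplacian(center, cv2.CV_64F).var()
    sharp_overall = cv2.Laplacian(gray, cv2.CV_64F).var()
    return sharp_center, sharp_overall

# Hypothetical file name, for illustration only: a sharp center over a much
# blurrier overall image is one (weak) hint of a generated photo.
center_sharpness, overall_sharpness = center_vs_overall_sharpness("profile_photo.jpg")
print(center_sharpness, overall_sharpness)
```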
At Alt17 we have developed a tool capable of detecting, in real time, the ratio of bots publishing on a topic, along with other indicators, using some of these techniques. It is called Pulsetuit, and I am attaching a video of it in action during the latest debates broadcast by Atresmedia and RTVE.