Beatrice de Waal

Information Warfare on European Cyber-Grounds: a Twitter analysis




Ever since the beginning of the Russian invasion of Ukraine on February 24 2022, the general public has become increasingly aware of a security issue that is at least a decade old: information warfare. We have often heard about companies like Meta misusing the personal data of social media users; when it comes to foreign state interference, however, public discourse and security policies have been lacking. The main reason for this lag is the intrinsic difficulty of dealing with online platform manipulation: not only is this kind of activity illegal, and therefore carried out anonymously and easily denied, but reacting to it may end up curtailing the freedom of speech of legitimate users.


There is thus no clear roadmap for protecting citizens from information warfare, but one step can be taken right away: educating ourselves about the tactics that cyber warfare brigades use can help us learn to spot suspicious accounts and sources, warn others about them and, above all, avoid spreading false information any further.


Being an avid Twitter user myself, I have put effort into curating my timeline, excluding sources I knew for sure to be unreliable. I learnt to recognise suspicious accounts by their use of certain symbols or buzzwords, and started muting or blocking them as soon as I encountered them. This - somewhat drastic - approach led me to accumulate a database of over 2500 Twitter accounts that spread fake news and conspiracy theories. As time went on, certain trends seemed to emerge: many of these accounts were anonymous, created very recently (Twitter shows the month and year of an account’s creation on its profile) and shared the exact same content. I therefore decided to analyse some of them, to check a bit more rigorously whether my impressions were correct.

Let’s take a look, then, at what information warfare is and at what my analysis revealed about these suspicious accounts.


What is Information Warfare?

Information warfare is defined as operations conducted in order to gain an information advantage over an opponent. More specifically, cyber-enabled information warfare is the kind that takes place in cyberspace, for example on social media platforms. The agents of this type of warfare are the so-called “cyber warfare brigades”, made up of social bots (automated or semi-automated accounts) and trolls. Trolls involved in information warfare post content with the specific aim of manipulating other users’ perceptions, in a way that serves the interests of the country that hired them.


Russia has long been one of the most prolific perpetrators of information warfare, since well before it moved into public cyberspace, when it was still fought only through traditional propaganda. Then, in 2013, the Internet Research Agency, or IRA for short, was founded in St Petersburg by Yevgeny Viktorovich Prigozhin, also known as “Putin’s chef”. Apart from the IRA, Prigozhin also controls the Wagner Group, a state-backed mercenary group founded in 2014, which first fought in Syria and is currently operating in Ukraine. An internet slang term for the IRA is “Trolls from Olgino”, Olgino being the St Petersburg neighbourhood where the agency is located.


The Internet Research Agency and Trolls From Russia

The IRA’s main purpose is to create and coordinate cyber warfare brigades, which essentially means hiring people to run fake social media profiles that produce, but most importantly share and spread, content and narratives furthering Russian interests. One might expect their content to focus on portraying Vladimir Putin as a great leader, or on celebrating Russian culture and lifestyle; in essence, on making sure that non-Russians have a positive opinion of Russia and view it as an example of how a country should be run. However, this is not the IRA’s approach: since it is difficult to skew public opinion in such a drastic way (and just as difficult to measure how successful you were in doing so), the operations carried out by trolls aim instead at creating confusion, mistrust and social division in (mostly) Western audiences. They therefore mostly share media and opinions about polarising issues, usually taking the most extreme and controversial viewpoints (examples are vaccination, immigration, and scandals involving political figures and parties), as well as content in support of local politicians who promote friendlier relations with Russia (such as lifting sanctions). Another large part of the IRA’s work focuses instead on domestic propaganda, for example by discrediting Vladimir Putin’s opponents.


There are multiple reasons why trolls focus disproportionately on sharing content rather than creating new content. First of all, sharing is faster than creating, and it makes work done by others reach a larger share of users. By sharing the content of real users (or media outlets), troll networks infiltrate larger networks of legitimate people who genuinely believe in controversial viewpoints, making it harder for content moderators to distinguish fake accounts from real ones. Another reason is a collateral consequence of the way the IRA operates. From the testimonies of former employees, we know that most of the people hired by the IRA (often students, who already know their way around social media and are more likely to speak English or other foreign languages) do not agree with the viewpoints they have to spread and feel no real patriotic attachment to the IRA’s purpose of furthering Russian interests. We also know that professional trolls are organised in departments, each of which may focus on a specific geographical region or social media platform, and have to operate a certain number of fake accounts while meeting a daily engagement quota of tweets, retweets, pictures posted, comments written and so on. Taking all of this into account, plus working shifts that are often as long as twelve hours, it becomes evident that most trolls just want to meet their quotas as fast as possible, resorting to tactics such as reusing the same profile pictures and biographies, following the exact same accounts, and so on.


Interference in European Democratic Processes

Foreign state-backed trolling activities become a lot fiercer before democratic elections and referendums, with the aim of influencing people’s votes. Target countries are identified through an evaluation of their perceived strategic value, the presence of candidates who already align with Russian interests, and the belief that their information warfare deterrence efforts are insufficient. The following timeline shows some notable examples of Russian interference in the democratic processes of European countries (not necessarily EU members):

As we can see, the ties to Russia are not always evident; they are found, rather, in the widespread weakening of European social cohesion. You can see other cases of interference here.


Analysis of Pro-Russia Twitter Accounts

As previously stated, I accidentally found myself with a convenient list of roughly 2500 suspicious Twitter profiles, the product of random encounters and unfortunate recommendations by Twitter itself. In fact, as I was blocking or muting profiles I was also viewing them, so each time I opened one I would be recommended similar ones; then, when I went back to my timeline, the recommendations went back to normal.


To be sure that the accounts taken into consideration were actually pro-Russia, I defined the following criteria to obtain a smaller sample, which in the end contained 570 accounts:

  1. They display the emoji of the Russian flag “🇷🇺” or a “Z” in their Twitter handle/display name/biography.

  2. They display one of said symbols or other pro-Russia elements (such as the ribbon of St. George or Vladimir Putin) as their profile picture.

  3. They are self-proclaimed Russia or Putin supporters, either in their handle, display name, biography or profile picture.

Furthermore, all the accounts taken into consideration were public and Italian-speaking (though not exclusively so: some also shared content in English or, more rarely, in French or Spanish).
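These criteria are simple enough to encode. Below is a minimal sketch of the screening logic in Python; the Account fields, the keyword list and the standalone-“Z” check are illustrative simplifications of my own (in practice, judging a profile picture or a biography takes a human eye).

```python
# Minimal sketch of the selection criteria; field names, keywords and the
# "Z" heuristic are illustrative, not an exact specification.
from dataclasses import dataclass

SELF_PROCLAIMED = ("pro-russia", "pro russia", "pro-putin", "putin supporter")

@dataclass
class Account:
    handle: str
    display_name: str
    bio: str
    picture_is_pro_russia: bool  # manual label: flag, "Z", St. George ribbon, Putin...

def has_z_token(text: str) -> bool:
    """True if a standalone 'Z' appears (a 'Z' inside a word should not count)."""
    return any(tok.strip(".,;:!|-") == "Z" for tok in text.split())

def is_pro_russia(acc: Account) -> bool:
    fields = (acc.handle, acc.display_name, acc.bio)
    if any("🇷🇺" in f or has_z_token(f) for f in fields):  # criterion 1
        return True
    if acc.picture_is_pro_russia:                          # criterion 2
        return True
    text = " ".join(fields).lower()
    return any(s in text for s in SELF_PROCLAIMED)         # criterion 3

print(is_pro_russia(Account("user123", "Maria 🇷🇺", "Per la pace", False)))  # True
```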


For each of these profiles I looked at the account creation date, the total number of tweets (including retweets and replies) and the follower and following counts. Furthermore, I assigned each account an anonymity score, on a scale from 1 to 4, based essentially on whether a real name and a personal profile picture are shown (more on this below):
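In code, the record kept for each account and the scoring can be sketched roughly as follows; the field names and the exact 1-to-4 mapping shown here are illustrative, not a precise restatement of the criteria.

```python
# Rough sketch of the per-account record and an illustrative anonymity score.
from dataclasses import dataclass
from datetime import date

@dataclass
class SampledAccount:
    created: date               # month/year shown on the profile
    tweets: int                 # includes retweets and replies
    followers: int
    following: int
    shows_real_name: bool
    shows_personal_photo: bool  # group or distant photos count as "shown"
    anti_vax: bool
    anti_eu: bool
    pro_trump: bool

def anonymity_score(a: SampledAccount) -> int:
    """1 = name and photo both shown, 4 = completely anonymous (illustrative)."""
    return 1 + (not a.shows_real_name) + 2 * (not a.shows_personal_photo)

acc = SampledAccount(date(2022, 3, 1), 5400, 120, 800, False, False,
                     anti_vax=True, anti_eu=True, pro_trump=False)
print(anonymity_score(acc))  # 4: completely anonymous
```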

Lastly, I checked whether these accounts published anti-vax, anti-European Union and pro-Trump content. I looked specifically for pro-Trump tweets because I collected this data just before the US midterm elections. Here is what I found:


As the trend line clearly shows, there was a sharp increase in new accounts created in 2021; in 2022, as of the end of October, the number of new accounts is even higher, but the growth rate has slowed down. To understand whether the growth in accounts meeting the criteria presented above is proportionate to the growth in the overall number of Twitter users, I compared it with data on the number of new accounts between 2018 and 2022 (Statista):


We can see that the number of new users peaks at different times; the reason for the difference between the worldwide peak and the Italian one is that the vast majority of Twitter users are not European, and the country with the most Twitter users is by far the US (Statista). Since trends that emerge in the US quite commonly arrive in Europe with some delay, I think this explains the discrepancy between the two peaks. The sample, instead, shows that users with the previously presented characteristics have been growing differently over time: in 2019 and 2020 the number of new users was still higher than in the previous year, but most of the growth happened in 2021, when overall Twitter user growth in Italy had slowed down significantly.

We can look at the growth of new users in the sample a bit more closely, divided by month from 2019 until October 2022:
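For reference, the monthly counts behind this kind of chart boil down to a simple group-by on the creation dates; here is a minimal pandas sketch, with a few toy dates standing in for the real sample.

```python
# Minimal sketch: count new sampled accounts per month (toy dates stand
# in for the real creation dates collected above).
import pandas as pd

created = pd.Series(pd.to_datetime([
    "2019-08-12", "2021-01-20", "2021-08-03", "2021-08-19",
    "2022-03-05", "2022-03-07", "2022-03-30", "2022-09-14",
]))
monthly = created.dt.to_period("M").value_counts().sort_index()
print(monthly)  # e.g. 2022-03 -> 3: March 2022 is the busiest month here
```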

Let’s point out some curious overlaps: the global maximum is reached in March 2022, the month just following the start of the Russian invasion of Ukraine (February 24 2022). There are then four other significant peaks:

  1. The first one is in August 2021, after the number of new users had been growing more or less steadily since April 2021. During these months most of the first and second doses of COVID-19 vaccines were administered in Italy (Report Vaccini Anti COVID-19); furthermore, on July 1 2021 the Green Pass, a COVID-19 vaccine passport instituted by the Italian government, came into effect.

  2. The second one is in September 2022, after roughly steady growth from a minimum in June 2022. On July 14 2022 an Italian government crisis became very likely; it was officially opened a week later, on July 21 2022, and elections were then held on September 25 2022.

  3. The third peak is in January 2021. On December 27 2020 the European Union officially announced the beginning of COVID-19 vaccination campaigns in the EU.

  4. The fourth and oldest peak is in August 2019. On August 20 2019 there was (another) government crisis in Italy, after a long lead-up that had begun in the first week of August.

Of course, the data gathered so far is not enough to assert a meaningful correlation between the creation of these clearly pro-Russia accounts and the events described above; a larger sample and a more in-depth analysis would certainly be needed. It is still quite peculiar, however, that the growth of this type of user does not match the overall trends and spikes in times of potential or actual crisis.


Let’s now take a look at anonymity, as defined by the criteria above. It is important to note that an account displaying a full name and/or a profile picture depicting a person is not necessarily genuine: identity theft is quite common on every social media platform.

From the data we can gather that most accounts (61.2%) are completely anonymous, while roughly 15% and 14% respectively show only a name or only a profile picture. To be clear, for an account to count as non-anonymous the profile picture does not need to clearly show the owner’s face; group photos or pictures taken from farther away also qualify. Pictures of animals, landscapes or food, and images containing symbols, flags, writing or just solid colour, are instead considered anonymous.
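These shares are just a normalised frequency count of the anonymity scores; a minimal sketch, with toy values in place of the 570 real ones:

```python
# Minimal sketch: share of accounts at each anonymity level (toy scores;
# in the real data, level 4 - completely anonymous - is 61.2%).
import pandas as pd

scores = pd.Series([4, 4, 4, 2, 3, 1, 4])
shares = (scores.value_counts(normalize=True).sort_index() * 100).round(1)
print(shares)  # percentage of sampled accounts per level
```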


The followers-to-following ratio does not show any significant trend over time:
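One quick way to check this is to aggregate the ratio by creation month; a minimal sketch, again with toy rows in place of the real sample:

```python
# Minimal sketch: median followers-to-following ratio by creation month.
import pandas as pd

df = pd.DataFrame({
    "created":   pd.to_datetime(["2021-08-03", "2022-03-05", "2022-03-30"]),
    "followers": [120, 45, 300],
    "following": [800, 150, 290],
})
df["ratio"] = df["followers"] / df["following"].clip(lower=1)  # guard against /0
print(df.set_index("created")["ratio"].resample("M").median())
```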

Generally speaking, researchers have found that while in the past troll accounts tried to accumulate as many followers as they could, today the tactics have changed: the more followers you have, the easier it is for content moderators to notice you, check whether you follow the platform guidelines, and potentially suspend or shut down your account. It is therefore better for trolls to keep a smaller following and specifically target the networks where they think their narratives can stick.



Lastly, I found that most of these accounts share anti-EU content (78.3% of all sampled users) and anti-vax content (87% of all sampled users), whereas only about 30% of them shared pro-Trump content. It is not difficult to see why a true Russia supporter might oppose the European Union: many of these accounts emphasise state sovereignty in their posts and generally show mistrust towards politicians, especially those they perceive as more “distant” from the common people. Many of the anti-EU accounts also tweeted specifically against the euro currency, repeating arguments that are quite popular among some Italian political parties. The same could be said about vaccines: a general suspicion of the government and of scientific institutions, perceived as the long arm of the “elites”, makes them refrain from vaccination and reinforces an already present anti-system sentiment. In fact, these accounts also frequently posted conspiracy theories, particularly about Bill Gates, George Soros, chemtrails and the moon landing, several of them with evident antisemitic undertones.


Reacting to Information Warfare

As mentioned in the introduction, reacting to information warfare is extremely difficult, because policy makers and moderators risk severely limiting freedom of speech. However, with the recent worsening of the international situation following the Russian invasion of Ukraine, Twitter has become somewhat stricter in the way it moderates content (even though the recent acquisition by Elon Musk may result in yet more policy changes). It is worth pointing out that social media companies themselves can incur risks tied to information warfare, and sometimes have to take steps to safeguard themselves: for example, since January 2022 Twitter has made access to its data much harder, because of threats received by employees involved in previous data disclosures.


The EU has also taken steps forward: on July 5 2022 the European Parliament adopted the Digital Services Act, which compels online service providers to guarantee quicker and more transparent operations and information, both to policy makers and to their users, regarding reports of illegal activities, traders’ traceability and online advertising.


No matter the policies enacted, which, however effective, will never be able to provide an entirely safe and reliable online environment, the best defence against information warfare and other types of platform manipulation will always be education about the matter and some well-trained critical thinking.


