On Wednesday, May 15, 2024, HCSS strategic analyst Laura Jasper was interviewed by the Institute of New Europe on “Unraveling the Complexities of Disinformation.”
- What are the greatest dangers of disinformation? And what is the difference between disinformation, misinformation and propaganda?
I'll start by talking about the difference between disinformation, misinformation and propaganda. It's a very important distinction to understand. The main difference between disinformation and misinformation is the intent with which the information is shared. Misinformation is false or inaccurate information that is spread unintentionally. The person spreading it might not be aware that the information is actually faulty, but that doesn't mean it is not harmful. The biggest difference, however, is the intent. With disinformation, the person, the actor, the state will very deliberately put out false and misleading information to actively disinform people, with a specific goal in mind. That's the main difference between those two.

Then there is propaganda, which actually goes hand in hand with disinformation, and that's a very logical thing for people to understand. I think the most important thing to say here is that propaganda is much older than disinformation. Old terms and trends that have been going on for centuries tend to get new names all of a sudden because of technology, because of new trends and developments. Propaganda is definitely something very old; it is much older than disinformation and goes back to the 1600s. The other difference is that propaganda is really related to political use: the selective use of information with the aim of having a political effect. But the selective use of information does not necessarily mean the information is false. Sometimes when you say something and hide half of the information, so you intentionally do not give everything that you know, that is still a very selective use of information and it can still be propaganda. That doesn't mean you're lying per se; you're just keeping something back. Disinformation, on the other hand, is really false information being put out. Disinformation is definitely a part of propaganda.
The greatest dangers of disinformation are that it really touches upon all of society. There are health effects: if you think about the COVID-19 crisis, the disinformation put out about the origin of the virus, but also about the vaccines, really had an effect on the population's health. That's really harmful. But I think the biggest danger of disinformation is the erosion of trust in institutions and the media. If you look at disinformation, most of the time these are statements or pieces of information designed to provoke a very emotional response in people, like anger, mistrust or something along those lines. That undermines trust in others, in the health care system, in institutions or in the media. Disinformation also touches upon other issues where elections, the democratic system or institutions become at risk. If people no longer believe in judges or in the law, then the rule of law also becomes at risk. So it starts with trust and in the end it spreads across the entirety of society, with very real results. It impairs freedom of expression, for example. So the dangers of disinformation are very varied, and that really touches upon the core of what disinformation is: it is so widely available to everyone, especially nowadays with technology and social media, right there on your phone.
- What techniques or tools do you use to identify disinformation?
Prior knowledge and literacy about media and disinformation in general is very important. This is also something that comes with education from a very young age. If we talk with our grandparents, for whom the news was mainly the radio and the newspaper, it's very different than for kids now. Nowadays everyone can find information everywhere, not only from one source but from many.
If we talk about techniques to identify disinformation, there are a couple of steps that have been shown to help. The first is checking the source: finding out who is actually saying these things, especially on social media, so who the person behind the account is. In the regular types of media too: who the journalist is, what the newspaper is, who the newspaper is funded by. Checking the source is very important because then you can trace it back to the questions „is only one person saying this” or „are many people saying this”. Then there is the consistency of the message. If you see something on X (Twitter), the BBC or the radio, do you also see it somewhere else? If you don't, then the alarm bells should start ringing. What is also important there, which I already referred to earlier, is the tone. Usually you try to get information that is as objective as possible; you want information that is not colored by personal opinions. If the message you are reading triggers emotions such as fear or anger, that might suggest that the person is not interested in just informing you, but in guiding you somewhere with the message. The person is interested in provoking a reaction. Now, everyone has their own biases. We all believe in something, based on where we grew up, our environment or a particular educational system. So knowing your own biases is also very important. Research indicates that people are much less likely to identify disinformation if the message says something they already believe. People always try to confirm their own thoughts. If they read something they agree with, they will much more easily say that it is correct. They think it is not disinformation, because they believe it already. But this is exactly what the entities spreading disinformation are trying to exploit. They look for their target audience and try, as convincingly as possible, to spread their message by playing on your emotions or by making you feel that your thoughts are actually being validated. Being aware of your own bias is very important here.
To sum up: look at the source, the tone of the message, the consistency of how and when the message pops up, and the story itself. This is the type of technique they teach you in trainings, and in high school as well, but with this technique you rely on yourself a lot. There are also a couple of tools you can use. One is from the European Union: a website called EUvsDisinfo, run by the European External Action Service. It registers cases of disinformation in European countries; they identify them and explain on the website how they found out that it was disinformation, what was faulty, what was distorted. Reuters, the BBC and other long-established news agencies also have fact-checking websites. What is very important is that you can never rely on just one tool. The real way of finding out whether something is disinformation is using multiple tools: not checking only the BBC, but also Reuters, AFP or the EU websites. The more you check, the more confidently you can actually make a judgment. Of course, you can never look at everything; it's impossible, because it would take everyone days to do that. Just having a pluralistic, independent media sector already plays a huge role. If we look at Russia, for example, independent media are nowadays being thrown out, shut down and threatened. So having an independent and pluralistic media sector is one of the starting points, because it allows you to look at different sources.
- How does the long-term impact of disinformation differ from short-term effects, and what factors contribute to its resilience over time?
This year is historic because more than half of the world's population is going to vote; it might be one of the most democratic years we have had. I'm mentioning this because one of the short-term effects of disinformation is on the outcome of elections. There are many examples of Russian interference in Western elections. People's ability to make a choice is impaired, and in the short term that means they might not make the decision that is in their best interest.
The long-term effects depend on who gets elected and on whether or not disinformation affected the elections; that in turn has an impact on democracy and the rule of law.
A short-term effect might be that people stop trusting one another, their neighbors, their doctors, the law, the judges or the police officers. That has an immediate effect, but in the long term it will slowly undermine the health system, democracy, the rule of law and human rights as well. So in the short term it usually all comes down to trust, while in the long term, because trust is such an important aspect of society, it comes down to the „big life questions”. Another example is climate change. In the short term, disinformation might mean that people don't believe in it, so they will not make conscious choices accordingly. These are all things you might see in the short term, and the effect will not seem that big, but in the long term it might mean that we are years behind in being able to curb it. The short-term and long-term effects really differ in their impact across society: short-term effects hit a person and their close circle, while long-term effects spread to institutions and institutional bodies.
Awareness is a big step, but also a difficult one. Oftentimes, if you try to prove to people that they're wrong, it will have the opposite effect. If you try to convince them, they will believe their own convictions even more.
- How do institutions and organizations, such as the EU, fight against disinformation?
I think the EU as an institution has progressed massively over the past years, but if we talk about the EU as an assembly of countries, then it depends very much on the country. The EU as an organization has put out what it calls the Code of Practice on Disinformation. They talk to the social media platforms, which is, I think, very smart, because you can discuss what disinformation is, how to counter it and how to look at it. Getting the platforms to commit to improving their online policies and engaging with them is very important, because it is not only about the message itself; it is also about the channels through which the message can be spread. So how does it work within the EU? The fight against disinformation is very much focused on sharing information, which can be difficult. There are many countries in the EU, and they also work with NATO and within many other partnerships. The EU has set up multiple bodies that deal specifically with disinformation, and they communicate with NATO and with the Member States.
We also have prominent figures like Ursula von der Leyen making statements about Russia. That is proactive communication, and it takes away a little bit of the fear. When people don't know what is going on, it creates fear; that's why talking about these things in a way that does not try to scare people is important.
Sometimes it's a little too late, because the disinformation is already out there and then you're just trying to prove it wrong. That's why having partnerships with civil society organizations, NATO, the UN and, most importantly, the social media platforms is something they can do before disinformation is put out.
- Is Western Europe prepared to fight propaganda? If yes, how?
It's definitely a bit hard to talk about Europe as a whole when it comes to propaganda, especially since it takes many forms. But, in my opinion, the most important things to mention are definitely the elections, because that is something the European Union and Western Europe have experienced. Second, it really touches upon one of the cores of the European Union, which is liberal democracy: being able to vote and giving the population the right to express their opinions, and also the idea Europe is built on, peace on the continent. I think they are prepared to address it, especially because they are actually addressing it. A good example is the war in Ukraine, where governments are sharing their intelligence with the population much more readily. If you were to look back 15 years, the intelligence service of a country that saw another country interfering with its elections would have kept it secret; they wouldn't have wanted to share it. Since the war in Ukraine, however, the British intelligence services, for example, have been putting out intelligence memos on social media, sharing their intelligence on the war with the wider public. And they are doing the same with elections. They are really saying, as seen in the Cambridge Analytica case, „this is happening”, and they are investigating it publicly. So they are definitely prepared to fight it, because they have shown that they are not allowing external nations to interfere in their internal affairs. They are really saying that an external nation should not interfere in the internal affairs of another country, just as a nation should not invade another country. They are definitely prepared to fight propaganda, already by acknowledging its existence. But it's difficult to say how well prepared they are, because when something goes wrong you will hear about it, and when something goes right you will not.
- What are NATO's and the EU's strategies to combat disinformation?
NATO is primarily a military organization, much more so than the European Union, and NATO views disinformation as part of what it calls hybrid threats. Hybrid threats combine different domains, such as economic sanctions, but also information warfare, political repercussions or even biological threats; the threat comes from all these different types of domains. So for NATO, disinformation is part of those threats, which is important because it is reflected in their strategy. At the Brussels Summit a couple of years ago, they specifically issued a statement that there are many hybrid challenges, of which disinformation is one. They decided to adopt a dual-track model to strategize effectively and counter disinformation, because they acknowledged that they need to strengthen their ability to defend themselves against such threats, including cyber threats, and ensure that these do not undermine society.
The dual-track model has two functions, what they call „understand” and „engage”. NATO's strategy is really focused on understanding the information environment. It's about understanding the environment in which they operate, which very much reflects the military character of NATO, just as there are land, sea and space environments. NATO has also changed its doctrine to incorporate the information environment as a place where war can actually take place. So understanding that environment is one part of the strategy. The second part is about engaging with that information environment, and this is where they really highlight coordination with their partners, such as the EU and the G7. Their strategy is really about understanding where they are and what is going on. Engaging with other partners is key because disinformation does not harm only one country; if it harms one country in the NATO alliance, it will harm the other countries as well. They have a document that details understanding and engagement, and within NATO there are trainings and seminars, and they incorporate this through all the different levels of the organization. That's a clear strategy they have adopted and communicated on how to counter disinformation.
For the EU, the most important thing, as I mentioned earlier, is the Code of Practice on Disinformation. It involves the online platforms and trade associations, as well as key players in the advertising sector. Marketing is a very interesting sector to include here, because what marketing tries to do is also to persuade people to buy something. Having all of these players together is part of the wider strategy, as they commit to curbing disinformation and improving their online policies. This is a very big part of the strategy, and there is a recent example where the EU has launched an investigation into X (Twitter) over the spread of disinformation. They are going after the social media platforms where disinformation is spread, monitoring this, launching investigations, and really bringing the platforms in to have them explain how this can happen.
- What do you see as the emerging trends in disinformation?
There are many. I'm going to group them into two categories. The first one is the big elephant in the room: artificial intelligence. It's the technological advancement. The idea of disinformation has been around, and propaganda has been around for centuries, but technological development and AI have really changed the speed and depth at which disinformation spreads. They have increased the number of people who can be reached and impacted by disinformation, but they have also given many more parties the opportunity to actually engage in it. Twenty years ago, if an entity wanted to start a very big disinformation campaign, it probably needed a lot more money, time, resources and people to pull it off. Thanks to technological development and AI, this has become much easier; the only thing you need is an Internet connection and a computer. So that's definitely an emerging trend: the speed, pace and reach of the spreading of disinformation are really picking up. There is also the question, of course, of creating things like deepfakes, where there have been many manipulated pictures going around.
There was a picture circulating of the Pope in a puffer jacket, which I think many people saw. That's a very good example, because you could not always discern whether it was a fake image. So technological development is definitely a very important part, and that is one aspect. The other one may be a bit less well known. This is something that we have actually researched quite recently, and it is what we call the privatization of disinformation, which you could even call „disinformation for hire”: private companies can be hired or paid by entities, whether state or non-state actors, to deliberately put out disinformation. Think of it as disinformation as a service. If you make the connection to organized crime, for example: if people can make money off something, then they will do it. If money can be made, then a service can be provided, so the privatization of disinformation and its use as a service is a trend that we are starting to see. The University of Oxford also conducted research on this recently, maybe last year. So technological developments, mainly AI, are one trend, and the other trend is the use of private entities as facilitators of disinformation. Those are definitely the two trends I would highlight.
- What is the future of disinformation, deepfakes and AI? How fast is it evolving?
The future of disinformation can definitely be found in AI and deepfake technology. It's evolving extremely rapidly. Many institutions and universities are also trying to figure out how to address it, and whether it is a good or bad development. I believe it's both, depending on how it's utilized; it can be either a dangerous thing or a useful resource. There are many discussions about whether AI will replace jobs, while in reality AI is likely to make other jobs easier. It's a very nuanced issue, and I think it signifies that we are still trying to figure out how it fits into our society. The answers I give now may be very different in two months. It's crucial to acknowledge that AI is not inherently bad. We shouldn't always take a negative stance, because that can scare people away. That's not the direction we should be heading in as a society.
- Do you think Tucker Carlson’s interview with Putin served as a mouthpiece for Russian propaganda?
I found it extremely interesting to witness. If you look at the content of the interview, Putin gave his own interpretation of history. He talked about history going back to 862 and stated that Ukraine shouldn't have existed as a country. In 2021, he wrote his 5,000-word essay On the Historical Unity of Russians and Ukrainians, which was a foreshadowing of his delusional justification for invading Ukraine, and this interview served as another way of signifying that. I think it's very important to point out that it is one of the first times in many years that a Western journalist, if we can call him that, has had access to Putin. A one-on-one interview is very rare, so of course he took advantage of this opportunity. It was very political, and there were many fact-checking articles published after the interview, one of them titled „5 lies and one truth from Putin's interview with Tucker Carlson”. There are numerous established news media sources, as well as other journalists, who have also interviewed Putin. We also discussed what this means, and I think it provides a very unique view into these dynamics; the fact that this man was able to get that interview is very telling. The interview simply serves as a mouthpiece for Russian propaganda, and I don't think we should have expected anything else. However, it will be very useful and interesting for the study of Russian propaganda and the study of the Russian invasion of Ukraine.
This interview by Jowita Kołodziej was published on May 15, 2024, by the Institute of New Europe.