Reflections from Rome: Rethinking Misinformation, Disinformation, and the Future of Online Trust
This summer, Theo and I represented our team at a week-long course in Rome, Italy, on misinformation and disinformation, organized by the United Nations Interregional Crime and Justice Research Institute (UNICRI). The program brought together researchers, international law enforcement, and NGO workers from around the world to explore how false information spreads—and how to combat it.
We studied practical tools, such as the debunking sandwich (framing corrections by first affirming facts, then addressing the falsehood, and finally reinforcing the facts), as well as investigative techniques like reverse image, audio, and video searches. The goal was to sharpen our ability to recognize and debunk misinformation in real time.
We also explored the technical side of algorithms—how they are structured, and how they shape the visibility and spread of online content.
One exercise stood out in particular: we were given a lecture on extraterrestrials in which some details were factual (the U.S. government has released a file exploring the possibility), some were disinformation (the Roswell incident), and others were framed in ways designed to manipulate interpretation (for example, distorting facts or lending the weight of authority to false claims). Only at the end did we learn which parts were true and which were not. The exercise underscored just how easy it is to believe misinformation, especially when it comes from a seemingly credible source.
Several conversations from the course continue to resonate with me:
Misinformation vs. Disinformation
A reminder of the importance of distinguishing between the two. Disinformation is deliberately created to mislead or manipulate, while misinformation is false information shared without intent to deceive. In practice, the two can be hard to untangle. One way to think of it: disinformation is deliberately planting a rumor; misinformation is hearing that rumor presented as fact and repeating it.
The Business Model of Disinformation
Participants called for deeper research into how disinformation is financed, planted, and transformed into widespread misinformation. While some disinformation clearly originates from politicians or leaders intentionally sharing falsehoods, other cases are murkier. Creatively repurposed content—for example, recycling unrelated photos or videos and presenting them as evidence of a different incident, as happened recently with reports of India’s strikes on Pakistan-administered Kashmir—raises questions about who funds, incentivizes, and spreads such content at scale.
Can We Overcome the Algorithm?
NGO leaders emphasized that the current structure of algorithms forces them to design content almost exclusively for their existing supporters. This undermines their ability to reach new audiences—even when their work has broad social value. The open question is: can we overcome these structural limitations, or are we locked into systems that reward only echo chambers?
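To make that structural limitation concrete, here is a deliberately simplified sketch, assuming a purely engagement-driven feed. The names and scoring below are illustrative inventions for this post, not any platform's actual ranking code: candidate posts are scored by how much they overlap with a user's past interactions, so content aimed at a new audience is out-ranked before it can ever reach them.

```python
from collections import Counter

def engagement_score(post_topics, user_history):
    """Toy relevance score: how strongly a post's topics overlap with
    the topics of content the user has already engaged with."""
    history_counts = Counter(topic for past in user_history for topic in past)
    return sum(history_counts[topic] for topic in post_topics)

def rank_feed(candidates, user_history, k=2):
    """Return the top-k candidate posts by the toy engagement score."""
    return sorted(candidates,
                  key=lambda post: engagement_score(post["topics"], user_history),
                  reverse=True)[:k]

# A user whose history is concentrated on a single cause
user_history = [{"climate", "activism"}, {"climate", "policy"}]

candidates = [
    {"id": "A", "topics": {"climate", "activism"}},  # familiar, in-network
    {"id": "B", "topics": {"public-health"}},        # an NGO's new-audience topic
    {"id": "C", "topics": {"climate"}},              # familiar
]

print([post["id"] for post in rank_feed(candidates, user_history)])
# ['A', 'C']; post B, the unfamiliar topic, never surfaces
```

Real ranking systems involve far more signals than this, but the feedback loop is the same: what a user engaged with yesterday shapes what they are able to discover today.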
Summer is often a good time to reset goals. For us, the work we’ve been doing through the Internet User Behavior Lab made it especially meaningful to spend a week reflecting with others who share our interest in misinformation and disinformation. The conversations gave us space to consider what further research is needed and how our work can contribute. We have exciting projects in progress that we look forward to sharing over the coming year. In the meantime, we’ll continue asking big questions—with the hope of helping internet users everywhere build healthier, more informed relationships with their online experiences.
By Alex Krause Matlack
On behalf of the Internet User Behavior Lab (IUBL)