
Jori VanAntwerp
For over two decades, Jori has enabled industrial and IT organizations to be successful in reducing risk, increasing compliance, and improving their overall security efforts. He has had the pleasure of working with companies such as Gravwell, Dragos, CrowdStrike, FireEye, and McAfee, and is now CEO & Founder at EmberOT, a cybersecurity startup focused on making security a reality for critical infrastructure.
In this article, we focus on some basic steps that individuals can follow to combat one of the favorite tools in a nation-state threat actor’s toolbox: disinformation. EmberOT CEO Jori VanAntwerp shares tips for fighting misinformation, disinformation, and FUD-based manipulation (Fear, Uncertainty, and Doubt).
Information is often our most valuable resource, and it’s one of the reasons we founded EmberOT – to provide a full view of information in OT environments.
It’s critical that the information we take in as individual consumers, and that we share with others and allow to shape our views, be as accurate as possible. I frequently discuss this topic with friends and family members and have boiled down some of that advice into five helpful tips for battling misinformation.
#1. Beware the dreaded algorithms
As cybersecurity practitioners, our hyper-vigilance keeps us keenly aware of potential threats, but we’re not immune to the basic human nature that causes us to be swayed by misinformation and Fear, Uncertainty, and Doubt, or FUD as it’s known in the industry. Resisting the urge to doomscroll is a constant battle, one that the algorithms on search and social media sites feed on and perpetuate. Psychology Today neatly summarizes the issue: “Our digital information, from Google searches to social media news feeds, is a self-fulfilling prophecy since our behavior influences the algorithms that curate what we see.”
We feed the algorithms based on our search behavior, online activities, and biases. In turn, the algorithms feed us an echo chamber: a filter bubble with tunnel vision. The dangers of algorithmically curated content were brought into sharp focus for society as a whole in 2020, when Frances Haugen, who realized Facebook was knowingly perpetuating algorithmic harms, became one of the most famous whistleblowers in tech history. As Ars Technica reported, “Documents that Haugen collected from Facebook show that engagement-based ranking algorithms prioritize divisive and extreme content on the platform.”
The first step in breaking the cycle is realizing that the cycle exists and being cognizant of the impact that algorithms have on what you see online. I personally try to avoid algorithms entirely whenever possible (see tip #2), and when I can’t avoid them, I purposely confuse them through obfuscation.
#2. Anonymize, anonymize & also anonymize
Side-step trackers and algorithm-influenced results with anonymized browsing and search. I personally use the magic elixir of VPN + a privacy-friendly browser. Whether you use privacy-respecting browsers like Firefox or Brave, search engines like DuckDuckGo, the Tor network, which masks your online activity (and whose browser generally uses DuckDuckGo as its default search engine), or any other tool that doesn’t collect your data, there are many ways to anonymize your online activity.
When I’m talking to a friend or family member about echo chambers and the importance of online privacy, I like to show them what happens when we each run a search with the exact same wording on our own devices. It’s a concrete demonstration of how the same query can produce two completely different sets of results.
One word of warning: You may be shocked by how many websites completely break when the option to track you or serve you ads is removed. But at least you’ll know how they REALLY feel about you and your privacy.
#3. Identify the FUD
The term FUD first came into use in the mid-1970s to describe a shady sales tactic used by IBM to discourage customers from buying competitors’ hardware. Eventually, the term grew beyond hardware disinformation to cover anything, especially exaggerated claims, meant to control people’s behavior. So always be cautious of words and phrases that lean on FUD or excessive hyperbole. Some examples to look out for, particularly in cybersecurity headlines, include:
- Cyberwar / Cyberwarfare
- Cyber-Pearl Harbor
- Cyber-9/11
- Kill chain
- Cyber-terrorists
Seeing one or more of these terms in a headline doesn’t automatically mean the story contains misinformation, but they should raise a mental red flag that additional research is likely warranted.
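For the programmatically inclined, the red-flag check above can be sketched in a few lines of Python. The term list and example headlines are illustrative only, and a match means “read more carefully,” not “this is misinformation”:

```python
# Illustrative sketch of the FUD "mental red flag" check described above.
# The term list mirrors the examples in this article and is not exhaustive.
FUD_TERMS = [
    "cyberwar",
    "cyberwarfare",
    "cyber-pearl harbor",
    "cyber-9/11",
    "kill chain",
    "cyber-terrorist",
]

def fud_flags(headline: str) -> list[str]:
    """Return any FUD-style terms found in a headline (case-insensitive)."""
    lowered = headline.lower()
    return [term for term in FUD_TERMS if term in lowered]

headlines = [
    "Researchers patch flaw in industrial controller firmware",
    "Experts warn of looming Cyber-Pearl Harbor against the grid",
]
for h in headlines:
    flags = fud_flags(h)
    if flags:
        print(f"Red flag ({', '.join(flags)}): {h}")
```

A hit simply means the headline deserves the extra scrutiny described in tips #4 and #5 before you share it.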
#4. Look at it from all angles
One of the most important pieces of advice I offer friends and family to help them battle both misinformation and disinformation (disinformation being misinformation created deliberately to deceive) is to consistently check multiple sources any time they want to learn about a news story or event. Since stories are written by humans and humans have intrinsic biases, it’s impossible to get a clear picture or complete “truth” from a single news source.
When I’m analyzing a story, I read through multiple articles covering the topic from publications with varying political leanings. This helps to identify and minimize bias and gives a fuller picture of the story from all angles. The Ad Fontes Media Bias Chart is far from perfect, but it can be a helpful tool for understanding what angle a news story might be coming from, as well as a means for finding alternate reference sources outside of your own leanings (which are helpful to review in order to minimize your personal bias).
Ground News takes this a step further, showing the same specific news story as it’s reported (or, in some cases, not reported) by various media outlets, with a ranking of where the publication and story sit on the political spectrum. The difference in headlines about the same event alone is enough to show anyone just how much bias still exists, even in reporting that claims to be objective.
#5. Zero Trust – it’s not just for security models
Apply the security theory of Zero Trust to any news analysis you do. Even if a publication or reporter has been trustworthy in the past, that doesn’t mean they should be intrinsically and automatically trusted in the future. This is especially relevant on social media sites, where misinformation can spread like wildfire (often shared by those with no ill intentions) merely because people don’t take the time to verify sources. As the Fraser Hall Library advises, “Never share a post on social media without fact checking. This is especially true if it comes from a source you trust. If you want to spread truth, you need to assume that other people aren’t perfect and may make mistakes.”
Final thoughts
I have a friend who is raising three extremely intelligent and independent young women, and he constantly reminds them, “Question everything, even if it’s something I tell you.” I think that’s excellent advice, especially in a time when it’s so easy for misinformation to spread rapidly.
Lastly, remember that if you are not paying for a service, tool, or software, YOU are the product.
~Jori 🤘🔥
Bonus readings
Here are a few other interesting reads that are related to this topic:
Dark Reading article on nation-state motivations for propagating disinformation
RAND Corporation’s list of tools that fight disinformation online
Resilience Series Graphic Novels from CISA