My Pompa used to say, “Believe half of what you see and none of what you hear.”
He was a small-town farm boy from Wisconsin and clearly ahead of his time.
The quote traces back to a line in an 1845 short story by Edgar Allan Poe. So, clearly also ahead of his time.
The sentiment held strong for almost 200 years, but going into 2024, we finally need to update it: Believe none of what you see and none of what you hear.
The really, really bad
Let’s dive deep into a dark hole and try to work our way out of it.
Deepfake
Any of various media, esp. a video, that has been digitally manipulated to replace one person's likeness convincingly with that of another, often used maliciously to show someone doing something that he or she did not do.
-Oxford English Dictionary
Photoshopping, editing, and filters have been around for a while now. (Who doesn’t want to make a dim pic of their kid brighter and easier to see?) But deepfakes take it to another level by using A.I. — specifically a technique called “deep learning,” which is where the name comes from — to do the work. And it’s a lot better at that work than we are.
Since we’re starting with the extreme, read over this recent news about how horrific deepfakes can be for our kids.
CBS News: Deepfake nude images of teen girls prompt action from parents, lawmakers
Washington Post: AI fake nudes are booming. It’s ruining real teens’ lives.
Research from 2023 shows that 94 percent of deepfake videos online are “explicitly pornographic” and 99 percent of those are of women. But to put it in perspective, 94 percent of the victims work in the entertainment industry. So, as of today, those horrific, kid-focused news stories are still extremely rare.
But we can easily see the direction this can go.
The really bad
The next stop on our way up from the bottom is political deepfakes and how next year’s U.S. presidential campaigns could be impossible to decipher.
In 2018, actor Jordan Peele created this deepfake of Barack Obama to prove a point. As you can imagine, five years of fast-paced AI development has made videos like this much more realistic. And now, everyone on Earth has access to the tech to make their own.
American trust in traditional media is the lowest it’s ever been (tied with 2016). Now that everyone can create realistic “fake news” on their own, how can we believe anything we see next year?
Everything else
As for the rest of the internet? Well, you can’t really believe any of that either.
AI-created content is already flooding the web through text, image, sound, and now video. And a lot of it is indistinguishable from human-created content.
This week, the world’s first AI-generated news network launched. AI anchors and everything. Watch.
Wharton Business School professor and A.I. expert Ethan Mollick said succinctly earlier this month, “One side effect from AI is that the corpus of human knowledge from mid-2023 on will have to be treated fundamentally differently than prior to 2023.
“Seriously, don't trust anything you see online anymore. Faking stuff is trivial. You cannot tell the difference. There are no watermarks, and watermarks can be defeated easily. This genie is not going back in the bottle.”
So, what to tell our kids (and our parents and our friends and neighbors)
Check, check, check
If something gets you worked up or seems shocking, search for it on Google and click the “News” tab at the top. Even though trust in traditional media is low, news outlets still adhere to stricter fact-checking and verification rules than your great aunt who shares wonky things on Facebook every week.
If it’s a source you trust and it still really surprises you, check coverage from the “other side” to see how they’re presenting it.
If a celebrity or politician says something shocking, check for more coverage. Anything that provokes that strong an emotional reaction will spread like wildfire — which is exactly why it deserves a second look.
Then check again
We’ve lived inside a 24-hour news cycle for decades. CNN launched as the first 24-hour news network in 1980. Almost 45 years ago.
But today, that 24 hours is multiplied by 4.5 billion social media users. Creating and sharing information can happen with the click of a button, but checking those facts takes time.
Traditional media infamously had to walk back its coverage of a hospital bombing in Gaza this fall because they reported before the facts were clear, even with real video.
If the human eye can’t detect AI-created video, follow-up is going to take even longer. Sit with what you see or read for a day and then research it again.
Consider the creator or sharer
Talk with your child about where they find information. Let them know good information can be found through any channel — yes, even TikTok — but figuring out who is sharing the information can tell you a lot.
The chart above is only for adults, but a generational divide is clear, and the direction is obvious. Social media is where young people get their news, and sharing there happens fast.
Once they’ve considered who is sharing the information, ask them to consider why they are. Is the author trying to teach them something? Convince them to believe something? What’s the difference? And how would creating fake images, text or video contribute to that goal?
Remember confirmation bias and social media algorithms
Who doesn’t love to be right? Confirmation bias is the idea that we interpret new information in a way that confirms what we already thought. Social media is built to show us what we like. The combination of the two can trap us in a spiral of only hearing one version of the truth: our own.
So, it’s not only things that shock us that need to be checked for AI interference. If something controversial comes up and it perfectly aligns with what you already believed, it also needs to be checked.
Tell your kids about a time when you chose to dig deeper on something and it changed your mind. Also, tell them about a time you chose to learn more and it confirmed what you already believed. Both examples solidify the fact that learning more is great! It’s not embarrassing. It’s growing.
And in the age of A.I., we’ve all got a lot of learning and growing to do.