
LOLOL
with Mx. Dahlia Belle
We face a daily struggle with technology and media. Parents juggle time constraints while kids navigate digital overwhelm. Yet most social media wellness programs treat technology like a problem to solve rather than a culture to understand.
LOLOL takes a different approach. We’re exploring what happens when we use comedy to reveal how algorithms actually work and what chatbots want from us, inspiring knowledge, fun, and silliness.
And skill-building!
But, yeah—a lot of silliness.
Because when a 9-year-old and their guardian are both cracking up at the same hapless algorithmic trick, that's where real empowerment begins.
This image was generated by Leonardo.Ai with the prompt: “An 8-bit image of Portland, Oregon in purples and orange.”
Where screen time is family fun time
The LOLOL Experience
Each 60-minute LOLOL session is designed as an interactive comedy experience that gets families talking, laughing, and problem-solving together.
Educational TV Parody Format. Dahlia leads what appears to be a cheerful children's educational show fostering joy in technology. Think Reading Rainbow meets digital platforms—but where everything goes sideways. Each session features K.A.I., a fictional “all-in-one” AI chatbot expressly marketed to kids; a video "field trip" to the headquarters of K.A.I.’s developer; and audience participation in figuring out how to wrest control of the show back from K.A.I.
Interactive Decision-Making. Families don't just watch—they actively participate in making choices throughout the experience. Kids and parents work together as the session unfolds, discovering the business-oriented forces that counteract their decisions, and how these conflicts shape what happens next. The format ensures everyone stays engaged while learning together.
Structured Chaos. Using improvisational comedy strategies, LOLOL sessions are carefully designed to be spontaneous while covering essential media literacy concepts. The jokes create natural moments for reflection and genuine family conversation about technology use.
Real Talk Time. Each session transitions into guided family discussion where the laughs lead to practical insights. Families leave with tools, strategies, and shared experiences they can reference at home.
What Makes LOLOL Different. By the end, families have developed their own inside jokes and common language around digital wellness—making those tricky conversations at home much easier.
A prospective treatment and guided family discussion outline is available here.
(15 pages; 336 KB)
The Team
Mx. Dahlia Belle, Host
Portland-based comedian, parent, social critic, and accidental activist MX. DAHLIA BELLE has been compared to a Swiss Army knife: different tools deployed to reveal sharpness, softness, vulnerability, hope, and wicked wit, as seen on Hannah Gadsby’s "Gender Agenda" on Netflix. Her writing has been featured in Cosmopolitan Magazine, The Guardian, The Stranger, and Portland Mercury. As a public speaker, she has been a repeat guest on NPR, CBC, XRAY FM, and Portland Radio Project (PRP). Live performances have included Bumbershoot, Lysistrata, All Jane, Upper Left Comedy Festival, SF Sketchfest, and Comedy In The Park.
Andrew Stout, Writer-Producer
A writer who explores the nature of human communication and connection, ANDREW STOUT’s experience spans multidisciplinary art, pediatric research, and journalism. As Violet Eileen he pursues philosophical inquiries into the nature of presence, absence, and identity via sound and technology. As a writer of experimental fiction, he invented a new form called a gyre that lands somewhere between allegory and haiku. With LOLOL and his all-ages stories (Luna Would Like to Be Alone Now), Andrew is now developing an innovative laughter-first approach to helping us navigate our complex information landscape together. Andrew lives in downtown Portland, Oregon.
Andrew and Dahlia in February 2025.

"We're 20 years deep into social media. We know how these companies work. They optimize their platforms for our attention, identity, and time, not for our wellbeing. One of the best skillsets we can teach young people today? Show them how to read any digital room they walk into and choose what actually serves their mental health."
—Andrew Stout, producer, LOLOL.
Dear Brutus
The problem isn't digital media, but how we use it.
by Andrew Stout
The distinction between "active" and "passive" digital engagement has profound implications for everyone, especially children. What follows is a kind of literature review providing background to LOLOL, our hybrid comedy-media literacy pilot program launching in 2026. I say “kind of” because I totally failed at the dry academic voice thing.
(Pssst.. I didn’t really try.)
Active vs. Passive Use: A Clear Distinction
I’m going to say something you might not expect a media literacy advocate to lead with: active digital media use consistently shows positive mental health outcomes. Yes, far from the threat to our children portrayed in countless doom-and-gloom op-eds, digital media use can be a good thing for a child’s mental health. Obviously, not every kind of digital media use is awesome. Not bingeing Bluey on an otherwise perfect summer day, for instance. Or watching a stranger play Minecraft for hours. Definitely not looking at post after post of portraits of peers digitally altered to appear slimmer than they are IRL. No, I’m not talking about passive digital media use.
What do I mean by active digital media, then? Content creation that requires imagination and skill-building; direct communication with friends; and meaningful interactions of any kind. Essentially, digital media experiences with a true sense of give-and-take from a real person, even if that real person is the child alone, putting their imagination to use.
But as LeVar Burton said: “You don’t have to take my word for it.” The largest comprehensive meta-analysis to date found positive associations between active use and social support measures. Passive use? It won’t shock you to learn all those hours scrolling for more videos of other kids opening their toys kinda bums the viewer out (1). Meanwhile, a study of 2,378 adolescents over three years confirmed that passive use quietly distresses users over time, while active use showed no relation to a teen’s stress (2). This pattern holds across cultural contexts and economic classes.
Let’s do this bullet-point style so I don’t wear a hole through my thesaurus:
Experience sampling studies show that individual usage patterns produce highly idiosyncratic effects (3).
A 2024 review of 59 studies found digital technology interventions had “a moderate and significant effect size” (g = 0.43) for promoting youth mental health (4).
The U.S. Surgeon General's Advisory documented that 71% of adolescents report social media helps them show their creative side, 67% have people who can support them through tough times, and 80% feel more connected to friends' lives (5).
Study after study reminds us technology is only a tool. So it should follow we have agency in how we use it. For instance, in marginalized communities critical support can take root and blossom in ways not logistically possible before the internet: Seven out of ten adolescent girls of color encounter positive identity-affirming content, while LGBTQ+ youth, especially in rural areas, use digital platforms to connect with peers when offline communities are unavailable (6). These digital connections remind young people they are not alone (7).
Now that’s a powerful couple of examples of active digital media.
The Parasocial Relationship Crisis: When Connection Becomes Consumption
Agency.
Choice.
Cause-and-effect.
Just as we can use digital media to mine our creativity and stay connected, we can use it to harm ourselves and each other. We all know how social media has aided human nature’s worst instincts. For children, we tend to associate this with anxiety and bullying—and, yes, we will get to that in a moment. But today, research into children, the internet, and self-harm is teaching us more about a threat first defined during the Golden Age of Television in the 1950s. It’s a danger more fundamental for being more common, super nuanced, and—for its victims—completely unintentional: parasocial relationships.
Parasocial relationships are one-way emotional connections to people who don't know we exist: influencers, celebrities, fictional characters. These are typically people we repeatedly encounter in media. Unlike genuine social relationships built on mutual interaction and reciprocity, parasocial relationships are fundamentally unbalanced. The follower, fan, or user invests emotional energy developing feelings of closeness toward someone—or something—to whom they are invisible.
These relationships become particularly harmful in those phases of our lives when we’re trying to figure out who we are. Research with 151 early adolescents found intense celebrity worship linked to body image concerns among female adolescents, leading to "feelings of loneliness and an exacerbation of existing mental health symptoms" when reaching the point of obsession (8).
Now. Ready for something really bleak?
Imagine chatbots entering this already lonely landscape.
Corporate Knowledge and Calculated Harm
If social media's past is AI's prologue, we can predict tech companies’ next moves. Their product design will grow ever more motivated by profit margins. What kind of choices can we assume they’ll make? Well, how has social media changed since you first used it? Don’t you kinda figure they’re going to push products that seduce users into prioritizing their time with bots over real friends and family?
A recent study analyzing 981 participants and over 300,000 messages found AI chatbot use correlates with higher loneliness, emotional dependence, and problematic usage patterns (9). I don’t mean to throw shade at the future of our species, but another study found children are "much more likely than adults to treat chatbots as if they are human." In fact, they have already shown a tendency to disclose more mental health information to AI than to adults (10).
So, sure. Now I’ve become the doom-and-gloom op-ed writer I was mocking above. From revolutionary to cliché in less than 900 words. Well, here’s the doom: It’s now an established fact that companies know their products harm us. Whether tech professionals push back often determines whether they remain in the industry.
So who’s left to make these important strategic decisions, fiscal quarter by fiscal quarter?
Not Frances Haugen. You might remember she was the whistleblower who, in 2021, testified to Congress. Haugen revealed Facebook's internal research showed 32% of teen girls reported Instagram made them feel worse about their bodies. Meanwhile, 13.5% of UK teen girls said it made suicidal thoughts more frequent (11). Internal presentations stated "We make body image issues worse for one in three teen girls" (12).
Imagine receiving that data point on your productivity dashboard.
But we’re not done yet. Industry insiders admit creating "tools that are ripping apart the social fabric of how society works" through "short-term, dopamine-driven feedback loops" (13). The infinite scroll feature, invented in 2006, wastes over 200,000 human lifetimes daily according to its creator, who now expresses deep regret (14) and leads a non-profit that directly challenges the for-profit tech status quo.
Yeah, that guy isn’t developing the next ginormous social media platform either.
One of Facebook’s more memorable demonstrations of its values was the break-up of its own Civic Integrity team after the 2020 election. Sounds bad, right? But what do we know? I’m sure it wasn’t so black-and-white the next morning in the break room at the kombucha cooler. Well, internal employee messages presented by Haugen show colleagues stating they "cannot conscience working for a company that does not do more to mitigate the negative effects of its platform" (11).
So, yeah—it was bad.
Mass employee exoduses. Diminished goodwill from the public. Large tech companies have increasingly reacted to our pushback with a shoulder shrug. It's the cost of doing business. Even expensive lawsuits haven't slowed these billion-dollar entities' rush to, as they themselves love to remind us, "break things." Legal proceedings reveal over 482 cases filed against social media companies, with 42 attorneys general filing lawsuits alleging negligent design, failure to warn, and fraudulent concealment (15). Massachusetts AG Andrea Joy Campbell stated "Meta knew of the significant harm these practices caused to teenage users and chose to hide its knowledge and mislead the public to make a profit" (16).
The data on the harm to our children’s mental health is no longer hidden.
So we’re cool now, right?
The Predictable Patterns of Passive Consumption
Congratulations! If you’re looking for statistics to affirm your priors, you’ve come to the right section!
There are tons of studies that confirm strong associations between doom scrolling and psychological distress, social media addiction, and reduced wellbeing (17). Oh, bullet-points!
Harvard research links doom scrolling to "existential anxiety" and elevated stress hormones including cortisol and adrenaline (18).
Average daily social media use of 4.05 hours feeds compulsive behavior cycles; 70% of participants scroll from bed, a habit directly correlated with sleep problems.
Meta-analyses show 20-40% of adolescents experience cyberbullying victimization, with over 75% of victims presenting to emergency departments with mental health complaints (19).
Cyberbullied youth face 50% increased risk of suicidal thoughts and are more than twice as likely to engage in self-harm (20).
Cross-platform studies show all major visual platforms have statistically significant associations with body image issues (21).
Research reveals 40%+ of teenagers report body image concerns from social media, with rates doubling compared to adults. Only 32% of #bodypositivity videos on TikTok portrayed diverse body types, reinforcing narrow beauty standards (22).
But on the other hand—where else can you go to keep up with what the Russian bots are angry about today?
Teaching Digital Literacy for the Real World
Here’s the thing: We're 20 years deep into social media. We know how these companies work—they optimize platforms for our attention, identity, and time, not for our wellbeing. One of the most valuable skills we can teach young people today? How to read any digital room they enter and choose what actually serves their mental health.
For media literacy education to be effective, it must begin and end with the human element—from individual behavior to collective culture and the economic decisions that shape both. It needs to make clear usage patterns are usage choices. Students need to feel their agency and recognize the difference between active engagement that builds genuine connections and passive consumption that exploits psychological vulnerabilities. They need tools to identify when they're forming parasocial relationships that substitute for real intimacy.
The curriculum should teach young people to ask: Am I creating or consuming? Am I connecting with real people who know me, or developing a one-way relationship with a brand? Am I choosing how to spend my time, or is an algorithm choosing for me?
These skills become more urgent as AI chatbots proliferate and their designs grow more sophisticated. Young people need frameworks for distinguishing between helpful AI tools and products designed to replace their own thought, creativity, and human relationships (23). They need practice recognizing when digital interactions serve their goals versus when they boost corporate engagement metrics.
Starting the Conversation Now
The research foundation for LOLOL is clear: active digital engagement supports mental health while passive consumption creates measurable harm. Corporate design intentionally exploits psychological vulnerabilities, and AI chatbots represent escalating risks for young people already struggling with parasocial relationships.
As for LOLOL’s design: Why comedy? The challenge of media literacy today is the same as before the days of Facebook and MySpace. The problem isn’t technological, but cultural. It needs a cultural solution. And I can’t think of a more basic unit of culture with which to build than a shared laugh.
The opportunity? That lies in empowering young people with practical skills for navigating digital spaces intentionally. Instead of warnings about evil platforms, we can teach students to recognize and choose digital behaviors that help them realize their complete selves.
Let's start this conversation now while we can still effect change. First, in ourselves. Then, maybe the stars. ❀
U.S. Surgeon General's Advisory (2023). Social Media and Youth Mental Health
Common Sense Media and Hopelab (2024). Social Media and Mental Health Report
The Role of Online Social Support in Mental Health: Comparing Rural and Urban Youth. PMC
Wall Street Journal's Facebook Files and Frances Haugen testimony documentation
Addicted by Design: The Hidden Psychology Behind Social Media's Hold on Us. Medium
How the invention of infinite scrolling turned millions to addiction. Medium
Current perspectives: the impact of cyberbullying on adolescent health. PMC
Social media use and body image issues among adolescents in a vulnerable Louisiana community. PMC
Young People, Body Image and Social Media. The Digital Wellness Lab
LOLOL
with Mx. Dahlia Belle
Media literacy… with jokes.
All illustrations drawn by Andrew Stout’s hand on an iPad (10th generation) with Procreate (5.3.15) and processed with Adobe Illustrator 2025, unless otherwise noted (the purple and orange Portland skyline was AI generated at Leonardo.ai).
All text written and edited by Andrew Stout in Apple Notes (4.11) and Microsoft Word (16.96.2), with assistance from Word’s spellcheck and a glance or two at the Associated Press Stylebook, 2012.