
Your Child's Most Dangerous Online Relationship Is Not With a Person.


Every school in the country teaches children about stranger danger online. About bullies. About the risks of sharing personal information with people they don’t know.


These are real dangers. They deserve the attention they receive.


But there is another danger operating at a scale that dwarfs all of them combined. It reaches every child with a device. It operates continuously, not occasionally. It requires no malicious human actor to function. And it is almost entirely absent from every online safety curriculum currently taught in schools.


The danger is the algorithm itself.


Not a stranger. Not a bully. Not a human being with harmful intentions.


A system. Designed by some of the most sophisticated engineers on earth. Optimised for one purpose. To capture and hold attention for as long as possible. And running, right now, on the device in your child's pocket.


What it does that stranger danger education doesn't address


Traditional online safety education teaches children to recognise and resist manipulation by other humans. That is a valuable skill.


But the algorithm is not a human. It does not make mistakes. It does not get tired. It does not have bad days. It simply optimises continuously, invisibly, without malice or conscience, for the behaviour most likely to keep your child engaged.


It has studied your child's specific patterns, vulnerabilities, and emotional responses longer than most teachers have known them. It knows what makes them linger. It knows what brings them back. It knows, with extraordinary precision, the exact content to surface at the exact moment their resistance is lowest.


No human predator has that capability.


What it does to children specifically


The algorithm identifies the emotion that keeps a child engaged, then reinforces it continuously. Across the years when identity is forming, that emotion stops feeling like manipulation.


It feels like themselves.


The child who was fed outrage becomes the angry teenager who doesn't know why they're always angry. The child who was fed comparison becomes the insecure adolescent measuring every moment of their life against a curated highlight reel. The child who was fed validation hunger becomes the young adult who cannot tolerate being unseen.


These are not character flaws.


They are calibrations. Produced by a system that was never asked whether what it was optimising for was good for the children inside it.



What stranger danger education misses


Stranger danger teaches children to be suspicious of unknown humans approaching them online.


It does not teach them to be suspicious of the environment itself.


It does not teach them that the feed is not a neutral window onto the world. That every piece of content was placed in front of them by a system that knows their specific vulnerabilities. That the emotion they feel watching something was, in many cases, engineered rather than natural.


It does not give them the tools to ask, before they consume anything, who made this, who benefits from my attention, and what do they want me to feel.


The purpose of the three questions is to create a pause between stimulus and response. Psychologists call this metacognition: the ability to notice your own thoughts and emotions instead of automatically reacting to them.


Those three questions are the difference between a child who is inside the system and a child who can see it.


What needs to change


Online safety education needs a fourth category alongside stranger danger, bullying, and human manipulation.


Algorithmic literacy.


The ability to recognise that the digital environment is a built environment. That it was designed with outcomes in mind. That those outcomes were not designed with the child's wellbeing as the primary consideration.


This is not about frightening children. It is not about banning phones or demonising technology.


It is about giving children the same basic literacy we give them for every other environment they inhabit.


We teach children to look both ways before crossing a road not because roads are evil but because understanding how roads work keeps you alive.


The algorithm is not evil either.


It is simply indifferent.


And indifference at scale, directed at children who have no framework for recognising it, produces outcomes we are already seeing in mental health statistics, attention disorders, and the specific loneliness of a generation that has never been more connected.


The three questions every child should know


Before opening any app. Before handing a device to a child. Three seconds. Three questions.


Who made this?

Everything in the feed was created with an outcome in mind. This question breaks the illusion of neutrality.


Who benefits from my attention?

Your child is not the customer. Their attention is the product being sold to advertisers. This question names the transaction.


What do they want me to feel and why?

Outrage keeps you scrolling. Envy drives consumption. Validation keeps you coming back. This question identifies the emotional engineering underneath the content.


A child who asks these questions is not a child who never uses social media.


They are a child who knows what the transaction is before they enter it.


That knowledge changes everything.


The stranger at the door is a danger children can see.


The algorithm is the danger they are already inside.



And most of them have no idea it's there.
