Ever since the internet became a household staple, one problem has seemingly never had a perfect solution: kids. I’m sure some of you are aware of what a minefield the early internet was, and how easily someone who didn’t know better could end up in potentially harmful situations.
It was for this reason that the first child internet safety acts were put into place in the late ’90s, limiting what kinds of content minors could access on the web.
But as time went on, the internet grew, and it became increasingly easy for children to be put in harm’s way. That’s why more bills aiming to limit the content accessible to minors have been attempting to make their way through Congress in the past few years.
One of these bills was the Kids Online Safety Act, or KOSA, which would require websites to verify that a user was not a minor before allowing them to view potentially inappropriate content.
However, some critics call these laws unconstitutional, arguing that what they really do is limit information to certain demographics of people under the facade of being “for the kids,” which would violate freedom of speech.
This is because some of what these laws consider “harmful” content can be informational, and could actually be used to educate people about a sensitive subject.
These complaints, however, haven’t stopped some organizations from introducing more censorship into internet usage.
In late July, the UK’s Online Safety Act took effect, limiting what kinds of content a minor can see. Websites now have to ensure that people accessing potentially harmful content are not minors: “Platforms have a legal duty to protect children online.”
Another group that has recently implemented censorship is YouTube, a video-sharing website that has dealt with child safety issues in the past.
In the US in early August, YouTube rolled out new AI software that, as The Official YouTube Blog explains, would detect whether a user is underage based on the videos that user watches. If an account is wrongly flagged as a child’s, the user would then have to present a photo of their ID in order to regain access to most of the platform’s content.
“Since they are their own platform, it’s completely fine,” Braxton Nattrass, sophomore, said when asked about the platform’s recent policy. “But I also believe it could lead to a path of discrimination because when you are able to control who and who doesn’t come into the platform you can restrict some people from coming on for many reasons. It also leads to a rabbit hole of what should and shouldn’t be censored.”
There are also complaints about the use of AI in this system. The algorithm has been prone to mistakes, and many users have been able to upload fake IDs or photos to bypass the AI, rendering it useless. Others have opted to use a VPN to avoid the AI altogether.
It should also be noted that these new systems could threaten users’ privacy, as users must upload personal information online and trust that the site it’s uploaded to doesn’t suffer a data breach.
While child safety is a major concern in the online world, a perfect balance between censorship and freedom of speech can be hard to find. There are pros and cons to each side of the argument, and which answer is the right one is still debated by various groups across the globe.