
Pew Study: Teens and Internet Filters

Carmen M. Altamirano

The Pew Internet & American Life Project has released a study finding that families with teens (aged 12 to 17) have dramatically increased their use of internet filters and monitoring software -- up 65 percent, from 7 million households at the end of 2000 to nearly 12 million today. Released on March 17, 2005, the new Pew study, “Protecting Teens Online,” reports that 54 percent of internet-using families with teens now use filters.

As American teens become more active internet users, parents are finding it necessary to block or monitor their teens’ activities on the web. Non-technical safeguards are common in internet-using households -- household rules, or locating the family computer in a public area of the home -- but the Pew study found that parents of teens are increasingly adding a technical safety net in the form of internet filters and monitoring software.

As consumers -- in this case, parents of teens -- adopt internet filters, the question for marketers is what else gets blocked. Internet filters are intended to block inappropriate content, and as a result most are also designed to function as pop-up blockers.

Much as anti-spyware programs, in attempting to block malicious content or viruses, end up blocking far more than that, internet filters not only spare the viewer unsolicited content and images but may also block many online marketing ads and, depending on the technology, render traffic statistics questionable.

Although parents view the internet as a positive resource for their teens, they have concerns about unsolicited content. In a February 7, 2005 interview on the “Future Tense” radio program, host Jon Gordon talked with Patricia Greenfield, professor of psychology and director of the Children’s Digital Media Center at UCLA, about unsolicited internet pornography. Greenfield states that the internet poses “new opportunities as well as new risks.” Upon visiting a teen chat room to observe, Greenfield immediately “started getting instant messages, many of which were sexual propositions.”

The Pew study reports that “as of late 2004, 87 percent of all American teens aged 12 to 17 go online, which is about 21 million teens.” The report also found that 80 percent of parents with young children (under age 12) go online, compared with 87 percent of parents with teens (aged 12 to 17).

Collectively, American teens and their parents make up the largest online group: “Given that 66 percent of all Americans use the internet, parents and teens are more likely to be internet users than the general population,” reports the Pew study.

Filters and monitoring software

The Pew study reports that “the two main filter locations are the client side or server side.” The most prevalent forms are as follows:

Client-side filters run on the user’s computer, functioning as or in combination with a web browser, and are the most flexible. They are installed as software, either downloaded or purchased off the shelf. Popular client-side filters for children include Crayon Crawler and MyWeb; for teens, popular examples are Net Nanny and CyberPatrol.

Server-side filters operate through a third-party server that sits between the home computer and the wider internet, blocking the user’s requests for restricted sites along with graphic content and images. Server-side filtering works either through the ISP or through a web-based service.

ISP-based filtering lets the parent control content according to what the parent deems acceptable or unacceptable; the parent determines what is blocked. A frequently used ISP-based service is Northern Trail Internet Access.

Web-based filtering operates through the provider company’s server and does not allow the subscriber to control what content is blocked. The service is offered by subscription for a monthly fee; Surf on the Safe Side is one such company.

Some software, configured by the parent, prevents the child from giving out personal information on the web, such as name, address, and Social Security number. Other software offers time limits and monitoring options with an inspection area where parents can view their teen’s search activity.

Parents using internet filters

The likelihood of filter use depends primarily on the parents’ familiarity with the internet. According to the Pew report, parents who go online often or daily are more likely to use filters than those who do not (58 percent to 47 percent). Mothers are more likely than fathers to report using filters (59 percent to 49 percent), and parents under 40 are more likely than older parents to do so (64 percent to 49 percent). Families with younger teens (12 to 14 years) are more likely to use filters than those with older teens (15 to 17 years), a 60 percent to 49 percent difference. Parents with some college or a college degree are slightly more likely to install filters (56 percent to 52 percent). By ethnicity, African-American parents were the most likely to use filters (64 percent), followed by Hispanic parents (61 percent) and Caucasian parents (52 percent).

Back in 2000, a different Pew study on internet filters showed that parents of teen girls were more likely to install filters than parents of boys. The new study, however, finds that a teen’s gender is no longer a factor in whether parents install filters. Nor does it much matter whether a household has a broadband or dial-up connection.

Putting filters to the test

With the surge of teen internet usage, the influx of unacceptable content and parental concern about online predators, questions about internet filter effectiveness continually arise.

In 1998, in an effort to prevent minors from accessing unacceptable websites, Congress passed COPA, the federal Child Online Protection Act. COPA requires websites with content “harmful to minors” to use a verification system -- such as credit card verification -- so that only users aged 18 and older can enter.

Although the law was blocked in court last June, a suit brought by the ACLU and other civil rights groups continues to challenge COPA as unconstitutional. One of the questions raised in the case is whether filters and monitoring software are effective enough to block harmful websites and screen content. As a result, several studies evaluating internet filters have followed.

Despite parents’ efforts to protect their children and teens by implementing rules and safety nets, “everyone -- parents (81 percent) and teens (79 percent) -- still worries that teens are not careful enough when using the internet,” reports the Pew study.

According to Pew, a 2001 Consumer Reports study of the under- and over-blocking of internet filters found that “most filters tested blocked 20 percent of 86 easily located objectionable sites selected.” However, when the filters were tested against 53 controversial yet legitimate sites, they also blocked up to 20 percent of those sites.

Similarly, the Pew report cites a December 2002 Kaiser Family Foundation study of internet filters and health content, which found that “most filters, when set at their least restrictive settings, only blocked about 1.4 percent of health information sites and about 87 percent of all pornographic sites.” Set at their most restrictive settings, the filters blocked 24 percent of health sites and 91 percent of pornographic sites.

Pew reports that despite the use of filters, most teens find ways around the system and admit to “doing things online that they wouldn’t want their parents to know about,” whether the content relates to pornography or simply to information about sexually transmitted diseases or mental health.

For parents, installing and using internet filters can be difficult, and this is a problem because customized installation is an important factor when it comes to filter performance. As the Pew study points out, “While filters have become more flexible and transparent in recent years, customizing a filter to reflect a family’s or a community’s values can be time-consuming and often requires more than a modicum of tech savvy.”

Additional resources:

Protecting Teens Online (2005): A Pew Internet & American Life Project report

Teens Struggle with Accidental Exposure to Internet Pornography: An interview with Patricia Greenfield, director of the Children's Digital Media Center and professor of psychology at UCLA.

Why most sites are second-rate

The key causes of this problem are that HTML is fairly simple and that web browsers are extremely forgiving.

HTML is a very easy coding system to learn. All you need to do is memorize a few dozen tags, and you can create pleasing and effective websites. This enables many people to teach themselves HTML and become web designers. Unfortunately, most of them don't learn it properly or have any real understanding of what they're trying to achieve. Most of them think building websites is about creating stuff that looks good, but it's more subtle than that. Web design is really about building stuff that looks good on someone else's device, not your own, and that is useful. There are literally hundreds of ways to code HTML for the same visual effect, but many of them will produce a slow, unresponsive site that drives people crazy or renders the site impenetrable to search engines.

Furthermore, even if you make mistakes in your coding, browsers will bend over backwards to handle your errors. This means you can learn HTML incompletely and make mistakes without ever knowing it, because the browser is constantly compensating for you.

Gross neglect of speed

The most common failing of the second-rate coder is bloatware -- vast quantities of overly long and complex code where a few efficient lines would do the same task. It's as if Charles Dickens was their model: "Why use two lines of code where 20 will accomplish the same task?" Sometimes it looks like these coders are being paid by the line, and sometimes it looks like they're just trying to make work for themselves to avoid doing anything more useful.

However, I suspect what is really happening is they're just grabbing the first solution that springs to mind and never raising themselves to the level of asking, "Can I do this better? Is there a more efficient way of coding this?" It's far easier to simply trot out code like a donkey with your brain in neutral.

Bloatware has a number of negative consequences. First, it makes the pages slower to download and harder for the browser to process. Both download and processing time are important factors in search engines' assessments of a site because they want to send people to faster sites. As mobile computing grows, this will become a bigger and bigger issue.

Speed seems to have been forgotten by the web design industry around the time broadband arose. Prior to that, in the 1990s, everyone was very aware that web pages took time to download and bore that in mind when designing websites. Speed was so central to design that major development tools like Dreamweaver kept a running total of download time in the status bar as you coded so that you could see the impact of your changes on the site's speed. Designers didn't like casting aside their lovely creations because they were too slow, but they accepted the commercial realities of the world they inhabited and learned to compromise between appearance and performance.

With the rise of broadband, the web design industry simply forgot about speed to the point of ridiculousness. These days, designers throw in multiple calls from browser to server during page rendering. They call down web fonts, third-party components in iframes, and so on. The consequence is that many websites are slower now, over high-speed broadband, than they were when we were all running 56K dial-up connections.

While designers might have forgotten about speed, users haven't. There's a direct connection between website speed and the site's appeal. Sites that render in under five seconds are four times more likely to get a conversion than sites that take longer. This situation is even worse in the mobile market. In mobile, the critical time span is only two seconds, and you will get 10 times more mobile conversions if you meet this limit. Since search engines want to send people to sites that people like, search engines reserve the higher rankings for faster sites.

Most sites can be dramatically sped up with no visual changes, simply by re-coding for speed. Many sites use multiple JavaScript functions, called from multiple JavaScript files. It doesn't take more than a few minutes to combine them all into a single file. That alone can double the speed of a website. The same is true of CSS files, which provide stylistic information. While web fonts can look great, they are slow because they have to be downloaded from the web. Even Google warns designers about this and recommends no more than one web font per website.
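As a rough sketch of what that consolidation looks like in the page's head section (the file names here are invented for illustration), the difference is simply the number of round trips the browser must make before it can render anything:

    <!-- Before: six separate requests before the page can render (hypothetical files) -->
    <head>
      <link rel="stylesheet" href="/css/reset.css">
      <link rel="stylesheet" href="/css/layout.css">
      <link rel="stylesheet" href="/css/theme.css">
      <script src="/js/menu.js"></script>
      <script src="/js/slider.js"></script>
      <script src="/js/tracking.js"></script>
    </head>

    <!-- After: the same styles and scripts concatenated into one file each -->
    <head>
      <link rel="stylesheet" href="/css/site.css">
      <script src="/js/site.js"></script>
    </head>

Nothing about the page's appearance changes; the browser simply has fewer files to fetch.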

Most search engine optimizers know all this. This is why a good SEO company will want to "search optimize" your site's code. Its salespeople might tell you they're doing fancy stuff based on a detailed knowledge of Google's search algorithms, but what the SEO techie is really doing is nothing more than basic code optimization for speed. This is something every web designer used to do automatically. It remains something customers could reasonably expect from every web designer today as a basic part of the service. But they don't get it, so SEO companies charge for it.

The CSS problem

Bloatware is even more common in second-rate CSS coding. CSS is the system used to tell a browser what visual appearance page elements should have. Most page elements, such as paragraphs and headings, have a default appearance, but it's pretty horrible. CSS is used to add extra style commands to do things like change margins, create borders, specify fonts and colors, and so forth. Tricks like sliding menus or transparent backgrounds can also be accomplished with CSS.

To give an element a specific appearance with CSS, you have three choices. The most basic method is to create a new default appearance, which will apply to that type of element everywhere. I can, for example, specify that all headings will be 20 point and red. The downside is that this will apply everywhere, without exception. It is therefore more usual to create a "class" for that element. So I could create a red/20pt class for headings, and then apply it only to the exceptional headings that I wanted to give that appearance, leaving the rest unchanged. Thirdly, I could also create an "id," which has pretty much the same effect as a class.
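To make those three options concrete, here is a minimal CSS sketch (the class and id names are invented for illustration):

    /* 1. Redefine the default appearance: every h2 on the site becomes 20pt and red */
    h2 { font-size: 20pt; color: red; }

    /* 2. A class: applied only to the headings you choose, reusable as often as you like */
    .alert-heading { font-size: 20pt; color: red; }

    /* 3. An id: the same visual effect, but meant for a single element on the page */
    #promo-heading { font-size: 20pt; color: red; }

In the page itself, the second and third options are applied as <h2 class="alert-heading"> and <h2 id="promo-heading"> respectively.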

The standard says you use classes when you want to use the same formatting over and over, and ids when you only want to use that formatting once. Browsers are programmed to hold these objects in memory differently as a result, tuning the handling of classes for repeated use and treating ids as one-time, disposable objects. However, many second-rate coders will use multiple ids with identical formatting when they should be using just one class.

While this might seem like a pedantic distinction, it has a huge impact on performance. Each CSS definition has to be downloaded by the browser and individually processed, so the more of them you have, the bigger your files, the slower the download time, and the longer it will take the browser to process the page before it can display it. It is therefore obvious you don't want any more CSS definitions than necessary. Yet website after website is bloated with multiple identical CSS definitions. Many pages contain a unique id for every element, yet all those different ids do exactly the same thing. I have seen web pages with literally hundreds of different ids, all designed to create exactly the same appearance. Each of them had to be individually processed, taking time, when one class could have done the same job in less than 1 percent of the time.
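The pattern looks something like this (selector names are hypothetical): the bloated version defines a fresh id for every heading, while the lean version defines one class and reuses it everywhere:

    /* Bloatware: a separate id per element, all producing exactly the same appearance */
    #heading-news  { font-size: 20pt; color: red; }
    #heading-about { font-size: 20pt; color: red; }
    #heading-shop  { font-size: 20pt; color: red; }
    /* ...repeated once per heading, hundreds of times on some sites... */

    /* The same job done once */
    .section-heading { font-size: 20pt; color: red; }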

What sort of idiot would code multiple copies of exactly the same thing? Didn't they realize the stupidity of what they were doing while they copy-pasted the same commands over and over? No -- they were simply second-rate coders doing a second-rate job. They weren't thinking about what they were doing at all; they were just doing it. If there were enough coders to meet the demand, we could do the world a favor and throw these people out into the street where they belong. But there's a shortage of coders, so SEO agencies pick up easy cash cleaning up the mess second-rate coders leave behind.

Lazy structure

CSS misuse gets worse when it comes to creating a proper "meta structure." The meta structure of a document is its major functional components. Web pages have headings and paragraphs, and often lists and tables. Search engines need to know which is which. A heading tells the search engine something about the paragraph underneath it and is clearly a different type of content from that paragraph. Search engines need to know whether copy is a heading or a paragraph so they can understand the structure of the page and how the different bits of copy relate to each other.

However, many web designers don't use paragraphs or headings at all. Instead of using the correct <p> and <h1> tags designed 25 years ago for just this purpose, they use <div> tags.

A <div> is just code for a block of space on screen. It could be a paragraph, a call-out box, a block of images, the entire page, or almost anything else. Designers use it because it has no default appearance, so it can be used to easily create whatever style they want.

There's nothing wrong with using divs -- unless they are being used instead of paragraph and heading tags. With CSS, it is possible to make a set of divs that create the visual appearance of headings and paragraphs, so the design works fine for humans. However, if your page has no headings or paragraphs, search engines won't be able to tell what's what in your page. If you have competitor websites that are doing HTML properly -- using heading and paragraph tags to guide the search engines -- they will outrank you simply because search engines will be more certain of their understanding of the content. As a result, much SEO work is simply replacing divs with <p> and <h1> tags. Easy money, courtesy of second-rate coders.
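The contrast looks roughly like this (the class names are invented). Both versions can be styled to look identical to a human visitor, but only the second tells a search engine which block is the heading and which is the copy:

    <!-- Div soup: visually fine, structurally meaningless -->
    <div class="big-red-text">Our services</div>
    <div class="body-text">We build websites for small businesses.</div>

    <!-- Proper meta structure: the same content, labeled as a heading and a paragraph -->
    <h1>Our services</h1>
    <p>We build websites for small businesses.</p>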

Not all divs are bad. Divs are essential for creating visual appearance around a block of paragraphs, or for things like rounded corners in borders. However, even here we see bloatware. Lazy designers are those who don't think, but instead simply write vast amounts of inefficient code.

A common bloatware technique is to use one div for one aspect of the appearance, such as a border style, then another inside it for the font formatting, then another inside that for the line spacing, then another inside that for a background color, and so forth. Some pages end up with 10 or 15 divs nested inside each other when one or two could have done exactly the same job. They might even repeat this overloaded structure on every single paragraph. Multiple divs like this are very complicated for the browser to process because CSS commands can override other CSS commands. This means multiple divs have to be cross-referenced with each other to determine the final appearance. This makes a noticeable impact on speed and can even overload some browsers completely so that they can't display properly at all. Good coders try to minimize the number of nested divs they deploy. Second-rate coders simply never think it through to this level.
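A sketch of the anti-pattern (class names invented): each nested div carries one property, when a single rule could carry them all.

    <!-- Bloatware: one div per style property, nested for no reason -->
    <div class="with-border">
      <div class="body-font">
        <div class="line-spacing">
          <div class="grey-background">
            <p>Some copy.</p>
          </div>
        </div>
      </div>
    </div>

    <!-- The same appearance from a single wrapper and one combined rule -->
    <style>
      .panel {
        border: 1px solid #ccc;      /* was .with-border */
        font-family: Georgia, serif; /* was .body-font */
        line-height: 1.5;            /* was .line-spacing */
        background-color: #eee;      /* was .grey-background */
      }
    </style>
    <div class="panel">
      <p>Some copy.</p>
    </div>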

Ignorance on the server level

Second-rate work at the server level also provides SEO agencies with some easy money. There is a (small) set of "status codes" that web servers use to indicate the status of web pages to browsers and search engines when they ask for them. The most common code is 200, which means "everything is OK." After sending out a 200 status code, the server will follow up with the requested file.

The next most common one is 404, which means "can't find the file, server's running fine, no idea what's wrong." A 404 is not an appropriate code for a server to send out when a webpage has been removed from the site, moved, or renamed. A 410 is the correct code to say "permanently deleted," and there are codes in the 300 range for different types of file relocation or renaming. Yet most websites simply serve a 404 for pages that have been deleted, renamed, or moved. This drives search engines nuts. Getting a 404 is like your website saying, "Gee dude, I'm working perfectly, I think, but duh, don't know, like wow, I can't find the file, this is like, you know, check it out, something's not right, but -- hey -- don't ask me what's wrong 'cause I'm OK. Bye." In other words, 404 is a good way of saying, "My website's a moron."

The 404 code is not a strategy -- it's the absence of thought by server admins, usually under the "guidance" of thoughtless web designers (who were the people who removed, renamed, or moved the page in the first place). If Google asks for a page and gets a 404, it has to come back and ask for the page again. If the page has been intentionally removed, it could waste six months repeatedly asking for the page. Multiply this over billions of websites, and it adds up to a significant cost for Google, a cost that could have been avoided if people merely ran their websites according to the standard they're getting paid to use. As a result, Google promotes websites that use status codes as they were designed and downgrades the listings of sites that just moronically 404 everything. So a good SEO design agency can take more of your money for simply setting up a proper HTTP status code regime -- a task your web designers should have done as part of normal operations.
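As an illustration only: on an Apache server, a basic status-code regime takes just a few lines of configuration (the paths below are hypothetical, and nginx, IIS, and other servers have equivalent directives):

    # Page renamed or moved: send a permanent redirect (301) to the new address
    Redirect 301 /old-pricing.html /pricing.html

    # Page deliberately removed for good: say so with a 410 ("gone"), not a 404
    Redirect gone /discontinued-product.html

    # Genuinely unknown URLs still return a 404 -- which is all a 404 should ever mean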

The problem with SEO agencies

You won't hear any of this from most SEO agencies. Most SEO agencies survive via their working relationship with design agencies. Customers usually seek out design agencies, relying on the designers to provide the SEO services. Sometimes these are delivered from in-house staff, sometimes via an outside SEO specialist.

If an SEO agency gets called in by the design agency, they're not going to damage that relationship by telling the client much of the fee is simply because the agency did a second-rate coding job. The SEO agency keeps its opinions to itself. Even if the design agency's management could handle being told, the SEO agency won't do it. Firstly, the SEO staff have to liaise directly with the coders. It makes for difficult working relationships if your liaison knows you've told his boss that you think his work is sub-standard. Secondly, it rarely does any good. Most design agencies lack suitable technical management structures to address this issue. Furthermore, most web design team leaders are just as second-rate as their staff. Coding quality is usually cultural to a business. In a good design agency, all staff at all levels will code well (after all it's not hard), while all the work coming out of a second-rate agency will be badly coded, no matter who codes it.

It's worse if the SEO staff are inside the design agency. There's no way they can tell the client much of the fee is simply for covering over second-rate coding by their co-workers. In most cases the SEO people won't even warn their own management because, for the reasons just listed, it won't make any difference, it won't increase the company's bottom line, nor will it be welcomed by managers who (usually) can't do anything about it.

How to fix this massive problem

If you can cut HTML code, you'll know whether you are guilty of these second-rate practices yourself. If you are, have a little respect for your craft and learn to code HTML properly.

If you're a customer, not a coder, firstly you have my sympathy. If you're unsure whether your sites are the work of second-rate coding, open the code up (a simple "view source" in most browsers). Do you see every paragraph starting with a <p> tag and every heading starting with a heading tag (<h1>, <h2>, etc.)? Or do you see line after line of div tags? This is not rocket science. If you can't find <p> or <h1> tags in your pages, you've got a second-rate website.

Remember: You can't tell a second-rate coding job from the visual appearance of the site. A site can look great and still be second-rate. It's just like a house. I could create a building with fancy windows and ornate architectural features -- something that looked great. But if it was made of rotten timbers, bad wiring, and leaky plumbing, it would still be a second-rate house -- albeit a good-looking second-rate house. A website can be just like that: a second-rate mess that looks great.

If you have a second-rate website, you're not ready for SEO. Your initial SEO expenditures will just get consumed bringing your site's code up to scratch. Don't join in the SEO conspiracy of "let's take money for doing what those second-rate web designers were supposed to do in the first place." Get rid of your second-rate code before you take your site to the search engines. The search engines will punish you if you don't.

Brandt Dainow is the CEO of ThinkMetrics.
