How SEO myths can cost you

Every day a new search engine optimization myth is born; unfortunately, not every day does an old myth die off. The net result is a growing population of myths. These are nearly impossible to squash because snake-oil salesmen keep perpetuating them — bringing them back from the brink, even. You can talk at conferences till you’re blue in the face. You can develop definitive SEO checklists or even write a top-rated SEO book (e.g., The Art of SEO). You’ll still get asked how to write good meta keywords.

I, for one, hate misinformation and disinformation, and the SEO industry, unfortunately, is rife with it. I’m going to do my part in fighting this menace and spreading the truth — by exposing some of the more insidious myths in this very article.

And now, without any further ado, the list.

MYTHS ABOUT CONTENT:

  • Great content equals (i.e., automatically leads to) great rankings. Just like great policies equal successful politicians, right?

  • Meta tags will boost your rankings. Fact: Optimizing your meta keywords is a complete waste of time. They have been so abused by spammers that the engines haven’t put any stock in them for years. In fact, Google never did support this meta tag. None of the various meta tags are given any real weight in the rankings algorithm.

  • If you define a meta description, Google uses it in the snippet. We already learned from my last column (“Anatomy of a Google Snippet”) that this is oftentimes not the case.

  • Tweaking your meta description is the way to optimize the Google snippet’s conversion potential. As I described in that article, the snippet content can be cobbled together from data from multiple sources.

  • There’s an ideal keyword density value that you should optimize to, and you can find it by measuring the keyword density of your high-ranking competitors. There is no magic density number; the engines’ relevance calculations are far more sophisticated than a simple percentage of keyword occurrences.

  • Placing links in a teeny-tiny size font at the bottom of your homepage is an effective tactic to raise the rankings of deep pages in your site. Better yet, make the links the same color as the page background (I’m being facetious). Google’s algorithms are obviously more sophisticated than this dirty trick.

  • Google penalizes for duplicate content. I’ve long stated that it’s a filter, not a penalty. It may feel like a penalty because of the resultant rankings drop, but Google’s intention is not to penalize for inadvertent duplication due to tracking parameters, session IDs and other canonicalization snafus.

  • H1 tags are a crucial element for SEO. Research by SEOmoz shows little correlation between the presence of H1 tags and rankings. Still, you should write good H1 headings, but do it primarily for usability and accessibility, not so much for SEO.

  • The bolding of words in a Google listing signifies that they were considered in the rankings determination. Fact: This phenomenon — known as KWIC (keyword in context) in information retrieval circles — exists purely for usability purposes.

  • It’s helpful if your targeted keywords are tucked away in HTML comment tags and title attributes (of IMG and A HREF tags). Since when have comment tags or title attributes been given any weight?

  • Validating and cleaning up the HTML will drastically increase the speed of a site or page. The biggest bottleneck in site speed is not HTML parsing; most of the load time goes to fetching and rendering everything else on the page. If you want to be blown away, read Google chief performance engineer Steve Souders’ books High Performance Web Sites (for primarily server-side stuff like caching reverse proxies and Gzip compression) and Even Faster Web Sites (for primarily client-side stuff like JavaScript optimization). A quick compression check appears in the sketch just after this list.

  • Googlebot doesn’t read CSS. You’d better believe Google scans CSS for spam tactics like hidden divs.

  • Having country-specific sites creates “duplicate content” issues in Google. Google is smart enough to present your .com.au site to Google Australia users and your .co.nz site to Google New Zealand users. Not using a ccTLD? Then set the geographic target setting in Google Webmaster Tools; that’s what it’s there for.

  • It’s important for your rankings that you update your home page frequently (e.g., daily). This is another fallacy. Plenty of stale home pages rank just fine, thank you very much.

  • Using Flash will tank your SEO. Flash elements aren’t bad for SEO per se; it’s sites built as a single Flash movie, with no links and no semantically marked-up text, that create problems.
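
As promised in the site-speed item above, here is a quick way to check whether a server is actually Gzip-compressing its responses. This is only a minimal sketch using Python’s standard library; the URL and user-agent string are placeholders, not anything the engines require.

    import urllib.request

    def reports_gzip(url):
        """Ask for a gzip-encoded response and report the Content-Encoding header."""
        req = urllib.request.Request(url, headers={
            "Accept-Encoding": "gzip",
            "User-Agent": "compression-check/0.1",  # placeholder user-agent string
        })
        with urllib.request.urlopen(req) as resp:
            encoding = resp.headers.get("Content-Encoding", "(none)")
            print(url, "->", encoding)
            return encoding == "gzip"

    # Placeholder URL:
    # reports_gzip("https://www.example.com/")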

MYTHS ABOUT SITE ARCHITECTURE:

  • It’s good practice to include a meta robots tag specifying index, follow. This is totally unnecessary. The engines all assume they are allowed to index and follow unless you specify otherwise.

  • You can keep search engines from indexing pages linked-to with JavaScript links. There are many documented cases of Google following JavaScript-based links. Google engineers have stated that they are crawling JavaScript links more and more.

  • You should end your URLs in .html. Since when has that made a difference?

  • Hyphenated domain names are preferable for SEO. Fact: Too many hyphens make your domain look like spam to the search engines.

  • Having an XML Sitemap will boost your Google rankings. Google will use your sitemaps file for discovery and potentially as a canonicalization hint if you have duplicate content. It won’t give a URL any more “juice” just because you include it in your sitemaps.xml, even if you assign a high priority level to it (see the sitemap sketch after this list).

  • Using a minimum of 40 tags per blog post helps increase your ranking in search engines. This was from a self-proclaimed marketing guru and SEO expert, if you can believe it.

  • There’s no need to link to all your pages for the spiders to see them. Just list all URLs in the XML Sitemap. Orphan pages rarely rank for anything but the most esoteric of search terms. If your web page isn’t good enough for even you to want to link to it, what conclusion do you think the engines will come to about the worthiness of this page to rank?

  • Google will not index pages that are accessible only by a site’s search form. This used to be the case, but Google has been able to fill out forms and crawl the results since at least 2008. Note this doesn’t give you permission to deliberately neglect your site’s accessibility to spiders, as you’d probably be disappointed with the results.

  • There are some unique ranking signals for Google Mobile Search, and they include the markup being “XHTML Mobile.” Google Mobile Search results mirror those of Google Web Search. By all means, create a mobile-friendly version of your site; but do it for your users, not for SEO.

  • The Disallow directive in robots.txt can get pages de-indexed from Google. As I explained in my article “A Deeper Look at Robots.txt,” disallows can lead to snippet-less, title-less Google listings. Not a good look. To keep pages out of the index, use the Noindex robots.txt directive or the meta robots noindex tag — NOT a Disallow directive, which only governs crawling (see the robots.txt sketch after this list).

  • It’s considered “cloaking” — and is thus taboo and risky — to clean up the URLs in your links selectively and only for spiders. If your intentions are honorable, then you have nothing to fear. All the major search engines have said as much. You are helping the engines by removing session IDs, tracking parameters and other superfluous parameters from the URLs across your site — whether it’s done by user-agent detection, cookie detection or otherwise (see the URL-cleanup sketch after this list).

  • You can boost the Google rankings of your home page for a targeted term by including that term in the anchor text of internal links. Testing done by SEOmoz found that the anchor text of your “Home” links is largely ignored. Use the anchor text “Home” or “San Diego real estate” — it’s of no consequence either way.

  • A sitemap isn’t for people. A good (HTML, not XML) sitemap is designed as much for human consumption as it is for spiders. Any time you create pages/copy/links solely for a search engine, hoping they won’t be seen by humans, you’re asking for trouble.

  • The canonical tag is just as effective as 301 redirects for fixing canonicalization. Not really. A 301 redirect actually moves visitors and consolidates link authority at the target URL, while the canonical tag is only a hint that the engines may choose to ignore.

  • Flawless HTML validation can help improve your rankings. Take any popular search term and run a validation check against the top-10 results. Most of them will fail validation. Search engines are much more interested in the quality of the content on the page and are smart enough to overcome most parsing errors in HTML documents.
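
To make the XML Sitemap point above concrete, here is a minimal sketch (Python standard library only) that writes a bare-bones sitemap with the optional priority element. The URLs and priority values are placeholders; as noted in the list, priority is a crawl hint, not a ranking boost.

    import xml.etree.ElementTree as ET

    SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

    def write_sitemap(entries, path="sitemap.xml"):
        """Write one <url> element per (loc, priority) pair to a sitemap file."""
        urlset = ET.Element("urlset", xmlns=SITEMAP_NS)
        for loc, priority in entries:
            url_el = ET.SubElement(urlset, "url")
            ET.SubElement(url_el, "loc").text = loc
            ET.SubElement(url_el, "priority").text = str(priority)
        ET.ElementTree(urlset).write(path, encoding="utf-8", xml_declaration=True)

    # Placeholder URLs:
    # write_sitemap([("https://www.example.com/", 1.0),
    #                ("https://www.example.com/deep-page", 0.5)])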
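
The robots.txt sketch referenced above: a minimal check, using Python’s built-in parser, of whether a given user agent is allowed to fetch a URL. The domain, path and user-agent name are placeholders. The point is that a Disallow only blocks crawling; it does not remove an already-indexed URL, which is what noindex is for.

    from urllib.robotparser import RobotFileParser

    def is_crawl_allowed(robots_url, user_agent, page_url):
        """Fetch and parse robots.txt, then report whether the user agent may crawl the page."""
        rp = RobotFileParser()
        rp.set_url(robots_url)
        rp.read()  # downloads and parses the live robots.txt
        return rp.can_fetch(user_agent, page_url)

    # Placeholder values:
    # print(is_crawl_allowed("https://www.example.com/robots.txt",
    #                        "Googlebot",
    #                        "https://www.example.com/private/page.html"))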
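
And the URL-cleanup sketch referenced above: a minimal example of stripping session IDs and tracking parameters with the standard library. The parameter names in the strip list are only examples of the sort of thing a site might append; substitute whatever yours actually uses.

    from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

    # Example parameter names only; adjust to match your site's URLs.
    STRIP_PARAMS = {"sessionid", "sid", "utm_source", "utm_medium", "utm_campaign"}

    def clean_url(url):
        """Return the URL with session and tracking parameters removed."""
        scheme, netloc, path, query, fragment = urlsplit(url)
        kept = [(k, v) for k, v in parse_qsl(query, keep_blank_values=True)
                if k.lower() not in STRIP_PARAMS]
        return urlunsplit((scheme, netloc, path, urlencode(kept), fragment))

    # clean_url("https://www.example.com/widgets?color=blue&sessionid=abc123&utm_source=feed")
    # -> "https://www.example.com/widgets?color=blue"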

Stephan Spencer, [email protected], is co-author of The Art of SEO and founder of Netconcepts (acquired by Covario).