Chicago SES – Day 2 – Duplicate Site Issues

I came to this session to hear about splog and what Google and others were going to do about splog prevention (apparently nothing yet). I was also interested in how feeds affect organic listings, as I’ve personally experienced some problems. What I got instead was a bunch of people developing business sites that needed redesign and/or cleanup – certainly necessary, but not as interesting to me.

Jon Glick

Why is duplicate content a problem?

Google, Yahoo, Open Directory Project…

Confusing the Bot: Dynamic URLs

Confusing the Bot: 2 URLs

Don’t confuse the spider – choose one canonical domain and link all internal pages to it

301 redirects, your hero…
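A minimal sketch of the canonical-domain 301 redirect, as an Apache .htaccess rule (the domain is a placeholder and the exact rules depend on your server setup):

```apache
# Hypothetical .htaccess sketch: permanently redirect the bare domain
# to the canonical www host so spiders only ever see one version.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```

The R=301 flag is what tells search engines the move is permanent, so link credit consolidates on the canonical host.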

Yahoo! is transparent about whether your site is banned – check it out.

Shari Thurow, Grandtastic Designs

What is duplicate content?

The definition is unclear.

Search Engines do not want duplicate or near-duplicate content in their indices.

Duplicate content filters:
– content properties
– linkage properties
– content evolution
– host name resolution
– shingle comparison
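
The shingle-comparison filter mentioned above can be sketched roughly like this: split each page into overlapping word n-grams ("shingles") and measure how much the two sets overlap. This is an illustrative sketch, not any engine's actual implementation; the shingle width of 4 is an arbitrary choice.

```python
def shingles(text, w=4):
    """Break text into the set of overlapping w-word shingles."""
    words = text.lower().split()
    return {" ".join(words[i:i + w]) for i in range(max(1, len(words) - w + 1))}

def resemblance(a, b, w=4):
    """Jaccard similarity of two pages' shingle sets: 1.0 means
    identical shingle sets, 0.0 means no shingles in common."""
    sa, sb = shingles(a, w), shingles(b, w)
    return len(sa & sb) / len(sa | sb)
```

Pages whose resemblance exceeds some threshold would be treated as near-duplicates and filtered from the index.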

Example: 3 web pages, 3 unique URLs – robots.txt can exclude the duplicate content, or a robots meta tag can do the same thing
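A sketch of the robots.txt approach, assuming the duplicates live under hypothetical paths like a print-friendly directory:

```
# robots.txt sketch – keep crawlers out of the duplicate versions
# (the paths here are made-up examples)
User-agent: *
Disallow: /print/
Disallow: /sessionid/
```

The per-page equivalent is a robots meta tag such as `<meta name="robots" content="noindex,follow">` in the head of each duplicate page.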

Duplicate content is often copyright infringement.
