Time for the 2012 SEOmoz Annual Industry Survey!

SEOmoz’s industry survey collects a lot of data about us fabulous search marketers. The infographic below contains data from last year’s survey, in which over 10,000 people participated. You can be a part of 2012’s survey and help SEOmoz collect data that will be useful to our whole industry. Take the SEOmoz 2012 Industry [...]

Follow SEJ on Twitter @sejournal



Posted in News

Let’s Be Negative: An Efficient Approach to Negatives…

…That Will Save You (lots of) Money and (even more) Time! The topic of negative keywords is one of those things you learn in SEM 101. It’s certainly one of the first things we discuss with our new hires, and it’s something they claim to understand rather well. So, with such a topic you might [...]




Posted in PPC

StatCounter Data: Google Chrome Gaining Market Share from Internet Explorer

According to the Irish analytics firm StatCounter, Google’s Chrome browser recently became the world’s most popular PC browser for one day. The StatCounter data, which is based on visitor information from 3 million websites and 15 billion page views each month, indicates that on March 18th Chrome reached 32.71% usage. Internet Explorer was a close [...]




Posted in News

RankFixer.com Enters the SEO Tools Market

Personally, I have been in the SEO game for 8 years. In my short time in this field I have come across very few people that inspire my SEO theory and challenge the way I practice. A little under 6 months ago I met Jeff at a Seattle SEO Network event. Quickly I understood that [...]




Posted in tools

Please Help: Take + Share the 2012 SEOmoz Industry Survey

Posted by randfish

Buenos días, marketers!

Today's an exciting day! We're thrilled to announce the launch of the 2012 SEOmoz Industry Survey. It's been 2 long years since our last survey, which had more than 10,000 respondents from around the world and produced some of the most detailed public information ever assembled about the growing fields of SEO, social media, content, and organic/inbound marketing.

Take the 2012 SEOmoz Industry Survey

This year's survey is projected to take about 20 minutes to complete (I took it twice and the first time took me ~18 minutes), and it's slightly more detailed than 2010's. We know this means a little extra work on your part, but we hope it will be worth it as we make the data available to all. The survey's available now in SurveyMonkey and will run for 5 weeks to collect data (but early participation's greatly appreciated):

SEOmoz Industry Survey 2012

The following sections contain the 54 total questions (49 if you're not an agency/consultant):

  • Your Work in the Industry
  • Questions for Consultants, Freelancers, and Agencies
  • Learning and Improving Internet Marketing Skills
  • Internet and Inbound Marketing Scope and Process
  • Inbound Marketing Tools and Tactics
  • SEO Tools and Tactics
  • Social Media Tools and Tactics
  • Predictions/Opinions for Internet/Inbound Marketing 

As an added incentive, we are offering the following excellent prizes to some lucky participants:

  • Grand Prize (1 winner): a 16GB WiFi iPad 3
  • First Prize (3 winners): a $75 ThinkGeek gift certificate each
  • Second Prize (10 winners): a $25 gift certificate to the SEOmoz Zazzle Store each

To see the full sweepstakes terms and rules, check out our sweepstakes rules page. The winners will be announced by June 4th.

We'd also like to thank our partners who are helping to encourage marketers around the world to participate in the survey, including Outspoken Media, Search Engine Land, Distilled, Hubspot, Search Engine Journal, Techipedia, AimClear, Blueglass, Marketing Pilgrim and Search Engine Watch.

Thanks to Supporting Organizations

If you're able, a tweet, Google+ share, Facebook share, or blog post of your own would be incredibly appreciated. It's our goal to reach as many professionals as possible and share something truly remarkable from the data.

Take the 2012 SEOmoz Industry Survey

Thanks for your help and participation!

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Adobe Launches Public Beta of Photoshop CS6

During the beta period, Adobe will preview the app and its many new features as it readies a fresh update of the rest of the suite, as well as its Creative Cloud, for debut.



Logic, Meet Google – Crawling to De-index

Posted by Dr. Pete

Since the Panda update, more and more people are trying to control their Google index and prune out low-quality pages. I’m a firm believer in aggressively managing your own index, but it’s not always easy, and I’m seeing a couple of common mistakes pop up. One mistake is thinking that to de-index a page, you should block the crawl paths. Makes sense, right? If you don’t want a page indexed, why would you want it crawled? Unfortunately, while it sounds logical, it’s also completely wrong. Let’s look at an example…

Scenario: Product Reviews

Let’s pretend we have a decent-sized e-commerce site with 1,000 unique product pages. Those pages look something like this:

1000 product pages (diagram)

Each product page has its own URL, of course, and those URLs are structured as follows:

  • http://www.example.com/product/1

  • http://www.example.com/product/2

  • http://www.example.com/product/3

  • …

  • http://www.example.com/product/1000

Now let’s say that each of these product pages links to a review page for that product:

Product pages linking to review pages

These review pages also have their own, unique URLs (tied to the product ID), like so:

  • http://www.example.com/review/1

  • http://www.example.com/review/2

  • http://www.example.com/review/3

  • …

  • http://www.example.com/review/1000

Unfortunately, we’ve just spun out 1,000 duplicate pages, as every review page is really only a form and has no unique content. Those review pages have no search value and are just diluting our index. So, we decide it’s time to take action…

The “Fix”, Part 1

We want these pages gone, so we decide to use the META NOINDEX (Meta Robots) tag. Since we really, really want the pages out completely, we also decide to nofollow the review links. Our first attempt at a fix ends up looking something like this:

Product pages with blocked links and NOINDEX'ed review pages
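As a concrete sketch (using the placeholder example.com URLs from above), the markup for this first attempt might look something like this:

```html
<!-- On each review page: tell crawlers not to index it -->
<meta name="robots" content="noindex">

<!-- On each product page: the nofollow'ed link that cuts the crawl path -->
<a href="http://www.example.com/review/1" rel="nofollow">Write a review</a>
```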

On the surface, it makes sense. Here’s the problem, though – those red arrows are now cut paths, potentially blocking the spiders. If the spiders never go back to the review pages, they’ll never read the NOINDEX and they won’t de-index the pages. Best case, it’ll take a lot longer (and de-indexation already takes too long on large sites).

The Fix, Part 2

Instead, let’s leave the path open (let the link be followed). That way, crawlers will continue to visit the pages, and the duplicate review URLs should gradually disappear:

Product pages with followed links
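In markup terms, the review page keeps its NOINDEX, but the link on the product page goes back to a plain, followed link (again using the example.com placeholders):

```html
<!-- On each review page: keep the NOINDEX in place -->
<meta name="robots" content="noindex">

<!-- On each product page: a plain, followed link, so crawlers can
     still reach the review page and read the NOINDEX -->
<a href="http://www.example.com/review/1">Write a review</a>
```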

Keep in mind, this process can still take a while (weeks, in most cases). Monitor your index (with the “site:” operator) daily – you’re looking for a gradual decrease over time. If that’s happening, you’re in good shape. Pro tip: Don’t take any single day’s “site:” count too seriously – it can be unreliable from time to time. Look at the trend over time.
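To make the "trend, not single days" point concrete, here's a minimal sketch: assume you've recorded a hand-collected series of daily "site:" counts (the numbers below are made-up illustration data, not real Google output), and compare window averages rather than individual days:

```python
def is_trending_down(daily_counts, window=7):
    """Compare the average of the first and last `window` days.

    Single-day "site:" counts are noisy, so we smooth with a window
    average instead of comparing individual days.
    """
    if len(daily_counts) < 2 * window:
        raise ValueError("need at least two full windows of data")
    first = sum(daily_counts[:window]) / window
    last = sum(daily_counts[-window:]) / window
    return last < first

# Example: counts drift from ~2,000 indexed pages toward ~1,000 over three weeks.
counts = [2000, 1990, 2005, 1970, 1940, 1950, 1900,
          1850, 1820, 1800, 1750, 1700, 1680, 1650,
          1500, 1400, 1300, 1250, 1150, 1100, 1050]
print(is_trending_down(counts))  # prints True
```

The one-week window is an arbitrary choice here; the point is simply that a smoothed comparison won't be fooled by a single noisy day's count.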

New vs. Existing Sites

I think it’s important to note that this problem only applies to existing sites, where the duplicate URLs have already been indexed. If you’re launching a new site, then putting nofollows on the review links is perfectly reasonable. You may also want to put the nofollows in place down the road, after the bad URLs have been de-indexed. The key is not to do it right away – give the crawlers time to do their job.

301, Rel-canonical, etc.

Although my example used nofollow and META NOINDEX, it applies to any method of blocking an internal link (including outright removal) and any page-based or header-based indexation cue. That includes 301-redirects and canonical tags (rel-canonical). To process those signals, Google has to crawl the pages – if you cut the path before Google can re-crawl, then those signals are never going to do their job.
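For example, a rel-canonical tag on one of the duplicate review pages is just another signal sitting in the page's HTML, waiting to be crawled:

```html
<!-- Points the duplicate at the preferred URL, but only takes effect
     once Google actually re-crawls the page carrying this tag -->
<link rel="canonical" href="http://www.example.com/product/1">
```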

Don’t Get Ahead of Yourself

It’s natural to want to solve problems quickly (especially when you’re facing lost traffic and lost revenue), and indexation issues can be very frustrating, but plan well and give the process time. When you block crawl paths before de-indexation signals are processed or try to throw everything but the kitchen sink at a problem (NOINDEX + 301 + canonical + ?), you often create more problems than you solve. Pick the best tool for the job, and give it time to work.

Update: A couple of commenters pointed out that you can use XML sitemaps to encourage Google to recrawl pages with no internal links. That's a good point and one I honestly forgot to mention. While internal links are still more powerful, an XML sitemap with the nofollow'ed (or removed) URLs can help speed the process. This is especially effective when it's not possible to put the URLs back in place (a total redesign, for example).
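Such a sitemap is nothing exotic; a minimal one listing the orphaned review URLs (again, the example.com placeholders from above) would look like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- Review URLs with no remaining internal links: listing them here
       invites Google to re-crawl them and process the NOINDEX -->
  <url><loc>http://www.example.com/review/1</loc></url>
  <url><loc>http://www.example.com/review/2</loc></url>
</urlset>
```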




The 3-3-3 Online Marketing Investment Model

A few weeks ago I was thinking about how companies seem to haphazardly invest in various aspects of online marketing. Some throw their entire budget at SEO, leaving no room for PPC. Other businesses put so much money in PPC that they leave little room for genuine SEO growth. While Herman Cain’s bold 9-9-9 tax [...]




Posted in Search Marketing