094: Large Site, In-House Technical SEO w/Jamie Alberico


In this episode!

  • How Jamie decides what products to index (on a 4+ million product site)
  • Metrics the technical SEO team at Arrow.com cares about
  • Optimizing crawl efficiency and click depth on a large site
  • How to run internal crawls on a 10m+ page site
  • Using custom XML sitemaps
  • How to find and fix redirect chains
  • YOUR questions from Twitter answered
  • And tons more…

Listen Now!

BuzzSumo – One of my favorite tools for coming up with content ideas, finding people who share content in an industry, and tons more (like alerts to keep an eye on your competitor’s links). In this episode, I share a story of how BuzzSumo had exactly the tool a friend of mine was looking for (and couldn’t find anywhere). Listen to the show for a special code to get 30% off BuzzSumo for 3 months.

Related Episodes You Might Like

Show Agenda and Timestamps

  • Show Introduction [0:21]
  • Jamie’s Introduction [0:57]
  • Jamie’s background and how she became a technical SEO [1:10]
    • How did she learn technical SEO? [2:56]
    • Tools and Sites Jamie used to learn technical SEO [3:39]
  • What does Arrow sell? [4:20]
    • Are the sales high volume or low volume? [5:37]
    • How many products does Arrow sell? [6:03]
    • What team does Jamie work with at Arrow? [6:26]
  • Working in-house [7:30]
    • Twitter question: “How do you prioritize your discoveries of issues on the technical SEO spectrum?” [7:34]
    • How does Jamie balance the different needs and make priority decisions [9:11]
    • How do you handle internal communication, task delegation and project management while working with different teams? [10:21]
    • How does Jamie streamline communication? [11:26]
    • Twitter question: Do you work with optimizing for keywords (e.g. keyword optimization and rank tracking) and, if so, how do you hand off and manage research and recommendations with content/marketing teams? [12:53]
    • What metrics is Jamie, as an in-house technical SEO, held responsible for? [13:56]
    • Twitter Question: Always curious about how other technical folks estimate impacts of technical changes such as improving crawl efficiency, indexing, new canonical tags etc. Do you predict revenue impacts or only technical metrics? Do you use educated guesses vs. statistical method? [17:43]
    • Good statistical models require good, scrubbed data. Why is this important and how do you get good scrubbed data? [23:20]
    • Tips for communicating with developers. [24:35]
  • Site Architecture [27:05]
    • With so many products and categories, what are some things that Jamie does to ensure easy crawling and low crawl depths? [27:21]
    • Within the site, how does Jamie bring products higher in the click path? [30:43]
    • Does Jamie strategically control the number of pages indexed for such a large site? [32:47]
      • How does Jamie decide which products to noindex and which to index? [33:38]
      • What is a critical marker? [34:08]
  • Sponsor Break with discount code! [34:34] 
  • Twitter Question: How do you prove (with data) that a site architecture needs to be reorganized? [37:01]
  • Does Jamie worry about the accessibility and indexing of the faceted nav? [38:16]
    • Is the decision to use parameters versus creating a unique URL the same as deciding what gets indexed? [39:17]
  • Crawling [41:33]
    • Twitter Question: How do you manage huge (10m+ URL) crawls? Do you use a monster of a machine, go virtual via AWS or similar, use a bunch of Deep Crawl credits, or batch by directory levels? [41:36]
    • How do you structure your crawls on large websites (category level vs product level vs blog level, for example) & how often do you schedule them? Are there any custom extractions you always use to keep an eye on specific issues? [43:00]
  • “How to Train Your Bot” presentation [44:49]
    • Redirect chains: how many redirects does Googlebot follow? [45:05]
    • How does Jamie find redirect chains? [47:20]
  • Indexation [48:39]
    • Twitter Question: How do you decide which SERPs of a site with keyword search and filter (e.g. classified site) should be indexed? All? Exclude some with noindex? Or just don’t link them anywhere?
      Also: Should all pages you want to be indexed be linked somewhere? (e.g. SERPs from keyword search) [48:46]
    • What tool do you use to check that the pages of a huge site are actually indexed? [50:33]
  • Log File Analysis [51:33]
    • What tools does Jamie use for log file analysis? [51:38]
    • What is Jamie looking for when she analyzes log files, and what actions does she take in response? [53:19]
    • Is there a certain frequency of crawling that Jamie looks for? [55:00]
  • Mobile [55:51]
    • What steps has Arrow taken to ensure you are ready for the mobile-first index? [56:00]
    • How can people [56:36]
  • Angular and JavaScript [58:43]
    • Angular Universal [58:48]
    • Sitecore Headless [1:00:24]
    • As SEOs, how do you ensure search engines can access and index JavaScript/Angular content? [1:01:07]
  • SEO Horror Stories [1:04:16]
    • Twitter Question: I am kind of in the mood for some SEO horror stories. Could you please share one of your worst SEO nightmare moments experienced while working on an e-commerce site with millions of multilingual pages (if any)? [1:04:19]
  • Jamie’s speaking [1:06:49]
    • Jamie’s tweet regarding speaking availability [1:06:56]
  • Where to find Jamie online and in the real world [1:10:33]

Tools Mentioned

Articles, Resources, and Links Mentioned

Find Jamie Online

The post 094: Large Site, In-House Technical SEO w/Jamie Alberico appeared first on Evolving SEO.