Is your content not getting the hits you hoped for, even though it’s great content?
If you want your site (and your business) to go to the next level, you need to level-up your SEO. This is where technical SEO comes in.
But wait, you just got a grasp on regular SEO. If you’re like many non-SEO savvy business owners, the prospect of advanced SEO is like nuclear physics. Possible, maybe, but practically on another planet.
Take a deep breath. We’re here to help. Here, we have a complete guide to technical SEO for the non-technical SEO user. We’ll teach you what it is, why you need it, and a few items to incorporate into your SEO practices, all explained for the tech-averse and SEO-phobic.
What is Technical SEO?
First, the basics: what is technical SEO?
To put it simply, technical SEO is a form of SEO focused on improving the technical elements of your site so that you can improve your search engine rankings.
Wait…What’s SEO Again?
For the SEO-averse and those who are new to the class, let’s recap what SEO is.
Search engine optimization, or SEO, is the art and science of getting your website ranked on search engines like Google and Bing. Considering that Google alone receives 3.5 billion searches per day, that opens your business to a huge amount of potential traffic.
Search engines aren’t magic–they’re businesses. Their business is providing answers to customer questions. To provide the best possible answers to a customer’s question, they have to know all of the available options out there.
SEO is your way of showing a search engine the answers that you provide and what questions are relevant to those answers. It is NOT to be confused with search engine marketing (SEM)–SEO is free and is entirely focused on organic results.
Technical vs. Regular: What’s the Difference?
Regular SEO is what you probably already use.
It involves the use of keywords, internal and external links, and other elements to show search engines that you’re providing in-depth content and are an expert in your area.
Technical SEO is a form of on-page SEO, which is SEO conducted directly on your site pages. It goes a little deeper than regular SEO.
The basis of all SEO is crawling and indexing, which is how search engines establish site rankings. Technical SEO is “technical” because it doesn’t have anything to do with your site content or site promotion. Instead, it’s directly concerned with the crawling and indexing process, working through your site’s infrastructure.
Technical SEO Terms to Know
Because technical SEO is, well, technical, there is some jargon that you’ll encounter. We promise not to overburden you, but there are a few basic jargon terms that will be helpful to understand before we go any further.
The first is crawling. Crawling, completed by a web crawler, is the process that search engines use to collect information from billions of websites to establish and update their search index.
The search index is how search engines organize information. You see, search engines don’t rifle through the entire web every time a user searches. That would be inefficient. Instead, search engines look at the query and refer to the search index, like the index at the back of a textbook.
The index contains entries for every word seen on every webpage indexed by a search engine, in order of relevance. Using the index, search engines determine what sites are most relevant and display them in order of relevance.
The best-ranked pages are hub pages or expert pages. However, because a web crawler is a program, not a person, you have to provide it with certain clues to know how to index you. That’s where technical SEO comes in.
Why You Need Technical SEO
Other forms of SEO can certainly help your site, but technical SEO has the unique advantage of directly appealing to crawling and indexing. It’s also unique in that it directly deals with the structure of your site.
Which means that it also helps make your site stronger on a structural level.
Site Structure Stands Out
Regular SEO deals with content. Technical SEO is about how you organize that content.
We’re talking about your site structure, which plays a critical role in how your site is indexed. Think of your site as a house, with rooms and hallways. If you’re having a party, it’s more enjoyable if people can easily flow from one room to the next without getting confused.
Crawling is the same way.
Web crawlers submit information about how your site is structured to the search index. Remember, search engines want to provide the best possible experience to their customers, which means that they want to send their customers to sites that won’t frustrate them.
In other words, the better your site is structured, the better your ranking will be.
Need for Site Speed
A complicated, cluttered site structure also means that your site takes longer to load.
And in the year 2019, with users accustomed to instant gratification, slow loading isn’t going to cut it. In fact, 58% of users will leave a page that takes longer than three seconds to load.
The problem? Most sites take eight to eleven seconds to load.
This means that if you aren’t optimizing for page speed, you’re getting left in the dust. Hint: if you’re not doing technical SEO, your old SEO tactics aren’t doing anything to address your need for speed.
Best Practices in Technical SEO
You know what technical SEO is and you know why you can’t afford to go without it. Buckle in, because you’re ready to learn some technical SEO to bring your site into the 21st century.
Remember: a lot of techniques here aren’t as crazy-technical as they look. Most of them are pretty straightforward. You just have to know where to start.
Specify Your Preferred Domain
Let’s start with the basics: your domain name.
Fun fact: your domain name helps web crawlers make heads or tails of your site. That’s because web crawlers don’t read a webpage like a human does.
By default, any site can be accessed whether or not there’s www in front of the domain. This is fine for users, who often won’t go to the trouble of typing the www anyway.
It’s not fine for web crawlers or search engines. This is because in the eyes of a search engine, a site with www and a site without www are two different sites, even if the domain name is otherwise identical. For all they know, it may actually be two different sites.
It’s nitpicky, but search engines have to know where to direct users. If they don’t know who you are, they won’t know how to index you, which means that you lose valuable ranking power.
Here’s the good news: you can tell them your preferred name when you set up your site.
There is no SEO advantage to having www or not, just personal preference.
Register with Google Search Console
To tell search engines your preferred domain name, you’ll have to register with Google Search Console and any other search engine you want to index you. We’re going to cover Google here.
Fortunately, the process is simple. All you have to do is register with Google Search Console and verify all variations of your site name, such as http://example.com, https://example.com, http://www.example.com, and https://www.example.com.
For clarity: if you have HTTPS enabled, you do need to verify the HTTP and HTTPS versions of your site name, both with and without www.
From there, you can indicate your preferred site name in Site Settings. Don’t forget to do the same thing in your CMS!
Review Your Sitemap
Have you ever tried navigating an unfamiliar road without a map, GPS, or Siri?
Exactly. Given that your business is on the line, why would you ask web crawlers to do that?
If you have a sitemap, now is the time to review it. If you don’t, surprise: it’s time to make one.
A sitemap is like a road map for web crawlers. It allows you to directly provide information about your site pages and files and how they connect. More specifically, it allows you to tell crawlers which pages are most important on your site.
However, for your sitemap to be useful, it has to be clean, clear, concise, and up-to-date. Also, it has to be registered with Google Search Console–web crawlers won’t know it exists unless you provide it to them.
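To give you a sense of what this looks like, here’s a minimal sketch of an XML sitemap for a hypothetical site at example.com (the URLs, dates, and priority values are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- One <url> entry per page you want crawlers to know about -->
  <url>
    <loc>https://example.com/</loc>
    <lastmod>2019-06-01</lastmod>
    <priority>1.0</priority>
  </url>
  <url>
    <loc>https://example.com/about/</loc>
    <lastmod>2019-05-15</lastmod>
    <priority>0.8</priority>
  </url>
</urlset>
```

Each entry points to one page: the lastmod date tells crawlers when it last changed, and the priority value hints at which pages matter most relative to the others on your site.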
Boost Your Crawl Budget
Ready to start crawling? If so, you have to crawl the right way. For that, you’re going to want to improve your crawl budget.
Crawl budget refers to the total number of URLs that search engines crawl during a given period. It’s not a ranking factor, but it does determine how often your most important pages are crawled.
This is important because search engines have to be able to crawl efficiently. They have to make sure they can crawl your site effectively without degrading the experience of its users. That balance is your crawl rate limit–the maximum number of simultaneous connections a crawler will open on your site.
In plain English, search engines want you to be a good neighbor. The harder your site is to crawl, the more server resources it takes to crawl it–resources that could have been directed toward pages with greater value. So, the easier your site is to crawl in a given period of time, the more a search engine will reward you.
Lose Your Duplicates
A good sitemap won’t get you very far if you have duplicate content. Remember, search engines have to know what to index. If you have duplicate content, they won’t know what page ought to be indexed.
This will leave you with ranking authority split between two identical pages. You’ll also waste your crawl budget for no good reason.
If there are any duplicates you can shed, now is the time to delete them. The good news is that it’s relatively easy to puzzle out duplicate content. Site audits can usually flag this information for you.
Alternately, any pages with duplicate titles and meta descriptions are most likely duplicate content. If they’re not duplicate content, you need to get more creative with your page titles and meta descriptions, because web crawlers will assume they’re duplicates.
If you have similar content spread across multiple pages, one way to resolve this problem is to use the canonical link element.
Often called the “canonical link”, this element allows savvy website owners like yourself to tell search engines what the canonical URL is (a.k.a. the preferred version of a web page). This tells search engines to ignore other similar URLs and focus on the canonical URL for ranking purposes, rather than forcing search engines to puzzle out duplicate content.
When choosing your canonical URL, pick the most important one. This should also be the page with the most traffic and linking power, but if they’re completely equal, flip a coin.
To make a page canonical, all you need to do is add a rel="canonical" link from the non-canonical page to the canonical one. The non-canonical URL will link to the canonical URL in the <head> section, like this:
<link rel="canonical" href="https://example.com/wordpress/seo-plugin/" />
This is a kind of soft-redirect that functionally merges the duplicate pages into one. Once you do this, your multiple duplicate URLs count as one URL in the eyes of the search engine.
Ask for a Re-Crawl
This is more of an end-of-process step, but it’s still important to keep in mind.
Once you’ve gone through all your technical SEO, you’ve fixed quite a few issues on your site. If you did your job right, then this should mean a concurrent boost in your ranking.
However, you want to make sure that Google notices the changes you’ve made. Good news: you can explicitly tell Google to re-crawl your site.
To do this, all you need to do is go to Google Search Console, go to Crawl and select Fetch as Google. Then, enter the URL that you want to be recrawled. If you want your homepage recrawled, leave the field blank. This tells Google to immediately re-crawl your provided URL with your changes taken into account.
Get Your Data Structured
Structured data, that is. We promise this is not nearly as complicated or scary as it sounds.
Structured data is a form of microdata following specific rules. In plain English, it’s code that you can add to your webpages (remember, web crawlers read the code of your website). Structured data helps tell web crawlers the context of your site–in a language that they understand.
Basically, it tells search engines what your site means.
Take your contact information at the bottom of a webpage, for example. Human visitors can figure out that it’s your contact information when they see it. Web crawlers, however, have to work a little harder to figure out what it means.
In case you hadn’t figured it out yet, you don’t want web crawlers to have to spend too long figuring out your site.
Structured data is like slapping a label in block letters on that data so that web crawlers can immediately know that it’s contact information.
Better still, it doesn’t matter what search engine indexes your site–all of them can understand structured data in the same way.
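To make that concrete, here’s a hedged sketch of what structured data for contact information might look like, using the schema.org LocalBusiness type (the business name, phone number, and address are made-up placeholders):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "Example Bakery",
  "telephone": "+1-555-0123",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "123 Main Street",
    "addressLocality": "Springfield",
    "postalCode": "12345"
  }
}
</script>
```

Because this lives inside a script tag, human visitors never see it–only web crawlers read it, and the labels tell them exactly what each piece of data means.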
The most straightforward way to do this is through Google’s Structured Data Markup Helper. Make sure the webpage tab is selected and paste the URL you want to use into the provided field. If you just have HTML, you can paste it into the HTML box instead.
Google will then provide you with ten categories. Pick the ones that apply to your page. Then click “Start Tagging”.
On the next page, you’ll see two panes. Your content is on the left and your schema markup is on the right. To mark something, highlight it in the left pane and choose the relevant tag from the menu that appears.
Once you’ve tagged everything you want to tag, click “Create HTML” and choose JSON-LD, Google’s preferred form of structured data. The tool will generate lines of code. You’re going to have to copy and paste this code directly into your source code. Or, rather, you’re going to delete the code for your chosen page and paste this code in its place.
Now you have code that web crawlers know how to read. The other good news? You won’t have to update your structured data once it’s added to your source code.
Make It Mobile (Friendly)
In the year 2019, a mobile-friendly website is not optional.
However, simply having a website is not the same thing as having a mobile-friendly website.
Think about the last time you went on a site on your phone and that site wasn’t optimized for mobile. Trust us, you knew that it wasn’t–you had to spend a frustrating amount of time zooming in and out and struggling to adapt buttons to finger-scrolling.
If you’re subjecting your customers to that kind of treatment, they’re going to leave your site with a sour taste in their mouths.
To be clear: you should NOT have two different sites. Your site can work for mobile and desktop alike, but you have to optimize it for that task.
A popular way to do this is through responsive web design, which uses flexible grids, layouts, and CSS media queries to adapt your site to the viewing environment. In other words, if the site recognizes that the viewer is using, say, a small vertical screen, it will rearrange the site content and adapt the layout accordingly.
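To illustrate, here’s a simple sketch of the kind of media query a web designer might use (the .sidebar class and the 600px breakpoint are invented for this example):

```html
<!-- Tell mobile browsers to use the device width, not a zoomed-out desktop view -->
<meta name="viewport" content="width=device-width, initial-scale=1">
<style>
  /* Default (desktop) layout: sidebar sits beside the main content */
  .sidebar { float: right; width: 30%; }

  /* On screens narrower than 600px, stack the sidebar below the content */
  @media (max-width: 600px) {
    .sidebar { float: none; width: 100%; }
  }
</style>
```

The same page serves both audiences; the stylesheet simply reshuffles the layout when the screen is small.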
Unfortunately, unlike most of our other technical SEO tips, responsive web design does require a web designer to code correctly. But trust us: you’ll be thrilled that you invested, and your customers will thank you for doing it.
Last (but certainly not least) we invite you to consider accelerated mobile pages (AMP).
The lovechild of an open-source collaboration between Google and Twitter, AMP is a project to optimize mobile web pages. Basically, you’re taking your mobile pages and giving them a dose of rocket fuel.
It works by stripping your content and code down to the bones, leaving bare text, images, and video. Script, comments, and forms are functionally disabled. This way, your viewers only get the content they want in a fraction of the time.
This dramatically reduced load time means that pages are more likely to be read and shared by site visitors. Plus, Google will sometimes highlight AMP pages, giving you a nice search boost.
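For the curious, here’s a stripped-down sketch of an AMP page’s skeleton (the URL is a placeholder, and the required amp-boilerplate style block is omitted for brevity–the AMP documentation has the full version):

```html
<!doctype html>
<html ⚡ lang="en">
<head>
  <meta charset="utf-8">
  <!-- The AMP runtime: the only JavaScript an AMP page loads -->
  <script async src="https://cdn.ampproject.org/v0.js"></script>
  <!-- Every AMP page links back to its regular (canonical) version -->
  <link rel="canonical" href="https://example.com/my-article/">
  <meta name="viewport" content="width=device-width,minimum-scale=1">
</head>
<body>
  <h1>My Article</h1>
  <!-- Images use the <amp-img> component instead of a plain <img> tag -->
  <amp-img src="photo.jpg" width="600" height="400"></amp-img>
</body>
</html>
```

Notice what’s missing: no custom scripts, no heavy styling–just the content, which is exactly why these pages load so fast.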
Ready to Conquer Technical SEO?
Is your head spinning yet?
Hey, we get it. SEO is a whirlwind on its own. Technical SEO is an undertaking, and doing both of them properly requires expertise and an eye for detail.
That’s where we come in.