Fix the Technical SEO Problems That Google Hates (And Boost Rankings Fast)
Technical SEO skills are in high demand
Salary potential:
US: $70,000-$120,000+ annually
India: ₹6-18 lakhs per year (experienced professionals: ₹25+ lakhs)
Freelance consultants: $100-$200/hour or $3,000-$10,000 per project
Benefits:
Less competition (most focus on content/link building)
More job security (harder to replace with AI tools)
Higher earning potential (20-30% more than general SEO professionals)
Benefits:
Bridge gap between development and marketing
Create solutions considering both performance and search visibility
Command higher rates for specialized problem-solving
Critical for sites with:
1,000+ pages
Multiple language versions
E-commerce sites (700+ products)
Sites built on frameworks (React, Angular, Next.js)
Technical issues compound exponentially at scale
Single issues can impact thousands of pages simultaneously
Benefits:
Ability to take on larger, more lucrative enterprise clients
Solve complex problems competitors cannot
Justify higher retainer fees
Expand service offerings beyond basic SEO
Potential to increase average client value from $1,500 to $4,000+ monthly
Don't need to master technical SEO, but should:
Understand core concepts
Identify potential technical issues
Communicate effectively with technical SEO professionals
Recognize when to invest in technical SEO
Optimizing technical website elements so search engines can:
Find pages (discover them)
Read pages (crawl them)
Store pages (index them)
Understand what pages are about (rank them appropriately)
HTML, CSS, and JavaScript
Messy or JavaScript-heavy code can hinder Google's ability to read content
Where website lives online
Slow or unstable servers impede crawling and frustrate users
How pages are organized and linked
Includes navigational menus and URL structures
XML sitemap (lists important URLs)
Robots.txt (tells Google which pages to crawl/ignore)
Helps Google discover and index pages faster
Enables monitoring of indexing status
Example: New website content gets indexed within days instead of weeks
List of important URLs you want in search results
Typically in XML format
Essential elements for each URL:
<loc>: The URL itself (e.g., https://example.com/)
<lastmod>: Last modification date
<priority>: Importance rating (1.0 = highest, 0.8 = still important)
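Taken together, these elements form a minimal sitemap entry; a sketch following the standard sitemap protocol (the URL and date are placeholders):
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/</loc>
    <lastmod>2024-01-15</lastmod>
    <priority>1.0</priority>
  </url>
</urlset>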
Helps search engines discover deep pages
Speeds up indexing of new content
Optimizes crawl budget
Essential for dynamic content (product variations, etc.)
WordPress: Yoast SEO plugin
Go to Settings → Site Features
Enable XML sitemaps
Available at yourdomain.com/sitemap.xml
Other platforms (Shopify, Wix) generate automatically
For custom websites
Free for up to 500 URLs
Process:
Launch Screaming Frog
Enter website URL
Start crawling
Navigate to "Sitemaps → XML sitemap"
Select "Generate XML Sitemaps"
Enable "lastmod", "priority", and "change frequency"
Consider including images for Google Images indexing
Export file
XML-Sitemaps.com for small websites
Enable "page last modification time" and "Page Priority"
List important URLs in text file/spreadsheet
Use ChatGPT/Gemini to format according to protocol
Download and upload to website root directory
Include the Right Pages
Homepage, key categories, products, blog posts, landing pages
Exclude admin pages, search results, cart/checkout, URLs blocked by robots.txt
Avoid tags/author archives (unless valuable) to prevent duplicate content
Follow Technical Guidelines
Keep under 50MB and 50,000 URLs (if larger, split into multiple sitemaps referenced by a sitemap index file, sketched after this list)
Use absolute URLs (include https and domain name)
Only use canonical URLs
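A sitemap index file is part of the same sitemap protocol; a minimal sketch with placeholder filenames:
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>https://example.com/sitemap-products.xml</loc>
    <lastmod>2024-01-15</lastmod>
  </sitemap>
  <sitemap>
    <loc>https://example.com/sitemap-blog.xml</loc>
  </sitemap>
</sitemapindex>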
Keep Updated
Set automatic updates for CMS sites
Regularly update manual sitemaps
Check for errors in Google Search Console
Log into Google Search Console
Select website property
Navigate to "Sitemaps" in left menu
Enter sitemap URL
Click "Submit"
Monitor status for errors and indexing issues
Controls which bots can access which parts of your website
Located at "your-website.com/robots.txt"
Does not affect real users, only bots
Placed at root of website
User-agent: Specifies which bots the rules apply to (User-agent: * targets all bots)
Disallow: Tells bots what not to access
Allow: Creates exceptions to Disallow rules
Blocks dynamic search results (/search)
Creates specific exceptions (/search/about)
Blocks URL parameters (/?)
Allows specific parameters (/?hl=)
Points to sitemap location
Has specific rules for different bots (e.g., Googlebot)
Blocks private sections (/messages)
Allows public sections (/safetycheck)
Lists multiple sitemaps
Extensive list of disallowed paths
Blocks cart, checkout, and account pages
Allows promotional pages and public wish lists
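Combining these patterns, a hedged sketch of an e-commerce robots.txt (the domain and paths are illustrative, not a template to copy blindly):
User-agent: *
Disallow: /cart/
Disallow: /checkout/
Disallow: /account/
Allow: /account/public-wishlist/
Sitemap: https://example.com/sitemap.xml
Sitemap: https://example.com/sitemap-products.xml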
Robots.txt blocks crawling but not indexing
URLs may still appear in search results if linked elsewhere
Use "noindex" meta tag to remove URLs from search results
Specific rules override general ones
Allow /search + Disallow /search/about blocks the about page, allows other searches
Disallow /search + Allow /search/about allows the about page, blocks other searches
Write only one Disallow per line
Robots.txt is case-sensitive (/Product ≠ /product)
Minimal example that allows all bots and points to the sitemap:
User-agent: *
Allow: /
Sitemap: https://mywebsite.com/sitemap.xml
Basic access test: Visit yourdomain.com/robots.txt in browser
Check for errors in Google Search Console:
Settings > robots.txt > open report
Analyze blocked URLs in Screaming Frog:
Overview → Response Codes → Blocked by Robots.txt
Use online testers like robotstxt.com/tester
Google can rank duplicate or filtered versions of your content
Duplicate content confuses search engines
Results in diluted rankings
Can lead to wrong URL versions being ranked
URL Parameters
Example: running-shoes?sort=price vs. running-shoes?sort=rating
Session IDs
Example: running-shoes?sessionid=123
Printer-Friendly Versions
Example: /print/running-shoes
URL Capitalization
Example: /shoes vs. /Shoes
Tells Google which version is the "master" copy
Consolidates ranking signals to preferred URL
Simple HTML code: <link rel="canonical" href="https://shoes.com/running-shoes" />
Added to <head> section of page
Signals to Google: "This is the original page; index this one and ignore duplicates"
Google compares URL used to reach page vs. URL in canonical tag
Consolidates ranking signals to canonical URL
Original page includes canonical tag pointing to itself
Prevents accidental duplication from tracking parameters, session IDs, etc.
Example: <link rel="canonical" href="https://shoes.com/adidas-ultraboost" />
Self-referencing tag handles duplicates automatically
Used when content is shared across multiple domains
Partner sites include canonical tag pointing to original source
Example: <link rel="canonical" href="https://shoes.com/top-10-best-selling-shoes" />
Protection against content theft (canonical gets copied too)
Page A → Page B, and Page B → Page C
Search engines may ignore chained redirects
Solution: Point directly to final canonical URL
Pages without canonical tags risk being treated as duplicates
Add self-referencing canonical to all original pages
Pointing to wrong URLs (e.g., 404 pages)
Use Screaming Frog to audit canonical implementation
Adding canonical tag multiple times on one page
Multiple different canonical tags on same page
Adding noindex alongside canonical tags
Using relative URLs instead of absolute URLs
Unlinked canonical targets
Placing canonical outside <head> section
E-commerce: Product variants (size/color options)
Blogs: Printer-friendly or AMP versions
News Sites: Syndicated articles
Google Search Console:
Check "Indexing > Pages" section
Look for "Duplicate without user-selected canonical" warnings
Search for parameterized URLs:
Use inurl:utm_source= or similar to find indexed duplicates
Screaming Frog:
Check "Canonicals" section for issues
Definition: Any link from one page to another on same website
Example narrative:
User reads article about making perfect espresso
Follows internal link to best espresso machines
Follows another link to specific product
Completes purchase
Reduce bounce rates
Increase engagement time
Establish topical authority
Show relationships between pages
Find related pages using: site:yourdomain.com "topic"
Distribute backlink authority
Pages with many backlinks pass ranking power to other pages
Example: Article with 100 backlinks shares authority to linked comparison pages
Run Screaming Frog crawl and analysis
Look for common problems:
Pages with no internal links pointing to them
Users and search engines struggle to find them
To identify:
Check "Issues > Orphan URLs" under sitemap
Verify why no other pages link to them
Also check "URLs not in sitemaps"
Links pointing to pages that no longer exist
Negative impact:
Poor user experience
Wasted crawl budget
To find:
Go to "Overview > Client Error (4XX)"
Look for 404 Not Found errors
Links going through multiple redirects
Problems:
Slows page load times
Dilutes link equity
To find:
Go to "Overview > Internal redirect chain and redirect loop"
Update links to point directly to final URLs
Go to "Visualization > Force-Directed Crawl Diagram"
Shows graphical representation of internal linking structure
Link from High-Authority Pages
Pages with many high-quality backlinks can pass more ranking power
Add links from these pages to important pages, high-converting landing pages, etc.
Add Contextual Links Naturally
Link from informational → commercial → transactional content
Use descriptive anchor text relevant to linked page
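As a hedged illustration, a contextual link with descriptive anchor text might look like this (the URL and copy are placeholders):
<p>Once you have dialed in your grind size, see our
  <a href="/best-espresso-machines">comparison of the best espresso machines</a>
  for recommendations at every budget.</p>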
Create Breadcrumb Navigation
Show users their path through site
Example: Home > Espresso Machines > Breville Barista Express Impress
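A hedged HTML sketch of that breadcrumb trail (URLs are placeholders; many sites also pair this with BreadcrumbList structured data):
<nav aria-label="Breadcrumb">
  <a href="/">Home</a> &gt;
  <a href="/espresso-machines">Espresso Machines</a> &gt;
  <span>Breville Barista Express Impress</span>
</nav>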
Update Old Content Regularly
Add links to new posts from older, relevant content
Research shows users trust URLs more than meta descriptions or titles
URLs remain consistent when titles/descriptions change
URLs appear prominently in:
Search results
Social media shares
Forum posts
Analytics tracking
Include relevant keywords that describe content
Avoid keyword stuffing
Good: /coffee-brewing-guide
Bad: /coffee-brewing-tips-guide-tutorial-how-to-brew-coffee
Shorter URLs are easier to read, share, and remember
Good: /french-press-coffee (19 characters)
Bad: /how-to-make-the-perfect-cup-of-french-press-coffee-a-complete-detailed-guide (72 characters)
Top-ranking pages average <60 characters
Check "URL > Over 115 Characters" in Screaming Frog
Google officially recommends hyphens
Good: /cold-brew-coffee-recipe
Bad: /cold_brew_coffee_recipe or /cold brew coffee recipe
Avoid underscores, spaces, special characters
Check for issues in Screaming Frog under "Non-ASCII Characters," "Underscores," "Contains space"
Prevents duplicate content issues and user confusion
Good: /coffee-grinders
Bad: /Coffee-Grinders
URLs are case-sensitive on most servers
Check "Uppercase" under URL in Screaming Frog
URL structure should reflect content organization
Examples:
Main category: /coffee-makers
Subcategory: /coffee-makers/french-press
Product page: /coffee-makers/french-press/bodum-chambord
Benefits:
Helps search engines understand content relationships
Creates clear breadcrumb navigation
Parameters create messy, unfriendly URLs
Bad: /products.php?category=3&product=47&sessionid=123
Good: /coffee-makers/french-press
Use URL rewriting features in CMS platforms
Check for dynamic URLs in Screaming Frog under "Non-ASCII Characters," "Parameters," "Contains space"
Set up 301 redirects from old URLs to new ones
Without proper redirects:
Lose SEO value
Users hit 404 errors
Search engines may index both versions
External links become worthless
Avoid redirect chains
Check for "redirect chains" in Screaming Frog
Crawl site with Screaming Frog
Check for issues under Overview > URL:
Non-ASCII Characters
Underscores
Uppercase
Contains Space
Parameters
Over 115 Characters
Prioritize fixes for high-traffic/high-value pages
Implement 301 redirects for each URL change
Modern sites use JavaScript frameworks (React, Angular, etc.)
Issue: Google may not see JavaScript-generated content
Real-world example: Walmart lost indexing of product descriptions
Solution: Implementing server-side rendering increased organic traffic revenue by 40%
Crawl Stage:
Downloads initial HTML file
Equivalent to "View Source" in browser
JavaScript content not visible yet
Render Stage:
Uses headless Chrome to execute JavaScript
Builds Document Object Model (DOM)
Equivalent to "Elements" tab in DevTools
Index Stage:
Stores rendered version in index
Critical limitation: ~5-second time limit for JavaScript rendering
Content loaded after this window may be missed
JavaScript runs in user's browser after page loads
Frameworks: Standard React, Vue, Angular without special configurations
Pros:
Fast navigation between pages (Single Page Applications)
Rich, interactive user experiences
Cons:
Googlebot might miss critical content
Slower initial load times
JavaScript runs on server; fully rendered HTML sent to browser
Frameworks: Next.js, Nuxt.js, Angular Universal
Pros:
Googlebot sees complete page immediately
Faster initial load times
Cons:
Higher server costs
Requires more technical expertise
Combines best of both approaches:
SSR for critical SEO content
CSR for interactive elements
Issue: API calls taking too long exceed Googlebot's rendering window
Solution:
Pre-fetch API data during server-side rendering
Implement API response caching
Prioritize critical API calls
Issue: Large JavaScript files delay execution
Solution:
Implement code splitting to load only necessary JavaScript
Issue: Content loaded on scroll may not be seen by Googlebot
Solution:
Make critical content visible immediately
Use proper <noscript> fallbacks
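A hedged sketch of a <noscript> fallback for content that normally loads via JavaScript (the markup and copy are illustrative):
<div id="reviews-app"><!-- reviews rendered client-side by JavaScript --></div>
<noscript>
  <h2>Customer Reviews</h2>
  <p>Rated 4.8 out of 5 by 127 customers.</p>
</noscript>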
Issue: Robots.txt blocking JavaScript files
Solution:
Allow Googlebot to access JavaScript files
Visit test page
Right-click > "View Page Source" (initial HTML)
Search for critical content
Right-click > "Inspect" (rendered DOM)
Search for same content
If content in "Inspect" but not "View Source" = potential SEO issue
Log into GSC
Enter URL in inspection tool
Click "View Tested Page"
Check "Screenshot" and "More Info" tabs
Verify critical content appears in rendered HTML
Enable JS rendering:
Configuration > Crawl config > Spider > Crawl (select JS)
Under Rendering > Choose JS rendering
Extraction > select "store HTML" and "Store Rendered HTML"
Compare rendered vs. non-rendered content:
View source > visible content > Show differences
Use WP Rocket plugin with JavaScript optimization features
Implement Next.js (React) or Nuxt.js (Vue) for SSR
Start with SSR for most important templates
Perform code splitting and JS bundle optimization
Prioritize rendering critical content first
Use dynamic rendering services like Prerender.io
Prioritize static pre-rendering for key landing pages
Add complete <noscript> versions of critical content
Optimize JavaScript execution order
International sites show wrong versions to users:
UK customers seeing US prices
Japanese visitors seeing English content
Thai customers bouncing due to language barriers
Search engines confused by apparent duplicate content
HTML elements that tell Google which language/country a page targets
Prevent duplicate content issues across regional versions
Ensure right page reaches right audience
Basic format: <link rel="alternate" hreflang="en-US" href="https://example.com/us/" />
Language codes:
2-letter ISO 639-1 codes (e.g., en, ja)
Always lowercase
Country codes:
2-letter ISO 3166-1 Alpha-2 codes (e.g., US, TH)
Always uppercase
Examples:
hreflang="en"
- English content for all regions
hreflang="en-US"
- English for US specifically
hreflang="fr-FR"
- French for France
hreflang="x-default"
- Default when no specific match exists
Formatting rules:
Language code in lowercase
Country code in uppercase
Hyphen as separator (not underscore)
Language first, then country
Add to <head> section of each page
Example:
<head>
  <link rel="alternate" hreflang="en-US" href="https://example.com/us/" />
  <link rel="alternate" hreflang="en-GB" href="https://example.com/uk/" />
  <link rel="alternate" hreflang="fr-FR" href="https://example.com/fr/" />
  <link rel="alternate" hreflang="x-default" href="https://example.com/us/" />
</head>
Critical requirement: Every page must include complete set of hreflang tags pointing to all alternate versions, including itself
For larger sites (1000s of pages)
Adds hreflang information to XML sitemap
Example:
<url>
  <loc>https://example.com/us/page</loc>
  <xhtml:link rel="alternate" hreflang="en-US" href="https://example.com/us/page" />
  <xhtml:link rel="alternate" hreflang="en-GB" href="https://example.com/uk/page" />
  <xhtml:link rel="alternate" hreflang="fr-FR" href="https://example.com/fr/page" />
</url>
Advantage: Maintains hreflang in single file vs. updating individual pages
For PDFs, images, or other non-HTML content
Example:
HTTP/1.1 200 OK
Content-Type: image/jpeg
Link: <https://example.com/us/image.jpg>; rel="alternate"; hreflang="en-US",
      <https://example.com/uk/image.jpg>; rel="alternate"; hreflang="en-GB"
Requires server-side configuration
Use Screaming Frog:
Configuration > Spider > Crawl (ensure "Crawl Hreflang" is checked)
Configuration > Crawl Analysis (ensure "Hreflang" is selected)
Navigate to "Hreflang" tab under "Overview"
Pages that should have hreflang tags but don't
Page A links to Page B, but Page B doesn't link back
Solution: Add complete set of hreflang tags to every version
Different language codes used on reciprocal links
Example: US page refers to UK as "en-GB" but UK page refers to US as "en" instead of "en-US"
Hreflang pointing to URL that doesn't match canonical URL
Ensure hreflang URLs match canonical URLs exactly
Hreflang tags pointing to pages blocked from indexing
Remove noindex or don't include in hreflang set
Invalid codes or formatting errors
Always use standard ISO codes with proper formatting
Duplicate hreflang declarations for same language/region
Every page should include self-referencing hreflang tag
Canonical tag doesn't match URL in hreflang tag
No default version specified when country/language doesn't match
Hreflang tags placed in <body> instead of <head> section
Google Search Console:
Check "Indexing > Pages" section
Look for "Duplicate without user-selected canonical" warnings
Google Search:
Use inurl:utm_source= or similar to find indexed duplicates
Code (typically JSON-LD inside a <script> tag) that translates website information for search engines
Uses Schema.org vocabulary (created by Google, Bing, Yahoo, Yandex)
Tells search engines specific meaning of content (e.g., "this number is a product price")
Creates rich snippets (enhanced search results with visual elements)
Captures more SERP screen space
Displays valuable information (prices, ratings, availability)
Increases CTR by up to 30% even from lower positions
Essential for voice search and AI search engines
Product Schema: Shows price, availability, ratings
Recipe Schema: Displays cooking time, ingredients, calories
Local Business Schema: Shows address, phone number, hours
Article Schema: Highlights author, publish date, featured image
FAQ Schema: Shows expandable Q&As directly in search results
Review Schema: Displays star ratings and review counts
Event Schema: Shows dates, locations, ticket availability
Google recommends JSON-LD format
Added to webpage's <head> section
Example for product:
<script type="application/ld+json">
{
  "@context": "https://schema.org/",
  "@type": "Product",
  "name": "Ethiopian Yirgacheffe Coffee Beans",
  "image": "https://example.com/coffee-beans.jpg",
  "description": "Single-origin Ethiopian Yirgacheffe with notes of blueberry and dark chocolate.",
  "brand": {
    "@type": "Brand",
    "name": "Mountain Top Coffee"
  },
  "offers": {
    "@type": "Offer",
    "url": "https://example.com/coffee-beans",
    "priceCurrency": "USD",
    "price": "14.99",
    "availability": "https://schema.org/InStock"
  },
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.8",
    "reviewCount": "127"
  }
}
</script>
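For comparison, a hedged sketch of FAQ schema for a page whose Q&As are visible on the page (questions and answers are placeholders):
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "How should I store coffee beans?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Keep beans in an airtight container away from light, heat, and moisture."
      }
    },
    {
      "@type": "Question",
      "name": "How long do roasted beans stay fresh?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Most roasted beans taste best within two to four weeks of the roast date."
      }
    }
  ]
}
</script>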
Run Screaming Frog crawl and analysis
Check "Structured Data" tab in overview
Look at entries under "missing"
Focus on high-traffic pages and positions 3-10
Prioritize product, local business, and FAQ pages
Option A: Use AI (Gemini, ChatGPT)
Provide specific details and documentation
Example prompt:
Generate JSON-LD structured data for a coffee product with these details:
- Product name: Ethiopian Yirgacheffe Coffee Beans
- Price: $14.99
- Availability: In Stock
- Rating: 4.8 stars from 127 reviews
- Description: Single-origin Ethiopian Yirgacheffe with notes of blueberry and dark chocolate
- Brand: Mountain Top Coffee
- Image URL: https://example.com/coffee-beans.jpg
I have also attached the documentation for this schema type, so create the structured data with all the required properties. Suggest any optional properties I'm missing that could increase my CTR in search results.
Option B: Use Schema Generator Tools
Technical SEO Schema Markup Generator
Select schema type
Fill in fields
Copy generated code
Option C: Use WordPress Plugins
Yoast SEO
Enable structured data modules
Fill out forms
Plugin handles code generation and insertion
Copy JSON-LD code to <head> section of webpage
Use Google's Rich Results Test (https://search.google.com/test/rich-results)
Enter URL or paste code
Review and fix any errors
For multiple URLs, use Screaming Frog validation
Using schema for content that doesn't exist on page
Example: Adding Recipe schema to cooking blog with no actual recipe
Correct approach: Use Article schema for general cooking content
Every schema type has specific required properties
Example: Product schema requires at least ONE of:
review
aggregateRating
offers
Check Google's requirements for each type
Schema data must match what users see on page
Example: Showing 4.2 stars on page but 5.0 stars in schema
Google strictly enforces this match
Missing commas, brackets, quotes can break entire schema
Example: Missing comma after line or unclosed brackets
Always test implementation with Rich Results Test