Check if your robots.txt file exists, is properly configured, and doesn't accidentally block important pages from search engines.
Robots.txt is a text file placed in your website's root directory that tells search engine crawlers which pages or sections of your site they can and cannot access. It's the first file crawlers check when visiting your site.
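One quick way to confirm the file actually exists is to request it from the site root before reviewing its rules. Below is a minimal sketch using Python's standard library; the example.com URL is a placeholder for your own domain.

import urllib.request
import urllib.error

# robots.txt must live at the site root; example.com is a placeholder domain
ROBOTS_URL = "https://example.com/robots.txt"

try:
    with urllib.request.urlopen(ROBOTS_URL, timeout=10) as response:
        print(f"Found robots.txt (HTTP {response.status})")
        print(response.read().decode("utf-8", errors="replace"))
except urllib.error.HTTPError as exc:
    # A 404 means there is no robots.txt; crawlers then assume everything is allowed
    print(f"No robots.txt found (HTTP {exc.code})")
except urllib.error.URLError as exc:
    print(f"Could not reach the site: {exc.reason}")

The directives a robots.txt file can contain are: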
User-agent: Specifies which crawler the rules apply to
Disallow: Paths that crawlers should not access
Allow: Paths that crawlers may access even when a broader Disallow rule would block them
Sitemap: Location of your XML sitemap
Crawl-delay: Seconds to wait between requests
User-agent: *
Disallow: /admin/
Disallow: /private/
Allow: /public/

User-agent: Googlebot
Crawl-delay: 10

Sitemap: https://example.com/sitemap.xml
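Once the file is in place, you can verify that important pages are not accidentally blocked. Here is a minimal sketch using Python's standard urllib.robotparser against the example rules above; the domain and page URLs are placeholders you would swap for your own.

from urllib import robotparser

# Placeholder values: use your own domain and the pages that must stay crawlable
ROBOTS_URL = "https://example.com/robots.txt"
IMPORTANT_PAGES = [
    "https://example.com/",
    "https://example.com/public/landing",
    "https://example.com/admin/dashboard",  # blocked by Disallow: /admin/ in the example above
]

parser = robotparser.RobotFileParser(ROBOTS_URL)
parser.read()  # fetches and parses the live robots.txt

for url in IMPORTANT_PAGES:
    # "*" tests the generic rules; pass a crawler name such as "Googlebot"
    # to test the group that applies specifically to that crawler
    allowed = parser.can_fetch("*", url)
    status = "allowed" if allowed else "BLOCKED"
    print(f"{status:<8} {url}")

If a page you expect search engines to index shows up as BLOCKED, adjust the Disallow and Allow rules and re-run the check.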