Create and customize robots.txt files to control search engine crawling behavior on your website.
Sitemap URL: the location of your XML sitemap (leave empty if you don't have one).
Blocked directories: directories you want to exclude from search engine crawling.
Blocked file types: specific file types to keep out of the crawl.
Blocked URL patterns: URLs matching specific patterns.
Crawler-specific rules: different rules for individual search engine crawlers.
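For example, file-type and pattern rules can be sketched like this. Note that the `*` and `$` wildcards are extensions honored by major crawlers such as Googlebot and Bingbot; they are not part of the original robots.txt standard, so smaller crawlers may ignore them. The paths and parameter name here are illustrative only:

```
User-agent: *
# Block a file type site-wide ($ anchors the match to the end of the URL)
Disallow: /*.pdf$
# Block URLs matching a pattern, e.g. any URL with a session parameter
Disallow: /*?sessionid=
```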
Place it in the root directory: robots.txt must be accessible at yourdomain.com/robots.txt.
Use it for guidance only: respectful crawlers follow robots.txt, but malicious ones may ignore it.
Include the sitemap location: this helps search engines discover all your pages.
Don't block CSS/JS files: blocking these can prevent proper page rendering in search results.
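If an asset directory must be blocked for other reasons, the Allow directive (supported by Google and Bing, though not in the original standard) can re-open the CSS/JS paths inside it. The directory names here are only an example:

```
User-agent: *
Disallow: /assets/
# Re-allow the rendering resources inside the blocked directory
Allow: /assets/css/
Allow: /assets/js/
```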
Save this content as "robots.txt" in your website's root directory.
# Allow all search engines to crawl the site,
# except the admin and login pages
User-agent: *
Disallow: /admin/
Disallow: /login/

# Sitemap location (helps search engines find all pages)
Sitemap: https://easysmartcalculator.com/sitemap.xml

# Block a specific crawler entirely
User-agent: BadBot
Disallow: /
/admin/ - Administration areas
/login/ - Login pages
/cgi-bin/ - Server scripts
/tmp/ - Temporary files
/private/ - Private content
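To see how a well-behaved crawler interprets rules like the ones above, Python's standard urllib.robotparser module can parse a rule set directly. This is a minimal sketch; the rules mirror the example file for easysmartcalculator.com:

```python
from urllib import robotparser

# Rules matching the example robots.txt above
rules = """\
User-agent: *
Disallow: /admin/
Disallow: /login/

User-agent: BadBot
Disallow: /
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(rules)

# A generic crawler may fetch the homepage but not the admin area;
# BadBot is blocked from the entire site.
print(rp.can_fetch("*", "https://easysmartcalculator.com/"))               # True
print(rp.can_fetch("*", "https://easysmartcalculator.com/admin/settings")) # False
print(rp.can_fetch("BadBot", "https://easysmartcalculator.com/"))          # False
```

In production a crawler would call rp.set_url("https://easysmartcalculator.com/robots.txt") and rp.read() instead of parsing an inline string.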