How to stop search engines from crawling a website using nginx

Stop search engines from crawling a website by using a simple nginx location directive to serve a robots.txt file.

This is a very simple solution: for the /robots.txt request URI, nginx returns the defined response body with a 200 OK status code, regardless of whether a robots.txt file exists on disk.

location = /robots.txt {
    # serve the generated response as plain text
    default_type text/plain;
    # disallow every crawler from the entire site
    return 200 "User-agent: *\nDisallow: /\n";
}
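The location block belongs inside the server block of the affected site. A minimal sketch of that context, assuming a hypothetical example.com virtual host listening on port 80 (adjust listen and server_name to your deployment):

server {
    listen 80;
    server_name example.com;

    # always answer /robots.txt with a generated response,
    # no matter what files exist on disk
    location = /robots.txt {
        default_type text/plain;
        return 200 "User-agent: *\nDisallow: /\n";
    }
}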

You can also use this approach to define a default robots.txt file, serving the generated response only as a fallback.

location = /robots.txt {
    # use the robots.txt file from the document root if it exists,
    # otherwise fall back to the named location below
    try_files /robots.txt @robots.txt;
}

location @robots.txt {
    # generated fallback response
    default_type text/plain;
    return 200 "User-agent: *\nDisallow: /\n";
}

This ensures that the default content is served only when the application does not provide its own robots.txt file.
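Note that try_files resolves /robots.txt against the root (or alias) directive in effect, so the two locations need to sit in a server block whose root points at the directory where the application would place its robots.txt. A minimal sketch, assuming a hypothetical /var/www/example.com document root:

server {
    listen 80;
    server_name example.com;
    root /var/www/example.com;

    # serve /var/www/example.com/robots.txt when present,
    # otherwise fall back to the generated response
    location = /robots.txt {
        try_files /robots.txt @robots.txt;
    }

    location @robots.txt {
        default_type text/plain;
        return 200 "User-agent: *\nDisallow: /\n";
    }
}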