urllib.robotparser — Parser for robots.txt

Source code: Lib/urllib/robotparser.py


This module provides a single class, RobotFileParser, which answers questions about whether or not a particular user agent can fetch a URL on the Web site that published the robots.txt file. For more details on the structure of robots.txt files, see http://www.robotstxt.org/orig.html.

  • class urllib.robotparser.RobotFileParser(url='')
  • This class provides methods to read, parse and answer questions about the robots.txt file at url.

    • set_url(url)

    • Sets the URL referring to a robots.txt file.

    • read()

    • Reads the robots.txt URL and feeds it to the parser.

    • parse(lines)

    • Parses the lines argument.

    • can_fetch(useragent, url)

    • Returns True if the useragent is allowed to fetch the url according to the rules contained in the parsed robots.txt file.

    • mtime()

    • Returns the time the robots.txt file was last fetched. This is useful for long-running web spiders that need to check for new robots.txt files periodically.

    • modified()

    • Sets the time the robots.txt file was last fetched to the current time.

    • crawl_delay(useragent)

    • Returns the value of the Crawl-delay parameter from robots.txt for the useragent in question. If there is no such parameter or it doesn't apply to the useragent specified or the robots.txt entry for this parameter has invalid syntax, return None.

New in version 3.6.

  • request_rate(useragent)
  • Returns the contents of the Request-rate parameter from robots.txt as a named tuple RequestRate(requests, seconds). If there is no such parameter or it doesn't apply to the useragent specified or the robots.txt entry for this parameter has invalid syntax, return None.

New in version 3.6.

  • site_maps()
  • Returns the contents of the Sitemap parameter from robots.txt in the form of a list(). If there is no such parameter or the robots.txt entry for this parameter has invalid syntax, return None.

New in version 3.8.

The following example demonstrates basic use of the RobotFileParser class:

  >>> import urllib.robotparser
  >>> rp = urllib.robotparser.RobotFileParser()
  >>> rp.set_url("http://www.musi-cal.com/robots.txt")
  >>> rp.read()
  >>> rrate = rp.request_rate("*")
  >>> rrate.requests
  3
  >>> rrate.seconds
  20
  >>> rp.crawl_delay("*")
  6
  >>> rp.can_fetch("*", "http://www.musi-cal.com/cgi-bin/search?city=San+Francisco")
  False
  >>> rp.can_fetch("*", "http://www.musi-cal.com/")
  True