urllib.robotparser — Parser for robots.txt

Source code: Lib/urllib/robotparser.py


This module provides a single class, RobotFileParser, which answers questions about whether or not a particular user agent can fetch a URL on the web site that published the robots.txt file. For more details on the structure of robots.txt files, see http://www.robotstxt.org/orig.html.

class urllib.robotparser.RobotFileParser(url='')

This class provides methods to read, parse and answer questions about the robots.txt file at url.

set_url(url)

Sets the URL referring to a robots.txt file.

read()

Reads the robots.txt URL and feeds it to the parser.

parse(lines)

Parses the lines argument.
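Because parse() accepts the rule lines directly, it can be used when the robots.txt content was obtained by other means (from a cache, say) rather than fetched with read(). A minimal sketch with a hand-written rule set; the rules and URLs are illustrative:

>>> import urllib.robotparser
>>> rp = urllib.robotparser.RobotFileParser()
>>> # Feed rules from memory instead of fetching them over the network.
>>> rp.parse([
...     "User-agent: *",
...     "Disallow: /private/",
... ])
>>> rp.can_fetch("*", "http://example.com/private/page.html")
False
>>> rp.can_fetch("*", "http://example.com/index.html")
True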

can_fetch(useragent, url)

Returns True if the useragent is allowed to fetch the url according to the rules contained in the parsed robots.txt file.

mtime()

Returns the time the robots.txt file was last fetched. This is useful for long-running web spiders that need to check for new robots.txt files periodically.

modified()

Sets the time the robots.txt file was last fetched to the current time.
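Together these two methods support the periodic re-check mentioned above; a minimal sketch, assuming an illustrative one-hour refresh threshold:

>>> import time
>>> import urllib.robotparser
>>> rp = urllib.robotparser.RobotFileParser("http://www.musi-cal.com/robots.txt")
>>> rp.read()
>>> rp.modified()                          # record when the rules were fetched
>>> # ... later, inside a long-running spider's main loop ...
>>> if time.time() - rp.mtime() > 3600:    # rules older than one hour?
...     rp.read()                          # fetch and parse them again
...     rp.modified()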

crawl_delay(useragent)

Returns the value of the Crawl-delay parameter from robots.txt for the useragent in question. If there is no such parameter or it doesn’t apply to the useragent specified or the robots.txt entry for this parameter has invalid syntax, return None.

Added in version 3.6.

request_rate(useragent)

Returns the contents of the Request-rate parameter from robots.txt as a named tuple RequestRate(requests, seconds). If there is no such parameter or it doesn’t apply to the useragent specified or the robots.txt entry for this parameter has invalid syntax, return None.

Added in version 3.6.

site_maps()

Returns the contents of the Sitemap parameter from robots.txt in the form of a list(). If there is no such parameter or the robots.txt entry for this parameter has invalid syntax, return None.

Added in version 3.8.
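A lookup might look like the following sketch; the site and the returned list are illustrative:

>>> import urllib.robotparser
>>> rp = urllib.robotparser.RobotFileParser()
>>> rp.set_url("http://www.example.com/robots.txt")
>>> rp.read()
>>> rp.site_maps()    # None when robots.txt lists no Sitemap entries
['http://www.example.com/sitemap.xml']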

The following example demonstrates basic use of the RobotFileParser class:

>>> import urllib.robotparser
>>> rp = urllib.robotparser.RobotFileParser()
>>> rp.set_url("http://www.musi-cal.com/robots.txt")
>>> rp.read()
>>> rrate = rp.request_rate("*")
>>> rrate.requests
3
>>> rrate.seconds
20
>>> rp.crawl_delay("*")
6
>>> rp.can_fetch("*", "http://www.musi-cal.com/cgi-bin/search?city=San+Francisco")
False
>>> rp.can_fetch("*", "http://www.musi-cal.com/")
True
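
Building on the session above, a polite crawler would combine these calls before each request; a minimal sketch, assuming illustrative URLs and a one-second fallback delay:

>>> import time
>>> import urllib.request
>>> delay = rp.crawl_delay("*") or 1       # fall back to 1 second if unset
>>> for url in ["http://www.musi-cal.com/", "http://www.musi-cal.com/events"]:
...     if rp.can_fetch("*", url):         # fetch only what the rules allow
...         page = urllib.request.urlopen(url)
...         time.sleep(delay)              # honour the requested crawl delay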