urllib.parse — Parse URLs into components
Source code: Lib/urllib/parse.py
This module defines a standard interface to break Uniform Resource Locator (URL) strings up into components (addressing scheme, network location, path etc.), to combine the components back into a URL string, and to convert a “relative URL” to an absolute URL given a “base URL.”
The module has been designed to match the Internet RFC on Relative Uniform Resource Locators. It supports the following URL schemes: file, ftp, gopher, hdl, http, https, imap, mailto, mms, news, nntp, prospero, rsync, rtsp, rtspu, sftp, shttp, sip, sips, snews, svn, svn+ssh, telnet, wais, ws, wss.
The urllib.parse module defines functions that fall into two broad categories: URL parsing and URL quoting. These are covered in detail in the following sections.
The URL parsing functions focus on splitting a URL string into its components, or on combining URL components into a URL string.
urllib.parse.urlparse(urlstring, scheme='', allow_fragments=True)
Parse a URL into six components, returning a 6-item named tuple. This corresponds to the general structure of a URL: scheme://netloc/path;parameters?query#fragment. Each tuple item is a string, possibly empty. The components are not broken up into smaller parts (for example, the network location is a single string), and % escapes are not expanded. The delimiters as shown above are not part of the result, except for a leading slash in the path component, which is retained if present. For example:
>>> from urllib.parse import urlparse
>>> o = urlparse('http://www.cwi.nl:80/%7Eguido/Python.html')
>>> o
ParseResult(scheme='http', netloc='www.cwi.nl:80', path='/%7Eguido/Python.html',
params='', query='', fragment='')
>>> o.scheme
'http'
>>> o.port
80
>>> o.geturl()
'http://www.cwi.nl:80/%7Eguido/Python.html'
Following the syntax specifications in RFC 1808, urlparse recognizes a netloc only if it is properly introduced by ‘//’. Otherwise the input is presumed to be a relative URL and thus to start with a path component.
>>> from urllib.parse import urlparse
>>> urlparse('//www.cwi.nl:80/%7Eguido/Python.html')
ParseResult(scheme='', netloc='www.cwi.nl:80', path='/%7Eguido/Python.html',
params='', query='', fragment='')
>>> urlparse('www.cwi.nl/%7Eguido/Python.html')
ParseResult(scheme='', netloc='', path='www.cwi.nl/%7Eguido/Python.html',
params='', query='', fragment='')
>>> urlparse('help/Python.html')
ParseResult(scheme='', netloc='', path='help/Python.html', params='',
query='', fragment='')
The scheme argument gives the default addressing scheme, to be used only if the URL does not specify one. It should be the same type (text or bytes) as urlstring, except that the default value '' is always allowed, and is automatically converted to b'' if appropriate.
If the allow_fragments argument is false, fragment identifiers are not recognized. Instead, they are parsed as part of the path, parameters or query component, and fragment is set to the empty string in the return value.
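As an illustrative sketch of this behavior (the example URL is arbitrary), the fragment stays attached to the path when allow_fragments is false:
>>> from urllib.parse import urlparse
>>> urlparse('http://docs.python.org/library#module-urllib', allow_fragments=False)
ParseResult(scheme='http', netloc='docs.python.org', path='/library#module-urllib',
params='', query='', fragment='')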
The return value is a named tuple, which means that its items can be accessed by index or as named attributes, which are:
| Attribute | Index | Value | Value if not present |
|---|---|---|---|
| scheme | 0 | URL scheme specifier | scheme parameter |
| netloc | 1 | Network location part | empty string |
| path | 2 | Hierarchical path | empty string |
| params | 3 | Parameters for last path element | empty string |
| query | 4 | Query component | empty string |
| fragment | 5 | Fragment identifier | empty string |
| username | | User name | None |
| password | | Password | None |
| hostname | | Host name (lower case) | None |
| port | | Port number as integer, if present | None |
Reading the port attribute will raise a ValueError if an invalid port is specified in the URL. See section Structured Parse Results for more information on the result object.
Unmatched square brackets in the netloc attribute will raise a ValueError.
Characters in the netloc attribute that decompose under NFKC normalization (as used by the IDNA encoding) into any of /, ?, #, @, or : will raise a ValueError. If the URL is decomposed before parsing, no error will be raised.
As is the case with all named tuples, the subclass has a few additional methods and attributes that are particularly useful. One such method is _replace(). The _replace() method will return a new ParseResult object replacing specified fields with new values.
>>> from urllib.parse import urlparse
>>> u = urlparse('//www.cwi.nl:80/%7Eguido/Python.html')
>>> u
ParseResult(scheme='', netloc='www.cwi.nl:80', path='/%7Eguido/Python.html',
params='', query='', fragment='')
>>> u._replace(scheme='http')
ParseResult(scheme='http', netloc='www.cwi.nl:80', path='/%7Eguido/Python.html',
params='', query='', fragment='')
Changed in version 3.2: Added IPv6 URL parsing capabilities.
Changed in version 3.3: The fragment is now parsed for all URL schemes (unless allow_fragments is false), in accordance with RFC 3986. Previously, a whitelist of schemes that support fragments existed.
Changed in version 3.6: Out-of-range port numbers now raise ValueError, instead of returning None.
Changed in version 3.8: Characters that affect netloc parsing under NFKC normalization will now raise ValueError.
urllib.parse.parse_qs(qs, keep_blank_values=False, strict_parsing=False, encoding='utf-8', errors='replace', max_num_fields=None, separator='&')
Parse a query string given as a string argument (data of type application/x-www-form-urlencoded ). Data are returned as a dictionary. The dictionary keys are the unique query variable names and the values are lists of values for each name.
The optional argument keep_blank_values is a flag indicating whether blank values in percent-encoded queries should be treated as blank strings. A true value indicates that blanks should be retained as blank strings. The default false value indicates that blank values are to be ignored and treated as if they were not included.
The optional argument strict_parsing is a flag indicating what to do with parsing errors. If false (the default), errors are silently ignored. If true, errors raise a ValueError exception.
The optional encoding and errors parameters specify how to decode percent-encoded sequences into Unicode characters, as accepted by the bytes.decode() method.
The optional argument max_num_fields is the maximum number of fields to read. If set, a ValueError is raised if there are more than max_num_fields fields read.
The optional argument separator is the symbol to use for separating the query arguments. It defaults to &.
Use the urllib.parse.urlencode() function (with the doseq parameter set to True) to convert such dictionaries into query strings.
Changed in version 3.2: Added encoding and errors parameters.
Changed in version 3.8: Added max_num_fields parameter.
Changed in version 3.9.2: Added separator parameter with the default value of &. Python versions earlier than Python 3.9.2 allowed using both ; and & as query parameter separators. This has been changed to allow only a single separator key, with & as the default separator.
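A brief illustrative session (a sketch with made-up field names) shows the dictionary form of the result and the effect of keep_blank_values:
>>> from urllib.parse import parse_qs
>>> parse_qs('key=value&key=other&empty=')
{'key': ['value', 'other']}
>>> parse_qs('key=value&key=other&empty=', keep_blank_values=True)
{'key': ['value', 'other'], 'empty': ['']}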
urllib.parse.parse_qsl(qs, keep_blank_values=False, strict_parsing=False, encoding='utf-8', errors='replace', max_num_fields=None, separator='&')
Parse a query string given as a string argument (data of type application/x-www-form-urlencoded ). Data are returned as a list of name, value pairs.
The optional argument keep_blank_values is a flag indicating whether blank values in percent-encoded queries should be treated as blank strings. A true value indicates that blanks should be retained as blank strings. The default false value indicates that blank values are to be ignored and treated as if they were not included.
The optional argument strict_parsing is a flag indicating what to do with parsing errors. If false (the default), errors are silently ignored. If true, errors raise a ValueError exception.
The optional encoding and errors parameters specify how to decode percent-encoded sequences into Unicode characters, as accepted by the bytes.decode() method.
The optional argument max_num_fields is the maximum number of fields to read. If set, a ValueError is raised if there are more than max_num_fields fields read.
The optional argument separator is the symbol to use for separating the query arguments. It defaults to &.
Use the urllib.parse.urlencode() function to convert such lists of pairs into query strings.
Changed in version 3.2: Added encoding and errors parameters.
Changed in version 3.8: Added max_num_fields parameter.
Changed in version 3.9.2: Added separator parameter with the default value of &. Python versions earlier than Python 3.9.2 allowed using both ; and & as query parameter separators. This has been changed to allow only a single separator key, with & as the default separator.
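For illustration, a minimal sketch (field names are arbitrary) showing that order and repeated names are preserved in the list form:
>>> from urllib.parse import parse_qsl
>>> parse_qsl('key=value&key=other&flag=1')
[('key', 'value'), ('key', 'other'), ('flag', '1')]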
urllib.parse.urlunparse(parts)
Construct a URL from a tuple as returned by urlparse(). The parts argument can be any six-item iterable. This may result in a slightly different, but equivalent URL, if the URL that was parsed originally had unnecessary delimiters (for example, a ? with an empty query; the RFC states that these are equivalent).
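As a sketch of the round trip (and of the equivalent-but-not-identical result mentioned above), a URL with an empty query loses its trailing '?':
>>> from urllib.parse import urlparse, urlunparse
>>> urlunparse(urlparse('http://www.cwi.nl:80/%7Eguido/Python.html?'))
'http://www.cwi.nl:80/%7Eguido/Python.html'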
urllib.parse.urlsplit(urlstring, scheme='', allow_fragments=True)
This is similar to urlparse(), but does not split the params from the URL. This should generally be used instead of urlparse() if the more recent URL syntax allowing parameters to be applied to each segment of the path portion of the URL (see RFC 2396) is wanted. A separate function is needed to separate the path segments and parameters. This function returns a 5-item named tuple:
(addressing scheme, network location, path, query, fragment identifier).
The return value is a named tuple, and its items can be accessed by index or as named attributes:
| Attribute | Index | Value | Value if not present |
|---|---|---|---|
| scheme | 0 | URL scheme specifier | scheme parameter |
| netloc | 1 | Network location part | empty string |
| path | 2 | Hierarchical path | empty string |
| query | 3 | Query component | empty string |
| fragment | 4 | Fragment identifier | empty string |
| username | | User name | None |
| password | | Password | None |
| hostname | | Host name (lower case) | None |
| port | | Port number as integer, if present | None |
Reading the port attribute will raise a ValueError if an invalid port is specified in the URL. See section Structured Parse Results for more information on the result object.
Unmatched square brackets in the netloc attribute will raise a ValueError.
Characters in the netloc attribute that decompose under NFKC normalization (as used by the IDNA encoding) into any of /, ?, #, @, or : will raise a ValueError. If the URL is decomposed before parsing, no error will be raised.
Following the WHATWG spec that updates RFC 3986, ASCII newline \n, \r and tab \t characters are stripped from the URL.
Changed in version 3.6: Out-of-range port numbers now raise ValueError, instead of returning None.
Changed in version 3.8: Characters that affect netloc parsing under NFKC normalization will now raise ValueError.
Changed in version 3.9.5: ASCII newline and tab characters are stripped from the URL.
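An illustrative sketch (with an invented URL) of the difference from urlparse(): the ;parameters stay attached to the path, while the extra netloc attributes remain available:
>>> from urllib.parse import urlsplit
>>> u = urlsplit('http://user:pass@www.example.com:8080/path;param?q=1#frag')
>>> u
SplitResult(scheme='http', netloc='user:pass@www.example.com:8080',
path='/path;param', query='q=1', fragment='frag')
>>> u.hostname, u.port
('www.example.com', 8080)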
urllib.parse.urlunsplit(parts)
Combine the elements of a tuple as returned by urlsplit() into a complete URL as a string. The parts argument can be any five-item iterable. This may result in a slightly different, but equivalent URL, if the URL that was parsed originally had unnecessary delimiters (for example, a ? with an empty query; the RFC states that these are equivalent).
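A minimal sketch of the same round-trip behavior for urlsplit()/urlunsplit():
>>> from urllib.parse import urlsplit, urlunsplit
>>> urlunsplit(urlsplit('http://www.example.com/path?'))
'http://www.example.com/path'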
urllib.parse.urljoin(base, url, allow_fragments=True)
Construct a full (“absolute”) URL by combining a “base URL” ( base ) with another URL ( url ). Informally, this uses components of the base URL, in particular the addressing scheme, the network location and (part of) the path, to provide missing components in the relative URL. For example:
>>> from urllib.parse import urljoin
>>> urljoin('http://www.cwi.nl/%7Eguido/Python.html', 'FAQ.html')
'http://www.cwi.nl/%7Eguido/FAQ.html'
The allow_fragments argument has the same meaning and default as for urlparse().
Note
If url is an absolute URL (that is, it starts with // or scheme://), the url's hostname and/or scheme will be present in the result. For example:
>>> urljoin('http://www.cwi.nl/%7Eguido/Python.html',
... '//www.python.org/%7Eguido')
'http://www.python.org/%7Eguido'
If you do not want that behavior, preprocess the url with urlsplit() and urlunsplit(), removing possible scheme and netloc parts.
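As a sketch of that preprocessing step (reusing the URLs from the example above), dropping the scheme and netloc before joining keeps the base host:
>>> from urllib.parse import urljoin, urlsplit, urlunsplit
>>> url = '//www.python.org/%7Eguido'
>>> relative = urlunsplit(('', '') + urlsplit(url)[2:])
>>> relative
'/%7Eguido'
>>> urljoin('http://www.cwi.nl/%7Eguido/Python.html', relative)
'http://www.cwi.nl/%7Eguido'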
Changed in version 3.5: Behavior updated to match the semantics defined in RFC 3986.
urllib.parse.urldefrag(url)
If url contains a fragment identifier, return a modified version of url with no fragment identifier, and the fragment identifier as a separate string. If there is no fragment identifier in url, return url unmodified and an empty string.
The return value is a named tuple, and its items can be accessed by index or as named attributes:
| Attribute | Index | Value | Value if not present |
|---|---|---|---|
| url | 0 | URL with no fragment | empty string |
| fragment | 1 | Fragment identifier | empty string |
See section Structured Parse Results for more information on the result object.
Changed in version 3.2: Result is a structured object rather than a simple 2-tuple.
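For illustration, a short sketch (with an arbitrary URL) of both cases:
>>> from urllib.parse import urldefrag
>>> urldefrag('http://www.python.org/index.html#section1')
DefragResult(url='http://www.python.org/index.html', fragment='section1')
>>> urldefrag('http://www.python.org/index.html')
DefragResult(url='http://www.python.org/index.html', fragment='')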
urllib.parse.unwrap(url)
Extract the url from a wrapped URL (that is, a string formatted as <URL:scheme://host/path>, <scheme://host/path>, URL:scheme://host/path or scheme://host/path). If url is not a wrapped URL, it is returned without changes.
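A minimal sketch of the wrapped and unwrapped forms:
>>> from urllib.parse import unwrap
>>> unwrap('<URL:https://www.python.org>')
'https://www.python.org'
>>> unwrap('https://www.python.org')
'https://www.python.org'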
The URL parsing functions were originally designed to operate on character strings only. In practice, it is useful to be able to manipulate properly quoted and encoded URLs as sequences of ASCII bytes. Accordingly, the URL parsing functions in this module all operate on bytes and bytearray objects in addition to str objects.
If str data is passed in, the result will also contain only str data. If bytes or bytearray data is passed in, the result will contain only bytes data.
Attempting to mix str data with bytes or bytearray in a single function call will result in a TypeError being raised, while attempting to pass in non-ASCII byte values will trigger UnicodeDecodeError.
To support easier conversion of result objects between str and bytes, all return values from URL parsing functions provide either an encode() method (when the result contains str data) or a decode() method (when the result contains bytes data). The signatures of these methods match those of the corresponding str and bytes methods (except that the default encoding is 'ascii' rather than 'utf-8'). Each produces a value of a corresponding type that contains either bytes data (for encode() methods) or str data (for decode() methods).
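As an illustrative sketch of these rules, parsing bytes yields a bytes result object whose decode() method produces the str equivalent:
>>> from urllib.parse import urlparse
>>> b = urlparse(b'http://www.python.org/%7Eguido')
>>> b
ParseResultBytes(scheme=b'http', netloc=b'www.python.org',
path=b'/%7Eguido', params=b'', query=b'', fragment=b'')
>>> b.decode()
ParseResult(scheme='http', netloc='www.python.org', path='/%7Eguido',
params='', query='', fragment='')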
Applications that need to operate on potentially improperly quoted URLs that may contain non-ASCII data will need to do their own decoding from bytes to characters before invoking the URL parsing methods.
The behaviour described in this section applies only to the URL parsing functions. The URL quoting functions use their own rules when producing or consuming byte sequences as detailed in the documentation of the individual URL quoting functions.
Changed in version 3.2: URL parsing functions now accept ASCII encoded byte sequences.
The result objects from the urlparse(), urlsplit() and urldefrag() functions are subclasses of the tuple type. These subclasses add the attributes listed in the documentation for those functions, the encoding and decoding support described in the previous section, as well as an additional method:
urllib.parse.SplitResult.geturl()
Return the re-combined version of the original URL as a string. This may differ from the original URL in that the scheme may be normalized to lower case and empty components may be dropped. Specifically, empty parameters, queries, and fragment identifiers will be removed.
For urldefrag() results, only empty fragment identifiers will be removed. For urlsplit() and urlparse() results, all noted changes will be made to the URL returned by this method.
The result of this method remains unchanged if passed back through the original parsing function:
>>> from urllib.parse import urlsplit
>>> url = 'HTTP://www.Python.org/doc/#'
>>> r1 = urlsplit(url)
>>> r1.geturl()
'http://www.Python.org/doc/'
>>> r2 = urlsplit(r1.geturl())
>>> r2.geturl()
'http://www.Python.org/doc/'
The following classes provide the implementations of the structured parse results when operating on str objects:
urllib.parse.DefragResult(url, fragment)
Concrete class for urldefrag() results containing str data. The encode() method returns a DefragResultBytes instance.
New in version 3.2.
urllib.parse.ParseResult(scheme, netloc, path, params, query, fragment)
Concrete class for urlparse() results containing str data. The encode() method returns a ParseResultBytes instance.
urllib.parse.SplitResult(scheme, netloc, path, query, fragment)
Concrete class for urlsplit() results containing str data. The encode() method returns a SplitResultBytes instance.
The following classes provide the implementations of the parse results when operating on bytes or bytearray objects:
urllib.parse.DefragResultBytes(url, fragment)
Concrete class for urldefrag() results containing bytes data. The decode() method returns a DefragResult instance.
New in version 3.2.
urllib.parse.ParseResultBytes(scheme, netloc, path, params, query, fragment)
Concrete class for urlparse() results containing bytes data. The decode() method returns a ParseResult instance.
New in version 3.2.
urllib.parse.SplitResultBytes(scheme, netloc, path, query, fragment)
Concrete class for urlsplit() results containing bytes data. The decode() method returns a SplitResult instance.
New in version 3.2.
The URL quoting functions focus on taking program data and making it safe for use as URL components by quoting special characters and appropriately encoding non-ASCII text. They also support reversing these operations to recreate the original data from the contents of a URL component if that task isn’t already covered by the URL parsing functions above.
urllib.parse.quote(string, safe='/', encoding=None, errors=None)
Replace special characters in string using the %xx escape. Letters, digits, and the characters '_.-~' are never quoted. By default, this function is intended for quoting the path section of a URL. The optional safe parameter specifies additional ASCII characters that should not be quoted; its default value is '/'.
string may be either a str or a bytes object.
Changed in version 3.7: Moved from RFC 2396 to RFC 3986 for quoting URL strings. “~” is now included in the set of unreserved characters.
The optional encoding and errors parameters specify how to deal with non-ASCII characters, as accepted by the str.encode() method. encoding defaults to 'utf-8'. errors defaults to 'strict', meaning unsupported characters raise a UnicodeEncodeError.
encoding and errors must not be supplied if string is a bytes, or a TypeError is raised.
Note that quote(string, safe, encoding, errors) is equivalent to quote_from_bytes(string.encode(encoding, errors), safe).
Example: quote('/El Niño/') yields '/El%20Ni%C3%B1o/'.
urllib.parse.quote_plus(string, safe='', encoding=None, errors=None)
Like quote(), but also replace spaces with plus signs, as required for quoting HTML form values when building up a query string to go into a URL. Plus signs in the original string are escaped unless they are included in safe. It also does not have safe default to '/'.
Example: quote_plus('/El Niño/') yields '%2FEl+Ni%C3%B1o%2F'.
urllib.parse.quote_from_bytes(bytes, safe='/')
Like quote(), but accepts a bytes object rather than a str, and does not perform string-to-bytes encoding.
Example: quote_from_bytes(b'a&\xef') yields 'a%26%EF'.
urllib.parse.unquote(string, encoding='utf-8', errors='replace')
Replace %xx escapes with their single-character equivalent. The optional encoding and errors parameters specify how to decode percent-encoded sequences into Unicode characters, as accepted by the bytes.decode() method.
string may be either a str or a bytes object.
encoding defaults to 'utf-8'. errors defaults to 'replace', meaning invalid sequences are replaced by a placeholder character.
Example: unquote('/El%20Ni%C3%B1o/') yields '/El Niño/'.
Changed in version 3.9: string parameter supports bytes and str objects (previously only str).
urllib.parse.unquote_plus(string, encoding='utf-8', errors='replace')
Like unquote(), but also replace plus signs with spaces, as required for unquoting HTML form values.
string must be a str.
Example: unquote_plus('/El+Ni%C3%B1o/') yields '/El Niño/'.
urllib.parse.unquote_to_bytes(string)
Replace %xx escapes with their single-octet equivalent, and return a bytes object.
string may be either a str or a bytes object.
If it is a str, unescaped non-ASCII characters in string are encoded into UTF-8 bytes.
Example: unquote_to_bytes('a%26%EF') yields b'a&\xef'.
urllib.parse.urlencode(query, doseq=False, safe='', encoding=None, errors=None, quote_via=quote_plus)
Convert a mapping object or a sequence of two-element tuples, which may contain str or bytes objects, to a percent-encoded ASCII text string. If the resultant string is to be used as data for a POST operation with the urlopen() function, then it should be encoded to bytes, otherwise it would result in a TypeError.
The resulting string is a series of key=value pairs separated by '&' characters, where both key and value are quoted using the quote_via function. By default, quote_plus() is used to quote the values, which means spaces are quoted as a '+' character and '/' characters are encoded as %2F, which follows the standard for GET requests (application/x-www-form-urlencoded). An alternate function that can be passed as quote_via is quote(), which will encode spaces as %20 and not encode '/' characters. For maximum control of what is quoted, use quote and specify a value for safe.
When a sequence of two-element tuples is used as the query argument, the first element of each tuple is a key and the second is a value. The value element in itself can be a sequence and in that case, if the optional parameter doseq evaluates to True, individual key=value pairs separated by '&' are generated for each element of the value sequence for the key. The order of parameters in the encoded string will match the order of parameter tuples in the sequence.
The safe, encoding, and errors parameters are passed down to quote_via (the encoding and errors parameters are only passed when a query element is a str).
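For illustration, a short sketch (with invented field names) of the default quote_plus() behavior with doseq, and of passing quote() together with safe for path-like values:
>>> from urllib.parse import urlencode, quote
>>> urlencode({'name': 'El Niño', 'tags': ['a', 'b']}, doseq=True)
'name=El+Ni%C3%B1o&tags=a&tags=b'
>>> urlencode({'path': '/tmp/El Niño'}, safe='/', quote_via=quote)
'path=/tmp/El%20Ni%C3%B1o'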
To reverse this encoding process, parse_qs() and parse_qsl() are provided in this module to parse query strings into Python data structures.
Refer to urllib examples to find out how the urllib.parse.urlencode() method can be used for generating the query string of a URL or data for a POST request.
Changed in version 3.2: query supports bytes and string objects.
New in version 3.5: quote_via parameter.
See also
WHATWG - URL Living standard
Working Group for the URL Standard that defines URLs, domains, IP addresses, the application/x-www-form-urlencoded format, and their API.
RFC 3986 - Uniform Resource Identifiers
This is the current standard (STD66). Any changes to the urllib.parse module should conform to this. Certain deviations could be observed, which are mostly for backward compatibility purposes and for certain de-facto parsing requirements as commonly observed in major browsers.
RFC 2732 - Format for Literal IPv6 Addresses in URL's
This specifies the parsing requirements of IPv6 URLs.
RFC 2396 - Uniform Resource Identifiers (URI): Generic Syntax
Document describing the generic syntactic requirements for both Uniform Resource Names (URNs) and Uniform Resource Locators (URLs).
RFC 2368 - The mailto URL scheme
Parsing requirements for mailto URL schemes.
RFC 1808 - Relative Uniform Resource Locators
This Request For Comments includes the rules for joining an absolute and a relative URL, including a fair number of “Abnormal Examples” which govern the treatment of border cases.
RFC 1738 - Uniform Resource Locators (URL)
This specifies the formal syntax and semantics of absolute URLs.