tabnanny — Detection of ambiguous indentation
Source code: Lib/tabnanny.py
For the time being this module is intended to be called as a script. However, it is possible to import it into an IDE and use the function check() described below.
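A minimal sketch of the script-style use, driving `python -m tabnanny` from a subprocess. The scratch file and its contents are invented for illustration: its two indented lines agree at tab size 8 but not at every tab size, which is exactly the ambiguity the nanny reports.

```python
import os
import subprocess
import sys
import tempfile

# Hypothetical scratch file: line 2 is indented with eight spaces then a
# tab, line 3 with a tab then eight spaces.  They line up at tab size 8
# but not at every tab size.
source = "def f():\n        \tprint(1)\n\t        print(2)\n"

fd, path = tempfile.mkstemp(suffix=".py")
try:
    with os.fdopen(fd, "w") as f:
        f.write(source)
    # Invoke the module as a script, as recommended above.
    result = subprocess.run(
        [sys.executable, "-m", "tabnanny", path],
        capture_output=True,
        text=True,
    )
    print(result.stdout, end="")  # file name, line number, offending line
finally:
    os.remove(path)
```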
Note
The API provided by this module is likely to change in future releases; such changes may not be backward compatible.
tabnanny.check(file_or_dir)
If file_or_dir is a directory and not a symbolic link, then recursively descend the directory tree named by file_or_dir, checking all .py files along the way. If file_or_dir is an ordinary Python source file, it is checked for whitespace related problems. The diagnostic messages are written to standard output using the print() function.
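Since check() reports through print(), its diagnostics can be captured with contextlib.redirect_stdout when the module is used programmatically. A small sketch, with an invented file whose indentation is ambiguous (tab-vs-spaces orderings that only agree at tab size 8):

```python
import io
import os
import tabnanny
import tempfile
from contextlib import redirect_stdout

# Hypothetical file: eight-spaces-then-tab on line 2, tab-then-eight-spaces
# on line 3 -- equal at tab size 8, unequal at (for example) tab size 3.
source = "def f():\n        \tprint(1)\n\t        print(2)\n"

fd, path = tempfile.mkstemp(suffix=".py")
try:
    with os.fdopen(fd, "w") as f:
        f.write(source)
    buf = io.StringIO()
    with redirect_stdout(buf):   # check() writes via print()
        tabnanny.check(path)
    report = buf.getvalue()
    print(report, end="")        # file name, line number, offending line
finally:
    os.remove(path)
```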
tabnanny.verbose
Flag indicating whether to print verbose messages. This is incremented by the -v option if called as a script.
tabnanny.filename_only
Flag indicating whether to print only the filenames of files containing whitespace related problems. This is set to true by the -q option if called as a script.
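One way to see the two flags in action. Both are module-level global state, so a careful caller restores them afterwards; the scratch file is invented, as in the earlier sketches:

```python
import io
import os
import tabnanny
import tempfile
from contextlib import redirect_stdout

# Hypothetical file with ambiguous indentation (agrees only at tab size 8).
source = "def f():\n        \tprint(1)\n\t        print(2)\n"
fd, path = tempfile.mkstemp(suffix=".py")
try:
    with os.fdopen(fd, "w") as f:
        f.write(source)

    # verbose: full diagnostics, including the offending line and message.
    tabnanny.verbose = 1
    buf = io.StringIO()
    with redirect_stdout(buf):
        tabnanny.check(path)
    verbose_report = buf.getvalue()
    print("verbose:", verbose_report)

    # filename_only: just the name of each problematic file.
    tabnanny.verbose = 0
    tabnanny.filename_only = True
    buf = io.StringIO()
    with redirect_stdout(buf):
        tabnanny.check(path)
    quiet_report = buf.getvalue()
    print("quiet:", quiet_report)
finally:
    # Restore the module defaults.
    tabnanny.verbose = 0
    tabnanny.filename_only = False
    os.remove(path)
```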
exception tabnanny.NannyNag
Raised by process_tokens() if detecting an ambiguous indent. Captured and handled in check().
tabnanny.process_tokens(tokens)
This function is used by check() to process tokens generated by the tokenize module.
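A sketch of feeding a token stream to process_tokens() directly and catching NannyNag. The source string is invented; get_lineno() and get_msg() are accessor methods defined on the exception:

```python
import io
import tokenize
import tabnanny

# Invented source: the two indented lines agree at tab size 8 but not at
# every tab size, so the indentation is ambiguous.
source = "def f():\n        \tprint(1)\n\t        print(2)\n"

tokens = tokenize.generate_tokens(io.StringIO(source).readline)
try:
    tabnanny.process_tokens(tokens)
except tabnanny.NannyNag as nag:
    lineno = nag.get_lineno()   # line at which the ambiguity was detected
    message = nag.get_msg()
    print(lineno, message)
```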
See also

Module tokenize
Lexical scanner for Python source code.