Language: Python
Package: requests
Official documentation
Test site: httpbin.org
Purpose: sending HTTP requests
import requests

r = requests.get('https://api.github.com/user', auth=('user', 'pass'))
r.status_code              # 200
r.headers['content-type']  # 'application/json; charset=utf8'
r.encoding                 # 'utf-8'
r.text                     # '{"type":"User"...'
r.json()                   # {'private_gists': 419, 'total_private_repos': 77, ...}
Main methods

All of the functionality is accessed through the methods below; every one of them returns an instance of a Response object.
- requests.request(method, url, **kwargs)
- method
- The HTTP method, e.g. GET, POST, ...
import requests

req = requests.request('GET', 'http://httpbin.org/get')
- url
- The URL of the request
- params
- Query string parameters for the URL
import requests

payload = {'key1': 'value1', 'key2': ['value2', 'value3']}
r = requests.get('http://httpbin.org/get', params=payload)
print(r.url)  # http://httpbin.org/get?key1=value1&key2=value2&key2=value3
- data
- A dictionary, bytes, or file-like object to send in the body
import requests

payload = {'key1': 'value1', 'key2': 'value2'}
r = requests.post('http://httpbin.org/post', data=payload)
print(r.text)
- json
- JSON data to send in the body
import requests

payload = {'some': 'data'}
r = requests.post('http://httpbin.org/post', json=payload)
- headers
- HTTP headers for the request
import requests

url = 'http://httpbin.org/get'
headers = {'user-agent': 'my-app/0.0.1'}
r = requests.get(url, headers=headers)
- cookies
- HTTP cookies for the request
import requests

url = 'http://httpbin.org/cookies'
cookies = dict(cookies_are='working')
r = requests.get(url, cookies=cookies)
r.text  # '{"cookies": {"cookies_are": "working"}}'
- files
- Files to upload
import requests

url = 'http://httpbin.org/post'
files = {'file': open('report.xls', 'rb')}
# files = {'file': ('report.xls', open('report.xls', 'rb'), 'application/vnd.ms-excel', {'Expires': '0'})}
# files = {'file': ('report.csv', 'some,data,to,send\nanother,row,to,send\n')}
r = requests.post(url, files=files)
r.text
- auth
- Authentication credentials
import requests

# the credentials must match the user/passwd embedded in the URL
requests.get('http://httpbin.org/basic-auth/user/passwd', auth=('user', 'passwd'))
requests.get('http://httpbin.org/basic-auth/fake/test', auth=('fake', 'test'))
- requests_ntlm
- NTLM authentication; the credentials are supplied manually
- requests-negotiate-sspi
- Kerberos or NTLM authentication
- Uses the Windows SSPI interface to log in automatically with the current Windows credentials
- Custom authentication: subclass requests.auth.AuthBase and implement the __call__() method
- The auth object is invoked while the request is being set up, so __call__ must do everything needed for the authentication to take effect. Hooks may additionally be registered to provide further functionality.
import requests

class MyAuth(requests.auth.AuthBase):
    def __call__(self, req):
        # Implement my authentication
        return req

url = 'http://httpbin.org/get'
requests.get(url, auth=MyAuth())
- timeout
- How long to wait for a response, in seconds
- Default: None, i.e. wait forever
import requests

r = requests.get('https://github.com', timeout=5)
# the connect timeout and the read timeout (time to the first byte) can be set separately
r = requests.get('https://github.com', timeout=(3.05, 27))
# wait forever
r = requests.get('https://github.com', timeout=None)
- allow_redirects
- Whether to follow redirects (False disables redirect handling)
import requests

r = requests.get('http://httpbin.org/redirect/6', allow_redirects=False)
r.history  # []
- proxies
- Proxy servers; a particular proxy can be assigned to a particular URL
import requests

proxies = {
    'http://10.20.1.128': 'http://10.10.1.10:5323',
    'http': 'http://user:pass@10.10.1.10:3128/',
    'https': 'http://10.10.1.10:1080',
}
requests.get('http://example.org', proxies=proxies)
- verify
- Whether to verify the SSL certificate; default True
import requests

requests.get('https://httpbin.org/', verify=False)
- stream
- Whether to stream the response body
- False: the body is downloaded immediately
- True: the download is deferred, but the connection cannot be released back to the pool until all of the data has been consumed or Response.close has been called. Note: unreleased connections make connection reuse inefficient.
import requests
from contextlib import closing

with closing(requests.get('http://httpbin.org/stream/20', stream=True)) as r:
    chunk_size = 10  # bytes
    for chunk in r.iter_content(chunk_size):
        print(chunk)
- cert
- A local SSL certificate to use; the private key must be unencrypted
import requests

requests.get('https://kennethreitz.com', cert=('/path/server.crt', '/path/key'))
requests.get('https://kennethreitz.com', cert='/path/server.pem')
- requests.head(url, **kwargs)
- HTTP semantics: fetch the headers only
- requests.get(url, params=None, **kwargs)
- HTTP semantics: retrieve data
- requests.post(url, data=None, json=None, **kwargs)
- HTTP semantics: create data
- requests.put(url, data=None, **kwargs)
- HTTP semantics: replace data (create, or update as a whole)
- requests.patch(url, data=None, **kwargs)
- HTTP semantics: update part of the data
- requests.delete(url, **kwargs)
- HTTP semantics: delete data
- requests.options(url, **kwargs)
- HTTP semantics: list the HTTP methods available
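These helpers are thin wrappers around requests.request. As a minimal sketch against httpbin: HEAD fetches only the headers, while OPTIONS reports which methods the endpoint allows (the header values shown are illustrative).

import requests

r = requests.head('http://httpbin.org/get')
r.headers['Content-Type']  # e.g. 'application/json'
r.text                     # '' (a HEAD response carries no body)

r = requests.options('http://httpbin.org/get')
r.headers['Allow']         # e.g. 'HEAD, OPTIONS, GET'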
Exception
import requests
import sys

# url and thing are assumed to be defined
try:
    r = requests.get(url, params={'s': thing})
except requests.RequestException as e:
    print(e)
    sys.exit(1)
import requests
import sys

# url and thing are assumed to be defined
try:
    r = requests.get(url, params={'s': thing})
except requests.Timeout as e:
    print(e)
    sys.exit(1)
except requests.TooManyRedirects as e:
    print(e)
    sys.exit(1)
except requests.HTTPError as e:
    # 404, 503, 500, 403 etc.
    status_code = e.response.status_code
except requests.RequestException as e:
    print(e)
    sys.exit(1)
- requests.RequestException
- Catches any exception raised by requests
- requests.ConnectionError
- A connection error occurred
- requests.HTTPError
- An HTTP error occurred
- requests.URLRequired
- A valid URL must be provided
- requests.TooManyRedirects
- Too many redirects
- requests.ConnectTimeout
- The connection timed out; it is safe to retry
- requests.ReadTimeout
- The server did not send any data in the allotted time
- requests.Timeout
- A request timed out; catches both ConnectTimeout and ReadTimeout
Session
import requests

s = requests.Session()
s.get('http://httpbin.org/get')
s.close()

# or, as a context manager
with requests.Session() as s:
    s.get('http://httpbin.org/get')
- class requests.Session
- Attributes
- auth
- cert
- cookies
- headers
- hooks
- Currently the only event is response: the hooks are invoked to process the response
import requests

def print_url(rsp, **kwargs):
    print(rsp.url)

def print_encoding(rsp, **kwargs):
    print(rsp.encoding)

hooks = dict(response=[print_url, print_encoding])
r = requests.get('http://httpbin.org', hooks=hooks)
# http://httpbin.org
# utf-8
- max_redirects
- Maximum number of redirects to follow; default 30
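A minimal sketch of the limit in action, using httpbin's /redirect endpoint: once the redirect chain exceeds max_redirects, TooManyRedirects is raised.

import requests

s = requests.Session()
s.max_redirects = 3
try:
    s.get('http://httpbin.org/redirect/6')
except requests.TooManyRedirects as e:
    print(e)  # Exceeded 3 redirects.
s.close()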
- params
- proxies
- stream
- verify
- trust_env
- Whether to pick up settings from the system environment
- When False, the following system settings are ignored:
- the system proxies
- authentication from .netrc
- CA bundles defined in REQUESTS_CA_BUNDLE
- CURL_CA_BUNDLE
import requests

session = requests.Session()
session.trust_env = False
response = session.get('http://www.stackoverflow.com')
session.close()
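The attributes above persist across every request made from the same Session; a minimal sketch with httpbin showing headers and cookies being carried over:

import requests

s = requests.Session()
s.headers.update({'x-test': 'true'})  # merged into every request sent from s
s.get('http://httpbin.org/cookies/set/sessioncookie/123456789')
r = s.get('http://httpbin.org/cookies')  # the cookie set above is sent back
print(r.text)  # {"cookies": {"sessioncookie": "123456789"}}
s.close()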
- Methods
- close()
- Make sure the session gets closed
- request(method, url, params=None, data=None, headers=None, cookies=None, files=None, auth=None, timeout=None, allow_redirects=True, proxies=None, hooks=None, stream=None, verify=None, cert=None, json=None)
- head(url, **kwargs)
- get(url, **kwargs)
- post(url, data=None, json=None, **kwargs)
- put(url, data=None, **kwargs)
- patch(url, data=None, **kwargs)
- delete(url, **kwargs)
- options(url, **kwargs)
- get_adapter(url)
- Returns the adapter that will handle the given url
- merge_environment_settings(url, proxies, stream, verify, cert)
- Merges the given settings with the environment settings
- mount(prefix, adapter)
- Registers an adapter for a URL prefix
import requests

s = requests.Session()
http = requests.adapters.HTTPAdapter(max_retries=3)
https = requests.adapters.HTTPAdapter(max_retries=3)
s.mount('http://', http)
s.mount('https://', https)
s.get(url)  # url is assumed to be defined
- prepare_request(request)
- Returns a PreparedRequest that merges the session's state into the given Request
import requests

s = requests.Session()
# url, data, headers, stream, verify, proxies, cert, timeout are assumed to be defined
req = requests.Request('GET', url, data=data, headers=headers)
prepped = s.prepare_request(req)
# do something with prepped.body
# do something with prepped.headers
resp = s.send(prepped, stream=stream, verify=verify, proxies=proxies,
              cert=cert, timeout=timeout)
print(resp.status_code)
s.close()
- rebuild_auth(prepared_request, response)
- After a redirect the original auth may no longer apply, so the prepared request's auth is rebuilt
- rebuild_method(prepared_request, response)
- After a redirect the original HTTP method may no longer apply, so the prepared request's method is rebuilt
- rebuild_proxies(prepared_request, proxies)
- After a redirect the original proxies may no longer apply, so the prepared request's proxies are rebuilt
- resolve_redirects(resp, req, stream=False, timeout=None, verify=True, cert=None, proxies=None, **adapter_kwargs)
- Yields every response produced while following the redirects
import requests

s = requests.Session()
rsp = s.get('http://httpbin.org/redirect/6', allow_redirects=False)
assert rsp.is_redirect
rsps = s.resolve_redirects(rsp, rsp.request)
for rsp in rsps:
    print(rsp.url)
# http://httpbin.org/relative-redirect/5
# http://httpbin.org/relative-redirect/4
# http://httpbin.org/relative-redirect/3
# http://httpbin.org/relative-redirect/2
# http://httpbin.org/relative-redirect/1
# http://httpbin.org/get
s.close()
- send(request, **kwargs)
- Sends a PreparedRequest object
requests.Request
A user-defined Request object, used to produce a PreparedRequest

import requests

req = requests.Request('GET', 'http://httpbin.org/get')
req.prepare()  # <PreparedRequest [GET]>
- class requests.Request(method=None, url=None, headers=None, files=None, data=None, params=None, auth=None, cookies=None, hooks=None, json=None)
- Parameters
- method, url, headers...
- Same as for requests.request
- hooks
- Same as for Session
- Methods
- prepare()
- Returns a PreparedRequest object
- deregister_hook(event, hook)
import requests

def print_url(rsp, **kwargs):
    print(rsp.url)

hooks = dict(response=print_url)
req = requests.Request('GET', 'http://httpbin.org/get', hooks=hooks)
print(req.hooks)
# {'response': [<function print_url at 0x000000000450BD08>]}
req.deregister_hook('response', print_url)  # True
print(req.hooks)
# {'response': []}
- register_hook(event, hook)

import requests

def print_url(rsp, **kwargs):
    print(rsp.url)

def print_encoding(rsp, **kwargs):
    print(rsp.encoding)

req = requests.Request('GET', 'http://httpbin.org')
print(req.hooks)
# {'response': []}
req.register_hook('response', print_url)
req.register_hook('response', print_encoding)
print(req.hooks)
# {'response': [<function print_url at 0x000000000450BD08>, <function print_encoding at 0x000000000450BD90>]}
s = requests.Session()
rsp = s.send(req.prepare())
# http://httpbin.org/
# utf-8
s.close()
requests.Response
The response the server returns for a request

- class requests.Response
- Attributes
- apparent_encoding
- The encoding guessed by running chardet.detect over the content
- rsp.apparent_encoding
- content
- The response body, as bytes
- cookies
- elapsed
- The time elapsed between sending the request and receiving the response
import requests

r = requests.get('http://httpbin.org')
r.elapsed         # datetime.timedelta(0, 0, 687039)
print(r.elapsed)  # 0:00:00.687039
- encoding
- The encoding used to decode text; it can be changed
- Guessed from the headers; when the headers carry no charset, the encoding is assumed to be ISO-8859-1, so if the returned headers do not specify the page's encoding, the decoded text may come out garbled
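A minimal sketch: inspect the guessed encoding, then override it before touching text.

import requests

r = requests.get('http://httpbin.org')
r.encoding                 # e.g. 'utf-8'
r.encoding = 'ISO-8859-1'  # from now on r.text is decoded with this encoding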
- history
- The redirect history
import requests

r = requests.get('http://httpbin.org/redirect/3')
r.history  # [<Response [302]>, <Response [302]>, <Response [302]>]
- is_permanent_redirect
- Whether the URL has permanently moved; such a response will always redirect
- is_redirect
- Whether the response is a redirect; see Session.resolve_redirects
- links
- The parsed link headers of the response
- raw
- The raw response body, as bytes; stream must be set to True
import requests

r = requests.get('https://httpbin.org/get', stream=True)
r.raw.read()
- reason
- The textual HTTP status, e.g. "Not Found" or "OK"
- request
- The PreparedRequest object this response answers
- status_code
- The numeric HTTP status code, e.g. 404 or 200
- text
- The response body decoded to unicode with the guessed encoding
- url
- The final URL of the response
- connection
- The requests.adapters.HTTPAdapter that served the response
- Methods
- close()
- Releases the connection back to the pool; only needed when stream is True, since requests cannot release the connection on its own then.
It should not normally be called.
- iter_content(chunk_size=1, decode_unicode=False)
- Iterates over the response body, yielding chunks of bytes
- chunk_size
- The chunk size, in bytes
- When set to None the behaviour depends on stream:
stream=True: each chunk is as large as whatever has been received so far
stream=False: the entire body arrives as a single chunk
- decode_unicode
- When True, chunks are decoded with the best guess of the encoding
import requests
from contextlib import closing

with closing(requests.get('http://httpbin.org/stream/20', stream=True)) as r:
    chunk_size = 100  # bytes
    for chunk in r.iter_content(chunk_size, True):
        print(chunk)
- iter_lines(chunk_size=512, decode_unicode=None, delimiter=None)
- Iterates over the response body one line at a time, yielding bytes
- chunk_size
- The chunk size, in bytes
- decode_unicode
- When True, lines are decoded with the best guess of the encoding
- delimiter
- The line terminator; defaults to \n
- Not safe to call more than once: repeated calls can lose part of the received data.
To consume the lines in more than one place, store the generator it returns first, as in the example below:
import requests
from contextlib import closing

with closing(requests.get('http://httpbin.org/stream/20', stream=True)) as r:
    # use ',' as the line delimiter instead of the default \n
    lines = r.iter_lines(delimiter=b",")
    # keep the first line, or simply ignore it
    first_line = next(lines)
    for line in lines:
        print(line)
- json(**kwargs)
- Parses the JSON body into a dict
- **kwargs
- Passed through to json.loads
import requests

r = requests.get('https://httpbin.org/get')
r.json()
- raise_for_status()
- Raises an HTTPError if one occurred
import requests

r = requests.get('http://httpbin.org/status/404')
r.raise_for_status()
# Traceback (most recent call last):
#   File "<stdin>", line 1, in <module>
#   File "D:\pythonVenv\developEnv\lib\site-packages\requests\models.py", line 840, in raise_for_status
#     raise HTTPError(http_error_msg, response=self)
# requests.exceptions.HTTPError: 404 Client Error: NOT FOUND for url: http://httpbin.org/status/404
requests.PreparedRequest
The prepared request, produced by requests.Session.prepare_request(request) or requests.Request.prepare()

- class requests.PreparedRequest
- Attributes
- body
- headers
- hooks
- method
- path_url
- The path portion of the URL
import requests

r = requests.get('https://httpbin.org/get')
r.request.path_url  # /get
- url
- Methods
- deregister_hook(event, hook)
- register_hook(event, hook)
- prepare(method=None, url=None, headers=None, files=None, data=None, params=None, auth=None, cookies=None, hooks=None, json=None)
- prepare_auth(auth, url='')
- prepare_body(data, files, json=None)
- prepare_cookies(cookies)
- prepare_headers(headers)
- prepare_hooks(hooks)
- prepare_method(method)
- prepare_url(url, params)
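The prepare_* helpers are normally invoked for you by prepare(); a minimal sketch of the overall flow:

import requests

req = requests.Request('GET', 'http://httpbin.org/get', params={'k': 'v'})
prepped = req.prepare()  # calls prepare_method, prepare_url, prepare_headers, ...
prepped.method    # 'GET'
prepped.url       # 'http://httpbin.org/get?k=v'
prepped.path_url  # '/get?k=v'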
requests.adapters.HTTPAdapter
The built-in HTTP adapter (reference)

import requests

s = requests.Session()
a = requests.adapters.HTTPAdapter(max_retries=3)
s.mount('http://', a)
- class requests.adapters.HTTPAdapter(pool_connections=10, pool_maxsize=10, max_retries=0, pool_block=False)
- Parameters
- pool_connections
- The number of urllib3 connection pools to cache
import logging
logging.basicConfig(level=logging.DEBUG)

import requests

s = requests.Session()
s.mount('https://', requests.adapters.HTTPAdapter(pool_connections=2))
s.get('https://httpbin.org/')
s.get('https://www.google.com.tw/')
s.get('https://en.wikipedia.org/wiki/Main_Page')
s.get('https://www.google.com.tw/')
s.close()

"""output
# new connection to httpbin.org
INFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): httpbin.org
DEBUG:requests.packages.urllib3.connectionpool:"GET / HTTP/1.1" 200 12150
# new connection to www.google.com.tw
INFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): www.google.com.tw
DEBUG:requests.packages.urllib3.connectionpool:"GET / HTTP/1.1" 200 None
# new connection to en.wikipedia.org; the cache limit is 2, so the httpbin.org pool is dropped
INFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): en.wikipedia.org
DEBUG:requests.packages.urllib3.connectionpool:"GET /wiki/Main_Page HTTP/1.1" 200 16864
# the www.google.com.tw connection is reused
DEBUG:requests.packages.urllib3.connectionpool:"GET / HTTP/1.1" 200 None
"""
- pool_maxsize
- The maximum number of connections each pool keeps to one host; matters when the same session is shared by multiple threads
import logging
logging.basicConfig(level=logging.DEBUG)

import requests
import _thread
import time

def thread_get(s, url):
    s.get(url)

def main():
    s = requests.Session()
    s.mount('https://', requests.adapters.HTTPAdapter(pool_connections=1, pool_maxsize=2))
    t1 = _thread.start_new_thread(thread_get, (s, 'https://httpbin.org/',))
    t2 = _thread.start_new_thread(thread_get, (s, 'https://httpbin.org/get',))
    t3 = _thread.start_new_thread(thread_get, (s, 'https://httpbin.org/headers',))
    # wait here, otherwise a fourth connection is created and the reuse is not visible
    time.sleep(5)
    t4 = _thread.start_new_thread(thread_get, (s, 'https://httpbin.org/ip',))

main()

"""output
# three connections are opened at once
INFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): httpbin.org
INFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (2): httpbin.org
INFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (3): httpbin.org
DEBUG:requests.packages.urllib3.connectionpool:"GET /headers HTTP/1.1" 200 156
DEBUG:requests.packages.urllib3.connectionpool:"GET / HTTP/1.1" 200 12150
DEBUG:requests.packages.urllib3.connectionpool:"GET /get HTTP/1.1" 200 238
# over the limit, so the third connection is discarded
WARNING:requests.packages.urllib3.connectionpool:Connection pool is full, discarding connection: httpbin.org
# a pooled connection is reused
DEBUG:requests.packages.urllib3.connectionpool:"GET /ip HTTP/1.1" 200 32
"""
- max_retries
- The maximum number of retries for failed requests
- Applies only to failed DNS lookups, socket connections, and connection timeouts
- By default requests does not retry failed connections; for fine-grained control over the retry conditions, pass in urllib3's Retry class instead of an integer (see the sketch below)
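A minimal sketch of that approach; the Retry values here are only an illustration:

import requests
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.util.retry import Retry

retry = Retry(total=3,                                # at most 3 retries
              backoff_factor=0.5,                     # exponential back-off between attempts
              status_forcelist=[500, 502, 503, 504])  # also retry on these status codes
adapter = HTTPAdapter(max_retries=retry)

s = requests.Session()
s.mount('http://', adapter)
s.mount('https://', adapter)
s.get('http://httpbin.org/get')
s.close()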
- pool_block
- When True, once the number of connections to a host hits the limit, further requests block until a connection is released
import logging
logging.basicConfig(level=logging.DEBUG)

import requests
import _thread
import time

def thread_get(s, url):
    s.get(url)

def main():
    s = requests.Session()
    s.mount('https://', requests.adapters.HTTPAdapter(pool_connections=1, pool_maxsize=2, pool_block=True))
    t1 = _thread.start_new_thread(thread_get, (s, 'https://httpbin.org/',))
    t2 = _thread.start_new_thread(thread_get, (s, 'https://httpbin.org/get',))
    t3 = _thread.start_new_thread(thread_get, (s, 'https://httpbin.org/headers',))
    # wait here, otherwise a fourth connection is created and the reuse is not visible
    time.sleep(5)
    t4 = _thread.start_new_thread(thread_get, (s, 'https://httpbin.org/ip',))

main()

"""output
# the limit is two, so only two connections are opened
INFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): httpbin.org
INFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (2): httpbin.org
DEBUG:requests.packages.urllib3.connectionpool:"GET / HTTP/1.1" 200 12150
DEBUG:requests.packages.urllib3.connectionpool:"GET /headers HTTP/1.1" 200 156
DEBUG:requests.packages.urllib3.connectionpool:"GET /get HTTP/1.1" 200 238
DEBUG:requests.packages.urllib3.connectionpool:"GET /ip HTTP/1.1" 200 32
"""
- Methods
- add_headers(request, **kwargs)
- Adds any headers needed by the request
- Should not be called directly; meant to be used only by HTTPAdapter subclasses
- build_response(req, resp)
- Builds a Response object from a urllib3 response and the request that produced it
- Should not be called directly; meant to be used only by HTTPAdapter subclasses
- cert_verify(conn, url, verify, cert)
- Verifies an SSL certificate
- Should not be called directly; meant to be used only by HTTPAdapter subclasses
- close()
- Disposes of all internal state
- Currently this only closes the PoolManager, which disconnects the pooled connections
- get_connection(url, proxies=None)
- Returns a urllib3 connection for the given URL
- Should not be called directly; meant to be used only by HTTPAdapter subclasses
- init_poolmanager(connections, maxsize, block=False, **pool_kwargs)
- Initializes a urllib3 PoolManager instance
- Should not be called directly; meant to be used only by HTTPAdapter subclasses
- proxy_headers(proxy)
- Returns a dict of headers to add to any request sent through a proxy
- Should not be called directly; meant to be used only by HTTPAdapter subclasses
- proxy_manager_for(proxy, **proxy_kwargs)
- Returns the urllib3 ProxyManager for the given proxy
- Should not be called directly; meant to be used only by HTTPAdapter subclasses
- request_url(request, proxies)
- Obtains the URL to use for the final request: when the message is sent through an HTTP proxy the full URL must be used, otherwise only the path portion of the URL is needed
- Should not be called directly; meant to be used only by HTTPAdapter subclasses
- send(request, stream=False, timeout=None, verify=True, cert=None, proxies=None)
- Sends a PreparedRequest object and returns a Response object
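Because these methods exist to be overridden, the usual pattern is a small HTTPAdapter subclass. This sketch, adapted from the transport-adapter example in the official documentation, forces a specific TLS version by overriding init_poolmanager (the host name is hypothetical):

import ssl
import requests
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.poolmanager import PoolManager

class Tls12HttpAdapter(HTTPAdapter):
    # force TLS v1.2 on every connection made through this adapter
    def init_poolmanager(self, connections, maxsize, block=False):
        self.poolmanager = PoolManager(num_pools=connections, maxsize=maxsize,
                                       block=block, ssl_version=ssl.PROTOCOL_TLSv1_2)

s = requests.Session()
s.mount('https://tls12-only.example.com/', Tls12HttpAdapter())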
Authentication
import requests
from requests.auth import HTTPBasicAuth

auth = HTTPBasicAuth('user', 'pass')
requests.get('https://api.github.com/user', auth=auth)

# shorthand for HTTPBasicAuth
requests.get('https://api.github.com/user', auth=('user', 'pass'))
- class requests.auth.AuthBase
- The base class all authentication implementations derive from
- class requests.auth.HTTPBasicAuth(username, password)
- Attaches HTTP Basic Authentication to the given Request object
- class requests.auth.HTTPProxyAuth(username, password)
- Attaches HTTP Proxy Authentication to the given Request object
- class requests.auth.HTTPDigestAuth(username, password)
- Attaches HTTP Digest Authentication to the given Request object
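httpbin also exposes a digest-auth endpoint, so HTTPDigestAuth can be tried directly (the credentials are part of the test URL):

import requests
from requests.auth import HTTPDigestAuth

url = 'http://httpbin.org/digest-auth/auth/user/pass'
requests.get(url, auth=HTTPDigestAuth('user', 'pass'))  # <Response [200]>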
Encoding
- requests.utils.get_encodings_from_content(content)
- Returns the encodings declared by the meta charset tags in content
- requests.utils.get_encoding_from_headers(headers)
- If the headers contain a content-type with a charset, returns that encoding;
otherwise, if the MIME type is text/* without a charset, returns ISO-8859-1;
in every other case returns None
- requests.utils.get_unicode_from_response(r)
- Returns the response content as unicode
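A small sketch of these helpers against httpbin's /html endpoint (the returned values are illustrative):

import requests

r = requests.get('http://httpbin.org/html')
requests.utils.get_encoding_from_headers(r.headers)  # e.g. 'utf-8'
requests.utils.get_encodings_from_content(r.text)    # e.g. [] when no meta charset is present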
Cookie
import requests

r = requests.get('https://www.google.com.tw/')
r.cookies  # <RequestsCookieJar[Cookie(...)]>
- requests.utils.dict_from_cookiejar(cj)
- Extracts the cookies from a CookieJar object and returns them as a dict
- requests.utils.cookiejar_from_dict(cookie_dict, cookiejar=None, overwrite=True)
- Merges a cookie dict into a CookieJar object;
when cookiejar is None, a new CookieJar is built from the cookie dict
- requests.utils.add_dict_to_cookiejar(cj, cookie_dict)
- Inserts the values of a cookie dict into a CookieJar object
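A minimal sketch of the three helpers, using httpbin's cookie-setting endpoint:

import requests

r = requests.get('http://httpbin.org/cookies/set?name=value', allow_redirects=False)
d = requests.utils.dict_from_cookiejar(r.cookies)  # {'name': 'value'}
jar = requests.utils.cookiejar_from_dict(d)        # a new CookieJar built from the dict
requests.utils.add_dict_to_cookiejar(jar, {'other': '1'})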
- class requests.cookies.RequestsCookieJar(policy=None)
- Compatible with cookielib.CookieJar, but can also be used like a dict
- add_cookie_header(request)
- Adds a Cookie header to a urllib2.Request object
- clear(domain=None, path=None, name=None)
- Clears cookies
- clear_expired_cookies()
- Discards expired cookies
- Should not be called; for internal use only
- clear_session_cookies()
- Discards session cookies
- copy()
- Returns a copy of the cookies
- extract_cookies(response, request)
- Extracts cookies from the response, where allowed by the request
- response
- a urllib3.HTTPResponse object
- request
- a requests.Request object
- get(name, default=None, domain=None, path=None)
- Gets the value of a cookie
- get_dict(domain=None, path=None)
- Returns the cookies as a dict
- items()
- Returns all the items
- iteritems()
- Returns a generator over the items
- keys()
- Returns all the keys
- iterkeys()
- Returns a generator over the keys
- values()
- Returns all the values
- itervalues()
- Returns a generator over the values
- list_domains()
- Lists all the domains
- ['.google.com.tw']
- list_paths()
- Lists all the paths
- ['/']
- make_cookies(response, request)
- Builds Cookie objects by extracting them from the response for the given request
- multiple_domains()
- Checks whether the jar spans more than one domain
- pop(k[, d])
- Returns the value of key k and removes that key from the cookies;
when the key is not found, d is returned if given, otherwise a KeyError is raised
- popitem()
- Returns a (key, value) pair; raises a KeyError when the jar is empty
- In practice this appears to be buggy
- set(name, value, **kwargs)
- Sets a cookie value; domain and path can be supplied (see the sketch at the end of this section)
- set_cookie_if_ok(cookie, request)
- Sets the cookie if policy says it is OK to do so
- request
- a urllib3.Request
- setdefault(k[, d])
- If k is in the cookies, returns its value; otherwise inserts k:d into the cookies
- update(other)
- Updates the cookies from another CookieJar or dict-like object
- class requests.cookies.CookieConflictError
- Raised when cookies conflict
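A minimal sketch of the dict-like interface described above (it mirrors the cookie-jar example in the official documentation; outputs are illustrative):

import requests

jar = requests.cookies.RequestsCookieJar()
jar.set('tasty_cookie', 'yum', domain='httpbin.org', path='/cookies')
jar.set('gross_cookie', 'blech', domain='httpbin.org', path='/elsewhere')
r = requests.get('http://httpbin.org/cookies', cookies=jar)
r.text             # '{"cookies": {"tasty_cookie": "yum"}}'
jar.get_dict()     # {'tasty_cookie': 'yum', 'gross_cookie': 'blech'}
jar.list_domains() # ['httpbin.org']
jar.list_paths()   # e.g. ['/cookies', '/elsewhere']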
Status code lookup
- requests.codes
- requests.codes['temporary_redirect']
- 307
- requests.codes.teapot
- 418
- requests.codes['\o/']
- 200
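The lookup table makes status checks readable; a minimal sketch:

import requests

r = requests.get('http://httpbin.org/status/418')
r.status_code == requests.codes.teapot  # True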