CrawlSpiders
CrawlSpider is a subclass of Spider. The Spider class is designed to crawl only the pages in its start_urls list, while CrawlSpider defines a set of rules (Rule objects) that provide a convenient mechanism for following links, which makes it better suited to crawls that extract links from fetched pages and keep going.

Source code for reference (with the relevant imports, paths as in Scrapy 1.x):

import copy

from scrapy.http import Request, HtmlResponse
from scrapy.spiders import Spider
from scrapy.utils.spider import iterate_spider_output

class CrawlSpider(Spider):

    rules = ()

    def __init__(self, *a, **kw):
        super(CrawlSpider, self).__init__(*a, **kw)
        self._compile_rules()

    # parse() is called first, on the responses returned for start_urls. It
    # hands each response to _parse_response(), with parse_start_url() as the
    # callback and the follow flag set to True, so it yields both items and
    # the follow-up Request objects.
    def parse(self, response):
        return self._parse_response(response, self.parse_start_url, cb_kwargs={}, follow=True)

    # Handles the responses for start_urls; meant to be overridden.
    def parse_start_url(self, response):
        return []

    def process_results(self, response, results):
        return results

    # Extracts the links in the response that match any user-defined Rule and
    # wraps them in Request objects.
    def _requests_to_follow(self, response):
        if not isinstance(response, HtmlResponse):
            return
        seen = set()
        # A link is accepted as soon as it matches any single rule.
        for n, rule in enumerate(self._rules):
            links = [l for l in rule.link_extractor.extract_links(response) if l not in seen]
            # Filter the links with the user-supplied process_links, if any.
            if links and rule.process_links:
                links = rule.process_links(links)
            # Add each link to the seen set and build a Request for it, with
            # _response_downloaded() as its callback.
            for link in links:
                seen.add(link)
                r = Request(url=link.url, callback=self._response_downloaded)
                r.meta.update(rule=n, link_text=link.text)
                # Pass each Request through process_request(), which defaults
                # to the identity function: it returns the Request unchanged.
                yield rule.process_request(r)

    # Handles responses downloaded for the links a rule extracted, and returns
    # items as well as requests.
    def _response_downloaded(self, response):
        rule = self._rules[response.meta['rule']]
        return self._parse_response(response, rule.callback, rule.cb_kwargs, rule.follow)

    # Parses the response with the given callback and yields the resulting
    # Request or Item objects.
    def _parse_response(self, response, callback, cb_kwargs, follow=True):
        # First check whether a callback is set (it may be a rule's callback
        # or parse_start_url). If so, run it on the response, then pass its
        # output through process_results() and yield each result.
        if callback:
            # Called from parse(), the results are follow-up Requests;
            # called from a rule callback, they are typically Items.
            cb_res = callback(response, **cb_kwargs) or ()
            cb_res = self.process_results(response, cb_res)
            for requests_or_item in iterate_spider_output(cb_res):
                yield requests_or_item
        # If following is enabled, extract and yield further Requests using
        # the defined rules.
        if follow and self._follow_links:
            for request_or_item in self._requests_to_follow(response):
                yield request_or_item

    def _compile_rules(self):
        def get_method(method):
            if callable(method):
                return method
            elif isinstance(method, basestring):
                return getattr(self, method, None)

        self._rules = [copy.copy(r) for r in self.rules]
        for rule in self._rules:
            rule.callback = get_method(rule.callback)
            rule.process_links = get_method(rule.process_links)
            rule.process_request = get_method(rule.process_request)

    def set_crawler(self, crawler):
        super(CrawlSpider, self).set_crawler(crawler)
        self._follow_links = crawler.settings.getbool('CRAWLSPIDER_FOLLOW_LINKS', True)
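Since parse_start_url() is the intended override point, a minimal sketch of a subclass that handles the start pages itself could look like the following. The spider name and the yielded dict are illustrative assumptions, and yielding plain dicts as items assumes Scrapy 1.0 or later:

from scrapy.spiders import CrawlSpider

class StartPageSpider(CrawlSpider):
    name = "startpage"  # hypothetical spider name
    start_urls = ["http://hr.tencent.com/position.php"]

    # Called for each response from start_urls; may yield items or requests.
    def parse_start_url(self, response):
        # Illustrative: record the title of each start page as a simple item.
        for title in response.xpath('//title/text()').extract():
            yield {'start_page_title': title}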
CrawlSpider inherits from Spider; besides the inherited attributes (name, allowed_domains), it provides new attributes and methods.

LinkExtractors

class scrapy.linkextractors.LinkExtractor

The purpose of a Link Extractor is simple: to extract links. The single public method of every LinkExtractor is extract_links(), which receives a Response object and returns a list of scrapy.link.Link objects. A Link Extractor is instantiated once, and its extract_links method is then called repeatedly on different responses to extract links.

class scrapy.linkextractors.LinkExtractor(
allow = (),
deny = (),
allow_domains = (),
deny_domains = (),
deny_extensions = None,
restrict_xpaths = (),
tags = ('a','area'),
attrs = ('href',),
canonicalize = True,
unique = True,
process_value = None
)
Main parameters:

allow: only URLs matching the given regular expression(s) are extracted; if empty, all links match.
deny: URLs matching the given regular expression(s) are skipped; this takes precedence over allow. If empty, no links are excluded.
allow_domains: only links pointing to the given domains are extracted.
deny_domains: links pointing to the given domains are never extracted.
restrict_xpaths: only extract links from the regions of the page matched by these XPath expressions.
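As a quick illustration, the extractor can be exercised in a scrapy shell session on the listing page used later in this section (the pattern is the same one the example spider uses):

# scrapy shell "http://hr.tencent.com/position.php?&start=0#a"
from scrapy.linkextractors import LinkExtractor

page_lx = LinkExtractor(allow=(r'start=\d+',))
links = page_lx.extract_links(response)  # a list of scrapy.link.Link objects
for link in links:
    print link.url, link.text  # Python 2 print, as used elsewhere in this article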
rules

The rules attribute holds one or more Rule objects; each Rule defines a particular action for crawling the site. If several rules match the same link, the first one, in the order they are defined in this attribute, is used.

class scrapy.spiders.Rule(
link_extractor,
callback = None,
cb_kwargs = None,
follow = None,
process_links = None,
process_request = None
)

Main parameters:

link_extractor: a LinkExtractor defining how links are extracted from each crawled page.
callback: a callable, or the name of a spider method, to call for each response generated from a link extracted by this rule. Avoid naming it parse (see the warning below).
cb_kwargs: a dict of keyword arguments to pass to the callback.
follow: whether links should be followed from responses extracted by this rule; defaults to True when callback is None, otherwise False.
process_links: a callable, or the name of a spider method, used to filter the list of links extracted by link_extractor.
process_request: a callable, or the name of a spider method, called with every Request extracted by this rule; it must return a Request or None.
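For example, a sketch of a rule that uses process_links to filter the extracted links might look like this (the filter function, the 'logout' fragment, and the parse_item callback are illustrative assumptions):

from scrapy.spiders import Rule
from scrapy.linkextractors import LinkExtractor

# Hypothetical filter: drop any extracted link whose URL contains 'logout'.
def drop_logout_links(links):
    return [link for link in links if 'logout' not in link.url]

rules = [
    Rule(
        LinkExtractor(allow=(r'start=\d+',)),
        callback='parse_item',            # name of a spider method, resolved by _compile_rules()
        follow=True,
        process_links=drop_logout_links,
    ),
]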
Crawling rules

Continuing with the Tencent recruitment site as the example, here is how CrawlSpider is used together with rules.

CrawlSpider version

Once the testing in scrapy shell is done, modify the code as follows:

# Extract links matching 'http://hr.tencent.com/position.php?&start=\d+'
page_lx = LinkExtractor(allow=(r'start=\d+',))

rules = [
    # Extract matching links, parse them with the spider's parse method, and
    # follow them (no callback means follow defaults to True).
    Rule(page_lx, callback='parse', follow=True)
]
Note: never write parse as the callback. To stress it again: because CrawlSpider uses the parse method to implement its own logic, the crawl spider will fail to run if you override parse.
import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from mySpider.items import TencentItem

class TencentSpider(CrawlSpider):
    name = "tencent"
    allowed_domains = ["hr.tencent.com"]
    start_urls = [
        "http://hr.tencent.com/position.php?&start=0#a"
    ]
    page_lx = LinkExtractor(allow=(r"start=\d+",))

    rules = [
        Rule(page_lx, callback="parseContent", follow=True)
    ]

    def parseContent(self, response):
        for each in response.xpath('//*[@class="even"]'):
            name = each.xpath('./td[1]/a/text()').extract()[0]
            detailLink = each.xpath('./td[1]/a/@href').extract()[0]
            positionInfo = each.xpath('./td[2]/text()').extract()[0]
            peopleNumber = each.xpath('./td[3]/text()').extract()[0]
            workLocation = each.xpath('./td[4]/text()').extract()[0]
            publishTime = each.xpath('./td[5]/text()').extract()[0]
            #print name, detailLink, positionInfo, peopleNumber, workLocation, publishTime
            item = TencentItem()
            item['name'] = name.encode('utf-8')
            item['detailLink'] = detailLink.encode('utf-8')
            item['positionInfo'] = positionInfo.encode('utf-8')
            item['peopleNumber'] = peopleNumber.encode('utf-8')
            item['workLocation'] = workLocation.encode('utf-8')
            item['publishTime'] = publishTime.encode('utf-8')
            yield item

    # There is no need to write a parse() method:
    # def parse(self, response):
    #     pass
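Assuming the project layout from the earlier chapters (with TencentItem defined in mySpider/items.py, as imported above), the spider is then run from the project root as usual:

scrapy crawl tencent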
Logging

Scrapy provides logging support, which can be used through Python's logging module.

Log levels

Scrapy uses the five standard logging levels, from highest to lowest severity:

CRITICAL - critical errors
ERROR - regular errors
WARNING - warnings
INFO - informational messages
DEBUG - debugging messages
Logging settings

The following settings in settings.py can be used to configure logging:

LOG_ENABLED - default: True; enables logging.
LOG_ENCODING - default: 'utf-8'; the encoding used for logging.
LOG_FILE - default: None; file name, created in the current directory, for the logging output.
LOG_LEVEL - default: 'DEBUG'; the minimum level to log.
LOG_STDOUT - default: False; if True, all standard output (and error) of the process is redirected to the log. For example, print "hello" will show up in the Scrapy log.
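A minimal sketch of emitting log messages from spider code via the stdlib logging module (the message text and logger name are illustrative):

import logging

logging.warning("This is a warning")        # shown when LOG_LEVEL is WARNING or lower
logging.log(logging.INFO, "parsed a page")  # same, with an explicit level

logger = logging.getLogger("tencent")       # a named logger; the name is arbitrary
logger.debug("extracted 10 positions")      # shown only when LOG_LEVEL is DEBUG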
Source: https://www./content-4-589301.html