Scrapy advanced: anti-ban strategies — random user-agent, from_crawler, and other small tricks

Tags: finance, quantitative investing, AI
Crawlers put a certain amount of load on websites, so sites deploy anti-crawling countermeasures. We can customize Scrapy a little to keep our spider from getting banned.
The following code implements a middleware that rotates Scrapy's user-agent at random.
import random

class RotateUserAgentMiddleware(object):
    def process_request(self, request, spider):
        ua = random.choice(self.user_agent_list)
        if ua:
            # show the user agent currently in use
            print("********Current UserAgent:%s************" % ua)
            # or log it instead:
            # logging.info('Current UserAgent: ' + ua)
            request.headers.setdefault('User-Agent', ua)

    # the default user_agent_list covers Chrome, IE, Firefox, Mozilla, Opera and Netscape
    # for more user agent strings, see http://www.useragentstring.com/pages/useragentstring.php
    user_agent_list = [
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 "
        "(KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
        "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 "
        "(KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 "
        "(KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 "
        "(KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
        "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 "
        "(KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 "
        "(KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
        "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 "
        "(KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 "
        "(KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 "
        "(KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
        "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 "
        "(KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
    ]
Every middleware must be registered in settings.py:

DOWNLOADER_MIDDLEWARES = {
    # disable the built-in user-agent middleware and mount our random one
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
    'eagle.middlewares.RotateUserAgentMiddleware': 400,
}
Now the user-agent is randomized.
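The rotation logic can be sanity-checked outside Scrapy by driving process_request with stubs: the spider argument is unused, so None works, and a plain dict stands in for request.headers, whose setdefault behaves the same way (FakeRequest is a hypothetical stub, not a Scrapy class; the UA list is trimmed for the demo):

```python
import random

class RotateUserAgentMiddleware(object):
    # trimmed list for the demo; the real middleware carries the full list above
    user_agent_list = [
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 "
        "(KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 "
        "(KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
    ]

    def process_request(self, request, spider):
        ua = random.choice(self.user_agent_list)
        if ua:
            # setdefault only fills the header if it is not already set
            request.headers.setdefault('User-Agent', ua)

class FakeRequest(object):
    # stand-in for scrapy.Request: a plain dict mimics headers.setdefault
    def __init__(self):
        self.headers = {}

mw = RotateUserAgentMiddleware()
req = FakeRequest()
mw.process_request(req, spider=None)
print(req.headers['User-Agent'])  # one of the two strings above
```

Note that because of setdefault, a second pass through the middleware will not overwrite a user-agent that is already set on the request.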
How does a pipeline or middleware get at the settings?
When writing our own pipeline or middleware, we sometimes need to read the project settings. How? The crawler instance has access to virtually all the configuration, so we need the crawler instance passed in when the pipeline or middleware is initialized. Scrapy's design here is quite elegant, though not entirely obvious.
class SelfPipeline(object):
    # If this classmethod exists, Scrapy calls it instead of the plain
    # constructor and passes in the global crawler instance, which is why
    # __init__ takes a crawler argument
    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler)

    def __init__(self, crawler):
        super(SelfPipeline, self).__init__()
        es_hosts = crawler.settings.get("ES_HOST", "xxx")
As shown above, the pipeline's __init__ receives the global crawler.
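A minimal sketch of how Scrapy wires this up, using stand-in objects so it runs without Scrapy (FakeSettings and FakeCrawler are hypothetical stubs; "ES_HOST" follows the example above):

```python
class FakeSettings(object):
    # stub mimicking scrapy Settings.get(name, default)
    def __init__(self, values):
        self._values = values

    def get(self, name, default=None):
        return self._values.get(name, default)

class FakeCrawler(object):
    # stub: the real crawler exposes settings the same way
    def __init__(self, settings):
        self.settings = settings

class SelfPipeline(object):
    def __init__(self, crawler):
        # read a setting at init time, as in the example above
        self.es_hosts = crawler.settings.get("ES_HOST", "xxx")

    @classmethod
    def from_crawler(cls, crawler):
        # Scrapy calls this hook itself; we mimic that call below
        return cls(crawler)

crawler = FakeCrawler(FakeSettings({"ES_HOST": "localhost:9200"}))
pipeline = SelfPipeline.from_crawler(crawler)
print(pipeline.es_hosts)  # → localhost:9200
```

If the setting is absent, the default ("xxx" here) is returned, so the pipeline still constructs cleanly.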
Besides the crawler instance, a pipeline can also hook spider lifecycle events:
- open_spider(self, spider): called when the spider is opened.
- close_spider(self, spider): called when the spider is closed.
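A small sketch of a pipeline using those hooks; the in-memory list is a stand-in for a real resource such as a file or database connection, and CollectPipeline is a hypothetical name:

```python
class CollectPipeline(object):
    def open_spider(self, spider):
        # acquire resources once, when the spider starts
        self.items = []

    def process_item(self, item, spider):
        self.items.append(item)
        return item

    def close_spider(self, spider):
        # release resources when the spider finishes
        print("collected %d items" % len(self.items))

# driving the hooks by hand, as Scrapy would during a crawl
pipeline = CollectPipeline()
pipeline.open_spider(spider=None)
pipeline.process_item({"title": "a"}, spider=None)
pipeline.process_item({"title": "b"}, spider=None)
pipeline.close_spider(spider=None)  # prints: collected 2 items
```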
What request.meta is for
Every request carries a meta attribute, which is simply a dict where you can stash arbitrary data, for example:
{'depth': 2, 'download_timeout': 180.0, 'link_text': '', 'rule': 1}
The depth key is particularly useful: sometimes, in a middleware, we want to know how deep the current URL sits and act accordingly. Start URLs carry no depth key, so its absence tells us the current URL is a start page.
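Inside a middleware that check might look like the following sketch (FakeRequest and describe_depth are hypothetical stand-ins; in Scrapy you would receive the real request object):

```python
class FakeRequest(object):
    # stand-in for scrapy.Request: only the meta dict matters here
    def __init__(self, meta=None):
        self.meta = meta or {}

def describe_depth(request):
    # start URLs carry no 'depth' key, per the note above
    if 'depth' not in request.meta:
        return "start page"
    return "depth %d" % request.meta['depth']

print(describe_depth(FakeRequest()))              # → start page
print(describe_depth(FakeRequest({'depth': 2})))  # → depth 2
```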