
How to Classify Python Crawlers

Published: 2025-01-31 Author: 千家信息网 editor

In this post I'd like to share how Python crawlers are classified. Many readers may not be familiar with this topic yet, so I'm sharing this article for reference; I hope you get a lot out of it. Let's take a look together!

1. By purpose, crawlers can be divided into functional crawlers and incremental-data crawlers.

2. Based on whether the URL and the corresponding page content change, incremental crawlers fall into two kinds: those where new content appears under new URLs, and those where the URL stays the same but its content changes. Both kinds rely on the same Redis-set trick, sketched right after this list.
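What both variants share is the deduplication primitive: Redis's SADD returns 1 when a member is new and 0 when it already exists, so a single call both records a fingerprint and tells you whether it has been seen before. A minimal sketch, assuming a local Redis instance (the key name seen_urls and the sample URL are only illustrative):

from redis import Redis

conn = Redis(host='127.0.0.1', port=6379)

def is_new(fingerprint):
    # sadd returns the number of members actually added:
    # 1 means the fingerprint was unseen, 0 means it is a duplicate
    return conn.sadd('seen_urls', fingerprint) == 1

print(is_new('https://example.com/page/1'))  # True on the first call
print(is_new('https://example.com/page/1'))  # False afterwards

The URL-change variant fingerprints detail-page URLs; the content-change variant hashes the record itself, as the two examples below show.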

Examples

- Requirement: crawl the movie titles and descriptions from 4567tv, requesting only detail pages that have not been seen before (URL-based increment).

# 1. spider file

import scrapy
from movieAddPro.items import MovieaddproItem
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from redis import Redis

class MovieaddSpider(CrawlSpider):
    name = 'movieadd'
    # allowed_domains = ['www.xxx.com']
    start_urls = ['https://www.4567tv.tv/frim/index1.html']

    link = LinkExtractor(allow=r'/frim/index1-\d+\.html')
    rules = (
        Rule(link, callback='parse_item', follow=True),
    )

    # create a Redis connection object shared by all callbacks
    conn = Redis(host='127.0.0.1', port=6379)

    # parse the movie title and the detail-page url from each list page
    def parse_item(self, response):
        li_list = response.xpath('/html/body/div[1]/div/div/div/div[2]/ul/li')
        for li in li_list:
            title = li.xpath('./div/a/@title').extract_first()
            # build the absolute url of the detail page
            detail_url = 'https://www.4567tv.tv' + li.xpath('./div/a/@href').extract_first()
            item = MovieaddproItem()
            item['title'] = title

            # sadd returns 1 only if the url was not already in the set,
            # i.e. this detail page has never been requested before
            ex = self.conn.sadd('movieadd_detail_urls', detail_url)
            if ex == 1:
                print('New data found, crawling......')
                yield scrapy.Request(url=detail_url, callback=self.parse_detail, meta={'item': item})
            else:
                print('No new data yet......')

    def parse_detail(self, response):
        item = response.meta['item']
        desc = response.xpath('/html/body/div[1]/div/div/div/div[2]/p[5]/span[3]/text()').extract_first()
        item['desc'] = desc
        yield item

--------------------------------------------------------------------------------
# 2. pipelines file

import json

class MovieaddproPipeline(object):

    def process_item(self, item, spider):
        dic = {
            'title': item['title'],
            'desc': item['desc']
        }
        print(dic)
        conn = spider.conn
        # serialize the dict to JSON first; redis-py only accepts bytes/str/numbers
        conn.lpush('movieadd_data', json.dumps(dic, ensure_ascii=False))
        return item

--------------------------------------------------------------------------------
# 3. items file

import scrapy

class MovieaddproItem(scrapy.Item):
    title = scrapy.Field()
    desc = scrapy.Field()

--------------------------------------------------------------------------------
# 4. settings file

BOT_NAME = 'movieAddPro'

SPIDER_MODULES = ['movieAddPro.spiders']
NEWSPIDER_MODULE = 'movieAddPro.spiders'

USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36'

ROBOTSTXT_OBEY = False
LOG_LEVEL = 'ERROR'

ITEM_PIPELINES = {
    'movieAddPro.pipelines.MovieaddproPipeline': 300,
}

- Requirement: crawl the jokes and their authors from Qiushibaike, yielding only records that have not been stored before (content-based increment).

# 1. spider file

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from incrementByDataPro.items import IncrementbydataproItem
from redis import Redis
import hashlib

class QiubaiSpider(CrawlSpider):
    name = 'qiubai'
    start_urls = ['https://www.qiushibaike.com/text/']

    rules = (
        Rule(LinkExtractor(allow=r'/text/page/\d+/'), callback='parse_item', follow=True),
        Rule(LinkExtractor(allow=r'/text/$'), callback='parse_item', follow=True),
    )

    # create a Redis connection object
    conn = Redis(host='127.0.0.1', port=6379)

    def parse_item(self, response):
        div_list = response.xpath('//div[@id="content-left"]/div')

        for div in div_list:
            item = IncrementbydataproItem()
            item['author'] = div.xpath('./div[1]/a[2]/h3/text() | ./div[1]/span[2]/h3/text()').extract_first()
            item['content'] = div.xpath('.//div[@class="content"]/span/text()').extract_first()

            # derive a unique fingerprint from the parsed record itself
            # (guard against None in case an xpath matches nothing)
            source = (item['author'] or '') + (item['content'] or '')
            source_id = hashlib.sha256(source.encode()).hexdigest()
            # store the fingerprint in the Redis set data_id;
            # sadd returns 1 only for fingerprints not seen before
            ex = self.conn.sadd('data_id', source_id)

            if ex == 1:
                print('This record has not been crawled yet, crawling......')
                yield item
            else:
                print('This record was already crawled, skipping!!!')

--------------------------------------------------------------------------------
# 2. pipelines file

import json
from redis import Redis

class IncrementbydataproPipeline(object):
    conn = None

    def open_spider(self, spider):
        self.conn = Redis(host='127.0.0.1', port=6379)

    def process_item(self, item, spider):
        dic = {
            'author': item['author'],
            'content': item['content']
        }
        print(dic)
        # serialize to JSON before pushing; redis-py does not accept raw dicts
        self.conn.lpush('qiubaiData', json.dumps(dic, ensure_ascii=False))
        return item
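The pipelines above only push JSON strings onto a Redis list; persisting them elsewhere is left to a separate process. A minimal sketch of such a consumer, assuming the same local Redis and the movieadd_data key used in the first example:

import json
from redis import Redis

conn = Redis(host='127.0.0.1', port=6379)

while True:
    # lpush adds at the head and rpop removes from the tail,
    # so the list behaves as a FIFO queue; None means it is empty
    raw = conn.rpop('movieadd_data')
    if raw is None:
        break
    record = json.loads(raw)
    print(record['title'], record['desc'])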

That is all of "How to Classify Python Crawlers". Thank you for reading! I hope you now have a general understanding and that the content shared here proves helpful. If you'd like to learn more, feel free to follow the industry news channel!
