Python: parsing a URL list with Scrapy after logging in


I'm not very familiar with Python, so please bear with me.
I have a Scrapy spider that works as it should, but now I need a new one, and this time it has to crawl within a logged-in session.
The spider should use a list of URLs obtained from a sitemap as its start_urls, make a request to the login form first, and then, once logged in, start parsing my list...

Here is my code so far:

class StockPricesSpider(Spider): 
    name = "logged-in" 
    allowed_domains = ["example.com"] 
    d = strftime("%Y-%m-%d", gmtime()) 
    start_urls = ['https://www.example.com/customer/account/login/'] 
 
    def parse(self, response): 
        return [FormRequest.from_response(response, 
                    formdata={'username': 'myuser', 'password': 'mypass'}, 
                    callback=self.after_login)] 
 
    def after_login(self, response): 
        # check login succeed before going on 
        if "Invalid login or password." in response.body: 
            self.log("Login failed", level=log.ERROR) 
            return 
        else: 
             logging.log(logging.INFO,'Logged in and start parsing') 
             return Request("http://www.example.com/", callback=self.parse_products) 
 
    def parse_products(self, response): 
        f = open("data/sitemaps/urls04102015.txt") 
        start_urls = [url.strip() for url in f.readlines()] 
        f.close() 
        d = strftime("%Y-%m-%d", gmtime()) 
        if os.path.exists("data/results/stock_"+d+".csv"): 
            os.remove("data/results/stock_"+d+".csv")              
 
        sel = Selector(response) 
        separator = ";" 
        items = [] 
 
        item = MyPrices() 
        sku = sel.xpath('.//strong[@itemprop="productID"]/text()').extract() 
        logging.log(logging.INFO, sku) 
        if len(sku) > 0:         
            item['sku'] = "med_" + sel.xpath('.//strong[@itemprop="productID"]/text()').extract()[0].strip() 
            ... 
        items.append(item)          
        return items 

This doesn't work, because I'm not calling the parser correctly.
Basically, I don't get any errors, but the URLs don't get parsed either.
The login itself works and I sign in successfully, but after that (once logged in) how do I get Scrapy to parse the URL list?

EDIT
I found a new approach to my problem, but it doesn't work correctly either. Please help me debug this one (or the first approach).
class StockPricesSpiderX(InitSpider): 
    name = "logged-in" 
    allowed_domains = ["example.com"] 
    login_page = 'https://www.example.com/ro/customer/account/login/'  
    d = strftime("%Y-%m-%d", gmtime()) 
    f = open("data/sitemaps/urls04102015.txt") 
    start_urls = [url.strip() for url in f.readlines()] 
    f.close() 
    if os.path.exists("data/results/stock_"+d+".csv"): 
        os.remove("data/results/stock_"+d+".csv") 
 
    def init_request(self): 
        """ Called before crawler starts """ 
        logging.log(logging.INFO, 'before crawler starts...') 
        return Request(url=self.login_page, callback=self.login) 
 
    def login(self, response): 
        """ Generate login request """ 
        logging.log(logging.INFO, 'do login...') 
        return FormRequest.from_response(response, 
                                         formdata={'name':'myuser','password':'mypass'}, 
                                         callback=self.check_login_response) 
    def check_login_response(self,response): 
        """ Check the response returned by login request to see if we are logged in """ 
        if "Invalid login or password." in response.body: 
            logging.log(logging.INFO,'... BAD LOGIN ...') 
        else: 
            logging.log(logging.INFO, 'GOOD LOGIN... initialize') 
            self.initialized() 
 
    def parse_item(self, response): 
        sel = Selector(response) 
        separator = ";" 
        items = [] 
        item = StockPrices() 
        sku = sel.xpath('.//strong[@itemprop="productID"]/text()').extract() 
        logging.log(logging.INFO, sku) 
        ... 
        items.append(item)          
        return items 

The execution log shows:
2015-12-03 14:54:16 [scrapy] INFO: Scrapy 1.0.3 started (bot: scrapybot) 
2015-12-03 14:54:16 [scrapy] INFO: Optional features available: ssl, http11 
2015-12-03 14:54:16 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'products.spiders', 'FEED_URI': 'calinxautomat.csv', 'LOG_LEVEL': 'INFO', 'DUPEFILTER_CLASS': 'scrapy.dupefilter.RFPDupeFilter', 'SPIDER_MODULES': ['products.spiders'], 'DEFAULT_ITEM_CLASS': 'products.items.Subcategories', 'FEED_FORMAT': 'csv'} 
2015-12-03 14:54:21 [scrapy] INFO: Enabled extensions: CloseSpider, FeedExporter, TelnetConsole, LogStats, CoreStats, SpiderState 
2015-12-03 14:54:23 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats 
2015-12-03 14:54:23 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware 
2015-12-03 14:54:23 [scrapy] INFO: Enabled item pipelines: myWriteToCsv 
2015-12-03 14:54:23 [root] INFO: before crawler starts... 
2015-12-03 14:54:23 [scrapy] INFO: Spider opened 
2015-12-03 14:54:24 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min) 
2015-12-03 14:54:25 [root] INFO: do login... 
2015-12-03 14:54:26 [scrapy] INFO: Closing spider (finished) 
2015-12-03 14:54:26 [scrapy] INFO: Dumping Scrapy stats: 

...

So this one doesn't seem to get past the login stage... as if the callback from the FormRequest never fires...
What am I doing wrong?

Please consider the following approach:

The assignment to start_urls inside parse_products() creates a variable local to that method, not the class attribute you set at the top of the spider. In any case, assigning to start_urls won't do what you want: Scrapy won't notice the new list and go parse it. What you need to do is queue the new URLs to be parsed:

for url in f.readlines(): 
    yield Request(url.strip(), callback=self.parse_products) 
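
Putting that together with the first spider from the question, a minimal sketch might look like the following. The file path, credentials, selector and the MyPrices item are placeholders carried over from the question (MyPrices is assumed to be importable from your items module), and the sketch is untested against the real site:

import logging 
 
from scrapy import Spider 
from scrapy.http import Request, FormRequest 
 
 
class StockPricesSpider(Spider): 
    name = "logged-in" 
    allowed_domains = ["example.com"] 
    start_urls = ['https://www.example.com/customer/account/login/'] 
 
    def parse(self, response): 
        # Submit the login form found on the start URL. 
        return FormRequest.from_response( 
            response, 
            formdata={'username': 'myuser', 'password': 'mypass'}, 
            callback=self.after_login) 
 
    def after_login(self, response): 
        # Scrapy 1.x on Python 2: response.body is a str, so this check works as-is. 
        if "Invalid login or password." in response.body: 
            logging.error("Login failed") 
            return 
        logging.info("Logged in, queueing product URLs") 
        # Queue one Request per URL from the sitemap file; assigning to a local 
        # start_urls variable would never be seen by the scheduler. 
        with open("data/sitemaps/urls04102015.txt") as f: 
            for url in f: 
                yield Request(url.strip(), callback=self.parse_products) 
 
    def parse_products(self, response): 
        item = MyPrices()  # item class from the question, imported elsewhere 
        sku = response.xpath('.//strong[@itemprop="productID"]/text()').extract() 
        if sku: 
            item['sku'] = "med_" + sku[0].strip() 
        yield item 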

Update, regarding your edit: Scrapy has a duplicate URL filter, so it will not revisit pages it has already seen. See this; tl;dr: set dont_filter=True on the FormRequest.
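
Applied to the InitSpider variant above, that hint would look roughly like this (a sketch only; the rest of the spider stays as in the question):

    def login(self, response): 
        """Generate the login request. dont_filter=True stops the dupefilter 
        from silently dropping the request if the URL was already visited.""" 
        return FormRequest.from_response( 
            response, 
            formdata={'name': 'myuser', 'password': 'mypass'}, 
            callback=self.check_login_response, 
            dont_filter=True) 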

