python+selenium: Scraping Amazon Product Information

不会翻墙的泰隆 · Published: 2021-08-26 17:11:14

I recently needed to scrape this site again and found that my old code no longer works!
  • Hence this post: the HTML returned by requests is not the same as what the browser actually renders, which forced me to switch to selenium. In this post I will walk through a basic selenium workflow; a quick way to confirm the mismatch is sketched right below.
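To see the mismatch for yourself, here is a quick check. This is only a sketch: it reuses the search URL and the product-title class from the script later in this post, and Amazon's markup changes often, so treat both as assumptions.

# Quick check: does the HTML returned by requests contain the product-title class
# that the fully rendered page has? The URL and the class name are taken from the
# script below and are assumptions about Amazon's current markup.
import requests
from selenium import webdriver

url = 'https://www.amazon.com/s?k=%E7%A7%AF%E6%9C%A8&ref=nb_sb_noss'
marker = 'a-size-base-plus a-color-base a-text-normal'  # product title class

static_html = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'}).text
print('requests sees the title class:', marker in static_html)

browser = webdriver.Chrome()
browser.get(url)
print('selenium sees the title class:', marker in browser.page_source)
browser.quit()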
    1. As usual, open F12 (DevTools) and analyze the page to work out the XPath expressions. One tip: it is best to build them from the right-click "View Page Source" output, because what the Inspect/Elements panel shows can differ from the HTML we actually receive!
    • Enough talk, here is the code:
from selenium import webdriver
from lxml import etree
from selenium.webdriver.common.by import By  # locator strategies, e.g. By.ID, By.CSS_SELECTOR
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.wait import WebDriverWait  # wait for certain elements to load
import csv
import time

all_list = []

class Amazon(object):
    def __init__(self):
        self.url = 'https://www.amazon.com/s?k=%E7%A7%AF%E6%9C%A8&ref=nb_sb_noss'
        self.browser = webdriver.Chrome()

    def __del__(self):
        # quit() shuts down both the browser window and the chromedriver process
        self.browser.quit()

    def get_html(self):
        time.sleep(3)  # crude wait for the result page to finish rendering
        text = self.browser.page_source
        html = etree.HTML(text)

        # Wait for the "Next" pagination button. On the last page Amazon adds the
        # "a-disabled" class to it, which is how we know when to stop paging.
        next_url = WebDriverWait(self.browser, 10).until(
            EC.presence_of_element_located((By.XPATH, '//li[contains(@class, "a-last")]')))
        is_next = 'a-disabled' not in (next_url.get_attribute('class') or '')

        urls = html.xpath('//a[@class="a-link-normal s-no-outline"]/@href')
        title = html.xpath('//span[@class="a-size-base-plus a-color-base a-text-normal"]/text()')
        pinfen = html.xpath('//span[@class="a-icon-alt"]/text()')
        rating_value = html.xpath('//span[@class="a-size-base"]/text()')
        price = html.xpath('//span[@class="a-offscreen"]/text()')
        for i in range(len(urls)):
            try:
                print(title[i], pinfen[i], rating_value[i], urls[i], price[i])
                row = [title[i], pinfen[i], rating_value[i], urls[i], price[i]]
                all_list.append(row)
            except IndexError:
                # the five lists can differ in length (sponsored items etc.); skip incomplete rows
                pass

        # Store the data with csv. The file is rewritten with the whole accumulated
        # list on every page, so the final file contains every page scraped so far.
        headers = ['title', 'rating', 'review_count', 'url', 'price']
        with open('amazon.csv', 'w', newline='', encoding='utf-8') as f:
            f_csv = csv.writer(f)
            f_csv.writerow(headers)
            f_csv.writerows(all_list)

        return next_url, is_next

    def main(self):
        time.sleep(2)
        self.browser.get(self.url)
        while True:
            next_url, is_next = self.get_html()
            if not is_next:
                break
            next_url.click()


if __name__ == '__main__':
    amazon_spider = Amazon()
    amazon_spider.main()
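If you want to make the waits less brittle, or run the spider without a visible window, one option is headless Chrome plus an explicit wait on the result cards instead of the fixed time.sleep(3). This is only a sketch: the 'div.s-result-item' selector is my assumption about Amazon's current result markup, so adjust it to whatever the page actually shows.

# Sketch: headless Chrome + an explicit wait, as an alternative to time.sleep(3).
# "div.s-result-item" is an assumed selector for one result card, not guaranteed.
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.wait import WebDriverWait

options = Options()
options.add_argument('--headless')             # no visible browser window
options.add_argument('--window-size=1920,1080')
browser = webdriver.Chrome(options=options)
browser.get('https://www.amazon.com/s?k=%E7%A7%AF%E6%9C%A8&ref=nb_sb_noss')

# Block until at least one result card is present (up to 10 seconds), then parse as usual.
WebDriverWait(browser, 10).until(
    EC.presence_of_element_located((By.CSS_SELECTOR, 'div.s-result-item')))
print(len(browser.find_elements(By.CSS_SELECTOR, 'div.s-result-item')), 'result cards loaded')
browser.quit()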

If this post helped you, please like, bookmark, and follow! Your support and recognition are my biggest motivation!