
Python web scraping in practice: scraping a novel

Snakin_ya · published 2021-10-02 10:46:00

This post is a hands-on exercise with the requests library and XPath syntax; if you are new to XPath, it is worth learning the basics first (see the reference links at the end).
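Before the full script, here is a minimal sketch of the two kinds of XPath calls it relies on; the HTML fragment below is purely illustrative:

from lxml import etree

# Parse an HTML fragment into an element tree.
html = etree.HTML('<dl><dd><a href="/0_79/1.html">Chapter 1</a></dd></dl>')
# @href selects the href attribute of each <a> inside a <dd>.
print(html.xpath('//dd/a/@href'))   # ['/0_79/1.html']
# text() selects the text nodes of the matched elements.
print(html.xpath('//dd/a/text()'))  # ['Chapter 1']

The full source code: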

import requests
import time
import warnings
from lxml import etree

warnings.filterwarnings("ignore")  # requests is called with verify=False, so suppress the resulting warnings
headers = {
  'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36',
  'Connection': 'close',
}


def get_urls(URL):
  # Fetch the novel's table of contents and return every chapter link.
  resp = requests.get(URL, headers=headers, verify=False)
  resp.encoding = 'utf-8'
  html = etree.HTML(resp.text)
  # Each chapter entry is a <dd><a href="..."> in the chapter list.
  results = html.xpath('//dd/a/@href')
  return results

def get_items(result):
  # Fetch one chapter page and return "title + body", ready to append to file.
  url = 'https://www.ibiquge.net' + str(result)
  resp = requests.get(url, headers=headers, verify=False)
  resp.encoding = 'utf-8'
  html = etree.HTML(resp.text)
  title = ''.join(html.xpath('//*[@class="bookname"]/h1/text()'))
  body = ''.join(html.xpath('//*[@id="content"]/text()'))
  # Strip the \xa0 indentation runs, redundant line breaks, and the site's
  # embedded advertising sentence from the chapter body.
  body = (body.replace('\xa0\xa0\xa0\xa0', '')
              .replace('\r\n\r\n', '\n\n')
              .replace('\r\n            \r\n', '')
              .replace('去÷小?說→網』,為您提供精彩小說閱讀。♂去÷小?說→網』,為您提供精彩小說閱讀。', ''))
  return title + '\n' * 2 + body + '\n' * 2

def save_to_file(items):
  # Open the output file in append mode and add one chapter at a time.
  with open("一念永恒.txt", 'a', encoding='utf-8') as file:
    file.write(items)

def main(URL):
  results = get_urls(URL)
  cha = 1
  for result in results:
    cha = cha + 1
    # Skip the first 12 links (presumably the "latest chapters" block
    # duplicated at the top of the page); ii numbers the real chapters from 1.
    if cha > 13:
      ii = cha - 13
      items = get_items(result)
      save_to_file(items)
      # 1267 is this novel's total chapter count, hardcoded for the display.
      print('\r' + 'Progress: ' + str(ii) + ' / 1267', end='')

if __name__ == '__main__':
  start = time.time()
  URL = 'https://www.ibiquge.net/0_79/'
  main(URL)
  print('\nScraping finished')
  end = time.time()
  print('Elapsed time:', end - start)
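One small refinement to the script above: warnings.filterwarnings("ignore") silences every warning globally just to hide the InsecureRequestWarning that verify=False triggers on each request. A narrower option, sketched below, is to disable only that specific warning class:

import urllib3

# Suppress only the warning caused by verify=False; other warnings still show.
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)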



Notes:

After the scraper has hit the same site repeatedly for a while, it starts failing with an error like:

HTTPConnectionPool(host=XX): Max retries exceeded with url: 'Failed to establish a new connection: [Errno 99] Cannot assign requested address'

The client opens a TCP connection to the server before each transfer, and to save that overhead requests defaults to keep-alive: connect once, transfer many times. After many requests, however, the used connections are not released back to the pool, so the client eventually cannot create a new one.

The Connection header defaults to keep-alive; setting Connection to close in the headers (as done above) resolves the error.
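Setting Connection: close works, but it pays a fresh TCP handshake on every request. As a minimal sketch of an alternative that keeps keep-alive while tolerating transient failures, all requests can be routed through a single requests.Session with automatic retries; the retry parameters here are illustrative, not tuned for this site:

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
# Retry transient failures up to 3 times, with exponential backoff between tries.
retries = Retry(total=3, backoff_factor=0.5, status_forcelist=[500, 502, 503, 504])
session.mount('https://', HTTPAdapter(max_retries=retries))
# Reusing one session means connections go back to a single shared pool.
resp = session.get('https://www.ibiquge.net/0_79/', headers=headers, timeout=10, verify=False)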

References:

https://www.jianshu.com/p/6edb5b987de7

https://www.w3school.com.cn/xpath/xpath_syntax.asp
