[Python] Scraping exercise: a Dangdang category page
君哥
2023-12-22 17:17:12
import requests
from lxml import etree
import csv
import time
import random
import os

os.makedirs('dangdang', exist_ok=True)
writer = csv.writer(open('dangdang/dangdang.csv', 'w', newline='', encoding='utf-8-sig'))
writer.writerow(['图书名称', '上架时间', '出版社', '价格'])

# Pages 1-9 of the category listing
allUrl = ['https://category.dangdang.com/pg{}-cp01.54.92.02.00.00.html'.format(i) for i in range(1, 10)]
count = 0
for url in allUrl:
    count += 1
    print(url)
    response = requests.get(url)
    response.encoding = 'GB2312'  # the page declares GB2312 encoding
    html = etree.HTML(response.text)  # note: etree.HTML, not etree.html
    allli = html.xpath('//*[@id="component_59"]/li')
    print('第{}页开始采集'.format(count))
    for li in allli:
        book_name = li.xpath('./a/@title')[0]
        book_time = li.xpath('./p[5]/span[2]/text()')
        book_pub = li.xpath('./p[5]/span[3]/a/text()')
        book_price = li.xpath('./p[3]/span[1]/text()')
        # Each xpath() call returns a list; fall back to a placeholder when it is empty
        if book_time:
            book_time = book_time[0].replace('/', '')
        else:
            book_time = '无'
        if book_pub:
            book_pub = book_pub[0]
        else:
            book_pub = '暂无'
        if book_price:
            book_price = book_price[0].strip('¥')
        else:
            book_price = 0
        rowInfo = (book_name, book_time, book_pub, book_price)
        print(rowInfo)
        writer.writerow(rowInfo)
    print('第{}页采集完成!'.format(count))
    time.sleep(random.randint(3, 10))  # random pause between pages to avoid hammering the site
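The script repeats the same pattern three times: take the first XPath result if the list is non-empty, otherwise use a placeholder. That pattern can be factored into a tiny helper. This is a minimal sketch, not part of the original post; the function name `first_or` is my own.

```python
def first_or(nodes, default):
    """Return the first item of an XPath result list, or a default when empty."""
    return nodes[0] if nodes else default

# Hypothetical usage inside the loop above:
# book_time = first_or(li.xpath('./p[5]/span[2]/text()'), '无').replace('/', '')
# book_pub = first_or(li.xpath('./p[5]/span[3]/a/text()'), '暂无')
# book_price = first_or(li.xpath('./p[3]/span[1]/text()'), '0').strip('¥')
```

This keeps each field extraction on one line and makes it harder to forget the empty-list case when new fields are added.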
Reading your article, I could almost feel the beauty in everyday life.
Your talent is impressive and admirable.