Scraping Indiegogo crowdfunding product info with Python, selenium, and etree: saving the data as CSV and JSON, plus full-page screenshots

This article shows how to use Python's selenium and etree (lxml) libraries to scrape product information from the Indiegogo crowdfunding platform and save the data in CSV and JSON formats. It also demonstrates how to take a full-page screenshot. Note: this tutorial is for technical exchange only and should not be used for commercial purposes.


The source code for fetching Indiegogo campaign product information with Python, selenium, and etree, saving the scraped data in CSV and JSON format, and taking full-page screenshots is as follows:

import csv
import time
import requests
from random import randrange
from fake_useragent import UserAgent
from lxml import html
from selenium import webdriver
import json

import os.path
import multiprocessing as mp
from selenium.webdriver.chrome.options import Options


class getProduct():

    etree = html.etree
    headers = { "authority": "www.indiegogo.com",
"method": "POST",
"path": "/private_api/discover",
"scheme": "https",
"accept": "application/json, text/plain, */*",
"accept-encoding": "gzip, deflate, br",
"accept-language": "zh-CN,zh;q=0.9",
"content-length": '174',
"content-type": "application/json;charset=UTF-8",
"cookie": 'xxxx',
"origin": "https://www.indiegogo.com",
"referer": "https://www.indiegogo.com/explore/tech-innovation?project_type=campaign&project_timing=all&sort=trending",
"sec-fetch-mode": "cors",
"sec-fetch-site": "same-origin",
"user-agent": "Mozilla/5.0 xxxx",
"x-csrf-token": "xxxxx",
"x-locale": "en"}
    driver = webdriver.Chrome(r'x:\xxxxxx\chromedriver.exe')
    driver.implicitly_wait(10)
    info = []
    def __init__(self):
        # headers and driver are shared class attributes; keep only
        # per-instance state here
        self.info = []

    def get_category(self):
        self.driver.get('https://www.indiegogo.com/explore/tech-innovation?project_type=campaign&project_timing=all&sort=trending')
        time.sleep(3)
        category = self.driver.find_elements_by_xpath('.//li[@class = "ng-scope"]')
        for one in range(len(category)):
            category[one] = category[one].text
        return category

    def webshot(self,url,project_id):
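        # Long-screenshot helper: scroll the page in 500px steps so
        # lazy-loaded content renders, then resize the window to the full
        # document height before capturing a single PNG.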

        options = webdriver.ChromeOptions()
        options.add_argument('--headless')
        options.add_argument('--disable-gpu')
        driver = webdriver.Chrome(options=options)
        driver.maximize_window()
        # JS snippet that returns the rendered page height
        js_height = "return document.body.clientHeight"

        try:
            driver.get(url)
            time.sleep(2)
            driver.maximize_window()
            #  Expand the "Continue Reading" section so the full description loads
            ContinueReading_btn = driver.find_elements_by_xpath(
                '//button[@class="buttonSecondary t-align--center restrictMinWidth buttonSecondary--gogenta buttonSecondary--medium"]')
            if ContinueReading_btn:
                ContinueReading_btn[0].click()
            time.sleep(3)
            k = 1
            height = driver.execute_script(js_height)
            while True:
                if k * 500 < height:
                    js_move = "window.scrollTo(0,{})".format(k * 500)
                    print(js_move)
                    driver.execute_script(js_move)
                    time.sleep(0.2)
                    height = driver.execute_script(js_height)
                    k += 1
                else:
                    break
            scroll_width = driver.execute_script('return document.body.parentNode.scrollWidth')
            scroll_height = driver.execute_script('return document.body.parentNode.scrollHeight')
            driver.set_window_size(scroll_width, scroll_height)

            driver.get_screenshot_as_file(
                r'D:\project\20191101test\img\{}.png'.format(project_id))
            # print("Process {} get one pic !!!".format(os.getpid()))
            print("截图成功:{}".format(project_id))
            time.sleep(0.1)
        except Exception as e:
            print(project_id, e)




    def send_request(self, page):      # fetch one page of product listings
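        # POSTs to Indiegogo's private "discover" endpoint; the JSON body
        # mirrors the filters on the explore page (category, sort, paging)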
        url = 'https://www.indiegogo.com/private_api/discover'
        # time.sleep(2)
        data = {"sort":"most_funded",
                "category_main": "Audio",
                "category_top_level": "Tech & Innovation",
                "project_timing": "all",
                "project_type": "campaign",
                "page_num": page,
                "per_page": 1,
                "q": "",
                "tags": []}
        header = {"Content-Type":"application/json;charset=UTF-8"}
        data = json.dumps(data)
        response = requests.post(url, data=data, headers=header)
        result = response.json()
        response.close()
        discoverables = result["response"]
        discoverables_values = discoverables["discoverables"]

        return discoverables_values
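The listing above breaks off before the CSV/JSON export promised in the title. Below is a minimal sketch of that missing step, assuming `self.info` has been filled with one dict per product; the file names and field handling are illustrative, not Indiegogo's actual schema:

```python
import csv
import json

def save_results(info, csv_path='products.csv', json_path='products.json'):
    """Write the scraped records to CSV and JSON side by side."""
    if not info:
        return
    # CSV: use the keys of the first record as the header row,
    # ignoring any extra fields in later records
    with open(csv_path, 'w', newline='', encoding='utf-8-sig') as f:
        writer = csv.DictWriter(f, fieldnames=list(info[0].keys()),
                                extrasaction='ignore')
        writer.writeheader()
        writer.writerows(info)
    # JSON: keep non-ASCII text (project titles, etc.) readable
    with open(json_path, 'w', encoding='utf-8') as f:
        json.dump(info, f, ensure_ascii=False, indent=4)
```

`utf-8-sig` writes a BOM so Excel opens the CSV with the correct encoding.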
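For completeness, here is one hedged way to drive the pieces end to end. The `clickthrough_url` and `id` fields are assumptions about what the discover API returns for each product and should be verified against a live response:

```python
if __name__ == '__main__':
    spider = getProduct()
    for page in range(1, 4):  # first three result pages
        for item in spider.send_request(page):
            spider.info.append(item)
            # assumed fields: verify against the actual API payload
            url = 'https://www.indiegogo.com' + item.get('clickthrough_url', '')
            spider.webshot(url, item.get('id'))  # full-page screenshot
    save_results(spider.info)
```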