

Scraping Taobao with Python, Just for Fun

 Yy3318q 2021-12-07

Copyright notice: This is an original article by the author, licensed under the CC 4.0 BY-SA agreement. Please include a link to the original source and this notice when reposting.

Original link:

blog.csdn.net/zhiguigu/article/details/120061978

Preface

Here's how it happened: a while back I took on a job for a paying client. He wanted to open a Taobao shop selling small-fish snacks and do some analysis of the products currently on the market. The statistics could have been compiled by hand, since all of this information is publicly displayed, but doing it manually is tedious, so he asked me to help out.


I. Project Requirements

The specific requirements were as follows:

1. Search Taobao for "小魚(yú)零食" (small-fish snacks), and for every product in the first 10 pages of results, record the sales volume and price, then count the products in each bracket of a price-range table he had drawn up:

[Figure: price-range table — <10, 10~30, 30~50, 50~70, 70~90, 90~110, 110~130, 130~150, 150~170, 170~200 yuan]

2. Across those 10 pages of results, where in the country are the sellers located?

3. Among the products on those 10 pages, what do users comment on the most?

4. From the search results, find the names and links of the 10 stores with the highest sales.

None of these requirements is hard to implement. Let's look at the finished result first.

II. Preview of the Results

After collecting the data I analyzed it and turned it into a bar chart; hovering the mouse over a bar shows the exact product count.

[Figure: bar chart of product counts per price bracket]

Products in the 10~30 yuan bracket are by far the most numerous, with counts falling off in the higher brackets, so most products are evidently positioned for the low end of the market.

Next, the distribution of sellers across the country:

[Figure: map of seller locations across China]

As you can see, sellers are concentrated along the coast and the middle and lower reaches of the Yangtze, with the coastal regions densest of all.

Now let's see what users actually write in the reviews under these products:

[Figure: word cloud of review keywords]

The larger the word, the more often it appears. Taste, packaging quality, portion size, and shelf life are the aspects users mention most, so a product listing can address these points head-on and answer the questions most buyers actually care about.

And finally, the top 10 stores by sales, with their links.

[Figure: top-10 stores by sales, with store links]

After getting the data and analyzing it, I also wondered: if I were the one opening this shop, what would I take away from it? Maybe price is the angle to attack from, maybe the seller's location offers room to differentiate, or maybe I'd put the user at the center and market from the outside in.

The deeper I thought, the more angles there seemed to be. But I'm a layman when it comes to fish snacks, so I'll stop there.

III. Crawler Source Code

The source is split across several files and is fairly long, so I won't walk through it piece by piece. Anyone who knows crawlers will understand it after a few readings; if you don't know crawlers yet, no amount of explanation will clear the fog, but it will all make sense once you've learned the basics and come back.

import csv
import os
import random  # used by get_the_top_10 below
import time

import wordcloud
from selenium import webdriver
from selenium.webdriver.common.by import By

def tongji():
    # Tally how many products fall into each price bracket.
    prices = []
    with open('前十頁(yè)銷量和金額.csv', 'r', encoding='utf-8', newline='') as f:
        fieldnames = ['價(jià)格', '銷量', '店鋪位置']
        reader = csv.DictReader(f, fieldnames=fieldnames)
        for index, i in enumerate(reader):
            if index != 0:  # skip the header row
                price = float(i['價(jià)格'].replace('¥', ''))
                prices.append(price)
    DATAS = {'<10': 0, '10~30': 0, '30~50': 0, '50~70': 0, '70~90': 0,
             '90~110': 0, '110~130': 0, '130~150': 0, '150~170': 0,
             '170~200': 0}
    for price in prices:
        if price < 10:
            DATAS['<10'] += 1
        elif 10 <= price < 30:
            DATAS['10~30'] += 1
        elif 30 <= price < 50:
            DATAS['30~50'] += 1
        elif 50 <= price < 70:
            DATAS['50~70'] += 1
        elif 70 <= price < 90:
            DATAS['70~90'] += 1
        elif 90 <= price < 110:
            DATAS['90~110'] += 1
        elif 110 <= price < 130:
            DATAS['110~130'] += 1
        elif 130 <= price < 150:
            DATAS['130~150'] += 1
        elif 150 <= price < 170:
            DATAS['150~170'] += 1
        elif 170 <= price < 200:
            DATAS['170~200'] += 1
    for k, v in DATAS.items():
        print(k, ':', v)
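As an aside, the ladder of elif range checks in tongji() can be collapsed with the standard library's bisect module; here is a minimal, behavior-equivalent sketch, with the labels and boundaries copied from the DATAS dict above:

import bisect
from collections import Counter

# Bracket upper bounds and their labels, matching the DATAS keys above.
BOUNDS = [10, 30, 50, 70, 90, 110, 130, 150, 170, 200]
LABELS = ['<10', '10~30', '30~50', '50~70', '70~90',
          '90~110', '110~130', '130~150', '150~170', '170~200']

def bucket_counts(prices):
    counts = Counter()
    for price in prices:
        i = bisect.bisect_right(BOUNDS, price)  # index of the first bound > price
        if i < len(LABELS):                     # prices >= 200 fall outside, as above
            counts[LABELS[i]] += 1
    return counts

bisect_right keeps every bracket half-open on the right, which matches the 10 <= price < 30 style of the original checks.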

def get_the_top_10(url):
    top_ten = []
    # Get a proxy IP (zhima1() is a proxy-pool helper defined in another source file)
    ip = zhima1()[2][random.randint(0, 399)]
    # Run a Quicker action (you can ignore this)
    os.system(r'"C:\Program Files\Quicker\QuickerStarter.exe" runaction:5e3abcd2-9271-47b6-8eaf-3e7c8f4935d8')
    options = webdriver.ChromeOptions()
    # Attach to a Chrome instance started with remote debugging enabled
    options.add_experimental_option('debuggerAddress', '127.0.0.1:9222')
    options.add_argument(f'--proxy-server={ip}')
    driver = webdriver.Chrome(options=options)
    # Implicit wait
    driver.implicitly_wait(3)
    # Open the page
    driver.get(url)
    # Click the link whose text contains '銷量' to sort by sales
    driver.find_element(By.PARTIAL_LINK_TEXT, '銷量').click()
    time.sleep(1)
    # Scroll to the bottom of the page
    driver.execute_script('window.scrollTo(0,document.body.scrollHeight)')
    time.sleep(1)
    # Locate the result list
    element = driver.find_element(By.ID, 'mainsrp-itemlist').find_element(
        By.XPATH, ".//div[@class='items']")
    items = element.find_elements(By.XPATH, ".//div[@data-category='auctions']")
    for index, item in enumerate(items):
        if index == 10:
            break
        # Extract the fields of each result
        price = item.find_element(
            By.XPATH, "./div[2]/div[1]/div[contains(@class,'price')]").text
        paid_num_data = item.find_element(
            By.XPATH, "./div[2]/div[1]/div[@class='deal-cnt']").text
        store_location = item.find_element(
            By.XPATH, "./div[2]/div[3]/div[@class='location']").text
        store_href = item.find_element(
            By.XPATH, "./div[2]/div[@class='row row-2 title']/a").get_attribute('href').strip()
        # Collect the record
        top_ten.append({'價(jià)格': price, '銷量': paid_num_data,
                        '店鋪位置': store_location, '店鋪鏈接': store_href})
    for i in top_ten:
        print(i)

def get_top_10_comments(url):
    # Truncate the output file
    with open('排名前十評(píng)價(jià).txt', 'w+', encoding='utf-8') as f:
        pass
    # ip = ipidea()[1]
    os.system(r'"C:\Program Files\Quicker\QuickerStarter.exe" runaction:5e3abcd2-9271-47b6-8eaf-3e7c8f4935d8')
    options = webdriver.ChromeOptions()
    options.add_experimental_option('debuggerAddress', '127.0.0.1:9222')
    # options.add_argument(f'--proxy-server={ip}')
    driver = webdriver.Chrome(options=options)
    driver.implicitly_wait(3)
    driver.get(url)
    driver.find_element(By.PARTIAL_LINK_TEXT, '銷量').click()
    time.sleep(1)
    element = driver.find_element(By.ID, 'mainsrp-itemlist').find_element(
        By.XPATH, ".//div[@class='items']")
    items = element.find_elements(By.XPATH, ".//div[@data-category='auctions']")
    original_handle = driver.current_window_handle
    item_hrefs = []
    # First collect the links of the top ten results
    for index, item in enumerate(items):
        if index == 10:
            break
        item_hrefs.append(item.find_element(
            By.XPATH, ".//div[2]/div[@class='row row-2 title']/a").get_attribute('href').strip())
    # Then scrape the reviews of each of the ten products
    for item_href in item_hrefs:
        # Open a new tab
        # item_href = 'https://item.taobao.com/item.htm?id=523351391646&ns=1&abbucket=11#detail'
        driver.execute_script(f'window.open("{item_href}")')
        # Switch to it
        handles = driver.window_handles
        driver.switch_to.window(handles[-1])
        # Scroll part-way down the page until the '評(píng)價(jià)' (reviews) tab is visible, then click it
        try:
            driver.find_element(By.PARTIAL_LINK_TEXT, '評(píng)價(jià)').click()
        except Exception as e1:
            try:
                x = driver.find_element(By.PARTIAL_LINK_TEXT, '評(píng)價(jià)').location_once_scrolled_into_view
                driver.find_element(By.PARTIAL_LINK_TEXT, '評(píng)價(jià)').click()
            except Exception as e2:
                try:
                    # Scroll down 100px first, in case the reviews tab is off-screen
                    driver.execute_script('var q=document.documentElement.scrollTop=100')
                    x = driver.find_element(By.PARTIAL_LINK_TEXT, '評(píng)價(jià)').location_once_scrolled_into_view
                except Exception as e3:
                    driver.find_element(By.XPATH,
                        '/html/body/div[6]/div/div[3]/div[2]/div/div[2]/ul/li[2]/a').click()
        time.sleep(1)
        try:
            # Tmall-style review layout
            trs = driver.find_elements(By.XPATH, "//div[@class='rate-grid']/table/tbody/tr")
            for index, tr in enumerate(trs):
                if index == 0:
                    comments = tr.find_element(By.XPATH, './td[1]/div[1]/div/div').text.strip()
                else:
                    try:
                        comments = tr.find_element(
                            By.XPATH, "./td[1]/div[1]/div[@class='tm-rate-fulltxt']").text.strip()
                    except Exception as e:
                        comments = tr.find_element(
                            By.XPATH,
                            "./td[1]/div[1]/div[@class='tm-rate-content']/div[@class='tm-rate-fulltxt']").text.strip()
                with open('排名前十評(píng)價(jià).txt', 'a+', encoding='utf-8') as f:
                    f.write(comments + '\n')
                print(comments)
        except Exception as e:
            # Taobao-style review layout
            lis = driver.find_elements(
                By.XPATH, "//div[@class='J_KgRate_MainReviews']/div[@class='tb-revbd']/ul/li")
            for li in lis:
                comments = li.find_element(By.XPATH, './div[2]/div/div[1]').text.strip()
                with open('排名前十評(píng)價(jià).txt', 'a+', encoding='utf-8') as f:
                    f.write(comments + '\n')
                print(comments)

def get_top_10_comments_wordcloud():
    file = '排名前十評(píng)價(jià).txt'
    f = open(file, encoding='utf-8')
    txt = f.read()
    f.close()
    # Create the word cloud object and set the output image's properties
    w = wordcloud.WordCloud(width=1000, height=700,
                            background_color='white', font_path='msyh.ttc')
    w.generate(txt)
    name = file.replace('.txt', '')
    w.to_file(name + '詞云.png')
    os.startfile(name + '詞云.png')
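One caveat with this step: WordCloud splits text on whitespace, which does little for unsegmented Chinese reviews. A common fix, not part of the original code, is to tokenize with jieba first and rejoin the tokens with spaces; here is a minimal sketch, assuming jieba is installed (pip install jieba):

import jieba
import wordcloud

def chinese_wordcloud(txt_path, out_path):
    with open(txt_path, encoding='utf-8') as f:
        text = f.read()
    # Segment the Chinese text into words, then rejoin with spaces
    # so WordCloud's whitespace tokenizer can count them properly.
    segmented = ' '.join(jieba.cut(text))
    w = wordcloud.WordCloud(width=1000, height=700,
                            background_color='white', font_path='msyh.ttc')
    w.generate(segmented)
    w.to_file(out_path)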

def get_10_pages_datas():
    # Write the CSV header (the BOM helps Excel open the file correctly)
    with open('前十頁(yè)銷量和金額.csv', 'w+', encoding='utf-8', newline='') as f:
        f.write('\ufeff')
        fieldnames = ['價(jià)格', '銷量', '店鋪位置']
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
    infos = []
    options = webdriver.ChromeOptions()
    options.add_experimental_option('debuggerAddress', '127.0.0.1:9222')
    # options.add_argument(f'--proxy-server={ip}')
    driver = webdriver.Chrome(options=options)
    driver.implicitly_wait(3)
    driver.get(url)  # `url` is the module-level search URL defined under __main__
    # driver.execute_script('window.scrollTo(0,document.body.scrollHeight)')
    # Page 1
    element = driver.find_element(By.ID, 'mainsrp-itemlist').find_element(
        By.XPATH, ".//div[@class='items']")
    items = element.find_elements(By.XPATH, ".//div[@data-category='auctions']")
    for index, item in enumerate(items):
        price = item.find_element(
            By.XPATH, "./div[2]/div[1]/div[contains(@class,'price')]").text
        paid_num_data = item.find_element(
            By.XPATH, "./div[2]/div[1]/div[@class='deal-cnt']").text
        store_location = item.find_element(
            By.XPATH, "./div[2]/div[3]/div[@class='location']").text
        infos.append({'價(jià)格': price, '銷量': paid_num_data, '店鋪位置': store_location})
    # Click through to the next page
    try:
        driver.find_element(By.PARTIAL_LINK_TEXT, '下一').click()
    except Exception as e:
        driver.execute_script('window.scrollTo(0,document.body.scrollHeight)')
        driver.find_element(By.PARTIAL_LINK_TEXT, '下一').click()
    # Pages 2 through 10
    for i in range(9):
        time.sleep(1)
        driver.execute_script('window.scrollTo(0,document.body.scrollHeight)')
        element = driver.find_element(By.ID, 'mainsrp-itemlist').find_element(
            By.XPATH, ".//div[@class='items']")
        items = element.find_elements(By.XPATH, ".//div[@data-category='auctions']")
        for index, item in enumerate(items):
            try:
                price = item.find_element(
                    By.XPATH, "./div[2]/div[1]/div[contains(@class,'price')]").text
            except Exception:
                # The element may not have rendered yet; scroll and retry
                time.sleep(1)
                driver.execute_script('window.scrollTo(0,document.body.scrollHeight)')
                price = item.find_element(
                    By.XPATH, "./div[2]/div[1]/div[contains(@class,'price')]").text
            paid_num_data = item.find_element(
                By.XPATH, "./div[2]/div[1]/div[@class='deal-cnt']").text
            store_location = item.find_element(
                By.XPATH, "./div[2]/div[3]/div[@class='location']").text
            infos.append({'價(jià)格': price, '銷量': paid_num_data, '店鋪位置': store_location})
        try:
            driver.find_element(By.PARTIAL_LINK_TEXT, '下一').click()
        except Exception as e:
            driver.execute_script('window.scrollTo(0,document.body.scrollHeight)')
            driver.find_element(By.PARTIAL_LINK_TEXT, '下一').click()
        # One page done
    for info in infos:
        print(info)
    # Append all collected rows to the CSV
    with open('前十頁(yè)銷量和金額.csv', 'a+', encoding='utf-8', newline='') as f:
        fieldnames = ['價(jià)格', '銷量', '店鋪位置']
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        for info in infos:
            writer.writerow(info)
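A side note on the fixed time.sleep pauses above: Selenium's explicit waits tend to be more reliable on slow connections, since they block only as long as needed. Here is a minimal sketch using WebDriverWait (a standard Selenium API), reusing the same locator as the code above:

from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def wait_for_items(driver, timeout=10):
    # Block until the result container is present in the DOM,
    # raising TimeoutException after `timeout` seconds.
    wait = WebDriverWait(driver, timeout)
    return wait.until(
        EC.presence_of_element_located((By.ID, 'mainsrp-itemlist')))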

if __name__ == '__main__':
    url = 'https://s.taobao.com/search?q=%E5%B0%8F%E9%B1%BC%E9%9B%B6%E9%A3%9F&imgfile=&commend=all&ssid=s5-e&search_type=item&sourceId=tb.index&spm=a21bo.21814703.201856-taobao-item.1&ie=utf8&initiative_id=tbindexz_20170306&bcoffset=4&ntoffset=4&p4ppushleft=2%2C48&s=0'
    # Run the steps one at a time; uncomment whichever you need.
    # get_10_pages_datas()
    # tongji()
    # get_the_top_10(url)
    # get_top_10_comments(url)
    get_top_10_comments_wordcloud()

With the code above we can collect all the data we want; the bar chart and the geographic distribution are then rendered with Bar and Geo, which I'll leave for you to explore.
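For reference, here is a minimal sketch of both charts, assuming the Bar and Geo mentioned above refer to the pyecharts chart classes (pip install pyecharts). The bucket counts and city counts below are illustrative placeholders, not the scraped data:

from pyecharts.charts import Bar, Geo
from pyecharts import options as opts
from pyecharts.globals import ChartType

# Bar chart of products per price bracket; in practice, feed in
# the DATAS dict computed by tongji() (counts here are made up).
datas = {'<10': 35, '10~30': 180, '30~50': 90, '50~70': 40, '70~90': 20,
         '90~110': 10, '110~130': 8, '130~150': 5, '150~170': 3, '170~200': 2}
bar = (
    Bar()
    .add_xaxis(list(datas.keys()))
    .add_yaxis('商品數(shù)量', list(datas.values()))
    .set_global_opts(title_opts=opts.TitleOpts(title='價(jià)格區(qū)間分布'))
)
bar.render('price_bar.html')  # hovering a bar shows the exact count

# Geo scatter of seller locations, aggregated from the 店鋪位置 column.
# City names must match pyecharts' built-in coordinate database
# (simplified Chinese); these sample pairs are made up.
city_counts = [('杭州', 25), ('上海', 18), ('广州', 15), ('武汉', 9)]
geo = (
    Geo()
    .add_schema(maptype='china')
    .add('商家數(shù)量', city_counts, type_=ChartType.EFFECT_SCATTER)
    .set_global_opts(title_opts=opts.TitleOpts(title='商家分布'))
)
geo.render('location_geo.html')

Both render() calls produce standalone HTML files with interactive tooltips, which matches the hover behavior described in the preview section.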
