
Scraping Maoyan Cinema Info

July 10, 2019 • Read: 13055 • Tutorials, Learning

My girlfriend's summer part-time job is compiling the addresses, phone numbers, and historical box-office figures of cinemas all over the country. I helped her with it for a while, and it was nothing but copy, paste, copy, paste.

So I opened my Sublime Text and got to work.

python: Maoyan cinemas

# -*- coding: utf-8 -*-
import requests
from bs4 import BeautifulSoup
import csv

page = 1    # how many list pages to scrape
file_name = "xx影院.csv"  # output file name

# cookie copied from a logged-in browser session
cookie = '_lxsdk_cuid=16ba3042655c8-021bb756ea3cee-e343166-1fa400-16ba3042655c8; uuid_n_v=v1; uuid=EAECE6509A6111E997236B31A64DDA6592C2B1FF33DE4DAE83060027C779F435; _csrf=a49697e895ed83cdc2865a4d32146d83244a096c83f4200ff4cc6a1013b231a5; _lx_utm=utm_source%3DBaidu%26utm_medium%3Dorganic; _lxsdk=EAECE6509A6111E997236B31A64DDA6592C2B1FF33DE4DAE83060027C779F435; ci=52; __mta=20362631.1561808096229.1562714645807.1562714663483.176; _lxsdk_s=16bd90b3f70-aa4-bd3-409%7C%7C26'

# Maoyan has anti-scraping measures; requests go through once browser-like headers and the cookie are set
headers = {
    'Content-Type': 'text/plain; charset=UTF-8',
    'Origin':'https://maoyan.com',
    'Referer':'https://maoyan.com/',
    'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36',
    'cookie':cookie
    }
 
# Fetch a page's HTML source
def get_one_page(url, headers):
    try:
        response = requests.get(url, headers=headers)
        if response.status_code == 200:
            return response.text
        return None
    except requests.RequestException:
        return None
 
# Extract each cinema's detail-page URL from the list page
def parse_one_page(html):
    soup = BeautifulSoup(html, 'lxml')
    url_list = soup.find_all('a', attrs={"class": "cinema-name"})
    for tmp in url_list:
        url = "https://maoyan.com" + tmp.get('href')
        html_info = get_one_page(url, headers)
        if html_info:  # skip detail pages that failed to load
            parse_one_pageinfo(html_info)

# Parse one cinema's details: name, address, phone
def parse_one_pageinfo(html):
    soup = BeautifulSoup(html, 'lxml')
    cinema_name = soup.find_all('h3', attrs={"class": "name text-ellipsis"})
    cinema_address = soup.find_all('div', attrs={"class": "address text-ellipsis"})
    cinema_phone = soup.find_all('div', attrs={"class": "telphone"})  # "telphone" matches the site's own class name
    print(cinema_name[0].string)
    print(cinema_address[0].string)
    print(cinema_phone[0].string)
    cinema_info = [cinema_name[0].string, cinema_address[0].string, cinema_phone[0].string]
    write_to_file_csv(cinema_info)

def write_to_file_csv(item):
    with open(file_name, 'a', encoding='utf_8_sig', newline='') as f:
        # 'a' opens the file in append mode
        # utf_8_sig keeps the exported CSV from showing garbled Chinese in Excel
        w = csv.writer(f)
        w.writerow(item)

def main(offset):
    url = "https://maoyan.com/cinemas?offset=" + str(offset)
    print(url)
    html = get_one_page(url, headers)
    if html:
        parse_one_page(html)
 
if __name__ == '__main__':
    # scrape each list page; the cinema list advances 12 entries per page
    for i in range(page):
        main(i * 12)
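
If Maoyan starts refusing requests partway through a run, pacing the detail-page fetches usually helps, and a header row makes the CSV easier to read later. Here is a minimal optional sketch, not part of the original script: the column labels and the 1-3 second delay are my own assumptions, not anything Maoyan documents.

import csv
import random
import time

file_name = "xx影院.csv"

# Write a header row once, before any rows are appended
# (the column labels are assumed, chosen to match the three scraped fields).
with open(file_name, 'w', encoding='utf_8_sig', newline='') as f:
    csv.writer(f).writerow(['name', 'address', 'phone'])

# Call this between detail-page requests, e.g. at the top of parse_one_page's loop.
def polite_pause():
    # 1-3 seconds is an assumed polite delay, not a documented Maoyan limit
    time.sleep(random.uniform(1, 3))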

The historical box office can only be viewed in the app, so that part couldn't be scraped, but this still saved a lot of time: a few hours of work done in about a minute.
One for the "girlfriend moved to tears" collection, hahaha.


Last Modified: July 12, 2019

9 Comments
  1. Python really is powerful @(awesome)

  2. With skills you can go anywhere without fear (including landing a girlfriend, heh)

  3. Not bad, not bad. I've played with scrapers too; it's pretty fun work~

    1. Bin Bin

      @小彦 Watching the data march neatly into my pocket, feels great #(blush)

    2. @Bin I rigged mine up for auto-filling forms, downloading and uploading images, and publishing news posts; lifting news articles in one click @(snicker)

  4. Awesome. I taught myself for a while too and can write some simple scraping code; I've collected data from quite a few sites, a few million records!!

    1. Bin Bin

      @格子老师 I only know a little; I learn whatever the task needs, since I don't use it often @(lol)

    2. @Bin I thought this stuff really was good, studied it for a few days, then gave up; I haven't touched it in ages now

  5. Putting what you learn to use. Impressive!