Web Crawler Learning — The Basics

1. The Absolute Basics

  • What is server-side rendering? (Dead simple; the pattern never changes)

    • The server combines the data and the HTML and returns them to the browser as a single response

    Example:

    from urllib.request import urlopen

    url = "https://www.baidu.com/"
    resp = urlopen(url)

    with open("mybaidu.html", mode="w") as f:
    f.write(resp.read().decode('utf-8'))
    print("over")
    resp.close()
  • What is client-side rendering?

    • The first request returns only an HTML skeleton; a second request fetches the data, which is then rendered on the page
    • The data cannot be seen in the page source (a small sketch follows below)
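
    A tiny sketch of the practical difference (the /api/ path below is hypothetical; the real one is found in the browser's Network panel):

    import requests

    # For a client-rendered page, the HTML skeleton contains no data
    html = requests.get("https://example.com/list").text

    # The data comes from a separate JSON request (hypothetical endpoint)
    data = requests.get("https://example.com/api/list?page=1").json()
    print(data)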
  • User-Agent: identifies the client that sent the request (i.e. what the request was sent with)

    • Example 1:
    import requests
    role = input("Enter your role: ")
    url = f"https://sogou.com/web?query={role}"
    headers = {
    "User-Agent": "Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Mobile Safari/537.36"
    }
    resp = requests.get(url, headers=headers)

    with open("sogou1.html", mode="w") as f:
    f.write(resp.text)
    print(resp.text)
    print("over")
    resp.close()
    • Example 2:
    import requests

    url = "https://fanyi.baidu.com/sug"

    s = input("Enter the word to translate: ")
    data = {
        "kw": s
    }

    # The data being sent must be put into a dict and passed via the data parameter
    resp = requests.post(url, data=data)
    # The server's response converted to JSON
    print(resp.json())
    resp.close()
    • Example 3:
    import requests

    url = "https://movie.douban.com/j/chart/top_list"


    headers = {
        "User-Agent": "Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) "
                      "Chrome/123.0.0.0 Mobile Safari/537.36"
    }
    for i in range(11):
        params = {
            "type": '24',
            "interval_id": "100:90",
            "action": "",
            "start": 20 * i,
            "limit": 20,
        }
        resp = requests.get(url, params=params, headers=headers)
        print(resp.json())
        resp.close()
  • Referer: hotlink protection **(which page did this request come from? An anti-scraping check)**

  • Cookie: local string data **(user login state, anti-scraping tokens)** — see the sketch below
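
  A minimal sketch of sending both headers with requests (the URL and values here are placeholders; every site checks its own):

    import requests

    headers = {
        "User-Agent": "Mozilla/5.0 ...",        # what is sending the request
        "Referer": "https://example.com/list",  # which page the request "came from"
        "Cookie": "session=xxxx",               # login state / anti-scraping token
    }
    resp = requests.get("https://example.com/api/data", headers=headers)
    print(resp.status_code)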

2. Data Parsing Overview

  • Parsing with re (regular expressions)

    • Common metacharacters

      .     any character except a newline
      \w    a letter, digit, or underscore
      \s    any whitespace character
      \d    a digit
      \n    a newline
      \t    a tab

      ^     the start of the string
      $     the end of the string

      \W    anything that is NOT a letter, digit, or underscore
      \D    a non-digit
      \S    a non-whitespace character

      a|b   character a or character b
      ()    the expression inside the parentheses, which also forms a group
      [...] any character in the character class
      [^..] any character NOT in the character class
    • Quantifiers (a quick check follows the table)

      *      repeat 0 or more times
      +      repeat 1 or more times
      ?      repeat 0 or 1 time
      {n}    repeat exactly n times
      {n,}   repeat n or more times
      {n, m} repeat n to m times
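
    A one-line check of how the quantifiers behave:

      import re

      print(re.findall(r"\d{3,4}", "010-88888888 ext 1234"))  # ['010', '8888', '8888', '1234']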
    • Greedy vs. lazy matching (a runnable check follows the examples)

      .*    greedy matching

      .*?   lazy matching


      // Subject: 玩吃鸡游戏,晚上一起上游戏,干嘛呢?打游戏啊.
      // Pattern: 玩吃鸡.*?游戏
      // Match:   玩吃鸡游戏

      // Subject: 玩吃鸡游戏,晚上一起上游戏,干嘛呢?打游戏啊.
      // Pattern: 玩吃鸡.*游戏
      // Match:   玩吃鸡游戏,晚上一起上游戏,干嘛呢?打游戏

      // Subject: dsfsadfsadfsdafxsdfsadfsdafx
      // Pattern: .*?x
      // Matches: dsfsadfsadfsdafx
      //          sdfsadfsdafx
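
    A quick runnable confirmation of the greedy vs. lazy behaviour described above:

      import re

      s = "dsfsadfsadfsdafxsdfsadfsdafx"
      print(re.findall(r".*?x", s))  # lazy: ['dsfsadfsadfsdafx', 'sdfsadfsdafx']
      print(re.findall(r".*x", s))   # greedy: ['dsfsadfsadfsdafxsdfsadfsdafx']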
    • Example 1

      import re

      # findall: returns every substring that matches the pattern, as a list
      lst = re.findall(r"\d+", "我的电话号是13044923469,adfg123")
      print(lst)

      # finditer: matches the same content but returns an iterator; use group() to read each match
      it = re.finditer(r"\d+", "我的电话号是13044923469,adfg123")
      for i in it:
          print(i.group())

      # search scans the whole string but stops at the first match
      s = re.search(r"\d+", "我的电话号是13044923469,adfg123")
      print(s.group())

      # match only matches from the very beginning of the string
      m = re.match(r"\d+", "13044923469,adfg123")
      print(m.group())

      # Precompile the regular expression
      obj = re.compile(r"\d+")

      it = obj.finditer("我的电话号是13044923469,adfg123")
      for i in it:
          print(i.group())

      import re
      s = """
      <div class='container1'><span id='1'>Mogullzr</span></div>
      <div class='container2'><span id='2'>Mogullzr1</span></div>
      <div class='container3'><span id='3'>Mogullzr2</span></div>
      <div class='container4'><span id='4'>Mogullzr3</span></div>
      <div class='container5'><span id='5'>Mogullzr4</span></div>
      """

      # re.S lets . also match \n
      obj = re.compile(r"<div class='(?P<wahaha>.*?)'><span id='\d+'>.*?</span></div>", re.S)

      result = obj.finditer(s)
      for it in result:
          print(it.group("wahaha"))
    • Example 2

      import requests
      import re
      import csv

      url = "https://movie.douban.com/top250"
      headers = {
          'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Mobile Safari/537.36'
      }

      response = requests.get(url, headers=headers)
      text = response.text


      obj = re.compile(r'<li>.*?<div class="item">.*?<span class="title">(?P<name>.*?)</span>.*?' +
                       r'<p class="">.*?<br>(?P<year>.*?)&nbsp.*?<span ' +
                       r'class="rating_num" property="v:average">(?P<rating>.*?)</span>.*?' +
                       r'<span>(?P<num>.*?)人评价</span>.*?', re.S)

      f = open("data.csv", mode="w", encoding="utf-8")
      csvwriter = csv.writer(f)

      it = obj.finditer(text)
      for k in it:
          dic = k.groupdict()
          dic['year'] = dic['year'].strip()
          csvwriter.writerow(dic.values())
          # print(k.group("name"))
          # print(k.group("year").strip())
          # print(k.group("rating"))
          # print(k.group("num"))
      print("over")
      response.close()
    • Example 3

      import requests
      import re
      import csv

      url = "https://www.dytt89.com/"

      headers = {
      'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Mobile Safari/537.36',
      'Cookie': 'Hm_lvt_93b4a7c2e07353c3853ac17a86d4c8a4=1716901679; Hm_lvt_8e745928b4c636da693d2c43470f5413=1716901679; __51uvsct__KSHU1VNqce379XHB=1; __51vcke__KSHU1VNqce379XHB=6d467789-01e9-57fe-ad7a-98964cdcabfd; __51vuft__KSHU1VNqce379XHB=1716954875531; Hm_lvt_0113b461c3b631f7a568630be1134d3d=1716901679,1716954876; Hm_lvt_93b4a7c2e07353c3853ac17a86d4c8a4=1716901679; Hm_lvt_8e745928b4c636da693d2c43470f5413=1716901679; guardok=Ok8PhPa4NHVXOz5NIh33WnPqdv7EgfglYnT5KNNFfy99UMcmR/yaPGyNgSSVDmcQfqhUmf3PUU+W73PzPlgXaA==; __vtins__KSHU1VNqce379XHB=%7B%22sid%22%3A%20%2259608cf2-bbb9-5ddd-9e7e-7a518e8e8ea7%22%2C%20%22vd%22%3A%205%2C%20%22stt%22%3A%20200939%2C%20%22dr%22%3A%205864%2C%20%22expires%22%3A%201716956876461%2C%20%22ct%22%3A%201716955076461%7D; Hm_lpvt_93b4a7c2e07353c3853ac17a86d4c8a4=1716955077; Hm_lpvt_8e745928b4c636da693d2c43470f5413=1716955077; Hm_lpvt_0113b461c3b631f7a568630be1134d3d=1716955077'
      }
      resp = requests.get(url, verify=False, headers=headers)  # verify=False turns off TLS certificate verification
      resp.encoding = 'gb2312'
      text = resp.text

      print(resp.text)

      obj1 = re.compile(r'2024必看热片.*?<ul>(?P<ul>.*?)</ul>', re.S)
      obj2 = re.compile(r"<a href='(?P<href>.*?)'", re.S)

      child_href_list = []
      result1 = obj1.finditer(text)
      for k1 in result1:
          ul = k1.group("ul")
          result2 = obj2.finditer(ul)
          for k2 in result2:
              child_href = url + k2.group("href")
              print(child_href)
              child_href_list.append(child_href)

      f = open("data2.csv", mode="w")
      csvwriter = csv.writer(f)

      for child_href in child_href_list:
          res = requests.get(child_href, verify=False, headers=headers)
          res.encoding = 'gb2312'
          obj3 = re.compile(r'◎片  名 (?P<name>.*?)<br />.*?<li>.*?<td.*?<a href="(?P<Href>.*?)"', re.S)
          result3 = obj3.search(res.text)
          dic = result3.groupdict()
          csvwriter.writerow(dic.values())
          # print(result3.group("name"))
          # print(result3.group("Href"))
  • Parsing with bs4 (BeautifulSoup)

    # This code no longer works; the site has probably changed and needs to be re-analyzed
    import requests
    from bs4 import BeautifulSoup
    import time

    url = "https://www.umei.cc/bizhitupian/weimeibizhi/"

    resp = requests.get(url)
    resp.encoding = 'utf-8'
    print(resp.text)

    # First scrape the page source, then feed it to BS4
    main_page = BeautifulSoup(resp.text, 'html.parser')
    alist = main_page.find("div", class_="TypeList").find_all("a")
    # print(alist)

    # After collecting the sub-page links, request each sub-page to get its source,
    # then locate the download path of every image.
    # With the path in hand, write the content to a file (bytes, since these are images),
    # choosing the file name and download directory.
    for a in alist:
        # print(a.get('href'))
        href = a.get("href")
        # Fetch the sub-page source
        child_page_resp = requests.get(href)
        child_page_resp.encoding = 'utf-8'
        child_page_text = child_page_resp.text
        # Pull the image download path out of the sub-page
        child_page = BeautifulSoup(child_page_text, 'html.parser')
        p = child_page.find("p", align="center")
        img = p.find("img")
        src = img.get("src")
        # print(src)
        img_resp = requests.get(src)
        # img_resp.content is the raw bytes of the image
        img_name = src.split("/")[-1]
        with open("img/" + img_name, 'wb') as f:
            f.write(img_resp.content)  # write the image bytes to the file
        print("over!!!", img_name)
        time.sleep(1)
    print("All over")
  • Parsing with XPath

    • Example

      from lxml import etree

      xml = """
      <book>
      <id>1</id>
      <name>Mogullzr</name>
      <nick>19</nick>
      <nick class="ok">20</nick>
      <author>
      <nick id="10086">Axliu</nick>
      <div>
      <nick>傻逼</nick>
      </div>
      <span>
      <nick>傻逼</nick>
      </span>
      </author>
      <ul>
      <li><a href="#">123</a></li>
      <li><a href="#">456</a></li>
      <li><a href="#">789</a></li>
      </ul>
      </book>
      """

      tree = etree.XML(xml)
      result1 = tree.xpath('/book/id/text()')
      print(result1)  # ['1']

      result2 = tree.xpath('/book/author//nick/text()')  # // searches all descendants of the current node
      print(result2)  # ['Axliu', '傻逼', '傻逼']

      result3 = tree.xpath('/book/author/*/nick/text()')  # * matches any tag here, i.e. a wildcard
      print(result3)  # ['傻逼', '傻逼']

      result4 = tree.xpath('/book/nick[1]/text()')  # [1] takes the first matching element
      print(result4)  # ['19'] -- the text inside the first <nick>

      result5 = tree.xpath('/book/nick[@class="ok"]/text()')  # the <nick> under <book> whose class="ok"
      print(result5)  # ['20']

      result6 = tree.xpath('/book/ul/li')

      for li in result6:
          text = li.xpath('./a/text()')  # keep searching downward from the <li> node itself; note the ./
          print(text)
          href = li.xpath('./a/@href')
          print(href)
          # ['123']
          # ['#']
          # ['456']
          # ['#']
          # ['789']
          # ['#']
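
    The same API also works on real HTML through etree.HTML; a minimal sketch on an inline snippet (not tied to any particular site):

      from lxml import etree

      html_text = """
      <html><body>
        <div class="item"><a href="/a">first</a></div>
        <div class="item"><a href="/b">second</a></div>
      </body></html>
      """

      # etree.HTML is more forgiving than etree.XML and is what you use for web pages
      html = etree.HTML(html_text)
      for a in html.xpath('//div[@class="item"]/a'):
          print(a.xpath('./text()')[0], a.xpath('./@href')[0])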

3. Advanced requests Overview

Simulating a user login

import requests

url = "https://passport.17k.com/ck/user/login"
sessions = requests.session()
headers = {
'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Mobile Safari/537.36',
'Cookie' :'GUID=02e82b18-cc2a-4ec8-9d9a-992926b5bdf0; Hm_lvt_9793f42b498361373512340937deb2a0=1716981730; sajssdk_2015_cross_new_user=1; c_channel=0; c_csc=web; sensorsdata2015jssdkcross=%7B%22distinct_id%22%3A%2202e82b18-cc2a-4ec8-9d9a-992926b5bdf0%22%2C%22%24device_id%22%3A%2218fc4160b6e1456-0a29134a747143-14462c6f-2073600-18fc4160b6f1bde%22%2C%22props%22%3A%7B%22%24latest_traffic_source_type%22%3A%22%E7%9B%B4%E6%8E%A5%E6%B5%81%E9%87%8F%22%2C%22%24latest_referrer%22%3A%22%22%2C%22%24latest_referrer_host%22%3A%22%22%2C%22%24latest_search_keyword%22%3A%22%E6%9C%AA%E5%8F%96%E5%88%B0%E5%80%BC_%E7%9B%B4%E6%8E%A5%E6%89%93%E5%BC%80%22%7D%2C%22first_id%22%3A%2202e82b18-cc2a-4ec8-9d9a-992926b5bdf0%22%7D; acw_tc=276077e017169848640356266edaeedbe478506b8cfddf197c2dabab4a92ce; Hm_lpvt_9793f42b498361373512340937deb2a0=1716984932; ssxmod_itna=7qGOY5D5AKBKKBPGKiQi8w+bRq7KDOitUlUDBw8x4iNDnD8x7YDvGGpbYGDoDHYmA2oGIAY2wmT5tihYx4vmWBWn1bGDGoDEx0=DcxDNDGet2DDs=DLKYD4+KGwR4AA2Di2=GIDieDFKqDEAwKDz4Gm4eNDmqGRS4Gy9822R3dDjbX2WxDHYGuKDYPDdeDBzCU2mGux4ZmBGebDtqD98CjqM4G2BeXZBdWzW4ej92BfU7DK8fhQQfhcRGwq7q+1jGDZ9oxYFiD==; ssxmod_itna2=7qGOY5D5AKBKKBPGKiQi8w+bRq7KDOitUlDAqA=4D/QxKT3rLOpD7P0Pok30qM7uFrxvtz/jcGc6C=Bg2q0K5AC77rvWGImb=zeDOdW4G5FmB3P0+xr/jHQhP=DvAn=mNGUzDLxijpWwDqDOA4oAd3miD===; acw_sc__v2=66571cca8f2752a9d481a8325fbb0b7b9df37b67; tfstk=f7NjT2aA12mzLBzSjxQrNMplhBl_Go1FGFgT-Pd2WjhYWfaTScoqQcySw2EmBSl4uu__2k4qQjF_ECa3SS7m_Zc0ofcOYM5E1r4msGqZS-Fjw333yVKxDZzGpxlOYM5y-8jeRfewmhwnNzno5qKxWlH-200ZWfnxB3p-J0nt6fEtw43skI39HxC-yFqlQ45iqrsZ95SpyTQzz0O9OFgmHcCnACdTNj4SvrpM6CFSlxNZqcosVfwT-RGtNZCTokooBWERaIG_yVZIRotRzjU3LuiqONJnjJzTL2E1LK4-w2DLLx-XOfPbySuufep4oAezivVV2IGLKrl4BuCWGkIzyBoCa5v6ImdtPD75Pdva5AFVkAAzyIDxr48fPa9UPx3oPD75PdviH40yla_WLz1..'
}

data = {
"loginName": "13044923469",
"password": "142398749lzr"
}

# 1. Log in
# resp = sessions.post(url, data=data, headers=headers)
# resp.encoding = "utf-8"

# print(resp.text)
# print(resp.cookies)  # inspect the cookies

sessions.post(url, data=data, headers=headers)

# 2. Fetch the bookshelf data
# The session used above already carries the login cookies
resp = sessions.get("https://useimport requests
import os

os.environ['NO_PROXY'] = "rzcode.top"
proxies = {
"https": "https://117.42.94.41:18027"
}
headers = {
'Connection': 'close',
'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Mobile Safari/537.36',
}
resp = requests.get("https://www.rzcode.top", headers=headers, verify=False, proxies=proxies)

print(resp.text)r.17k.com/ck/author2/shelf?page=1&appKey=2406394919")
print(resp.json())
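
requests.session() is what makes step 2 work: cookies set by one response are sent automatically on the next request. A small illustration against httpbin.org (a public echo service used here purely for demonstration, unrelated to 17k):

import requests

s = requests.session()
# The first request sets a cookie on the session (httpbin redirects to /cookies)
s.get("https://httpbin.org/cookies/set/token/abc123")
# The second request automatically carries that cookie along
print(s.get("https://httpbin.org/cookies").json())  # {'cookies': {'token': 'abc123'}}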

Scraping a site protected by a Referer hotlink check (the video here also needs a couple of other small tricks, but not many)

import requests

url = "https://pearvideo.com/video_1794442"
contId = url.split("_")[1]

videoStatusUrl = f"https://pearvideo.com/videoStatus.jsp?contId={contId}&mrd=0.49695927938279705"

headers = {
    'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Mobile Safari/537.36',
    # Basic anti-scraping measure: the anti-hotlink check. The site wants to know which page
    # you were on when you issued this GET/POST request.
    'Referer': 'https://pearvideo.com/video_1794442'
}
resp = requests.get(videoStatusUrl, headers=headers)
dic = resp.json()
srcUrl = dic['videoInfo']['videos']['srcUrl']
TimeSystem = dic['systemTime']
srcUrl = srcUrl.replace(TimeSystem, f"cont-{contId}")

print(srcUrl)
with open("a.mp4", mode="wb") as f:
    f.write(requests.get(srcUrl).content)
print("over")
# "https://video.pearvideo.com/mp4/short/20240524/1716986831992-71106895-hd.mp4"
# "https://video.pearvideo.com/mp4/short/20240524/cont-1794442-71106895-hd.mp4"
# https://video.pearvideo.com/mp4/short/20240524/cont-1794442-71106895-hd.mp4

Proxies

# Idea: send the request through a third-party machine. The proxy below has probably stopped working by now.
import requests
import os

os.environ['NO_PROXY'] = "rzcode.top"
proxies = {
"https": "https://117.42.94.41:18027"
}
headers = {
'Connection': 'close',
'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Mobile Safari/537.36',
}
resp = requests.get("https://www.rzcode.top", headers=headers, verify=False, proxies=proxies)

print(resp.text)

A small exercise

import requests
# import time
import csv


f = open("acwing.csv", mode="w", encoding="utf-8")
csvwriter = csv.writer(f)

headers = {
'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Mobile Safari/537.36',
'Accept': 'application/json, text/javascript, */*; q=0.01',
}

cookie = {
'cookie': 'p_h5_upload_u=A96BFEED-6131-48AF-B174-DCC7040E4209; csrftoken=0Wq5jHb4tpugPvNXV3F25tdGyU7bYqzhZcWLQtHJ3Fv6UtfI6xaAWWE3dMI15u5b; sessionid=xl1rjxcyicecm508sarr6k4b7grs4rwb',
}
url = "https://www.acwing.com/problem/"
url_tmp = url
ac_total_count = 0
un_accept_count = 0
# resp = requests.get(url, headers=headers, cookies=cookie)
# ac_total_count = ac_total_count + str(resp.text).count("已通过这道题目")
# un_accept_count = un_accept_count + str(resp.text).count("尝试过,但未通过这道题目")

# Note: the two counted phrases stay in Chinese because they must match the page text exactly
for i in range(1, 115):
    url = url_tmp + str(i)
    print(url)
    resp = requests.get(url, headers=headers, cookies=cookie)
    k1 = str(resp.text).count("已通过这道题目")
    k2 = str(resp.text).count("尝试过,但未通过这道题目")
    ac_total_count = ac_total_count + k1
    un_accept_count = un_accept_count + k2
    row = [f"Page {i}: {k1} problems solved, {k2} attempted but not yet solved", ""]
    csvwriter.writerow(row)
    # time.sleep(1)

print("Total problems solved: " + str(ac_total_count))
print("Attempted but not solved: " + str(un_accept_count))
row = ["Total problems solved: " + str(ac_total_count), "Attempted but not solved: " + str(un_accept_count)]
csvwriter.writerow(row)

4. Basic Use of Multithreading / Multiprocessing

Multithreading

from threading import Thread


def func():
    for i in range(1000):
        print("func", i)


if __name__ == '__main__':
    t = Thread(target=func)
    t.start()
    for j in range(1000):
        print("main", j)


from threading import Thread


class MyThread(Thread):
    def run(self):
        for i in range(1000):
            print("MyThread", i)


if __name__ == '__main__':
    t = MyThread()
    t.start()
    for j in range(1000):
        print("Main", j)



from threading import Thread


def func(name):
    for i in range(1000):
        print(name, i)


if __name__ == '__main__':
    p1 = Thread(target=func, args=("Mogullzr", ))  # args must be a tuple
    p1.start()

    p2 = Thread(target=func, args=("Mogullzr123",))
    p2.start()

Multiprocessing

from multiprocessing import Process


def func(name):
    for i in range(1000):
        print(name, i)


if __name__ == '__main__':
    p = Process(target=func, args=("Mogullzr",))  # func takes a name argument, so it must be passed here
    p.start()
    for i in range(1000):
        print("Main", i)

Thread / process pools (a batch of threads is created up front; we simply submit tasks to the pool and let the pool handle the scheduling)

# A small example
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor


def fn(name):
    for i in range(1000):
        print(name, i)


if __name__ == '__main__':
    # Create the thread pool
    with ThreadPoolExecutor(max_workers=50) as t:
        for i in range(100):
            t.submit(fn, name=f"thread{i}")

    # Execution only continues once every task in the pool has finished
    print("123")

# Scraping one kind of data from a site (thread/process pool)

import requests
from lxml import etree
import csv
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor


headers = {
    'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Mobile Safari/537.36'
}


def download_page(url):
    resp = requests.get(url, headers=headers)
    html = etree.HTML(resp.text)
    local = html.xpath('/html/body/div[@class="container-fluid"]/div/main/div[@class="card mb-4 rounded-3 shadow-sm mt-4"]/div[@class="card-body"]' +
                       '/div[@class="fw-normal"]/strong/text()')[0].split(' ')[2]

    print(f"{local}.csv")
    f = open(f"{local}.csv", mode="w", encoding="utf-8")
    csvwriter = csv.writer(f)
    table = html.xpath(
        '/html/body/div[@class="container-fluid"]/div/main/div[@class="card mb-4 rounded-3 shadow-sm mt-4"]/div[@class="card-body"]' +
        '/div[@class="row py-3"]/div[@class="col-12 col-lg-7"]' +
        '/div[@class="table-responsive"]/table[@class="table table-striped table-hover"]')[0]
    trs = table.xpath('./tbody/tr')
    for tr in trs:
        txt = tr.xpath('./td/text()')
        # Save the row
        csvwriter.writerow(txt)
        # print(txt)
        # print(len(tr))


if __name__ == '__main__':
    with ThreadPoolExecutor(max_workers=50) as t:  # 50 threads at once: very fast, but very likely to trip anti-scraping
        for i in range(1, 15):
            k = i * 100
            if len(str(k)) == 3:
                url = f'https://lishidata.com/weather/%E5%8C%97%E4%BA%AC/%E5%8C%97%E4%BA%AC/101010{k}.html'
            else:
                url = f'https://lishidata.com/weather/%E5%8C%97%E4%BA%AC/%E5%8C%97%E4%BA%AC/10101{k}.html'
            t.submit(download_page, url)  # pass the url itself, not a tuple
            print(url, "submitted")
    print("All downloads finished!!!!")

Coroutines

import asyncio
import aiohttp

# urls holds the image addresses we want to request
urls = [
    "https://www.umei.cc/d/file/20230906/3ff61c3ea61f07c98fb3afd8aff40bf8.jpeg",
    "https://www.umei.cc/d/file/20230906/bec2a6b127c47b8d6f53f36e1a875f07.jpeg",
    "https://www.umei.cc/d/file/20230906/4c2ecd1edbcb9764b6b3284cca8ee67a.jpg"
]


async def aiodownload(url):
    # Split on "/" and keep the last piece as the image file name
    name = url.rsplit("/", 1)[1]
    # aiohttp.ClientSession() <==> requests
    async with aiohttp.ClientSession() as session:
        # resp is the response to the request
        async with session.get(url) as resp:
            # The response is back; write it to a file
            with open(name, mode="wb") as f:
                f.write(await resp.content.read())  # reading the body is asynchronous, so await it

    print("over!!!!")
    # aiohttp.ClientSession()  # <==> requests
    # send the request
    # get the image bytes
    # save them to a file


async def main():
    tasks = []
    for url in urls:
        tasks.append(asyncio.create_task(aiodownload(url)))
    await asyncio.wait(tasks)


if __name__ == '__main__':
    asyncio.run(main())

Downloading a whole novel with coroutines

import requests
import asyncio
import aiohttp
import json
import aiofiles


# https://boxnovel.baidu.com/boxnovel/wiseapi/chapterList?bookid=4345105404&pageNum=1&order=asc
# https://appyd.baidu.com/nabook/detail/wap?fr=9&uid=&boxnovelTimeStampNow=1717154932329&data={doc_id:284b464e0efc700abb68a98271fe910ef12daeef}

async def aiodownload(cid, b_id, title):
    data = {
        "book_id": b_id,
        "cid": f"{b_id}|{cid}",
        "need_bookinfo": 1
    }
    # Serialize to JSON
    data = json.dumps(data)
    url = f"http://dushu.baidu.com/api/pc/getChapterContent?data={data}"

    # First make the HTTP request
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as resp:
            # Then load the response into dic
            dic = await resp.json()
            # Finally write the data to a file, also asynchronously
            async with aiofiles.open(f"./西游记/{title}.txt", mode="w", encoding="utf-8") as f:
                await f.write(dic['data']['novel']['content'])


async def getCatalog(url):
    resp = requests.get(url)
    dic = resp.json()
    tasks = []
    for item in dic['data']['novel']['items']:
        title = item['title']
        cid = item['cid']
        # Queue up the async tasks
        tasks.append(asyncio.create_task(aiodownload(cid, b_id, title)))

    await asyncio.wait(tasks)


if __name__ == '__main__':
    b_id = 4306063500
    url = 'http://dushu.baidu.com/api/pc/getCatalog?data={"book_id":"' + str(b_id) + '"}'
    asyncio.run(getCatalog(url))

Extracting a video file

import requests
import re

headers = {
    'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Mobile Safari/537.36'
}
if __name__ == '__main__':
    obj = re.compile(r"url: '(?P<url>.*?)',", re.S)  # used to extract the m3u8 address
    url = "https://www.91kanju.com/vod-play/54812-1-1.html"
    resp = requests.get(url)
    m3u8_url = obj.search(resp.text).group("url")
    # print(m3u8_url)
    resp.close()

    resp2 = requests.get(m3u8_url, headers=headers)
    with open("?.m3u8", mode="wb") as f:
        f.write(resp2.content)
    resp2.close()
    print("m3u8 downloaded")






import requests
# Parse the m3u8 file
n = 1
with open("?.m3u8", mode="r", encoding="utf-8") as f:
    for line in f:
        line = line.strip()  # strip spaces, blanks, newlines and the like
        if line.startswith("#"):
            continue  # skip lines that start with #
        # print(line)

        # Download this video segment
        resp3 = requests.get(line)
        ts = open(f"{n}.ts", mode="wb")  # use a different name so the m3u8 handle f is not clobbered
        ts.write(resp3.content)
        ts.close()
        n += 1
        print("finished one segment")

Extracting video, PLUS (the final boss of the basics!!!!!!!)

import requests
from bs4 import BeautifulSoup
import re
import asyncio
import aiohttp
import aiofiles
from Crypto.Cipher import AES
import os


def get_iframe_src(url):
    resp = requests.get(url)
    main_page = BeautifulSoup(resp.text, "html.parser")
    return main_page.find("iframe").get("src")


def get_first_m3u8_src(url):
    resp = requests.get(url)
    obj = re.compile(r'var main = "(?P<m3u8_url>.*?)"', re.S)
    m3u8_src = obj.search(resp.text).group("m3u8_url")
    return m3u8_src


def download_m3u8_file(url, name):
    res = requests.get(url)
    with open(name, mode="wb") as f:  # binary mode, so no encoding argument
        f.write(res.content)


async def download_ts(url, name, session):
    async with session.get(url) as resp:
        async with aiofiles.open(f"video2/{name}", mode="wb") as f:
            await f.write(await resp.content.read())  # write the downloaded bytes to the file
    print(f"{name} downloaded")


async def aio_download(up_url):
    tasks = []
    async with aiohttp.ClientSession() as session:  # share one session for every segment
        async with aiofiles.open("越狱_second_m3u8.txt", mode="r", encoding="utf-8") as f:
            async for line in f:
                if line.startswith("#"):
                    continue
                line = line.strip()  # drop useless spaces and newlines
                # Build the real ts URL
                ts_url = up_url + line
                task = asyncio.create_task(download_ts(ts_url, line, session))
                tasks.append(task)

            await asyncio.wait(tasks)


def get_key(url):
    resp = requests.get(url)
    return resp.text


# AES decryption
async def dec_ts(name, key):
    # the key comes back as text; PyCryptodome wants bytes
    aes = AES.new(key.encode("utf-8"), AES.MODE_CBC, iv=b"0000000000000000")
    async with aiofiles.open(f"video2/{name}", mode="rb") as f1, \
            aiofiles.open(f"video2/temp_{name}", mode="wb") as f2:
        bs = await f1.read()             # read the encrypted segment
        await f2.write(aes.decrypt(bs))  # write the decrypted bytes out
    print(f"{name} processed!!!!")


async def aio_dec(key):
    # Decrypt every segment
    tasks = []
    async with aiofiles.open("越狱_second_m3u8.txt", mode="r", encoding="utf-8") as f:
        async for line in f:
            if line.startswith("#"):
                continue
            line = line.strip()
            # Create the async tasks
            task = asyncio.create_task(dec_ts(line, key))
            tasks.append(task)
        await asyncio.wait(tasks)


def merge_ts():
    # windows: copy /b 1.ts+2.ts+3.ts xxx.mp4
    lst = []
    with open("越狱_second_m3u8.txt", mode="r", encoding="utf-8") as f:
        for line in f:
            if line.startswith("#"):
                continue
            line = line.strip()
            lst.append(f"video2/temp_{line}")
    s = "+".join(lst)
    os.system(f"copy /b {s} movie.mp4")
    print("Completely done!!!!!!!!!")


def main(url):
    # 1. Get the main page source and find the iframe's url
    iframe_src = get_iframe_src(url)

    # 2. Get the download address of the first-layer m3u8 file
    first_m3u8_src = get_first_m3u8_src(iframe_src)
    iframe_domain = first_m3u8_src.split("share")[0]
    first_m3u8_src = iframe_domain + first_m3u8_src

    # 3.1 Download the first-layer m3u8 file (the m3u8 address, not the page url)
    download_m3u8_file(first_m3u8_src, "越狱_first_m3u8.txt")

    # 3.2 Download the second-layer m3u8 file
    with open("越狱_first_m3u8.txt", mode="r", encoding="utf-8") as f:
        for line in f:
            if line.startswith("#"):
                continue
            else:
                line = line.strip()  # drop spaces and newlines
                # https://boba.52kuyun.com/20170906/Moh219zV/index.m3u8?......
                # https://boba.52kuyun.com/20170906/Moh219zV/hls/index.m3u8
                # Splice the pieces together
                second_m3u8_src = first_m3u8_src.split("index.m3u8")[0] + line
                download_m3u8_file(second_m3u8_src, "越狱_second_m3u8.txt")

    # 4. Download the video segments (async coroutines)
    second_m3u8_src_up = second_m3u8_src.replace("index.m3u8", "")
    asyncio.run(aio_download(second_m3u8_src_up))  # pass the directory prefix the ts names are relative to

    # 5.1 Fetch the key
    key_url = second_m3u8_src_up + "key.key"
    key = get_key(key_url)

    # 5.2 Decrypt (async coroutines)
    asyncio.run(aio_dec(key))

    # 6. Merge the ts files
    merge_ts()


if __name__ == '__main__':
    url = "https://www.91kanju.com/vod-play/541-2-1.html"
    main(url)
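
merge_ts() above shells out to the Windows-only copy /b command. A platform-independent alternative, assuming the same file layout as the code above, is to concatenate the decrypted segments directly in Python:

def merge_ts_py(m3u8_name="越狱_second_m3u8.txt", out_name="movie.mp4"):
    # Concatenate the decrypted ts segments in playlist order; no shell command needed
    with open(out_name, mode="wb") as out:
        with open(m3u8_name, mode="r", encoding="utf-8") as f:
            for line in f:
                if line.startswith("#"):
                    continue
                line = line.strip()
                with open(f"video2/temp_{line}", mode="rb") as seg:
                    out.write(seg.read())
    print("merge finished")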

Summary of the Basics

import requests
from lxml import etree

url = "https://search.dangdang.com/"
keyword = input()
url += "/?key=" + keyword
# print(url)

headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36"
}

response = requests.get(url, headers=headers)

html = etree.HTML(response.text)
titles = html.xpath('//div[@class="con shoplist"]//p[@class="name"]//a/text()')
authors = html.xpath('//div[@class="con shoplist"]//span[@class="search_book_author"]/text()')
princes1 = html.xpath('//div[@class="con shoplist"]//p[@class="price"]//span[@class="search_now_price"]/text()')
princes2 = html.xpath('//div[@class="con shoplist"]//p[@class="price"]//span[@class="search_pre_price"]/text()')
print(titles)

a = 0
for i in range(0, len(titles) - 1, 2):  # walk the titles in pairs; the original +1 bound ran past the end of the list
    a += 1
    print(f"Title: {titles[i + 1]}  list price: {princes2[i]}; sale price: {princes1[i]}")

print(f"Scraped {a} books in total!")
print(titles)



import requests
from Crypto.Cipher import AES

url = "https://interface.music.163.com/weapi/v1/resource/comments/get"


"""
PZFc: function(e, t, n) {
"use strict";
var r = n("SdIM")
, i = n("o6JC");

# How encText is produced: two rounds of encryption; the output of the first round is encrypted again
function o(e, t) {
var n = r.enc.Utf8.parse(t)
, i = r.enc.Utf8.parse("0102030405060708")
, o = r.enc.Utf8.parse(e);
return r.AES.encrypt(o, n, {
iv: i,
mode: r.mode.CBC
}).toString()
}

# How encSecKey is produced
function a(e, t, n) {
var r;
return i.setMaxDigits(131),
r = new i.RSAKeyPair(t,"",n),
i.encryptedString(r, e)
}
e.exports = {
asrsea: function(e, t, n, r) {
var i = {}
, s = function(e) {
var t, n, r = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789", i = "";
for (t = 0; e > t; t += 1)
n = Math.random() * r.length,
n = Math.floor(n),
i += r.charAt(n);
return i
}(16);

i.encText = o(e, r)
i.encText = o(i.encText, s)
i.encSecKey = a(s, t, n)
return i
},
ecnonasr: function(e, t, n, r) {
var i = {};
return i.encText = a(e + r, t, n),
i
}
}
},
"""



import asyncio
import time


async def func1():
    print("my name is Mogullzr01")
    # time.sleep(2) would block the whole event loop; use the non-blocking version instead
    await asyncio.sleep(2)
    print("my name is Mogullzr01")


async def func2():
    print("my name is Mogullzr02")
    # time.sleep(3)
    print("my name is Mogullzr02")


async def func3():
    print("my name is Mogullzr03")
    # time.sleep(3)
    print("my name is Mogullzr03")


async def main():
    # Run the three coroutines concurrently and wait for all of them
    # (newer Python versions no longer accept bare coroutines in asyncio.wait)
    await asyncio.gather(func1(), func2(), func3())


if __name__ == '__main__':
    time1 = time.time()
    asyncio.run(main())
    time2 = time.time()
    print(time2 - time1)


import requests

headers = {
    'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Mobile Safari/537.36'
}
resp = requests.get("https://boxnovel.baidu.com/boxnovel/content?gid=4355319295&data=%7Bfromaction%3Adushu,fromaction_original%3Adushu%7D&cid=1567042131", headers=headers)

print(resp.text)

# <video src="xxx.mp4"></video>  -- X, real sites do not serve video like this
# What most sites actually do:
# user upload -> transcoding (produce 2K, 1080p, SD versions) -> slicing (split the single file into ~60 pieces)
# When the user drags the progress bar,
# only the segments that are actually needed get loaded

# That requires a manifest file holding: 1. the playback order of the segments, 2. where each segment is stored
# The sliced segments are usually listed in an M3U8 / txt / json text file


# To grab a video:
# 1. find the m3u8 (by whatever means necessary)
# 2. download the ts files listed in the m3u8
# 3. merge the ts files into one mp4, using any tool you like (not necessarily code)

"""
流程:
1.拿到?.html中源代码
2.从源码中提取到m3u8的url
3.下载m3u8
4.读取m3u8文件同时下载视频
5.合并视频
"""


"""
思路:
1. 拿到主页面的页面源代码, 找到iframe
2. 从iframe的页面源代码中找到m3u8
3. 下载第一层m3u8文件,然后再下载第二层m3u8文件(视频存放路径)
4. 下载视频
5. 下载密钥, 进行解密操作
6. 合并所有ts文件合体为一个mp4文件
"""

5. (The Start of the Advanced Part)

See the next section for the detailed walkthrough...