Day 03 Study Notes

Selenium (remaining topics)

 

1. Element interaction:

    - Click and clear (see the sketch after the JS example below)

        click: click an element

        clear: clear the text in an input box

 

    - ActionChains

        An action-chain object; the driver must be passed to it when it is created.

        The chain queues up a series of predefined actions and then performs them in order.

 

    - Switching into an iframe

        driver.switch_to.frame('iframeResult')

 

    - Executing JS code

        execute_script()

from selenium import webdriver
import time

driver = webdriver.Chrome()
try:
    # Implicit wait: poll up to 10 seconds for elements to appear
    driver.implicitly_wait(10)

    driver.get('https://www.baidu.com/')

    # execute_script() runs a piece of JavaScript in the page; here it pops up an alert box
    driver.execute_script(
        '''
        alert("Zhejiang Wanli University is the most awesome university in Zhejiang!")
        '''
    )

    time.sleep(10)

finally:
    driver.close()
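
A minimal sketch pulling the points above together. The Baidu element ids ('kw', 'su') and the runoob demo page with its 'iframeResult' frame and '#draggable'/'#droppable' elements are assumptions for illustration, not something these notes pin down:

from selenium import webdriver
from selenium.webdriver import ActionChains
import time

driver = webdriver.Chrome()
try:
    driver.implicitly_wait(10)

    # click / clear: 'kw' is the search box, 'su' the search button (assumed ids)
    driver.get('https://www.baidu.com/')
    search_input = driver.find_element_by_id('kw')
    search_input.send_keys('selenium')
    search_input.clear()                        # clear() empties the typed text
    search_input.send_keys('bs4')
    driver.find_element_by_id('su').click()     # click() presses the button
    time.sleep(2)

    # ActionChains + iframe switching (assumed demo page with an 'iframeResult' frame)
    driver.get('https://www.runoob.com/try/try.php?filename=jqueryui-api-droppable')
    driver.switch_to.frame('iframeResult')              # enter the iframe first
    source = driver.find_element_by_id('draggable')     # assumed id inside the iframe
    target = driver.find_element_by_id('droppable')     # assumed id inside the iframe

    # Queue the actions on the chain, then perform() runs them in order
    ActionChains(driver).drag_and_drop(source, target).perform()

    driver.switch_to.default_content()                  # back to the top-level document
    time.sleep(3)
finally:
    driver.close()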

 

 

BeautifulSoup4

    BS4

 

    1. What is BeautifulSoup?

        bs4 is a parsing library: with the help of a parser it extracts the data we want from a document.

 

    2. Why use bs4?

        Its concise syntax lets you extract the data you want quickly.

 

    3. Parsers

        - lxml (third-party, fast)

        - html.parser (built into the standard library)

 

    4. Installation and usage

        - Traversing the document tree

        - Searching the document tree
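
        Install both the bs4 package and the lxml parser first (a typical command line, assuming Python 3 with pip):

        pip3 install beautifulsoup4
        pip3 install lxml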

html_doc = """
The Dormouse's story

$37


Once upon a time there were three little sisters; and their names were
Elsie,
Lacie and
Tillie;
and they lived at the bottom of a well.



...


"""
from bs4 import BeautifulSoup

# Build the soup object from the document string, using the lxml parser
soup = BeautifulSoup(html_doc, 'lxml')
print(soup)
print(type(soup))

# prettify() returns the document re-indented nicely
html = soup.prettify()
print(html)
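
The usage notes above also mention traversing the document tree; a minimal sketch of the common traversal attributes, reusing the same html_doc (these are standard bs4 attributes, nothing specific to these notes):

from bs4 import BeautifulSoup

soup = BeautifulSoup(html_doc, 'lxml')

# "Dot" access walks down the tree and returns the first matching tag
print(soup.head.title)   # the <title> tag
print(soup.p)            # first <p> in the document
print(soup.a)            # first <a> in the document

# A tag's name, attributes and text
print(soup.a.name)       # 'a'
print(soup.a.attrs)      # dict of the tag's attributes
print(soup.a.text)       # the text inside the tag

# Children, parent and siblings
print(list(soup.p.children))           # direct children of the first <p>
print(soup.a.parent.name)              # name of the parent tag
print(soup.a.next_sibling)             # node immediately after the first <a>
print(soup.a.find_next_sibling('a'))   # the next <a> sibling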

 

 

 

Supplementary notes:

 

    Data formats:

 

    JSON data:

    {

    "name": "tank"

    }

 

    XML data:

    <name>tank</name>

 

    HTML:

    
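
    A quick sketch of reading the JSON example above in Python (the json module is in the standard library):

    import json

    data = json.loads('{"name": "tank"}')   # JSON text -> Python dict
    print(data['name'])                     # tank
    print(json.dumps(data))                 # Python dict -> JSON text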

 

    Generators: yield value (the yield puts the value into the generator)

    def f():
        # return 1
        yield 1
        yield 2
        yield 3

    g = f()
    print(g)   # <generator object f at 0x...>

    for line in g:
        print(line)   # prints 1, 2, 3
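
    Pulling values with next() shows the same behaviour; once the yields are used up, the next call raises StopIteration:

    g = f()
    print(next(g))   # 1
    print(next(g))   # 2
    print(next(g))   # 3
    # one more next(g) would raise StopIteration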

 

 

 

html_doc = """
The Dormouse's story

$37

Once upon a time there were three little sisters; and their names wereElsieLacie andTillieand they lived at the bottom of a well.

...


"""
from bs4 import BeautifulSoup

soup = BeautifulSoup(html_doc, 'lxml')

# find() returns the first match, find_all() returns a list of all matches

# 1. Filter by string (exact tag name)
p = soup.find(name='p')
p_s = soup.find_all(name='p')
print(p)
print(p_s)

# Tag name plus attributes
p = soup.find(name='p', attrs={"id": "p"})
print(p)

# Tag name plus text
tag = soup.find(name='title', text="The Dormouse's story")
print(tag)

# Tag name + attributes + text together
tag = soup.find(name='a', attrs={"class": "sister"}, text="Elsie")
print(tag)


# 2. Filter by regular expression
import re
a = soup.find(name=re.compile('a'))          # first tag whose name contains 'a'
print(a)
a_s = soup.find_all(name=re.compile('a'))    # all tags whose names contain 'a'
print(a_s)

a = soup.find(attrs={"id": re.compile('link')})   # id attribute matching 'link'
print(a)

# 3. Filter by list (matches any of the listed names / patterns)
print(soup.find(name=['a', 'p', 'html', re.compile('a')]))
print(soup.find_all(name=['a', 'p', 'html', re.compile('a')]))

# 4. Filter by True (any tag name, and any tag that has an id attribute)
print(soup.find(name=True, attrs={"id": True}))

# 5. Filter by a function
def have_id_not_class(tag):
    # keep <p> tags that have an id attribute but no class attribute
    if tag.name == 'p' and tag.has_attr("id") and not tag.has_attr("class"):
        return tag
print(soup.find_all(name=have_id_not_class))

# Shorthand keyword arguments: id=... and class_=... (class is a Python keyword)
a = soup.find(id='link2')
print(a)
p = soup.find(class_='sister')
print(p)

Reposted from: https://www.cnblogs.com/xtx642/p/11127941.html
