Selenium For Web Scraping Python




Learn how to scrape the web with Selenium and Python with this step-by-step tutorial. There are a number of web scraping tools out there, and various languages have libraries that support web scraping. Among these languages, Python is considered one of the best for web scraping because of its rich libraries, ease of use, and dynamic typing.

Imagine what you could do if you automated all the repetitive, boring activities you perform on the internet, like checking the first page of Google results for a given keyword every day, or downloading a batch of files from different websites.


In this post you’ll learn to use Selenium with Python, a web scraping tool that simulates a user surfing the Internet. For example, you can use it to automatically run Google searches and read the results, log in to your social accounts, simulate a user to test your web application, and automate anything repetitive you do online. The possibilities are infinite! 🙂

*All the code in this post has been tested with Python 2.7 and Python 3.4.

Install and use Selenium

Selenium is a Python package that can be installed via pip. I recommend installing it in a virtual environment (using virtualenv and virtualenvwrapper).

To install selenium, you just need to type:
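pip install selenium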

In this post we are going to initialize a Firefox driver; you can install Firefox by visiting the Mozilla website. However, if you want to work with Chrome or IE, you can find more information in the Selenium documentation.
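For instance, switching to Chrome only changes the driver class you instantiate. A minimal sketch, assuming chromedriver is installed and available on your PATH:

from selenium import webdriver

# Chrome instead of Firefox; the rest of the code in this post stays the same
driver = webdriver.Chrome()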

Once you have Selenium and Firefox installed, create a Python file, selenium_script.py. We are going to initialize a browser with Selenium and define a lookup function that searches Google for a query:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException
from selenium.common.exceptions import ElementNotVisibleException

def init_driver():
    # Create a Firefox browser and attach an explicit 5-second wait to it
    driver = webdriver.Firefox()
    driver.wait = WebDriverWait(driver, 5)
    return driver

def lookup(driver, query):
    # Search Google for the given query
    driver.get("http://www.google.com")
    try:
        box = driver.wait.until(EC.presence_of_element_located(
            (By.NAME, "q")))
        button = driver.wait.until(EC.element_to_be_clickable(
            (By.NAME, "btnK")))
        box.send_keys(query)
        try:
            button.click()
        except ElementNotVisibleException:
            # Fall back to the button that is actually visible on this page layout
            button = driver.wait.until(EC.visibility_of_element_located(
                (By.NAME, "btnG")))
            button.click()
    except TimeoutException:
        print('Box or Button not found in google.com')
  • button.click(), the call that can raise the ElementNotVisibleException, is placed inside an inner try statement.
  • if the exception is raised, we look for the second button, using visibility_of_element_located to make sure the element is visible, and then click it.
  • if at any point an element is not found within the 5-second wait, a TimeoutException is raised and caught by the last two lines of the function.
  • Note that the initial button name is “btnK” and the new one is “btnG”.
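To try it out, a minimal script tying the two functions together could look like this (a sketch that assumes the init_driver and lookup functions defined above live in the same file):

import time

if __name__ == "__main__":
    driver = init_driver()
    lookup(driver, "Selenium")
    time.sleep(5)  # keep the browser open for a moment to inspect the results
    driver.quit()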

Method list in Selenium


To sum up, I’ve created a table with the main methods used here.
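In plain comment form, the main calls used in the snippets above are:

# init_driver()                         -> Firefox driver with a 5-second explicit wait attached
# driver.get(url)                       -> load a page
# driver.wait.until(condition)          -> wait up to 5 seconds for a condition, else TimeoutException
# EC.presence_of_element_located(loc)   -> the element is present in the DOM
# EC.element_to_be_clickable(loc)       -> the element is visible and enabled
# EC.visibility_of_element_located(loc) -> the element is visible
# element.send_keys(text)               -> type text into an element
# element.click()                       -> click an element
# driver.quit()                         -> close the browser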


Note: it’s not a python file — don’t try to run/import it 🙂