Tech Tips


Python Website Scraping - Advanced Techniques and Libraries

In the realm of web scraping, Python stands out with its rich selection of libraries. Based on our real-world experience with a diverse customer base, we often recommend Selenium for its robustness and reliability, especially in scenarios where accuracy is paramount. In this 2024 guide, we explore three primary methods of web scraping using Python: BeautifulSoup with Requests, Scrapy, and Selenium, providing advanced usage examples for each, with a special emphasis on Selenium.

BeautifulSoup with Requests - A Simple Python Web Scraping Library

BeautifulSoup and Requests excel at scraping static content but face challenges with dynamic sites. Incorporating proxies can significantly enhance scraping capabilities:

  • Parsing Nested HTML - Proxies allow for scraping complex structures by circumventing rate limits and IP-based restrictions, ensuring comprehensive data extraction without being blocked.

  • Handling Pagination - By rotating proxies, scrapers can navigate through paginated data seamlessly, avoiding detection by anti-scraping mechanisms and ensuring complete dataset collection.

  • Data Cleaning and Transformation - Proxies enable access to geo-restricted content, enriching the initial dataset for more effective post-scraping processing.
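To make the proxy-rotation and pagination points above concrete, here is a minimal sketch. The proxy pool and its endpoints are hypothetical placeholders, and the `rel="next"` pagination link is an assumed page layout; only the BeautifulSoup parsing itself mirrors the library usage discussed here.

```python
from itertools import cycle
from bs4 import BeautifulSoup

# Hypothetical proxy pool -- substitute your own endpoints
PROXY_POOL = cycle([
    {'http': 'http://proxy1.example.com:8080'},
    {'http': 'http://proxy2.example.com:8080'},
])

def next_proxy():
    """Rotate to the next proxy mapping (pass it as requests' `proxies=`)."""
    return next(PROXY_POOL)

def find_next_page(html):
    """Return the href of a rel="next" pagination link, or None."""
    soup = BeautifulSoup(html, 'html.parser')
    link = soup.find('a', rel='next')
    return link.get('href') if link else None

html = '<nav class="pager"><a rel="next" href="/page/2">Next</a></nav>'
print(find_next_page(html))  # -> /page/2
```

In a real scraper you would call `next_proxy()` before each `requests.get(url, proxies=next_proxy())` and loop until `find_next_page` returns None.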

Code Example of an Interactive Script

import requests
from bs4 import BeautifulSoup
import logging
import sys
import json
import csv
from requests.exceptions import ProxyError, SSLError

# Set up logging for the script
def setup_logging():
    logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')

# Get user input for URL, output format, filename, and proxy usage
def get_user_input():
    url = input("Enter the URL of the webpage to scrape: ")
    output_format = input("Enter the output format (json/csv): ").lower()
    filename = input("Enter the name of the output file: ")
    use_proxy = input("Use a proxy? (yes/no): ").lower()
    return url, output_format, filename, use_proxy

# Get proxy details from the user
def get_proxy_details():
    proxy = input("Enter the proxy URL (http://proxy_address:port): ")
    requires_auth = input("Does the proxy require authentication? (yes/no): ").lower()
    if requires_auth == 'yes':
        username = input("Enter the proxy username: ")
        password = input("Enter the proxy password: ")
        # Embed the credentials in the proxy URL (strip the leading 'http://')
        proxy = f'http://{username}:{password}@{proxy[7:]}'
    # requests expects a mapping of target scheme -> proxy URL
    return {
        'http': proxy,
        'https': proxy
    }

# Fetch webpage content using requests library
def fetch_webpage_content(url, headers, proxies=None):
    session = requests.Session()
    if proxies:
        session.proxies = proxies
    session.headers = headers
    try:
        response = session.get(url)
        response.raise_for_status()
        return response
    except ProxyError as e:
        logging.error(f"ProxyError fetching {url} with proxy: {e}")
    except SSLError as e:
        logging.error(f"SSLError fetching {url}: {e}")
    except requests.RequestException as e:
        logging.error(f"General Error fetching {url}: {e}")
    return None

# Extract content from HTML using BeautifulSoup
def extract_content_from_html(html_content):
    soup = BeautifulSoup(html_content, 'html.parser')
    h2_elements = [h2.get_text().strip() for h2 in soup.find_all('h2')]
    footer = soup.find('footer')
    footer_links = [a.get('href') for a in footer.find_all('a')] if footer else []
    return h2_elements, footer_links

# Save data to a file in JSON or CSV format
def save_to_file(data, filename, output_format):
    if output_format == 'json':
        with open(filename + '.json', 'w') as file:
            json.dump(data, file, indent=4)
        logging.info(f"Data saved to {filename}.json")
    elif output_format == 'csv':
        if data:  # Ensure there's at least one row to infer keys
            keys = data[0].keys() if isinstance(data[0], dict) else ['content']
            with open(filename + '.csv', 'w', newline='') as file:
                dict_writer = csv.DictWriter(file, fieldnames=keys)
                dict_writer.writeheader()
                dict_writer.writerows(data)
            logging.info(f"Data saved to {filename}.csv")
        else:
            logging.error("No data to save.")
    else:
        logging.error("Unsupported file format. Please use 'json' or 'csv'.")

# Main function to orchestrate the scraping process
def main():
    url, output_format, filename, use_proxy = get_user_input()
    proxies = None
    if use_proxy == 'yes':
        proxies = get_proxy_details()

    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3'
    }

    response = fetch_webpage_content(url, headers, proxies)
    if response is None:
        logging.error("Failed to fetch the page; nothing to save.")
        return

    h2_elements, footer_links = extract_content_from_html(response.content)

    data = [{'content': elem} for elem in h2_elements]
    data.extend([{'content': link} for link in footer_links])

    save_to_file(data, filename, output_format)

if __name__ == "__main__":
    setup_logging()
    main()

In this example the script is interactive, and leveraging proxies with BeautifulSoup and Requests not only extends the reach of your scraping efforts but also increases resilience against anti-scraping defenses, making it a valuable strategy for developers navigating the complexities of modern web environments.


Scrapy - A Robust Python Web Scraping Library with Proxy Integration

Scrapy stands out for its efficiency and versatility in web scraping, especially in large-scale data extraction endeavors. Incorporating proxies into your Scrapy projects elevates its functionality, allowing for more discreet operations and overcoming common web scraping challenges. Here's how proxies can play a crucial role in Scrapy's advanced applications.

  • Scraping JavaScript-Loaded Content - Proxies enable Scrapy to bypass site restrictions while using middleware to render dynamic content, maintaining anonymity and avoiding IP bans.

  • Building a Web Crawler - Utilizing proxies allows for extensive site crawling without triggering anti-scraping mechanisms, thanks to IP rotation and request rate management.

  • Scheduled Scraping - For regular data collection, proxies ensure consistent access by varying the source IP, crucial for long-term scraping projects.

Code Example of an Interactive Script

# Import necessary libraries
import scrapy
from scrapy.crawler import CrawlerProcess
from urllib.parse import urlparse
import re
import json

# Define a Scrapy spider named 'trustedproxies'
class TrustedProxiesSpider(scrapy.Spider):
    name = 'trustedproxies'

    # Define a default user agent (you can modify this)
    default_user_agent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3'

    # Keep settings at class level so Scrapy picks them up
    custom_settings = {
        'USER_AGENT': default_user_agent,  # Set user agent
        'DOWNLOAD_DELAY': 1  # Respectful download delay
    }

    def __init__(self, url, proxy=None, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Initialize the spider with start URL(s)
        self.start_urls = [url]
        self.proxy = proxy

    def start_requests(self):
        for url in self.start_urls:
            meta = {}
            if self.proxy:
                # HttpProxyMiddleware (enabled by default) honors this meta key
                meta['proxy'] = self.proxy
            yield scrapy.Request(url, meta=meta, callback=self.parse)

    def parse(self, response):
        # Extract H2 elements and footer links using XPath
        h2_elements = response.xpath('//h2/text()').getall()
        footer_links = response.xpath('//footer//a/@href').getall()

        # Organize scraped data
        data = {
            'H2 Elements': h2_elements,
            'Footer Href Links': footer_links
        }

        # Print the data in a structured format
        print("Scraped Data:")
        for category, items in data.items():
            print(f"{category}:")
            for item in items:
                print(f"  - {item}")

        # Save the output to a file (JSON format)
        with open('scraped_data.txt', 'w') as file:
            json.dump(data, file, indent=4)

def is_valid_url(url):
    # Check if the URL is valid using urlparse
    try:
        result = urlparse(url)
        return all([result.scheme, result.netloc])
    except ValueError:
        return False

def main():
    # Get the URL from the user and validate it
    url = input("Enter the URL of the webpage to scrape: ")
    while not is_valid_url(url):
        print("Invalid URL. Please enter a valid URL.")
        url = input("Enter the URL of the webpage to scrape: ")

    # Ask for proxy details
    use_proxy = input("Do you want to use a proxy? (yes/no): ").lower() == 'yes'
    proxy = None
    if use_proxy:
        proxy_url = input("Enter the full proxy URL (including 'http://' or 'https://'): ")
        proxy_user = input("Enter the proxy user (if applicable, press enter to skip): ")
        proxy_pass = input("Enter the proxy password (if applicable, press enter to skip): ")
        if proxy_user and proxy_pass:
            proxy = f"http://{proxy_user}:{proxy_pass}@{proxy_url[7:]}"  # Adjust if your proxy needs a different format
        else:
            proxy = proxy_url

    # Create a Scrapy CrawlerProcess and start the spider
    process = CrawlerProcess()
    process.crawl(TrustedProxiesSpider, url=url, proxy=proxy)
    process.start()  # Blocks until the crawl finishes

    print("Scraped data has been saved to 'scraped_data.txt'.")

if __name__ == '__main__':
    main()

Web Scraping with Selenium and Proxies - A Potent Combination and Reliable Choice

Selenium is highly regarded for web scraping, especially for sites loaded with dynamic content and JavaScript. Its synergy with proxies elevates its utility, offering enhanced privacy and access capabilities. Key advantages include:

  • Handling User Interactions - Automating complex browser interactions.

  • Capturing Screenshots - Ideal for visual documentation of web pages.

  • Real-Time Data Scraping - Crucial for extracting live-updating data.

  • Discreet User Interactions - Selenium automates browser activities discreetly using proxies, minimizing detection risks.

  • Geographically Specific Scraping - Proxies enable capturing content specific to different regions, crucial for accurate visual documentation.

  • Efficient Real-Time Data Extraction - Combining Selenium's interaction mimicry with proxies allows for effective live data scraping, circumventing common web restrictions.

Code Example of an Interactive Script

# Import necessary modules
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.options import Options
from selenium.common.exceptions import WebDriverException
from urllib.parse import urlparse
import logging

# Function to set up logging configuration
def setup_logging():
    logging.basicConfig(level=logging.INFO, format="%(asctime)s - %(levelname)s - %(message)s")

# Function to get user input for the URL
def get_user_input():
    url = input("Enter the URL of the webpage to scrape: ")
    while not valid_url(url):
        logging.error("Invalid URL entered. Please enter a valid URL.")
        url = input("Enter the URL of the webpage to scrape: ")
    return url

# Function to get the desired output file name from the user
def get_file_name():
    filename = input("Enter the name of the output file (without extension): ")
    return filename if filename else "scraped_data"

# Function to check if a URL is valid
def valid_url(url):
    parsed_url = urlparse(url)
    return bool(parsed_url.scheme and parsed_url.netloc)

# Function to get proxy input from the user
def get_proxy_input():
    use_proxy = input("Do you want to use a proxy? (yes/no): ").lower() == 'yes'
    if use_proxy:
        proxy_url = input("Enter the proxy URL (host:port): ")
        return proxy_url
    return None

# Main function to perform web scraping
def main():
    # Set up logging
    setup_logging()

    # Set paths for chromedriver and Chrome binary
    chrome_driver_path = '/path-to.../chromedriver'
    chrome_binary_path = '/path-to.../chrome-linux64/chrome'

    # Get user input for URL, proxy, and output filename
    url = get_user_input()
    proxy_url = get_proxy_input()
    filename = get_file_name() + '.txt'

    # Set Chrome options
    chrome_options = Options()
    chrome_options.binary_location = chrome_binary_path

    if proxy_url:
        chrome_options.add_argument('--proxy-server=%s' % proxy_url)

    driver = None
    try:
        # Initialize Chrome webdriver and load the page
        driver = webdriver.Chrome(service=Service(chrome_driver_path), options=chrome_options)
        driver.get(url)

        # Scrape H2 elements and footer links
        h2_elements = driver.find_elements(By.TAG_NAME, 'h2')
        h2_texts = [element.text for element in h2_elements]

        footer_links = driver.find_elements(By.XPATH, '//footer//a')
        footer_urls = [link.get_attribute('href') for link in footer_links]

        # Write scraped data to the output file
        with open(filename, 'w') as file:
            file.write(f"Page Title: {driver.title}\n\n")
            file.write("H2 Elements:\n")
            file.writelines(f"{text}\n" for text in h2_texts)
            file.write("\nFooter Links:\n")
            file.writelines(f"{link}\n" for link in footer_urls)
        logging.info(f"Scraping completed for: {url}. Output saved to {filename}")

    except WebDriverException as e:
        logging.error(f"An error occurred while using WebDriver: {e}")
    finally:
        if driver:
            driver.quit()

if __name__ == '__main__':
    main()

Selenium's dynamic content handling, combined with the strategic use of proxies, makes it an unmatched tool for diverse scraping needs, ensuring reliability and versatility for users.

Web Scraping with Python - Book for Comprehensive Learning

For an in-depth exploration of Python web scraping techniques, including those employing Selenium, "Web Scraping with Python" is an excellent resource. Available on Amazon, this book covers a wide range of topics from basic to advanced levels.
