Keeping up with Amazon’s constantly changing prices can be overwhelming. For e-commerce sellers, dropshippers, and affiliate marketers, knowing when prices change gives a major competitive edge.
In this guide, you’ll learn how to build a full Amazon price tracker using Python—from scraping product data to saving price history, setting alerts, and scaling with rotating residential proxies for undetectable data collection.
Whether you’re a solo hustler monitoring a few ASINs or managing thousands of listings, this tutorial will help you automate pricing intelligence efficiently and safely.
Why Build an Amazon Price Tracker?
Amazon adjusts product prices multiple times per day based on demand, competition, and stock availability. If you can track these fluctuations, you can:
- Monitor competitor pricing to respond faster.
- Identify profit gaps for retail or online arbitrage.
- Track discounts and promotions for affiliate offers.
- Gather data for long-term market analysis.
Manual tracking is tedious and error-prone. Python automation allows you to collect and analyze data across hundreds of products—continuously and accurately.
Creating an Amazon Price Tracker: Step-By-Step Guide
Before we dive into the code, let’s set up our development environment. You’ll need the following tools and libraries:
- Python: The programming language we’ll use.
- BeautifulSoup: A library for parsing the HTML we scrape.
- Requests: A library for making HTTP requests.
- Pandas and Schedule: Libraries for logging price data and automating runs.
Step 1: Set Up Your Python Environment
Start by installing the required libraries:
pip install requests beautifulsoup4 lxml pandas schedule
- Requests — sends HTTP requests.
- BeautifulSoup — parses HTML.
- lxml — the fast parser BeautifulSoup uses under the hood.
- Pandas — manages your data logs.
- Schedule — automates repeated tasks.
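If you want to keep the project’s dependencies isolated, create and activate a virtual environment before running the install command above (standard Python tooling, nothing project-specific):

python -m venv venv
source venv/bin/activate  # on Windows: venv\Scripts\activate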
Step 2: Analyze an Amazon Product Page
Open any product page and inspect the HTML (right-click → Inspect).
Amazon has historically exposed prices through elements like these:
<span id="priceblock_ourprice">...</span>
<span id="priceblock_dealprice">...</span>
Amazon changes its markup frequently, though, and many current pages render the price in a <span class="a-offscreen"> element inside an .a-price block instead. We’ll check the legacy IDs first and fall back to that class.
Step 3: Create the Scraper Function
Let’s start with a function that fetches the product title and price:
import requests
from bs4 import BeautifulSoup
from datetime import datetime

URL = "https://www.amazon.com/dp/B08N5WRWNW"
HEADERS = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"}

def get_price(url=URL):
    response = requests.get(url, headers=HEADERS)
    soup = BeautifulSoup(response.content, "lxml")

    title_tag = soup.find(id="productTitle")
    title = title_tag.get_text().strip() if title_tag else None

    # Try the legacy price IDs first, then the class used on most current pages
    price = (
        soup.find(id="priceblock_ourprice")
        or soup.find(id="priceblock_dealprice")
        or soup.find("span", class_="a-offscreen")
    )
    price = price.get_text().replace("$", "").strip() if price else None

    return {"title": title, "price": price, "time": datetime.now()}
Step 4: Save Price History to CSV
A tracker is only useful if you can analyze changes over time.
This function appends each new data point to a CSV file:
import pandas as pd
import os

def save_to_csv(data, filename="prices.csv"):
    df = pd.DataFrame([data])
    header = not os.path.exists(filename)  # write the header row only on the first run
    df.to_csv(filename, mode="a", header=header, index=False)
Each row logs the product title, price, and timestamp.
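To spot-check the log at any time, read it back with pandas:

import pandas as pd

print(pd.read_csv("prices.csv").tail())  # the most recent entries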
Step 5: Automate Data Collection
You can schedule the tracker to run automatically every morning:
import schedule
import time

def job():
    data = get_price()
    save_to_csv(data)
    print("Saved:", data)

schedule.every().day.at("09:00").do(job)

while True:
    schedule.run_pending()
    time.sleep(60)  # check for pending jobs once a minute
Step 6: Add Price Drop Alerts
If you want an alert when a product price falls below a certain level:
import smtplib

def send_email(price, threshold=900):
    if price and float(price) < threshold:  # skip if the price couldn't be scraped
        server = smtplib.SMTP("smtp.gmail.com", 587)
        server.starttls()
        # Gmail requires an app password here, not your account password
        server.login("[email protected]", "yourpassword")
        message = f"Subject: Amazon Price Alert\n\nPrice dropped! New price: ${price}"
        server.sendmail("[email protected]", "[email protected]", message)
        server.quit()
Combine this with your main function for real-time deal alerts.
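A minimal way to wire it in is to extend the job() function from Step 5 so every scheduled run can trigger an alert (the full multi-product version appears in Step 10):

def job():
    data = get_price()
    save_to_csv(data)
    send_email(data["price"])  # only sends if the price is below the threshold
    print("Saved:", data)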
Step 7: Avoid Getting Blocked with Proxies
Amazon employs strong anti-bot systems that can block repeated requests.
To avoid CAPTCHAs or IP bans, use rotating residential proxies.
Here’s how to include NodeMaven proxies in your requests:
PROXY = "http://username:[email protected]:8000"
proxies = {"http": PROXY, "https": PROXY}
response = requests.get(URL, headers=HEADERS, proxies=proxies)
Each request will use a different real-device IP from NodeMaven’s pool, mimicking organic browsing.
For larger scraping projects, use sticky sessions or rotating endpoints via NodeMaven’s Amazon proxy solutions.
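If you have several gateway endpoints, one simple pattern is to cycle through a pool of proxy URLs so consecutive requests leave from different connections. A minimal sketch, reusing HEADERS from Step 3 (the endpoint URLs below are placeholders; use the credentials and hosts from your NodeMaven dashboard):

from itertools import cycle
import requests

# Placeholder endpoints: substitute your real proxy URLs
PROXY_POOL = cycle([
    "http://user:[email protected]:8000",
    "http://user:[email protected]:8000",
])

def fetch(url):
    proxy = next(PROXY_POOL)  # next endpoint in round-robin order
    return requests.get(url, headers=HEADERS, proxies={"http": proxy, "https": proxy})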
Step 8: Track Multiple Products
You can scale easily by saving multiple product URLs in a list:
PRODUCTS = [
    "https://www.amazon.com/dp/B08N5WRWNW",
    "https://www.amazon.com/dp/B07FZ8S74R"
]

for url in PRODUCTS:
    data = get_price(url)  # pass the URL in rather than reassigning a global
    save_to_csv(data)
This approach lets you monitor dozens of items in a single run.
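Requests fired back-to-back are easier to flag, so it’s worth spacing them out. A small variation of the loop above with a randomized pause between products (reusing PRODUCTS, get_price, and save_to_csv):

import random
import time

for url in PRODUCTS:
    save_to_csv(get_price(url))
    time.sleep(random.uniform(5, 15))  # jittered delay between product requests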
Step 9: Visualize Price Trends
To analyze trends visually, plot the log with Matplotlib (or build a dashboard with Streamlit):
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("prices.csv")
df["time"] = pd.to_datetime(df["time"])
# Prices were logged as text; coerce them to numbers before plotting
df["price"] = pd.to_numeric(df["price"].astype(str).str.replace(",", ""), errors="coerce")

plt.plot(df["time"], df["price"])
plt.title("Amazon Price Tracker")
plt.xlabel("Date")
plt.ylabel("Price ($)")
plt.show()
You’ll instantly see when the price spikes or drops.
Step 10: Put It All Together
Here’s the complete Python Amazon Price Tracker script — combining scraping, saving, automation, alerts, and proxies into one unified tool.
import requests
from bs4 import BeautifulSoup
import pandas as pd
import schedule
import time
import smtplib
import os
from datetime import datetime

URLS = [
    "https://www.amazon.com/dp/B08N5WRWNW",
    "https://www.amazon.com/dp/B07FZ8S74R"
]
HEADERS = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"}
PROXY = "http://username:[email protected]:8000"
proxies = {"http": PROXY, "https": PROXY}
THRESHOLD = 900  # Set your desired alert threshold

def get_price(url):
    response = requests.get(url, headers=HEADERS, proxies=proxies)
    soup = BeautifulSoup(response.content, "lxml")

    title_tag = soup.find(id="productTitle")
    title = title_tag.get_text().strip() if title_tag else None

    # Try the legacy price IDs first, then the class used on most current pages
    price = (
        soup.find(id="priceblock_ourprice")
        or soup.find(id="priceblock_dealprice")
        or soup.find("span", class_="a-offscreen")
    )
    price = price.get_text().replace("$", "").strip() if price else None

    return {"url": url, "title": title, "price": price, "time": datetime.now()}

def save_to_csv(data, filename="prices.csv"):
    df = pd.DataFrame([data])
    header = not os.path.exists(filename)  # write the header row only once
    df.to_csv(filename, mode="a", header=header, index=False)

def send_email(price, title):
    if price and float(price) < THRESHOLD:  # skip if the price couldn't be scraped
        server = smtplib.SMTP("smtp.gmail.com", 587)
        server.starttls()
        # Gmail requires an app password here, not your account password
        server.login("[email protected]", "yourpassword")
        message = f"Subject: Amazon Price Alert\n\nPrice dropped for {title}! New price: ${price}"
        server.sendmail("[email protected]", "[email protected]", message)
        server.quit()
        print(f"Email alert sent for {title}")

def job():
    for url in URLS:
        data = get_price(url)
        save_to_csv(data)
        send_email(data["price"], data["title"])
        print("Data saved:", data)

schedule.every().day.at("09:00").do(job)

while True:
    schedule.run_pending()
    time.sleep(60)
This script automatically:
- Fetches product data daily
- Logs each price update
- Sends email alerts for price drops
- Uses NodeMaven rotating residential proxies for reliability
You can deploy it on a VPS or cloud function (like AWS Lambda or Google Cloud Run) to keep it running 24/7.
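If you deploy with cron or a serverless scheduler instead of running a long-lived process, drop the while-loop and execute one pass per invocation. A minimal sketch, reusing the job() function from the script above:

# run_once.py: schedule externally, e.g. with a crontab entry like
# 0 9 * * * /usr/bin/python3 /path/to/run_once.py
if __name__ == "__main__":
    job()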
Best Practices for Long-Term Success
- Respect Amazon’s robots.txt: Only scrape allowed data.
- Randomize request intervals: Avoid predictable timing.
- Rotate user agents and proxies: Each request should mimic a new visitor.
- Use retries and backoff logic: Handle failed requests gracefully (see the sketch after this list).
- Store logs and monitor success rates: Track performance to fine-tune your setup.
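Here’s a compact sketch combining three of these practices: jittered delays, rotating user agents, and exponential backoff on failures. The user-agent strings are illustrative; use a maintained list in production:

import random
import time
import requests

USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)",
]

def fetch_with_retries(url, proxies=None, max_retries=4):
    for attempt in range(max_retries):
        headers = {"User-Agent": random.choice(USER_AGENTS)}  # look like a new visitor
        try:
            response = requests.get(url, headers=headers, proxies=proxies, timeout=15)
            if response.status_code == 200:
                return response
        except requests.RequestException:
            pass  # network error; fall through and back off before retrying
        time.sleep(2 ** attempt + random.uniform(0, 3))  # exponential backoff with jitter
    return None  # caller decides how to handle a permanent failure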
Combining ethical scraping habits with NodeMaven’s clean IP pool helps keep your scraper efficient and undetectable.
Final Thoughts
Building an Amazon price tracker in Python isn’t just a fun coding exercise—it’s a powerful business tool. With the right proxy setup, you can monitor hundreds of listings, uncover market gaps, and make smarter pricing decisions automatically.
From solo entrepreneurs to large-scale e-commerce teams, automation like this transforms how you operate.
If you’re serious about scaling your data collection, try NodeMaven’s residential and mobile proxies. With real-device IPs, sticky sessions, and country-level targeting, they’re built for reliable, undetectable scraping at scale.
Start small, track one product, and watch as your Python script evolves into a powerful revenue intelligence tool.
Frequently Asked Questions (FAQs)
How do I build an Amazon price tracker in Python?
Scrape the product page with requests and BeautifulSoup, log each price with a timestamp, and schedule the checks with the schedule module. For undetectable, high-success scraping, run your tracker through NodeMaven’s rotating residential proxies to prevent blocks or CAPTCHAs.