- Python Web Scraping (Second Edition)
- Katharine Jarmul, Richard Lawson
Avoiding spider traps
Currently, our crawler will follow any link it hasn't seen before. However, some websites dynamically generate their content and can have an infinite number of web pages. For example, if the website has an online calendar with links provided for the next month and year, then the next month will also have links to the next month, and so on for however long the widget is set (this can be a LONG time). The site may offer the same functionality with simple pagination navigation, essentially paginating over empty search result pages until the maximum pagination is reached. This situation is known as a spider trap.
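To make the trap concrete, here is a toy sketch (not from the book) of a crawler following a hypothetical calendar widget; fake_calendar_links and the /calendar/... URLs are invented for illustration. Because every page yields a link the crawler has never seen, the queue never empties:

from datetime import date, timedelta

def fake_calendar_links(url):
    # Hypothetical helper: a page like '/calendar/2021-07' always links to
    # the following month, so there is always one more unseen page.
    year, month = map(int, url.rsplit('/', 1)[-1].split('-'))
    nxt = date(year, month, 1) + timedelta(days=32)
    return ['/calendar/%d-%02d' % (nxt.year, nxt.month)]

seen = set()
crawl_queue = ['/calendar/2021-07']
for _ in range(10):                      # cap the loop for the demo; a real
    url = crawl_queue.pop()              # crawler would run until the queue
    seen.add(url)                        # empties, which never happens here
    for link in fake_calendar_links(url):
        if link not in seen:
            crawl_queue.append(link)
print(crawl_queue)                       # there is always another month queued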
A simple way to avoid getting stuck in a spider trap is to track how many links have been followed to reach the current web page, which we will refer to as depth. Then, when a maximum depth is reached, the crawler does not add links from that web page to the queue. To implement maximum depth, we will change the seen variable, which currently tracks visited web pages, into a dictionary to also record the depth the links were found at:
def link_crawler(..., max_depth=4):
    seen = {}
    ...
    if rp.can_fetch(user_agent, url):
        depth = seen.get(url, 0)
        if depth == max_depth:
            # This page was reached via max_depth links, so do not expand it.
            print('Skipping %s due to depth' % url)
            continue
        ...
        for link in get_links(html):
            if re.match(link_regex, link):
                abs_link = urljoin(start_url, link)
                if abs_link not in seen:
                    # Record the depth at which the link was discovered.
                    seen[abs_link] = depth + 1
                    crawl_queue.append(abs_link)
Now, with this feature, we can be confident the crawl will complete eventually. To disable this feature, max_depth can be set to a negative number so the current depth will never be equal to it.
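To see the depth limit working end to end, here is a minimal, self-contained sketch rather than the book's full link_crawler: the robots.txt check, throttling, and the real download logic are omitted, and the download and get_links helpers along with the start URL in the usage comment are placeholder assumptions. Only the seen-dictionary depth logic mirrors the snippet above:

import re
from urllib.parse import urljoin
from urllib.request import urlopen

def download(url):
    # Placeholder downloader; the book's version adds retries, a user agent,
    # and proper error handling.
    try:
        return urlopen(url).read().decode('utf-8', errors='ignore')
    except Exception:
        return ''

def get_links(html):
    # Extract the href value of every anchor tag on the page.
    return re.findall(r'<a[^>]+href=["\'](.*?)["\']', html)

def link_crawler(start_url, link_regex, max_depth=4):
    crawl_queue = [start_url]
    seen = {start_url: 0}              # URL -> depth at which it was found
    while crawl_queue:
        url = crawl_queue.pop()
        depth = seen.get(url, 0)
        if depth == max_depth:         # never true when max_depth is negative
            print('Skipping %s due to depth' % url)
            continue
        html = download(url)
        for link in get_links(html):
            if re.match(link_regex, link):
                abs_link = urljoin(start_url, link)
                if abs_link not in seen:
                    seen[abs_link] = depth + 1
                    crawl_queue.append(abs_link)

# Usage (hypothetical site): follow index/view links, but never more than
# 4 links away from the start page.
# link_crawler('http://example.com/index', r'/(index|view)', max_depth=4)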