Avoiding spider traps

Currently, our crawler will follow any link it hasn't seen before. However, some websites dynamically generate their content and can have an effectively infinite number of web pages. For example, if a website has an online calendar with links to the next month and year, then the next month will also have links to its next month, and so on for as far ahead as the widget is configured (this can be a LONG time). The site may offer the same functionality with simple pagination navigation, essentially paginating over empty search result pages until the maximum page number is reached. This situation is known as a spider trap.

A simple way to avoid getting stuck in a spider trap is to track how many links have been followed to reach the current web page, which we will refer to as depth. Then, when a maximum depth is reached, the crawler does not add links from that web page to the queue. To implement maximum depth, we will change the seen variable, which currently tracks visited web pages, into a dictionary to also record the depth the links were found at:

def link_crawler(..., max_depth=4):
    seen = {}
    ...
    if rp.can_fetch(user_agent, url):
        # the start URL has no recorded depth, so it defaults to 0
        depth = seen.get(url, 0)
        if depth == max_depth:
            print('Skipping %s due to depth' % url)
            continue
        ...
        for link in get_links(html):
            if re.match(link_regex, link):
                abs_link = urljoin(start_url, link)
                if abs_link not in seen:
                    # record the depth at which the link was found
                    seen[abs_link] = depth + 1
                    crawl_queue.append(abs_link)

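The listing above only shows the lines that change; the queue handling, robots.txt parser, and download call are elided. For experimentation, the following is a minimal, self-contained sketch of the same depth-limiting idea. It assumes a simple urllib-based download helper and a regex-based get_links similar to the one used earlier, and it deliberately leaves out the robots.txt check, user-agent handling, and throttling from the full link_crawler:

import re
from urllib.parse import urljoin
from urllib.request import urlopen

def download(url):
    # Fetch a URL and return its HTML, or an empty string on failure
    try:
        return urlopen(url).read().decode('utf-8', errors='replace')
    except Exception as e:
        print('Download error:', e)
        return ''

def get_links(html):
    # Return the href values of all anchor tags on the page
    webpage_regex = re.compile("""<a[^>]+href=["'](.*?)["']""", re.IGNORECASE)
    return webpage_regex.findall(html)

def link_crawler(start_url, link_regex, max_depth=4):
    crawl_queue = [start_url]
    # seen maps each URL to the depth at which it was first discovered
    seen = {start_url: 0}
    while crawl_queue:
        url = crawl_queue.pop()
        depth = seen.get(url, 0)
        if depth == max_depth:
            print('Skipping %s due to depth' % url)
            continue
        html = download(url)
        for link in get_links(html):
            if re.match(link_regex, link):
                abs_link = urljoin(start_url, link)
                if abs_link not in seen:
                    # record the depth the new link was found at
                    seen[abs_link] = depth + 1
                    crawl_queue.append(abs_link)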
Now, with this feature, we can be confident the crawl will complete eventually. To disable this feature, max_depth can be set to a negative number so the current depth will never be equal to it.
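For example, using the simplified signature from the sketch above (the URL and link pattern here are only placeholders), the depth check can be switched off like this:

# depth starts at 0 and only ever increases, so it can never equal -1;
# no page is skipped and the crawler behaves as if there were no limit
link_crawler('http://example.webscraping.com', '/(index|view)', max_depth=-1)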
