
How it works

The program connects to SQS and opens the queue. Opening the queue for reading is done with the same sqs.create_queue call used to create it; if the queue already exists, the call simply returns that queue.
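This step can be sketched as follows. The helper name is mine, but create_queue and its QueueUrl response key are the actual boto3 SQS client API; the client itself (from boto3.client("sqs")) is passed in:

```python
def open_queue(sqs, queue_name):
    """Return the URL of queue_name, creating the queue if needed.

    sqs is a boto3 SQS client. create_queue is effectively idempotent:
    called for a queue that already exists (with the same attributes),
    it simply returns that queue's URL instead of raising an error.
    """
    response = sqs.create_queue(QueueName=queue_name)
    return response["QueueUrl"]
```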

It then enters a loop calling sqs.receive_message, passing the URL of the queue, the maximum number of messages to receive per read, and the maximum time in seconds to wait if no messages are available.
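One iteration of that loop might look like the sketch below, assuming a boto3 SQS client. The helper name and the specific MaxNumberOfMessages and WaitTimeSeconds values are illustrative; the parameter names and the Messages/Body/ReceiptHandle response keys are the real boto3 API:

```python
def receive_one(sqs, queue_url):
    """Long-poll the queue for a single message.

    WaitTimeSeconds turns this into a long poll: when the queue is
    empty, the call blocks for up to that many seconds rather than
    returning immediately. Returns (body, receipt_handle), or
    (None, None) if no message arrived within the wait window.
    """
    response = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=1,  # read at most one message per call
        WaitTimeSeconds=20,     # maximum wait when the queue is empty
    )
    messages = response.get("Messages", [])
    if not messages:
        return None, None
    msg = messages[0]
    return msg["Body"], msg["ReceiptHandle"]
```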

If a message is read, the URL is retrieved from the message, and scraping techniques are used to fetch the page at that URL and extract the planet's name and information about its albedo.
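The extraction itself depends on the markup of the page being scraped, which is not shown here. As a hypothetical stand-in (the function, the regex, and the assumed "albedo followed by a number" pattern are all mine, not the recipe's actual parser), a simple pattern match over the fetched page text could look like:

```python
import re

# Assumed pattern: the word "albedo" followed by the first numeric value.
_ALBEDO_RE = re.compile(r"albedo[^0-9]*([0-9.]+)", re.IGNORECASE)

def extract_albedo(page_text):
    """Return the first albedo value found in page_text, or None.

    A stand-in for a real HTML parser: it only illustrates the
    "scrape the page, pull out the albedo" step of the pipeline.
    """
    match = _ALBEDO_RE.search(page_text)
    return float(match.group(1)) if match else None
```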

Note that we retrieve the receipt handle of the message. This is needed to delete the message from the queue. If we do not delete the message, it becomes visible in the queue again after the visibility timeout expires. So if our scraper crashes before performing this acknowledgement, SQS makes the message available again for another scraper to process (or for the same scraper once it is back up).
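The acknowledgement itself is a single delete_message call using that receipt handle. The helper name is mine; QueueUrl and ReceiptHandle are the real boto3 parameter names:

```python
def acknowledge(sqs, queue_url, receipt_handle):
    """Delete a processed message so SQS does not redeliver it.

    If this is never called (for example, the scraper crashes while
    processing the page), the message reappears in the queue once its
    visibility timeout expires, and another worker can pick it up.
    """
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=receipt_handle)
```

Calling acknowledge only after the page has been fully scraped gives at-least-once processing: a crash before the call loses no work, at the cost of possibly scraping the same URL twice.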
