
Understanding how to perform HTTP GET requests

One of the richest places to find good data is online. A GET request is the most common way of communicating with an HTTP web server. In this recipe, we will grab all the links from a Wikipedia article and print them to the terminal. To easily grab all the links, we will use a helpful library called HandsomeSoup, which lets us manipulate and traverse a webpage through CSS selectors.

Getting ready

We will be collecting all links from a Wikipedia web page. Make sure to have an Internet connection before running this recipe.

Install the HandsomeSoup CSS selector package, and also install the HXT library if it is not already installed. To do this, use the following commands:

$ cabal install HandsomeSoup
$ cabal install hxt

How to do it...

  1. This recipe requires hxt for parsing HTML and HandsomeSoup for its easy-to-use CSS selectors. Import them as shown in the following code snippet:
    import Text.XML.HXT.Core
    import Text.HandsomeSoup
  2. Define and implement main as follows:
    main :: IO ()
    main = do
  3. Pass in the URL as a string to HandsomeSoup's fromUrl function:
        let doc = fromUrl "http://en.wikipedia.org/wiki/Narwhal"
  4. Select every link inside the element with the bodyContent ID and print the results, as follows (the complete program appears after these steps):
        links <- runX $ doc >>> css "#bodyContent a" ! "href"
        print links
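
Put together, the snippets from the preceding steps form the following complete program; nothing new is introduced here, it is simply the recipe code assembled into one file:

    import Text.XML.HXT.Core
    import Text.HandsomeSoup

    main :: IO ()
    main = do
        -- fromUrl fetches and parses the Wikipedia article
        let doc = fromUrl "http://en.wikipedia.org/wiki/Narwhal"
        -- Select every <a> under #bodyContent and extract its href attribute
        links <- runX $ doc >>> css "#bodyContent a" ! "href"
        -- Print the list of collected URLs to the terminal
        print links

Running the program with runhaskell or compiling it with GHC prints a list of the relative and absolute URLs found in the body of the article.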

How it works…

The HandsomeSoup package makes it easy to run CSS selectors over a parsed page. In this recipe, we run the #bodyContent a selector on a Wikipedia article, which matches every anchor (a) tag that is a descendant of the element with the bodyContent ID. The ! "href" step then extracts each matched link's href attribute, and runX executes the whole arrow pipeline, returning the results as a list of strings.
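
To see how the selector drives the result, here is a small hypothetical variation that is not part of the recipe: swapping the selector for title and using HXT's getText arrow on its child text node returns the page title instead of the links.

    import Text.XML.HXT.Core
    import Text.HandsomeSoup

    -- A minimal sketch, assuming the same Narwhal article as above:
    -- a different CSS selector, extracting element text rather than
    -- an attribute value.
    main :: IO ()
    main = do
        let doc = fromUrl "http://en.wikipedia.org/wiki/Narwhal"
        -- css "title" matches the <title> element; /> getText reads
        -- the text of its child node
        titles <- runX $ doc >>> css "title" /> getText
        print titles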

See also…

Another common way to obtain data online is through POST requests. To find out more, refer to the Learning how to perform HTTP POST requests recipe.
