
Building complex data pipelines

As soon as the data tools that we're building grow into something bigger than a simple script, it's useful to split the data pre-processing tasks into small units, in order to map all the steps and dependencies of the data pipeline.

By the term data pipeline, we mean a sequence of data processing operations that cleans, augments, and manipulates the original data, transforming it into something digestible by the analytics engine. Any non-trivial data analytics project will require a data pipeline composed of a number of steps.

In the prototyping phase, it is common to split these steps into different scripts, which are then run individually, for example:

$ python download_some_data.py
$ python clean_some_data.py
$ python augment_some_data.py

Each script in this example produces the output for the following script, so there are dependencies between the different steps. We can refactor the data processing scripts into one large script that does everything, and then run it in one go:

$ python do_everything.py

The content of such a script might look similar to the following code:

if __name__ == '__main__':
    # Each function wraps the logic of one of the original scripts
    download_some_data()
    clean_some_data()
    augment_some_data()

Each of the preceding functions contains the main logic of the corresponding initial script. The problem with this approach is that errors can occur at any point in the data pipeline, so we also have to include a lot of boilerplate code with try and except to keep control over the exceptions that might occur. Moreover, parameterizing this kind of code might feel a little clumsy.
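For instance, a minimal sketch of what this boilerplate might look like, wrapping each of the step functions from the previous snippet in its own try and except block and logging the failures, could be the following:

import logging
import sys

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

if __name__ == '__main__':
    try:
        download_some_data()
    except Exception as e:
        logger.error("Download step failed: %s", e)
        sys.exit(1)
    try:
        clean_some_data()
    except Exception as e:
        logger.error("Cleaning step failed: %s", e)
        sys.exit(1)
    try:
        augment_some_data()
    except Exception as e:
        logger.error("Augmentation step failed: %s", e)
        sys.exit(1)

Each step needs its own block of error handling, and any option we want to expose (file names, dates, and so on) has to be threaded manually through sys.argv or a library such as argparse, which is where this approach starts to feel clumsy.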

In general, when moving from prototyping to something more stable, it's worth considering the use of a data orchestrator, also called a workflow manager. A good example of this kind of tool in Python is Luigi, an open source project introduced by Spotify. The advantages of using a data orchestrator such as Luigi include the following:

  • Task templates: Each data task is defined as a class with a few methods that define how the task runs, its dependencies, and its output (see the sketch after this list)

  • Dependency graph: Visual tools assist the data engineer in visualizing and understanding the dependencies between tasks
  • Recovery from intermediate failure: If the data pipeline fails halfway through the tasks, it's possible to restart it from the last consistent state

  • Integration with the command-line interface, as well as with system job schedulers such as cron
  • Customizable error reporting
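
As an illustration of the task template idea mentioned in the first bullet, the following is a minimal sketch of how two dependent tasks could be defined with Luigi; the file names and the CleanData and AugmentData class names are hypothetical examples, while requires(), output(), and run() are the methods Luigi uses to declare a task's dependencies, output, and logic:

import luigi

class CleanData(luigi.Task):
    # Hypothetical task: reads a raw file and writes a cleaned version
    def output(self):
        # The existence of this file marks the task as complete
        return luigi.LocalTarget('clean_data.csv')

    def run(self):
        with open('raw_data.csv') as fin, self.output().open('w') as fout:
            for line in fin:
                fout.write(line.strip().lower() + '\n')

class AugmentData(luigi.Task):
    # Depends on CleanData: Luigi runs it first if its output is missing
    def requires(self):
        return CleanData()

    def output(self):
        return luigi.LocalTarget('augmented_data.csv')

    def run(self):
        with self.input().open('r') as fin, self.output().open('w') as fout:
            for line in fin:
                fout.write(line.rstrip() + ',extra_field\n')

if __name__ == '__main__':
    luigi.run()

A script like this can be launched from the command line with, for example, python data_pipeline.py AugmentData --local-scheduler (the script name here is just an example), and Luigi will run CleanData first if its output is missing. Re-running the same command after a failure only executes the tasks whose output files do not exist yet, which is how recovery from intermediate failures works in practice.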

We won't dig into all the features of Luigi, as a detailed discussion would go beyond the scope of this book, but readers are encouraged to take a look at this tool and use it to produce a more elegant, reproducible, maintainable, and easily expandable data pipeline.
