Introduction
In the previous chapter, we discussed the layers of a data-driven system and explained the important storage requirements for each layer. The storage containers in the data layers of an AI solution serve one main purpose: to build and train models that can run in a production environment. In this chapter, we will discuss how to transfer data between the layers of a pipeline so that it is ready to be used to train a model and, ultimately, to produce an actual forecast (called the execution, or scoring, of the model).
In an Artificial Intelligence (AI) system, data is continuously updated. Once data enters the system via an upload, an application programming interface (API), or a data stream, it has to be stored securely and typically goes through a few ETL (extract, transform, load) steps. In systems that handle streaming data, the incoming data has to be directed into a stable and usable data pipeline. Data transformations have to be managed, scheduled, and orchestrated. Furthermore, the lineage of the data has to be recorded so that the origins of any data point in a report or application can be traced back. This chapter explains the data preparation (sometimes called pre-processing) mechanisms that ensure raw data can be used for machine learning by data scientists. This matters because raw data is rarely in a form that models can consume directly. We will elaborate on the architecture and technology as explained by the layered model in Chapter 1, Data Storage Fundamentals. To start with, let's dive into the details of ETL.
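Before we do, it may help to see an ETL pass in miniature. The following is a minimal sketch in Python with pandas, assuming a hypothetical CSV landing zone and a curated zone in Parquet format; the file paths and column names (customer_id, amount, event_time) are illustrative assumptions, not taken from a specific dataset in this book.

```python
# A minimal, hypothetical ETL sketch: extract raw records from a
# landing zone, transform them (drop incomplete rows, enforce types),
# and load the result into a curated zone ready for model training.
# Requires pandas; writing Parquet additionally requires pyarrow.
import pandas as pd

def extract(path: str) -> pd.DataFrame:
    """Extract: read raw data from the landing zone."""
    return pd.read_csv(path)

def transform(raw: pd.DataFrame) -> pd.DataFrame:
    """Transform: remove unusable rows and enforce column types."""
    cleaned = raw.dropna(subset=["customer_id", "amount"]).copy()
    cleaned["amount"] = cleaned["amount"].astype(float)
    cleaned["event_time"] = pd.to_datetime(cleaned["event_time"])
    return cleaned

def load(clean: pd.DataFrame, path: str) -> None:
    """Load: write the prepared data to the curated zone."""
    clean.to_parquet(path, index=False)

if __name__ == "__main__":
    # One pipeline run: landing zone in, curated zone out.
    load(transform(extract("landing/raw_events.csv")),
         "curated/events.parquet")
```

In a real system, each of these three functions would be a managed, scheduled task in an orchestration tool rather than a direct function call, which is exactly the machinery this chapter goes on to describe.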