
Chapter 3. Input Formats and Schema

The aim of this chapter is to demonstrate how to load data from its raw format into different schemas, enabling a variety of downstream analytics to be run over the same data. When writing analytics, or better still, building libraries of reusable software, you generally have to work with interfaces of fixed input types. Flexibility in how you transition data between schemas, depending on the purpose, can therefore deliver considerable downstream value, both by widening the types of analysis possible and by allowing existing code to be reused.

Our primary objective is to learn about the data format features that ship with Spark, although we will also delve into the finer points of data management by introducing proven methods that will improve your data handling and productivity. After all, you will most likely be required to formalize your work at some point, and knowing how to avoid the long-term pitfalls is invaluable both while writing analytics and long after.

With this in mind, we will use this chapter to look at the traditionally well-understood area of data schemas. We will cover key areas of traditional database modeling and explain how some of these cornerstone principles still apply to Spark.

In addition, while honing our Spark skills, we will analyze the GDELT data model and show how to store this large dataset in an efficient and scalable manner.

We will cover the following topics:

  • Dimensional modeling: benefits and weaknesses in relation to Spark
  • Focus on the GDELT model
  • Lifting the lid on schema-on-read
  • Avro object model
  • Parquet storage model

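As a conceptual preview of schema-on-read, which we lift the lid on later in this chapter, the following toy sketch (plain Python, not Spark, with hypothetical GDELT-flavored field names) shows the core idea: the same raw bytes are stored once with no schema attached, and each analytic applies its own schema at read time.

```python
import csv
import io

# The same raw data, stored once with no schema attached.
raw = "20230101,US,PROTEST,3.5\n20230102,FR,ELECTION,1.2\n"

def read_with_schema(raw_text, schema):
    """Schema-on-read: the schema (a list of (name, cast) pairs)
    is applied when the data is read, not when it is written."""
    rows = csv.reader(io.StringIO(raw_text))
    return [
        {name: cast(value) for (name, cast), value in zip(schema, row)}
        for row in rows
    ]

# Two different views of the same bytes, chosen per analytic.
event_schema = [("date", str), ("country", str), ("type", str), ("tone", float)]
slim_schema = [("date", str), ("country", str)]  # ignores trailing columns

events = read_with_schema(raw, event_schema)  # full typed records
slim = read_with_schema(raw, slim_schema)     # narrow projection
```

The point is that neither schema is privileged: the raw data is the single source of truth, and each downstream consumer decides how to interpret it.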
Let's start with some best practice.
