- Learning Spark SQL
- Aurobindo Sarkar
Understanding Resilient Distributed Datasets (RDDs)
RDDs are Spark's primary distributed Dataset abstraction. An RDD is an immutable, distributed, lazily evaluated, type-inferred, and cacheable collection of data. Prior to execution, developer code (written using higher-level constructs such as the SQL, DataFrame, and Dataset APIs) is converted into a DAG of RDDs ready for execution.
You can create RDDs by parallelizing an existing collection of data or by accessing a Dataset residing in an external storage system, such as the file system or various Hadoop-based data sources. A parallelized collection forms a distributed Dataset that enables parallel operations on it.
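For example, a minimal sketch of both creation paths (the collection and the file path here are illustrative):
scala> val numbersRDD = sc.parallelize(1 to 100, 4)          // from an in-memory collection, with 4 partitions
scala> val linesRDD = sc.textFile("hdfs:///data/input.txt")  // from an external storage system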
You can create an RDD from an input file with the number of partitions specified, as shown:
scala> val cancerRDD = sc.textFile("file:///Users/aurobindosarkar/Downloads/breast-cancer-wisconsin.data", 4)
scala> cancerRDD.partitions.size
res37: Int = 4
You can implicitly convert the RDD to a DataFrame by importing spark.implicits._ and using the toDF() method:
scala> import spark.implicits._
scala> val cancerDF = cancerRDD.toDF()
To create a DataFrame with a specific schema, we define a Row object for the rows contained in the DataFrame. Additionally, we split the comma-separated data, convert it to a list of fields, and then map it to the Row object. Finally, we use the createDataFrame() method to create the DataFrame with the specified schema:
def row(line: List[String]): Row = {
  Row(line(0).toLong, line(1).toInt, line(2).toInt, line(3).toInt,
    line(4).toInt, line(5).toInt, line(6).toInt, line(7).toInt,
    line(8).toInt, line(9).toInt, line(10).toInt)
}
val data = cancerRDD.map(_.split(",").to[List]).map(row)
val cancerDF = spark.createDataFrame(data, recordSchema)
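The recordSchema referenced here is assumed to have been defined earlier in the chapter. A minimal sketch of such a schema, with illustrative column names for the eleven fields of the Wisconsin breast cancer data (the names are assumptions, not necessarily the book's originals), might look like this:
import org.apache.spark.sql.Row
import org.apache.spark.sql.types._

// Hypothetical schema: one Long ID column followed by ten Int attribute columns,
// matching the types produced by the row() function above.
val recordSchema = StructType(Seq(
  StructField("sample", LongType, false),
  StructField("clumpThickness", IntegerType, false),
  StructField("uniformityCellSize", IntegerType, false),
  StructField("uniformityCellShape", IntegerType, false),
  StructField("marginalAdhesion", IntegerType, false),
  StructField("singleEpithelialCellSize", IntegerType, false),
  StructField("bareNuclei", IntegerType, false),
  StructField("blandChromatin", IntegerType, false),
  StructField("normalNucleoli", IntegerType, false),
  StructField("mitoses", IntegerType, false),
  StructField("classLabel", IntegerType, false)))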
Further, we can easily convert the preceding DataFrame to a Dataset using the case class defined earlier:
scala> val cancerDS = cancerDF.as[CancerClass]
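CancerClass was likewise defined earlier in the book; a plausible definition (the field names are assumptions and must match the DataFrame's column names for as[CancerClass] to resolve) would be:
case class CancerClass(sample: Long, clumpThickness: Int, uniformityCellSize: Int,
  uniformityCellShape: Int, marginalAdhesion: Int, singleEpithelialCellSize: Int,
  bareNuclei: Int, blandChromatin: Int, normalNucleoli: Int, mitoses: Int,
  classLabel: Int)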
RDD data is logically divided into a set of partitions; additionally, all input, intermediate, and output data is also represented as partitions. The number of RDD partitions defines the level of data fragmentation. These partitions are also the basic units of parallelism. Spark execution jobs are split into multiple stages, and because each task within a stage operates on one partition at a time, it is very important to tune the number of partitions. Fewer partitions than available executor cores means your cluster could be under-utilized, while an excessive number of partitions can hurt performance due to higher disk and network I/O.
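To adjust the partition count at runtime, you can use repartition() and coalesce(); a brief sketch using cancerRDD from above:
scala> val repartitionedRDD = cancerRDD.repartition(8)  // increases partitions via a full shuffle
scala> val coalescedRDD = cancerRDD.coalesce(2)         // reduces partitions, avoiding a full shuffle
scala> coalescedRDD.partitions.size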
The programming interface to RDDs supports two types of operations: transformations and actions. Transformations create a new Dataset from an existing one, while actions return a value or the result of a computation. All transformations are evaluated lazily: the actual execution occurs only when an action is invoked to compute a result. Rather than replicating data across multiple machines, transformations form a lineage graph, and this graph-based approach enables an efficient fault-tolerance model. For example, if an RDD partition is lost, it can be recomputed from the lineage graph.
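A brief illustration of this lazy behavior, assuming (as in this dataset) that the eleventh comma-separated field is the class label and the value 4 denotes malignant:
scala> val malignantRDD = cancerRDD.filter(_.split(",")(10) == "4")  // transformation: only extends the lineage graph
scala> malignantRDD.count()                                          // action: triggers the actual computation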
You can control data persistence (for example, caching) and specify placement preferences for RDD partitions, and then use specific operators to manipulate them. By default, Spark persists RDDs in memory, but it can spill them to disk if sufficient RAM isn't available. Caching can improve performance by orders of magnitude; however, it is often memory-intensive. Other persistence options include storing RDDs on disk and replicating them across the nodes of your cluster. Persistent RDDs can be stored in memory as deserialized or serialized Java objects; the deserialized option is faster, while the serialized option is more memory-efficient (but slower). Unused RDDs are automatically evicted from the cache, but if a specific RDD is no longer required, you can also release it explicitly.
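A minimal sketch of these persistence options, using the RDD persistence API:
scala> import org.apache.spark.storage.StorageLevel
scala> cancerRDD.persist(StorageLevel.MEMORY_ONLY_SER)  // serialized in-memory storage: compact, but slower to access
scala> cancerRDD.count()                                // the first action materializes the cache
scala> cancerRDD.unpersist()                            // explicitly release the RDD when it is no longer needed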