- Mastering Scala Machine Learning
- Alex Kozlov
Working with Scala and Spark Notebooks
Often the most frequent values or a five-number summary are not enough to form a first understanding of the data. The term descriptive statistics is very generic and may refer to quite complex ways of describing the data: quantiles, a Pareto chart, or, when more than one attribute is analyzed, correlations are also examples of descriptive statistics. When sharing all these ways of looking at data aggregates, in many cases it is also important to share the specific computations used to obtain them.
The Scala Notebook (https://github.com/Bridgewater/scala-notebook) and the Spark Notebook (https://github.com/andypetrella/spark-notebook) record the whole transformation path, and the results can be shared as a JSON-based *.snb file. The Spark Notebook project can be downloaded from http://spark-notebook.io, and I will provide a sample Chapter01.snb file with the book. I will use Spark, which I will cover in more detail in Chapter 3, Working with Spark and MLlib.
For this particular example, Spark will run in local mode. Even in local mode Spark can exploit parallelism on your workstation, but it is limited to the number of cores and hyperthreads available on your laptop or workstation. With a simple configuration change, however, Spark can be pointed at a distributed set of machines and use resources across a set of nodes.
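To make the local versus distributed distinction concrete, here is a minimal sketch of how the master URL controls where Spark runs when you create a context yourself; the Spark Notebook creates its own context from its configuration, and the cluster host name below is only a placeholder:

import org.apache.spark.{SparkConf, SparkContext}

// Local mode: use all available cores on this machine.
val conf = new SparkConf()
  .setAppName("Chapter01")
  .setMaster("local[*]")
  // To run on a standalone cluster instead, point the master at it, for example:
  // .setMaster("spark://master-host:7077")

val sc = new SparkContext(conf)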
Here is the set of commands to download the Spark Notebook and copy the necessary files from the code repository:
[akozlov@Alexanders-MacBook-Pro]$ wget http://s3.eu-central-1.amazonaws.com/spark-notebook/zip/spark-notebook-0.6.3-scala-2.11.7-spark-1.6.1-hadoop-2.6.4-with-hive-with-parquet.zip
...
[akozlov@Alexanders-MacBook-Pro]$ unzip -d ~/ spark-notebook-0.6.3-scala-2.11.7-spark-1.6.1-hadoop-2.6.4-with-hive-with-parquet.zip
...
[akozlov@Alexanders-MacBook-Pro]$ ln -sf ~/spark-notebook-0.6.3-scala-2.11.7-spark-1.6.1-hadoop-2.6.4-with-hive-with-parquet ~/spark-notebook
[akozlov@Alexanders-MacBook-Pro]$ cp chapter01/notebook/Chapter01.snb ~/spark-notebook/notebooks
[akozlov@Alexanders-MacBook-Pro]$ cp chapter01/data/kddcup/kddcup.parquet ~/spark-notebook
[akozlov@Alexanders-MacBook-Pro]$ cd ~/spark-notebook
[akozlov@Alexanders-MacBook-Pro]$ bin/spark-notebook
Play server process ID is 2703
16/04/14 10:43:35 INFO play: Application started (Prod)
16/04/14 10:43:35 INFO play: Listening for HTTP on /0:0:0:0:0:0:0:0:9000
...
Now you can open the notebook at http://localhost:9000 in your browser, as shown in the following screenshot:

Figure 01-2. The first page of the Spark Notebook with the list of notebooks
Open the Chapter01 notebook by clicking on it. The statements are organized into cells and can be executed by clicking on the small right arrow at the top, as shown in the following screenshot; alternatively, you can run all the cells at once by navigating to Cell | Run All:

Figure 01-3. Executing the first few cells in the notebook
First, we will look at the discrete variables. For example, to get the distribution of the labels, issue the following code:
// Count the number of records per label value
val labelCount = df.groupBy("lbl").count().collect
// Convert the resulting rows into (label, count) pairs
labelCount.toList.map(row => (row.getString(0), row.getLong(1)))
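The df DataFrame used above is assumed to be the KDD Cup data loaded from the Parquet file copied into the Spark Notebook directory earlier; a minimal sketch of how such a setup cell might look (the exact code in Chapter01.snb may differ) is:

// Assumed setup cell: load the KDD Cup Parquet file into a DataFrame and cache it
val df = sqlContext.read.parquet("kddcup.parquet").cache
df.printSchema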
The first time I read the dataset, it took about a minute on a MacBook Pro, but Spark caches the data in memory, so subsequent aggregation runs take only about a second. The Spark Notebook provides you with the distribution of the values, as shown in the following screenshot:

Figure 01-4. Computing the distribution of values for a categorical field
I can also look at crosstab counts for pairs of discrete variables using org.apache.spark.sql.DataFrameStatFunctions (http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.sql.DataFrameStatFunctions), which gives me an idea of the interdependencies between the variables; note that the object does not yet support computing correlation measures such as chi-square:

Figure 01-5. Contingency table or crosstab
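The contingency table in Figure 01-5 can be reproduced with the crosstab method of DataFrameStatFunctions; the following is a sketch, assuming the KDD Cup column names service and flag:

// Crosstab counts for two categorical columns
// (the column names are assumptions based on the KDD Cup schema)
val ct = df.stat.crosstab("service", "flag")
ct.show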
However, we can see that the most popular service is private and that it correlates well with the SF flag. Another way to analyze dependencies is to look at the 0 entries. For example, the S2 and S3 flags are clearly related to the SMTP and FTP traffic, since all the other entries are 0.
Of course, the most interesting correlations are with the target variable, but these are better discovered by supervised learning algorithms that I will cover in Chapter 3, Working with Spark and MLlib, and Chapter 5, Regression and Classification.

Figure 01-6. Computing simple aggregations using org.apache.spark.sql.DataFrameStatFunctions.
Analogously, we can compute correlations for numerical variables with the dataFrame.stat.corr() and dataFrame.stat.cov() functions (refer to Figure 01-6). In this case, the class supports the Pearson correlation coefficient. Alternatively, we can use standard SQL syntax on the Parquet file directly:
sqlContext.sql("SELECT lbl, protocol_type, min(duration), avg(duration), stddev(duration), max(duration) FROM parquet.`kddcup.parquet` group by lbl, protocol_type")
Finally, I promised to compute percentiles. Computing percentiles usually involves sorting the whole dataset, which is expensive; however, if the percentile of interest is close to either end of the distribution, it is usually possible to optimize the computation:
// Cache the UDP connection durations as an RDD of Longs
val pct = sqlContext.sql("SELECT duration FROM parquet.`kddcup.parquet` WHERE protocol_type = 'udp'").rdd.map(_.getLong(0)).cache
// The 95th percentile: take the top 5% of the values and pick the smallest of them
pct.top((0.05*pct.count).toInt).last
Computing the exact percentiles for a more generic case is more computationally expensive and is provided as a part of the Spark Notebook example code.
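The exact computation is not reproduced in this excerpt; a minimal sketch of one way to do it, under the assumption that a full sort and shuffle of the RDD is acceptable, might look like this:

// A sketch of an exact percentile for the generic case: sort the full RDD,
// index it, and look up the value at the target rank. This shuffles and sorts
// the whole dataset, which is why it is more expensive than the top-k shortcut above.
def percentile(data: org.apache.spark.rdd.RDD[Long], p: Double): Long = {
  val n = data.count
  val rank = math.min(n - 1, (p * n).toLong)
  data.sortBy(identity).zipWithIndex.filter(_._2 == rank).map(_._1).first
}

// Hypothetical usage: percentile(pct, 0.5) returns the median of the cached durations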