
There's more...

Now that you have Jupyter on your machine, and assuming you followed the steps of either the Installing Spark from sources or the Installing Spark from binaries recipe, you should be able to start using Jupyter to interact with PySpark.

To refresh your memory: as part of the Spark installation scripts, we appended two environment variables to the bash profile file, PYSPARK_DRIVER_PYTHON and PYSPARK_DRIVER_PYTHON_OPTS. We set the former to use jupyter and the latter to start a notebook service.
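As a minimal sketch, the appended lines would look something like the following (the exact options the installation scripts wrote, such as a custom notebook port, may differ on your machine):

```shell
# Tell PySpark to use Jupyter as the driver's Python front end...
export PYSPARK_DRIVER_PYTHON=jupyter
# ...and pass it the option that starts the notebook service.
export PYSPARK_DRIVER_PYTHON_OPTS='notebook'
```

With these two variables exported, running pyspark no longer drops you into a plain REPL; instead, it launches the Jupyter Notebook server with the Spark context wired in.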

Now open your Terminal and type:

pyspark

PySpark will launch and the Jupyter Notebook server will start. When you open your browser and navigate to http://localhost:6661, you should see a page not that different from the one in the following screenshot:
