
Importing data from another Hadoop cluster

Sometimes, we may want to copy data from one HDFS cluster to another for development, testing, or production migration. In this recipe, we will learn how to copy data between two HDFS clusters.

Getting ready

To perform this recipe, you should already have a running Hadoop cluster.

How to do it...

Hadoop provides a utility called DistCp, which helps us copy data from one cluster to another. Using this utility is as simple as copying from one folder to another:

hadoop distcp hdfs://hadoopCluster1:9000/source hdfs://hadoopCluster2:9000/target

This runs a MapReduce job to copy data from one cluster to the other. You can also specify multiple source paths to be copied to the target. There are a couple of other options that we can also use:

  • -update: When we use DistCp with the update option, it copies only those files from the source that are missing from the target or differ from it.
  • -overwrite: When we use DistCp with the overwrite option, it unconditionally overwrites the files in the target directory with those from the source.

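As a sketch of how these options are typically invoked (the cluster host names, ports, and paths here are placeholders; substitute your own NameNode addresses):

```shell
# Copy only files that are missing from the target or differ from it:
hadoop distcp -update hdfs://hadoopCluster1:9000/source hdfs://hadoopCluster2:9000/target

# Unconditionally replace files on the target with the source copies:
hadoop distcp -overwrite hdfs://hadoopCluster1:9000/source hdfs://hadoopCluster2:9000/target
```

Note that -update compares file size and checksum, so an unchanged file is not recopied, which makes repeated synchronization runs much cheaper.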
How it works...

When DistCp is executed, it uses MapReduce to copy the data, which also provides error handling and reporting. It expands the list of source files and directories into input for the map tasks. When copying from multiple sources, collisions at the destination are resolved based on the option (-update or -overwrite) provided; by default, a file that already exists at the target is skipped. Once copying is complete, the number of skipped files is reported.

Note

You can read more on DistCp at https://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html.
