How the Hadoop distcp Command Copies Files Across Clusters

Published: 2025-01-23  Author: 千家信息网 editor

This article shows how to use the Hadoop distcp command to copy files across clusters. The content is concise and easy to follow, and we hope the detailed walkthrough below proves useful.

Hadoop provides the distcp command for copying data between different Hadoop clusters.

Usage format: hadoop distcp -pbc hdfs://namenode1/test hdfs://namenode2/test (here -pbc asks distcp to preserve block size and checksum type; see the -p option below).
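For instance (a minimal sketch; the paths are hypothetical), an incremental copy that brings over only missing files, preserves the default set of attributes, and caps the job at 20 map tasks with 10 MB of bandwidth per map could look like:

hadoop distcp -update -p -m 20 -bandwidth 10 hdfs://namenode1/user/data hdfs://namenode2/user/data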

A distcp copy runs as a map-only MapReduce job: it uses map tasks only, with no reduce phase.

usage: distcp OPTIONS [source_path...] <target_path>

OPTIONS

-append                Reuse existing data in target files and append new data to them if possible
-async                 Should distcp execution be blocking
-atomic                Commit all changes or none
-bandwidth <arg>       Specify bandwidth per map in MB
-delete                Delete from target, files missing in source
-diff <arg>            Use snapshot diff report to identify the difference between source and target
-f <arg>               List of files that need to be copied
-filelimit <arg>       (Deprecated!) Limit number of files copied to <= n
-i                     Ignore failures during copy
-log <arg>             Folder on DFS where distcp execution logs are saved
-m <arg>               Max number of concurrent maps to use for copy
-mapredSslConf <arg>   Configuration for ssl config file, to use with hftps://
-overwrite             Choose to overwrite target files unconditionally, even if they exist.
-p <arg>               Preserve status (rbugpcaxt: replication, block-size, user, group, permission, checksum-type, ACL, XATTR, timestamps). If -p is specified with no <arg>, then preserves replication, block size, user, group, permission, checksum type and timestamps. raw.* xattrs are preserved when both the source and destination paths are in the /.reserved/raw hierarchy (HDFS only). raw.* xattr preservation is independent of the -p flag. Refer to the DistCp documentation for more details.
-sizelimit <arg>       (Deprecated!) Limit number of files copied to <= n bytes
-skipcrccheck          Whether to skip CRC checks between source and target paths.
-strategy <arg>        Copy strategy to use. Default is dividing work based on file sizes
-tmp <arg>             Intermediate work path to be used for atomic commit
-update                Update target, copying only missing files or directories
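For example (again a sketch with hypothetical paths), several of the options above can be combined to keep a target directory in sync with its source, removing files that no longer exist at the source and preserving user, group, and permissions:

hadoop distcp -update -delete -pugp hdfs://namenode1/warehouse/logs hdfs://namenode2/warehouse/logs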

Because their RPC protocol versions differ, clusters running different Hadoop versions cannot copy to each other directly with the command hadoop distcp hdfs://namenode1/test hdfs://namenode2/test.

For copying between different Hadoop versions, use HftpFileSystem instead. HFTP is a read-only file system, so DistCp must be run on the destination cluster (more precisely, on TaskTrackers that can write to the destination cluster). The source is specified as hftp://<dfs.http.address>/<path> (by default, dfs.http.address is <namenode>:50070).
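As an illustration (a sketch; the host names and paths are placeholders), DistCp would run on the destination cluster and read the source over HFTP like this:

hadoop distcp hftp://namenode1:50070/test hdfs://namenode2/test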

That is how the Hadoop distcp command copies files across clusters. Did you pick up some new knowledge or skills? If you would like to learn more and broaden your knowledge, follow our industry news channel.
