How to Install and Configure HUE


This article demonstrates how to install and configure HUE. The content is concise and clearly organized, and should help clear up any confusion; let's work through "How to Install and Configure HUE" together.

HUE Installation and Configuration

1. Download HUE: http://cloudera.github.io/hue/docs-3.0.0/manual.html#_hadoop_configuration
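The docs do not prescribe an install location, but the build steps below assume the source tree lives at /opt/hue. For example, with a hue-3.10.0 release tarball (the version used later in this article; the tarball name is an assumption):

$ tar -xzf hue-3.10.0.tgz -C /opt      # extract the release tarball (name assumed)
$ ln -s /opt/hue-3.10.0 /opt/hue       # symlink to match the /opt/hue path used below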

2. Install HUE's build dependencies (as root). Package names differ between Redhat and Ubuntu:

Redhat                  | Ubuntu
------------------------|----------------------------
gcc                     | gcc
g++                     | g++
libxml2-devel           | libxml2-dev
libxslt-devel           | libxslt-dev
cyrus-sasl-devel        | libsasl2-dev
cyrus-sasl-gssapi       | libsasl2-modules-gssapi-mit
mysql-devel             | libmysqlclient-dev
python-devel            | python-dev
python-setuptools       | python-setuptools
python-simplejson       | python-simplejson
sqlite-devel            | libsqlite3-dev
ant                     | ant
krb5-devel              | libkrb5-dev
libtidy                 | libtidy-0.99-0 (unit tests only)
mvn                     | mvn (from the maven2 package or a tarball)
openldap-devel          | openldap-dev / libldap2-dev

$ yum install -y gcc gcc-c++ libxml2-devel libxslt-devel cyrus-sasl-devel cyrus-sasl-gssapi mysql-devel python-devel python-setuptools python-simplejson sqlite-devel ant krb5-devel libtidy openldap-devel    ## Redhat column only; g++ is provided by gcc-c++, and Maven is installed separately
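On Ubuntu, a sketch of the equivalent installation using the package names from the Ubuntu column above (names can vary slightly between Ubuntu releases):

$ apt-get install -y gcc g++ libxml2-dev libxslt-dev libsasl2-dev libsasl2-modules-gssapi-mit libmysqlclient-dev python-dev python-setuptools python-simplejson libsqlite3-dev ant libkrb5-dev libtidy-0.99-0 libldap2-dev maven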

3. Modify the pom.xml file

$ vim /opt/hue/maven/pom.xml

a.) Update the Hadoop and Spark versions:

<hadoop-mr1.version>2.6.0</hadoop-mr1.version>
<hadoop.version>2.6.0</hadoop.version>
<spark.version>1.4.0</spark.version>
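To confirm where these properties sit before editing, a quick grep works (property names here follow the layout of Hue's maven/pom.xml; adjust if your version differs):

$ grep -n "hadoop.version\|spark.version" /opt/hue/maven/pom.xml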

b.) Change hadoop-core to hadoop-common:

<artifactId>hadoop-common</artifactId>

c.) Change the hadoop-test version to 1.2.1:

<artifactId>hadoop-test</artifactId>
<version>1.2.1</version>

d.) Delete the two ThriftJobTrackerPlugin.java files, located in the following two directories:

/usr/hdp/hue/desktop/libs/hadoop/java/src/main/java/org/apache/hadoop/thriftfs/ThriftJobTrackerPlugin.java

/usr/hdp/hue/desktop/libs/hadoop/java/src/main/java/org/apache/hadoop/mapred/ThriftJobTrackerPlugin.java
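Both copies can be removed in one pass with find (rooted at the source tree listed above):

$ find /usr/hdp/hue/desktop/libs/hadoop -name ThriftJobTrackerPlugin.java -delete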

4. Build

$ cd /opt/hue

$ make apps
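If you want the built application installed somewhere other than the source tree, Hue's Makefile honors a PREFIX variable; the target path here is only an example:

$ PREFIX=/usr/share make install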

5. Start the HUE service

$ ./build/env/bin/supervisor

$ ps aux | grep "hue"

$ kill -9 <PID>    ## stop HUE by killing the process ID found above (a PID, not a port number)
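The lookup and kill can also be combined into a single line; this is just a convenience sketch (the [h]ue bracket trick stops grep from matching its own process):

$ ps aux | grep "[h]ue" | awk '{print $2}' | xargs kill -9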

6. Configure parameters in hue.ini

$ vim /usr/hdp/hue/hue-3.10.0/desktop/conf/hue.ini

a.) hue.ini settings (the [desktop], [hadoop], [beeswax], [hbase], [zookeeper], and [liboozie] sections)

[desktop]

# Webserver listens on this address and port
http_host=xx.xx.xx.xx
http_port=8888

# Time zone name
time_zone=Asia/Shanghai

# Webserver runs as this user
server_user=hue
server_group=hue

# This should be the Hue admin and proxy user
default_user=hue

# This should be the hadoop cluster admin
default_hdfs_superuser=hdfs

[hadoop]
[[hdfs_clusters]]

[[[default]]]
# Enter the filesystem uri

# If HDFS is NOT configured for HA, configure as follows:

fs_defaultfs=hdfs://xx.xx.xx.xx:8020 ## the Hadoop NameNode host

# If HDFS IS configured for HA, configure as follows:
fs_defaultfs=hdfs://mycluster ## logical name; must match fs.defaultFS in core-site.xml

# NameNode logical name.
## logical_name=carmecluster

# Use WebHdfs/HttpFs as the communication mechanism.
# Domain should be the NameNode or HttpFs host.
# Default port is 14000 for HttpFs.

# If HDFS is NOT configured for HA, configure as follows:

webhdfs_url=http://xx.xx.xx.xx:50070/webhdfs/v1

# If HDFS is configured for HA, HUE can only access HDFS through HttpFS.
# Install HttpFS manually:   $ sudo yum install hadoop-httpfs
# Start the HttpFS service:  $ ./hadoop-httpfs start &
webhdfs_url=http://xx.xx.xx.xx:14000/webhdfs/v1

[[yarn_clusters]]

[[[default]]]
# Enter the host on which you are running the ResourceManager
resourcemanager_host=xx.xx.xx.xx

# The port where the ResourceManager IPC listens on
resourcemanager_port=8050

# Whether to submit jobs to this cluster
submit_to=True

# Resource Manager logical name (required for HA)
## logical_name=

# Change this if your YARN cluster is Kerberos-secured
## security_enabled=false

# URL of the ResourceManager API
resourcemanager_api_url=http://xx.xx.xx.xx:8088

# URL of the ProxyServer API
proxy_api_url=http://xx.xx.xx.xx:8088

# URL of the HistoryServer API
history_server_api_url=http://xx.xx.xx.xx:19888

# URL of the Spark History Server
## spark_history_server_url=http://localhost:18088

[[mapred_clusters]]

[[[default]]]
# Enter the host on which you are running the Hadoop JobTracker
jobtracker_host=xx.xx.xx.xx

# The port where the JobTracker IPC listens on
jobtracker_port=8021

# JobTracker logical name for HA
## logical_name=

# Thrift plug-in port for the JobTracker
thrift_port=9290

# Whether to submit jobs to this cluster
submit_to=False

[beeswax]

# Host where HiveServer2 is running.
# If Kerberos security is enabled, use fully-qualified domain name (FQDN).
hive_server_host=xx.xx.xx.xx

# Port where HiveServer2 Thrift server runs on.
hive_server_port=10000

# Hive configuration directory, where hive-site.xml is located
hive_conf_dir=/etc/hive/conf

# Timeout in seconds for thrift calls to Hive service
## server_conn_timeout=120


[hbase]
# Comma-separated list of HBase Thrift servers for clusters in the format of '(name|host:port)'.
# Use full hostname with security.
# If using Kerberos we assume GSSAPI SASL, not PLAIN.
hbase_clusters=(Cluster|xx.xx.xx.xx:9090)

# If connecting to HBase fails, start the HBase Thrift server: $ nohup hbase thrift start &

[zookeeper]

[[clusters]]

[[[default]]]
# Zookeeper ensemble. Comma separated list of Host/Port.
# e.g. localhost:2181,localhost:2182,localhost:2183
host_ports=xx.xx.xx.xx:2181,xx.xx.xx.xx:2181,xx.xx.xx.xx:2181

[liboozie]
# The URL where the Oozie service runs on. This is required in order for
# users to submit jobs. Empty value disables the config check.
oozie_url=http://xx.xx.xx.xx:11000/oozie
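After editing hue.ini, restart the HUE service (step 5) so the changes take effect, then do a quick reachability check against the web UI (host and port from the [desktop] section; a sanity check only, not a full health check):

$ curl -sI "http://xx.xx.xx.xx:8888/" | head -n 1     # expect an HTTP status line if HUE is up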

b.) Hadoop-side configuration

hdfs-site.xml configuration:

<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>

core-site.xml configuration:

<property>
  <name>hadoop.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hue.groups</name>
  <value>*</value>
</property>

If the HUE server is outside the Hadoop cluster, it can still access HDFS by running an HttpFS server. The HttpFS service only requires a single open port (14000 by default) between HUE and the cluster.

httpfs-site.xml configuration:

<property>
  <name>httpfs.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>httpfs.proxyuser.hue.groups</name>
  <value>*</value>
</property>
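Once HDFS (and HttpFS, if used) has been restarted so the proxyuser settings take effect, listing a directory over the REST API is a quick sanity check; LISTSTATUS is a standard WebHDFS operation, and the hosts and user below are placeholders:

$ curl -s "http://xx.xx.xx.xx:50070/webhdfs/v1/?op=LISTSTATUS&user.name=hue"    # via WebHDFS
$ curl -s "http://xx.xx.xx.xx:14000/webhdfs/v1/?op=LISTSTATUS&user.name=hue"    # via HttpFS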

c.) MapReduce 0.20 (MR1) configuration

HUE communicates with the JobTracker through a plug-in jar that must be placed in MapReduce's lib directory.

If the JobTracker and HUE run on the same host, copy it:

$ cd /usr/share/hue
$ cp desktop/libs/hadoop/java-lib/hue-plugins-*.jar /usr/lib/hadoop-0.20-mapreduce/lib

If the JobTracker runs on a different host, scp the Hue plugins jar to the JobTracker host, as shown below.
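For example (destination directory taken from the copy command above; the hostname is a placeholder):

$ scp desktop/libs/hadoop/java-lib/hue-plugins-*.jar <jobtracker-host>:/usr/lib/hadoop-0.20-mapreduce/lib/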

Add the following to the mapred-site.xml configuration file, then restart the JobTracker:

<property>
  <name>jobtracker.thrift.address</name>
  <value>0.0.0.0:9290</value>
</property>
<property>
  <name>mapred.jobtracker.plugins</name>
  <value>org.apache.hadoop.thriftfs.ThriftJobTrackerPlugin</value>
  <description>Comma-separated list of jobtracker plug-ins to be activated.</description>
</property>
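On CDH-style MR1 packages the restart usually goes through the service script; the service name below is an assumption, so substitute your distribution's equivalent:

$ service hadoop-0.20-mapreduce-jobtracker restart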

d.) Oozie configuration

oozie-site.xml configuration:

<property>
  <name>oozie.service.ProxyUserService.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>oozie.service.ProxyUserService.proxyuser.hue.groups</name>
  <value>*</value>
</property>
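After restarting Oozie, the standard Oozie CLI can confirm the server is healthy (the URL matches the oozie_url configured in hue.ini above):

$ oozie admin -oozie http://xx.xx.xx.xx:11000/oozie -status     # expect: System mode: NORMAL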

That is all the content of "How to Install and Configure HUE". Thanks for reading! Hopefully you now have a good grasp of the topic and the walkthrough above proves helpful in your own setup.
