
How to Configure Hive in a Hadoop Environment

Published: 2024-10-06  Author: Qianjia Information Network editor

This article walks through how to configure Hive as part of a Hadoop environment. The steps are laid out in detail; interested readers can follow along, and hopefully it will prove helpful.

1. Copy the downloaded Hive archive into the /opt/software/ directory

Package version: apache-hive-3.1.2-bin.tar.gz

2. Extract the archive into the /opt/module/ directory:

cd /opt/software/
tar -zxvf apache-hive-3.1.2-bin.tar.gz -C /opt/module/
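For readers unfamiliar with tar's -C flag, the extract-into-target-directory pattern can be tried out on a throwaway archive first. The paths below are scratch paths chosen so the demo runs anywhere; the real step uses the Hive tarball and /opt/module/.

```shell
# Demo of tar's extract-to-target-dir (-C) pattern on a throwaway archive.
# The real step uses apache-hive-3.1.2-bin.tar.gz and /opt/module/ instead.
mkdir -p /tmp/tar-demo/src/pkg /tmp/tar-demo/dest
echo "hello" > /tmp/tar-demo/src/pkg/file.txt
tar -zcf /tmp/tar-demo/src/pkg.tar.gz -C /tmp/tar-demo/src pkg   # build a sample archive
tar -zxvf /tmp/tar-demo/src/pkg.tar.gz -C /tmp/tar-demo/dest    # -C extracts into dest
cat /tmp/tar-demo/dest/pkg/file.txt
```

The -C flag changes tar's working directory before extracting, which is why the archive contents land under the target rather than under the current directory.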

3. Update the system environment variables:

vi /etc/profile

Add the following lines in the editor:

export HIVE_HOME=/opt/module/apache-hive-3.1.2-bin
export PATH=$PATH:$HADOOP_HOME/sbin:$HIVE_HOME/bin

4. Reload the environment configuration:

source /etc/profile
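After sourcing /etc/profile, the new variables can be sanity-checked. A minimal sketch, assuming the install paths used in this article (note that `hive --version` will only work once the remaining steps are complete):

```shell
# Sketch: re-create the two export lines and confirm they resolve as expected.
# The paths assume the install locations used in this article.
export HIVE_HOME=/opt/module/apache-hive-3.1.2-bin
export PATH=$PATH:$HADOOP_HOME/sbin:$HIVE_HOME/bin
echo "HIVE_HOME=$HIVE_HOME"
# Check that Hive's bin directory made it onto PATH.
case ":$PATH:" in
  *":$HIVE_HOME/bin:"*) echo "hive bin dir is on PATH" ;;
  *) echo "hive bin dir is missing from PATH" ;;
esac
```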

5. Modify Hive's own environment settings

cd  /opt/module/apache-hive-3.1.2-bin/bin/

① Configure the hive-config.sh file

vi hive-config.sh

Add the following lines in the editor:

export JAVA_HOME=/opt/module/jdk1.8.0_212
export HIVE_HOME=/opt/module/apache-hive-3.1.2-bin
export HADOOP_HOME=/opt/module/hadoop-3.2.0
export HIVE_CONF_DIR=/opt/module/apache-hive-3.1.2-bin/conf

6. Create the Hive configuration file from the bundled template:

cd /opt/module/apache-hive-3.1.2-bin/conf/
cp hive-default.xml.template hive-site.xml

7. Edit the Hive configuration file, locating each property below and changing it as shown:

vi hive-site.xml
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.cj.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>root</value>
  <description>Username to use against metastore database</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>123456</value> <!-- use your own password -->
  <description>password to use against metastore database</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://192.168.1.100:3306/hive?useUnicode=true&amp;characterEncoding=utf8&amp;useSSL=false&amp;serverTimezone=GMT</value>
  <description>
    JDBC connect string for a JDBC metastore.
    To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL.
    For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.
  </description>
</property>
<property>
  <name>datanucleus.schema.autoCreateAll</name>
  <value>true</value>
  <description>Auto creates necessary schema on a startup if one doesn't exist. Set this to false, after creating it once. To enable auto create also set hive.metastore.schema.verification=false. Auto creation is not recommended for production use cases, run schematool command instead.</description>
</property>
<property>
  <name>hive.metastore.schema.verification</name>
  <value>false</value>
  <description>
    Enforce metastore schema version consistency.
    True: Verify that version information stored in is compatible with one from Hive jars. Also disable automatic
    schema migration attempt. Users are required to manually migrate schema after Hive upgrade which ensures
    proper metastore schema migration. (Default)
    False: Warn if the version information stored in metastore doesn't match with one from in Hive jars.
  </description>
</property>
<property>
  <name>hive.exec.local.scratchdir</name>
  <value>/opt/module/apache-hive-3.1.2-bin/tmp/${user.name}</value>
  <description>Local scratch space for Hive jobs</description>
</property>
<property>
  <name>system:java.io.tmpdir</name>
  <value>/opt/module/apache-hive-3.1.2-bin/iotmp</value>
</property>
<property>
  <name>hive.downloaded.resources.dir</name>
  <value>/opt/module/apache-hive-3.1.2-bin/tmp/${hive.session.id}_resources</value>
  <description>Temporary local directory for added resources in the remote file system.</description>
</property>
<property>
  <name>hive.querylog.location</name>
  <value>/opt/module/apache-hive-3.1.2-bin/tmp/${system:user.name}</value>
  <description>Location of Hive run time structured log file</description>
</property>
<property>
  <name>hive.server2.logging.operation.log.location</name>
  <value>/opt/module/apache-hive-3.1.2-bin/tmp/${system:user.name}/operation_logs</value>
  <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
</property>
<property>
  <name>hive.metastore.db.type</name>
  <value>mysql</value>
  <description>
    Expects one of [derby, oracle, mysql, mssql, postgres].
    Type of database used by the metastore. Information schema &amp; JDBCStorageHandler depend on it.
  </description>
</property>
<property>
  <name>hive.cli.print.current.db</name>
  <value>true</value>
  <description>Whether to include the current database in the Hive prompt.</description>
</property>
<property>
  <name>hive.cli.print.header</name>
  <value>true</value>
  <description>Whether to print the names of the columns in query output.</description>
</property>
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/opt/hive/warehouse</value>
  <description>location of default database for the warehouse</description>
</property>
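The scratch and temp paths referenced in the configuration above must exist and be writable before Hive starts. A hedged sketch of pre-creating them, demonstrated under /tmp so it runs anywhere; on a real install the prefix is /opt/module/apache-hive-3.1.2-bin:

```shell
# Pre-create the tmp/iotmp directories that hive-site.xml points at.
# HIVE_PREFIX is set to a scratch path here so the demo is runnable;
# on a real install it would be /opt/module/apache-hive-3.1.2-bin.
HIVE_PREFIX=/tmp/hive-prefix-demo
mkdir -p "$HIVE_PREFIX/tmp" "$HIVE_PREFIX/iotmp"
chmod -R 755 "$HIVE_PREFIX/tmp" "$HIVE_PREFIX/iotmp"
ls -d "$HIVE_PREFIX/tmp" "$HIVE_PREFIX/iotmp"
```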

8. Upload the MySQL driver package to the /opt/module/apache-hive-3.1.2-bin/lib/ directory

Driver package: mysql-connector-java-8.0.15.zip; unzip it and take the jar file from inside
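A quick way to confirm the jar landed where Hive will look for it is a glob listing against the lib directory. The snippet below uses a stand-in directory and a stand-in (empty) jar file so it runs anywhere; the real path is /opt/module/apache-hive-3.1.2-bin/lib/.

```shell
# Stand-in check that the MySQL connector jar is present in Hive's lib dir.
LIB=/tmp/hive-lib-demo            # real path: /opt/module/apache-hive-3.1.2-bin/lib
mkdir -p "$LIB"
touch "$LIB/mysql-connector-java-8.0.15.jar"   # stand-in for the extracted jar
ls "$LIB"/mysql-connector-java-*.jar           # should list the jar
```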

9. Log in to MySQL and create a database named hive, making sure the MySQL instance ends up with a database called hive

mysql> create database hive;

10. Initialize the metastore database:

schematool -dbType mysql -initSchema

11. Start the cluster:

start-all.sh     # on Hadoop100
start-yarn.sh    # on Hadoop101

12. Start Hive:

hive

13. Check that the startup succeeded:

show databases;

If the databases are listed, Hive has started successfully.

That wraps up how Hive is configured as part of a Hadoop environment. Hopefully the content above is helpful and teaches you something new. If you found the article useful, feel free to share it with others.
