Hive Usage Guide
Last updated: 2024-01-19
Hive
Hive is a data warehouse tool built on Hadoop for extracting, transforming, and loading data (ETL). It provides a mechanism to store, query, and analyze large-scale data stored in Hadoop. Hive maps structured data files onto database tables and offers SQL querying, translating SQL statements into MapReduce jobs for execution.
Prerequisites
First, install and configure BOS HDFS by following the BOS HDFS guide. The Hadoop version installed on this machine is hadoop-3.3.2. Complete the basic trial of BOS HDFS described in the "Getting Started" section of that guide, then set the environment variables:
export HADOOP_HOME=/opt/hadoop-3.3.2
export HADOOP_CLASSPATH=`$HADOOP_HOME/bin/hadoop classpath`
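Before continuing, it helps to confirm that the Hadoop client can reach BOS through the bos:// scheme; a quick check (the bucket name is a placeholder):
hadoop fs -ls bos://${bucket_name}/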
Install MySQL
MySQL is used to store Hive's metadata. You can install it locally, or connect to an existing remote MySQL instance or an RDS instance. The version installed on this machine is mysql-5.1.61-4.el6.x86_64. After installation, check that it is running with service mysqld status, then set a new password:
/usr/bin/mysqladmin -u root -h ${IP} password ${new-password} # set a new password
You can also create a dedicated MySQL user for Hive and set its password.
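For example, such a user could be created roughly as follows (a minimal sketch: the user name hive and the password are placeholders; if you use a dedicated user, set the matching user name and password in hive-site.xml below):
# Sketch only: "hive" user and "hive-password" are placeholders
mysql -u root -p -e "CREATE USER 'hive'@'localhost' IDENTIFIED BY 'hive-password';"
mysql -u root -p -e "GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'localhost'; FLUSH PRIVILEGES;"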
Install Hive
The version installed on this machine is 2.3.9. Rename two configuration templates in the conf folder:
mv hive-env.sh.template hive-env.sh
mv hive-default.xml.template hive-site.xml
Add the following to hive-env.sh:
export HIVE_CONF_DIR=/ssd2/apache-hive-2.3.9-bin/conf
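If HADOOP_HOME is not already exported in your shell, hive-env.sh is also a reasonable place to set it so Hive can locate the Hadoop installation; a sketch using the path from the prerequisites:
export HADOOP_HOME=/opt/hadoop-3.3.2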
Add the following to hive-site.xml:
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
<description>MySQL</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>JDBC</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>root</value>
<description>username</description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>new-password</value>
<description>password</description>
</property>
In this configuration, javax.jdo.option.ConnectionURL is the connection address of the MySQL server, javax.jdo.option.ConnectionUserName is the MySQL user name used by Hive, and javax.jdo.option.ConnectionPassword is that user's password. Once these are set, copy the MySQL JDBC driver into the lib folder; the driver used on this machine is mysql-connector-java-5.1.32-bin.jar.
Initialize the metastore schema in MySQL:
./bin/schematool -dbType mysql -initSchema
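As a quick sanity check, schematool can also report the metastore schema version, using the connection settings from hive-site.xml:
./bin/schematool -dbType mysql -info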
Start Hive
./bin/hive
Test Hive
Create a table
create database hive;  -- create the database
use hive;              -- switch so the table is created in this database
create table hive_test (a int, b string)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
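To double-check the objects just created, the usual inspection statements work from the Hive CLI:
hive> show databases;
hive> show tables;
hive> describe hive_test;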
1. Create a shell script named gen_data.sh:
#!/bin/bash
MAXROW=1000000 # number of rows to generate
for((i = 0; i < $MAXROW; i++))
do
echo $RANDOM, \"$RANDOM\"
done
2. Run the script to generate the test data:
./gen_data.sh > hive_test.data
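The load statement in the next step reads from BOS rather than from the local filesystem, so the generated file must be uploaded first; with the BOS HDFS setup from the prerequisites, one way is (the bucket name is a placeholder):
hadoop fs -put hive_test.data bos://${bucket_name}/hive_test.data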
3. Load the data into the table:
load data inpath "bos://${bucket_name}/hive_test.data" into table hive.hive_test;
4. Run queries:
hive> select count(*) from hive_test;
WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Query ID = root_20230528173013_6f5296db-562e-4342-917f-bcf14fc1480d
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Job running in-process (local Hadoop)
2023-05-28 17:30:16,548 Stage-1 map = 0%, reduce = 0%
2023-05-28 17:30:18,558 Stage-1 map = 100%, reduce = 100%
Ended Job = job_local238749048_0001
MapReduce Jobs Launched:
Stage-Stage-1: HDFS Read: 0 HDFS Write: 0 SUCCESS
Total MapReduce CPU Time Spent: 0 msec
OK
1000000
hive> select * from hive_test limit 10;
OK
11027 "11345"
10227 "24281"
32535 "16409"
24286 "24435"
2498 "10969"
16662 "16163"
5345 "26005"
21407 "5365"
30608 "4588"
19686 "11831"
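The query log above warns that Hive-on-MR is deprecated in Hive 2. If Tez (or Spark) has been installed and configured separately, the execution engine can be switched per session; a sketch, assuming a working Tez setup:
hive> set hive.execution.engine=tez;
hive> select count(*) from hive_test;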