I. Purpose of the Experiment

This experiment tests and compares the query performance of HAWQ internal tables, HAWQ external tables, and Hive by simulating a typical application scenario with a realistic data volume.

II. Hardware Environment

1. A Hadoop cluster of four VMware virtual machines.
2. Each machine is configured as follows:
(1) 100 GB 15K RPM SAS disk
(2) Intel(R) Xeon(R) E5-2620 v2 @ 2.10GHz, two dual-core CPUs
(3) 8 GB RAM, 8 GB swap
(4) 10000 Mb/s virtual NIC

III. Software Environment

1. Linux: CentOS release 6.4, kernel 2.6.32-358.el6.x86_64
2. Ambari: 2.4.1
3. Hadoop: HDP 2.5.0
4. Hive (Hive on Tez): 2.1.0
5. HAWQ: 2.1.1.0
6. HAWQ PXF: 3.1.1

IV. Data Model

1. Table structure

The experiment simulates an application that records page-click data. The data model contains five dimension tables (date, page, browser, referrer, and status code) and one page-click fact table. The table structures and relationships are shown in Figure 1.

Figure 1

2. Row counts

The row count of each table is shown in Table 1.

Table name        Rows
page_click_fact   100,000,000
page_dim          200,000
referrer_dim      1,000,000
browser_dim       20,000
status_code_dim   70
date_dim          366

Table 1

V. Creating Tables and Generating Data

1. Create the Hive database and tables

create database test;
use test;

create table browser_dim(browser_sk bigint, browser_nm varchar(100), browser_version_no varchar(100), flash_version_no varchar(100), flash_enabled_flg int, java_version_no varchar(100), platform_desc string, java_enabled_flg int, java_script_enabled_flg int, cookies_enabled_flg int, user_language_cd varchar(100), screen_color_depth_no varchar(100), screen_size_txt string)
row format delimited fields terminated by ','
stored as orc;

create table date_dim(cal_dt date, day_in_cal_yr_no int, day_of_week_no int, start_of_month_dt date, start_of_quarter_dt date, start_of_week_dt date, start_of_year_dt date)
row format delimited fields terminated by ','
stored as orc;

create table page_dim(page_sk bigint, domain_nm varchar(200), reachability_cd string, page_desc string, protocol_nm varchar(20))
row format delimited fields terminated by ','
stored as orc;

create table referrer_dim(referrer_sk bigint, referrer_txt string, referrer_domain_nm varchar(200))
row format delimited fields terminated by ','
stored as orc;

create table status_code_dim(status_cd varchar(100), client_error_flg int, status_cd_desc string, server_error_flg int)
row format delimited fields terminated by ','
stored as orc;

create table page_click_fact(visitor_id varchar(100), detail_tm timestamp, page_click_dt date, page_sk bigint, client_session_dt date, previous_page_sk bigint, referrer_sk bigint, next_page_sk bigint, status_cd varchar(100), browser_sk bigint, bytes_received_cnt bigint, bytes_sent_cnt bigint, client_detail_tm timestamp, entry_point_flg int, exit_point_flg int, ip_address varchar(20), query_string_txt string, seconds_spent_on_page_cnt int, sequence_no int, requested_file_txt string)
row format delimited fields terminated by ','
stored as orc;

Note: the Hive tables use the ORC storage format.

2. Generate the Hive table data with a Java program

After ORC compression, the HDFS file size of each table (as reported by, e.g., hdfs dfs -du) is as follows:

2.2 M   /apps/hive/warehouse/test.db/browser_dim
641     /apps/hive/warehouse/test.db/date_dim
4.1 G   /apps/hive/warehouse/test.db/page_click_fact
16.1 M  /apps/hive/warehouse/test.db/page_dim
22.0 M  /apps/hive/warehouse/test.db/referrer_dim
1.1 K   /apps/hive/warehouse/test.db/status_code_dim

3. Analyze the Hive tables

analyze table date_dim compute statistics;
analyze table browser_dim compute statistics;
analyze table page_dim compute statistics;
analyze table referrer_dim compute statistics;
analyze table status_code_dim compute statistics;
analyze table page_click_fact compute statistics;
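The statements above gather table-level statistics only. As an optional extra step (not part of the original procedure), Hive 2.x can also collect column-level statistics, which help the optimizer with join planning, and the stored statistics can be inspected with describe formatted. A hedged sketch:

-- Optional: column-level statistics for the fact table (assumption: not run in the original test)
analyze table page_click_fact compute statistics for columns;

-- Inspect the stored statistics (numRows and rawDataSize appear under Table Parameters)
describe formatted page_click_fact;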

4. Create the HAWQ external tables

create schema ext;
set search_path=ext;

create external table date_dim(
cal_dt              date,
day_in_cal_yr_no    int4,
day_of_week_no      int4,
start_of_month_dt   date,
start_of_quarter_dt date,
start_of_week_dt    date,
start_of_year_dt    date
)
location ('pxf://hdp1:51200/test.date_dim?profile=hiveorc')
format 'custom' (formatter='pxfwritable_import');

create external table browser_dim(
browser_sk              int8,
browser_nm              varchar(100),
browser_version_no      varchar(100),
flash_version_no        varchar(100),
flash_enabled_flg       int,
java_version_no         varchar(100),
platform_desc           text,
java_enabled_flg        int,
java_script_enabled_flg int,
cookies_enabled_flg     int,
user_language_cd        varchar(100),
screen_color_depth_no   varchar(100),
screen_size_txt         text
)
location ('pxf://hdp1:51200/test.browser_dim?profile=hiveorc')
format 'custom' (formatter='pxfwritable_import');

create external table page_dim(
page_sk             int8,
domain_nm           varchar(200),
reachability_cd     text,
page_desc           text,
protocol_nm         varchar(20)
)
location ('pxf://hdp1:51200/test.page_dim?profile=hiveorc')
format 'custom' (formatter='pxfwritable_import');

create external table referrer_dim(
referrer_sk         int8,
referrer_txt        text,
referrer_domain_nm  varchar(200)
)
location ('pxf://hdp1:51200/test.referrer_dim?profile=hiveorc')
format 'custom' (formatter='pxfwritable_import');

create external table status_code_dim(
status_cd           varchar(100),
client_error_flg    int4,
status_cd_desc      text,
server_error_flg    int4
)
location ('pxf://hdp1:51200/test.status_code_dim?profile=hiveorc')
format 'custom' (formatter='pxfwritable_import');

create external table page_click_fact(
visitor_id                varchar(100),
detail_tm                 timestamp,
page_click_dt             date,
page_sk                   int8,
client_session_dt         date,
previous_page_sk          int8,
referrer_sk               int8,
next_page_sk              int8,
status_cd                 varchar(100),
browser_sk                int8,
bytes_received_cnt        int8,
bytes_sent_cnt            int8,
client_detail_tm          timestamp,
entry_point_flg           int4,
exit_point_flg            int4,
ip_address                varchar(20),
query_string_txt          text,
seconds_spent_on_page_cnt int4,
sequence_no               int4,
requested_file_txt        text
)
location ('pxf://hdp1:51200/test.page_click_fact?profile=hiveorc')
format 'custom' (formatter='pxfwritable_import');

Note: the HAWQ external tables use the PXF protocol and point at the corresponding Hive tables.
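Before timing anything, it is worth a quick sanity check that the external tables can actually reach the Hive data through PXF. A minimal sketch (not part of the original procedure), using the row counts from Table 1 as the expected values:

set search_path=ext;
select count(*) from status_code_dim;  -- expect 70 rows
select count(*) from date_dim;         -- expect 366 rows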

5. Create the HAWQ internal tables

set search_path=public;

create table date_dim(
cal_dt              date,
day_in_cal_yr_no    int4,
day_of_week_no      int4,
start_of_month_dt   date,
start_of_quarter_dt date,
start_of_week_dt    date,
start_of_year_dt    date
) with (compresstype=snappy,appendonly=true);

create table browser_dim(
browser_sk              int8,
browser_nm              varchar(100),
browser_version_no      varchar(100),
flash_version_no        varchar(100),
flash_enabled_flg       int,
java_version_no         varchar(100),
platform_desc           text,
java_enabled_flg        int,
java_script_enabled_flg int,
cookies_enabled_flg     int,
user_language_cd        varchar(100),
screen_color_depth_no   varchar(100),
screen_size_txt         text
) with (compresstype=snappy,appendonly=true);

create table page_dim(
page_sk             int8,
domain_nm           varchar(200),
reachability_cd     text,
page_desc           text,
protocol_nm         varchar(20)
) with (compresstype=snappy,appendonly=true);

create table referrer_dim(
referrer_sk         int8,
referrer_txt        text,
referrer_domain_nm  varchar(200)
) with (compresstype=snappy,appendonly=true);

create table status_code_dim(
status_cd           varchar(100),
client_error_flg    int4,
status_cd_desc      text,
server_error_flg    int4
) with (compresstype=snappy,appendonly=true);

create table page_click_fact(
visitor_id                varchar(100),
detail_tm                 timestamp,
page_click_dt             date,
page_sk                   int8,
client_session_dt         date,
previous_page_sk          int8,
referrer_sk               int8,
next_page_sk              int8,
status_cd                 varchar(100),
browser_sk                int8,
bytes_received_cnt        int8,
bytes_sent_cnt            int8,
client_detail_tm          timestamp,
entry_point_flg           int4,
exit_point_flg            int4,
ip_address                varchar(20),
query_string_txt          text,
seconds_spent_on_page_cnt int4,
sequence_no               int4,
requested_file_txt        text
) with (compresstype=snappy,appendonly=true);

Note: the internal table definitions are equivalent to the Hive tables, using snappy-compressed append-only row storage.
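The storage options can be double-checked in the catalog. A minimal sketch, assuming HAWQ exposes the Greenplum-style pg_appendonly catalog (this may differ between versions):

-- Show the compression type recorded for the fact table
-- (assumption: pg_appendonly.compresstype exists in this HAWQ release)
select c.relname, a.compresstype
from pg_class c join pg_appendonly a on a.relid = c.oid
where c.relname = 'page_click_fact';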

6. Load the HAWQ internal tables

insert into date_dim select * from hcatalog.test.date_dim;
insert into browser_dim select * from hcatalog.test.browser_dim;
insert into page_dim select * from hcatalog.test.page_dim;
insert into referrer_dim select * from hcatalog.test.referrer_dim;
insert into status_code_dim select * from hcatalog.test.status_code_dim;
insert into page_click_fact select * from hcatalog.test.page_click_fact;

Note: the Hive tables are queried directly through HCatalog and the results are inserted into the HAWQ internal tables. After snappy compression, the HDFS file size of each table is as follows:

6.2 K   /hawq_data/16385/177422/177677
3.3 M   /hawq_data/16385/177422/177682
23.9 M  /hawq_data/16385/177422/177687
39.3 M  /hawq_data/16385/177422/177707
1.8 K   /hawq_data/16385/177422/177726
7.9 G   /hawq_data/16385/177422/177731
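A hedged sanity check after the load (not part of the original procedure): the internal row counts should match the HCatalog sources and Table 1. For example:

select count(*) from page_click_fact;                -- expect 100,000,000
select count(*) from hcatalog.test.page_click_fact;  -- should return the same count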

7. Analyze the HAWQ internal tables

analyze date_dim;
analyze browser_dim;
analyze page_dim;
analyze referrer_dim;
analyze status_code_dim;
analyze page_click_fact;
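analyze writes optimizer statistics into the system catalog; a quick way to confirm it ran is to look at the row estimates in pg_class (a sketch, not part of the original procedure):

-- reltuples holds the estimated row count after analyze
select relname, reltuples::bigint as estimated_rows
from pg_class
where relname in ('page_click_fact', 'page_dim', 'referrer_dim');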

VI. Running the Queries

Run the following five queries against the Hive tables, the HAWQ external tables, and the HAWQ internal tables, recording the execution time of each.
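For the HAWQ runs, elapsed time can be captured client-side in psql (Hive's CLI prints the elapsed time of each statement itself). A sketch of the session setup, assuming the schemas created above:

\timing                 -- toggle psql's per-statement "Time: ... ms" display
set search_path=ext;    -- run the five queries against the external tables
set search_path=public; -- then switch and repeat against the internal tables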

1. Most-visited top directories on support.sas.com in a given week

-- Hive query
select top_directory, count(*) as unique_visits
from (select distinct visitor_id, substr(requested_file_txt,1,10) top_directory
      from page_click_fact, page_dim, browser_dim
      where domain_nm = 'support.sas.com'
        and flash_enabled_flg = 1
        and weekofyear(detail_tm) = 19
        and year(detail_tm) = 2017
     ) directory_summary
group by top_directory
order by unique_visits;

-- HAWQ query; equivalent to the Hive query, except that extract replaces Hive's weekofyear and year functions.
select top_directory, count(*) as unique_visits
from (select distinct visitor_id, substr(requested_file_txt,1,10) top_directory
      from page_click_fact, page_dim, browser_dim
      where domain_nm = 'support.sas.com'
        and flash_enabled_flg = 1
        and extract(week from detail_tm) = 19
        and extract(year from detail_tm) = 2017
     ) directory_summary
group by top_directory
order by unique_visits;

2. Pages on support.sas.com visited from www.google.com, by month

-- Hive query
select domain_nm, requested_file_txt, count(*) as unique_visitors, month
from (select distinct domain_nm, requested_file_txt, visitor_id, month(detail_tm) as month
      from page_click_fact, page_dim, referrer_dim
      where domain_nm = 'support.sas.com'
        and referrer_domain_nm = 'www.google.com'
     ) visits_pp_ph_summary
group by domain_nm, requested_file_txt, month
order by domain_nm, requested_file_txt, unique_visitors desc, month asc;

-- HAWQ query; equivalent to the Hive query, except that extract replaces Hive's month function.
select domain_nm, requested_file_txt, count(*) as unique_visitors, month
from (select distinct domain_nm, requested_file_txt, visitor_id, extract(month from detail_tm) as month
      from page_click_fact, page_dim, referrer_dim
      where domain_nm = 'support.sas.com'
        and referrer_domain_nm = 'www.google.com'
     ) visits_pp_ph_summary
group by domain_nm, requested_file_txt, month
order by domain_nm, requested_file_txt, unique_visitors desc, month asc;

3. Counts of search strings on support.sas.com for a given year

-- Hive query
select query_string_txt, count(*) as count
from page_click_fact, page_dim
where query_string_txt <> ''
  and domain_nm = 'support.sas.com'
  and year(detail_tm) = '2017'
group by query_string_txt
order by count desc;

-- HAWQ query; equivalent to the Hive query, except that extract replaces Hive's year function.
select query_string_txt, count(*) as count
from page_click_fact, page_dim
where query_string_txt <> ''
  and domain_nm = 'support.sas.com'
  and extract(year from detail_tm) = '2017'
group by query_string_txt
order by count desc;

4. Number of visitors to each page using the Safari browser

-- Hive query
select domain_nm, requested_file_txt, count(*) as unique_visitors
from (select distinct domain_nm, requested_file_txt, visitor_id
      from page_click_fact, page_dim, browser_dim
      where domain_nm = 'support.sas.com'
        and browser_nm like '%Safari%'
        and weekofyear(detail_tm) = 19
        and year(detail_tm) = 2017
     ) uv_summary
group by domain_nm, requested_file_txt
order by unique_visitors desc;

-- HAWQ query; equivalent to the Hive query, except that extract replaces Hive's weekofyear and year functions.
select domain_nm, requested_file_txt, count(*) as unique_visitors
from (select distinct domain_nm, requested_file_txt, visitor_id
      from page_click_fact, page_dim, browser_dim
      where domain_nm = 'support.sas.com'
        and browser_nm like '%Safari%'
        and extract(week from detail_tm) = 19
        and extract(year from detail_tm) = 2017
     ) uv_summary
group by domain_nm, requested_file_txt
order by unique_visitors desc;

5. Pages on support.sas.com viewed for more than 10 seconds in a given week

-- Hive query
select domain_nm, requested_file_txt, count(*) as unique_visits
from (select distinct domain_nm, requested_file_txt, visitor_id
      from page_click_fact, page_dim
      where domain_nm = 'support.sas.com'
        and weekofyear(detail_tm) = 19
        and year(detail_tm) = 2017
        and seconds_spent_on_page_cnt > 10
     ) visits_summary
group by domain_nm, requested_file_txt
order by unique_visits desc;

-- HAWQ query; equivalent to the Hive query, except that extract replaces Hive's weekofyear and year functions.
select domain_nm, requested_file_txt, count(*) as unique_visits
from (select distinct domain_nm, requested_file_txt, visitor_id
      from page_click_fact, page_dim
      where domain_nm = 'support.sas.com'
        and extract(week from detail_tm) = 19
        and extract(year from detail_tm) = 2017
        and seconds_spent_on_page_cnt > 10
     ) visits_summary
group by domain_nm, requested_file_txt
order by unique_visits desc;

VII. Test Results

Table 2 compares the query times for Hive, the HAWQ external tables, and the HAWQ internal tables. Each combination of query and engine was executed three times and the times averaged.

Query   Hive (s)   HAWQ external table (s)   HAWQ internal table (s)
1       74.337     304.134                   19.232
2       169.521    150.882                   3.446
3       73.482     101.216                   18.565
4       66.367     359.778                   1.217
5       60.341     118.329                   2.789

Table 2

As the comparison in Figure 2 shows, the HAWQ internal tables are much faster than Hive on Tez (4 to 50 times, depending on the query). The same queries, however, are very slow when run against the HAWQ external tables over Hive. Analytic queries should therefore be run against HAWQ internal tables whenever possible. If external tables are unavoidable, keep the external data volume as small as possible and keep the queries simple, avoiding aggregation, grouping, sorting, and other expensive operations on external tables.

Figure 2
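To see where the external-table time goes, the plans can be compared with explain; presumably the external plan shows a PXF external scan that must ship the Hive data into HAWQ before any join or aggregation runs, while the internal plan scans native append-only files. A hedged sketch:

explain select count(*) from ext.page_click_fact;     -- external scan through PXF
explain select count(*) from public.page_click_fact;  -- native append-only table scan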
