This post only includes the configuration file itself. For the underlying issue, see the companion post 一次关于 CDH 中 Spark SQL 代码操作 Hive 无法连接 Hive 元数据问题 ("A case of Spark SQL code on CDH failing to connect to the Hive metastore").

<?xml version="1.0" encoding="UTF-8"?><!--Autogenerated by Cloudera Manager-->
<configuration><property><name>hive.metastore.uris</name><value>thrift://node95:9083,thrift://node96:9083,thrift://node97:9083</value></property><property><name>hive.metastore.client.socket.timeout</name><value>300</value></property><property><name>hive.metastore.warehouse.dir</name><value>/user/hive/warehouse</value></property><property><name>hive.warehouse.subdir.inherit.perms</name><value>true</value></property><property><name>hive.log.explain.output</name><value>false</value></property><property><name>hive.auto.convert.join</name><value>true</value></property><property><name>hive.auto.convert.join.noconditionaltask.size</name><value>20971520</value></property><property><name>hive.optimize.index.filter</name><value>true</value></property><property><name>hive.optimize.bucketmapjoin.sortedmerge</name><value>false</value></property><property><name>hive.smbjoin.cache.rows</name><value>10000</value></property><property><name>hive.server2.logging.operation.enabled</name><value>true</value></property><property><name>hive.server2.logging.operation.log.location</name><value>/var/log/hive/operation_logs</value></property><property><name>mapred.reduce.tasks</name><value>-1</value></property><property><name>hive.exec.reducers.bytes.per.reducer</name><value>67108864</value></property><property><name>hive.exec.copyfile.maxsize</name><value>33554432</value></property><property><name>hive.exec.reducers.max</name><value>1099</value></property><property><name>hive.vectorized.groupby.checkinterval</name><value>4096</value></property><property><name>hive.vectorized.groupby.flush.percent</name><value>0.1</value></property><property><name>hive.compute.query.using.stats</name><value>false</value></property><property><name>hive.vectorized.execution.enabled</name><value>true</value></property><property><name>hive.vectorized.execution.reduce.enabled</name><value>false</value></property><property><name>hive.merge.mapfiles</name><value>true</value></property><property><name>hive.merge.mapredfiles</name><value>false</value></property><property><name>hive.cbo.enable</name><value>false</value></property><property><name>hive.fetch.task.conversion</name><value>minimal</value></property><property><name>hive.fetch.task.conversion.threshold</name><value>268435456</value></property><property><name>hive.limit.pushdown.memory.usage</name><value>0.1</value></property><property><name>hive.merge.sparkfiles</name><value>true</value></property><property><name>hive.merge.smallfiles.avgsize</name><value>16777216</value></property><property><name>hive.merge.size.per.task</name><value>268435456</value></property><property><name>hive.optimize.reducededuplication</name><value>true</value></property><property><name>hive.optimize.reducededuplication.min.reducer</name><value>4</value></property><property><name>hive.map.aggr</name><value>true</value></property><property><name>hive.map.aggr.hash.percentmemory</name><value>0.5</value></property><property><name>hive.optimize.sort.dynamic.partition</name><value>false</value></property><property><name>hive.execution.engine</name><value>mr</value></property><property><name>spark.executor.memory</name><value>5343281152</value></property><property><name>spark.driver.memory</name><value>966367641</value></property><property><name>spark.executor.cores</name><value>4</value></property><property><name>spark.yarn.driver.memoryOverhead</name><value>102</value></property><property><name>spark.yarn.executor.memoryOverhead</name><value>899</value></property><property><name>spark.dynamicAllocation.enabled</n
ame><value>true</value></property><property><name>spark.dynamicAllocation.initialExecutors</name><value>1</value></property><property><name>spark.dynamicAllocation.minExecutors</name><value>1</value></property><property><name>spark.dynamicAllocation.maxExecutors</name><value>2147483647</value></property><property><name>hive.stats.fetch.column.stats</name><value>true</value></property><property><name>hive.mv.files.thread</name><value>15</value></property><property><name>hive.blobstore.use.blobstore.as.scratchdir</name><value>false</value></property><property><name>hive.load.dynamic.partitions.thread</name><value>15</value></property><property><name>hive.exec.input.listing.max.threads</name><value>15</value></property><property><name>hive.msck.repair.batch.size</name><value>0</value></property><property><name>hive.spark.dynamic.partition.pruning.map.join.only</name><value>false</value></property><property><name>hive.metastore.execute.setugi</name><value>true</value></property><property><name>hive.support.concurrency</name><value>true</value></property><property><name>hive.zookeeper.quorum</name><value>node93,node95,node97,node96,node94</value></property><property><name>hive.zookeeper.client.port</name><value>2181</value></property><property><name>hive.zookeeper.namespace</name><value>hive_zookeeper_namespace_hive</value></property><property><name>hbase.zookeeper.quorum</name><value>node93,node95,node97,node96,node94</value></property><property><name>hbase.zookeeper.property.clientPort</name><value>2181</value></property><property><name>hive.cluster.delegation.token.store.class</name><value>org.apache.hadoop.hive.thrift.MemoryTokenStore</value></property><property><name>hive.metastore.fshandler.threads</name><value>15</value></property><property><name>hive.server2.thrift.min.worker.threads</name><value>5</value></property><property><name>hive.server2.thrift.max.worker.threads</name><value>100</value></property><property><name>hive.server2.thrift.port</name><value>10000</value></property><property><name>hive.entity.capture.input.URI</name><value>true</value></property><property><name>hive.server2.enable.doAs</name><value>true</value></property><property><name>hive.server2.session.check.interval</name><value>900000</value></property><property><name>hive.server2.idle.session.timeout</name><value>43200000</value></property><property><name>hive.server2.idle.session.timeout_check_operation</name><value>true</value></property><property><name>hive.server2.idle.operation.timeout</name><value>21600000</value></property><property><name>hive.server2.webui.host</name><value>0.0.0.0</value></property><property><name>hive.server2.webui.port</name><value>10002</value></property><property><name>hive.server2.webui.max.threads</name><value>50</value></property><property><name>hive.server2.webui.use.ssl</name><value>false</value></property><property><name>hive.aux.jars.path</name><value>{{HIVE_HBASE_JAR}}</value></property><property><name>hive.server2.use.SSL</name><value>false</value></property><property><name>spark.shuffle.service.enabled</name><value>true</value></property><property><name>hive.service.metrics.file.location</name><value>/var/log/hive/metrics-hiveserver2/metrics.log</value></property><property><name>hive.server2.metrics.enabled</name><value>true</value></property><property><name>hive.service.metrics.file.frequency</name><value>30000</value></property>
</configuration>
