Notes on Using Spark with a Kerberos-Authenticated HBase

Using Spark to connect to a Kerberos-authenticated HBase.
Background: a Spark cluster (which also hosts HBase and other big-data components) needs to read from a Kerberos-authenticated HBase cluster that lives in a different cluster.

The work is based on modifying the MLSQL HBase connector, running in yarn-client mode. Distribute krb5.conf and wc1-ods.keytab to the same path on every cluster node. Reads go through Spark's newAPIHadoopRDD; the cleanest approach is to subclass TableInputFormat and perform the Kerberos authentication inside it.

The error:
20/10/19 16:08:24 ERROR utils.hbase_KerberorsJavaUtil: Get HBaseAuthentication Failed
java.io.IOException: Login failure for ods from keytab /home/yqq/wc1/wc1-ods.keytab: javax.security.auth.login.LoginException: java.lang.IllegalArgumentException: Illegal principal name ods@WC1.HBASE.COM: org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: No rules applied to ods@WC1.HBASE.COM
    at org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytab(UserGroupInformation.java:1052)
    at org.apache.spark.sql.execution.datasources.utils.hbase_KerberorsJavaUtil.getHBaseAuthentication(hbase_KerberorsJavaUtil.java:44)
    at org.apache.spark.sql.execution.datasources.hbase.HBaseConfBuilder$.buildKerberos(HBaseConfBuilder.scala:147)
    at org.apache.spark.sql.execution.datasources.hbase.HBaseRelation.<init>(DefaultSource.scala:210)
    at org.apache.spark.sql.execution.datasources.hbase.DefaultSource.createRelation(DefaultSource.scala:54)
    at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:318)
    at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:167)
    at streaming.core.datasource.impl.MLSQLHbase.load(MLSQLHbase.scala:69)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at tech.mlsql.dsl.adaptor.LoadPRocessing$$anonfun$parse$2.apply(LoadAdaptor.scala:114)
    at tech.mlsql.dsl.adaptor.LoadPRocessing$$anonfun$parse$2.apply(LoadAdaptor.scala:112)
    at scala.Option.map(Option.scala:146)
    at tech.mlsql.dsl.adaptor.LoadPRocessing.parse(LoadAdaptor.scala:112)
    at tech.mlsql.dsl.adaptor.LoadAdaptor.parse(LoadAdaptor.scala:82)
    at streaming.dsl.ScriptSQLExecListener.exitSql(ScriptSQLExec.scala:289)
    at streaming.dsl.parser.DSLSQLParser$SqlContext.exitRule(DSLSQLParser.java:296)
    at org.antlr.v4.runtime.tree.ParseTreeWalker.exitRule(ParseTreeWalker.java:47)
    at org.antlr.v4.runtime.tree.ParseTreeWalker.walk(ParseTreeWalker.java:30)
    at org.antlr.v4.runtime.tree.ParseTreeWalker.walk(ParseTreeWalker.java:28)
    at streaming.dsl.ScriptSQLExec$._parse(ScriptSQLExec.scala:155)
    at streaming.dsl.ScriptSQLExec$.parse(ScriptSQLExec.scala:142)
    at streaming.rest.RestController$$anonfun$query$1$2.apply$mcV$sp(RestController.scala:140)
    at tech.mlsql.job.JobManager$.run(JobManager.scala:73)
    at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
    at java.lang.Thread.run(Thread.java:745)
Caused by: javax.security.auth.login.LoginException: java.lang.IllegalArgumentException: Illegal principal name ods@WC1.HBASE.COM: org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: No rules applied to ods@WC1.HBASE.COM
    at org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule.commit(UserGroupInformation.java:217)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at javax.security.auth.login.LoginContext.invoke(LoginContext.java:755)
    at javax.security.auth.login.LoginContext.access$000(LoginContext.java:195)
    at javax.security.auth.login.LoginContext$4.run(LoginContext.java:682)
    at javax.security.auth.login.LoginContext$4.run(LoginContext.java:680)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:680)
    at javax.security.auth.login.LoginContext.login(LoginContext.java:588)
    at org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytab(UserGroupInformation.java:1043)
    ... 45 more
Caused by: java.lang.IllegalArgumentException: Illegal principal name ods@WC1.HBASE.COM: org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: No rules applied to ods@WC1.HBASE.COM
    at org.apache.hadoop.security.User.<init>(User.java:50)
    at org.apache.hadoop.security.User.<init>(User.java:43)
    at org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule.commit(UserGroupInformation.java:215)
    ... 57 more
Caused by: org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: No rules applied to ods@WC1.HBASE.COM
    at org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:400)
    at org.apache.hadoop.security.User.<init>(User.java:48)
    ... 59 more

Problem 1: Kerberos authentication fails with "No rules applied to"

The message is built in UserGroupInformation as

throw new IOException("Login failure for " + user + " from keytab " + path + ": " + le, le);

which rules out problems with the user and the keytab path themselves.
/**
 * Get the translation of the principal name into an operating system
 * user name.
 * @return the short name
 * @throws IOException throws if something is wrong with the rules
 */
public String getShortName() throws IOException {
  String[] params;
  if (hostName == null) {
    // if it is already simple, just return it
    if (realm == null) {
      return serviceName;
    }
    params = new String[]{realm, serviceName};
  } else {
    params = new String[]{realm, serviceName, hostName};
  }
  for (Rule r : rules) {
    String result = r.apply(params);
    if (result != null) {
      return result;
    }
  }
  throw new NoMatchingRule("No rules applied to " + toString());
}

getShortName iterates over all configured rules with the incoming principal; if none of them returns a result, it throws this error. In this setup only a single Kerberos-authenticated cluster is ever connected to.

hconf.set("hadoop.security.auth_to_local",
    "RULE:[1:$1]\n" +
    "RULE:[2:$1]\n" +
    "DEFAULT");

The above is the default hadoop.security.auth_to_local rule set. You can check what a given principal maps to by calling HadoopKerberosName.main:

val arry = Array[String]("hbase/_HOST@WC1.HBASE.COM")
HadoopKerberosName.main(arry)
// Name: hbase/_HOST@WC1.HBASE.COM to hbase
// Name: ods@TEST.COM to ods
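To make the failure mode concrete, here is a small, self-contained simulation of this rule matching. It is an illustration only: it handles just the RULE:[n:format](regex)s/pattern/replacement/ form used in this note, not the full KerberosName grammar (no DEFAULT handling, no sed flags).

```java
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Simplified re-implementation of Hadoop KerberosName rule matching, for
// illustration only. It supports rules of the exact form
//   RULE:[n:format](regex)s/pattern/replacement/
// and ignores DEFAULT and the other features of the real class.
public class AuthToLocalDemo {

    private static final Pattern RULE =
        Pattern.compile("RULE:\\[(\\d+):([^\\]]+)\\]\\(([^)]*)\\)s/([^/]*)/([^/]*)/");

    /** Map a principal like name/instance@REALM to a short name, mimicking getShortName(). */
    static String shortName(String principal, List<String> rules) {
        String realm = principal.substring(principal.indexOf('@') + 1);
        String[] components = principal.substring(0, principal.indexOf('@')).split("/");
        for (String rule : rules) {
            Matcher m = RULE.matcher(rule);
            if (!m.matches()) continue;
            // [n:...]: the rule only applies to principals with n components.
            if (Integer.parseInt(m.group(1)) != components.length) continue;
            // Build the candidate string: $0 = realm, $1 = first component, $2 = second.
            String candidate = m.group(2).replace("$0", realm).replace("$1", components[0]);
            if (components.length > 1) candidate = candidate.replace("$2", components[1]);
            if (!candidate.matches(m.group(3))) continue;          // (regex) section must match
            return candidate.replaceFirst(m.group(4), m.group(5)); // s/pattern/replacement/
        }
        // This is the situation behind "No rules applied to ods@WC1.HBASE.COM".
        throw new IllegalArgumentException("No rules applied to " + principal);
    }

    /** True when every rule falls through, i.e. the "No rules applied to" case. */
    static boolean noRuleApplies(String principal, List<String> rules) {
        try {
            shortName(principal, rules);
            return false;
        } catch (IllegalArgumentException e) {
            return true;
        }
    }
}
```

Run against the rules from this note, ods@WC1.HBASE.COM maps to asianfo and hbase/_HOST@WC1.HBASE.COM maps to ods; a principal for which no rule's arity and regex both match falls through every rule, which is exactly the exception above.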

Hadoop translation rules for Kerberos principals

A principal names a user or a service and is unique across the whole cluster. It has three parts: username (or servicename)/instance@realm. For example nn/zelda1@ZELDA.COM, where zelda1 is one machine in the cluster, or admin/admin@ZELDA.COM for an administrator account.

<property>
  <name>hadoop.security.auth_to_local</name>
  <value>
    RULE:[2:$1@$0]([nd]n@ZELDA.COM)s/.*/dtdream/
    DEFAULT
  </value>
</property>

More example rules:

RULE:[1:$1@$0](hdfs@HADOOP.COM)s/.*/hdfs/
RULE:[2:$1@$0](hbase@WC1.HBASE.COM)s/.*/ods/

PS: in Spark standalone mode (a Spark cluster co-located with the Kerberos-authenticated HBase), writing to HDFS hit the same "No rules applied to" error. No rule was configured and the default rules did not take effect, so ods@WC1.HBASE.COM was not mapped to an HDFS user name; after adding a rule, it mapped to the HDFS user ods successfully.

hbase/_HOST@WC1.HBASE.COM
hbase: the username or servicename.
_HOST: the instance (machine name).
WC1.HBASE.COM: the realm.

$0 expands to the realm, $1 to the first component of the principal, and $2 to the second component. For hbase/_HOST@WC1.HBASE.COM that means $0 = WC1.HBASE.COM, $1 = hbase, $2 = _HOST.

The modified rule:

hconf.set("hadoop.security.auth_to_local", "RULE:[1:$1@$0](ods@WC1.HBASE.COM)s/.*/asianfo/");

Mapping straight to ods instead fails, because no such OS user exists on the node:

20/10/20 14:34:45 WARN security.ShellBasedUnixGroupsMapping: unable to return groups for user ods
PartialGroupNameException The user name 'ods' is not found. id: ods: no such user
id: ods: no such user

Logs from a run with no errors:

20/10/26 22:47:04 ERROR utils.MyTableInputFormat: get Kerberos realm: null
20/10/26 22:47:04 ERROR utils.MyTableInputFormat: username: ods
20/10/26 22:47:04 ERROR utils.MyTableInputFormat: keytabFile: /home/yqq/wc1/wc1-ods.keytab
20/10/26 22:47:04 ERROR utils.hbase_KerberorsJavaUtil: ------Start Get HBaseAuthentication-----
20/10/26 22:47:04 ERROR utils.hbase_KerberorsJavaUtil: master_principal: hbase/_HOST@WC1.HBASE.COM
20/10/26 22:47:04 ERROR utils.hbase_KerberorsJavaUtil: regionserver_principal: hbase/_HOST@WC1.HBASE.COM
20/10/26 22:47:04 ERROR utils.hbase_KerberorsJavaUtil: ------dev_yx.keytab path is---/home/yqq/wc1/wc1-ods.keytab
20/10/26 22:47:04 ERROR utils.hbase_KerberorsJavaUtil: get Kerberos realm: null
20/10/26 22:47:04 ERROR utils.hbase_KerberorsJavaUtil: System get Kerberos realm: WC1.HBASE.COM
20/10/26 22:47:04 ERROR utils.hbase_KerberorsJavaUtil: System get Kerberos kdc: wc1.server.ambari
20/10/26 22:47:04 ERROR utils.hbase_KerberorsJavaUtil: ===========loginUserFromKeytab username ods
20/10/26 22:47:04 ERROR utils.hbase_KerberorsJavaUtil: ------HadoopKerberosName.getRules-----RULE:[1:$1@$0](ods@WC1.HBASE.COM)s/.*/asianfo/
20/10/26 22:47:04 ERROR utils.hbase_KerberorsJavaUtil: UserGroupInformation.getLoginUser0: yqq (auth:SIMPLE)
20/10/26 22:47:04 ERROR utils.hbase_KerberorsJavaUtil: UserGroupInformation.getCurrentUser0: yqq (auth:SIMPLE)
20/10/26 22:47:04 ERROR utils.hbase_KerberorsJavaUtil: UserGroupInformation.getLoginUser1: ods@WC1.HBASE.COM (auth:KERBEROS)
20/10/26 22:47:04 ERROR utils.hbase_KerberorsJavaUtil: UserGroupInformation.getCurrentUser1: ods@WC1.HBASE.COM (auth:KERBEROS)
20/10/26 22:47:04 ERROR utils.hbase_KerberorsJavaUtil: ------Get HBaseAuthentication Successed-----
20/10/26 22:47:04 ERROR utils.hbase_KerberorsJavaUtil: =====put the logined userinfomation to user====
20/10/26 22:47:04 ERROR utils.hbase_KerberorsJavaUtil: UserGroupInformation.getLoginUser: ods@WC1.HBASE.COM (auth:KERBEROS)
20/10/26 22:47:04 ERROR utils.hbase_KerberorsJavaUtil: UserGroupInformation.getCurrentUser: ods@WC1.HBASE.COM (auth:KERBEROS)
Problem 2: "Can't get Kerberos realm"

Caused by: Can't get Kerberos realm

The krb5.conf file was not being loaded properly.

Add the following to the spark-submit command:

--conf "spark.driver.extraJavaOptions"="-Djava.security.krb5.conf=/home/yqq/wc1/krb5.conf" \
--conf "spark.executor.extraJavaOptions"="-Djava.security.krb5.conf=/home/yqq/wc1/krb5.conf" \

Or set it in code:

System.setProperty("java.security.krb5.conf", "/home/yqq/wc1/krb5.conf")
// ods@WC1.HBASE.COM
// System.setProperty("java.security.krb5.realm", "WC1.HBASE.COM")
// System.setProperty("java.security.krb5.kdc", "wc1.server.ambari")
sun.security.krb5.Config.refresh()
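For reference, a minimal krb5.conf consistent with the realm and KDC printed in the logs above could look like the following sketch; the admin_server and domain_realm entries are assumptions, not taken from the original setup.

```ini
[libdefaults]
  default_realm = WC1.HBASE.COM

[realms]
  WC1.HBASE.COM = {
    kdc = wc1.server.ambari
    admin_server = wc1.server.ambari
  }

[domain_realm]
  .wc1.hbase.com = WC1.HBASE.COM
```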

Because TableInputFormat was rewritten to perform the Kerberos login, the same properties are also set there:

System.setProperty("java.security.krb5.conf", "/home/yqq/wc1/krb5.conf");
System.setProperty("java.security.krb5.realm", "WC1.HBASE.COM");
System.setProperty("java.security.krb5.kdc", "wc1.server.ambari");
LOG.error("get Kerberos realm: " + System.getProperty("java.security.krb5.realm"));
try {
    sun.security.krb5.Config.refresh();
} catch (KrbException e) {
    e.printStackTrace();
}
Problem 3: "Server asks us to fall back to SIMPLE auth"
20/10/26 22:48:11 WARN hdfs.LeaseRenewer: Failed to renew lease for [DFSClient_NONMAPREDUCE_2029880377_1] for 68 seconds.  Will retry shortly ...
java.io.IOException: Failed on local exception: java.io.IOException: Server asks us to fall back to SIMPLE auth, but this client is configured to only allow secure connections.; Host Details : local host is: "YHBB01/10.161.75.84"; destination host is: "sta1":8020;
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772)
    at org.apache.hadoop.ipc.Client.call(Client.java:1476)
    at org.apache.hadoop.ipc.Client.call(Client.java:1409)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
    at com.sun.proxy.$Proxy12.renewLease(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.renewLease(ClientNamenodeProtocolTranslatorPB.java:590)
    at sun.reflect.GeneratedMethodAccessor42.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
    at com.sun.proxy.$Proxy13.renewLease(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.renewLease(DFSClient.java:942)
    at org.apache.hadoop.hdfs.LeaseRenewer.renew(LeaseRenewer.java:423)
    at org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:448)
    at org.apache.hadoop.hdfs.LeaseRenewer.access$700(LeaseRenewer.java:71)
    at org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:304)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Server asks us to fall back to SIMPLE auth, but this client is configured to only allow secure connections.
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:756)
    at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:376)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1525)
    at org.apache.hadoop.ipc.Client.call(Client.java:1448)
    ... 16 more

According to material found online: 1. the server side is asking for SIMPLE authentication, while this client is configured for Kerberos only.

2. If a Kerberos client has been set up locally, the configuration files below may need to be removed before submitting the Spark job.

3. After investigation: setting the default ipc.client.fallback-to-simple-auth-allowed in code during Kerberos authentication did not take effect (the cluster is managed by CDH). After configuring it in the HDFS configuration files instead, the error was gone.

hconf.set("ipc.client.fallback-to-simple-auth-allowed", "true");
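For the record, the same property as a client-side core-site.xml fragment; where exactly it belongs depends on how the cluster's configuration is managed (on the CDH cluster here it was only effective via the HDFS configuration files, as noted above):

```xml
<property>
  <name>ipc.client.fallback-to-simple-auth-allowed</name>
  <value>true</value>
</property>
```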

Problem 4: RetriesExhaustedException scanning hbase:meta

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=61, exceptions:
Wed Oct 28 14:55:51 CST 2020, null, java.net.SocketTimeoutException: callTimeout=540000, callDuration=680718: row 'td_b_payment_deposit,,00000000000000' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=wc1.slave3.ambari,16020,1555077306296, seqNum=0
    at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:286)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:231)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:61)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
    at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:320)
    at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:295)
    at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:160)
    at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:155)
    at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:867)
    at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:193)
    at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:89)
    at org.apache.hadoop.hbase.client.MetaScanner.allTableRegions(MetaScanner.java:324)
    at org.apache.hadoop.hbase.client.HRegionLocator.getAllRegionLocations(HRegionLocator.java:88)
    at org.apache.hadoop.hbase.util.RegionSizeCalculator.init(RegionSizeCalculator.java:94)
    at org.apache.hadoop.hbase.util.RegionSizeCalculator.<init>(RegionSizeCalculator.java:81)
    at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getSplits(TableInputFormatBase.java:256)
    at org.apache.spark.sql.execution.datasources.utils.MyTableInputFormat.getSplits(MyTableInputFormat.java:252)
    at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:130)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:49)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
    ... 72 more
Caused by: java.net.SocketTimeoutException: callTimeout=540000, callDuration=680718: row 'td_b_payment_deposit,,00000000000000' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=wc1.slave3.ambari,16020,1555077306296, seqNum=0
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:159)
    at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:80)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    ... 1 more
Caused by: org.apache.hadoop.hbase.NotServingRegionException: org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on wc1.slave3.ambari,16020,1567683889060
    at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:3273)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:3250)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1414)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.newRegionScanner(RSRpcServices.java:2964)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3289)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42002)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:131)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:327)
    at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:402)
    at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:203)
    at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:64)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:381)

The error suggests the connection to HBase's hbase:meta table failed, so region locations could not be fetched.

For NotServingRegionException the usual guesses are a region in the middle of splitting, or corrupted region data; most write-ups point at HBase-side problems.

Later, looking at the Spark executor logs: since the HBase being accessed lives in another cluster, the hosts entries had only been added on the local machine. From a Spark executor node, telnet to the HBase address and port was refused on reconnect rather than failing with "hostname not found", which was puzzling; the suspicion is that the hosts files on the other nodes were never updated. Without permission to modify those hosts files, this particular issue was left unresolved.

Fixes: update the hosts files, and point the HBase client at the secure znode:

hc.set("zookeeper.znode.parent", "/hbase-secure")
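The equivalent client-side hbase-site.xml fragment might look like the sketch below; the ZooKeeper quorum hosts are placeholders, not the actual nodes of the remote cluster:

```xml
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>zk-host-1,zk-host-2,zk-host-3</value>
</property>
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>
<property>
  <name>zookeeper.znode.parent</name>
  <value>/hbase-secure</value>
</property>
```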

Problem 5: executor-side failures in yarn-client mode

Running the test code locally, and running in local mode on a cluster node, both pass Kerberos authentication; the failure only appears with yarn-client. The driver-side logs show authentication succeeding, but the executor side fails. The Spark UI stage page shows the error logs:

20/11/06 10:11:59 ERROR utils.MyTableInputFormat_ugi: ------Start MyTableInputFortmatUGI-----
20/11/06 10:11:59 ERROR utils.MyTableInputFormat_ugi: get Kerberos realm: WC1.HBASE.COM
20/11/06 10:11:59 ERROR utils.MyTableInputFormat_ugi: get Kerberos realm: /data/v01/wc1/krb5.conf
20/11/06 10:11:59 ERROR utils.MyTableInputFormat_ugi: HBase client scan came from : file:/data/v01/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hbase-client-1.2.0-cdh5.10.0.jar!/org/apache/hadoop/hbase/client/Scan.class
20/11/06 10:11:59 ERROR utils.MyTableInputFormat_ugi: username: ods
20/11/06 10:11:59 ERROR utils.MyTableInputFormat_ugi: keytabFile: /data/v01/wc1/wc1-ods.keytab
20/11/06 10:11:59 ERROR utils.hbase_KerberorsJavaUtil_ugi: =====put the logined userinfomationUGI to user====
20/11/06 10:11:59 ERROR utils.hbase_KerberorsJavaUtil_ugi: ------Start Get HBaseAuthenticationUGI-----
20/11/06 10:11:59 ERROR utils.hbase_KerberorsJavaUtil_ugi: master_principal: hbase/_HOST@WC1.HBASE.COM
20/11/06 10:11:59 ERROR utils.hbase_KerberorsJavaUtil_ugi: regionserver_principal: hbase/_HOST@WC1.HBASE.COM
20/11/06 10:11:59 ERROR utils.hbase_KerberorsJavaUtil_ugi: ------dev_yx.keytab path is---/data/v01/wc1/krb5.conf
20/11/06 10:11:59 ERROR utils.hbase_KerberorsJavaUtil_ugi: get Kerberos realm: null
20/11/06 10:11:59 ERROR utils.hbase_KerberorsJavaUtil_ugi: System get Kerberos realm: WC1.HBASE.COM
20/11/06 10:11:59 ERROR utils.hbase_KerberorsJavaUtil_ugi: System get Kerberos kdc: wc1.server.ambari
20/11/06 10:11:59 ERROR utils.hbase_KerberorsJavaUtil_ugi: ===========loginUserFromKeytab username ods
20/11/06 10:11:59 ERROR utils.hbase_KerberorsJavaUtil_ugi: ------HadoopKerberosName.getRules-----RULE:[1:$1@$0](ods@WC1.HBASE.COM)s/.*/yqq/
20/11/06 10:11:59 ERROR utils.hbase_KerberorsJavaUtil_ugi: UserGroupInformation.getLoginUser0: yarn (auth:SIMPLE)
20/11/06 10:11:59 ERROR utils.hbase_KerberorsJavaUtil_ugi: UserGroupInformation.getCurrentUser0: yqq (auth:SIMPLE)
20/11/06 10:11:59 ERROR utils.hbase_KerberorsJavaUtil_ugi: /data/v01/wc1/wc1-ods.keytab file:false
20/11/06 10:11:59 ERROR utils.hbase_KerberorsJavaUtil_ugi: /data/v01/wc1/wc1-ods.keytab canRead file:false
20/11/06 10:11:59 ERROR utils.hbase_KerberorsJavaUtil_ugi: Get HBaseAuthentication Failed
java.io.IOException: Login failure for ods from keytab /data/v01/wc1/wc1-ods.keytab
Caused by: javax.security.auth.login.LoginException: Unable to obtain password from user
    at com.sun.security.auth.module.Krb5LoginModule.promptForPass(Krb5LoginModule.java:897)
    at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:760)
    at com.sun.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:617)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at javax.security.auth.login.LoginContext.invoke(LoginContext.java:755)
    at javax.security.auth.login.LoginContext.access$000(LoginContext.java:195)
    at javax.security.auth.login.LoginContext$4.run(LoginContext.java:682)
    at javax.security.auth.login.LoginContext$4.run(LoginContext.java:680)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:680)
    at javax.security.auth.login.LoginContext.login(LoginContext.java:587)
    at org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytabAndReturnUGI(UserGroupInformation.java:1261)
    ... 32 more
20/11/06 10:11:59 ERROR utils.hbase_KerberorsJavaUtil_ugi: UserGroupInformation.getLoginUser: yarn (auth:SIMPLE)
20/11/06 10:11:59 ERROR utils.hbase_KerberorsJavaUtil_ugi: UserGroupInformation.getCurrentUser: yqq (auth:SIMPLE)
20/11/06 10:11:59 ERROR executor.Executor: Exception in task 0.0 in stage 0.0 (TID 0)
java.lang.NoSuchMethodError: org.apache.hadoop.hbase.client.Scan.setCacheBlocks(Z)Lorg/apache/hadoop/hbase/client/Scan;
    at org.apache.spark.sql.execution.datasources.utils.MyTableInputFormat_ugi.setConf(MyTableInputFormat_ugi.java:189)
    at org.apache.spark.rdd.NewHadoopRDD$$anon$1.<init>(NewHadoopRDD.scala:189)

From the logs, two separate problems:

1. Kerberos authentication fails on the executor side.

2. The executor side has an hbase-client jar conflict.

java.net.URL res = MyTableInputFormat.class.getClassLoader()
        .getResource("org/apache/hadoop/hbase/client/Scan.class");
System.out.println("HBase client scan came from " + res.getPath());
LOG.error("HBase client scan came from : " + res.getPath());

Both problems become visible by checking in code whether the keytab file is readable, and by printing which hbase-client jar the Scan class was actually loaded from.
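A minimal, self-contained version of that keytab pre-flight check might look like this sketch (the path is the one from the logs above; substitute your own):

```java
import java.io.File;

// Pre-flight check before calling loginUserFromKeytab: verify the keytab
// actually exists and is readable by the current OS user (e.g. the 'yarn'
// user on executors in yarn-client mode).
public class KeytabCheck {

    /** Returns null if the keytab is usable, otherwise a description of the problem. */
    static String checkKeytab(String path) {
        File f = new File(path);
        if (!f.exists())  return path + " does not exist on this node";
        if (!f.canRead()) return path + " is not readable by user " + System.getProperty("user.name");
        return null;
    }

    public static void main(String[] args) {
        String problem = checkKeytab("/data/v01/wc1/wc1-ods.keytab");
        System.out.println(problem == null ? "keytab OK" : problem);
    }
}
```

Logging this before the login attempt turns the opaque "Unable to obtain password from user" into an actionable file-permission message.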

With yarn-client, the current user is switched to the yarn user, which could not access the keytab file, hence the login error. And the executors were loading the CDH-bundled hbase-client jar rather than the uploaded open-source one.

Reference: "Spark connecting to HBase that requires Kerberos authentication"

I overrode the doAs method to hand out the connection, but yarn-client still failed; after chmod 777 on the keytab file and the jars, it ran successfully.

The hbase-client jar conflict was solved by adding to the launch command:

--conf "spark.driver.extraClassPath=/data/v01/mlsql_1.6/updown/jar/*" \
--conf "spark.executor.extraClassPath=/data/v01/mlsql_1.6/updown/jar/*" \
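Pulling the fixes together, a complete launch command might look like the sketch below. The application jar, main class, and the --files line are assumptions (this note actually distributed krb5.conf and the keytab to every node by hand); the --conf values are the ones used above:

```shell
spark-submit \
  --master yarn \
  --deploy-mode client \
  --conf "spark.driver.extraJavaOptions"="-Djava.security.krb5.conf=/home/yqq/wc1/krb5.conf" \
  --conf "spark.executor.extraJavaOptions"="-Djava.security.krb5.conf=/home/yqq/wc1/krb5.conf" \
  --conf "spark.driver.extraClassPath=/data/v01/mlsql_1.6/updown/jar/*" \
  --conf "spark.executor.extraClassPath=/data/v01/mlsql_1.6/updown/jar/*" \
  --files /home/yqq/wc1/krb5.conf,/home/yqq/wc1/wc1-ods.keytab \
  --class your.main.Class \
  your-application.jar
```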

Conflicts between the application's jars and Spark's lib jars, and their load order

Jar conflicts when running Spark on YARN

spark-submit parameter reference (on YARN)

Which jars Spark on YARN loads at runtime

The spark.executor.userClassPathFirst and spark.driver.userClassPathFirst options threw errors when tried; possibly they only worked before Spark 1.3.

Exceptions when operating HBase from Spark 2: java.lang.NoSuchMethodError

Helps locate which conflicting jar is responsible.

Hadoop Kerberos authentication: UserGroupInformation.doAs

doAs keeps the authenticated identity even when the process user is switched.

Scanning Kerberos-secured HBase in Spark-on-YARN mode

Not used here, but worth consulting.

Reference for rewriting UserGroupInformation.doAs:

[Operating HBase from Java code under Kerberos (client mode and application mode)]

Elegantly resolving Spark application jar conflicts

Solves jar conflicts through the way the application jar is packaged.

Bookmarked links:

Spark deployment modes explained (Local, Standalone, YARN)

Using Kerberos-authenticated HBase in a Spark environment

Also overrides the TableInputFormat methods.
