sql server - How to import data from SQL Server to HBase without MapReduce or Phoenix?


How can I import data from SQL Server into HBase, without using MapReduce or Phoenix?

I have tried several different methods.

I can load the SQL Server table into a DataFrame, but I don't know how to save that DataFrame into HBase.

I have been trying all morning.

Reference link: https://hbase.apache.org/book.html#_sparksql_dataframes

But it looks like I need to define a catalog again.
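For reference, a catalog in the format described in the linked HBase book chapter is a JSON string mapping DataFrame columns to HBase column families. This is a minimal sketch only: the columns `id` and `name` and the column family `cf` are hypothetical and must be replaced with the real schema of `testsparkhbase`.

```scala
// Hypothetical catalog for a table "testsparkhbase" whose DataFrame has
// columns "id" (used as the row key) and "name" (stored in family "cf").
val catalog =
  """{
    |"table":{"namespace":"default", "name":"testsparkhbase"},
    |"rowkey":"key",
    |"columns":{
    |  "id":{"cf":"rowkey", "col":"key", "type":"string"},
    |  "name":{"cf":"cf", "col":"name", "type":"string"}
    |}
    |}""".stripMargin

// With an hbase-spark connector build that matches your Spark version on
// the classpath, the write would then look like this (commented out here
// because it needs a running HBase cluster):
// import org.apache.hadoop.hbase.spark.datasources.HBaseTableCatalog
// jdbcDF.write
//   .options(Map(HBaseTableCatalog.tableCatalog -> catalog,
//                HBaseTableCatalog.newTable -> "5"))
//   .format("org.apache.hadoop.hbase.spark")
//   .save()
```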

If I don't define one, and instead just write this:

val jdbcDF = spark.read.format("jdbc")
  .option("url", "jdbc:sqlserver://192.168.1.21;username=sa;password=yishidb;database=cdrdb16")
  .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
  .option("dbtable", "testsparkhbase")
  .load()

jdbcDF.write.format("org.apache.hadoop.hbase.spark").mode("overwrite").save()

or

jdbcDF.write.options(Map("table" -> "testsparkhbase", "zkUrl" -> "hadoop001:2181")).format("org.apache.hadoop.hbase.spark").mode("overwrite").save()

Both of them gave me an exception like this:

scala> jdbcDF.write.format("org.apache.hadoop.hbase.spark").mode("overwrite").save()
java.lang.AbstractMethodError: org.apache.hadoop.hbase.spark.DefaultSource.createRelation(Lorg/apache/spark/sql/SQLContext;Lorg/apache/spark/sql/SaveMode;Lscala/collection/immutable/Map;Lorg/apache/spark/sql/Dataset;)Lorg/apache/spark/sql/sources/BaseRelation;
  at org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:472)
  at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:48)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:138)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:135)
  at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:116)
  at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:92)
  at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:92)
  at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:610)
  at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:233)
... 48 elided

