Scala - java.net.BindException: Cannot assign requested address: no further information while trying to connect to HDFS
The problem: I'm configuring the HDFS connector like this:
conf.set(HDFS_DEFAULT_FS, connectorConf.getString(HDFS_DEFAULT_FS))
conf.set(HDFS_DEFAULT_NAME, conf.get(HDFS_DEFAULT_FS))
conf.set(HDFS_NAMESERVICES_KEY, NAMESERVICE_PATTERN.findFirstIn(connectorConf.getString(HDFS_DEFAULT_FS)).get)
conf.set(HDFS_SERVICE_NAMENODES, connectorConf.getString(HDFS_NAMENODES_NAMES))

val nameNodeList = connectorConf.getString(HDFS_NAMENODES_NAMES).split(COMMA)
nameNodeList.foreach(nameNode => {
  conf.set(HDFS_NAMENODE_RPC_ADDRES + nameNode, connectorConf.getString(HDF_NAMENODE_ADDRESS + nameNode))
})

conf.set(HDFS_FAILOVER_PROXY_PROVIDER_KEY, HDFS_FAILOVER_PROXY_PROVIDER_VALUE)
conf.addResource(connectorConf.getString(HDFS_CORE))
conf.addResource(connectorConf.getString(HDFS_SITE))
conf.set(HDFS_USE_DATANODE, connectorConf.getString(HDFS_USE_DATANODE))
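For context, here is a minimal sketch of the equivalent HA client configuration with the literal Hadoop property keys written out. I am assuming the constants above resolve to these standard keys; the nameservice ID and NameNode hosts are placeholders, not values from my setup:

import org.apache.hadoop.conf.Configuration

// Hypothetical values standing in for what the property file provides.
val nameservice = "mycluster"
val nameNodes = Seq(
  "nn1" -> "namenode1.example.com:8020",
  "nn2" -> "namenode2.example.com:8020"
)

val conf = new Configuration()
conf.set("fs.defaultFS", s"hdfs://$nameservice")
conf.set("dfs.nameservices", nameservice)
conf.set(s"dfs.ha.namenodes.$nameservice", nameNodes.map(_._1).mkString(","))

// One RPC address entry per NameNode, keyed by nameservice and NameNode ID.
nameNodes.foreach { case (id, address) =>
  conf.set(s"dfs.namenode.rpc-address.$nameservice.$id", address)
}

// Client-side failover proxy provider for the HA nameservice.
conf.set(s"dfs.client.failover.proxy.provider.$nameservice",
  "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider")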
Two NameNodes are configured in the property file. So the question is: why do I receive this exception? Maybe because of changes to the Hadoop configuration in core-site.xml or hdfs-site.xml? The code worked fine previously, and I can't explain what happened.
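One way to check whether the XML files are now overriding the programmatic settings is to dump the effective values after addResource has been called. A minimal sketch, assuming the conf built above and the placeholder nameservice "mycluster":

// Configuration.get returns the final value after all addResource calls,
// so a changed core-site.xml / hdfs-site.xml would show up here.
val keysToCheck = Seq(
  "fs.defaultFS",
  "dfs.nameservices",
  "dfs.ha.namenodes.mycluster",
  "dfs.namenode.rpc-address.mycluster.nn1",
  "dfs.namenode.rpc-address.mycluster.nn2",
  "dfs.client.failover.proxy.provider.mycluster"
)
keysToCheck.foreach { key =>
  println(s"$key = ${conf.get(key)}")
}

The exception that comes back is: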
Problem binding to [czc1434zht/144.5.82.125:0] java.net.BindException: Cannot assign requested address: no further information; For more details see: http://wiki.apache.org/hadoop/BindException
  at sun.reflect.GeneratedConstructorAccessor83.newInstance(Unknown Source)
  at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
  at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:791)
  at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:720)
  at org.apache.hadoop.ipc.Client.call(Client.java:1472)
  at org.apache.hadoop.ipc.Client.call(Client.java:1399)
  at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
  at com.sun.proxy.$Proxy29.getListing(Unknown Source)
  at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing(ClientNamenodeProtocolTranslatorPB.java:554)
  at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
  at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
  at com.sun.proxy.$Proxy30.getListing(Unknown Source)
  at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1969)
  at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1952)
  at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:693)
  at org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:105)
  at org.apache.hadoop.hdfs.DistributedFileSystem$15.doCall(DistributedFileSystem.java:755)
  at org.apache.hadoop.hdfs.DistributedFileSystem$15.doCall(DistributedFileSystem.java:751)
  at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
  at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:751)
  at org.endali.catalog.plugins.connector.HdfsConnector.getObjectList(HdfsConnector.scala:134)
  at org.endali.catalog.plugins.connector.HdfsConnector$$anonfun$getObjectsList$1.apply(HdfsConnector.scala:111)
  at org.endali.catalog.plugins.connector.HdfsConnector$$anonfun$getObjectsList$1.apply(HdfsConnector.scala:111)
  at scala.Option.fold(Option.scala:158)
  at org.endali.catalog.plugins.connector.HdfsConnector.getObjectsList(HdfsConnector.scala:111)
  at org.endali.catalog.services.connector.ConnectorService.getObjectsList(ConnectorService.scala:53)
  at org.endali.catalog.services.objects.ImportController$$anonfun$getObjectsList$1$$anonfun$apply$3.apply(ImportController.scala:90)
  at org.endali.catalog.services.objects.ImportController$$anonfun$getObjectsList$1$$anonfun$apply$3.apply(ImportController.scala:89)
  at play.api.libs.json.JsResult$class.fold(JsResult.scala:72)
  at play.api.libs.json.JsSuccess.fold(JsResult.scala:9)
  at org.endali.catalog.services.objects.ImportController$$anonfun$getObjectsList$1.apply(ImportController.scala:85)
  at org.endali.catalog.services.objects.ImportController$$anonfun$getObjectsList$1.apply(ImportController.scala:83)
  at play.api.mvc.Action$.invokeBlock(Action.scala:498)
  at play.api.mvc.Action$.invokeBlock(Action.scala:495)
  at play.api.mvc.ActionBuilder$$anon$2.apply(Action.scala:458)
  at play.api.mvc.Action$$anonfun$apply$2$$anonfun$apply$5$$anonfun$apply$6.apply(Action.scala:112)
  at play.api.mvc.Action$$anonfun$apply$2$$anonfun$apply$5$$anonfun$apply$6.apply(Action.scala:112)
  at play.utils.Threads$.withContextClassLoader(Threads.scala:21)
  at play.api.mvc.Action$$anonfun$apply$2$$anonfun$apply$5.apply(Action.scala:111)
  at play.api.mvc.Action$$anonfun$apply$2$$anonfun$apply$5.apply(Action.scala:110)
  at scala.Option.map(Option.scala:146)
  at play.api.mvc.Action$$anonfun$apply$2.apply(Action.scala:110)
  at play.api.mvc.Action$$anonfun$apply$2.apply(Action.scala:103)
  at scala.concurrent.Future$$anonfun$flatMap$1.apply(Future.scala:253)
  at scala.concurrent.Future$$anonfun$flatMap$1.apply(Future.scala:251)
  at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
  at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
  at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91)
  at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
  at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
  at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
  at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:90)
  at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:39)
  at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:405)
  at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
  at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
  at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
  at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Also, when running the code on the machine, I received:
Couldn't create proxy provider class org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
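As far as I understand, that message comes from the HDFS client failing to instantiate the configured failover proxy provider, which means the provider key must be suffixed with the nameservice ID and the class must be on the classpath. A minimal sketch of the expected setting, with "mycluster" again a placeholder:

// The key must include the nameservice ID; the value is the fully qualified class name.
conf.set("dfs.client.failover.proxy.provider.mycluster",
  "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider")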
Thanks in advance.