
HDFS Client Configs for talking to HA Hadoop NameNodes



One more simple thing that had relatively scarce documentation out on the Internet.

As you might know, Hadoop NameNodes finally became HA in Hadoop 2.0. With that, the HDFS client configuration, which was already a little tedious, became more complicated.

Traditionally, there were two ways to configure an HDFS client (let's stick to Java):


  1. Copy over the entire Hadoop config directory with all the xml files, and either place it somewhere on your app's classpath or construct a Hadoop Configuration object by manually adding in those files (see the sketch after the list below).
  2. Simply provide the HDFS NameNode URI and let the client do the rest. 

        Configuration conf = new Configuration(false);
        conf.set("fs.default.name", "hdfs://localhost:8020"); // this is deprecated now
        conf.set("fs.defaultFS", "hdfs://localhost:8020");
        FileSystem fs = FileSystem.get(conf);
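
And for option 1, the programmatic variant is a one-liner per file. A minimal sketch, assuming the config files live under /etc/hadoop/conf (adjust the paths for your installation):

        Configuration conf = new Configuration(false);
        // Pull in the cluster's actual config files instead of setting keys by hand
        conf.addResource(new Path("/etc/hadoop/conf/core-site.xml"));
        conf.addResource(new Path("/etc/hadoop/conf/hdfs-site.xml"));
        FileSystem fs = FileSystem.get(conf);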
Most people prefer option 2, unless they need many more settings from the actual xml config files, at which point it actually makes sense to copy the entire directory over.

Now, with NameNodes being HA, which NameNode's URI do you use? The answer: the active NameNode's rpc address. But then your client can fail if the active NameNode transitions to standby or dies.

So, here's how you deal with this: a simple program that copies files between the local filesystem and HDFS.
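
A minimal sketch of such a program, using the standard HDFS client HA settings. The nameservice name (nameservice1), the logical NameNode names (nn1, nn2), the hostnames, and the file paths below are all placeholders; substitute your cluster's values.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HaHdfsCopy {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration(false);
            // Point the client at the logical nameservice, not at any one NameNode
            conf.set("fs.defaultFS", "hdfs://nameservice1");
            conf.set("dfs.nameservices", "nameservice1");
            // The NameNodes backing the nameservice (logical names are arbitrary)
            conf.set("dfs.ha.namenodes.nameservice1", "nn1,nn2");
            conf.set("dfs.namenode.rpc-address.nameservice1.nn1", "namenode1.example.com:8020");
            conf.set("dfs.namenode.rpc-address.nameservice1.nn2", "namenode2.example.com:8020");
            // Tells the client how to locate the active NameNode and fail over
            conf.set("dfs.client.failover.proxy.provider.nameservice1",
                     "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");

            FileSystem fs = FileSystem.get(conf);
            // Round-trip a file: local -> HDFS -> local
            fs.copyFromLocalFile(new Path("/tmp/local.txt"), new Path("/tmp/on-hdfs.txt"));
            fs.copyToLocalFile(new Path("/tmp/on-hdfs.txt"), new Path("/tmp/local-copy.txt"));
            fs.close();
        }
    }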




Basically, you point your fs.defaultFS at your nameservice, and let the client know how that nameservice is configured (the backing NameNodes) and how to fail over between them.
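
If you'd rather keep these settings in the xml config files than set them in code, the equivalent hdfs-site.xml entries look roughly like this (same placeholder names as above; fs.defaultFS goes in core-site.xml):

    <property>
      <name>dfs.nameservices</name>
      <value>nameservice1</value>
    </property>
    <property>
      <name>dfs.ha.namenodes.nameservice1</name>
      <value>nn1,nn2</value>
    </property>
    <property>
      <name>dfs.namenode.rpc-address.nameservice1.nn1</name>
      <value>namenode1.example.com:8020</value>
    </property>
    <property>
      <name>dfs.namenode.rpc-address.nameservice1.nn2</name>
      <value>namenode2.example.com:8020</value>
    </property>
    <property>
      <name>dfs.client.failover.proxy.provider.nameservice1</name>
      <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>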

THE END


Comments

  1. This code is not working for HA.

  2. When using ZooKeeper for maintaining the state of NameNodes (automatic failover), I would think that we should query ZooKeeper for the NameNode address.

  3. What if a SOCKS server was being used to connect to HDFS initially? How can that be used in the case of HA?

  4. Thanks very much. This code is working for me.

