
Setting up a Play Framework application on RedHat's Openshift

Play Framework is an interesting web development option: it uses Netty directly as its web server and provides a basic MVC framework to build web applications on. OpenShift's DIY (do-it-yourself) application type, on the other hand, gives you a bare environment where you can run anything that binds to the IP and port it hands you.

Here's how you marry the two. The documentation out there (the git quick starts) seems to be outdated, but it's really simple and truly DIY, so I thought I would let people know.

1. Install Play 2.0 

Spin up the Play server locally and make sure something renders on localhost.
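If you are starting from scratch, something along these lines gets a skeleton app running locally (myapp is just a placeholder name, and the play launcher is assumed to be on your PATH):

$ play new myapp     # pick the Java or Scala template at the prompt
$ cd myapp
$ play run           # dev server comes up on http://localhost:9000 by default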

2. Open an account with openshift.com and create a DIY application
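If you prefer the command line over the web console, the rhc client can create the DIY app and clone its git repo in one go. A rough sketch (playapp is a placeholder app name, and the exact cartridge name and syntax depend on your rhc version, so double-check against the rhc documentation):

$ rhc app create playapp diy-0.1
$ cd playapp     # this clone is the git repo referred to below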

When you check out the application's git repository, by default you get the following directory structure, with a Ruby script that serves up an index.html page.

$ ls -a 
.. .git .openshift README diy misc
$ ls diy 
logs  index.html  testrubyserver.rb
$ ls .openshift/action_hooks 
build  deploy  post_deploy  pre_build  start  stop

3. Package your Play app

$ play stage
$ target/start
Now, copy the target folder into the diy directory of your OpenShift git repo:
$ cp -rf target $OPENSHIFT_PROJECT_GIT_REPO/diy/
Throw away the placeholder index.html:
$ git rm diy/index.html
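For context, play stage compiles the app and writes a self-contained launcher script plus all the dependency jars under target, which is why copying that one folder is enough and no Play installation is needed on the server. A quick sanity check after the copy (a sketch; exact contents vary by Play version):

$ ls $OPENSHIFT_PROJECT_GIT_REPO/diy/target
# expect at least the 'start' launcher script and a 'staged' directory of jars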

4. Change the start and stop scripts

Make sure the Play server comes up on the IP and port that OpenShift assigns.


$ cat .openshift/action_hooks/start
#!/bin/bash
# The logic to start up your application should be put in this
# script. The application will work only if it binds to
# $OPENSHIFT_DIY_IP:8080
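# -Dhttp.address and -Dhttp.port are standard Play system properties that tell
# the embedded Netty server where to bind; stdout/stderr are redirected to
# app-root/logs/server.log so the app can be debugged later.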
nohup $OPENSHIFT_REPO_DIR/diy/target/start -Dhttp.port=$OPENSHIFT_DIY_PORT -Dhttp.address=$OPENSHIFT_DIY_IP  >> $OPENSHIFT_HOMEDIR/app-root/logs/server.log 2>&1 &


$ cat .openshift/action_hooks/stop
#!/bin/bash
# The logic to stop your application should be put in this script.
kill `ps -ef | grep "play.core.server.NettyServer" | grep -v grep | awk '{ print $2 }'` > /dev/null 2>&1
exit 0
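One thing that is easy to miss: the action hooks are plain shell scripts and need to stay executable for OpenShift to run them. If you edited or recreated them, a chmod before committing does not hurt:

$ chmod +x .openshift/action_hooks/start .openshift/action_hooks/stop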

$ git add diy/target
$ git commit -a 
$ git push
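Once the push finishes, the app should be reachable at your application's public URL, and the server log lands where the start script redirects it. A quick check (appname-namespace.rhcloud.com and the SSH user below are placeholders for your own app's URL and gear UUID):

$ curl -I http://appname-namespace.rhcloud.com/
$ ssh <gear-uuid>@appname-namespace.rhcloud.com 'tail -n 20 app-root/logs/server.log'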



That's it!


