
A simple non-blocking event counter


I am writing to share something I implemented recently to monitor streaming operations into the Voldemort server. What I needed was a simple average and event-rate counter.

The idea is very simple. Have atomic longs for the event value sum and the event count, to be updated by the application threads. What's the big deal, right? The deal (not sure how big, depends on who you are) is to have this counter reset itself every so often. This is a very common usage pattern in data systems, where we would like to monitor performance over small periods of time (so we get enough samples to smooth out the outliers) and reset every so often. Also, one should make sure the "monitoring" thread always reports statistics for a complete interval.

The trick is to avoid resetting the atomic longs every interval, since doing so would mean missing some concurrent updates from other application threads in the meantime. Also, since we are maintaining state about the previous interval, it becomes necessary to synchronize briefly on the state update, between the monitoring thread and the one application thread that gets to do the honors of transitioning to the next interval.
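
To make the lost-update problem concrete, here is a tiny illustration of the naive reset; the class and field names are mine, purely for the example:

    import java.util.concurrent.atomic.AtomicLong;

    public class ResetRace {
        static final AtomicLong eventValueSum = new AtomicLong();

        // Naive per-interval reset: any addAndGet() from another thread that
        // lands between the get() and the set(0) below is silently wiped out.
        static long snapshotAndResetBroken() {
            long sum = eventValueSum.get(); // another thread updates here...
            eventValueSum.set(0);           // ...and that update is lost
            return sum;
        }
        // getAndSet(0) closes that window for a single counter, but the sum
        // and the count still cannot be reset together as one atomic action.
    }

Hence the approach below never zeroes the atomics at all; it just remembers what they read at the start of each interval.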

Let me now cut to the chase and present the code itself, which is fairly well documented.
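
What follows is a minimal sketch of the approach described above; the class and method names are illustrative rather than the exact Voldemort code:

    import java.util.concurrent.atomic.AtomicLong;

    /**
     * Tracks an average event value and an event rate over fixed intervals.
     * Application threads update lock-free; the one thread that first crosses
     * an interval boundary briefly synchronizes to roll the interval over.
     */
    public class NonBlockingEventCounter {

        private final long intervalMs;

        // Monotonically increasing totals; never reset, so no updates are lost.
        private final AtomicLong eventValueSum = new AtomicLong(0);
        private final AtomicLong eventCount = new AtomicLong(0);

        // Totals as of the start of the current interval, plus the stats for
        // the last complete interval. Guarded by 'this'.
        private volatile long intervalStartMs;
        private long sumAtIntervalStart;
        private long countAtIntervalStart;
        private double lastIntervalAvg;
        private double lastIntervalRate;

        public NonBlockingEventCounter(long intervalMs) {
            this.intervalMs = intervalMs;
            this.intervalStartMs = System.currentTimeMillis();
        }

        /** Hot path for application threads: two lock-free increments. */
        public void recordEvent(long value) {
            eventValueSum.addAndGet(value);
            eventCount.incrementAndGet();
            maybeRollInterval(System.currentTimeMillis());
        }

        /** Monitoring thread: average value over the last complete interval. */
        public synchronized double getAvg() {
            return lastIntervalAvg;
        }

        /** Monitoring thread: events/sec over the last complete interval. */
        public synchronized double getEventRate() {
            return lastIntervalRate;
        }

        private void maybeRollInterval(long now) {
            // Common case: interval not over yet, so no locking at all.
            if (now - intervalStartMs < intervalMs) {
                return;
            }
            synchronized (this) {
                // Re-check: another thread may have already rolled it over.
                if (now - intervalStartMs < intervalMs) {
                    return;
                }
                long sum = eventValueSum.get();
                long count = eventCount.get();
                long intervalSum = sum - sumAtIntervalStart;
                long intervalCount = count - countAtIntervalStart;
                double elapsedSecs = (now - intervalStartMs) / 1000.0;

                lastIntervalAvg = intervalCount == 0 ? 0.0
                        : intervalSum / (double) intervalCount;
                lastIntervalRate = intervalCount / elapsedSecs;

                // "Reset" by remembering the totals, not by zeroing the atomics.
                sumAtIntervalStart = sum;
                countAtIntervalStart = count;
                intervalStartMs = now;
            }
        }
    }

The monitoring thread simply polls getAvg()/getEventRate() on its own schedule; since those only ever return numbers computed at an interval rollover, it always reports statistics for a complete interval. Note that the sum and count reads inside maybeRollInterval() are two separate atomic reads rather than one combined snapshot, which is an acceptable fuzziness for monitoring purposes.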



A very interesting next challenge would be to implement a non-blocking Histogram. Or at least, I am very interested in studying such implementations.

