
Java file.canWrite always returning true

This is going to be very short; it barely warrants a blog post. Anyway, here goes.

So, while trying to set up some JUnit tests on an EC2 instance, I stumbled into this ridiculous failure where file.canWrite() simply returns true, no matter what. Code like the snippet below keeps printing "File is writable!"

import java.io.File;

public class Test1 {
        public static void main(String[] args) throws Exception {
                File file = new File("hello.txt");
                // Create the file, then strip all write permissions
                file.createNewFile();
                file.setReadOnly();
                // Expectation: canWrite() should now return false
                if (file.canWrite()) {
                        System.out.println("File is writable!");
                } else {
                        System.out.println("File is in read only mode!");
                }
        }
}


Turns out, the culprit is that I am running as "root". Since root bypasses permission checks, canWrite() simply returns true even though the file permissions read:


[root@myhost ~]# ls -al hello.txt
-r--r--r--. 1 root root 0 Jan 30 18:52 hello.txt

Kind of subtle, since I would expect the permissions to be the ultimate source of truth.
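If what you actually want is the permission bits themselves, rather than an effective access check (which root will always pass), one option is to read the POSIX permissions directly via NIO. A minimal sketch, assuming a POSIX file system (Linux/macOS); the class name Test2 is just for illustration:

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.PosixFilePermission;
import java.util.Set;

public class Test2 {
        public static void main(String[] args) throws Exception {
                Path path = Paths.get("hello.txt");
                // Read the mode bits themselves, instead of asking the OS
                // whether the current user may write (root always may)
                Set<PosixFilePermission> perms = Files.getPosixFilePermissions(path);
                if (perms.contains(PosixFilePermission.OWNER_WRITE)) {
                        System.out.println("Owner write bit is set");
                } else {
                        System.out.println("File is in read only mode!");
                }
        }
}

Note that the NIO counterpart, Files.isWritable(path), likely exhibits the same behavior as canWrite() when running as root, since it too asks the OS for an effective access check rather than reading the mode bits.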
