
WAMP: enable mod_rewrite

To get mod_rewrite working with Apache under WAMP:

In httpd.conf, change the following block from:
#
# AllowOverride controls what directives may be placed in .htaccess files.
# It can be "All", "None", or any combination of the keywords:
# Options FileInfo AuthConfig Limit
#
AllowOverride None

to:

#
# AllowOverride controls what directives may be placed in .htaccess files.
# It can be "All", "None", or any combination of the keywords:
# Options FileInfo AuthConfig Limit
#
AllowOverride All
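
Alternatively, if you would rather not allow .htaccess overrides everywhere, the same setting can be scoped to just the document root. A minimal sketch, assuming the default WAMP document root of c:/wamp/www (adjust the path to match your install):

# allow .htaccess overrides only under the web root
<Directory "c:/wamp/www/">
    AllowOverride All
</Directory>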

Then uncomment the following line in httpd.conf (remove the leading #):

LoadModule rewrite_module modules/mod_rewrite.so
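
To confirm the module actually loads, you can ask Apache to list its loaded modules from a command prompt. The Apache version in the path below is a placeholder; substitute the one in your WAMP install:

cd c:\wamp\bin\apache\apache2.x\bin
httpd -M | findstr rewrite

If the change took effect, the output should include a line like rewrite_module (shared).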

Restart Apache!
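
To verify everything end to end, drop a minimal .htaccess into the document root. This is just a sketch; hello and hello.php are placeholder names:

# c:\wamp\www\.htaccess
RewriteEngine On
# rewrite requests for /hello to hello.php
RewriteRule ^hello$ hello.php [L]

If http://localhost/hello now serves hello.php, mod_rewrite is working. A 500 Internal Server Error usually means the module is not loaded (RewriteEngine is an unknown directive), while a plain 404 suggests the .htaccess file is being ignored (AllowOverride is still None).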
