hadoop

29 Mar: Playing with Stack Overflow data

I used data from Stack Overflow to gauge the interest in some of the products I follow (yes, HBase, Spark and others). The interest is calculated for each month over the last 5 years and is based on the number of posts and replies associated with a tag (e.g. hdfs, elasticsearch and so on). Remember that Stack Overflow is a (huge) developer community with questions about programming, so the results are inherently biased. Indeed,…
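
The post does not reproduce its code here, but as an illustration, here is a minimal sketch of how such a per-tag, per-month count could be computed against the Posts.xml file of the Stack Exchange data dump (the tag and file path are just examples; counting replies as well would require joining answers to their questions via ParentId):

```java
import java.io.FileInputStream;
import java.time.YearMonth;
import java.util.Map;
import java.util.TreeMap;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class TagInterest {
    public static void main(String[] args) throws Exception {
        // In the dump, the Tags attribute holds values like "<hbase><hadoop>"
        String tag = "<hbase>";
        Map<YearMonth, Integer> perMonth = new TreeMap<>();
        XMLStreamReader xml = XMLInputFactory.newInstance()
                .createXMLStreamReader(new FileInputStream("Posts.xml"));
        while (xml.hasNext()) {
            if (xml.next() == XMLStreamConstants.START_ELEMENT
                    && "row".equals(xml.getLocalName())) {
                String tags = xml.getAttributeValue(null, "Tags");
                String date = xml.getAttributeValue(null, "CreationDate");
                // Only questions carry Tags; answers would need a join on ParentId
                if (tags != null && tags.contains(tag) && date != null) {
                    perMonth.merge(YearMonth.parse(date.substring(0, 7)), 1, Integer::sum);
                }
            }
        }
        perMonth.forEach((month, count) -> System.out.println(month + "\t" + count));
    }
}
```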

23 Feb: Testing BigData projects

Writing tests that use a traditional database is hard, but writing tests for a project using Hadoop is even harder. Hadoop stacks are complex pieces of software, and if you want to test your Hadoop projects, it may be a real nightmare:
– many components are involved: you are not just using HBase, but HBase, Zookeeper and a DFS
– a lot of configuration is needed
– cleaning the data of the previous tests relies on many…
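
As a taste of one way to tame this (a minimal sketch, assuming a recent HBase client and the hbase-testing-util artifact; the table and column names are illustrative), HBaseTestingUtility can start HBase, Zookeeper and a mini DFS inside a single JVM:

```java
import static org.junit.Assert.assertArrayEquals;

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;

public class HBaseMiniClusterTest {

    private static final HBaseTestingUtility HTU = new HBaseTestingUtility();

    @BeforeClass
    public static void setup() throws Exception {
        // Spins up an in-process HBase + Zookeeper + mini DFS cluster
        HTU.startMiniCluster();
    }

    @AfterClass
    public static void teardown() throws Exception {
        HTU.shutdownMiniCluster();
    }

    @Test
    public void putThenGet() throws Exception {
        Table table = HTU.createTable(TableName.valueOf("test"), Bytes.toBytes("cf"));
        table.put(new Put(Bytes.toBytes("row1"))
                .addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v")));
        byte[] value = table.get(new Get(Bytes.toBytes("row1")))
                .getValue(Bytes.toBytes("cf"), Bytes.toBytes("q"));
        assertArrayEquals(Bytes.toBytes("v"), value);
    }
}
```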

23 Dec: HBase: having fun with the shell

The HBase shell is a full interactive JRuby shell (IRB) providing tools that allow you to query your data or execute admin commands on an HBase cluster. Since it uses JRuby, this shell is a powerful interactive scripting environment. This post is not about presenting the commands available in the shell, for which you can easily find documentation or articles on the Internet, but about the possibilities of the shell. Add custom command: Actually, there is no easy way…

06 Dec: Knox in production: avoid pitfalls and common mistakes

I’ve already posted articles about Knox some weeks ago on two subjects: how to use the HBase REST API through Knox and how to submit Spark jobs via the Knox API. In my current mission, many projects are now using Knox as the main gateway for many services like HBase and HDFS, but also for Oozie, Yarn… After some weeks of development and deployment in production, I’ve decided to write a post about some troubles that you may…

19 Nov: Working with Parquet files

Apache Parquet is a columnar storage format available for most of the data processing frameworks in the Hadoop ecosystem: Hive, Pig, Spark, Drill, Arrow, Apache Impala, Cascading, Crunch, Tajo… and many more! In Parquet, the data is compressed column by column. This means that commands like these: hdfs dfs -cat hdfs://nn1.example.com/file1 hdfs dfs -text /…/file2 cannot work on Parquet files; all you will see is binary chunks in your terminal. Thankfully, Parquet provides an…
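
Since cat and text cannot decode the format, you need a Parquet-aware reader. As one illustration (a minimal sketch, not necessarily the tool the full post introduces; the file path is hypothetical), records can be dumped with AvroParquetReader:

```java
import org.apache.avro.generic.GenericRecord;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.avro.AvroParquetReader;
import org.apache.parquet.hadoop.ParquetReader;

public class ParquetDump {
    public static void main(String[] args) throws Exception {
        // Hypothetical HDFS path to a Parquet file
        Path file = new Path("hdfs://nn1.example.com/file1.parquet");
        try (ParquetReader<GenericRecord> reader =
                AvroParquetReader.<GenericRecord>builder(file).build()) {
            GenericRecord record;
            while ((record = reader.read()) != null) {
                // Prints each record in a readable, JSON-like form
                System.out.println(record);
            }
        }
    }
}
```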

16 Nov: Using HBase REST API with the Knox Java client

I’ve already introduced Knox in a previous post in order to deploy Spark jobs with Knox using the Java client. This post is still about the Knox Java client, but we’ll see here another usage with HBase. HBase provides a well-documented and rich REST API with many endpoints exposing the data in various formats (JSON, XML and Protobuf!). First, we need to import the dependencies for the Knox Java client: <dependency> <groupId>org.apache.knox</groupId> <artifactId>gateway-shell</artifactId> <version>0.10.0</version> </dependency>…
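
To give an idea of what the client looks like, here is a minimal sketch using the gateway-shell HBase DSL of that era (package names match Knox 0.10.0; the gateway URL and credentials are placeholders):

```java
import org.apache.hadoop.gateway.shell.Hadoop;
import org.apache.hadoop.gateway.shell.hbase.HBase;

public class KnoxHBaseVersion {
    public static void main(String[] args) throws Exception {
        // Placeholder gateway URL and credentials
        Hadoop session = Hadoop.login(
                "https://knox.example.com:8443/gateway/default", "guest", "guest-password");
        try {
            // Proxied call to the HBase REST "version" endpoint; prints the raw body
            String version = HBase.session(session).systemVersion().now().getString();
            System.out.println(version);
        } finally {
            session.shutdown();
        }
    }
}
```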

09 Nov: Submitting Spark Job via Knox on Yarn

Apache Knox is a REST API Gateway for interacting with Apache Hadoop clusters. It offers an extensible reverse proxy securely exposing REST APIs and HTTP-based services on any Hadoop platform. Although Knox is not designed to be a channel for high-volume data ingest or export, it is perfectly suited for exposing a single entry point to your cluster and can be seen as a bastion for all your applications. One of the possible use cases of Knox…
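
As a rough sketch of the mechanics (assuming the Yarn DSL shipped with gateway-shell around Knox 0.10; the method names are taken from the Knox user guide of that era and should be treated as an assumption, as should the URL and credentials), submission goes through the ResourceManager API proxied by the gateway:

```java
import org.apache.hadoop.gateway.shell.Hadoop;
import org.apache.hadoop.gateway.shell.yarn.Yarn;

public class KnoxYarnSubmit {
    public static void main(String[] args) throws Exception {
        // Placeholder gateway URL and credentials
        Hadoop session = Hadoop.login(
                "https://knox.example.com:8443/gateway/default", "guest", "guest-password");
        try {
            // Step 1: ask the ResourceManager (behind Knox) for a new application id;
            // the JSON response contains an "application-id" field
            System.out.println(Yarn.newApp(session).now().getString());

            // Step 2: submit the application definition; the JSON body (elided here)
            // carries the application id and an am-container-spec whose command
            // launches the Spark ApplicationMaster
            String definition = "{ \"application-id\": \"...\", \"am-container-spec\": {} }";
            Yarn.submitApp(session).text(definition).now();
        } finally {
            session.shutdown();
        }
    }
}
```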

25 Nov: How to kill Hadoop jobs matching a pattern?

Today, I had to kill a list of jobs (45) running on my Hadoop cluster. Ok, let’s have a look at the docs http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/CommandsManual.html#job But wait a minute… No, Hadoop knows the “kill” command, but not “pkill”… One solution is: import java.io.IOException; import org.apache.commons.cli.CommandLine; import org.apache.commons.cli.CommandLineParser; import org.apache.commons.cli.HelpFormatter; import org.apache.commons.cli.Options; import org.apache.commons.cli.ParseException; import org.apache.commons.cli.PosixParser; import org.apache.commons.lang.ArrayUtils; import org.apache.commons.lang.StringUtils; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.mapred.JobClient; import org.apache.hadoop.mapred.JobStatus; import org.apache.hadoop.mapred.RunningJob; import org.slf4j.Logger; import org.slf4j.LoggerFactory; public class PKill { private final…
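
The class is cut off above, but the core idea is easy to reconstruct. Here is a condensed sketch of the same approach (my own illustration, not the post’s full PKill class): list all jobs via JobClient and kill the running ones whose name matches a regex passed as an argument:

```java
import java.io.IOException;
import java.util.regex.Pattern;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.JobStatus;
import org.apache.hadoop.mapred.RunningJob;

public class PKillSketch {
    public static void main(String[] args) throws IOException {
        Pattern pattern = Pattern.compile(args[0]); // e.g. "^select_.*"
        JobClient client = new JobClient(new JobConf(new Configuration()));
        for (JobStatus status : client.getAllJobs()) {
            RunningJob job = client.getJob(status.getJobID());
            if (job != null && !job.isComplete()
                    && pattern.matcher(job.getJobName()).matches()) {
                System.out.println("Killing " + job.getID() + " (" + job.getJobName() + ")");
                job.killJob();
            }
        }
    }
}
```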

17 Oct: Myrrix, the REST-ified Mahout for real-time recommendations

Myrrix is a complete, real-time, scalable clustering and recommender system, evolved from Apache Mahout. The full Myrrix system uses two components: a Computation Layer and one or more Serving Layers. The Computation Layer computes the large machine-learning models needed by the Serving Layer, while the Serving Layer is a Java HTTP server application. This server serves user requests in real time, making recommendations and receiving new input via a REST API. Many instances of the Serving Layer…
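
To make the REST API concrete, here is a minimal sketch of querying a Serving Layer for recommendations over plain HTTP (the host, port, user id and the exact /recommend endpoint are from memory of the Myrrix docs and should be treated as assumptions):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class MyrrixRecommend {
    public static void main(String[] args) throws Exception {
        // Placeholder Serving Layer host/port and user id
        URL url = new URL("http://localhost:8080/recommend/12345");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            // Each line pairs an item id with an estimated recommendation strength
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        } finally {
            conn.disconnect();
        }
    }
}
```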

09 Apr: Transfer files from Hadoop to a remote server via ssh

When working with Hadoop, you produce files in HDFS. In order to copy them to one of your remote servers, you first have to use the get or copyToLocal command to copy the files to your local filesystem and then use an scp command. But this two-step process is not really efficient, since you are double-copying the files. sshj is a pure Java implementation of SSHv2 allowing you to connect to an sshd server…
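
A minimal sketch of the single-copy idea (assuming sshj and a Hadoop FileSystem client; the paths, host and credentials are placeholders): wrap the HDFS input stream in an sshj source file so the bytes go straight from the DFS to the remote server without touching the local disk:

```java
import net.schmizz.sshj.SSHClient;
import net.schmizz.sshj.sftp.SFTPClient;
import net.schmizz.sshj.transport.verification.PromiscuousVerifier;
import net.schmizz.sshj.xfer.InMemorySourceFile;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsToSsh {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path src = new Path("/data/output/part-00000"); // placeholder HDFS path
        FileStatus status = fs.getFileStatus(src);

        SSHClient ssh = new SSHClient();
        // Skips host key checking for brevity; do not do this in production
        ssh.addHostKeyVerifier(new PromiscuousVerifier());
        ssh.connect("remote.example.com");              // placeholder host
        try {
            ssh.authPassword("user", "secret");         // placeholder credentials
            try (SFTPClient sftp = ssh.newSFTPClient()) {
                sftp.put(new InMemorySourceFile() {
                    @Override public String getName() { return src.getName(); }
                    @Override public long getLength() { return status.getLen(); }
                    @Override public java.io.InputStream getInputStream() throws java.io.IOException {
                        return fs.open(src);            // streams directly from HDFS
                    }
                }, "/tmp/" + src.getName());
            }
        } finally {
            ssh.disconnect();
        }
    }
}
```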