bigdata

# 09 Jul: Back in time: unreliable clocks and distributed computing

Many scalable NoSQL databases like Cassandra, HBase, and MongoDB provide tunable consistency, letting you define a specific guarantee level for each operation. And what makes them scalable also makes them vulnerable: in every case, the whole cluster must run on synchronized clocks. It’s quite surprising that, given how important this is, it is not covered in much detail in the product documentation: one chapter in the HBase documentation, a paragraph in the MongoDB production readiness, a few lines in…
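The excerpt is truncated here, but to illustrate what "tunable consistency per operation" means in practice, here is a minimal sketch with the DataStax Java driver (the contact point, keyspace, and `events` table are hypothetical, not from the post): each statement asks for its own guarantee level.

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;

public class TunableConsistencyExample {
    public static void main(String[] args) {
        // Contact point and keyspace are placeholders for this sketch.
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("demo_ks")) {

            // The consistency level is chosen per statement, not per cluster:
            // QUORUM asks a majority of replicas to acknowledge the read.
            SimpleStatement read = new SimpleStatement("SELECT * FROM events WHERE id = ?", 42);
            read.setConsistencyLevel(ConsistencyLevel.QUORUM);
            ResultSet rows = session.execute(read);
            System.out.println(rows.one());

            // A write can pick a weaker (and faster) guarantee, e.g. ONE.
            SimpleStatement write = new SimpleStatement(
                    "INSERT INTO events (id, payload) VALUES (?, ?)", 43, "hello");
            write.setConsistencyLevel(ConsistencyLevel.ONE);
            session.execute(write);
        }
    }
}
```

The catch the post hints at: when replicas disagree, Cassandra reconciles cell values by timestamp (last write wins), so whatever consistency level you pick is only as trustworthy as the clocks generating those timestamps.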

# 02 Jul: Why (and how) you should stop writing shell scripts

If you worked on a Big Data project, you should have seen, and maybe used, some shell scripts. Honestly, I love hearing “The future is now” while talking about a bunch of scripts scheduled by Oozie, but it seems like we couldn’t create a data project in 2018 without some lets-run-it.sh file. For the last 7 years I have seen many people writing x-SH scripts for various reasons, but the main reason today (at least on Big…
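As a minimal sketch of the alternative, and purely an assumption about what a typical lets-run-it.sh does (clean an HDFS output directory, then spark-submit a jar), here is the same step written in Java, where failures surface as exceptions and checked exit codes instead of silently ignored commands:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.spark.launcher.SparkLauncher;

public class RunDailyJob {
    public static void main(String[] args) throws Exception {
        // Hypothetical output path and jar, standing in for what lets-run-it.sh would handle.
        Path output = new Path("/data/daily/output");

        // Replaces `hadoop fs -rm -r -f $OUTPUT`: errors raise exceptions instead of vanishing.
        FileSystem fs = FileSystem.get(new Configuration());
        fs.delete(output, true);

        // Replaces the `spark-submit ...` line; the result of the launched process is checked.
        Process spark = new SparkLauncher()
                .setAppResource("/opt/jobs/daily-job.jar")
                .setMainClass("com.example.DailyJob")
                .setMaster("yarn")
                .addAppArgs(output.toString())
                .launch();

        int exitCode = spark.waitFor();
        if (exitCode != 0) {
            throw new IllegalStateException("Spark job failed with exit code " + exitCode);
        }
    }
}
```

Whatever the post actually recommends past the cut-off, the point of the sketch is the same: once the glue logic lives in a real language, it can be unit-tested, logged, and retried deliberately.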

# 23 Feb: Testing BigData projects

Writing tests that use a traditional database is hard. But writing tests in a project using Hadoop is even harder. Hadoop stacks are complex pieces of software, and if you want to test your Hadoop projects, it may be a real nightmare:
– many components are involved: you are not just using HBase, but HBase, ZooKeeper and a DFS
– a lot of configuration is needed
– cleaning the data of the previous tests relies on many…
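One common way to tame exactly those three pain points (an illustration, not necessarily what the post goes on to recommend) is HBase’s own HBaseTestingUtility, which boots an in-process mini-cluster, i.e. ZooKeeper, a local DFS and HBase together, so each test run owns a throwaway cluster. The `events` table and `cf` column family below are hypothetical:

```java
import static org.junit.Assert.assertArrayEquals;

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;

public class EventStoreTest {

    private static final HBaseTestingUtility HBASE = new HBaseTestingUtility();

    @BeforeClass
    public static void startCluster() throws Exception {
        // Boots ZooKeeper, a mini DFS and HBase inside the test JVM; no external cluster needed.
        HBASE.startMiniCluster();
    }

    @AfterClass
    public static void stopCluster() throws Exception {
        // Tearing down the mini-cluster also throws away all data written by the tests.
        HBASE.shutdownMiniCluster();
    }

    @Test
    public void writesAndReadsBackACell() throws Exception {
        Table table = HBASE.createTable(TableName.valueOf("events"), Bytes.toBytes("cf"));

        table.put(new Put(Bytes.toBytes("row-1"))
                .addColumn(Bytes.toBytes("cf"), Bytes.toBytes("payload"), Bytes.toBytes("hello")));

        byte[] value = table.get(new Get(Bytes.toBytes("row-1")))
                .getValue(Bytes.toBytes("cf"), Bytes.toBytes("payload"));
        assertArrayEquals(Bytes.toBytes("hello"), value);
    }
}
```

Starting a mini-cluster is slow (easily tens of seconds), which is why the cost is usually paid once per test class in @BeforeClass rather than before every test.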