Tuesday, February 17, 2015

A Whale, an Elephant, a Dolphin and a Python walked into a bar....

This project started simply as an experiment in trying to execute a Spark job that writes to specific path locations based on partitioned key/value tuples. Once I figured out the usage of rdd.saveAsHadoopFile with a customized MultipleOutputFormat implementation and a customized RecordWriter, I was partitioning and shuffling data in all the right places.
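Roughly, the pattern looks like this in Scala (the path, key types and output format name here are just for illustration; the real job also wires in a customized RecordWriter):

import org.apache.hadoop.io.NullWritable
import org.apache.hadoop.mapred.lib.MultipleTextOutputFormat

// Route each (key, value) pair into a sub-folder named after its key.
class KeyPartitionedOutputFormat extends MultipleTextOutputFormat[Any, Any] {
  // Do not repeat the key inside the file itself.
  override def generateActualKey(key: Any, value: Any): Any = NullWritable.get()
  // The key becomes the sub-folder, 'name' is the part file (e.g. part-00000).
  override def generateFileNameForKeyValue(key: Any, value: Any, name: String): String =
    s"$key/$name"
}

// Assuming an RDD of (partitionKey, line) tuples:
rdd.saveAsHadoopFile("/tmp/trips",
  classOf[String], classOf[String], classOf[KeyPartitionedOutputFormat])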
Though I could read the content of a file in a path, I could not selectively query that content. So to query the data, I needed to map the content to a SQL schema. Enter Hive. It enables me to define a table that is externally mapped by partition to path locations. What makes Hive so neat is that the schema is applied on read rather than on write, very unlike traditional RDBMS systems. Now, to execute HQL statements, I need a fast engine. Enter SparkSQL. It is such an active project, and with all the optimizations that can be applied to the engine, I think it will rival Impala and Hive on Tez !!
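As a rough sketch from the SparkSQL side (table, column and partition names are made up for illustration, and sc is an existing SparkContext):

import org.apache.spark.sql.hive.HiveContext

val hive = new HiveContext(sc)

// Map the partitioned folder layout to an external table, schema on read.
hive.sql("""CREATE EXTERNAL TABLE IF NOT EXISTS trips (plon DOUBLE, plat DOUBLE)
            PARTITIONED BY (yymmdd STRING)
            ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
            LOCATION '/tmp/trips'""")

// Register a partition that the Spark job wrote.
hive.sql("ALTER TABLE trips ADD IF NOT EXISTS PARTITION (yymmdd='150217') LOCATION '/tmp/trips/150217'")

hive.sql("SELECT yymmdd, COUNT(1) FROM trips GROUP BY yymmdd").collect().foreach(println)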
So I came to a point where I can query the data using SQL. But what if the data becomes too big ? Enter HDFS. So now, I need to run HDFS on my Mac. I could download a bloated Hadoop distribution VM like Cloudera QuickStart or the Hortonworks Sandbox, but I just need HDFS (and maybe YARN :-) Enter Docker. I found the perfect Hadoop image from SequenceIQ that just runs HDFS and YARN on a single node. So now, with the small addition of a config file to my classpath, I can write the data into HDFS. And since I have Docker, I can also move the Hive Metastore from the embedded Derby to an external RDBMS. I found a post that describes that and bootstrapped yet another container with a MySQL instance to house the Hive Metastore.
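The config bit is tiny; conceptually it boils down to something like this (the host and port are placeholders for whatever the container exposes):

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

val conf = new Configuration()
// Normally picked up from a core-site.xml on the classpath rather than set in code.
conf.set("fs.defaultFS", "hdfs://sandbox:9000") // placeholder host/port for the container

val fs = FileSystem.get(conf)
fs.mkdirs(new Path("/tmp/trips")) // the Spark job can now target hdfs:///tmp/trips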
Seeing data streaming on the screen like in the Matrix is no fun for me - but placing that data on a map, now that is expressive and can tell a story. Enter ArcMap (on the TODO list is to use Pro). Using a Python Toolbox extension, I can include a library that lets me communicate with SparkSQL to query the data and turn the results into a set of features on the map.

Wow...Here is what the "Zoo" looks like:


And like usual, all the source code and how to do this yourself is available here.

Monday, February 2, 2015

Accumulo and Docker

If you want to experiment with BigData and Accumulo, then you can use Docker to build an image and run a single-node instance using this docker project.

In that container, you will have a single instance of Zookeeper, YARN, HDFS and Accumulo. You can 'hadoop fs -put' files into HDFS, run MapReduce jobs, and start an interactive Accumulo shell.
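For example, here is a rough Scala sketch of connecting to the single-node instance and writing a cell (the instance name, ZooKeeper address and credentials below are placeholders; check the project README for the real ones):

import org.apache.accumulo.core.client.{BatchWriterConfig, ZooKeeperInstance}
import org.apache.accumulo.core.client.security.tokens.PasswordToken
import org.apache.accumulo.core.data.{Mutation, Value}

val instance = new ZooKeeperInstance("accumulo", "localhost:2181") // placeholders
val connector = instance.getConnector("root", new PasswordToken("secret")) // placeholders

if (!connector.tableOperations().exists("test"))
  connector.tableOperations().create("test")

val writer = connector.createBatchWriter("test", new BatchWriterConfig())
val mutation = new Mutation("row1")
mutation.put("cf", "cq", new Value("hello".getBytes("UTF-8")))
writer.addMutation(mutation)
writer.close()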

There was an issue with setting vm.swappiness directly in the docker container, where it was not taking effect. The only way I could make it stick was to set it in the docker daemon environment, so that it is "inherited" (not sure if this is the correct term) by the container.

This project was an experiment for me in the hot topic of container-based applications using docker, and a way to share with colleagues a common running environment for some upcoming Accumulo-based projects.

And so far it has been a success :-) You can pull the image using:

docker pull mraad/accumulo

And like usual, all the source code is here.

Sunday, January 18, 2015

Spark, Cassandra, Tessellation and ArcGIS

If you do BigData and have not heard of or used Spark then…..you are living under a rock!
When executing a Spark job, you can read data from all kinds of sources with schemes like file, hdfs and s3, and you can write data to all kinds of sinks with schemes like file and hdfs.
One BigData repository that I’ve been exploring is Cassandra. The DataStax folks released a Cassandra connector for Spark that enables reading data from and writing data to Cassandra.
I’ve posted on Github a sample project that reads the NYC trip data from a local file and tessellates a hexagonal mosaic with aggregates of pickup locations.  That aggregation is persisted onto Cassandra.
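The core of it is surprisingly small; a trimmed-down sketch looks something like this (the keyspace, table, column names and CSV field indices are assumptions, and toRowCol is a stand-in for the hexagon indexing):

import com.datastax.spark.connector._
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("TripDensity")
  .set("spark.cassandra.connection.host", "localhost") // placeholder host

val sc = new SparkContext(conf)

// Stand-in for the hexagon row/column indexing - the real project snaps to a hex mosaic.
def toRowCol(lon: Double, lat: Double): (Long, Long) =
  (math.floor(lat * 100).toLong, math.floor(lon * 100).toLong)

sc.textFile("/tmp/trips.csv")                               // placeholder path
  .map(_.split(','))
  .map(t => (toRowCol(t(10).toDouble, t(11).toDouble), 1))  // assumed pickup lon/lat columns
  .reduceByKey(_ + _)
  .map { case ((row, col), pop) => (row, col, pop) }
  .saveToCassandra("nyc", "trips", SomeColumns("row", "col", "pop")) // assumed keyspace/table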
To visualize the aggregated mosaic, I extended ArcMap with an ArcPy toolbox that fetches the content of a Cassandra table and converts it to a set of features in a FeatureClass. The resulting FeatureClass is associated with a graduated symbology to become a layer on the map as follows:

Like usual all the source code is here.

Saturday, January 17, 2015

Scala Hexagon Tessellation

I've committed myself to learning Scala in 2015, and I wish I had done that earlier after 20 years of Java (wow, that makes me sound old :-). I've placed on GitHub a simple Scala-based library to compute the row/column pair of a planar x/y value on a hexagonal grid.
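To give a flavor of what the library computes, here is a sketch assuming pointy-top hexagons in axial coordinates; the actual implementation on GitHub may use a different orientation or offset scheme:

object HexGrid {
  // Snap a planar (x, y) to the (row, col) of the containing hexagon of circumradius 'size'.
  def rowCol(x: Double, y: Double, size: Double): (Long, Long) = {
    // Fractional axial coordinates for a pointy-top grid.
    val q = (math.sqrt(3.0) / 3.0 * x - y / 3.0) / size
    val r = (2.0 / 3.0 * y) / size
    // Cube rounding to snap to the nearest hexagon center.
    var rx = math.round(q).toDouble
    var ry = math.round(-q - r).toDouble
    var rz = math.round(r).toDouble
    val dx = math.abs(rx - q)
    val dy = math.abs(ry - (-q - r))
    val dz = math.abs(rz - r)
    if (dx > dy && dx > dz) rx = -ry - rz
    else if (dy > dz) ry = -rx - rz
    else rz = -rx - ry
    // Return (row, col) as the axial (r, q) pair.
    (rz.toLong, rx.toLong)
  }
}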
I will be using that library in the following posts...
In the meantime, like usual, all the source code is available here.

Friday, January 2, 2015

Spark SQL DBF Library

Happy new year all…It’s been a while. I was crazy busy from May till mid-December of last year implementing BigData geospatial solutions at client sites all over the world. I was in Japan a couple of times, Singapore, Malaysia, and the UK, and I've lost count of the times I was in Redlands, Texas, and DC. In addition, I’ve been investing heavily in Spark and Scala. I do not recall the last time I implemented a Hadoop MapReduce job !

One of the resolutions for the new year (in addition to the usual eating right, exercising more and the never-off-the-bucket-list biking Mt Ventoux) is to blog more. One post per month as a minimum.

So…to kick off the year right, I’ve implemented a library to query DBF files using Spark SQL. With the advent of Spark 1.2, a custom relation (table) can be defined as a SchemaRDD. A sample implementation is demonstrated by Databricks’ spark-avro; since Avro files have embedded schema and data, it is relatively easy to convert them to a SchemaRDD. We in the geo community have such an “old” format that encapsulates schema and data: the DBF format. Using the Shapefile project, I was able to create an RDD using the Spark context’s Hadoop file API and an implementation of a DBFInputFormat. Then, using the DBFHeader field information, each record was mapped onto a Row to be processed by SparkSQL. This is mostly work in progress and is far from being optimized, but it works !
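The gist of the Spark 1.2 pattern is below; the DBF reading itself is hidden behind a hypothetical readDBF() placeholder (the real library goes through a DBFInputFormat and the Hadoop file API instead), and the field values are treated as strings for brevity:

import org.apache.spark.sql._

// Hypothetical placeholder for the DBF reader: header field names plus string records.
case class DBFContent(fieldNames: Seq[String], records: Seq[Seq[String]])
def readDBF(path: String): DBFContent = ???

val sqlContext = new SQLContext(sc) // sc is an existing SparkContext
val dbf = readDBF("/tmp/taxi.dbf")

// Build the schema from the DBF header and a Row per record.
val schema = StructType(dbf.fieldNames.map(name => StructField(name, StringType, nullable = true)))
val rowRDD = sc.parallelize(dbf.records).map(values => Row(values: _*))

val taxi = sqlContext.applySchema(rowRDD, schema) // a SchemaRDD
taxi.registerTempTable("taxi")
sqlContext.sql("SELECT COUNT(1) FROM taxi").collect().foreach(println)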


Like usual, all the source code can be downloaded from here. Happy new year all.

Sunday, May 4, 2014

Spatially Enabling In-Memory BigData Stores

I deeply believe that the future of BigData stores and processing will be driven by GPUs and purely based on distributed in-memory engines that are backed by something resilient to hardware failure, like HDFS.
HBase, Accumulo, Cassandra depend heavily on their in-memory capabilities for their performance. And when it comes to processing, SQL is still King….MemSQL is combining both - pretty impressive.
However, ALL lack something that is so important in today’s BigData world, and that is true spatial storage, indexing and processing of native points, lines and polygons. SpaceCurve is making great progress on that front.
A lot of smart people have taken advantage of the native lexicographical indexing of these key-value stores and used geohash to save, index, and search spatial elements, and have solved the Z-order range search. Though these are great implementations, I always thought that the end did not justify the means. There is a need for true and effective BigData spatial capabilities.
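For the curious, the geohash/Z-order trick boils down to bit-interleaving the two dimensions so that the lexicographic order of the key roughly preserves spatial locality; here is a bare-bones sketch of the idea (not taken from any of those projects):

// Interleave the bits of scaled lon/lat into a single Z-order (Morton) key.
def zOrder(lon: Double, lat: Double, bits: Int = 30): Long = {
  val x = math.min(((lon + 180.0) / 360.0 * (1L << bits)).toLong, (1L << bits) - 1)
  val y = math.min(((lat + 90.0) / 180.0 * (1L << bits)).toLong, (1L << bits) - 1)
  var z = 0L
  var i = 0
  while (i < bits) {
    z |= ((x >> i) & 1L) << (2 * i)      // even bits from longitude
    z |= ((y >> i) & 1L) << (2 * i + 1)  // odd bits from latitude
    i += 1
  }
  z
}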
I’ve been a big fan of Hazelcast for quite some time now and have always been impressed by their technology. In their latest implementation, they have added a MapReduce API, so that you can now send programs to the data - very cool !
But…like the others, they lack the spatial aspect when it comes to my world. So…here is a set of small tweaks that truly spatially enables this in-memory BigData engine. I’ve used the MapReduce API and the spatial index in an example to visualize conflict hotspots in Africa.
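A stripped-down sketch of the MapReduce wiring is below (the map name and the Event class are made up, the cells are square here for brevity where the real example snaps to hexagons, and unlike the real example I skip the reducer and just count on the client):

import com.hazelcast.core.Hazelcast
import com.hazelcast.mapreduce.{Context, Job, JobTracker, KeyValueSource, Mapper}
import scala.collection.JavaConverters._

case class Event(x: Double, y: Double)

// Emit (cellKey, 1) for every event in the distributed map.
class CellMapper extends Mapper[String, Event, String, Int] {
  override def map(key: String, value: Event, context: Context[String, Int]): Unit = {
    val cell = s"${math.floor(value.x / 100).toLong}:${math.floor(value.y / 100).toLong}"
    context.emit(cell, 1)
  }
}

object HotSpots extends App {
  val hz = Hazelcast.newHazelcastInstance()
  val events = hz.getMap[String, Event]("events") // made-up map name

  val tracker: JobTracker = hz.getJobTracker("default")
  val job: Job[String, Event] = tracker.newJob(KeyValueSource.fromMap(events))

  // With no reducer, submit() yields cell -> list of emitted ones; sum them client-side.
  val counts = job.mapper(new CellMapper).submit().get().asScala.mapValues(_.size)
  counts.foreach { case (cell, pop) => println(s"$cell -> $pop") }

  Hazelcast.shutdownAll()
}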

Like usual, all the source code can be downloaded from here.

Monday, March 3, 2014

Yet Another Temporal/Spatial Storage/Analysis of BigData

At this year’s FedUC, I presented an introduction and a practice session on spatial types in BigData. In these sessions, I demonstrated how to analyze temporal and spatial BigData in the form of Automatic Identification System (AIS) data. This post discusses an ensemble of projects that made a section of this demo possible.

The storage and subsequent processing of the data is very specific to this project, where data is stored and analyzed in a temporal and then a spatial order. Please note the order: temporal, then spatial. However, I see this pattern in a lot of the BigData projects that I have worked on. This order enables us to take advantage of the native partitioning scheme of paths in HDFS for temporal indexing, which we later augment with a “local” spatial index. So time, or more specifically an hour’s data file, can be located by traversing the HDFS file system in the form /YYYY/MM/DD/HH/UUID, where YYYY is the year, MM is the numeric month, DD is the day, HH is the hour, and UUID is a file with a unique name that holds all the data for that hour. You can imagine a process, such as GeoEvent Processor, that continuously adds data in this pattern. However, in my case, I received the data as a set of GZIP files and used the Hadoop MapReduce AISImport tool to place the data in the correct folders/files in HDFS.

Once an hour file is closed, a spatial index file is created that enables the spatial search of the data at that hour. This spatial index is based on the QuadTree model for point-specific geometry and is initiated using the AISTools. I rewrote my “ubiquitous” density MapReduce job to take into account this new spatial index, where now I can rapidly ask questions such as “What is the AIS density by unique MMSI, and what does it look like at the entry of the harbor every day at 10AM in the month of January ?” The following is a visualization of the answer in ArcMap from the Miami Port sample data.
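To make the layout concrete, here is a tiny sketch of the path building (the root folder and the formatter are illustrative, not lifted from the project):

import java.text.SimpleDateFormat
import java.util.{Date, TimeZone, UUID}

// Build the /YYYY/MM/DD/HH/UUID location for a record's timestamp.
def hourPath(root: String, epochMillis: Long): String = {
  val fmt = new SimpleDateFormat("yyyy/MM/dd/HH")
  fmt.setTimeZone(TimeZone.getTimeZone("UTC"))
  s"$root/${fmt.format(new Date(epochMillis))}/${UUID.randomUUID()}"
}

// e.g. hourPath("/ais", timestamp) -> /ais/2014/01/15/10/<uuid>
// All the records for that hour land in that file, and the sibling spatial index
// enables the spatial search within the hour.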

Like usual, all the source code can be found here.