Wednesday, August 1, 2012

Big Data, Small Data, Big Visualization


Ever since I became a Cloudera Certified Developer for Apache Hadoop, I've been walking around with a hammer with "MapReduce" written on it, looking for Big Data nails to pound. Finally, a real-world problem from a customer came to my attention where a Hadoop implementation would solve his dilemma. Given a 250GB (I know, I know, this is _not_ big) CSV data set of demographic data consisting of gender, race, age, income and of course location, and given a set of Point of Interest (POI) locations, generate a 50 mile heatmap for each demographic attribute for each of the POI locations.
Using the "traditional" GeoProcessing with Python would take way more than a couple of days to run and would generate over 850GB of raster data. What do I mean by the "traditional" way ? You load the CSV data into a GeoDatabase and then you write an ArcPy script that; for each location, generate a 50 mile buffer.  Cookie cut the demographic data based on an attribute using the buffer and pass that feature set to the statistical package for density analysis which generates a raster file. Sure, you can manually partition the process onto acquired high CPU machines, but as I said, all that has to be done manually and will still take days.

There's gotta be a better way!

Enter Hadoop and a "different" way to process the data by taking advantage of:
- The Hadoop Distributed File System's fast, splittable input streaming
- The distributed nature of Hadoop's map and reduce tasks
- The distributed cache for "join" data
- An external, fast, Java-based computational geometry library
- Producing vector data rather than raster images

The last advantage is very important. This is something I call "cooperative processing". See, people forget that there is a CPU/GPU on their client machines. If the server can produce vector data and we let the client render that data based on its capabilities, we get a way more expressive application, and the size of the data is way smaller. I will explain that in a bit.

Let me go back to the data processing. Actually, there is nothing to do. There is no need for a transform-and-load process, as the CSV data can be placed directly into an HDFS folder. The Hadoop job takes that HDFS folder as its input.

The Mapper Task - After instantiation, the 'configure' method is invoked to load the POI locations from the distributed cache, and a 50 mile buffer is generated for each POI location using the fast computational geometry library, whereupon the buffer polygons are stored in a memory-based spatial index for fast intersection lookup. The 'map' method is invoked on every record in the 250GB CSV input, where each record is tokenized for coordinates and demographic values. Using the coordinates and the prebuilt spatial index, we can find the associated POI locations. Each 50 mile buffer is logically divided into kernel cells. Knowing the POI location, we can mathematically determine the relative kernel cell. We emit the combination of the POI location and the demographic value as the map key, and the relative kernel cell as the map value.

map(k1, v1) -> list(k2, v2)
where:
k1 = lineno
v1 = CSV text
k2 = POI location, demographic
v2 = cellx, celly, 1

Again, taking advantage of the powerful shuffle and sort capability of Hadoop on the POI Location/demographic key, I am ensuring that a reduce task will receive all the cells for a POI location/demographic combination.

The Reduce Task - For a POI location/demographic, the reduce method is invoked with the list of its associated cells. Cells with the same cellx,celly values are aggregated to produce a new list. We compose a JSON document from the new list, and we emit the string representation of the JSON document using a custom output formatter in which we override the 'generateFileNameForKeyValue' method to return something of the form "poi-location_demographic.json".

reduce(k2, list(v2)) -> list(k3, v3)
where:
k2: POI location, demographic
v2: cellx, celly, 1
k3: POI location, demographic
v3: JSON text

I was able to validate my progress by running MRUnit tests against my codebase to ensure the soundness of my map and reduce logic.

I packaged my map/reduce code and the geometry library into a jar, and I was ready to test it on the 250GB CSV.

But where to run this?

Enter Amazon Elastic MapReduce. With a virtual swipe of a credit card, I was able to start up 10 large instances, passing the job a reference to my data and my jar in S3. 30 minutes later, a set of JSON files was produced in S3, occupying 238MB of space! Pretty cool, eh? Compare that to days of execution time and 850GB of rasters. What is even more exciting: after a set of trials and errors and density kernel adjustments, I looked up my account balance and I owed Amazon $37.67 (it would cost more to process a reimbursement request)!

Next comes the fun part: how to represent this JSON data for a particular POI location/demographic? Enter the ArcGIS API for Flex with its amazing extensibility, and the Flash Player with its GPU-enhanced vector graphics and bitmap capabilities. See, by using a gradient fill on a drawn circle and the screen blending mode when placing that circle onto a bitmap, a set of close points will dissolve into a heatpoint. So, by taking advantage of this collaborative process between the server and the client, where the server generates a weighted point and lets the client rasterize that point based on its weight, you get an expressive, dynamic application.

Let's push the visualization further, to the coolest platform... the iPad. Flex code can be cross-compiled to run natively on an iOS device. Let's push a bit more... 3D. Taking advantage of the Stage3D capability, the heatmap vector data can be downloaded at runtime and dynamically morphed into a heightmap and a texture that can be draped over that heightmap. And here is the result... I call it "Heatmap in the Cloud". You can download the pdf of this Esri UC 2012 presentation from here. Have fun.
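PS: The heatpoint drawing described above boils down to something like this sketch; the weight-to-alpha mapping and the radius are hypothetical choices for illustration:

// Draw a radially faded circle for a weighted point and composite it onto
// a BitmapData with the SCREEN blend mode, so nearby points visually
// "dissolve" into a heatpoint.
private function drawHeatPoint(
    target:BitmapData,
    x:Number, y:Number,
    weight:Number,      // hypothetical normalized cell weight in 0..1
    radius:Number = 16):void
{
    const shape:Shape = new Shape();
    const matrix:Matrix = new Matrix();
    matrix.createGradientBox(radius * 2, radius * 2, 0, x - radius, y - radius);
    shape.graphics.beginGradientFill(
        GradientType.RADIAL,
        [0xFF0000, 0xFF0000], // constant hue...
        [weight, 0.0],        // ...with alpha fading out to transparent
        [0, 255],
        matrix);
    shape.graphics.drawCircle(x, y, radius);
    shape.graphics.endFill();
    // The screen blending is what makes overlapping points merge and glow.
    target.draw(shape, null, null, BlendMode.SCREEN);
}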

Thursday, March 22, 2012

Enter the Fifth Dimension; Sentiment Map Rendering

We have been plotting x, y, z and time for a while now. But this is all syntax; Tim Berners-Lee believes that the future of the web is semantics, and I could not agree more. You have a glimpse of it today when you use, for example, Apple's Siri. In our GIS space, MetaCarta at one time had a great appliance/service that inferred location from the semantics of a sentence. For example, "Joe ate a Chicago style pizza in downtown Boston" would return the latitude and longitude of downtown Boston and would ignore "Chicago" as a location. Impressive! With the explosion of social media and the infusion of geo-locations as 'fields' in our records, we are reduced to plotting locations with cute popups. Boring!

We humans are quite the emotional animals (some more than others, especially if these humans come from the Middle East :-) and these emotions are quite visible now within our tweets and Facebook posts. So what if I could plot these sentiments? There could be something to "see" in these maps. Enter LinguaSys. I met one of their senior scientists by accident when sheltering from a rain storm. We ended up talking about the weather, this and that, and of course the question "What do you do?" had to eventually come up. "I do sentiment analysis", he replied. "Wow, that is exactly what I was researching before leaving", I replied. "I am looking for an 'engine' that I can pass, say, a set of tweets, and it will return a sentiment index for each tweet." "Not sure about tweets", he replied, "as we deal with entire documents, but I am sure we can adjust our engine to such a process". The rain stopped, we exchanged contact info and parted on the promise that we would stay in touch.

A couple of months later, a very good customer with receptive avant-garde ideas needed something 'new'. I proposed sentiment index mapping based on social media to highlight areas of interest. The following is a derivative of this work based on a totally different interest: TSA approval or disapproval tweets. Working with LinguaSys, I was handed a set of XML files, one of which contained the tweets and associated sentiment indexes, for example:
<text>RT @msnbc_travel: Good news for elderly fliers (75 and above): TSA announces pilot program the relaxes security procedures http:\/\/t.co\/WTaM9AgW</text>
<disapprovalFactor factor="-0.8" reason="indicatorOfSatisfaction"/>
Note that the factor is negative to indicate a level of satisfaction. The internal factor ranges from -1 (totally satisfied) to 1 (totally pissed off :-). Easy to parse and to associate with a range-based renderer. The second file contains the "locations" of the tweeters. Notice that I put location between quotes: some values were great, like an exact latitude/longitude. Others were like "Boston, MA". And some (in fact most) were like "Earth", or "Best Location, NYC!" or my favorite, "Look behind u… Boo!". There was a "sense" of location in these that I think would have given MetaCarta a run for its money. This is where being a unix CLI geek with tools like awk, grep and sed comes to the rescue to massage the data. I downloaded a cities.csv to cross-reference a city/state name to a location, and now I can plot the locatable tweets on a map.

Using the latest built-in capabilities of the ArcGIS API for Flex, such as clustering with flares, info window rendering on clicks, and custom function referencing in symbols, I was able to quickly build an application to display the sentiments. You can see the application in action here. Hover over a cluster to flare it, and click on a flare element to see its details. To make things more understandable, I reversed the factor value displayed in the info window. Warning: These are real tweets with sometimes very offensive language, so… do not call HR on me, ok? And like usual, you can see the source code here.
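PS: The range-based renderer mentioned above boils down to something like this sketch; the field name, break values and symbols here are made up for illustration, so adjust to your attribute schema:

// Hypothetical range-based renderer on the tweet 'factor' attribute:
// negative factors (satisfied) render green, positive ones render red.
const renderer:ClassBreaksRenderer = new ClassBreaksRenderer();
renderer.field = "factor"; // hypothetical attribute name on each graphic
renderer.infos = [
    new ClassBreakInfo(new SimpleMarkerSymbol(
        SimpleMarkerSymbol.STYLE_CIRCLE, 10, 0x00CC00), -1.0, 0.0),
    new ClassBreakInfo(new SimpleMarkerSymbol(
        SimpleMarkerSymbol.STYLE_CIRCLE, 10, 0xCC0000), 0.0, 1.0)
];
graphicsLayer.renderer = renderer;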
BTW, Check out GeoTagger for ArcGIS Runtime to see MetaCarta in action.

Tuesday, March 20, 2012

DnD File using HTML5 into Flex Web App

So… I always wanted to drag and drop an external file into a Flex application running in the browser. Well… you can use the FileReference API to browse and load a file, but I really wanted to DnD. Unfortunately, this is not allowed due to security constraints of the Flash Player. However, using the latest proposed W3C DnD API and File API together with the AS3 ExternalInterface API, I should be able to accomplish what I want. So, here is the content of a simple xml file that I would like to DnD and render on a map:
<markers>
    <marker x="45" y="45" label="M 1" info="This is M1"/>
    <marker x="-45" y="-45" label="M 2" info="This is M2"/>
</markers>
I modified the HTML wrapper to use the JavaScript DnD and File APIs (if available), listening for "dragenter", "dragover" and "drop" events on the Flash Player container.
var DnD = {
loadHandler:function () {
  var dropContainer = document.getElementById("DnDApp");
  dropContainer.addEventListener("dragenter", function (event) {
    event.stopPropagation();
    event.preventDefault();
  }, false);
  dropContainer.addEventListener("dragover", function (event) {
    event.stopPropagation();
    event.preventDefault();
    event.dataTransfer.dropEffect = 'copy';
  }, false);
  dropContainer.addEventListener("drop", function (event) {
    event.stopPropagation();
    event.preventDefault();
    var files = event.dataTransfer.files, len = files.length;
    for (var i = 0; i < len; i++) {
      var file = files[i];
      var fileReader = new FileReader();
      fileReader.onload = function (event) {
        dndApp.drop(event.target.result);
      }
      fileReader.readAsText(file);
    }
  }, false);
}
};
if( window.File && window.FileReader){
  window.addEventListener("load", DnD.loadHandler, false);
} else {
  alert('Your browser does not support File/FileReader !');
}
On dragenter, I stop the event propagation and prevent the default behavior. On dragover, I do the same and, in addition, I update the drop effect to show a "+" icon over the drop area. And finally, on drop, I iterate over the list of dropped files, whereupon I read each file as text using the FileReader API. When a file is read (remember, this is all asynchronous), I hand over the content to the Flex application. On creation completion of the Flex application, the "drop" callback is registered using the ExternalInterface, enabling the host JavaScript wrapper to invoke the internal dropHandler function.
private function this_creationCompleteHandler(event:FlexEvent):void
{
  ExternalInterface.call("setObjectID",
    ExternalInterface.objectID);
  ExternalInterface.addCallback("drop", dropHandler);
}

private function dropHandler(text:String):void
{
  const doc:XML = new XML(text);
  for each (var markerXML:XML in doc.marker)
  {
    const mapPoint:MapPoint = new WebMercatorMapPoint(
      markerXML.@x,
      markerXML.@y);
    arrcol.addItem(new Graphic(
      mapPoint,
      null, {
        label: markerXML.@label,
        info: markerXML.@info }));
  }
}
The latter accepts a String argument that is converted into an XML instance, and using E4X, each child marker element is converted to a Graphic that is added to a graphics layer's graphic provider array collection. Cool, eh? Note that the graphics layer has its infoWindowRenderer property defined, in such a way that if you click on any of its graphics, an info window will be displayed whose content is an instance of the defined component. Like usual, all the source code is available here. Have fun DnD'ing.
You can see the application in action by downloading the markers.xml file and dragging and dropping it onto the application running here.
PS: As of this writing, the two JS APIs work in Google Chrome 16 and later and Firefox 3.6 and later; Safari 6 will support the standard File API, as will our favorite (Not!) Internet Explorer 10 (Preview 2+). One of these days, I will come back to this and use something like Dojo DnD to abstract me from all this - that will be a nice post!

Monday, March 19, 2012

Offline TPK viewer in a Flex Mobile Application

In this post I will demonstrate how to write a Flex mobile application that displays a map with a tiled layer, where the tile source is the unpacked content of a tpk file. A tile package, or tpk, is generated using ArcMap and is a zip file that contains a couple of configuration files holding tile metadata and a set of ISAM files, where the variable-length files contain the tile images.

The mobile application enables a user to download one or more tpk files from a web server to local storage and disconnect from the "network" for offline viewing. When the user selects a local tpk, it is unpacked, and a map renders a tiled layer whose tile source is the locally unpacked images, enabling the user to zoom in and out and pan over the available levels as defined in the metadata file. This is possible due to the availability of the ByteArray class in AS3 and the capability to read and write binary files. The application is based on the Holistic framework and is linked with the fast airzip library and the ArcGIS API for Flex.

The TPKLayer in the map view is a subclass of the Esri TiledMapServiceLayer class, where the getTileURL function is overridden to return a URLRequest instance with a custom 'data' scheme. This 'data' scheme informs the super class that the tile data is in a byte array referenced by the URLRequest instance's 'data' property. In the TPKLayer, getTileURL is invoked with a level, a row and a column value that are used to seek, using the File API, to a specific location and read the binary image at that location into a byte array.

Using an iOS deployment, a user can drag and drop a tpk file onto the application's iTunes document folder for a later sync. This is possible due to the UIFileSharingEnabled declaration in the xml application manifest (CacheApp-app.xml). Like usual, all the source code is available, and you can download it from here. BTW, I was able to build the app using FlashBuilder 4.6 and the ArcGIS API 2.5 and upcoming 3.0 for Flex without issues. Have fun and keep me posted on your implementations.
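PS: Here is a condensed sketch of the TPKLayer described above; the bundle path and offset arithmetic are reduced to hypothetical helpers, as the real ISAM bookkeeping is beyond the scope of this post:

public class TPKLayer extends TiledMapServiceLayer
{
    override protected function getTileURL(
        level:Number, row:Number, col:Number):URLRequest
    {
        // The custom 'data' scheme tells the super class that the tile
        // bytes are carried in the URLRequest 'data' property.
        const urlRequest:URLRequest = new URLRequest("data:");
        urlRequest.data = readTileBytes(level, row, col);
        return urlRequest;
    }

    // Seek into the unpacked bundle on local storage and read the binary
    // tile image into a byte array - the *For helpers are hypothetical.
    private function readTileBytes(level:Number, row:Number, col:Number):ByteArray
    {
        const bytes:ByteArray = new ByteArray();
        const file:File = File.applicationStorageDirectory
            .resolvePath(bundlePathFor(level, row, col));
        const fileStream:FileStream = new FileStream();
        fileStream.open(file, FileMode.READ);
        fileStream.position = tileOffsetFor(level, row, col);
        fileStream.readBytes(bytes, 0, tileSizeFor(level, row, col));
        fileStream.close();
        return bytes;
    }
}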

Tuesday, October 11, 2011

Yet Another Micro Architecture for Flex RIA and RMA

The Holistic micro-architecture framework for building RIAs and RMAs (Rich Mobile Applications) "borrows" from Robotlegs, PureMVC, Cairngorm, Swiz, AS3 Signals and the Spring Framework. Each has one or two things that I really like about it, but what I wanted was to bring all these things into one simple, very lightweight framework that enables me to quickly, and more importantly methodically, build mobile, web and desktop applications. The Holistic API is designed to work on top of either the Flex web or mobile framework and takes advantage of the Flex compiler, environment and lifecycle. Several "design patterns" are utilized in the API, such as Model-View-Controller, Loose Coupling, Locator, Usage of Interfaces, Delegation, Dependency Injection, Separation of Concerns and Inversion of Control; not sure if some of the latter are pure design patterns per GoF, but bear with me :-) One thing that is heavily relied on is programming to convention versus programming to configuration.

MVC

Looking at the above diagram, the state of an application resides in the Model. The model is a set of non-visual properties, where some properties are annotated with the [Bindable] metadata, such that a change event is dispatched whenever the property is mutated. This [Bindable] metadata is an indication that the property is represented by a view.
public class Model {
    [Bindable]
    public var text:String;
}
Views are subclasses of UIComponents and are bound using curly braces (I call them 'magic' braces) to the Model to represent the state of the application based on the view capabilities.
<s:Label text="{model.text}"/>
In the above example, this Label instance is representing the model 'text' property and any changes to the 'text' value will be auto-magically shown in the label location on the screen. For a more complex example; a property that is a list of features can be bound to a map view and the features will be drawn as points on the map. At the same time, that same list can be bound to a data grid where each feature will be represented as a row in that grid. A model can have multiple view representations in an application.
Now, if a view wants to modify the model, it does so using a controller. A view is not, I repeat, is not allowed to mutate the model. Only a controller is allowed to mutate the model. This is very important as a convention. All the logic to mutate the model should reside in a controller, even if it means that the logic is a single-line implementation. Trust me on this: in the beginning of development this might be a single line, but along the application development process it will get more elaborate per the application requirements. You will be tempted by the programming devils to "touch" the model from the view, and you will come to regret it later on. So be resilient and do the right thing!
Enough preaching. So how does a view tell a controller to mutate the model? Simple: using signals. A signal is a glorified event with integrated event dispatching, enabling loose coupling between the view and the controller.
The following is the signature of the static 'send' function in the Signal class:
public static function send(type:String,...args):void
The first required string argument is the signal type. This string is very important in our convention over configuration design as we will see later on. A signal can optionally carry additional information such as in the following example:
<s:TextInput id="ti"/>
<s:Button click="Signal.send('submit',ti.text)" label="{model.text}"/>
When the user clicks on the Button instance, a signal of type 'submit' is sent along with the text that was entered in the TextInput instance.
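By the way, the Signal class itself can be reduced to something like the following sketch; this is an illustrative reduction, not the actual Holistic implementation:

// A shared dispatcher plus a trivial Event subclass that carries the
// optional arguments along with the signal type.
public class Signal
{
    private static const dispatcher:EventDispatcher = new EventDispatcher();

    public static function send(type:String, ...args):void
    {
        dispatcher.dispatchEvent(new SignalEvent(type, args));
    }

    // Something like this is used when wiring [Signal] annotated handlers.
    public static function addHandler(type:String, handler:Function):void
    {
        dispatcher.addEventListener(type, function(event:SignalEvent):void
        {
            handler.apply(null, event.args); // spread the carried arguments
        });
    }
}

public class SignalEvent extends Event
{
    public var args:Array;

    public function SignalEvent(type:String, args:Array)
    {
        super(type);
        this.args = args;
    }
}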
Now, I need 'something' to receive that signal and act on it. This something is a controller. Again, relying on convention over configuration, I will create a controller named 'SubmitController'. Note the prefix 'Submit': it is the same as the signal type. Again, this is convention over configuration working in my favor by way of pseudo-self-documenting code: I can look at my list of controllers in my IDE and tell immediately from the names what signal is handled by what class. Yes, I will have a lot of controllers, but this divide and conquer approach enables each one to do one thing and do it very well, and separates my concerns.
In the controller class implementation, to handle the 'submit' signal, I must have a function named 'submit' that accepts one argument of type String, like the following:
[Signal]
public function submit(text:String):void
{
  ...
}
Note the [Signal] metadata on the function declaration. See, as a Flex developer, you are already familiar with and using the built-in annotations such as [Bindable]. But Flex enables a developer to create his/her own metadata that will be attached to the class in question for introspection. Cool, eh? Back to signals: one more example to solidify the association of signals to controllers - if you send a signal of the form:
Signal.send('foo', 123, 'text', new Date());
To handle that signal, you should have the following controller declaration:
public class FooController {
    [Signal]
    public function foo( nume:Number, text:String, now:Date):void {
      ...
    }
}
Note that the order of the handler function arguments should match the order and types of the signal arguments: 123 -> nume, 'text' -> text, new Date() -> now. What makes this pretty neat is that the handler is independent of the hardwired signal dispatching mechanism: it is just a function that can be unit tested. More on that later.
Applications need to communicate with the outside world; say, for example, you want to locate an address using an in-the-cloud locator service. Controllers do not communicate with the outside world; they delegate that external communication to a service. That service will use the correct protocol and payload format to talk to the external service, be it SOAP, REST or a RemoteService in XML, JSON or AMF. To enable different implementations of these protocols, an interface is declared and is injected into the controller for usage, as follows:
public class LocateController {

    [Inject]
    public var locateService:ILocateService;

    [Signal]
    public function locate(address:String):void
    {
        locateService.locate(address,
            new AsyncResponder(resultHandler, faultHandler));
    }
}
The locateService variable is assigned at runtime using inversion of control, and when the 'locate' signal is sent, it is handled by the 'locate' function, which delegates it to the ILocateService implementation. The [Inject] metadata is good for more than injecting service implementations. Here is another usage, to overcome AS3 language constraints and make your code more testable. Say you start a project and signal A is sent; you go and write Controller A to handle the signal. Now you write another controller B to handle signal B (remember SoC :-) but you find that Controllers A and B will share some code. Since you are a good OO developer, you create a super class S that holds the common code and make Controller A and Controller B subclass S. You are feeling pretty good; onto Controller C to handle signal C. But wait a minute, some code from Controller B can be shared with Controller C. Ok, you create a super class D and subclass it. But wait a minute... AS3 is a single inheritance model, which means Controller B cannot subclass both S and D at the same time. This is where composition is better than inheritance: now I can move the common code into class S and class D and inject those classes into controllers A, B and C.
public class AController {
    [Inject]
    public var refS:ClassS;

    [Signal]
    public function doA(val:*):void {
        refS.doS(val);
    }
}

public class BController {
    [Inject]
    public var refD:ClassD;
    
    [Inject]
    public var refS:ClassS;

    [Signal]
    public function doB(val:*):void {
        refS.doS(val);
        refD.doD(val);
    }
}

public class CController {
    [Inject]
    public var refD:ClassD;

    [Signal]
    public function doC(val:*):void {
        refD.doD(val);
    }
}
Cool? Onward. Something _has_ to wire all these pieces together, and that something is a Registry instance that is declared in the main application mxml as follows:
<fx:Declarations>
    <h:Registry id="registry">
        ...
   </h:Registry>
</fx:Declarations>
The children of the Registry are all the application controllers and all injectable delegates and services. So using the above example:
<h:Registry id="registry">
    <m:Model/>
    <c:ClassS/>
    <c:ClassD/>
    <s:AController/>
    <s:BController/>
    <s:CController/>
</h:Registry>
Taking advantage of the declarative nature of Flex, I declare the registry children, which get translated into ActionScript instantiations. Upon creation completion, the registry introspects each child for [Inject] metadata and invokes the setters with instances of the appropriate types. Next, the [Signal] metadata are located, and a proxy object is created wrapping each annotated function as an event listener to the signals (remember, signals are nothing more than glorified events). All this introspection by the Registry is performed using the as3-commons-reflect library (url). Going back to programming to interfaces: when the Registry holds multiple implementations of an interface, how is the injection resolved? Well, by default the first implementation is injected. But what if I want a specific implementation? Here is the solution:

<h:Registry>
    <c:RestService/>
    <c:SoapService id="soapService"/>
    <c:FooController/>
    <c:BarController/>
</h:Registry>

[Register(name="restService")]
public class RestService implements IService {
  ...
}

public class FooController {
  [Inject]
  public var restService:IService;
  ...
}

public class BarController {
  [Inject(name="soapService")]
  public var service:IService;
  ...
}

There is a lot packed into this example and there are a lot of conventions, so stay with me. The registry is declared with a couple of services and controllers. Note that the SoapService is declared with the "soapService" id. This enables the BarController to be injected with that specific implementation of the IService interface via the name attribute in its [Inject] metadata. Next, the RestService is registered with the Registry under the name "restService", as declared in the class metadata. Now (magic time), the FooController is injected with the RestService instance despite the absence of a name attribute in its [Inject] metadata, because the _variable_ name is the same as the registered name. Pretty powerful, I know, mind blowing!
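To demystify the magic a bit, here is a simplified sketch of what the Registry injection pass could look like; the instancesByName map and the firstAssignableInstance fallback are hypothetical stand-ins for the real bookkeeping:

// For each field annotated with [Inject], resolve an instance by explicit
// name, by variable name, or by assignable type - in that order.
private function injectInto(child:Object):void
{
    const type:Type = Type.forInstance(child);
    for each (var field:Field in type.fields)
    {
        const metadataArray:Array = field.getMetadata("Inject");
        if (metadataArray && metadataArray.length)
        {
            const metadata:Metadata = metadataArray[0];
            const name:String = metadata.hasArgumentWithKey("name")
                ? metadata.getArgument("name").value
                : field.name;
            child[field.name] = instancesByName[name]
                || firstAssignableInstance(field.type.clazz);
        }
    }
}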

Ok, last but not least, unit testing. Actually, if you do TDD, that should come first. The Holistic framework favors simple interfaces, classes and functions, and with the built-in unit testing capabilities and the code coverage add-on for FlashBuilder, there is no excuse not to test your code. Whole books and articles have been written about Flex unit testing, so google them.
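For example, here is a minimal FlexUnit-style sketch that exercises the LocateController from earlier; the fake service is hand-rolled for illustration and assumes ILocateService.locate takes an address and an IResponder, per the controller above:

import org.flexunit.asserts.assertEquals;

public class LocateControllerTest
{
    [Test]
    public function locateDelegatesToService():void
    {
        // A controller handler is just a function: inject a fake by hand
        // and invoke the handler directly, no Registry required.
        const fake:FakeLocateService = new FakeLocateService();
        const controller:LocateController = new LocateController();
        controller.locateService = fake;
        controller.locate("380 New York St, Redlands, CA");
        assertEquals("380 New York St, Redlands, CA", fake.lastAddress);
    }
}

// Hand-rolled fake implementation of the ILocateService interface.
class FakeLocateService implements ILocateService
{
    public var lastAddress:String;

    public function locate(address:String, responder:IResponder):void
    {
        lastAddress = address; // record, do not hit the network
    }
}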

Like usual, all the source code is available here. I drink my own champagne: what you will find is the Flex unit test project that includes the Holistic library.

Have fun.

Update: I created a very simple project that demonstrates the usage of the Holistic framework. As I said, it is a simple application that displays a data grid that is bound to a list property in the model. Below the grid is a form that enables you to enter a first name and last name. When you click the submit button, a signal is sent with the entered info. A handler receives the info and delegates it to a service that uppercases the values and adds them to the list.

Map Tiles For Offline Usage Using ArcGIS API for Flex

So… Google introduced an offline feature to their mobile mapping application, enabling you to view map tiles when you are disconnected from the network. This is pretty neat and very useful now that local storage is so "abundant" on mobile devices. In this post, I would like to show you how to use the mobile device's local storage for offline tile retrieval using the ArcGIS API for Flex. When we built the API, we always had the vision of extensibility, to enable people to do things that we did not think about. One of them was to enable control of the URL from where the tiles are retrieved. A while back, I did such an implementation using Amazon S3, so I rehashed that code using the Adobe AIR File capabilities.

The demo application that I am featuring here operates in two modes: an online mode and an offline mode. In the online mode, I keep a set of all downloaded tiles for a particular viewing session. Before I go offline, I download the map server metadata and all the visited tiles to my device's local storage. The AIR runtime can notify an application when the network connectivity changes. This enables me to put the application in offline mode, and when I start panning and zooming, rather than retrieving the tiles from the cloud, I retrieve them from my local storage. Pretty neat, eh? So here is the code:
public class OfflineTiledMapServiceLayer extends ArcGISTiledMapServiceLayer
{
  override protected function getTileURL(
     level:Number,
     row:Number,
     col:Number
  ):URLRequest
  {
    var urlRequest:URLRequest;

    if (Model.instance.isOffline)
    {
      urlRequest = new URLRequest(
        "app-storage:/l" + level + "r" + row + "c" + col);
    }
    else
    {
      urlRequest = super.getTileURL(level, row, col);
      if (urlRequest.url in Model.instance.cacheItemDict === false)
      {
        const item:CacheItem = new CacheItem();
        item.urlRequest = urlRequest;
        item.level = level;
        item.row = row;
        item.col = col;
        Model.instance.cacheItemDict[urlRequest.url] = item;
      }
    }

    return urlRequest;
  }
}
The OfflineTiledMapServiceLayer extends the ArcGISTiledMapServiceLayer class and overrides the getTileURL function. This function is invoked to get the tile URL for a particular map level, row and column. If the application mode is offline, then the "app-storage" URL scheme is used and the path is of the form "l"+level+"r"+row+"c"+column. If the application mode is online, then super.getTileURL is invoked and we keep a set of the visited URLs. Using the application settings view, the user has the option to download the map server metadata, iterate over the visited tiles and save the bitmap images to the local storage as defined by File.applicationStorageDirectory. The AIR runtime has the capability to notify the application of a network change. When this occurs, I ping a URL (www.google.com) using HTTPService to determine whether this is a connect or a disconnect change, thus putting the application in an online or offline state.
The application can be written in such a way that any visited tile is automatically saved to the local storage; I leave that as an exercise for the reader :-)
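As a hint for that exercise, here is a rough sketch of downloading a visited tile and persisting it under the same "l{level}r{row}c{col}" naming convention (error handling omitted):

// Download a visited tile and write its bytes to local storage so the
// offline branch of getTileURL can find it via the app-storage: scheme.
private function saveTile(item:CacheItem):void
{
    const loader:URLLoader = new URLLoader();
    loader.dataFormat = URLLoaderDataFormat.BINARY;
    loader.addEventListener(Event.COMPLETE, function(event:Event):void
    {
        const file:File = File.applicationStorageDirectory.resolvePath(
            "l" + item.level + "r" + item.row + "c" + item.col);
        const fileStream:FileStream = new FileStream();
        fileStream.open(file, FileMode.WRITE);
        fileStream.writeBytes(loader.data as ByteArray);
        fileStream.close();
    });
    loader.load(item.urlRequest);
}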
Like usual, all the source code is available here.
NOTE: This sample application is for demonstration purposes ONLY and is intended to be used with your own legally cacheable tiles - I am not a lawyer, but I am pretty sure that it is not legal to save locally the ArcGIS.com accessible tiles.

Monday, September 12, 2011

Introspective Event Handling For Flex SkinnableComponents

So the Flex Spark architecture promotes the separation of a component's model/controller from its view, or in Flex lingo, its skin. Here is the PITA process that I go through when creating a skinnable component manually:
  1. I create an ActionScript class (the host component) that subclasses SkinnableComponent.
  2. I define and annotate the skin parts. I try to define the skin part types to be as high as possible in the class hierarchy. What I mean by that is instead of defining a part to be of type Button, I make it of type ButtonBase.
  3. I override the partAdded function and add all the event listeners for each part, as event handling should be done in the host component not in the skin.
  4. I override the partRemoved function and remove all the added event listeners, as I want to be a “good citizen”.
  5. I create a subclass of Skin and associate it with the host component.
  6. I add the skin parts and any graphic elements to make it “pretty”.
  7. I “ClassReference” the skin to its host component as the default skin in the main application stylesheet.
  8. I implement the content of the event listeners.
  9. Done, to the next skinnable component.
Told you it was a PITA! So here is what a very simple skinnable component looks like:
package com.esri.views {
import flash.events.MouseEvent;
import mx.controls.Alert;
import spark.components.supportClasses.ButtonBase;
import spark.components.supportClasses.SkinnableComponent;

public class MySkinnableComponent extends SkinnableComponent{
    [SkinPart]
    public var myPart:ButtonBase;

    public function MySkinnableComponent(){
    }

    override protected function partAdded(partName:String, instance:Object):void {
      super.partAdded(partName, instance);
      if( instance === myPart) {
        myPart.addEventListener(MouseEvent.CLICK,myPart_clickHandler);
      }
    }

    override protected function partRemoved(partName:String, instance:Object):void {
      super.partRemoved(partName, instance);
      if( instance === myPart) {
        myPart.removeEventListener(MouseEvent.CLICK,myPart_clickHandler);
      }
    }

    public function myPart_clickHandler(event:MouseEvent):void {
      Alert.show('myPart_clickHandler');
    }
}
}
Pretty, eh? When programming, I do believe in DRY, and if something is "boilerplate", then it should be "templated". In the above, what is really the PITA is the monkey-coding of adding and removing event listeners for each added and removed part. Talk about repeating yourself! What if we could automate that process with convention and very minimal configuration? This would enable me to focus on the fun part, which is the skinning and styling, and on the money-making part, which is the logic. Now, please note my event handlers:
	public function myPart_clickHandler(event:MouseEvent):void
This naming convention says a lot: this is an event handler for a part named "myPart", handling the "click" event whenever it is dispatched. Cool, eh? See, using this convention, a colleague can look at this "self-documented" function and figure out what is going on at that line of code. So how do we make this set of functions with this convention be automagically "hooked" and discovered by the running application? Enter metadata! So with minimal configuration, I can now have:
	[SkinPartEventHandler]
	public function myPart_clickHandler(event:MouseEvent):void
The discovery and handling of these functions can now be done for any skinnable component in a templated way by overriding the partAdded function using the amazing as3-commons-reflect reflection library:
override protected function partAdded(
  partName:String,
  instance:Object
  ):void
{
  super.partAdded(partName, instance);
  // m_type is the cached as3-commons-reflect Type that describes this component.
  for each (var method:Method in m_type.methods){
    const metadataArray:Array = method.getMetadata("SkinPartEventHandler");
    if (metadataArray && metadataArray.length){
      const metadata:Metadata = metadataArray[0];
      const tokens:Array = method.name.split("_");
      const localName:String = tokens[0];
      if (localName === partName){
        const eventHandler:String = tokens[1];
        const eventType:String = eventHandler.substr(0,
             eventHandler.indexOf("Handler"));
        instance.addEventListener(eventType, this[method.name]);
      }
    }
  }
}
So what is happening here? As each skin part is added, we look for all the methods in this class that are annotated with SkinPartEventHandler. Based on the agreed convention, the name of each matching method can be split into two tokens using the underscore character as a separator. If the first split token matches the added part name, then we can get the event type from the second token: it is the string preceding the 'Handler' text. So now, we can add the matching method as a listener on the added instance for that event type. Cool? I think so too! Come to think of it, all event handling in Flash/Flex should be done with metadata and convention. Oh well! Here is a FlashBuilder project that you can download to see how this is implemented and for you to DRY.
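And to stay a "good citizen" per step 4, the partRemoved override mirrors the same discovery logic, roughly like so:

override protected function partRemoved(
  partName:String,
  instance:Object
  ):void
{
  // Same convention-based discovery, removing the listeners this time.
  for each (var method:Method in m_type.methods){
    const metadataArray:Array = method.getMetadata("SkinPartEventHandler");
    if (metadataArray && metadataArray.length){
      const tokens:Array = method.name.split("_");
      if (tokens[0] === partName){
        const eventHandler:String = tokens[1];
        const eventType:String = eventHandler.substr(0,
             eventHandler.indexOf("Handler"));
        instance.removeEventListener(eventType, this[method.name]);
      }
    }
  }
  super.partRemoved(partName, instance);
}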

I am leaving the MOST important part for last: make sure to add "-keep-as3-metadata+=SkinPartEventHandler" to your "Additional compiler arguments" in the "Flex Compiler" section of your project properties, or else this special metadata will not be compiled into the class definition by default.