Helper class for NSNotificationCenter

Last weekend I tried to modify one of my apps to use NSNotificationCenter instead of complicated custom code to notify more than one object about changes in data. For this purpose I wrote a simple helper class.

Header:

#import <Foundation/Foundation.h>
#import "NotificationDelegate.h"

@interface NotificationHelper : NSObject

+ (void)pushNotification:(NSString*)notification WithObject:(id)object;
+ (void)registerForNotification:(NSString*)notification WithDelegate:(id)delegate;
+ (void)unregisterForNotification:(id)delegate;

@end

Source:

#import "NotificationHelper.h"

@implementation NotificationHelper

+ (void)pushNotification:(NSString*)notification WithObject:(id)object
{
     [[NSNotificationCenter defaultCenter] postNotificationName:notification object:object];
}

+ (void)registerForNotification:(NSString*)notification WithDelegate:(id)delegate
{
     [[NSNotificationCenter defaultCenter] addObserver:delegate selector:@selector(reactOnNotification:) 
           name:notification object:nil];
}

+ (void)unregisterForNotification:(id)delegate
{
     [[NSNotificationCenter defaultCenter] removeObserver:delegate];
}

@end

and the NotificationDelegate protocol:

#import <Foundation/Foundation.h>

@protocol NotificationDelegate

- (void)reactOnNotification:(NSNotification*)notification;

@end

Usage is pretty simple. All receivers should implement NotificationDelegate and register for notifications with a call to NotificationHelper. The code that should be executed when a message is received goes inside reactOnNotification:(NSNotification*)notification. Message senders should call NotificationHelper’s pushNotification when they have a message.
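A minimal usage sketch ( the class names and the notification name here are illustrative, not from the sample app; ARC assumed ):

```objc
// Receiver: implements NotificationDelegate and registers itself.
@interface DataViewController : UIViewController <NotificationDelegate>
@end

@implementation DataViewController

- (void)viewDidLoad
{
    [super viewDidLoad];
    [NotificationHelper registerForNotification:@"DataChanged" WithDelegate:self];
}

// Called by NSNotificationCenter whenever "DataChanged" is posted.
- (void)reactOnNotification:(NSNotification*)notification
{
    NSLog(@"Data changed: %@", [notification object]);
}

- (void)dealloc
{
    // Always unregister, otherwise the center keeps a dangling observer.
    [NotificationHelper unregisterForNotification:self];
}

@end
```

And somewhere in a sender:

```objc
[NotificationHelper pushNotification:@"DataChanged" WithObject:newData];
```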

I pushed a sample application to my GitHub.

Enjoy.

First steps with Jade, Node template engine

While experimenting with Jade I tried to create the simplest possible server / client code for serving a simple HTML file generated with Jade. The result is two examples that I will try to explain briefly: one using the server side to generate HTML and a second using the client side to generate HTML.

The client side example uses Node as a server of Jade template files, which are fetched by Ajax on the client side, compiled, rendered and added to the HTML. For the server I used a modified version of Simple static file HTTP server with Node.js. The only change is one extra if branch for files with the .jade extension, which will look inside the jade sub directory.

    } else if (filename.match(".jade$")) {
        contentType = "text/plain";
        pathToRead = "jade/" + filename;
    }

The magic happens on the client side during load of the HTML:

    $(document).ready(function() {
        $.ajax({url: "index.jade", success:function(data) {
            var fn = jade.compile(data);
            var html = fn({});
            document.write(html);
        }});
    });
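For completeness, here is what a minimal index.jade fetched by the code above could look like ( an illustration, not necessarily the exact file from the repo ):

```jade
html
  head
    title Jade client side example
  body
    h1 Rendered in the browser
    p This markup was produced by jade.compile on the client.
```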

The server side example uses Node to generate HTML which is pushed back to the client in rendered form. The code for the server is different from the one used in the first example. Basically, we assume that each HTML file has a corresponding Jade template; we try to fetch that template, parse it, render it and serve it back to the client as HTML.

var fs = require("fs");
var url = require("url");
var jade = require("jade");
var connect = require("connect");

connect.createServer(function(req, res){
    var request = url.parse(req.url, false);
    var filename = request.pathname.slice(1);

    if (request.pathname == '/') {
        filename = 'index.html';
    }

    console.log("Serving request: " + request.pathname + " => " + filename);

    var jadeFilename = "jade/" + filename.slice(0, filename.lastIndexOf(".")) + ".jade";

    console.log("Serving jade file: " + jadeFilename);

    try {
        fs.realpathSync(jadeFilename);
    } catch (e) {
        // The template does not exist: report 404 and stop here, otherwise
        // we would fall through and try to read the missing file anyway.
        res.writeHead(404);
        res.end();
        return;
    }

    fs.readFile(jadeFilename, function(err, data) {
        if (err) {
            console.log(err);
            res.writeHead(500);
            res.end();
            return;
        }

        res.writeHead(200, {"Content-Type": "text/html"});

        // readFile returns a Buffer; jade.compile expects a string
        var fn = jade.compile(data.toString());
        var html = fn({});

        res.write(html);
        res.end();
    });
}).listen(8080);

You can get the code for both examples from my GitHub repo. To start either of them, use node base/server.js inside the example directory.

Drawing line to UIImage using CoreGraphics [ iOS ]

How do you draw a single line into a UIImage using the Core Graphics framework? Recently I needed this, so here is the code ( everything is straightforward -> no explanation needed 🙂 ):

    NSLog(@"Creating image");

    CGSize size = CGSizeMake(240.0f, 240.0f);
    UIGraphicsBeginImageContext(size);
    CGContextRef context = UIGraphicsGetCurrentContext();

    CGContextSetStrokeColorWithColor(context, [[UIColor blackColor] CGColor]);
    CGContextSetFillColorWithColor(context, [[UIColor whiteColor] CGColor]);

    CGContextFillRect(context, CGRectMake(0.0f, 0.0f, 240.0f, 240.0f));

    CGContextSetLineWidth(context, 5.0f);
    CGContextMoveToPoint(context, 100.0f, 100.0f);
    CGContextAddLineToPoint(context, 150.0f, 150.0f);
    CGContextStrokePath(context);

    UIImage* result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    imageView.image = result;
    [imageView setNeedsDisplay];

    NSLog(@"Image creation finished");

The example can be found on my GitHub.

Best

Hadoop: How to get active jobs in cluster

Recently I was building Hadoop alerting infrastructure and I needed something to track the active jobs in the cluster.

So, for a start you need an instance of JobClient. JobClient is a wrapper around the JobTracker RPC; basically, under the hood JobClient creates a JobSubmissionProtocol instance:

    public void init() throws IOException {
        String tracker = conf.get("mapred.job.tracker", "local");
        if ("local".equals(tracker)) {
          this.jobSubmitClient = new LocalJobRunner(conf);
        } else {
          this.jobSubmitClient = (JobSubmissionProtocol) 
            RPC.getProxy(JobSubmissionProtocol.class,
                         JobSubmissionProtocol.versionID,
                         JobTracker.getAddress(conf), conf);
        }        
    }

Let’s code:

import org.apache.hadoop.mapred.JobClient;

Initialize a JobClient instance:

JobClient jobClient = new JobClient(new InetSocketAddress(jobTrackerHost, jobTrackerPort), new Configuration());

where jobTrackerHost and jobTrackerPort are the host name and port where the Job Tracker is running …

To get the list of currently active jobs in the cluster, all you have to do is:

JobStatus[] activeJobs = jobClient.jobsToComplete();

This will give you a list of all active jobs. JobStatus is pretty useful: you can get the jobId, the username that was used to submit the job, the start time…
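Putting the pieces together, a small monitoring sketch could look like this ( the host name and port are placeholders; this targets the old org.apache.hadoop.mapred API and needs the Hadoop client jars and a running cluster ):

```java
import java.net.InetSocketAddress;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobStatus;

public class ActiveJobsMonitor {
    public static void main(String[] args) throws Exception {
        // Connect to the JobTracker RPC (replace host/port with your cluster's)
        JobClient jobClient = new JobClient(
                new InetSocketAddress("jobtracker.example.com", 9001),
                new Configuration());

        // jobsToComplete() returns jobs that are still preparing or running
        JobStatus[] activeJobs = jobClient.jobsToComplete();
        System.out.println("Active jobs: " + activeJobs.length);
        for (JobStatus job : activeJobs) {
            System.out.println(job.getJobId()
                    + " user=" + job.getUsername()
                    + " started=" + job.getStartTime());
        }
    }
}
```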

I created sample code which outputs the number of active jobs and their info every n seconds; look on my GitHub.

Hadoop “Could not complete file …” issue

Recently I ran into a problem with Hadoop causing DFSClient to stop responding and enter an infinite loop while displaying the “Could not complete file …” message. The Hadoop version is 0.20.1, svn version 810220, but it seems to me from the code that this issue can occur on newer versions too.

The NameNode logs show that the file was created and blocks were assigned to it, and there is no complete message in the logs. In the DataNode logs there is an exception which looks like a network connectivity issue.

I found this issue on the web, HDFS-148, and it seems that I have the same problem. My biggest problem is that I cannot reproduce the issue; it happens maybe once a month.

After some digging in the code I found the part that causes me trouble:

    
    private void completeFile() throws IOException {
      long localstart = System.currentTimeMillis();
      boolean fileComplete = false;
      while (!fileComplete) {
        fileComplete = namenode.complete(src, clientName);
        if (!fileComplete) {
          if (!clientRunning ||
                (hdfsTimeout > 0 &&
                 localstart + hdfsTimeout < System.currentTimeMillis())) {
              String msg = "Unable to close file because dfsclient " +
                            " was unable to contact the HDFS servers." +
                            " clientRunning " + clientRunning +
                            " hdfsTimeout " + hdfsTimeout;
              LOG.info(msg);
              throw new IOException(msg);
          }
          try {
            Thread.sleep(400);
            if (System.currentTimeMillis() - localstart > 5000) {
              LOG.info("Could not complete file " + src + " retrying...");
            }
          } catch (InterruptedException ie) {
          }
        }
      }
    }

So, as I can conclude from the logs, DFSClient entered this while loop and is constantly outputting:

LOG.info("Could not complete file " + src + " retrying...");

For some reason the file is never completed ( the name node doesn’t have the complete call in its logs, probably some network issue ), but completeFile should throw an IOException when this condition is fulfilled:

if (!clientRunning || (hdfsTimeout > 0 && localstart + hdfsTimeout < System.currentTimeMillis()))

By default hdfsTimeout is set to -1 and the client is running, so the piece of code that throws the exception is never executed. The code that sets hdfsTimeout in Client looks like:

  final public static int getTimeout(Configuration conf) {
    if (!conf.getBoolean("ipc.client.ping", true)) {
      return getPingInterval(conf);
    }
    return -1;
  }

I tried to find out more about setting ping to false and found HADOOP-6099. I will try to play with disabling ping, but it’s hard because I can’t reproduce the issue.
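For the record, disabling the ping means setting ipc.client.ping to false in the client configuration; with ping disabled, getTimeout returns the ping interval instead of -1, so hdfsTimeout becomes positive and the IOException branch can finally trigger. Whether this actually avoids the hang is exactly what I can’t verify yet:

```xml
<!-- core-site.xml fragment on the client side; effect on this issue unverified -->
<property>
  <name>ipc.client.ping</name>
  <value>false</value>
</property>
```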

Node.js on Amazon EC2

What’s the fastest way to start up an EC2 instance and run Node.js on it to serve simple HTTP requests?

So, you created an Amazon AWS account and want to give it a try. Amazon promised you a micro instance free of charge for one year, so let’s use it 🙂

I found various posts on how to run Node.js from an already prepared image or some third party EBS ( Elastic Block Store ) images, but I wanted to do it from scratch. Here are the steps:

0. Log in to the AWS Management Console: Here
1. Fire up a micro instance. I’m using the “Launch Classic Wizard” with the Basic 64-bit Amazon Linux AMI.
2. Set up a key pair if you don’t have one already and download it to your computer. You will need it to access the instance after start up is finished.
3. Under “Configure firewall”, be sure to add ssh and http port exceptions: for ssh, tcp port 22 with source 0.0.0.0/0, and for http, tcp port 80 with source 0.0.0.0/0. I added the range 8000 – 9000 ( just in case I need it later ). Note: https uses a different port than http.
4. Launch the instance

After the instance is launched you will see the instance details under “Instances”. Copy the public DNS; we will use it to attach to the instance ( something like ec2-XX-XX-XX-XX.x.compute.amazonaws.com ).

Note: there is a possibility to assign a static IP address to this instance. As I understood, this feature is free as long as it’s linked to an active instance. If the instance is down and the given IP is kept, you need to pay. A smart way to convince you to run your instance 24×7 🙂. For more details see Amazon AWS pricing and “Elastic IPs” under the AWS Console.

To continue our journey: you have an active instance, so let’s use it:

5. ssh to your instance: ssh -i //ec2.pem ec2-user@ec2-XX-XX-XX-XX.x.compute.amazonaws.com ( you will probably need to change the certificate permissions to 700 )
6. do sudo su to get the feeling 🙂

You have root access to your machine; let’s install Node.js and run it.

7. Install required packages

yum install gcc gcc-c++ openssl-devel make

8. install Node.js

wget http://nodejs.org/dist/v0.6.5/node-v0.6.5.tar.gz
tar -zxf node-v0.6.5.tar.gz
cd node-v0.6.5
./configure
make
sudo make install
sudo chown -R ec2-user /usr/local
curl http://npmjs.org/install.sh | sh

After these steps you should have node installed. To check it out you can download some simple node server ( I will use the one from an earlier post, Simple static file HTTP server with Node.js ).

9. install git

yum install git

10. become ec2-user again ( use the exit command if you are root )
11. check out the code to your home dir

git clone git://github.com/vanjakom/JavaScriptPlayground.git
cd JavaScriptPlayground/nodejs_static_file_server

12. run Node.js

node server.js

You can test everything by pointing your browser to http://ec2-XX-XX-XX-XX.x.compute.amazonaws.com:8000/test.html.

Note: if you want to run Node.js on the default HTTP port ( 80 ) you need to do so as root, and of course change the server.js code to start the server on port 80.

13. change server.js

...
}).listen(80);
...

14. run Node.js as root

sudo /usr/local/bin/node server.js

Scrolling view to show fields hidden by keyboard on iOS

Introduction

While testing a JSON framework I ran into an interesting problem: the on screen keyboard was hiding one of the text fields. Trying to figure out how to fix this, I found two approaches; basically both use the same technique: move the view up when the keyboard is shown and return it to its original position when the keyboard is dismissed. The first, cleaner approach uses UIScrollView to do the moving and is also recommended in the Text, Web, and Editing Programming Guide for iOS under “Moving Content That Is Located Under the Keyboard”. The second approach is to use a plain UIView and do the moving manually. For production purposes I would definitely use the first, but here I wanted to give the second a chance.

Idea

I will create a simple project that contains four text fields and one text view. All five elements report to a single view controller via UITextFieldDelegate and UITextViewDelegate. On textFieldDidBeginEditing and textViewDidBeginEditing the view is moved if needed. When textFieldDidEndEditing and textViewDidEndEditing are triggered, the view is moved back to its original position. The screenshots show the implemented functionality: on the first screen all five elements are shown; on the second the keyboard is active and the text field with text “test 4” is being edited.

Code

Showing textViewDidBeginEditing:

- (void)textViewDidBeginEditing:(UITextView *)textView
{
    NSLog(@"textViewDidBeginEditing"); 

    // 480 = portrait screen height, 216 = keyboard height, 20 = status bar height
    if (textView.frame.origin.y + textView.frame.size.height > 480 - 216) {
        double offset = 480 - 216 - textView.frame.origin.y - textView.frame.size.height - 20;
        CGRect rect = CGRectMake(0, offset, self.view.frame.size.width, self.view.frame.size.height);

        [UIView beginAnimations:nil context:NULL];
        [UIView setAnimationDuration:0.3];

        self.view.frame = rect;

        [UIView commitAnimations];
    }    
}

And textViewDidEndEditing:

- (void)textViewDidEndEditing:(UITextView *)textView
{
    NSLog(@"textViewDidEndEditing");

    CGRect rect = CGRectMake(0, 20, self.view.frame.size.width, self.view.frame.size.height);

    [UIView beginAnimations:nil context:NULL];
    [UIView setAnimationDuration:0.3];

    self.view.frame = rect;

    [UIView commitAnimations];     
}

Also, hideKeyboard is used to hide the keyboard when “Done” is pressed ( all text fields call this selector on “Did End On Exit” ).

- (IBAction)hideKeyboard:(id)sender
{
    NSLog(@"hideKeyboard");
    [sender resignFirstResponder];
}

The source code is available on my GitHub under the iOS Playground repo. The project is called OnKeyboardViewResize.

HTTPS client for iOS

Some time ago I blogged about a client/server implementation for two way SSL ( both client and server are authenticated with certs ) on top of nodeJS ( blog post ). Now I’m trying to connect an iOS client to the nodeJS HTTPS server.

I will use NSURLConnection for the requests. Note that client authentication is only possible with asynchronous requests; the simplified sendSynchronousRequest doesn’t support delegates. I will load the p12 generated certificate from the App resources for authentication ( maybe a better idea is to use the Keychain for “production” applications, but here I just want to test things ).

Example Xcode project is available on HttpSSLClient – GitHub

So, when a request is made to an HTTPS server which requires authentication with a certificate, the delegate’s connection:didReceiveAuthenticationChallenge: is called. In this method we need to obtain the certificate ( for code simplicity I will load the cert each time from resources ) and present it to the sender, which will use that certificate against the server.

Note: I’m playing here with self signed certificates and the code didn’t work until I added this ( thanks to the guy who found it ). Also, for simplicity I’m returning YES without any validation here.

- (BOOL)connection:(NSURLConnection *)connection canAuthenticateAgainstProtectionSpace:(NSURLProtectionSpace *)protectionSpace
{
    return YES;
}

Back on track: to load the certificate I’m using:

NSString *path = [[NSBundle mainBundle] pathForResource:@"userA" ofType:@"p12"];
NSData *p12data = [NSData dataWithContentsOfFile:path];
CFDataRef inP12data = (__bridge CFDataRef)p12data;
        
SecIdentityRef myIdentity;
SecTrustRef myTrust;
OSStatus status = extractIdentityAndTrust(inP12data, &myIdentity, &myTrust);
    
SecCertificateRef myCertificate;
SecIdentityCopyCertificate(myIdentity, &myCertificate);
const void *certs[] = { myCertificate };
CFArrayRef certsArray = CFArrayCreate(NULL, certs, 1, NULL);

The extractIdentityAndTrust function is copied from the Apple Certificate, Key, and Trust Services Programming Guide with slight modifications:

OSStatus extractIdentityAndTrust(CFDataRef inP12data, SecIdentityRef *identity, SecTrustRef *trust)
{
    OSStatus securityError = errSecSuccess;
    
    CFStringRef password = CFSTR("userA");
    const void *keys[] = { kSecImportExportPassphrase };
    const void *values[] = { password };
    
    CFDictionaryRef options = CFDictionaryCreate(NULL, keys, values, 1, NULL, NULL);
    
    CFArrayRef items = CFArrayCreate(NULL, 0, 0, NULL);
    securityError = SecPKCS12Import(inP12data, options, &items);
    
    if (securityError == 0) {
        CFDictionaryRef myIdentityAndTrust = CFArrayGetValueAtIndex(items, 0);
        const void *tempIdentity = NULL;
        tempIdentity = CFDictionaryGetValue(myIdentityAndTrust, kSecImportItemIdentity);
        *identity = (SecIdentityRef)tempIdentity;
        const void *tempTrust = NULL;
        tempTrust = CFDictionaryGetValue(myIdentityAndTrust, kSecImportItemTrust);
        *trust = (SecTrustRef)tempTrust;
    }
    
    if (options) {
        CFRelease(options);
    }
    
    return securityError;
}

After the certificate is loaded we need an NSURLCredential that will be sent to the certificate challenger:

NSURLCredential *credential = [NSURLCredential credentialWithIdentity:myIdentity certificates:(__bridge NSArray*)certsArray persistence:NSURLCredentialPersistencePermanent];
    
[[challenge sender] useCredential:credential forAuthenticationChallenge:challenge];

For debugging I’m using the nodeJS server, and the certificate for userA is taken from that Git repo.
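Putting the fragments together, the whole challenge handler looks roughly like this ( a sketch assembled from the snippets above; error handling omitted ):

```objc
- (void)connection:(NSURLConnection *)connection didReceiveAuthenticationChallenge:(NSURLAuthenticationChallenge *)challenge
{
    // Load the p12 from app resources on every challenge (simple, not efficient)
    NSString *path = [[NSBundle mainBundle] pathForResource:@"userA" ofType:@"p12"];
    NSData *p12data = [NSData dataWithContentsOfFile:path];

    SecIdentityRef myIdentity;
    SecTrustRef myTrust;
    extractIdentityAndTrust((__bridge CFDataRef)p12data, &myIdentity, &myTrust);

    // Wrap the identity's certificate in the array expected by NSURLCredential
    SecCertificateRef myCertificate;
    SecIdentityCopyCertificate(myIdentity, &myCertificate);
    const void *certs[] = { myCertificate };
    CFArrayRef certsArray = CFArrayCreate(NULL, certs, 1, NULL);

    NSURLCredential *credential = [NSURLCredential credentialWithIdentity:myIdentity
                                                             certificates:(__bridge NSArray*)certsArray
                                                              persistence:NSURLCredentialPersistencePermanent];
    [[challenge sender] useCredential:credential forAuthenticationChallenge:challenge];
}
```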

Links:

NSURLConnection class
Apple URL loading guide
Apple Certificate, Key, and Trust Services Programming Guide
NSURLConnection with Self-Signed Certificates
How to use Client Certificate Authentication in iOS App
How to connect with client certificate using a WebView in Cocoa?

Simple HTTP client for iOS

As part of the research for my ToDoGenius project I was playing with HTTP requests from iOS.

I created a sample project which can be used as a starting point, HttpClientTest.

HttpClientTest supports synchronous data retrieval ( not a good choice because it blocks the main thread ) and asynchronous retrieval. The retrieved data is shown in a UITextView, pretty simple. For the server I’m using the nodeJS static file server.

If you want to “try out” the blocking of the main thread during sync vs async requests, use server_withtimeout.js.

For a synchronous request the code looks like:

[status setText:@"Retrieving response sync"];
[response setText:@""];

NSURL* requestUrl = [[NSURL alloc] initWithString:url.text];
NSURLRequest* request = [NSURLRequest requestWithURL:requestUrl cachePolicy:NSURLRequestReloadIgnoringCacheData timeoutInterval:60.0];

NSData* data = [NSURLConnection sendSynchronousRequest:request returningResponse:nil error:nil];

NSString* responseString = [[NSString alloc] initWithData:data encoding:NSUTF8StringEncoding];

[response setText:responseString];

[status setText:@"Response retrieved sync"];

For asynchronous retrieval the ViewController is used as the NSURLConnection delegate:

- (void)connection:(NSURLConnection*) connection didReceiveResponse:(NSURLResponse *)response
{
    NSLog(@"Response received");
}

- (void)connection:(NSURLConnection*) connection didReceiveData:(NSData *)data
{
    // Note: didReceiveData can be called multiple times for one response;
    // a real client should append the chunks to a buffer instead of overwriting.
    NSLog(@"Data received");

    NSString* responseString = [[NSString alloc] initWithData:data encoding:NSUTF8StringEncoding];

    [response setText:responseString];

    [status setText:@"Response retrieved async"];
}

The code which starts the asynchronous request is:

[status setText:@"Retrieving response async"];
[response setText:@""];

NSURL* requestUrl = [[NSURL alloc] initWithString:url.text];
NSURLRequest* request = [NSURLRequest requestWithURL:requestUrl cachePolicy:NSURLRequestReloadIgnoringCacheData timeoutInterval:60.0];

NSURLConnection* connection = [[NSURLConnection alloc] initWithRequest:request delegate:self];
[connection start];

Bloom filter inside Map Reduce

A Bloom filter is a pretty useful tool when writing Map Reduce tasks. With the constraint that it generates a certain percentage of false positives, a Bloom filter is an ideal space-efficient way to get rid of irrelevant records during the map phase of Map Reduce tasks.

The scenario is the following. Let’s say you have 200M records from which you want to select 20M ( filtering by id ) and add some data ( additional_data ) on top of each record.

Solution 1, without a Bloom filter: drive the records, the additional data and the ids list to the map input. Map to (id, record), (id, additional_data) and (id, “yes”) pairs. Inside reduce, if an id has a corresponding “yes” record, apply the additional data to the record and output (null, record + additional_data).

Solution 2, without a Bloom filter but with a HashSet inside map. Now we drive only the records and additional data to the map input, but in map:setup we load a HashSet with the record ids from HDFS that should be passed on to the reduce phase. During map:call we output only records which are inside the HashSet. In the reducer we now have only the required records; the one thing left is to apply the additional data and output the record.

Let’s review Solution 1: small memory footprint, easy to write and use, but large IO between map and reduce ( we transfer 90% unused data between nodes ).

Solution 2 reviewed: to make this work on a cluster we need much more memory for each map task on the nodes -> the number of parallel map tasks must be reduced -> longer execution time.

Solution 3, use a Bloom filter. Same scenario as in Solution 2, but instead of a HashSet we use a Bloom filter with a pre calculated data set inside the map tasks, and to the map input we put the records, the additional data and the ids of required records. The Bloom filter can be pre calculated locally or with a simple Map Reduce task. In reduce we have our 20M plus a few percent of unwanted records; we filter these out as in Solution 1. This gives us the low IO between nodes of Solution 2 and the small memory footprint of Solution 1.
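A sketch of the Solution 3 map side might look like this ( BloomFilter, its load method and the id parsing are illustrative names, not the classes from my repo; written against the new org.apache.hadoop.mapreduce API ):

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class FilteringMapper extends Mapper<LongWritable, Text, LongWritable, Text> {
    private BloomFilter filter;

    @Override
    protected void setup(Context context) throws IOException {
        // Load the pre-calculated filter, e.g. from the Distributed Cache
        filter = BloomFilter.load(context.getConfiguration());
    }

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        long id = extractId(value);
        // Drop records the filter has definitely not seen; false positives
        // slip through here and are removed later, in the reduce phase.
        if (filter.mightContain(id)) {
            context.write(new LongWritable(id), value);
        }
    }

    private long extractId(Text value) {
        // Illustrative: assume the id is the first tab-separated field
        String line = value.toString();
        return Long.parseLong(line.substring(0, line.indexOf('\t')));
    }
}
```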

Bloom filter implementation is well explained in the links given below, so I will not go into detail. The main part of the implementation is for sure the decision which hasher to use; this decision is of course made with the key type in mind. In the implementation given below I will assume that the key is represented as a long. In the example on GitHub I gave three implementations, based on Murmur hash, java Random and a basic string hasher.
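To make the idea concrete, here is a minimal stand-alone Bloom filter over long keys ( a sketch with a simple multiplicative hasher, not one of the three hashers from the repo ):

```java
import java.util.BitSet;

// Minimal Bloom filter over long keys: k bit positions per key,
// false positives possible, false negatives impossible.
class LongBloomFilter {
    private final BitSet bits;
    private final int numBits;
    private final int numHashes;

    LongBloomFilter(int numBits, int numHashes) {
        this.bits = new BitSet(numBits);
        this.numBits = numBits;
        this.numHashes = numHashes;
    }

    // Derive the i-th bit position from the key with a mixed multiplicative hash
    private int index(long key, int i) {
        long h = key * 0x9E3779B97F4A7C15L + i * 0xC2B2AE3D27D4EB4FL;
        h ^= (h >>> 33);
        return (int) Math.floorMod(h, (long) numBits);
    }

    void add(long key) {
        for (int i = 0; i < numHashes; i++) bits.set(index(key, i));
    }

    boolean mightContain(long key) {
        for (int i = 0; i < numHashes; i++) {
            if (!bits.get(index(key, i))) return false;
        }
        return true;
    }
}
```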

Also, BloomFilterExample contains two ways of getting the Bloom filter to the Map task. The first uses the job configuration. This is just an example; real usage of this would be inappropriate if the filter footprint were larger. For the second, I chose the Distributed Cache feature of Hadoop.

Two tests are also included, one unit and one performance. The performance test shows pretty much the same results for the Murmur hash and java Random implementations. The table shows false positive results on 1M records depending on the number of bits used per element ( from this and the number of elements, the number of hash functions is calculated ):

number of bits per element | Java Random | Murmur hash | String hash
                         2 |         39% |         39% |         39%
                         4 |         14% |         14% |         31%
                         8 |          2% |          2% |         25%
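For reference, the numbers above match the theoretical false positive rate. For m bits, n elements and k hash functions:

    p ≈ (1 - e^(-k*n/m))^k        // false positive rate
    k_opt = (m/n) * ln 2          // optimal number of hash functions
    p_opt ≈ 0.6185^(m/n)          // rate at the optimal k

With the optimal k this gives roughly 38% for 2 bits per element, 15% for 4 and 2% for 8, in line with the Java Random and Murmur columns.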

To run everything, check out the GitHub repo and do:

mvn clean package

This should compile everything and run the unit test. For the performance test:

mvn -Dtest=com.busywait.bloomfilter.BloomFilterPerformanceTest test

To run the Map Reduce Bloom example using the configuration:

bin/bloom_configuration.sh

For the Map Reduce Bloom example based on the Distributed Cache:

bloom_distributedcache.sh

The Map Reduce examples are set to run on a localhost Hadoop CDH3u0 cluster with the configuration in /etc/hadoop/conf/; output will be saved to hdfs://localhost/temp/output_configuration and hdfs://localhost/temp/output_distcache. Input is automatically generated inside the hdfs://localhost/temp/ dir.

Useful links:

Wiki page
Greplin Bloom filter implementation
Java Bloom filter implementation
A decent stand-alone Java Bloom Filter implementation