@interface AQBlog : NSBlog @end

Tutorials, musings on programming and ePublishing

Blocks, Episode 1


I was expecting to have to wait until the release of Snow Leopard to write any of this small series of tutorials on using Blocks and the different paradigms you might want to learn as a result. I will still have to do so to really get involved with the actual capabilities of things like Grand Central Dispatch. However, since Landon Fuller / Plausible Labs released their port of the Blocks runtime to OS X 10.5 and iPhone OS 3.0, I can give you a heads-up on the things you can do with them, and on the new programming behaviours they allow you to implement.

Most importantly of all, anyone who's ever written code in a functional language will tell you that closures/blocks can give you a very neat and tidy way of implementing concurrent code. Specifically, thread-safe concurrent code without locks. This is all based upon the use of serial queues; yes, not unlike an NSOperationQueue with its maximum concurrency set to 1. You can already use a queue like that of course, but it's a little more awkward to actually put tasks onto that queue. Blocks give us the ability to implement a much simpler way of doing the same thing.

For instance, take the following code:

- (void) setTitle: (NSString *) newTitle
{
    @synchronized(self)
    {
        newTitle = [newTitle copy];
        [myTitle release];
        myTitle = newTitle;
    }
}
- (NSString *) title
{
    NSString * result = nil;
    @synchronized(self)
    {
        result = [myTitle retain];
    }
    return ( [result autorelease] );
}

Those methods behave in much the same manner as the synthesized accessors for an atomic, copied property. To implement atomicity, they use a lock. The real property code doesn't actually use @synchronized; it uses a more optimized set of locks, but the principle is the same. The 'problem' with this code is that it always acquires a lock, even when it doesn't need to. That overhead adds up, so if this property were accessed or changed frequently you would probably want to optimize it. You could use NSLock, pthread_mutex, pthread_rwlock, or OSSpinLock. If those weren't fast enough, you could do some clever math on the values to implement a non-locking algorithm. But that can be a pain to write correctly (or indeed at all). This is where queues come in, and where blocks make those queues wonderfully simple to use.
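For illustration, here's a sketch of those same accessors using one of the alternatives mentioned above, pthread_rwlock. This assumes the class declares a `pthread_rwlock_t myTitleLock;` ivar and initializes it with pthread_rwlock_init() in -init; the payoff over @synchronized is that multiple readers can hold the lock concurrently:

```objc
// Sketch only: assumes an ivar `pthread_rwlock_t myTitleLock;`
// initialized elsewhere via pthread_rwlock_init(&myTitleLock, NULL).
- (void) setTitle: (NSString *) newTitle
{
    newTitle = [newTitle copy];
    pthread_rwlock_wrlock( &myTitleLock );
    NSString * oldTitle = myTitle;
    myTitle = newTitle;
    pthread_rwlock_unlock( &myTitleLock );
    [oldTitle release];    // release outside the lock, in case -dealloc is slow
}

- (NSString *) title
{
    pthread_rwlock_rdlock( &myTitleLock );
    NSString * result = [myTitle retain];
    pthread_rwlock_unlock( &myTitleLock );
    return ( [result autorelease] );
}
```

Faster than @synchronized, yes, but you're still paying for a lock on every single access, which is exactly what the queue-based approach below avoids having to think about.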

Here's what you could write, given the existence of a single global serial queue upon which to run your blocks:

- (void) setTitle: (NSString *) newTitle
{
    [[AQSerialQueue globalQueue] runBlockSync: ^{
        NSString * copied = [newTitle copy];
        [myTitle release];
        myTitle = copied;
    }];
}
- (NSString *) title
{
    __block NSString * result = nil;
    [[AQSerialQueue globalQueue] runBlockSync: ^{
        result = [myTitle retain];
    }];
    return ( [result autorelease] );
}

Each of the blocks in the above two methods takes no parameters and returns no values; they just operate on their current scope. Notice, however, that they don't use any locks. They achieve thread safety by performing their duties on a serial queue, and it's this queue which provides the thread safety. By its serial nature, no matter how many threads step through the code above at any one time, the queue will run each block in the order it was received, one at a time. So no setter or getter will ever preempt another.

Also note that these blocks are run synchronously. This ensures that the API contract is enforced for the setter (the new value will have been set by the time the setter function returns), and that the getter's block can modify a stack variable (declared __block, so the block sees the real variable rather than a const copy) to be returned. If such contracts aren't needed, you could just as easily drop the block onto the queue to be run asynchronously, and return safe in the knowledge that while the value might not have been set right now, it will be set at some point in the future, and still in order relative to other threads calling this API.
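Supposing AQSerialQueue grew an asynchronous counterpart, call it -runBlockAsync: (a hypothetical name; nothing's been published yet), that lock-free asynchronous setter might look like this. The key requirement is that such a method must Block_copy() the block to the heap before returning, since the block and its captured variables would otherwise die along with the caller's stack frame:

```objc
- (void) setTitle: (NSString *) newTitle
{
    NSString * copied = [newTitle copy];    // take ownership before going async
    [[AQSerialQueue globalQueue] runBlockAsync: ^{
        [myTitle release];
        myTitle = copied;
    }];
    // -runBlockAsync: (hypothetical) is assumed to copy the block to the
    // heap, keeping it and its captured `copied` pointer alive until it runs.
}
```

The setter returns immediately, but because the queue is serial the assignment still happens in order relative to every other block queued by other threads.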

As for AQSerialQueue itself, that's something I'll be unveiling soon, but probably not today; unlike some lucky folks I don't write for a living, so I have to go back to work now. It will be coming soon, however. In the meantime you can experiment with async blocks via NSOperationQueue with this code I just whipped off in a completely random and untested fashion:

typedef void (^AQQuickiePlainBlock)(void);

@interface NSOperationQueue (AQQuickieBlockSupport)
- (void) addOperationWithBlock: (AQQuickiePlainBlock) operationBlock;
@end

@interface AQQuickieBlockOperation : NSOperation
{
    AQQuickiePlainBlock theBlock;
}
@property (nonatomic, copy) AQQuickiePlainBlock theBlock;
@end

@implementation NSOperationQueue (AQQuickieBlockSupport)
- (void) addOperationWithBlock: (AQQuickiePlainBlock) operationBlock
{
    AQQuickieBlockOperation * op = [[AQQuickieBlockOperation alloc] init];
    op.theBlock = operationBlock;
    [self addOperation: op];
    [op release];
}
@end

@implementation AQQuickieBlockOperation
@synthesize theBlock;
- (void) dealloc
{
    [theBlock release];
    [super dealloc];
}
- (void) finalize
{
    // PLBlocks doesn't play nice with GC just yet
    [theBlock release];
    [super finalize];
}
- (void) main
{
    theBlock();
}
@end
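Using that quickie category is then a one-liner wherever you have a queue handy. A throwaway example (the queue here is created ad hoc, just for demonstration):

```objc
NSOperationQueue * queue = [[NSOperationQueue alloc] init];
[queue addOperationWithBlock: ^{
    NSLog( @"Hello from a block wrapped in an NSOperation!" );
}];
[queue waitUntilAllOperationsAreFinished];
[queue release];
```

Set the queue's maxConcurrentOperationCount to 1 and you have a crude serial queue of the sort described above, albeit with more per-task overhead than a purpose-built one.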

In the next article, we'll go into a little more detail about the Blocks runtime itself, and will, for instance, explain why we use (nonatomic, copy) for the block property on this NSOperation subclass.
