Sunday, August 20, 2006

Closures At Last !

There have been some significant movements among the Java leaders to introduce this much-awaited feature in the Java programming language. The big team of Gilad Bracha, Neal Gafter, James Gosling and Peter von der Ahé has released a draft proposal for adding closures to Dolphin (JDK 7). I know Gilad has been a big proponent of having closures in Java and he has expressed his frustration in his blog at Java being so late an entrant in closing this out.

Brevity

Thoughts about introducing closures in Java have definitely been triggered by the excellent support for closures in C# and in a host of scripting languages like Ruby, Groovy, Python and Javascript. The syntax, as proposed in the draft, looks a bit cumbersome to me, particularly after getting used to the elegance of Groovy, Ruby and even C#. I know that Java, being a statically typed language, does not make it easy to have a closure syntax as elegant as that of the dynamic languages.

My Wishlist of Associated Features

If closures see the light of day in Java, then I would like to have the following associated features, which will make the set more complete:


  • Type aliasing / typedefs: Without type aliasing it will be extremely cumbersome to write the entire type signature every time. I am sure this will also make programming with generics much easier. The keyword is *syntax-brevity*, and type aliasing is a great way to achieve it.

  • Currying: Higher order functions and closures are definitive steps towards implementing full currying features.

  • Generic Closures: It will be interesting to find out how closures will mix with generics.

  • Internal Iterators: I would love to write code like the following:


    int[] list = ...
    int[] evens = Arrays.findAll(list, (int n) { return n % 2 == 0; });
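
For contrast, here is roughly what the same internal iterator costs us in today's Java with an anonymous inner class - note that the IntPredicate interface and the findAll() helper below are hypothetical, since nothing like them exists in the JDK yet:

interface IntPredicate {
  boolean evaluate(int n);
}

static int[] findAll(int[] list, IntPredicate p) {
  // two passes: count the matches, then copy them over
  int count = 0;
  for (int n : list) if (p.evaluate(n)) count++;
  int[] result = new int[count];
  int i = 0;
  for (int n : list) if (p.evaluate(n)) result[i++] = n;
  return result;
}

int[] evens = findAll(list, new IntPredicate() {
  public boolean evaluate(int n) { return n % 2 == 0; }
});

The closure version says the same thing in a single line - that is the brevity we are talking about.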

Sunday, August 13, 2006

Extend Your Type Transparently using Spring Introductions

One of the main intents of the Bridge design pattern is to allow decoupled dual hierarchies of interfaces and implementations growing independently, giving users the flexibility to compose. The binding constraint, however, is that all implementations have to abide by the base contract dictated by the abstract interface.

Readers of my blog must have been bored by now with my regular chanting about the necessity of a generic data access layer in Java based applications. I have designed one which we have been using in some of our Java projects - I have blogged extensively about the design of such an artifact here, here and here. The DAO layer has been designed as a Bridge, with a dual hierarchy of interfaces acting as client contracts backed up by the implementation hierarchies. So far the clients had been using the JDBC implementation and have never complained about the contracts. Only recently I thought that I would have to sneak in a JPA implementation as well, since Spring has also started supporting JPA.

Things fell into place like a charm, till I hit upon a roadblock in the design. If you need to provide some contracts which make sense only for a specific implementation (not all of them), then what do you do? The basic premise of using Bridge is to have a single set of interfaces (contracts) which all implementations need to support. We have the following options:

  • Throw exceptions for unsupported implementations and hope the user does not use them. Document extensively, warning users not to venture into these territories. But if my client is like me and does not have the habit of reading documentation carefully before coding, then he may be in for some surprises.


  • Use the Extension Object Design Pattern, which allows you to extend an object's interface and lets clients choose and access the interfaces they need. Cool - this is what I need to extend the contract of my generic DAO! But hold on!! Look at the very first line of the pattern's intent, as described by Erich Gamma: "Anticipate that an object's interface needs to be extended in the future.". What this means is that you will have to design your abstraction anticipating a priori that it may be extended. So if the necessity of providing extensions is an afterthought (which it is, in my case), then it doesn't fit the bill.


Extension of the Generic DAO Contract

One of the nifty features of EJB QL is that the user can specify a constructor within the SELECT clause that can allocate non-entity POJOs with the set of specified columns as constructor arguments. Let me illustrate through an example shamelessly copied from Richard Monson-Haefel and Bill Burke's Enterprise JavaBeans book.

public class Name {
  private String first;
  private String last;

  public Name(String first, String last) {
    this.first = first;
    this.last = last;
  }

  public String getFirst() { return first; }
  public String getLast() { return last; }
}


Note that Name is NOT an entity. Using EJB QL, we can actually write a query which will return a list of Name instances instead of a list of Strings.

SELECT new com.x.y.Name(c.firstName, c.lastName) FROM Customer c

I wanted to provide a contract which can return a collection of objects belonging to a class different from the Entity itself:

public <Context, Ret> List<Ret> read(Context ctx,
      String queryString,
      Object[] params);


And I wanted to have this contract specifically for the JPA implementation.

Dynamic Extension Objects using Inter-type Declarations in Aspects

Inter-type declarations in aspects provide a convenient way to declare additional methods or fields on behalf of a type. Since I have been using Spring 2.0 for the JPA implementation of the DAO, I went in for Spring Introductions, which allow me to introduce new interfaces (and a corresponding implementation) to any proxied object.

Quick on the heels, I came up with the following contract which will act as a mixin to the DAO layer:

public interface IJPAExtension {
  public <Context, Ret> List<Ret> read(Context ctx,
      String queryString,
      Object[] params);
}


and a default implementation ..

public class JPAExtension<T extends DTOBase> implements IJPAExtension {
  public <Context, Ret> List<Ret> read(Context ctx,
      String queryString,
      Object[] params) {
    // ...
  }
}
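
The body elided above might look roughly like the following - a minimal sketch, assuming that the generic Context handed in is a JPA EntityManagerFactory (the real implementation will need richer exception handling):

public <Context, Ret> List<Ret> read(Context ctx,
    String queryString,
    Object[] params) {
  EntityManager em = ((EntityManagerFactory) ctx).createEntityManager();
  try {
    Query query = em.createQuery(queryString);
    if (params != null) {
      // bind the positional parameters ?1, ?2 ..
      for (int i = 0; i < params.length; ++i) {
        query.setParameter(i + 1, params[i]);
      }
    }
    return (List<Ret>) query.getResultList();
  } finally {
    em.close();
  }
}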


And .. the Weaving in Spring 2.0

The client who wishes to use the new interface needs to define the extension object just to introduce the mixin - the rest is the AOP magic that weaves together all necessary pieces and makes everybody happy.

@Aspect
public class DAOExtension {

  @DeclareParents(value="com.x.y.dao.provider.spring.jpa.dao.*+",
    defaultImpl=JPAExtension.class)
  private IJPAExtension mixin;
}


The original contracts remain unpolluted, the other implementations do not bloat, yet we have successfully introduced new functionality into the JPA implementation - all without the client committing to any implementation class (we all know why we program to interfaces - right?). The client code can write the following:

IJPAExtension mixin = (IJPAExtension)restaurantDao;
List<RName> res =
    mixin.read(factory,
      "select new com.x.y.dao.provider.spring.jpa.RName(r.id, r.name) from Restaurant r where r.name like ?1",
      params);


Inter-type declarations are not a very frequently used feature of aspect oriented programming. But it is a useful vehicle for implementing many patterns in a completely non-invasive way. I found it very useful while extending my JPA based DAO implementations without adding to the base contracts of the bridge.

Tuesday, August 08, 2006

XML Integration in Java and Scala

During my trip to JavaOne 2006, I missed the session by Mark Reinhold where he discussed the plan of integrating XML into the Java programming language. There have been lots of discussions in various forums about the possibilities of this happening in Dolphin - Kirill Grouchnikov has blogged his thoughts on what he would like to see as part of native XML support in Java. The community, as usual, is divided on this subject - many people feel that integrating XML into the Java language will be a serious compromise on the simplicity of the language. Look at the comments section of this posting in JavaLobby. This feeling of compromise has gained more momentum in view of the upcoming integration of scripting languages like Javascript (ECMAScript, via Rhino) with Java (JSR 223).

Anyway, I think Java will have a first-cut integration of XML in Dolphin. In the JavaOne session, Mark discussed some of the options which they plan to offer in java.lang.XML, so as to make XML processing simpler in Java and liberate programmers from the hell of dealing with the DOM APIs. Microsoft has already published its implementation of XML integration into C# and VB in the form of XLinq. I tried my hand at it using the June CTP and found it to be quite elegant. In fact the whole thing looks seamless with the entire LINQ family and Microsoft's plan of fixing the infamous ROX triangle. Java has been lagging behind in this respect and is trying to make its last attempt to catch up - though expect nothing till Dolphin! I appreciate the fact that, considering the user base of millions that Java has today and its commitments to the community as the default choice for the enterprise platform (unless you are Bruce Tate, of course!), it is not easy to push a change into the language. Still, better late than never.

<scala/xml>

A few days ago, I was browsing through some of Mark's slides from JavaOne, when I thought that it would be a worthwhile exercise to find out how these could be implemented in Scala, which, in fact, offers the most complete XML integration as part of the language. I have repeatedly expressed my views about Scala in my blog (see here) and how positive I feel about saying Hello Scala. XML integration in Scala is no exception - in fact the nicest part of this integration is that the designers did not have to do much extra to push XML as a first class citizen in the Scala world. The elements of Scala that make it a nice host for XML integration are some of the core features of the language itself:

  • Scala, being a functional language, supports higher order functions, which provide a natural medium to handle recursive XML trees

  • Scala supports pattern matching, which can model algebraic data types and be easily specialized for XML data

  • For-comprehensions in Scala act as a convenient front end syntax for queries


Go through this Burak Emir paper for more on how XML integration in Scala offers scalable abstractions for service based architectures.

For brevity, I am not repeating the snippets as Mark presented them. They can be found on the JavaOne site under session TS-3441. I will try to work out some of the equivalent Scala manifestations.

Disclaimer: I am no expert in Scala, hence any improvements / suggestions to make the following more Scala-ish are very much welcome. Also, I tested this code with the recent drop of 2.1.7-patch8283.

Construction : XML Literals

This example adds more literals to an existing XML block. Here's the corresponding snippet in Scala:


val mustang =
  <feature>
    <id>29</id>
    <name>Method to find free disk space</name>
    <engineer>iris.garcia</engineer>
    <state>approved</state>
  </feature>;

def addReviewer(feature: Node, user: String, time: String): Node =
  feature match {
    case <feature>{ cs @ _* }</feature> =>
      <feature>{ cs }<reviewed>
      <who>{ user }</who>
      <when>{ time }</when>
      </reviewed></feature>
  }

Console.println(addReviewer(mustang,
        "graham.hamilton",
        "2004-11-07T13:44:25.000-08:00"));


The highlights of the above implementation are the brevity of the language, the mixing of code and XML data in the method addReviewer(), and the use of regular expression pattern matching, which can be useful for non-XML data as well. In case you wish, you can throw in Scala expressions within the XML data as well.

Queries, Collections, Generics, Paths

This snippet demonstrates the capabilities of XML queries in various manifestations, including XPath style queries. One major difference that I noticed is that the Scala representation of runtime XML is immutable, while the assumption in Mark's example was that java.lang.XML is mutable. I am not sure what the final Java offering will be, but immutable data structures have their own pros and, I guess, the decision to make the XML runtime representation immutable was a very well thought out one by the Scala designers. This adds a little verbosity to the Scala code below compared to its Java counterpart.

val mustangFeatures =
  <feature-list>
    <release>Mustang</release>
    <feature>
      <id>29</id>
      <name>Method to find free disk space</name>
      <engineer>iris.garcia</engineer>
      <state>approved</state>
    </feature>
    <feature>
      <id>201</id>
      <name>Improve painting (fix gray boxes)</name>
      <engineer>scott.violet</engineer>
      <state>approved</state>
    </feature>
    <feature>
      <id>42</id>
      <name>Zombie references</name>
      <engineer>mark.reinhold</engineer>
      <state>rejected</state>
    </feature>
  </feature-list>;

def isOpen(ft: Node): Boolean = {
  if ((ft \ "state").text.equals("approved"))
    false
  else
    true
}

def rejectOpen(doc: Node): Node = {

  def rejectOpenFeatures(features: Iterator[Node]): List[Node] = {
    for(val ft <- features) yield ft match {

      case x @ <feature>{ f @ _ * }</feature> if isOpen(x.elements.next) =>
        <feature>
        <id>{(x.elements.next \ "id").text}</id>
        <name>{(x.elements.next \ "name").text}</name>
        <engineer>{(x.elements.next \ "engineer").text}</engineer>
        <state>rejected</state>
      </feature>

      case _ => ft
    }
  }.toList;

  doc match {
    case <feature-list>{ fts @ _ * }</feature-list> =>
      <feature-list>{ rejectOpenFeatures(fts.elements) }</feature-list>
  }
}

val pp = new PrettyPrinter( 80, 5 );
Console.println(pp.format(rejectOpen(mustangFeatures)));


The observations on the XML querying support in Scala are:

  • The use of for-comprehensions (in rejectOpenFeatures()) adds to the brevity and clarity of the code

  • The XPath methods (in isOpen() .. remember that in Scala ft \ "state" becomes ft.\("state")) allow XQuery style programming.


Another example, which combines both of the above features into a concise gem, is the following from another Burak Emir presentation:

for (val z <- doc("books.xml")\"bookstore"\"book";
    z \ "price" > 30)
yield z \ "title"


Streaming In and Out

Mark showed an example of formatting XML output after summarizing all approved features from the input XML. We can have a similar implementation in Scala as follows:

def findApproved(doc: Node): Node = {

  def findApprovedFeatures(features: Iterator[Node]): List[Node] = {
    for(val ft <- features; (ft \ "state").text.equals("approved"))
      yield ft
    }.toList;

  doc match {
    case <feature-list>{ fts @ _ * }</feature-list> =>
      <feature-list>{ findApprovedFeatures(fts.elements) }</feature-list>
  }
}

Console.println(new PrettyPrinter(80, 5)
      .format(findApproved(XML.loadFile("mustang.xml"))));


Along with formatted output, the snippet above also demonstrates loading of XML from a stream.


On the whole, Scala's support for XML processing is very rich, more so because of the support that it gets from the underlying features of the language. Scala offers powerful abstractions for transformations (scala.xml.transform), parsing, validations, handling XML expressions, XPath projections, XSLT style transformations and XQuery style querying. The Scala XML library is fairly comprehensive - most importantly, it is alive and kicking. Till you have the same support in Java (Dolphin is still at least a year away), enjoy <scala/xml>.

Monday, July 31, 2006

Inside the New ConcurrentMap in Mustang

Tiger has offered a large number of killer goodies for Java developers. Some of them have enjoyed major focus in the community, like generics, the enhanced for-loop, autoboxing, varargs, type-safe enums etc. But none has had as sweeping an impact as the new java.util.concurrent. Thanks to Doug Lea, Java now boasts the best library support for concurrent programming in the industry. Martin Fowler relates an interesting anecdote in his report on OOPSLA 2005:
While I'm on the topic of concurrency I should mention my far too brief chat with Doug Lea. He commented that multi-threaded Java these days far outperforms C, due to the memory management and a garbage collector. If I recall correctly he said "only 12 times faster than C means you haven't started optimizing".

Indeed the concurrency model in Tiger has brought to mainstream programming the implementation of non-blocking algorithms and data structures, based on the very important concept of CAS. For a general introduction to CAS and nonblocking algorithms in Java 5, along with examples and implementations, refer to the Look Ma, no locks! article by Brian Goetz.

Lock Free Data Structures

The most common way to synchronize concurrent access to shared objects is the use of mutual exclusion locks. While Java has so long offered locking at various levels as this synchronization primitive, with Tiger we have non-blocking data structures and algorithms based on the Compare-And-Set primitive, available in all modern processors. CAS is a processor primitive which takes three arguments - the address of a memory location, an expected value and a new value. If the memory location holds the expected value, it is assigned the new value atomically. Unlike lock based approaches, where we may have performance degradation due to delay of the lock-holder thread, lock-free implementations guarantee that of all threads trying to perform some operations on a shared object, at least one will be able to complete within a finite number of steps, irrespective of the other threads' actions.

java.util.concurrent provides ample implementations of lock-free algorithms and data structures in Tiger. All of these have been covered extensively in Brian Goetz's excellent book Java Concurrency In Practice, released at JavaOne this year - go get it if you haven't yet.


I must admit that I am not a big fan of the management of Sun Microsystems, and of the confused state of mind that Schwartz and his folks out there portray to the community. Innovation happens elsewhere - this has never been more true of the way the Java community has been moving. And this is what has kept Sun moving - the vibrant Java community has been the real lifeblood behind Java's undisputed leadership in the enterprise software market today (Ruby community - are you listening?). The entire Java community is still working tirelessly to improve Java as a computing platform. Lots of research is still going on to increase the performance of memory allocation and deallocation in the JVM (see this). Lots of heads are burning out over implementing generational garbage collection, thread-local allocation blocks and escape analysis in Java. Doug Lea is still working on how to make concurrent programming easier for the mere mortals. This, I think, is the main strength of the Java community - any other platform that promises more productivity has to walk (or rail) more miles to come up with something similar.

In this post, I will discuss one such innovation that has been bundled into Mustang. I discovered it only recently while grappling with the Mustang source drop, and thought that this exceptional piece of brilliance deserves a column of its own.

The New ConcurrentMap in Mustang

In Tiger we had ConcurrentHashMap as an implementation of ConcurrentMap. Mustang comes with another variant of the map - the contract ConcurrentNavigableMap and a brilliant implementation in ConcurrentSkipListMap. Have a look at the source code of this beast - you will be thankful that data structures are there to encapsulate the guts and provide easy-to-use interfaces to application developers.

Concurrent programming has never been easy, and lock-free concurrency implementation is definitely not for lesser mortals. We are blessed to have people like Doug Lea taking care of these innards and exposing easy-to-use interfaces to us, the user community. Despite the fact that research on lock-free data structures has been going on for more than a decade, the first efficient and correct lock-free list-based set algorithm (CAS based) that is compatible with lock-free memory management methods came out only in 2002. Lea's implementation of ConcurrentSkipListMap is based on this algorithm, although it uses a slightly different strategy for handling deletion of nodes.

Why SkipList ?

The most common data structure used to implement sorted collections is some form of balanced tree. The current implementation of ConcurrentSkipListMap departs from this route and uses the probabilistic alternative - skip lists. As Bill Pugh says:

Skip lists are a data structure that can be used in place of balanced trees. Skip lists use probabilistic balancing rather than strictly enforced balancing and as a result the algorithms for insertion and deletion in skip lists are much simpler and significantly faster than equivalent algorithms for balanced trees.


The verdict is not as clear-cut as Bill puts it, but the main reason behind using skip lists in the current implementation is that there are no known efficient lock-free insertion and deletion algorithms for search trees (refer to the JavaDoc of the class). The class uses a two-dimensionally linked skip list implementation where the base list nodes (holding key and data) form a separate level from the index nodes.
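
Before diving into the internals, here is a minimal usage snippet, just to get a feel for the new contract - plain Mustang APIs, nothing else assumed:

ConcurrentNavigableMap<Integer, String> features =
    new ConcurrentSkipListMap<Integer, String>();
features.put(29, "Method to find free disk space");
features.put(42, "Zombie references");
features.put(201, "Improve painting");

// keys stay sorted, and the navigation methods come for free
System.out.println(features.firstKey());        // 29
System.out.println(features.ceilingEntry(40));  // 42=Zombie references
System.out.println(features.headMap(100));      // {29=..., 42=...}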

Lock-Free Using CAS Magic

Any non-blocking implementation has a core loop, since the compareAndSet() method relies on the fact that one of the threads trying to access the shared resource will complete. Here is the snippet from Brian Goetz's article (look at the increment() method of the counter):

public class NonblockingCounter {
  private AtomicInteger value = new AtomicInteger();

  public int getValue() {
    return value.get();
  }

  public int increment() {
    int v;
    do {
      v = value.get();
    }
    while (!value.compareAndSet(v, v + 1));
    return v + 1;
  }
}


Similarly, the implementation methods in ConcurrentSkipListMap all have a basic loop in order to ensure a consistent snapshot of the three-node structure (node, predecessor and successor). Repeated traversal is required here because the three-node snapshot may have been rendered inconsistent by other threads, either through deletion of the node itself or through removal of any of its adjacent nodes. This is typical CAS coding and can be found in the implementation methods doPut(), doRemove(), findNode() etc.

Handling Deletion

The original designers of these algorithms for list based sets used mark bits and lazy removal for deletion of nodes. Doug Lea made a clever improvisation here to use a marker node (with a directly CAS'able next pointer) instead, which works faster in a garbage-collected environment. He, however, retains the key technique of marking the next pointer of a deleted node in order to prevent a concurrent insertion. Here's the sequence of actions that take place in a delete (a simplified sketch follows the list):

  1. Locate the node (n)

  2. CAS n's value to null

  3. CAS n's next pointer to point to a marker node

  4. CAS n's predecessor's next pointer over n and the marker

  5. Adjust index nodes and head index level
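
The following is a much simplified sketch of steps 2-4 above, on a singly linked node with CAS'able value and next references - purely illustrative, and nowhere close to the actual JDK source:

import java.util.concurrent.atomic.AtomicReference;

class SketchNode<K> {
  final K key;  // a null key denotes a marker node
  final AtomicReference<Object> value;
  final AtomicReference<SketchNode<K>> next;

  SketchNode(K key, Object value, SketchNode<K> next) {
    this.key = key;
    this.value = new AtomicReference<Object>(value);
    this.next = new AtomicReference<SketchNode<K>>(next);
  }

  // returns false if we lost a race and the caller needs to retry
  static <K> boolean tryDelete(SketchNode<K> pred, SketchNode<K> n) {
    Object v = n.value.get();
    if (v == null || !n.value.compareAndSet(v, null))
      return false;                          // step 2 failed - somebody beat us
    SketchNode<K> succ = n.next.get();
    SketchNode<K> marker = new SketchNode<K>(null, null, succ);
    if (n.next.compareAndSet(succ, marker))  // step 3: fence off concurrent inserts
      pred.next.compareAndSet(n, succ);      // step 4: unlink node and marker together
    return true;
  }
}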


Any failure can lead to either of the following consequences:

  1. Simple retry when the current thread has lost to a race with another competing thread

  2. Some other thread traversing the list hits upon the null value and helps out the marking / unlinking part


The interesting point is that in either case we have progress, which is the basic claim of the CAS based non-blocking approach. Harris and Maged Michael have all the gory details of this technique documented here and here.

Postscript

The code implementing ConcurrentSkipListMap is indeed very complex. Firstly, it deals with a multilevel probabilistic data structure (the skip list), and secondly, it makes that piece concurrent using the lock-free techniques of CAS. But, on the whole, for anyone who enjoys learning data structure implementations, this will definitely be a very good learning experience. The devil is in the details - this has never been more true than of this exquisite piece from Doug Lea!

Monday, July 24, 2006

From Java to Ruby ? Now ? Naah ..

Bruce Tate has recently got his From Java to Ruby out. This is another in the series of publications which profess Ruby as the successor of Java in the enterprise. The book is targeted towards technical managers who can stand by the decision of their enlightened programmers to make the royal switch in the enterprise and take this decision upwards within the organization (without getting fired!). I have not yet read Tate's book, but I thoroughly enjoyed reading his Beyond Java. This entry is the blueprint of my thoughts on the subject - will Java be replaced by Ruby in the enterprise today?

Cedric has already voiced his opinion on this subject in one of his usual opinionated (Rails is also opinionated - right?) posts. I think I have a couple to add to his list ..

Scaling Up with Ruby

One of the major areas of my concern with Ruby going mainstream is the skillset scalability of the enterprise. The programming force, at large, is now baked in the realm of the supposedly (im)pure OO paradigms of Java. Call it the Perils of Java Schools, the lack of appreciation for the elegance of functional programming or whatever - the fact is that the zillions of developers today are used to programming with assignments and iterators, as they are idiomatic in Java and C++. It will take quite an effort to infuse into them Why FP Matters.

Ruby is elegant, Ruby blocks are cool, Ruby has continuations and Ruby offers a coroutine based solution for the same-fringe problem. But, again, there ain't no such thing as a free lunch! You have to develop your workforce to take good care of this elegance in developing enterprise scale applications. The following is an example of the paradigm shift, shamelessly ripped from Bruce Tate's Beyond Java:

The developers in my workforce are used to writing JDBC-style access in Spring using anonymous inner classes:

JdbcTemplate template = new JdbcTemplate(dataSource);
final List names = new LinkedList();

template.query("select user.name from user",
    new RowCallbackHandler() {
      public void processRow(ResultSet rs) throws SQLException {
        names.add(rs.getString(1));
      }
    }
);


Here is a Ruby snippet implementing similar functionality through "blocks" ..

dbh.select_all("select name, category from animal") do |row|
    names << row[0]
end


A real gem - but developers have to get used to this entirely new paradigm. It is not only a syntactic change; it implies a new thought process on the part of the developer. Remember, one of the reasons why Java could smartly rip apart the C++ community was that it was a look-alike language with a cleaner memory model and a closer affiliation to the Internet. At one point in time, we all thought that Smalltalk had an equal chance of gobbling up the C++ programming fraternity. Smalltalk is a much purer OO language, but proved to be too elegant to be adopted en masse.

Martin Fowler and Bruce Tate have been evangelizing Ruby, and DHH has presented us with a masterfully elegant framework (RoR). But we need more resources to scale up - more books, more tutorials, more evangelism on the idioms of Ruby that have gone into the making of RoR.

The Art of Ruby Metaprogramming

Metaprogramming is the second habit of Ruby programmers (possibly after "Ruby blocks"). Many of the problems that we face today due to the lack of formal AOP in Ruby can be addressed by metaprogramming principles. In fact, metaprogramming offers much more "raw" power than AOP, as is very well illustrated by the following method from Rails validation ..



def validates_presence_of(*attr_names)
  configuration = { :message => ActiveRecord::Errors.default_error_messages[:blank], :on => :save }
  configuration.update(attr_names.pop) if attr_names.last.is_a?(Hash)

  # can't use validates_each here, because it cannot cope with nonexistent attributes,
  # while errors.add_on_empty can
  attr_names.each do |attr_name|
    send(validation_method(configuration[:on])) do |record|
      unless configuration[:if] and not evaluate_condition(configuration[:if], record)
        record.errors.add_on_blank(attr_name,configuration[:message])
      end
    end
  end
end



But this also reflects my earlier concern - programmers have to be trained to cope with this kind of semantics in their programming. Many of the metaprogramming techniques have become idioms in Ruby - we need more preaching, professing their use and best practices to the programming community. Otherwise Ruby metaprogramming will remain black magic forever.

Final Thoughts

Rails may be the killer app, metaprogramming may be the killer technique, but we all need to be more pragmatic about Ruby's chances in the enterprise. There are performance concerns for Rails, and the model that it adopts for ORM is divergent from the one we use in Java - definitely not one that can back up a solid object oriented domain model. It is debatable whether this will be a better fit for enterprise applications - but the community needs to tune the framework constantly if it is to compete with the age-old competencies of Java. With Java 5, we have a JVM which has been tuned for the last 10 years, we have killer libraries for concurrency (I hear they are capable of competing with raw C!) and we have oodles of goodies to make Java programming compete with the best of breed performant systems. We have Mustang and Dolphin ready to make their impact on the enterprise world. It is definitely worth watching whether the elegance of Ruby can scale up to the realities and give Sun (and the entire Java community) a run for their money.

Sunday, July 16, 2006

DAOs on Steroids - Fun Unlimited with Generic DAOs in Spring 2.0

I have already blogged a lot on DAOs in general and generic DAOs in particular (see here, here and here). All those entries are enough to prove that generic DAOs provide better engineering than boilerplate DAOs, resulting in substantially less code. This entry rounds up all of my thoughts on generic DAOs and how their deployment with the Spring 2.0 container results in a seamless injection into the domain model of an application. Without much ado, buckle up for the fun unlimited in the roller coaster ride of the DAO world.

Generic DAO Abstraction

At the risk of repetition, let me recall the generic abstraction for a DAO - the DAO is parameterized on the class DomainBase, the base class for all domain objects. In case you want to use DTOs, the parameterization can be done on DTOBase as well. For now, let us assume that we will use the rich domain model without the behaviourless DTOs.

public abstract class DAOBase<T extends DomainBase> {
  // The underlying implementation.
  private DAOImplBase<T> daoImpl;
  ...
}


The above abstraction uses the pimpl idiom and the Bridge pattern to decouple the abstraction from the implementation. The implementation can be based on JDBC, Hibernate or JPA and can be switched flexibly, without an iota of impact on the client codebase. Cool!

The concrete DAO extends the abstract DAO:

public class EmployeeDAO<T extends DomainBase> extends DAOBase<T> {
  ...
}


The concrete DAO can be kept generic as well, in order to allow the user to instantiate the generic DAO with multiple domain abstractions. If you want to keep it simple, you can make the concrete DAO a non-generic implementation as well ..

public class EmployeeDAO extends DAOBase<Employee> {
  ...
}


The Bridge

The abstraction DAOBase<T> delegates all DAO methods to the implementation, which is concretized based on the implementation platform - Hibernate, JPA, or vanilla JDBC.

public abstract class DAOBase<T extends DomainBase> {
  ...
  // sample finder
  public final <Context> List<T> find(
        Context ctx,
        ICriteria cri,
        Class<T> clazz) {

    return daoImpl.read(ctx,
      getTableName(),
      AndCriteria.getInstance(
      getJoinCondition(),
      cri),
      clazz);
  }
  ...
}


The implementation leg of the bridge gives concrete implementation for every specific platform ..

// JDBC based implementation
public class DAOJDBCImpl<T extends DomainBase>
    extends DAOImplBase<T> {
  ...

  @Override
  public <Context> List<T> find(Context ctx,
        String tableName,
        ICriteria cri,
        Class<T> clazz) {

    try {
      Connection conn = (Connection) ctx;

      SelectStatement stmt = new SelectStatement()
          .setFromClause(tableName)
          .setSelectClause(" * ")
          .setWhereClause(cri);

      List<T> result =
        QueryBuilderUtils.query(conn, stmt.toSelectString(), clazz, 0);
      return result;
    } catch (SQLException e) {
      throw new DataAccessException(e);
    }
  }
  ...
}


Similarly for JPA based implementation ..

// JPA based implementation
public class DAOJPAImpl<T extends DomainBase>
      extends DAOImplBase<T> {
  ...
}


Concrete DAOs

Now that the base abstraction DAOBase provides the contract and the implementation hierarchy provides generic implementations of all the DAO methods, the concrete DAOs only have to provide the table-specific information that will be used by the typed interfaces at runtime.

Here's a minimalist implementation ..



public class EmployeeDAO<T extends DomainBase>
      extends DAOBase<T> {

  public enum ColumnNames {
    // Enum for column names.
  }

  /**
   * Returns a list of the column names.
   * @return list of column names.
   */
  protected List<String> getColumnNames() {
    ...
  }

  /**
   * Subclasses must override and provide the TABLE_NAME
   * that the bean is associated with.
   *
   * @return the table name.
   */
  public String getTableName() {
    return "EMPLOYEE";
  }

  /**
   * {@inheritDoc}.
   */
  protected ICriteria getPrimaryKeyWhereClause(T employee) {
    ...
  }
}



Deploy in Spring

The creation of DAOs can be controlled through the Spring IoC - all DAOs will be singleton beans, lazily loaded ..

<bean id="empdao"
  class="org.dg.gen.EmployeeDAO"
  lazy-init="true"
</bean>


Quite straightforward - uh!

In case of generic concrete DAOs, the EmployeeDAO<Employee> will have to be instantiated through a static factory method ..

public class EmployeeDAO<T extends DomainBase> extends DAOBase<T> {
  ...
  public static EmployeeDAO<Employee> makeDAO() {
    return new EmployeeDAO<Employee>();
  }
}


No problem .. add the factory method in the configuration, and Spring's factory-method instantiation magic takes care of the rest.

<bean id="empdao"
  class="org.dg.gen.EmployeeDAO"
  lazy-init="true"
  factory-method="makeDAO">
</bean>


The DAO needs to be wired with the domain object, since the rich domain object may need to query the database using the DAO. Spring 2.0 to the rescue - Spruce Up Your Domain Model and inject the DAO into it through 2.0 magic ..

@Configurable
public class Employee extends DomainBase {

  private EmployeeDAO<Employee> dao;

  public void setDao(EmployeeDAO<Employee> dao) {
    this.dao = dao;
  }
  ...
}


Remember the Employee domain object will not be instantiated by Spring - yet the setter injection works, courtesy the @Configurable annotation. And of course the following addition in the configuration ..

<aop:spring-configured/>
<bean class="org.dg.gen.Employee"
  singleton="false">
  <property name="dao"><ref bean="empdao"/></property>
</bean>


This is Spring 2.0 dessert course - more elaborated here.

Excitement with AOP Introductions

The Spring AOP Introductions magic helps you add functionality to an existing DAO by wrapping it in a proxy and defining new interfaces to be implemented. This article describes how introductions are used to add a number of custom finder methods to DAO implementations specific to each domain object. This is a real treat that Spring AOP offers to customize your DAOs individually on top of generic abstractions. The custom functionality can be implemented as separate classes / interfaces and injected specifically into selected DAOs as you need them. The best part is that all of this machinery works non-invasively - your individual concrete DAOs can still be generated through your MOJOs (as Maven plugins), yet you can add the specific functionality of custom interfaces injected through the magic of Spring AOP.

As the DeveloperWorks paper suggests, here is all you have to do to integrate this AOP machinery into your application's configuration file (a hypothetical sketch follows the list):


  1. Define a bean for a custom org.springframework.aop.IntroductionAdvisor for handling the additional methods that you would like to introduce to your DAO

  2. Define a bean for the target of interception, which the proxy will wrap. Make it abstract to enable reuse of the definition in specific DAOs

  3. Define the proxy of the class org.springframework.aop.framework.ProxyFactoryBean, which will wrap the target of above step

  4. Finally add the bean for the specific DAO along with the proxy interfaces that it needs to implement
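
Putting the four steps together, the configuration would look roughly like the following - a hypothetical sketch along the lines of the DeveloperWorks paper, where all the application level names (FinderIntroductionAdvisor, DAOJPAImpl, IRestaurantDao) are purely illustrative:

<bean id="finderIntroductionAdvisor"
  class="org.dg.gen.FinderIntroductionAdvisor"/>

<bean id="abstractDaoTarget"
  class="org.dg.gen.DAOJPAImpl"
  abstract="true"/>

<bean id="abstractDao"
  class="org.springframework.aop.framework.ProxyFactoryBean"
  abstract="true">
  <property name="interceptorNames">
    <list>
      <value>finderIntroductionAdvisor</value>
    </list>
  </property>
</bean>

<bean id="restaurantDao" parent="abstractDao">
  <property name="proxyInterfaces"
    value="org.dg.gen.IRestaurantDao"/>
  <property name="target">
    <bean parent="abstractDaoTarget"/>
  </property>
</bean>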



For the full and final elaboration, have a look at the excellent treatment of this subject in the paper mentioned above.

Monday, July 10, 2006

Spring Web Flow - A Declarative Web Controller Architecture

In the war of web application frameworks, the buzzword (or buzzphrase) is state management for conversational applications. Soon after JBoss released Seam, I blogged about its state management capabilities using contextual components. At that point in time I had not had a look at the corresponding Spring offering, and Keith was quick to point me to Spring Web Flow, which also offers similar functionality for modeling conversational Web applications.

I must admit that I am an ardent Spring fan, and not just because it is the most popular IoC container out there. The feature of the Spring application framework that appeals to me most is the non-intrusive / non-invasive approach which it evangelizes, encouraging developers to declare injection points externally on pre-built components, rather than forcing the framework's own abstractions on them. When I started digging into Spring Web Flow, I was not surprised to find that the same theme has been applied in this controller architecture as well. In fact, it is very well possible to implement a Spring Web Flow application without any Spring Web Flow specific Java code - it can be all POJOs, injected via DI into user defined scoped flows and totally agnostic of the web stack above.

Flow is the King

When I started fiddling with Spring Web Flow, the first thing that struck me was the level of abstraction at which a Flow has been modeled. In a controller architecture, a flow has to be the central abstraction, with all other functionalities revolving around it. Keith has expressed it so well in his blog:

Since a Flow is a first-class object (effectively a mini application module), you can interact with it, message it, store it, observe it, intercept it, etc. This is very much a contrast to traditional controller frameworks where there is simply no first-class concept of a "Flow"; rather, there are only a bunch of independent actions (or pages) whose executions are driven by the browser.


Technology Agnostic Abstractions

The abstractions of Spring Web Flow are not coupled with specific protocols like Http Servlet APIs. Instead they have an extra level of indirection in the abstraction layer, which makes integration with other technologies (like Struts, JSF and Tapestry) quite straightforward.

public interface FlowExecution {
  public ViewDescriptor start(Event startingEvent);
  public ViewDescriptor signalEvent(Event event);
}

public interface Action {
  public Event execute(RequestContext context);
}


In the above interfaces, ViewDescriptor, Event and RequestContext are top level abstractions which can be subclassed appropriately to provide specific implementations. This is a big deviation from JBoss Seam's philosophy, which has tied the latter to EJB 3.0 and JSF - no wonder Gavin's initial intuition was that the two models simply don't make sense together.

Continuations based Flow Control

SWF offers continuations based flow control for conversational applications. This implies uniform handling of browser navigation buttons, simultaneous multiple conversations and cool, transparent, Strategy based flow execution storage - all out of the box! Again, what I like about this continuation support is that Spring provides the correct level of framework abstraction. The purpose of Spring is not to provide a full-blown continuation framework (like Rife, which has a different purpose in life) - SWF provides continuations for DSL based flow executions, of which a snapshot can be saved and restored at any point. This is a clear decision that avoids over-engineering, or what Bruce Eckel would call Framework-itis.

OTOH, conversational state is managed in Seam using stateful session beans and what Gavin calls Subversion of Control (Bijection = Injection + Outjection). It is an alternative approach to what SWF offers. However, I find the SWF model more intuitive in addressing the crosscutting concern that conversations present.

Other Cool Features

Apart from the above specials, you have lots of cool stuff up for grabs in SWF. Here are a few samplers which caught my eye:


  • Conditional Transitions as part of the flow control DSL, along with a pluggable TransitionCriteria abstraction.

  • Solid support for wizard style flow cases.

  • Flow-By-Contract - support for enhanced flow lifecycle callbacks through an Observer based listener interface. You can drop in pre-conditions and post-conditions with invariants for the flow to pass through that state.

  • Reusable sub flows as a separate abstraction.

  • Bean method binding capability while modeling your flow action.

  • Transactional Web Flows - no more hell with tokens to prevent duplicate form submission. And the best part is that it works transparently for flow conversation snapshots as well. All you have to do is turn on the transactional property in the web flow declaration in your DSL.

  • Strategy based state exception handlers which you can plug into your web flow declaration.



The distribution of SWF comes with a rich set of examples, which amply demonstrate all of the above features. The sellItem example is the richest one and has all the killer features demonstrated. I fiddled with the distribution for a week (not as part of my regular day job, though) and have come across the above cool stuff. On the whole it looks to be a regular Spring delivery - framework based, patterns enriched, easy to use and, above all, non-invasive (to me, the coolest aspect of any Spring component).
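
To give a flavor of the flow definition DSL, here is a pared-down fragment in the style of the sellItem sample - treat it as a sketch, since the state and bean names are illustrative:

<flow start-state="enterPriceAndItemCount">

  <view-state id="enterPriceAndItemCount" view="priceAndItemCountForm">
    <transition on="submit" to="requiresShipping"/>
  </view-state>

  <!-- a conditional transition through a decision state -->
  <decision-state id="requiresShipping">
    <if test="${flowScope.sale.shipping}"
      then="enterShippingDetails"
      else="processSale"/>
  </decision-state>

  <action-state id="processSale">
    <action bean="saleProcessor" method="process"/>
    <transition on="success" to="showCostOverview"/>
  </action-state>

  <end-state id="showCostOverview" view="costOverview"/>

</flow>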

Sunday, July 02, 2006

Spring 2.0 AOP - Spruce Up Your Domain Model

I just started playing around with Spring 2.0 and poked at some of the new features over the weekend. The coolest of them looked to be the ability to attach post-instantiation processors to beans which have NOT been instantiated by the Spring container. This means a lot to me, thinking about how this feature can add value to the current scheme of things in a POJO based, Spring configured application.

Have a look at the following domain class from our current application:

// models a security trade
public class Trade {
  // state
  // getters and setters
  // domain behavior
}

A typical domain object is instantiated in the application either from the ORM layer (read Hibernate, through persistence services) or by the user using factories. It is never instantiated by the Spring container. Hence there is no way we can use Spring DI to inject services into the domain model. Any sort of service injection that is done will be through hardwired code:

public class Trade {
  // ...
  public BigDecimal calculateAccruedInterest(...) {
    BigDecimal interest =
      InterestRateDao.getInstance().find(...);
    // ...
  }
  // ...
}

The above domain class now has a hardwired dependency on the class InterestRateDao, which brings in all sorts of unwanted side-effects:


  1. The domain layer is no longer persistence agnostic

  2. The domain classes are no longer unit-testable without the service layer

  3. Business logic proliferates into the controllers



Let me explain the third point a bit ..

Since I cannot have transparent DAO injection in my domain model, I cannot have the calculateAccruedInterest() method with my domain object. The inevitable impact will be to move the logic down to the controller, whose lifecycle can be configured using Spring. Now I have a controller class which computes the accrued interest of the Trade once the object has been instantiated by the persistence layer. Result? My domain logic has started infiltrating into the controller layer, which ideally should be a facade only and strictly a *thin* glue between the domain layer and the presentation layer.

// Controller class for trade entry use case
public class TradeService {
  // ...
  public void enterTrade(...) {
    Trade tr = TradeFactory.create(...);
    // accrued interest needs to be calculated only for bond trades
    if (tr.isBondTrade()) {
      tr.setAccruedInterest(
        AccruedInterestCalculator.getInstance()
        .calculateAccruedInterest(tr));
    }
  }
  // ...
}

Design Smell !! The domain model becomes anemic and the controller layer becomes fleshy.

Enter Spring 2.0

The new AOP extensions in Spring 2.0 allow dependency injection of any object even if it has been created outside the control of the container. Our domain objects fit nicely in this category. Service objects can now be injected into domain objects so that the domain model can be enriched with domain behavior ensuring proper separation of concerns across the layers of the application architecture. The enriched domain behavior can now interact with the domain state in a more object-oriented way than the erstwhile anemic model.

Spring 2.0 offers annotation driven aspects as the recommended approach to this kind of dependency injection into the domain model. Let us see how the Trade class changes:

@Configurable("trade")
public class Trade {
  // ...
  public BigDecimal calculateAccruedInterest(...) {
    BigDecimal interest =
      dao.find(...);
    // ...
  }
  // ...

  // injected DAO
  private InterestRateDao dao;
  public void setDao(InterestRateDao dao) {
    this.dao = dao;
  }
}

And the usual stuff in the XML for the application context:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
...>

  <aop:spring-configured/>

  <bean id="interestRateDao"
    class="com.xxx.dao.InterestRateDaoImpl"/>

  <bean id="trade"
    class="com.xxx.Trade"
    lazy-init="true">
    <property name="dao" ref="interestRateDao" />
  </bean>
</beans>


Takeaways


  • The anemic domain model is history. Fine grained DI using AOP helps the domain model regain smart behavior.

  • No compromise on unit testability. For the above annotation to kick in, the annotated types must be woven with the AspectJ weaver, either through a build-time ant or Maven task (a sample ant sketch follows this list) or through load-time weaving. Do away with the AspectJ weaving, replace the property references with mock objects in the XML, and fire away to test-drive your domain model.

  • All crosscutting infrastructure services can be injected transparently into the domain model from a single point of contact.

  • In case you are allergic to annotations (can't believe many people are!), you may also use the AnnotationBeanConfigurerAspect in spring-aspects.jar (distributed with Spring) to implement the same behavior.

  • Service Objects, Factories and Repositories are typical examples of artifacts that can be injected into the domain model.
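
As a sample of the build-time option mentioned above, an ant fragment for weaving with the AspectJ compiler might look like the following sketch - the jar locations are illustrative:

<taskdef
  resource="org/aspectj/tools/ant/taskdefs/aspectjTaskdefs.properties"
  classpath="lib/aspectjtools.jar"/>

<iajc destdir="build/classes" source="1.5">
  <sourceroots>
    <pathelement location="src"/>
  </sourceroots>
  <!-- weave in the bean configuration aspect shipped with Spring -->
  <aspectpath>
    <pathelement location="lib/spring-aspects.jar"/>
  </aspectpath>
  <classpath>
    <pathelement location="lib/spring.jar"/>
    <pathelement location="lib/aspectjrt.jar"/>
  </classpath>
</iajc>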

Saturday, June 24, 2006

Variance in C# Generics to Follow Scala

This paper by Burak Emir, Andrew J. Kennedy, Claudio Russo and Dachuan Yu, slated for ECOOP 2006, has proposed modeling variance in C# generics along lines very similar to what Scala does. The proposed notation is based on definition-site declarations (as in Scala) and uses +/- to model covariance / contravariance, also as in Scala.

This proposed extension will make the following declarations possible:

// Covariant parameters used as result types
interface IEnumerator<+T> { T Current { get; } }

// Covariant parameters used in covariant result types
interface IEnumerable<+T> { IEnumerator<T> GetEnumerator(); }

// Contravariant parameters used as argument types
interface IComparer<-T> { int Compare(T x, T y); }


Here is the corresponding syntax in Scala using a covariant type parameter:

class Array[+a] {
  def apply(index: int): a
  ...
}


In order to ensure type-safety through static type checking, the new extension to variance annotations in C# allows covariant parameters only in producer positions in signatures and contravariant parameters only in consumer positions. These restrictions are, however, overcome through the use of type constraints:

class List<+T> {
  ...
  List<U> Append<U>(U other) where T : U { ... }
  List<U> Append<U>(List<U> other) where T : U { ... }
  ...
}


Quite similarly we have the same solution in Scala by using a polymorphic method with a lower type parameter bound:

class Stack[+a] {
  def push[b >: a](x: b): Stack[b]
    = new NonEmptyStack(x, this)
  ...
}


As a sidenote, it may be mentioned that Java generics address the variance problem by employing wildcards with use-site declarations (as opposed to the definition-site declarations of C# and Scala).
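
For comparison, this is how the same producer / consumer distinction shows up with Java's use-site wildcards:

import java.util.*;

// covariant use-site view: a producer you can only read from
List<? extends Number> producer = Arrays.asList(1, 2, 3);
Number n = producer.get(0);   // reading as Number is safe
// producer.add(4);           // does not compile - no writing allowed

// contravariant use-site view: a consumer you can only write to
List<? super Integer> consumer = new ArrayList<Number>();
consumer.add(42);             // writing an Integer is safe
Object o = consumer.get(0);   // reading yields only Object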

Friday, June 16, 2006

Does Ruby need AOP ?

With its strong metaprogramming facilities providing interceptors, hotswapping, mixins and hooks for injecting custom code, it seems that Ruby developers can very well do without a separate vehicle for addressing crosscutting concerns. Bruce Tate, in his book Beyond Java, has opined that for Ruby developers, AOP is not quite as urgent, because you've already got robust tools to deal with these kinds of concerns ... Another Ruby champion, David Heinemeier Hansson of Rails fame, has expressed a similar opinion in the same book:
A standardized AOP framework has never really taken off in Ruby because the language itself already supports most of the desirable functionality of AOP.


MetaProgramming - A Substitute for AOP ?

Never! Metaprogramming can at most be the vehicle for implementing AOP. The most important aspect (no pun intended) of AOP is the semantics it enforces for the very problems that metaprogramming is supposed to solve. AOP is NOT about inserting code snippets at various places; it is about the semantics of abstracting the crosscutting behaviors of an application. The man who brought AOP to the world (Gregor Kiczales) was himself an expert in metaprogramming before, and he has the following to say on the semantics of AOP:
Another advantage of a direct semantics for AOP is that it helps with abstraction. When we look at a complex object model, it's simpler to think of it as describing the behavior of the objects, rather than describing transformations on method tables that implement the behavior of objects. The same is true of AOP—it's simpler to think in terms of aspects of an object's behavior, rather than transformations on classes, which transform method tables, which implement the behavior of objects.

So, AOP is all about semantics, abstraction and behavior of the application and is at a much higher level of implementation than the raw power of metaprogramming.

Metaprogramming in Ruby - Too Much Power ?

Metaprogramming operates at a much lower level and empowers developers with raw power that may be too potent for most applications. Let us look at the following snippet of Rails code (adapted from the Beyond Java book):

base.class_eval do
    alias_method :render_without_layout, :render
    alias_method :render, :render_with_layout
end


The code snippet displays the metaprogramming prowess of Rails, which hot-swaps the render behavior of the base class with improved functionality. But, as discussed above, this is too low a level of abstraction. Using AOP we would have had the semantics first - a neat aspect consisting of advices operating on pointcuts. No wonder Ruby has decided to introduce AOP constructs in Ruby 2.

AOP - Only for Statically Typed Languages ?

This is a myth. The real truth is that dynamically typed languages have not yet reached a sufficient level of adoption in the community to feel the need for AOP. Hence Ruby programmers are quite content with the extreme power of metaprogramming features to implement crosscutting concerns in their applications. Once they reach that maturity of acceptance and we start seeing millions of lines of code written in big enterprise applications, they are sure to feel the need for more abstraction power in addressing crosscutting concerns. Because of large scale adoption, we have rich tooling support for AOP in Java - when the same happens for Ruby and other dynamic languages, we will definitely see the emergence of tools which allow us to program in aspects.

TailPiece

Irrespective of what Gavin King thinks about AOP (he recently snubbed AOP as a totally overhyped, failed technology), I am a big fan of aspect oriented programming - recently this technology has delivered a world of good to our team in a Java based enterprise application. But that's for another day, another post ..

Thursday, June 08, 2006

Stateful Thinking with JBoss-Seam

A couple of days back I listened to the great interview of Gavin King at JavaPosse. A lot of heat has been generated over this interview on TSS and elsewhere. In the interview Gavin talks about Hibernate, EJB3, JSF and the latest from JBoss, Seam. He lambasted stateless architectures and dismissed the thought that stateful session beans do not scale as an ancient canard in the Java community. He made some insightful comments regarding the architecture of Seam and how the stateful and contextual components of Seam manage application state for POJOs.

The main reason why the above interview has raised so many eyebrows is that it has hit the nail right on the head, attacking the most pervasive architecture in today's Java EE space. The Rod Johnson led Spring community has successfully evangelized the stateless architecture, backed by the most mature IoC container, since the dark ages of EJB 1 and EJB 2. In his landmark book, Rod mentions:
Applications that don't hold server-side state are usually the most scalable. In a web application, for example, if we need to hold only minimal user state, we might be able to hold it in cookies, avoiding any need for HTTP session state replication.

Give me a break! Can you have a meaningful enterprise application without any requirement for holding server-side state? What about the conversational applications that thrive on long running business processes, necessitating application state management in multiple contexts? I guess the Extended Persistence Context (EPC) was designed in EJB specifically for this purpose. I personally have been an admirer of the Spring technology and the simple yet powerful programming model that it has evangelized, leading to a significant impact on developer productivity. But Gavin King definitely has a point which all of us need to ponder over - one size doesn't fit all. As I mentioned in my last blog, the developer community needs to make a proper judgement before deciding whether to eat the elephant or not.

HTTPSession or SFSB ?

This is one of the questions that has been raging across the floors and blogs of all Java EE developers. The Spring community thinks SFSBs are a sin - maintain state (if required) in the HttpSession and use session state replication for scalability. They suggest minimal state on the server side using fine grained session objects, which can be easily replicated across servers. But, as I mentioned above, how do we handle optimistic transactions, long sessions, conversational applications? Gavin is justified when he FUDs Spring on this ..
Huh? You don't have SFSBs (or anything equivalent). How could you implement EPC (optimistic transaction) support without difficulty? If you could do this without difficulty, why did you not do it years ago, and save Hibernate users from years of pain with LazyInitializationException?!

Enter Seam - a grand unification of the component models of JSF and EJB3. The mission: deprecate the stateless architecture that we have learnt to adopt so naturally. Seam adopts SFSBs as the main container of managed application state - a theory Gavin King has believed in since 2004, when he was ruminating on the notion of "application transactions" in a hotel room in Thailand.

Why does Gavin believe that HttpSession is NOT the place to hold application state and that doing so does NOT scale?

In order to answer this, let us first assume that any meaningful enterprise application needs to maintain server-side state. In a clustered environment, replicating state to transparently manage failover is expensive. Typically, architects of the SFSBs-are-unscalable school adopt one of the following solutions (based on Gavin's presentation on Seam at JavaOne):

  • Maintain state in the database. This is very expensive, since it involves heavy I/O with the slowest tier of the stack. Inevitably they end up with a second level cache that needs to be kept transactionally consistent between the database and every node on the cluster – even more expensive!

  • Maintain state in HttpSession as JavaBeans. This has two problems -


    • The HttpSession contract has no dirty-checking - session replication means a bulk copy of the entire coarse grained object across servers. Though implementations like JBoss HttpSession and caching solutions like Tangosol Coherence offer attribute level buddy replication, that is definitely not the standard. Hence you need to re-set the attribute each time you change the bean state (see the sketch after this list), which is extremely bug-prone and difficult to test.

    • A JavaBean is not transactional - hence you need an indirection through SLSBs over the session-stored JavaBean for implementing business logic.
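
To see the first problem in code, here is a minimal sketch (Cart and Item are hypothetical application classes) of the re-set idiom that the HttpSession contract forces upon you:

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

// Cart and Item are hypothetical application classes.
void addToCart(HttpServletRequest request, Item item) {
    HttpSession session = request.getSession();
    Cart cart = (Cart) session.getAttribute("cart");
    cart.addItem(item);                  // this mutation alone is invisible to the container
    session.setAttribute("cart", cart);  // must re-set to trigger replication - forget this
                                         // and the change silently stays on one node
}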



OTOH, SFSBs are managed by the container, provide complete transaction semantics, and the JBoss SFSB, being implemented on top of JBossCache, provides attribute level replication. Hence stateful session beans provide much richer semantics than dumb JavaBeans. Cool stuff! An application involving long transactions needs to maintain conversational state in multiple contexts - Seam scaffolds all this state management under the hood of stateful session beans exposed as contextual components. Hence what we get is an efficient implementation of EPC. Of course, in order to synchronize the lifecycle of the SFSBs with session events, we need to keep a ref/handle/proxy of the bean in the HttpSession. Even the new session scoped beans introduced in Spring 2.0 will not be able to offer as efficient an implementation, since HttpSession is not a suitable place to keep this "big" information and suffers the usual clusterability problems. A sketch of what such a component might look like follows.
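
To make this concrete, here is a minimal sketch of a Seam-style conversational SFSB - the component, interface and entity names (Checkout, CheckoutAction, Order, Customer, Product) are hypothetical, not taken from the Seam distribution:

import javax.ejb.Remove;
import javax.ejb.Stateful;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.persistence.PersistenceContextType;
import org.jboss.seam.annotations.Begin;
import org.jboss.seam.annotations.End;
import org.jboss.seam.annotations.Name;

interface Checkout {                    // hypothetical local business interface
    void startOrder(Long customerId);
    void addItem(Long productId);
    void confirm();
}

@Stateful
@Name("checkout")                       // registers the SFSB as a Seam contextual component
public class CheckoutAction implements Checkout {

    // The extended persistence context spans the whole conversation, keeping
    // entities managed across requests - no more LazyInitializationException.
    @PersistenceContext(type = PersistenceContextType.EXTENDED)
    private EntityManager em;

    private Order order;                // conversational state lives in the SFSB itself

    @Begin                              // promotes the component into a long-running conversation
    public void startOrder(Long customerId) {
        order = new Order(em.find(Customer.class, customerId));
    }

    public void addItem(Long productId) {
        order.addLine(em.find(Product.class, productId));
    }

    @End                                // ends the conversation ..
    @Remove                             // .. and destroys the stateful bean
    public void confirm() {
        em.persist(order);
    }
}

Contrast this with the HttpSession idiom sketched earlier - the conversational state (order) lives inside the bean, the container handles replication and transactions, and the persistence context spans the whole conversation.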

Lessons Learnt

Stateful session beans are not as black as they are painted. In fact, they may be the best bet for storing application state, and Seam exploits their capabilities to the fullest, providing a very elegant framework for modeling conversational applications. So, in keeping with my last post, I would like to end this musing with the belief that we developers should not be carried away by the hype that open source Java has created. The developer community should always try to embrace the standards, critique the standards and make the standards better, instead of trading them for an implementation.

Sunday, June 04, 2006

Spring, JBoss and The Elephant

Interface21 and BEA recently announced the release of Pitchfork, an EJB3 implementation within the WebLogic container built using Spring. It is actually more than an implementation of the EJB3 specification, providing the additional capabilities offered by Spring, viz. full use of AOP and AspectJ integration, advanced DI capabilities etc. Every EJB you build can use all Spring backed services along with JSR 250 annotations and EJB style interception. A sketch of the resulting programming model follows.
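
Just to give a flavor of that programming model, here is a minimal sketch (all names hypothetical) of a session bean combining JSR 250 annotations with EJB3-style interception - the kind of POJO a Spring-backed container like Pitchfork is meant to host:

import javax.annotation.PostConstruct;
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.interceptor.AroundInvoke;
import javax.interceptor.InvocationContext;
import javax.sql.DataSource;

@Stateless
public class QuoteService {

    @Resource                          // JSR 250 injection, satisfied by the container
    private DataSource dataSource;

    @PostConstruct                     // JSR 250 lifecycle callback
    public void init() {
        // warm up caches, validate configuration ..
    }

    @AroundInvoke                      // EJB3 style interception around business methods
    public Object audit(InvocationContext ctx) throws Exception {
        System.out.println("invoking " + ctx.getMethod().getName());
        return ctx.proceed();
    }

    public double quote(String symbol) {
        return 42.0;                   // dummy business logic
    }
}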

This release has sparked off a war of words between the Spring and JBoss communities. The trail at Raible's blog is an interesting read, providing in-depth analysis and invaluable judgement for architects and developers building up their Java EE stack. At the same time it opens up a new can of worms, enough to leave the entire Java EE developer community thoroughly confused. After struggling with deployment descriptor hell and the infamous programming model of EJB2, I was enlightened by Rod Johnson's landmark book, which instantly indoctrinated me into the Spring world of POJOs, Dependency Injection and IoC containers. I had always felt that Spring + Hibernate made my day. But some of the recent developments in the Java open source world, like Spring's tie-up with BEA/WLS, have pitchforked me (no pun intended!) into a world of uncertainty. Being a Java developer at heart (of course my CEO thinks otherwise and does not give me a raise for architecting a good piece of Spring bean :-(), I can't help but seek replies from the elite community regarding the following questions:

Question 1

Regarding EJB 3, Bruce Tate once mentioned "Don't make me eat the elephant again". We all know EJB is a standard, and being compliant with the standard is like programming to the interface. I don't like to get locked into an implementation. EJB 3 is designed for extension - as Gavin has rightly mentioned in the same trail, we should strive to provide add-ons via the extension points provided in the EJB specification and offer them portably on all application servers. JBoss is pledging an open source extension to EJB3 that adds pointcuts, which would spruce up EJB 3's anemic dependency injection functionality. As of this day, JBoss offers an Embeddable EJB3 Container with which you can deploy applications in Tomcat. Next time I offer my Java EE stack to a client, should I go for the most trusted IoC container (a.k.a. Spring) and run the risk of locking myself to a concrete implementation, OR try to eat the elephant again?

Question 2

Is there anything under the hood in Spring's tie-up with BEA? With the open source community running amok, BEA has started losing ground; they have lost their innovation trail and have come to the hard realization that they need Spring in their arms to beat their competitors' time to market. But what is the value proposition of Spring in this entire deal - are they trying to trade in their quasi-EJB3 container (quoting Gavin King from the comments trail of Matt Raible's blog entry) for the standards based implementations used by the huge customer base of BEA and WLS?

Parting Thoughts

The deficiencies of the current EJB3 specification and implementations have been documented in many burning blogs and articles. OTOH, all is not green in the Spring world either. Spring does not have SFSBs - hence it cannot provide an efficient implementation of EPC; modeling conversations in Spring is also broken (I have not yet looked at Spring 2.0); and above all you need to deal with reams of XML for whatever you want to do. Gavin King, desperately trying to defend his stake in the EJB3 specification, has branded AOP an overhyped and failed technology - possibly all for the sake of pushing the JBoss camp. Believe it or not, the open source world of the Java community is on fire. They have realized that all innovation happens here, but the technology needs to carry the endorsement of Sun to reap the commercial benefits. As Peter Thomas has rightly pointed out in his blog, the open source landscape is getting increasingly commercialized nowadays. The developer community needs to make a proper judgement before deciding whether to eat the elephant or not.

Saturday, May 27, 2006

Functional Programming Under the Hood - Developers Call it LINQ

The May CTP of C# 3.0 introduces LINQ, among other additions to the language like implicitly typed local variables, extension methods, collection and object initializers etc. All of these features have been extensively critiqued here, here and here. But undoubtedly the features over which most blogs have burnt are lambda expressions and Language INtegrated Query (LINQ). Lambda expressions form the core of LINQ in C# and bring back memories of functional programming of the Lisp genre. For the current generation, afflicted with the perils of Java schools, it will definitely require a change in mindset and a fresh look at the old pages of Paul Graham to appreciate the underpinnings of the design of LINQ in the new releases of C# 3.0 and Visual Basic 9. Think of collections as monads, query operators as monad primitives and queries as monad comprehensions, and what you get in LINQ is today's incarnation of Lisp, Haskell or ML. As Erik Meijer has correctly mentioned,
Functional programming has finally reached the masses, except that it is called Visual Basic instead of Lisp, ML, or Haskell.


Code-as-Data: Unveiling Lambdas in C# 3.0

Like Lisp, the lambdas of C# 3.0 support the code-as-data paradigm. However, in Lisp or Scheme it is up to the developer to decide whether an expression is code or data, through the application of quote and quasiquote. The C# 3.0 and VB 9 compilers automatically decide whether a lambda expression is code or data, depending on the static type of the context in which it occurs.

Expression<Func<Customer,bool>> predicate = c => c.City == "London";  // data: captured as an expression tree
Func<Customer,bool> d = predicate.Compile();                          // code: compiled down to IL


In the first statement, Expression<T> is a distinguished type which preserves the lambda in the form of an expression tree instead of generating a traditional IL-based method body. Thus the lambda is data instead of code - Lispers will be reminded of quoted expressions achieving the same effect. In Lisp you use eval to turn the data into something executable - C# 3.0 offers Compile() over the expression tree, as in the second line above, which makes the compiler emit the IL.

The Grand Unification

The primary goal of LINQ is to unify programming against relational data, objects and XML. Typically in today's applications developers employ three disparate models - SQL for relational database programming; XQuery, XSLT etc. for XML processing; and OOP for the business logic. The LINQ framework presents a unified model, using the technology of functional programming under the hood and exploiting the algebraic nature of collections and operations on collections.

The design of LINQ consists of generic operations on collections, abstracted as a base pattern of query operators, with domain specific extensions for XML (XLINQ) and relational database programming (DLINQ). Some experts have expressed concern that LINQ, with its efforts towards an ORM implementation, may once again prove to be a bloat in the language. I have a completely different opinion. I have worked on enterprise scale Java EE projects with a relational database at the backend. I have personally burnt my head over performance issues in these applications and have come to the hard realization that you need to write performant SQL queries to get good transaction throughput. And typically developers write these SQL queries as untyped string constants within the Java codebase. Popular ORM frameworks like Hibernate simplify things a little, but you still cannot avoid typeless query strings buried within your Java code as long as you have the impedance mismatch of the relational database at the backend. This is where LINQ and DLINQ rock with typed lambdas - the developer can write typed queries that work seamlessly over collections of objects, relational tables and XML documents. The ultimate aim of Microsoft is to use the LINQ framework to provide a unified approach to programming data, offering DSL support over multiple tuple-based data sources and resulting in a simpler programming model.
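
For contrast, here is a minimal Hibernate sketch of the kind of untyped query I am talking about on the Java side (Customer is a hypothetical mapped entity) - compare it with the typed LINQ query in the next section:

import java.util.List;
import org.hibernate.Session;

// The HQL is just a string constant - a typo in "city" or a renamed
// property surfaces only at runtime, never at compile time.
@SuppressWarnings("unchecked")
List<Customer> customersInLondon(Session session) {
    return session
            .createQuery("from Customer c where c.city = :city")
            .setParameter("city", "London")
            .list();   // even the result type is unchecked
}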

Polymorphic Queries in LINQ

Another exciting new feature that LINQ brings is the IQueryable interface, which allows writing truly polymorphic queries that can be executed, immediately or deferred, in any target environment. The following example is from The Wayward WebLog:

public int CustomersInLondon(IQueryable<Customer> customers) {
  int count = (from c in customers
               where c.City == "London"
               select c).Count();
  return count;
}


In case you want to execute the query over a collection of objects ..

List<Customer> customers = ...;
CustomersInLondon(customers.ToQueryable());

And if the query is executed against a remote DLINQ collection, the underlying engine translates it into the appropriate SQL for the target database. What actually goes on underneath is what the functional community calls comprehension - the compiler translates the comprehension syntax of the above query into calls to the standard query operators (Where and Count, in this case).


All the new features in C# 3.0 and Visual Basic 9 bring back strong memories of Lisp and Haskell and rediscover the joy of declarative programming. It only took a couple of decades to establish these ideas in a mainstream language. It remains to be seen whether other languages like Java follow suit.

Saturday, May 20, 2006

The Show is Over!

Day 4 at the Moscone Center began with the Sun razzmatazz, where all manner of real time Java applications were on display. Mobile devices with rich Swing UIs, racing cars with Java inside, Java enabled robots etc. lit up the Moscone Center for the last time in 2006. All of this high tech stuff was conducted by James Gosling, the father himself, with the smile of satisfaction that only goes with an invention. Of course, before the tech extravaganza we had Scott McNealy giving away the Lance Armstrong t-shirt to the winner of the ride-your-bike competition at this year's JavaOne. Boy, what a paradigm shift - Scott was the CEO a month back, when he had presumably prepared his keynote for JavaOne 2006. And now he got a 20 minute slot on the last day of the conference for giving away t-shirts!

There were some very good sessions lined up for the last day. I started with the one on EJB 3.0 delivered by Craig McClanahan, Linda DeMichiel and Gavin King. The session took us through the POJO slogan that EJB 3.0 could finally get out of its door. Linda described all the major architectural features of EJB 3.0, Craig took us through the JSF integration part, while Gavin King demonstrated some advanced applications of transaction handling using the architecture. EJB 3.0 is a much better incarnation than its predecessor, EJB 2.0, which was almost unusable. But still, the capabilities of 3.0 are only a subset of what we have in Spring - experts are very sceptical, and some of them have dismissed EJB 3.0 as inferior and a big step backward from Spring. For more details, go through what Rick Hightower has to say in this JDJ article.

On popular demand, an introductory session on Ajax was repeated on this day - the one by Ben Galbraith. I decided to walk in, and at the end of it I did not regret my decision. It was a very well delivered session with lots of information (mostly introductory though) on Ajax. Ben showed lots of real life examples of Ajax implementations, including Google Maps and Google Suggest, along with some cool programming tricks and debugging how-tos.

On the whole, JavaOne 06 demonstrated to the world the awesome community that has been built around this technology platform. It has truly transcended all barriers and touched all walks of the social fabric. All said and done, the users want more from the technology that promises to be the ubiquitous platform of computing. A more viable licensing policy for Java, a more capable JVM with support for functional languages, continuations, tail calls, higher order functions etc. .. the list goes on. I am always hopeful of a brighter tomorrow ..

And oh, BTW, the bookstore at the conference was a smashing hit - I myself picked up five books for my collection at a cool discount of 20% ..

JavaOne Day 3 - The Rampdown Begins

Day 3 began with Erich Gamma taking centrestage with mission Eclipse. In the presentation titled "Java in the Trenches - Reflections Along the Eclipse Way", Gamma and John Wiegand took us through the development cycle of the Eclipse platform, which has delivered quality software with impeccable precision on delivery dates. The visuals they showed of the Eclipse take on agile methodologies reflected the enormous experience and process maturity that the team has imbibed over the years.

Gilad - The Saint

Gilad Bracha has a unique way of presenting things. He does not have the aggression of Josh Bloch, but rather a very saintly way of taking the hall through the journey. His session entitled "Dynamic Languages on the Java Platform" had a full house. Though there was nothing really new in the content beyond what he has already blogged on this subject, it was a fairly informative session. The typical Q&A questions, like tail call support in the JVM, were dealt with in a saintly way (in the negative, of course) by Gilad. After the talk he blogged his disappointment that the crowd never asked about continuation support in the JVM - he had the response ready, though. Support for continuations in the JVM has been discussed in many forums, and Gilad has categorically turned all of it down, citing reasons like the cost of implementation and incompatibility with the current Java SE security model. Hence, I guess, people have given up all hope of finding continuation support in the JVM in the near future. This is what Gilad has to say in his blog on the issue:
... continuation based web servers are very nice. Seaside is the nicest one I've seen. But ultimately they are just a phase, and we can already see how we will outgrow that phase. Since continuation-based servers won't be all that significant in the long term, and given the huge lead times and costs of adding continuations to the JVM, it makes little sense to support them.

Web servers aside, continuations, though admittedly difficult to implement, have many other uses. The comments section of Gilad's blog entry bears ample testimony to that.

Ajax Ajax

Day 3 also had lots of sessions on Ajax, RIA, Dojo and Web 2.0. Not all of them were of high quality, but while the lights are on, go follow the bandwagon. JackBe, the new kid on the Ajax and SOA block, had Deepak Alur and Dan Malks (both of J2EE patterns fame) moderate a session on "Java Technology, Ajax, Web 2.0 and SOA", with Craig McClanahan and Rod Johnson (among others) on the panel. I attended the session, which I felt was quite crappy, since neither Craig nor Rod ever found the space to get going. And when Ajax is in the panel discussion topic and you have a full house, nothing but chaos can reign!

In another session, titled "Using the DOJO Toolkit to Develop AJAX enabled Java EE Web Applications", the speakers did a fairly good job of taking the (once again) full house through the process of using frameworks in an Ajax application. And Dojo looks like a leader in this space.


Apart from the above, I attended some other sessions where the main themes were once again GlassFish, NetBeans and how the combination can give us an easier programming model. Motorola, in their general session, once again emphasized how they are transforming the mobile world with Java technology. The evening sessions on Day 3 saw a remarkable fall in attendance, since they had to compete with the free beer available at the After Dark Bash - no wonder some more crappy stuff was scheduled during those hours.

Tomorrow I plan to attend a couple of sessions on EJB 3.0, which has been released after passing the community process. If you consider the pre-Spring era, EJB 3.0 looks like solid progress from its demon predecessor towards a scalable component model. But hey, we have been through Spring and the best practices sermons of Rod Johnson - compared to that, EJB 3.0 falls far below expectations. However, that's for another day, folks .. for now it is DG signing off from Moscone Center, San Francisco.

Friday, May 19, 2006

JavaOne Day 2 - Josh Rocks, But It's Ajax All the Way

Every year, JavaOne throws up the theme of the show. This year it's Ajax all the way. Every tech session on Ajax was booked, the most notable among them being the flagship session "An Introduction to AJAX" by the founders of Ajaxian.com, Ben Galbraith and Dion Almaer. As has been mentioned in the conference report by John O'Conner,
Using simple but effective examples, the two made a compelling argument for using Asynchronous JavaScript with XML (AJAX) to create dynamic, responsive applications that are easy to deploy and that require nothing more than a common web browser.

Besides this flagship session from the creators, there were many other sessions focussed on Ajax, RIA, SWT and the like, all promising to bring heaven to the client browser.

Josh Rocks!

I attended three sessions by Joshua Bloch, one of them the famous puzzlers session with Neal Gafter. Their book "Java Puzzlers - Traps, Pitfalls and Corner Cases" has been doing great at this year's JavaOne, and in their tech session the two wizards set the floor on fire with new additions to the puzzlers collection. Great show, and I enjoyed every bit of it. This was preceded by a solo from Josh where he discussed some advanced Java patterns and idioms, which he promised to include in the next edition of Effective Java. Cool stuff! The icing on the cake was the BOF on "Collection Connection", which happened to be the eleventh edition of the show. Eleven long years - yet the Collections Framework is still evolving, a definite sign of a living language (or perhaps a Dynamic Language, as Gilad would put it). The BOF was brightened up by the luminous anecdotes Josh recounted while recollecting his struggle with binary search from his CMU days. Overall the BOF was extremely enlightening, with good indications of what to expect from the Collections Framework in Mustang and Dolphin.

Don't Ignore the Scripters

Two very interesting sessions I attended on the second day of the conference were the ones on Groovy and JRuby. The Groovy session, led by Rod Cope, had entertainment written all over it. Some of the crazy things he demonstrated included opening up new Excel worksheets, filling them up with data, formatting stuff, generating charts and ultimately putting the entire control inside a Swing panel with buttons and callbacks - all with 20 lines of Groovy code! This is developer productivity at its best.

In the session on JRuby, the creators, Charles Nutter and Thomas Enebo, passionately discussed their experience of bringing the most powerful scripting language on the planet to the Java Virtual Machine.

Can Java Studio Creator Eclipse the IDE World?

Sun has come out strong with the message to use Sun Java Studio Creator. Based on the much hyped NetBeans development environment, the Creator has lots to offer as a rapid visual development tool for web applications. It comes bundled with the Application Server, the Derby database, data binding through drag and drop, Ajax enabled JSF components, portlets as a project type, a bunch of goodies for writing Web Services, themes and what not! It will be really interesting to see how the Big Blue responds with all the happenings in its Eclipse world. Tomorrow I will find out what IBM has to say in the general session addressed by none other than Erich Gamma.

In today's general sessions, both Oracle and BEA led with their efforts towards contributing to the Java space. BEA mentioned their "blended strategy" of fusing commercial and open source software in developing enterprise tools that will make programming an enjoyable developer experience. The director of developer relations at BEA reiterated that BEA has been a key player in the open source space. The company is a board member of the Eclipse Foundation and has sponsored projects like Apache Beehive, Apache XMLBeans, Eclipse AspectJ, Eclipse WTP and more, besides shipping the first application server to provide certified support for the Spring framework.

Overall, day 2 at JavaOne had the all pervasive theme of Rich Internet Experience, with Ajax leading all the way. Amidst all the industry frenzy and open source hullabaloo, Gilad Bracha's session on "Superpackages: Development Modules in Dolphin" went almost unnoticed, with half of the hall empty. It had some important points though, which promise to make modular development much easier in Java. But, at least for today, Ajax rules!