Thursday, November 29, 2007

10 Tips on JPA Domain Modelling

This post is a collection of tips on what I consider good advice when domain modelling in Java with JPA as the ORM mapping technology. Do you agree? Do you have extra advice? Please let me know!

Here they come, in no particular order.

1. Put Annotation on Methods, not Attributes
If you put the annotations on attributes, the JPA engine will set values directly on the attributes using reflection, thereby bypassing any code in your setters and getters. This makes it hard to do extra work in setters and getters, should the need arise.

In addition, if you add a getter for some calculated value that has no corresponding attribute, you can mark the method @Transient. Had you been annotating attributes, there would be no attribute to put the annotation on.
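As a small sketch of that point (the fullName getter and its backing attributes are invented for illustration):
@Entity
public class Person implements Serializable {
    private String firstName;
    private String lastName;

    @Basic
    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }

    @Basic
    public String getLastName() { return lastName; }
    public void setLastName(String lastName) { this.lastName = lastName; }

    // calculated value with no backing attribute, so the annotation must go on the method
    @Transient
    public String getFullName() { return firstName + " " + lastName; }
}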

Whatever you do: try not to mix the two by using annotations on both fields and methods. Some JPA providers cannot handle this!

2. Implement Serializable
The specification says you have to, but some JPA providers do not enforce it. Hibernate as a JPA provider does not enforce it either, but it can fail somewhere deep in its stomach with a ClassCastException if Serializable has not been implemented.

3. Use The Fine Grained Domain Modelling and Mapping Possibilities in JPA
If you come from EJBs (before EJB3), you are not used to being able to do fine-grained modelling. EJB 2.x was very entity centric. In JPA, you have @Embeddable and @Embedded. Doing more fine-grained domain modelling can help make your domain model more expressive.

An @Embeddable is a value object and, as such, it should be immutable. You achieve this by putting only getters, and no (public) setters, on its class. The identity of a value object is based on its state rather than on an object id. This means that the @Embeddable will have no @Id annotation.

As an example, given the domain class Person:
@Entity
public class Person implements Serializable {
    ...
    private String address;

    @Basic
    public String getAddress() { return address; }
    public void setAddress(String address) { this.address = address; }
}
We could express Address better, by giving it a class of its own. Not because it should be mapped to some other table, but because it makes sense in this particular model. Like this:
@Embeddable
public class Address implements Serializable {
    ...
    private String houseNumber;
    private String street;

    @Transient
    public String getHouseNumber() { return houseNumber; }

    @Transient
    public String getStreet() { return street; }

    @Basic
    public String getAddress() { return street + " " + houseNumber; }

    // setter needed by JPA, but protected as value object is immutable to domain
    protected void setAddress(String address) {
        // do all the parsing and rule enforcement here
    }
}

@Entity
public class Person implements Serializable {
    ...
    private Address address;

    @Embedded
    public Address getAddress() { return address; }
    public void setAddress(Address address) { this.address = address; }
}
The better expressiveness comes from: a) putting a named class on a concept in the model, and b) having a place (the value object class) in which to put domain logic and enforce domain rules.

4. Implement Equality using Real Domain Attribute Values
Classes marked @Entity will always have an id attribute. Often, this is a generated long sequence value. It can be tempting to use this value when implementing equals and hashCode (which is also a requirement), but I recommend against it. I can find two good reasons, one based on modelling rules and one on technical grounds; a small sketch of attribute-based equality follows the list.
  • Modelling rule: A class modelled as an entity should be uniquely distinguishable from other instances solely based on a combination of some of its domain attributes. A long sequence value, used solely to obtain relational identity, does not constitute a domain attribute. If you are unable to find a unique combination, it might very well be a sign of a problem with the model.
  • Technical reason: If equality is based on a database-generated and -assigned value, you will not be able to use equals and hashCode before an instance has been persisted. That includes putting the instance into container classes, as they rely on equals and hashCode.
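As a minimal sketch (the Person entity and its socialSecurityNumber attribute are invented for illustration), equality based on a domain attribute could look like this:
@Entity
public class Person implements Serializable {
    private Long id;                     // relational identity, not used for equality
    private String socialSecurityNumber; // domain attribute that uniquely identifies a person

    @Override
    public boolean equals(Object other) {
        if (this == other) return true;
        if (!(other instanceof Person)) return false;
        return socialSecurityNumber.equals(((Person) other).socialSecurityNumber);
    }

    @Override
    public int hashCode() {
        return socialSecurityNumber.hashCode();
    }
}
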
5. Protect the Default Constructor
The JPA specification mandates a default constructor on mapped classes, but a default constructor seldom makes sense in modelling terms. With it, you can construct an entity instance with no state, whereas a constructor should always leave the newly created instance in a sane state. The default constructor is required only so that the JPA provider can instantiate instances of the class dynamically.

Luckily, you can, and are allowed to, mark the default constructor as protected. Hibernate will even accept it as private, but that is not sanctioned by the spec.
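A sketch of what that could look like (the constructor arguments are just for illustration):
@Entity
public class Person implements Serializable {
    private String name;

    // for the JPA provider only
    protected Person() {
    }

    // the constructor the domain code uses, leaving the instance in a sane state
    public Person(String name) {
        this.name = name;
    }
}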

6. Protect Setter Method on Id Attribute
Basically the same story as above. In this case, it is just because it makes no sense for the application to assign an id.

NOTE: This only applies when the id attribute is assigned by the provider.

7. Avoid Primitives when Mapping Id Attribute
Simply use Long and not long. This makes it possible to detect a value that has not yet been set by testing for null. If you are using Java 5 or above, auto-boxing should take away the pain.
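Combining this with the previous tip, a sketch of an id mapping could look like this (the generation strategy shown is chosen just for illustration):
@Entity
public class Person implements Serializable {
    private Long id;   // Long, not long, so an unsaved instance can be detected by a null id

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    public Long getId() { return id; }

    // only the JPA provider should assign the id, so the setter is protected
    protected void setId(Long id) { this.id = id; }
}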

8. Use the Basic Annotation to Override Defaults
By all means, use @Basic to override the default value of optional, which is true, to false for those fields that are not actually optional (I often find that to be most of my attributes).
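A minimal sketch, assuming a hypothetical mandatory name attribute:
@Entity
public class Person implements Serializable {
    ...
    @Basic(optional = false)
    public String getName() { return name; }
    ...
}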

9. Go Ahead and Use the Column Annotation
Even if you are not interested in generating a schema or DDL from it, you should not hold back on using the @Column annotation. It tells the reader of the code important information about the attribute, such as nullability, length, scale and precision.
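A sketch of what that documentation value might look like (the entity, its attributes and their constraints are invented for illustration):
@Entity
public class Product implements Serializable {
    ...
    @Column(name = "PRODUCT_NAME", nullable = false, length = 80)
    public String getName() { return name; }

    @Column(name = "UNIT_PRICE", nullable = false, precision = 10, scale = 2)
    public BigDecimal getUnitPrice() { return unitPrice; }
    ...
}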

10. Do Not Use java.sql.Date/java.sql.Timestamp in Domain Model
Instead, use java.util.Date or java.util.Calendar. Using the types from the java.sql package leaks a persistence concern into the domain model, which we neither want nor need.

Just remember to put @Temporal on date and calendar attributes.
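A minimal sketch, assuming a hypothetical createdAt attribute:
@Entity
public class Meeting implements Serializable {
    ...
    @Temporal(TemporalType.TIMESTAMP)
    public java.util.Date getCreatedAt() { return createdAt; }
    ...
}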


Wednesday, November 28, 2007

Now You Can "Click" on a Newspaper Article

When Google introduced printed ads in newspapers as part of the AdWords product some time ago, my first response was something along the lines of "what the f...". How could that be a good idea? I didn't give it much thought, and have also been unable to see the big idea there.

Well, until today, when I noticed Google's Print Ads 2D barcode technology. The idea is to put a small 2D barcode next to the printed ad, in which a URL to a website is encoded. You can then take a snapshot of the barcode with your mobile phone, have software on the phone automatically decode the 2D barcode into the URL, and finally have a web browser, again on the phone, take you to the URL.

That is kind of cool, I think :-)

Of course, Google has software to decode 2D barcodes, which runs on J2ME and, of course, Android.

But there are others too. Take a look at Nokia mobile codes, where you can create 2D codes and get scanner software for your phone. To the right is a 2D barcode of the URL to this blog. Sadly, I did not find any barcode scanner software ready for download to my Sony Ericsson W810i. Guess I will have to build zxing and run it on J2ME then.

Thursday, November 22, 2007

Why Override Annotation is Cool

Since Java 5, which brought us annotations, Java has had the @Override annotation. But I don't see it used very often, so here I will try to explain why I find it valuable.

When you write a POJO like this:
public class Person {
    private String name;

    ...

    public String tostring() {
        return "Person[" + name + "]";
    }
}
Where the intention of the tostring() method was to override the Object#toString() method, you might be surprised when your method is not called. In this example, it was due to a simple case error ("tostring()" instead of "toString()"), but in other scenarios the error can be hidden somewhere else in a more complex method signature.

Had you put an @Override annotation on top of the method signature, you would have gotten a compiler error (and quite possibly a warning already in the IDE). So the @Override annotation says that the annotated method must fulfill the contract of a method further up the class hierarchy. That is the valuable part of the override annotation :-)
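To see that value, put the annotation on the misspelled method: the compiler now rejects it, because tostring() does not override anything up the hierarchy:
@Override
public String tostring() { // compile error: does not override or implement a method from a supertype
    return "Person[" + name + "]";
}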

Eclipse and IDEA
Last time I tried Eclipse, it played nice and automatically put the annotation on methods when I used the Eclipse keyboard shortcuts to introduce an override. This was on by default. It is not in IDEA :-( But it is very simple to add. Notice the little checkmark "Insert @Override" in the dialog that Ctrl-O opens? Check it, and it will be checked by default from here on.

C# Has It Built-In
The C# language of .NET has an override keyword as part of the language itself. In C# you are forced to use the keyword, which is okay by me, as I like it. What I really do not like, though, is the virtual keyword of C#. I much prefer that all methods are virtual in Java. But that is another story...

Speedlinking About Flex Libraries and Tools

Here is some collected information about the flex/as3 libraries I have seen out there:

Component Libraries
These are either AS3 utility libraries or flex UI component libraries.
  • as3corelib: Contains a bunch of useful stuff like MD5 and SHA hashing, VCard parsing, jpeg and png encoding, JSON support and utilities for working with arrays, dates, numbers, strings and xml. Seems to be Adobe contributed code.
  • flexlib: A lot of nice UI components and stuff, like the TreeGrid. Also with some Adobe contributions.
  • alivepdf: A way to produce pdf output via as3 code. Not sure where I would use that. Maybe if someone needed to off-load the server from the task. Maybe it is most useful in AIR applications and not so much in the more constrained flex sandbox.
  • actionscript3libraries: A nice list of others over at http://riaforge.org/.
  • tweener: Doing animations using code to move things instead of being forced into the flash timeline model.
  • flexvizgraphlib: A really interesting library with which you can do data visualization in flex

Frameworks
These are not simply libraries of functionality, but whole frameworks that try to help you structure your complete flex application into something manageable. Mostly helpful when your flex application gets really large, I think. These are about application architecture.
  • Cairngorm: A MVC based framework. Done by Adobe themselves.
  • PureMVC: Another MVC based framework
  • Model-Glue: Well, yet another MVC based framework :-)
  • ARP: A pattern based framework.
  • Flest: Hmm, an MVC-looking framework which claims not to be an MVC implementation.
  • Foundry: Interesting in that it includes a Java/server-side part, which claims to aid in FDS server development. Not important for dudes like me, who try to avoid FDS, though.
Testing
Some libraries that aid in automatic testing of your flex applications.
  • flexunit: JUnit based API. Not dependent on payware.
  • funfx: Write your tests in the nice ruby language. Uses the automation package of FDS to remote control the application when doing automatic testing, which leads to the need of a license on FDS (payware).
Tools
If you know of other tools, libs or the like, please let me know. I would like to know!

Wednesday, November 21, 2007

How Nicely Decoupled a Web Client Flex Is

Flex has got to be the most decoupled web client I have ever worked with!

Half a year ago, I blogged about choosing a RIA framework, which I found hard. At that time, we ended up with Flex2 as our choice, and over the next couple of months it would have to prove its value. And I most certainly must say that it has! I just wanted to share what our experience has been so far.

Editing and Building
It has proved to be really nice to work with. We are simply using the free SDK, combined with the maven israfil plugin for building and IDEA for editing. With IDEA 7, there is pretty good JavaScript (and thereby ActionScript) support combined with some (not much, yet) Flex support.

Flex Framework
The framework itself has shown itself to be powerful enough for our needs, and I suspect it will be for the needs of most people too. There are quite a few UI components available, including a powerful DataGrid. And the upcoming Flex3 will have even more. Also, the extension possibilities have proved to be good.

The documentation is quite good, and where it is lacking, or in situations with bugs in the framework, it has come in handy to have the open source code at hand. Sadly, not all of it is open. Missing is stuff like the remoting stack (web service calls) and of course everything in the flash.* packages.

And there are already flex/as3 component libraries out there, like flexlib on google code and a bunch here, linked to from riaforge.

Sooo Decoupled
Probably the thing that has struck me the most about working with flex (or flash) as a platform for applications on the net has been how decoupled from the server development it can be. And I find that a good thing.

As an example, take the whole build, assemble, deploy and test cycle.

With a technology stack where the web front end uses a server-side html/js generation framework, you most often build, assemble and deploy it together with all of the other parts and layers of the application. Only then can you test.

With flex, you can compile and try out the UI front end completely by itself. When I work on UI-related stuff, I simply compile the flex part and launch the standalone player to try it out. That is a 15 second build cycle at most. At the same time, I have my web services up and running in the back all the time, so the standalone player is actually running against the server side, just as it will when the flash is served from a web application to a browser.

Testability
At the time we chose flex, we had an open issue on how to test it. I regret to say that this is still an open issue :-( The best bet right now seems to be flexunit, but we have postponed it a bit, as it seems hard to integrate nicely into a headless, continuous integration build. For a short while I thought I was going to use funfx and write my tests in Ruby, but that was only until I found out it required an FDS license.

Bottom Line
The bottom line is that it is simply nice to work with, and I am loving it!
Recommendable!

Forcing Error Pop-Up on a Flex Control without a Validator

In my previous post about ToolTipManager and its delays, I showed what error pop-ups from ToolTipManager look like. I'll repeat the image here:


This pop-up is normally the result of a validator. But you can actually force such a pop-up to appear in a very simple way: you simply assign a non-empty string to the errorString property of a flex component. Like this:
<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml">

<mx:Script>
<![CDATA[
private function setError() : void {
data.errorString = "this is an error popup";
}

private function clearError() : void {
data.errorString = "";
}
]]>
</mx:Script>

<mx:TextInput id="data" text="Enter data here"/>

<mx:Button label="Set error" click="setError()"/>
<mx:Button label="Clear error" click="clearError()"/>
</mx:Application>

That was easy! The errorString property is defined on UIComponent, so it is quite universally available on the UI components of the flex framework.

ToolTipManager.scrubDelay and Hanging ToolTips in Flex

When I use validators on input controls in flex, the flex framework marks fields in error with a red border and a nice little red pop-up with a message when the mouse is moved over the component. Like this:


But sometimes I experience that the pop-up with the error keeps hanging around, even though I move the mouse out of the field in error. This led me to look for the code that performs the pop-up, and I found it in ToolTipManager. This interface has three delay values that you can control:
The showDelay is simply the number of milliseconds from when the mouse enters the field until the tooltip pop-up comes up. The hideDelay is how long the pop-up stays open while the mouse is in the field. The default is 10 seconds, so if you keep the mouse over a field in error for more than 10 seconds, the pop-up will disappear. Okay, sane enough.

Now we come to scrubDelay. From the documentation, it was a bit hard to understand (at least for me):

The amount of time, in milliseconds, that a user can take when moving the mouse between controls before Flex again waits for the duration of showDelay to display a ToolTip.

This setting is useful if the user moves quickly from one control to another; after displaying the first ToolTip, Flex will display the others immediately rather than waiting. The shorter the setting for scrubDelay, the more likely that the user must wait for an amount of time specified by showDelay in order to see the next ToolTip. A good use of this property is if you have several buttons on a toolbar, and the user will quickly scan across them to see brief descriptions of their functionality.

The first paragraph I did not get! The second paragraph, though, made me think this could be the source of my trouble. So I grabbed the source and found out what it does. Here is my explanation:
  • When the mouseOverIn or mouseOverOut handlers are called in ToolTipManagerImpl, it is checked whether the mouse is over a new target component.
  • If so, the targetChanged method is called. This method erases any existing pop-up from another component and pops up a new tool-tip for the new target component.
  • BUT: If the scrubTimer is NOT running, there will be a showDelay pause. If the scrubTimer IS running, the new tool-tip will pop up immediately.
I tried setting the scrubDelay larger, and it seemed to make my problem with hanging pop-ups disappear. Great, ... though I am not quite sure exactly why :-(

Sunday, November 18, 2007

Maven and Excluding Transitive Dependencies

Warning, this is just another maven rant...

Why has someone concluded that we do not need to exclude all transitive dependencies of some dependency?

More than once I have had the need to grab a dependency, but then exclude all, or a lot by wildcard, of the incoming transitive dependencies. I cannot even exclude all artifacts from one group of transitive dependencies without explicitly mentioning each and every one of the transitive dependencies that I won't need.

If I do:
<dependency>
<groupId>org.acegisecurity</groupId>
<artifactId>acegi-security</artifactId>
<version>1.0.4</version>
</dependency>
And if I do not want all the spring stuff it comes with, I cannot write:
<dependency>
<groupId>org.acegisecurity</groupId>
<artifactId>acegi-security</artifactId>
<version>1.0.4</version>
<exclusions>
<exclusion>
<groupId>org.springframework</groupId>
<artifactId>*</artifactId>
</exclusion>
</exclusions>
</dependency>
or something like it. No no, I will have to do:
<dependency>
<groupId>org.acegisecurity</groupId>
<artifactId>acegi-security</artifactId>
<version>1.0.4</version>
<exclusions>
<exclusion>
<groupId>org.springframework</groupId>
<artifactId>spring-mock</artifactId>
</exclusion>
<exclusion>
<groupId>org.springframework</groupId>
<artifactId>spring-remoting</artifactId>
</exclusion>
<exclusion>
<groupId>org.springframework</groupId>
<artifactId>spring-jdbc</artifactId>
</exclusion>
<exclusion>
<groupId>org.springframework</groupId>
<artifactId>spring-support</artifactId>
</exclusion>
</exclusions>
</dependency>
Or am I missing how to do this? Please correct me if I am wrong!

IDEA Seems to have Built-In Knowledge about JUnit API

Hmm, I have always liked IDEA's ever-present analysis of my AST while typing, which for instance can give me a warning if some variable could be null when referenced. I think IDEA analyzes the possible execution paths of the code and then determines whether some variable might be null in a place where it is referenced. The other day I noticed that IDEA must have built-in knowledge about JUnit, because it seems to use the contract of assertNotNull in its analysis of nullable variables.

In this little example (JUnit) testcase:
public class TestBlahBlah extends TestCase {
    public void testBlah() {
        String s = null;
        System.out.println("s.length() = " + s.length());
    }
}
IDEA promptly warns me that the expression s.length() might produce a NullPointerException, and indeed, it will. But if I add an assertNotNull(s), so that the test looks like this instead:
...
public void testBlah() {
    String s = null;
    assertNotNull(s);
    System.out.println("s.length() = " + s.length());
}
The warning goes away. Nice! And it is not just because the variable has been used for something else in between. If I add assertEquals(s, s) in place of assertNotNull, the warning pops up again.

Just one of the little niceties of a great IDE!

Getting started with Pattern and Matcher from java.util.regex

Since Java 1.4, we have had pattern matching in the java.util.regex package. This package has lots of power as a consequence of the power of regular expressions, but it does not have the most intuitive interface (in my opinion). In addition, the javadocs are very formal, with few overall "how to use it" examples in the class docs. So, here is a small "getting started with the java.util.regex package".

Example: Matching and Looping over Result
import java.util.regex.Matcher;
import java.util.regex.Pattern;

...

String regex = "Foo";
String input = "FooBarFooBar";

Pattern compiledPattern = Pattern.compile(regex);
Matcher matcher = compiledPattern.matcher(input);

while (matcher.find()) {
    String matchedSubString = input.substring(matcher.start(), matcher.end());
    System.out.println(String.format("Matched '%s'", matchedSubString));
}
First, we compile the regular expression with the line "Pattern compiledPattern = Pattern.compile(regex)". Then we match it against an input string to get a Matcher object, with the line "Matcher matcher = compiledPattern.matcher(input)". The matcher can then be used to walk through the input, match by match. This is what we do in the loop:
while (matcher.find()) {
    String matchedSubString = input.substring(matcher.start(), matcher.end());
    System.out.println(String.format("Matched '%s'", matchedSubString));
}

the find() method on the matcher instance looks for the next matched substring in the input and returns true if one is found. After a find() call that returned true, you can use matcher.start() and matcher.end() as substring indices into the input string to get the substring that was matched.

There is more in the regex package, among other things regular expression grouping and replacement functionality. But this should get you easily started.
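As a small extra sketch of those two features (the pattern and input here are invented for illustration):
import java.util.regex.Matcher;
import java.util.regex.Pattern;

...

// grouping: capture the digits that follow "Foo"
Pattern pattern = Pattern.compile("Foo(\\d+)");
Matcher matcher = pattern.matcher("Foo1Bar Foo2Bar");
while (matcher.find()) {
    System.out.println("Matched number: " + matcher.group(1));
}

// replacement: replace every match of the pattern in one go
String replaced = "Foo1Bar Foo2Bar".replaceAll("Foo\\d+", "Baz");
System.out.println(replaced); // prints "BazBar BazBar"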

Monday, November 12, 2007

How To Access Session from a JAX-WS Web Service Implementation

Today, I needed to take a peek into the HttpSession from inside a JAX-WS web service implementation class. Normally, such an implementation has no access to the servlet API, and generally, I believe that is how it should be.

Anyway, I had the need, so this is how to do it:
import javax.annotation.Resource;
import javax.xml.ws.WebServiceContext;
import javax.xml.ws.handler.MessageContext;
import javax.jws.WebService;
import javax.servlet.http.HttpSession;
import javax.servlet.http.HttpServletRequest;

@WebService(serviceName = "FooService")
public class FooImpl implements Foo {

    @Resource
    private WebServiceContext wsContext;

    public void wsOperation() {
        MessageContext msgContext = wsContext.getMessageContext();
        HttpServletRequest request = (HttpServletRequest) msgContext.get(MessageContext.SERVLET_REQUEST);
        HttpSession session = request.getSession();
        // work with the session here ...
    }
}
When the JAX-WS stack implementation you are running your web services on deploys your web service endpoint, it will inject an instance of WebServiceContext into the implementation instance. You can then ask it for a MessageContext, and from this get the request, session, response etc.

Beware though. By doing this, you are binding your implementation to knowledge about which transport mechanism it is deployed on and accessed through. It could be something other than HTTP. On the other hand, I guess 99.9999% of all web service implementations are deployed on HTTP :-)

I have this working with CXF version 2.0.1-incubator. The WebServiceContext class has only existed since JAX-WS 2.0, and the @Resource annotation is only available from Java EE 5 on.

Thursday, November 08, 2007

AVM2 vs JVM and ActionScript3 Performance

AVM2 is Adobe's virtual machine in Flash Player 9, which executes ActionScript3 (AS3) bytecode. If you are accustomed to writing code for the great JVM and are starting to write Flex or AIR applications, there are things you should be aware of.

In this blog, I present information on how the AVM2 works, and also show some tips on how to make your code perform better on AVM2.

Most of the information in this blog should be credited to Gary Grossman from Adobe. I found a (great) presentation with his name on it, entitled "ActionScript 3.0 and AVM2: Performance Tuning". I was mostly interested in the AVM2 part, and luckily, the 74 pages of the presentation had quite a bit of information on that subject.

Types and Type Information
Prior to AS3, all type information was stripped out of the code when it was compiled. At runtime, everything was just dynamically typed atoms. Starting with AS3, we can get the type information all the way down to the runtime.

Another point is the introduction of the native int and uint types, which can make code perform better.

Here is how to take advantage of this great new stuff:

First of all, start working with typed variables. That is, do this:
var document : XML = ....;
and not this:
var document = ...;
var document : Object = ...;
var document : * = ...;
Using explicitly typed variables can improve performance and reduce memory consumption. As a little side note, going from untyped, or dynamically typed, to explicit type information also enables the compiler to catch more errors early in the development cycle, but that is another story :-)

Another piece of advice is to use the native int and uint types where possible, instead of the Number object type. But beware, as compliance with ECMAScript requires the VM to promote basic int types to Number in some cases. For example in this code:
var i:int=1;
methodCall(i + 42); // will promote to Number
If you know that the value will stay within the int range, you can help the compiler by introducing a coercion like this: methodCall(int(i + 42)).

Also, array indexing with int and uint types has fast paths compared to Number, so consider coercing array indices too.

Common Subexpression Elimination (CSE)
When writing Java code, I am accustomed to not thinking much about common subexpressions in loops etc. I let the compiler and the VM handle all that optimization. They are pretty good at it, and me trying to help might very well destroy their chances of doing it well.

AVM2 can do CSE too, but not as well. For instance, in this simple code:
for (var i:int=0; i < arr.length; i++) {
callMethod(arr[i]);
}
the arr.length expression is evaluated on each iteration of the loop. This is due to the fact that it might have side effects or be dynamically overridden. So, do this instead:
var len:int=arr.length;
for (var i:int=0; i<len;i++) {
callMethod(arr[i]);
}

Method Closures
In AS3, we got method closures, which also means we can create variables that are functions and pass them around, while the function still retains the environment it was created within ("this" is still what "this" was when the function was defined).

If you write code like this, with inner, anonymous function closures:
function foo() : void {
var handler : Function = function(event:Event) : void {
// blah blah
}

addEventListener("someEvent", handler);
}
The definition of the handler variable will force the creation of an "Activation" object, which holds the environment state at the time of creation. You can get better performance and memory utilization by doing it this way:
function handler(event:Event) : void {
// blah blah
}

function foo() : void {
addEventListener("someEvent", handler);
}
as this will utilize the method closure functionality of the AVM2, but without an activation object.

JIT Compilation and Constructors
Now that the flash player is a stack-based, byte-code oriented virtual machine, it is natural for it to have JIT compilation. And so it has. I think it is far from the JVM HotSpot compiler, but we must also recognize that the flash player clearly has another target. But the objectives of the JIT compiler are good, I think. They are:
  • Fast compile times
  • Limited passes
  • Cautious with memory
The fact that AVM2 does JIT compilation might also be why there is no 64-bit version of the player as of yet. Doing JIT requires extra work for each CPU to target. AVM2 does have a MIR format (Macromedia Intermediate Representation), which it uses on its way from byte-code to JIT'ed machine code. The responsibility of the MIR is to abstract away commonalities between CPUs, but it must still be extra work, again and again, for each architecture when doing JIT.

There is one thing to take notice of with JIT in AVM2. Constructors are not JIT'ed, so if you have performance intensive code in a class, take it out of the constructor.

Garbage Collection
The memory management and garbage collector is to be found in the separate MMgc project. This is a Deferred Reference Counting (DRC) mechanism combined with an incremental, conservative mark/sweep collector. Of course, the garbage collector implementation has been tuned for best client performance, with small (30ms) time slices.

Overall, I find the new flash player architecture much nicer than I had previously thought. I guess it has also come a long way, suddenly growing up to serve us real, complex applications on the web. This all actually gives me more peace of mind with respect to the flex applications we are building. Large applications on the flex platform will have a hard time succeeding if their execution runtime (the AVM2) is not up to the task of running real applications.

I am confident that it is!

Monday, November 05, 2007

Comparator or Comparable and more than one Ordering Implementation

To impose ordering on Java objects, you can either implement Comparable or Comparator. Which one should you choose? Well, here are my five cents on the issue. When comparing the two approaches, I will use this simple little Java class:
public class Task {
    private Integer priority;
    private String description;

    public Task(Integer priority, String description) {
        this.priority = priority;
        this.description = description;
    }

    public String toString() {
        return String.format("Task[%d,%s]", priority, description);
    }
}

Comparable
Comparable is implemented to express what the docs call the implementing class' natural ordering. It is implemented on the class that is to be ordered, and as such, it can only provide one single ordering. In the example above, I would have to choose whether I want to order on priority or on description. A sample implementation could look like this:
public class Task implements Comparable<Task> {
    ...

    public int compareTo(Task o) {
        return priority.compareTo(o.priority);
    }

    ...
}

which compares on priority and only on priority. I can then sort instances in a List like this:
List<Task> listOfTasks = new ArrayList<Task>() {{
    add(new Task(1, "Shop For Christmas Gifts"));
    add(new Task(2, "Bring World Peace"));
    add(new Task(3, "Make Love, Not War"));
}};

Collections.sort(listOfTasks);

Nice and easy, but I often go with the other approach (Comparator), as it gives me the freedom to provide more than one implementation. Here is how:

Comparator
Where Comparable was for one single, natural ordering, an implementation of Comparator represents a total ordering of instances of a class. You can provide more than one implementation of Comparator and then choose, when sorting, which one you want to sort by.

I often go for a design where I put the Comparator implementations as public static classes on the class that they sort. Like this:
public class Task {
    ...

    public static class ByPriorityComparator implements Comparator<Task> {
        public int compare(Task t1, Task t2) {
            return t1.priority.compareTo(t2.priority);
        }
    }

    public static class ByDescriptionComparator implements Comparator<Task> {
        public int compare(Task t1, Task t2) {
            return t1.description.compareTo(t2.description);
        }
    }

    ...
}

I can then sort instances in a List like this:
List<Task> listOfTasks = new ArrayList<Task>() {{
    add(new Task(1, "Shop For Christmas Gifts"));
    add(new Task(2, "Bring World Peace"));
    add(new Task(3, "Make Love, Not War"));
}};

Collections.sort(listOfTasks, new Task.ByPriorityComparator());

// or...

Collections.sort(listOfTasks, new Task.ByDescriptionComparator());

Using SQL Templates in IDEA SQL Query Plugin

If you are, like me, a regular user of IDEA, you might also have the excellent SQL Query Plugin installed and use it daily. But you might not have noticed the neat little templates feature it has. It can save you a great deal of typing.

When the SQL Query Plugin pane is open, click its settings icon and choose the templates tab. See the picture below:

Say you have a table called "user", which has a column "username". If you find yourself querying this table on "username" again and again, you might create this template:

SELECT * FROM user WHERE username='$param1'

And name it something short, like "seluser" (short for "select user"). In the query editor area, where you normally write your queries, you simply type:

@seluser admin

Then hit execute query (Ctrl-Enter), and it will execute your template with "admin" substituted for $param1. Of course, templates can have many parameters, and they are given as a comma-separated list when executing.

Simple and easy. The example query above is very simple, and the queries you create templates for should be more complex for any real benefit to be harvested. I have one in my current project where I need to join two tables on four (4) different columns to get an equi-join. A template can save me typing there.

One idea that just popped into my head: SQL Query Plugin could come with some pre-written templates that match the JDBC driver of the current connection. Like a bunch of queries against the valuable v$xxx data dictionary views of Oracle databases. Just a thought.

Friday, November 02, 2007

Coping with Flex Asynchronous Remote Calls - Part III

The fact that flex web service calls are asynchronous has implications for how you design your code. This is part III of my series of posts on the subject, and it shows how to handle concurrent, asynchronous calls to the same web service method.

In part II of this series, I showed how to use callbacks to handle the asynchronous nature of the calls. But the solution presented there does not handle concurrent calls to the same web service method. To handle concurrent calls, you need to use the call object itself and assign tokens to it. In the solution I show here, I set the callback as the token.
package com.blogspot.techpolesen {

import mx.rpc.soap.WebService;
import mx.rpc.events.ResultEvent;

public class RemoteService {
private var remoteService : WebService = new WebService();

function RemoteService() : void {
remoteService = new WebService();
remoteService.wsdl = "/services/RemoteService?wsdl";
remoteService.loadWSDL();
remoteService.addEventListener(ResultEvent.RESULT, handleNextInSequenceResult);
}

public function callNextInSequence(parameterPassedToServer : String, callback : Function) : void {
var call : Object = remoteService.nextInSequence.send(parameterPassedToServer);
call.callbackFunction = callback;
}

private function handleNextInSequenceResult(event : ResultEvent) : void {
var token : Object = event.token;
token.callbackFunction(event.result);
}
}
}
Here is a bit of explanation of the above code:
  • The as3 class above is a wrapper around the remote web service calls.
  • The constructor instantiates the web service client object, parses the WSDL and adds a listener for the result event.
  • The callNextInSequence method is the one that calls my remote web service method (called nextInSequence, which accepts a string, appends a sequence number to it and returns it).
  • IMPORTANT: Note how the callNextInSequence method uses the send() method of the operation instance to call the web service. The return value of send() is a call object, on which you can put tokens.
  • Tokens that are put on a call will appear on the result event when that particular remote call returns with a result.
  • In this implementation, I use the token to remember who I need to call back with the result of the call.
  • That leaves only the handleNextInSequenceResult listener, which is quite simple. It obtains the token from the event (which is the callback function) and calls it with the result of the remote call.
Here is Main.mxml content, which uses this class:
<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml">
<mx:Script>
<![CDATA[
import com.blogspot.techpolesen.RemoteService;

private var remoteServiceProxy : RemoteService = new RemoteService();

private function callItConcurrently() : void {
results.text = ""; // clear result area

// do some concurrent, asynchronous calls
remoteServiceProxy.callNextInSequence("value 1", handleResult);
remoteServiceProxy.callNextInSequence("value 2", handleResult);
remoteServiceProxy.callNextInSequence("value 3", handleResult);
}

private function handleResult(result : String) : void {
results.text += "Result: result = " + result + "\n";
}
]]>
</mx:Script>

<mx:Button label="Do concurrent, asynchronous remote calls" click="callItConcurrently()"/>
<mx:TextArea id="results" width="80%" height="50%"/>
</mx:Application>

Downloading the source
I have zipped up the sources for you. They can be downloaded from here, ready to be built with maven.

This is a multi-module maven build. There are two directories:
  • client : Contains the flex source and a pom to build it
  • server : Contains the web service and a pom to build it
The war artifact output from the server module contains the flash output from the client module and an index.html which loads it.

To start the server after a build, you simply jump into the server directory and do a "mvn jetty:run-exploded".

Other Small Flex Tutorials
This was lesson 9 in my series of posts on what I learn about developing filthy rich flash apps using flex2. If you want to read more, the previous lessons can be found here: