Friday, June 29, 2007

Work-around to: Unable to find the mojo 'org.apache.maven.plugins:maven-dependency-plugin:2.0-alpha-1:copy-dependencies'

Just wanted to let you all know. If you are getting this error:
[INFO] Internal error in the plugin manager executing goal 'org.apache.maven.plugins:maven-dependency-plugin:2.0-alpha-1:copy-dependencies':
Unable to find the mojo 'org.apache.maven.plugins:maven-dependency-plugin:2.0-alpha-1:copy-dependencies'
in the plugin 'org.apache.maven.plugins:maven-dependency-plugin'

Component descriptor cannot be found in the component repository:
org.apache.maven.plugin.Mojoorg.apache.maven.plugins:maven-dependency-plugin:2.0-alpha-1:copy-dependencies.
Then you are not alone. I got it too :-) After some debugging, I found the cause in a multi-module build of mine: the first module built used a plugin that internally depended upon version 2.0-alpha-1 of the maven-dependency-plugin, while a plugin used later in the build was not compatible with that version.

Luckily, both plugins that depended upon the maven-dependency-plugin could work with version 2.0-alpha-4 instead, so doing the following in my top-level pom worked around the problem:
    <build>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-dependency-plugin</artifactId>
          <version>2.0-alpha-4</version>
        </plugin>
      </plugins>
    </build>

Thursday, June 28, 2007

objenesis, are we that far out?

Today I came across a java library called objenesis. From the site it says:
Objenesis is a small Java library that serves one purpose:
To instantiate a new object of a particular class.
Hmm. Are we really so far out in the Java camp that we need a complete library of 1314 LOC (only counting actual main sources), implementing various ways to, ... instantiate an object?

Now, the sad part about this is that I do not think objenesis is a bad project. Actually, I think it can be quite useful, and that is what is sad. With objenesis you can instantiate objects in various ways, depending on the JDK version (1.3, 1.4, 1.5, ...) and vendor (Sun, JRockit, ...). It also seems like objenesis actively tries not to call constructors at all, to avoid the "default constructor needed" problem you hit when doing Class.newInstance.
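To make the problem concrete, here is a small sketch of why plain reflection is not enough. The Person class and the canInstantiate helper are made up for this demo; the ObjenesisStd call in the comment is taken from the objenesis docs:

```java
// Sketch of the problem objenesis addresses: Class.newInstance() needs a
// no-arg constructor. Person is a hypothetical class invented for this demo.
public class NewInstanceDemo {

    public static class Person {
        private final String name;

        public Person(String name) { // no default constructor
            this.name = name;
        }
    }

    // Returns true when reflective no-arg instantiation works.
    public static boolean canInstantiate(Class<?> cls) {
        try {
            cls.newInstance();
            return true;
        } catch (ReflectiveOperationException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Fails: Person has no no-arg constructor.
        System.out.println("Person instantiable: " + canInstantiate(Person.class));
        // With objenesis on the classpath, the call would instead be
        // (per the project's documented API):
        //   Person p = (Person) new ObjenesisStd().newInstance(Person.class);
    }
}
```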

Hmm, ...

Wow! JMock has completely changed its api, ... for the better

I have been a long time user of jmock 1.x and suddenly had the need of jmock in a new project. So, I went and took the latest and greatest release and started mocking.

But what the...? It did not work as I was used to.

So, I went to look at the documentation and found the apis completely rewritten.

Gone are constructs like this:
   mockSubscriber.expects(once()).method("receive").with( eq(message) );
Where the method being mocked is named inside a string literal.
Say hello to the new way:
   context.checking(new Expectations() {{
       one (subscriber).receive(message);
   }});
The method call one(subscriber) takes the subscriber mock (which is typed as the class being mocked), and returns an instance of that very same type (using generics).
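Just to illustrate that type flow, here is my own stripped-down sketch (not jmock's actual implementation):

```java
// Sketch: a generic capture method can hand the mock back typed as the
// mocked class itself, so the expectation call that follows is checked by
// the compiler. jmock's real one() also records "expect exactly one call".
public class ExpectationSketch {

    public static <T> T one(T mock) {
        return mock;
    }
}
```

Because one(subscriber) comes back typed as the mocked class, the .receive(message) that follows is an ordinary compiler-checked call, which is exactly why IDE refactorings follow along.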

Expressing the mock expectations by simply calling methods on an actual instance of the mocked class is nice, as it makes refactoring in IDEs easier. Actually, I think this is what EasyMock has been doing for quite some time now :-)

JMock uses something called hamcrest to match the expectations against what is actually executed when the test runs. It is also nice to see that jmock has kept its fluent interface style of writing tests. I like that.

Actually, come to think of it, I think one of my good and really clever colleagues told me this in a tech-session he hosted at work. Hmm, I have been slow on the uptake of this one :-)

Wednesday, June 27, 2007

JetBrains should be making/maintaining the idea:idea maven plugin

I am a regular user of the maven idea plugin to auto-generate the IDEA project files from the POM sources. But, lately I have been having a bit of trouble.

There are settings that I apply in IDEA, that the plugin cannot generate. Hence, the next time I generate new project files, these settings are overridden. One example is the new facets feature in Selena, which adds support for JPA, Spring and Hibernate. The idea maven plugin cannot generate these settings in the project files.

So, why are JetBrains not actively contributing, to keep the plugin usable and up-to-date? It would make their IDE easy (easier) to use on maven projects. Just like they are now doing with the new maven integration in upcoming IDEA7.

Friday, June 22, 2007

Before going live in production with rails app...

Here are some steps that I found important before going live with my first rails application out there. Of course, there are others, but here we go:
  • Timeout old sessions
  • Send notifications on application errors
  • Roll log files
Timeout old sessions
I am using the activerecord store for my session data and hence, I need to delete some "timed out" session rows in the "sessions" table. This is easily done using this cronjob line:

00 02 * * * bash -c "cd $RAILS_APP_DIR && script/runner -e production \"CGI::Session::ActiveRecordStore::Session.delete_all(['updated_at < ?', 12.hours.ago])\""

If you are using the file based store (default) you can use a find with some -ctime parameters or something.
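For example, something along these lines in the crontab might do (hypothetical: the tmp path and the ruby_sess file prefix depend on your setup, so check what your session files are actually called before letting a cronjob delete them):

```
00 02 * * * find /tmp -name 'ruby_sess.*' -ctime +1 -exec rm {} \;
```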

Send notifications on application errors
To be notified by email when an application error occurs, I simply installed and configured the exception_notification plugin.

ruby script/plugin install exception_notification

Inserted in ApplicationController:

include ExceptionNotifiable

Edited config/environment.rb:

ExceptionNotifier.exception_recipients = %w(my@email.here)
ExceptionNotifier.sender_address = %("Application Error" )
ExceptionNotifier.email_prefix = "[MYAPP ERROR] "

Roll log files
The common advice seems to be not to let the log implementation in rails rotate the log files itself, if one has multiple mongrel instances (they share the log files). So, I opted for a rails-external solution. Luckily, on linux we have something called logrotate.

In my linux setup /etc/logrotate.conf already contained the line "include /etc/logrotate.d", which makes adding more log rotation as easy as adding a file to the logrotate.d dir. I added a file shown below, which is fine for my setup:

/var/opt/rails/myapp/log/*.log {
  daily
  missingok
  rotate 21
  compress
  delaycompress
  notifempty
  copytruncate
}

Shit, that was easy. Gotta love deploying on linux :-)

Wednesday, June 20, 2007

False positives with FindBugs on IoC code

I just configured FindBugs on a current spring and JPA enabled project, using the maven-findbugs MOJO from codehaus.

A first run detected some bugs, which were purely due to our code using the Spring container for dependency injection and JPA for mapping to the database. Something is happening at runtime that FindBugs does not account for. Examples follow...

Field not initialized in constructor (UWF_FIELD_NOT_INITIALIZED_IN_CONSTRUCTOR)
This is simply a dependency property on the class, which gets injected through a setter on the instance at runtime. What FindBugs tells me is that it cannot spot any initialization of the private field in the constructor, before it is used in my class. The code is like this:
public class FooServiceImpl implements FooService {
    private BarService barService;

    public void setBarService(BarService barService) {
        this.barService = barService;
    }

    public void someServiceMethod() {
        // use of barService here...
    }
}
Hmm, well. I want to filter that out. You can do that with the following findbugs filter:
<findbugsfilter>
    <match>
        <package name="~com\.company\.system\.service.*"/>
        <class name="~.*ServiceImpl"/>
        <or>
            <field name="~.*Dao"/>
            <field name="~.*Service"/>
        </or>
        <bug pattern="UWF_FIELD_NOT_INITIALIZED_IN_CONSTRUCTOR"/>
    </match>
</findbugsfilter>
Basically, it says: filter out this bug for all classes named something ending in ServiceImpl, placed below the com.company.system.service package, if the field in question is named something ending in Dao or Service (like barService in this example).

Private method is never called (UPM_UNCALLED_PRIVATE_METHOD)
This one is in a JPA-mapped class, where the setter for the "id" attribute is marked private, as it should not be called from application logic, only from the JPA mapping layer through reflection. The code is like this:
public class Bleech {
    private Long id;

    private void setId(Long id) {
        this.id = id;
    }
}

So, findbugs cannot see that this method is ever called, although it will be at runtime, when the JPA mapper kicks in. Again, I want to filter these bugs out, which can be done with this findbugs filter:
<findbugsfilter>
    <match>
        <package name="~com\.company\.system\.domain.*"/>
        <method name="~set.*" returns="void" params="java.lang.Long"/>
        <bug pattern="UPM_UNCALLED_PRIVATE_METHOD"/>
    </match>
</findbugsfilter>

This filter matches classes in the domain package that have a "void setXxx(Long)" method signature.

Configuring exclude filter on the maven-findbugs-plugin
I configured my filter settings above as exclude filters on the maven plugin like this in the pom:
<reporting>
    <plugins>
        <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>findbugs-maven-plugin</artifactId>
            <version>1.1</version>
            <configuration>
                <effort>Max</effort>
                <excludeFilterFile>${basedir}/src/main/resources/findbugs-exclude.xml</excludeFilterFile>
                <onlyAnalyze>com.company.*</onlyAnalyze>
            </configuration>
        </plugin>
    </plugins>
</reporting>

and saved the findbugs-exclude.xml in src/main/resources (a specific one for each module: domain and services).

Initially, I had some trouble getting the findbugs plugin to find my filter file. Running maven with "-X" revealed a DEBUG statement about where it tried to find my filter file (which, accidentally, was not where it actually was).

Will it help me or irritate me?
Not having used FindBugs for real before, I am wondering if it will actually make my code better than it was before, or if I will be tired of it before that happens.

I already filtered out a bug about serialVersionUID not being defined for a serializable class (SE_NO_SERIALVERSIONID). I hate that one! I once worked with Eclipse for a short time, and it kept complaining about that too. Who cares? I do not send class bytecode over the wire to other systems or store serialized versions for later retrieval. In the actual code, it was a servlet of mine which, simply by inheriting from HttpServlet, became serializable.

Anyway, I (and the complete team) use IDEA, which is already pretty good at spotting a lot of these errors and warnings on the fly, and at helping me correct them too.

Will findbugs give me anything new?

Friday, June 15, 2007

Tip: Simple fileformat (dos/unix) conversion with VIM

Sometimes, someone drops you a text file with Windows newlines, and you need it converted for your Unix box. There are one-liners to do it in sed, awk, perl, tr, ... but they always seem to have slipped my mind when I need them.

Here is a simple way to do it with vim:
  • vim file
  • :set fileformat=unix
  • :wq
Done!
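For the record, here is the tr variant of those one-liners, so I can find it again next time:

```shell
# Fabricate a file with DOS (CRLF) line endings:
printf 'line1\r\nline2\r\n' > dosfile.txt
# Strip every carriage return, leaving plain unix newlines:
tr -d '\r' < dosfile.txt > unixfile.txt
```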

A Better logger hierarchy naming strategy for log4j

The common default when using log4j or commons-logging is to create loggers with the fully qualified class name (FQCN) of the logging class as the logger name. This makes it quick and easy to create loggers, without thinking too much about the logger hierarchy (the assumption being that a good package structure maps to a good logger hierarchy).

It is not necessarily so, though, that the package structure of the system maps to a good logger hierarchy structure.

If the FQCN is used, you get a very fine-grained hierarchy. A simpler hierarchy at the top can make it easier to understand which parts to change levels on, to get the output you want.

An example
As an example, consider the application named "SuperApp". SuperApp is a webapp, with a services and a dao layer. We could define only a few top-level logger names like this:
  • superapp.common
  • superapp.web
  • superapp.services
  • superapp.domain
Each logging class could use these as the first part of their name, and then append the simple class name of itself afterwards. Using this naming strategy will give you a very coarse level at the top of the hierarchy, while retaining a fine grained level at the leaf level.
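In code, the strategy could look like this little sketch (the helper and the area names are my own invention; with log4j you would feed the resulting string to Logger.getLogger(String)):

```java
// Sketch: build a logger name from a coarse application area plus the
// class's simple name, instead of using the fully qualified class name.
public class LoggerNaming {

    public static String loggerName(String area, Class<?> cls) {
        return area + "." + cls.getSimpleName();
    }

    public static void main(String[] args) {
        // In a service class, with log4j, this would look something like:
        //   private static final Logger log =
        //       Logger.getLogger(loggerName("superapp.services", FooServiceImpl.class));
        System.out.println(loggerName("superapp.services", LoggerNaming.class));
    }
}
```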

Actually, I have a few more top-level names, for example "superapp.security", which represents something cross-cutting to a lot of the application. Also, I sometimes have an extra naming level after "superapp.xxx", which is kind of "in the middle", just before the leaves.

What is good about this?
Often, when I debug, I find myself looking around in the hierarchy of loggers, turning something on here and there, to get the collected picture of the complete flow that I want to follow. When using the naming strategy outlined above, I find it much easier to simply turn on a complete area of the system.

Also, when systems management are debugging, they have no or very little knowledge of the package structure of the application. They can better relate to names that are targeted at the application, as above.

Thursday, June 14, 2007

Flex compilation 15x faster with the flex compiler shell

When compiling flex applications with the command-line compiler mxmlc (or compc), compilation is slooooow. Mostly due to startup and warmup time, I think. But this startup has to be done at each compilation.

Then I came across the Flex Compiler Shell (fcsh). It is kind of like the old maven 1.0 console plugin. Fcsh starts an interactive shell, in which you can execute compilations. The first compilation is slow like before, but compilations afterwards are much faster. The compiler is loaded and warmed up. In addition, it only does incremental compilation.

As an example: when using the maven plugin as described in my post about building flex with maven, compiling our current flex application into a flash movie takes ~15 secs on my development box. When I run it inside a warmed-up fcsh, it takes ~1 second. A 15x speedup!
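As I recall, a fcsh session looks something like this: you give it the same arguments you would give mxmlc, it assigns a numbered compile target, and subsequent recompiles go through that id (output quoted from memory, so details may differ):

```
$ fcsh
(fcsh) mxmlc -o Main.swf Main.mxml
fcsh: Assigned 1 as the compile target id
(fcsh) compile 1
```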

Adobe says fcsh is targeted for inclusion in the Flex3 release, but it is ready now to be used with Flex2. You just have to download and install it yourself.

Tip: The sources are in the Flex SDK already

I'm a bit slow, I know. But I just found out, that the statement from Adobe on open sourcing the flex platform means that, well, it is already open sourced.

I was looking for an svn repository with the sources and could not find it. I think the infrastructure is not set up for the public yet. But then I found a reference telling me the sources are in the SDK already.

And yes, it is. Go take a look into: FLEX_SDK/frameworks/source

Nice! Thanks, Adobe.

Wednesday, June 13, 2007

Flash for Java Programmers: Lesson 4 - ActionScript and organizing large flex code bases

Steps on learning to develop Flash, with a Java developer focus...

This is lesson 4 in my series of posts on what I learn about developing filthy rich flash apps using flex2. The other lessons are:

In this post, I will show how we extract all our ActionScript code into files separate from the MXML files, to get the whole thing separated a bit. I will also show how large user interfaces can be split into separate, smaller mxml files.

What is ActionScript?
It is the scripting language you program flash frontends in. ActionScript version 3 is based on the upcoming ECMAScript specification (ECMA-262) and includes language extensions for working with XML, based on ECMAScript for XML (E4X, ECMA-357). It is an object oriented, dynamic language in which you can do everything you can do with mxml, and a whole lot more, to make your applications interactive.

The Problems
When you first start programming in flex, you can simply put your ActionScript (AS) code directly inside the MXML files inside <Script>-tags. Like this:
<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml">
    <mx:Script>
        <![CDATA[
            import mx.controls.Alert;

            private function clickHandler() : void {
                Alert.show("Hello, from embedded ActionScript");
            }
        ]]>
    </mx:Script>
    <mx:Button label="Click me" click="clickHandler();"/>
</mx:Application>
But when the application logic grows more complicated, you will feel the need to extract the AS code into files separate from the xml. If nothing else, then to have xml syntax and AS syntax separated.

What is more, when the user interface of the application grows larger, it quickly becomes unfeasible to have it all in one Main.mxml file. So, we also need to split the MXML into separate, smaller files.

The solution
What we have been using is a two-step process, where we put AS code into helper files and extract views and UI components into separate MXML files. Take this simple UI as an example, which contains a TabNavigator with two tabs:
<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml">
    <mx:TabNavigator>
        <mx:Panel title="Login View">
            <mx:Label text="We are now in Login View"/>
        </mx:Panel>
        <mx:Panel title="Application Main View">
            <mx:Label text="We are now in main view"/>
        </mx:Panel>
    </mx:TabNavigator>
</mx:Application>
In this example, the UI in each tab is very limited, but more often than not it will be quite complex, with a lot of script logic too. So, I will show how to extract the content of each tab into separate mxml files. Here are the steps:
  • Create a directory "view" at the same level as the Main.mxml file
  • Inside this directory, create LoginView.mxml and MainView.mxml
  • Move UI definition from Main.mxml into each of the files, wrapping each in a Canvas
  • Reference the new, separate view files from Main.mxml using a view namespace
Here are the contents of the LoginView.mxml:
<?xml version="1.0" encoding="utf-8"?>
<mx:Canvas xmlns:mx="http://www.adobe.com/2006/mxml" width="100%" height="100%">
    <mx:Panel title="Login View">
        <mx:Label text="We are now in Login View"/>
    </mx:Panel>
</mx:Canvas>
And here are the contents of the MainView.mxml:
<?xml version="1.0" encoding="utf-8"?>
<mx:Canvas xmlns:mx="http://www.adobe.com/2006/mxml" width="100%" height="100%">
    <mx:Panel title="Application Main View">
        <mx:Label text="We are now in main view"/>
    </mx:Panel>
</mx:Canvas>
The only thing to notice in the above two files is that the root element is a Canvas; we are not allowed to use Panel as the root element.

And last, here are the contents of the Main.mxml:
<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" xmlns:view="view.*">
    <mx:TabNavigator>
        <view:LoginView />
        <view:MainView />
    </mx:TabNavigator>
</mx:Application>
Looking at the above code, we see that the Application element now has a namespace definition xmlns:view="view.*", which makes flex resolve elements in the view namespace against the directory named view. The content inside the TabNavigator has been completely replaced with references to the view files, using that namespace.

And what about AS code then? How do we put that into separate files? Well, what we did was create a helper file for each mxml file. If Main.mxml had an init() function called from creationComplete, we could create a MainHelper.as file with this content:
import mx.controls.Alert;

private function init() : void {
    Alert.show("init() called");
}
And then include and use it from Main.mxml like this:
<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" creationComplete="init()" xmlns:view="view.*">
    <mx:Script source="MainHelper.as" />
    ...
We actually tried calling the file Main.as, that is, the same as the mxml file, just with another extension. But this failed in compilation, if I remember correctly. My guess is that the compiler translates the files into named types in the bytecode, where Main.mxml and Main.as would then clash. But it is a guess.

A little nice side-effect
Extracting AS code into .as files makes IDEA able to work better with them, even though it has no Flex support yet. I think it is its killer JavaScript support, that kicks in.

Still not good enough
All the organization we have done above is only for the developers, to be able to manage the code base and maybe reuse some UI components. It is all still linked into one big .swf file, though. And this might become a problem when the application grows larger.

There seem to be solutions to this. I have just not had the need for them yet. If I get the need, I will blog about it here :-)

Downloading the source
I have zipped up the sources. They can be downloaded from here. Ready to be built with maven.

Tuesday, June 12, 2007

A good xml schema documentation tool?

I am looking for a good tool to help browse around in a huge amount of xml schema definitions with a lot of interactions between them. A little googling around gave me the list below, but I can't say I knew any of them beforehand, not even by name:
Do you have any positive or negative experiences about the above tools or know of another great tool for xml schema browsing and documenting?

WebLogic changed default for AnonymousAdminLookupEnabled in SP5 of WLS 8.1

If you suddenly, after a service pack upgrade, start getting the error:

User: '' has insufficient permission?

in your weblogic-deployed application when accessing the WebLogic MBeans, you should be aware that BEA changed the default value of the AnonymousAdminLookupEnabled attribute in SP5 of WebLogic 8.1.

Prior to SP5, the default was true, allowing lookup of the MBeans in JNDI with the anonymous user. From SP5 and ahead, the default is false.

Either put a login in the InitialContext or edit your config.xml to read:
<SecurityConfiguration
AnonymousAdminLookupEnabled="true" ...

Currently, the docs still document it as having a default of true.

Monday, June 11, 2007

Will JSR-277 take us out of the Dependency-Hell?

Back before .Net, on Windows there was something called DLL-hell. In Java, we have a somewhat milder version of that problem, which we could call Jar-hell, but which is really a more general dependency hell.

Take a common technology stack for a JEE application, it could contain these major dependencies:
  • Tapestry
  • Spring
  • Hibernate
Each of these dependencies depends on other libraries (jar files), which again depend on others... More often than not, there will be conflicts. Tapestry might depend on commons-collections-3.0 and hibernate might depend on v2.0. Only one version can be loaded by a given classloader, so one of them has to work with a version it was not tested with.

These dependencies easily get out of hand. The most common solution I see applied out there is something along the lines of taking the most recent version of the jars in conflict and hoping it works.

Another problem is that the dependencies between jars are not explicit. There is no standard way for a jar file to express a dependency on another jar file in a given version. Today, we need each individual project to document its dependencies. More often than not, this is not done, and the jar file dependencies distributed with a library often have file names without versions.

.Net solved this
In the .Net platform, assemblies are the jars of .Net, and all assemblies contain both their own version and the names and versions of the other assemblies they depend upon. The CLR is responsible for using this information at execution/load time to locate the depended-upon assemblies, in the correct versions, and link them together.

To be able to do this, there are some important aspects of the .Net platform itself:
  • A standard, system-wide place to deploy assemblies (the Global Assembly Cache, GAC)
  • Version of assembly as a part of the assembly file format itself
  • Version and names of depended upon assemblies as part of the assembly file format itself
  • Code in the CLR (the VM of .Net) that makes use of this information
In addition to this, each application can use an application configuration file to further specify the assembly linking information. For example, if the need arises to link against something specific. Here is an example of such an application configuration file:
<?xml version="1.0"?>
<configuration>
    <startup>
        <requiredRuntime version="v2.0.50727"/>
        <supportedRuntime version="v2.0.50727"/>
    </startup>
    <runtime>
        <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
            <dependentAssembly>
                <assemblyIdentity name="foo-assembly" culture="" publicKeyToken="83d83231a2fe9534"/>
                <bindingRedirect oldVersion="2.1.0.0" newVersion="2.2.0.0"/>
            </dependentAssembly>
        </assemblyBinding>
    </runtime>
</configuration>

This file will force the CLR version used to v2.0.50727 (and fail execution if that version is not available), and redirect the dependency on v2.1.0.0 of foo-assembly to v2.2.0.0 of the same assembly.

JSR-277 might come to rescue
Finally, someone has seen the need for this on the Java platform too. JSR-277, "Java Module System", seems to aim at solving this stuff. What JSR-277 will provide is:
  • A new distribution format called a Java Module, which will contain metadata, both about what the module itself contains, but also about which dependencies code in the module has.
  • A standardized versioning scheme
  • A repository for storing, discovering and retrieving modules
  • Runtime support in application launcher and class loader for using the module metadata
Java Modules will simply be jar files which contain a METADATA.module file in a MODULE-INF directory.

It looks like the JSR has taken a lot of its ideas from OSGi, for instance the idea of explicitly exporting which parts of the module can be depended upon from other modules. Actually, this opens up a nice feature: the JVM can hide internal dependencies from other modules, in effect making different versions of the same jar file possible in the same application, as long as they are non-exported dependencies.

All in all, JSR-277 seems like a huge bunch of features and functionality that has the potential for a major impact on the platform as we know it.

JSR-277 seems to be scheduled for Dolphin, and I hope it makes it in. But I also hope for early adoption of its ideas by the community. The success of such a feature depends on framework and library authors embracing it. Just look at what maven has done for a common repository to locate dependencies from, and for version information in the names of jar files. There was a long time with maven1 where far from all projects embraced its use.

So, what do you think, will JSR-277 take us out of the Dependency-Hell? I sure do hope so!

Friday, June 08, 2007

Problems with file upload into MySQL in rails

Today, I was deploying a rails site of mine into production on a Linux virtual host. Hmm, not without problems. Some of them were not due to Rails; others were.

MySQL troubles
The upload was failing and I was getting this error from MySQL afterwards:

Incorrect key file for table '/tmp/#sql_4108_0.MYI'; try to repair it

After some digging around, I found out it was due to /tmp on my Linux virtual host being only 32MB. Mysql seems to use a temporary MyISAM table when inserting large binary data, and when there wasn't enough space in /tmp for that table, mysql failed.

I fixed this by setting "tmpdir = /var/tmp" (which has lots of space on my host) in my.cnf

Rails / CGI Upload
Now, mysql did not crash and corrupt on me anymore, but the upload still failed.

It turns out that Rails calls CGI::QueryExtensions.read_multipart, which calls Tempfile.new, which in turn calls the core class method Dir::tmpdir, and this method uses environment variables to determine the tmp dir to place the uploaded file into.

Fixed this by setting ENV['TMPDIR'] = '/var/tmp' in config/environments/production.rb.

Giving it a tmp dir with more space available resulted in files being fully uploaded into /var/tmp, but the files were left there and the mongrel process hung from there on. No information available in mongrel.log or production.log. WTF!

Back in mysql trouble
I then remembered back in time when I had some other problems with mysql/activerecord communication, which I solved by installing a mysql gem compiled against the platform's mysql devel libs. Like this:

sudo aptitude install libmysqlclient-dev
sudo gem install mysql

Which instantly gave me better error messages. Now I could see an entry in production.log about "max_allowed_packet" being too small. Funny, because I specifically set that to 16M in my.cnf, and the file uploaded was only 8M. Hmm, maybe it is inserted in a more voluminous format, so I raised it to 32M.

And it worked!

Removing old CGI upload data?
One thing left: the temporary files from the CGI library upload keep lying around until I restart the mongrel instances. If the instances die, the data does not get cleaned up, and waiting for a restart before cleanup is not that nice either. Digging a little deeper into Tempfile.new, I notice the file will be cleaned up at garbage collection time of the Tempfile instance, unless an explicit close/unlink is done first. As far as I can see, no explicit unlink is done in CGI::QueryExtensions.read_multipart, so I guess I will have to wait for the ruby garbage collector to come by.

No one has ever been fired for choosing JSF

Inspiration for this blog post came from a colleague of mine, who sent me a picture from indeed.com showing job trends for JSF and other web frameworks (see the picture below).


There is a saying that goes "no one has ever been fired for choosing IBM", which means that IBM is such a brand that if they (IBM) do not match your needs after all, once the deal has closed, you are not to blame--you just had a bad experience with a little part of IBM. Or at least, that is my understanding of the saying.

Now, I say the same holds for JSF! No one will be fired for having chosen JSF...

I have a hard time seeing anything new or remotely interesting in JSF when compared to other web frameworks. A web framework like Tapestry has really innovated with its component focus, preview of templates, super error reporting and shielding from the HTTP stack. And the upcoming v5 looks like it will become absolutely awesome, with its POJO focus, annotations and development cycles reaching rails productivity. Or Wicket, which provides a full-blown component model and a programming model more like swing programming. And these are just some to mention.

So why choose JSF? Because it is an industry standard, backed by Sun, the creators of Java and Enterprise Java. The tool support will be everywhere. You can find tons of people who can do JSF programming. Googling for problems and errors is sure to turn up lots of results...

But, even though you won't get fired for choosing JSF, it does not mean that it was a good decision. You can do much better than JSF by looking around.

By the way, the graph above can be interpreted in another way: JSF sucks, which makes programmers who are forced to use it quit their jobs, which produces a lot of new job postings for JSF :-)

Monday, June 04, 2007

NullPointerException in the Israfil Mojo?

If you experience a NullPointerException when you try to compile flex2 sources with the Israfil maven mojo, it is most likely due to either setting the flexHome property wrongly or simply forgetting to set it. Make this mistake (as I just did) and you will get these complaints:

[INFO] Scanning for projects...
[INFO] ----------------------------------------------------------------------------
[INFO] Building flash-lesson3
[INFO] task-segment: [install]
[INFO] ----------------------------------------------------------------------------
[INFO] [flex2:compile-swf]
[INFO] Compiling Flex 2 code
[INFO] ------------------------------------------------------------------------
[ERROR] FATAL ERROR
[INFO] ------------------------------------------------------------------------
[INFO] null
[INFO] ------------------------------------------------------------------------
[INFO] Trace
java.lang.NullPointerException
at net.israfil.mojo.flex2.AbstractFlexMojo.execute(AbstractFlexMojo.java:278)
at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:420)
at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:539)
at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalWithLifecycle(DefaultLifecycleExecutor.java:480)
at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:459)
at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:311)
at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:278)
at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:143)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:330)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:123)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:272)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315)
at org.codehaus.classworlds.Launcher.launch(Launcher.java:255)
at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430)
at org.codehaus.classworlds.Launcher.main(Launcher.java:375)
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1 second
[INFO] Finished at: Mon Jun 04 21:10:21 CEST 2007
[INFO] Final Memory: 4M/9M
[INFO] ------------------------------------------------------------------------
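The fix is simply to point the plugin at your Flex 2 SDK installation. As a minimal sketch, the plugin configuration in the pom would look something like the following (the groupId and artifactId here are my assumption based on the Israfil project's naming and the package names in the stack trace — verify them against the plugin's own documentation, and adjust the path to wherever your Flex 2 SDK lives):

```xml
<plugin>
  <!-- Coordinates are an assumption; check the Israfil flex2 plugin docs -->
  <groupId>net.israfil.mojo</groupId>
  <artifactId>maven-flex2-plugin</artifactId>
  <extensions>true</extensions>
  <configuration>
    <!-- flexHome must point at the root of a Flex 2 SDK installation -->
    <flexHome>/opt/flex-sdk-2</flexHome>
  </configuration>
</plugin>
```
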

Flash for Java Programmers: Lesson 3 - View States and switching views

Steps on learning to develop Flash, with a Java developer focus...

This is lesson 3 in my series of posts on what I learn about developing filthy rich flash apps using flex2. The other lessons are:
In this post I will show how to control which "view" to show in a big flex2 application. I will be using the ViewStack component and what is called "View States" to nicely shift the UI state.

The Problem
As a Java web developer used to building websites with HTML/JS (and no Ajax), there is no problem in switching from one view (page) of the application to another. It is simply a link or a button, and the server returns a completely new page. In Flash, on the other hand, one typically serves one big Flash movie which is the complete UI of the application, and logic in the UI then takes care of which "view" to show at a given time.

Basically, with Flash we face some of the same challenges that, for instance, Swing developers have.

The Solution
Flex2 has built-in support for multiple views via the ViewStack container component. In the example Flash just below, I show how to switch between a login screen (one view) and the application's main view (another view):






When the "Login" button is clicked, the current view state is changed, which makes the UI change too: it switches to another view in the ViewStack. Clicking "Logout" in the main view switches the state back. Let's take a look at the MXML code:
<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" creationComplete="currentState = 'Anonymous';">
  <mx:states>
    <mx:State name="Anonymous">
      <mx:SetProperty target="{views}" name="selectedChild" value="{loginView}"/>
    </mx:State>
    <mx:State name="Authenticated">
      <mx:SetProperty target="{views}" name="selectedChild" value="{mainView}"/>
    </mx:State>
  </mx:states>
  <mx:Fade id="fader"/>
  <mx:ViewStack id="views" width="300" height="150">
    <mx:Panel id="loginView" title="Login View" showEffect="fader" paddingTop="5" paddingLeft="5">
      <mx:Grid>
        <mx:GridRow>
          <mx:GridItem><mx:Label text="Username"/></mx:GridItem>
          <mx:GridItem><mx:TextInput /></mx:GridItem>
        </mx:GridRow>
        <mx:GridRow>
          <mx:GridItem><mx:Label text="Password"/></mx:GridItem>
          <mx:GridItem><mx:TextInput displayAsPassword="true"/></mx:GridItem>
        </mx:GridRow>
        <mx:GridRow>
          <mx:GridItem colSpan="2"><mx:Button label="Login" click="currentState = 'Authenticated';"/></mx:GridItem>
        </mx:GridRow>
      </mx:Grid>
    </mx:Panel>
    <mx:Panel id="mainView" title="Application Main View" showEffect="fader">
      <mx:Label text="We are now in main view"/>
      <mx:Button label="Log out" click="currentState = 'Anonymous';"/>
    </mx:Panel>
  </mx:ViewStack>
</mx:Application>
Some key pointers to the above MXML code:
  • The "mx:states" and "mx:State" elements define named view states, which contain actions that trigger when the state shifts. The actions are simple here: set the "selectedChild" property of the ViewStack component.
  • A ViewStack contains one view for each of its child containers. Only one container in the ViewStack is visible at a time; the visible view is changed by setting "selectedChild".
  • The top-level "Application" element uses the "creationComplete" event, which fires at application startup, to initially set the view state to "Anonymous", thereby showing the login dialog by default.
  • Clicking the "Login" button assigns the string "Authenticated" to "currentState", which in turn makes the Flash runtime trigger the actions of the view state with that name. This sets the visible view in the stack to "mainView".
All nice and easy!

Downloading the source
I have zipped up the sources. The zip can be downloaded from here, ready to be built with Maven.

Friday, June 01, 2007

Tip: Selecting random row with MySQL and ActiveRecord

Here's a quick tip on how to select a random model instance from an ActiveRecord model when the database is MySQL:

Model.find(:first, :order => 'rand()', :limit => 1)

Please note:
  • Using rand() is MySQL-specific (PostgreSQL and SQLite, for instance, call it random())
  • The :limit => 1 is redundant, since :first already limits the query to a single row
  • It performs badly when the table has many rows: rand() is evaluated for every row, and the whole table is then sorted
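For larger tables, a common workaround is to pick a random offset first and fetch the single row at that offset, which avoids sorting the whole table. In Rails this would be roughly `Model.find(:first, :offset => rand(Model.count))`. Here is the idea sketched in plain Ruby, with an array standing in for the table (the row data is made up purely for illustration):

```ruby
# Random-offset selection, sketched with a plain array standing in for
# the table. In ActiveRecord this corresponds roughly to:
#   Model.find(:first, :offset => rand(Model.count))
rows = %w[alice bob carol dave]  # stands in for the table's rows
offset = rand(rows.length)       # random offset in 0...count
random_row = rows[offset]        # like :offset => offset on the finder
puts random_row
```

Note that this costs two queries (a COUNT plus the fetch), but both are cheap compared to sorting every row by rand().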