Thursday, January 25, 2007

Our first Tuna (eduna) blog!

Thanks to the first blogger (outside netlabs :)) who mentions e-duna! .... Of course, maybe he's not the first, but he is the first we found ;).


For some time now, we've been using AspectJ at work on several projects, after playing with Spring 1.2's AOP support, which fell short because it implemented AOP with proxies (or CGLIB, which carried some other limitations).

It was pretty hard to convince the development team to try aspects, and I had to do it step by step. The main concern was having code execute without being able to tell, from reading the source, what was being executed. So I decided to mix aspects with annotations, and advise only the parts of the code explicitly marked with a particular annotation. If you wanted the advice, you just added the annotation, and by looking at the annotation you knew an aspect was being woven in. Annotations mixed with aspects worked as a neat kind of code reuse that we could use as an alternative to inheritance and composition/delegation.
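Our actual implementation used AspectJ weaving, but the core idea (only methods carrying a marker annotation get advised) can be sketched with nothing more than a JDK dynamic proxy. The annotation and class names below are made up for illustration:

```java
import java.lang.annotation.*;
import java.lang.reflect.*;

// Hypothetical marker annotation: only methods carrying it get advised.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Audited {}

interface Greeter {
    @Audited
    String greet(String name);

    String whisper(String name); // not annotated, so no advice
}

public class AnnotationAdviceSketch {
    // Wraps the target so that only @Audited methods trigger the "aspect".
    @SuppressWarnings("unchecked")
    static <T> T advise(Class<T> iface, T target, StringBuilder log) {
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[] { iface },
                (proxy, method, args) -> {
                    if (method.isAnnotationPresent(Audited.class)) {
                        log.append("before ").append(method.getName()).append('\n');
                    }
                    return method.invoke(target, args);
                });
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        Greeter g = advise(Greeter.class, new Greeter() {
            public String greet(String name)   { return "hello " + name; }
            public String whisper(String name) { return "psst " + name; }
        }, log);
        g.greet("world");   // advised: logged
        g.whisper("world"); // not annotated: silently passed through
        System.out.print(log);
    }
}
```

The point the sketch makes is the one we argued to the team: the annotation at the call-advised method is the visible, in-source evidence that extra code runs there.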

We used that mechanism to add statistics support to our messaging gateway (a gateway for MMS/SMS from and to different protocols, including SMPP, HTTP, web services, MM1, MM7, RMI...), and it was quite successful. You only needed to annotate a method, specifying the type of the stat and its namespace (the place of the stat in a browseable hierarchy of stats, which could include parameter values, return types, or properties of the annotated method's class). We did something similar to check for authorization before executing a method, which also worked OK (even though we still needed additional layers of authorization before and after method-level authorization).
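In shape, the stat annotation looked something like the following sketch (the names and the namespace scheme here are invented, not our real ones): the stat's type and its position in the hierarchy are declared on the method, and the advice reads them at runtime.

```java
import java.lang.annotation.*;
import java.lang.reflect.Method;

// Hypothetical stat annotation: type and namespace are declared on the
// method, so the aspect knows what to record and where it lives in the
// browseable hierarchy of stats.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Stat {
    String type();
    String namespace();   // e.g. a path in the stat hierarchy
}

public class StatAnnotationSketch {
    @Stat(type = "counter", namespace = "gateway/smpp/submit")
    public void submitMessage() { /* real work would go here */ }

    // What the advice would do: inspect the annotation and record the stat.
    static String describe(Method m) {
        Stat s = m.getAnnotation(Stat.class);
        return s == null ? "no stat" : s.type() + " @ " + s.namespace();
    }

    public static void main(String[] args) throws Exception {
        Method m = StatAnnotationSketch.class.getMethod("submitMessage");
        System.out.println(describe(m));
    }
}
```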

My next step was to try to implement synchronization using aspects. Our messaging gateway is based on the concept of independent services that share the same lifecycle (started, stopped, reloading, etc.). Most of the lifecycle methods needed to execute under an exclusive lock (since we want to prevent start and stop from running concurrently, and reloading from running while a request is being handled), while the service methods (things like submitting a message) must not execute while a lifecycle method is executing, but may run concurrently with other service methods. To achieve this, I considered three alternatives:
  • Write code before and after each lifecycle and service method to synchronize access. Since the gateway is basically a framework for implementing new services, programmers would also need to remember to write this code, or things could go wrong. This approach meant A LOT of duplicated code, and forced the framework user to write code in each service method to make sure his services were well behaved.
  • Use inheritance, through a template pattern, so the programmer subclasses a template class which handles synchronization. This requires a new class for each type of possible service (and there are more than a few!), plus an additional method for each template method (the template method wrapper that handles synchronization, and the template method itself).
  • Use aspects.
By using aspects, I could ensure that any class implementing a given service interface had each of the required methods correctly synchronized. When a framework user needed to add a new service, he just implemented the interface and didn't have to worry about handling the synchronization issues, reducing the code he had to write (which was the main objective of the framework.... we always want an as-simple-as-it-gets "hello world" example ;)).
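The locking discipline the aspect weaves in can be sketched with a plain `java.util.concurrent.locks.ReentrantReadWriteLock` (the class and method names here are hypothetical, not our framework's): lifecycle methods take the exclusive write lock, service methods take the shared read lock, so services run concurrently with each other but never with a lifecycle transition.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of the synchronization the aspect adds around a service.
// Lifecycle methods (start/stop/reload) hold the write lock exclusively;
// service methods (e.g. submit) share the read lock.
public class ServiceLockSketch {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private boolean started;

    public void start() {                  // lifecycle: exclusive
        lock.writeLock().lock();
        try { started = true; } finally { lock.writeLock().unlock(); }
    }

    public void stop() {                   // lifecycle: exclusive
        lock.writeLock().lock();
        try { started = false; } finally { lock.writeLock().unlock(); }
    }

    public String submit(String message) { // service: shared
        lock.readLock().lock();
        try {
            if (!started) throw new IllegalStateException("service stopped");
            return "submitted: " + message;
        } finally { lock.readLock().unlock(); }
    }
}
```

Note that `ReentrantReadWriteLock` does not allow upgrading a read lock to a write lock: a service method that calls a lifecycle method on the same thread will deadlock, which is exactly the kind of lock-escalation corner case programmers have to keep in mind.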

The team's reception of this approach was not exactly warm. It's currently working OK, but there are some issues with lock escalation that programmers must take into account in some corner cases (like calling a lifecycle method from a service method).

The main concern is that aspects should not be used when the advised code NEEDS the aspect to work correctly. While I share this view somewhat, I decided to keep the synchronization aspect because it greatly reduced the code base. I still have my doubts, however....

Monday, February 27, 2006

Duna is in Spain!

Duna is a traffic shaping/bandwidth management appliance that can do per-user shaping. It's a product developed at netlabs, the company I work for in Uruguay, which we are selling in Spain in partnership with Inology under the name Duna. Inology is truly a great company, very supportive and with a good long-term vision; after all, they welcomed us even though we came from a small South American country ;).

Even if it's not fair for me to praise it, since we developed it ;), it's actually quite a piece of work. Duna is developed with usability in mind, and targeted at SoHo (small office, home office) companies that can't afford an expensive solution (or for which an expensive solution is too much for their needs, or too complex to manage).

It allows the user to eliminate bandwidth abuse by P2P users, regulating the amount of bandwidth available to those users or blocking P2P applications altogether. It can also filter and block other kinds of 'distracting' traffic, such as MSN. It can likewise be configured to split bandwidth equally among users or to assign a specific bandwidth to individual network users, a feature that is generally not available in this kind of equipment.

These traffic-shaping options extend to VoIP traffic, so Duna can be used to make VoIP viable by making voice run smoothly through the network.

Indulge me and take a look at it ;). If you are looking for this kind of solution, I don't think you will be disappointed.

Sunday, August 14, 2005

SIP for Java

I started working on a SIP client for a VoIP application we are developing at work. Most of the code is in Java, so we wanted to handle SIP in Java too.

However, I must say that SIP libraries for Java seem to be a bit scarce. There's jain-sip, a low-level SIP library that basically handles message formatting, but there are no high-level APIs for implementing, say, a simple SIP phone.

Jain-sip Lite (an API intended for SIP user agents) is no longer under development, even though much of its functionality has been absorbed by jain-sip. There's also jsip, which has also been abandoned.

SIP (described in RFC 3261) is quite simple, but the call/transaction/dialog management (which changed from the previous version of SIP), and the rules that define when to create a new transaction, can become a bit complex, at least for a newbie (like me, for the moment ;)).

I'm currently trying to define an abstraction layer over jain-sip which will allow me to define SIP interactions as Java classes (similar to the ideas in the FIPA agent architecture, which defines behaviors). After that, I plan to wrap some interactions inside a SIPPhone class, to let me make simple calls such as call(<sipaddress>) or register(<publicaddress>, <registeraddress>). I still have to think about how to handle incoming calls and the like, but I suppose I will figure it out :P.
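The shape I have in mind can be sketched like this; none of it is real jain-sip code, and the class names and the `perform()` method are placeholders for where the actual transaction handling would plug in:

```java
// Hypothetical sketch of the abstraction layer: each SIP interaction
// (register, invite, ...) becomes its own class, and a SIPPhone facade
// wires them together over the underlying stack.
interface Interaction {
    String perform();   // would drive the jain-sip transaction underneath
}

class RegisterInteraction implements Interaction {
    private final String publicAddress, registrarAddress;
    RegisterInteraction(String publicAddress, String registrarAddress) {
        this.publicAddress = publicAddress;
        this.registrarAddress = registrarAddress;
    }
    public String perform() {
        return "REGISTER " + publicAddress + " at " + registrarAddress;
    }
}

class CallInteraction implements Interaction {
    private final String sipAddress;
    CallInteraction(String sipAddress) { this.sipAddress = sipAddress; }
    public String perform() { return "INVITE " + sipAddress; }
}

public class SIPPhone {
    public String register(String publicAddress, String registrarAddress) {
        return new RegisterInteraction(publicAddress, registrarAddress).perform();
    }
    public String call(String sipAddress) {
        return new CallInteraction(sipAddress).perform();
    }
}
```

The hoped-for payoff is that the caller only ever sees the two facade methods, while each interaction class keeps its own transaction/dialog bookkeeping in one place.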

I'll post if there are any advances, or if I decide to rewrite the whole thing and drop the interactions idea :P.

Virtual universe inflation rose 20% on Saturday, hackers blamed.

Doing my weekly Gamespot browsing, I found this article. Apparently a group of hackers caused skyrocketing inflation in EverQuest 2.

Is it me, or is the news becoming more and more like science fiction? The Station Exchange service, which allows EQ players to trade in-game goods for US dollars, is also impressive. This was done on eBay before, but obviously Sony realized there was an opportunity here. I suppose that someday, in the near future, one will be able to make a living by playing MMORPGs. I somehow find that a bit frightening.

Thursday, March 31, 2005

Why XUL should stay with Mozilla

When we had to develop a client-side application with a rich and dynamic (generated at runtime) GUI, my first thought was that HTML was not an option. We needed a robust, functional, friendly, and well-programmed GUI in which client responsiveness was a must, and which could somehow be integrated with some additional libraries; I still think my opinion was justified. However, I did like some of the benefits that HTML provided, like zero installation and updates, and especially how easy it was to generate HTML to create a dynamic GUI (in our case, a full-fledged questionnaire, with all kinds of questions, validations, jumps and methodologically correct survey mumbo jumbo).

So we thought it would be perfect if we could find an application development framework with an interpreted code and GUI language that provided more powerful programming components and a clean interface/logic separation. That way, we could easily generate the questionnaires from an internal representation and still have the static GUI code looking nice.

Mozilla XUL

That is how we got to Mozilla XUL. The XUL concept is an excellent one: a very clean separation between presentation and logic, in which GUI elements are described in a quite simple XML dialect and can be referenced and manipulated easily from a programming language. Add to that the fact that:
  • XUL widgets look like native widgets
  • almost all their attributes can be modified with CSS
  • you have a language to define new widgets based on previous ones (XBL, a really cool idea) or to totally modify the way an existing XUL tag is displayed
  • you can manipulate everything from an interpreted language (JavaScript) that has access to a seemingly impressive array of C++ components (XPCOM)
and you should be convinced that the tool must be the right one for the job. Well, don't.

In my opinion, XUL suffers from the same forced, hackish style of programming that HTML GUIs impose on us. By hackish, I mean forced, unnatural, and one in which you end up working around the platform's limitations and idiosyncrasies instead of working on your application.

The most obvious and common problem with Mozilla XUL is the terrible security constraints. OK, I know, they are extremely important in an open, web-based environment; they have to be there. But what if I don't give a damn about security because I'm going to run in a controlled environment? You can disable the checks with some client-side configuration (or by signing the application), but even so, you still need to ask for permission before doing any privileged operation. And what happens if some chrome Mozilla component (for Mozilla, if it's chrome, then it was loaded from the local machine) assumes it is running in a chrome context but isn't? Simple: it won't work. The wizard element did exactly that.

But that is something that could be worked around (subclass the wizard, or copy/paste it into a new element), so let's take JavaScript instead. In my way of thinking, an app has a starting point, where you may declare the requirements of your program (call them imports or includes), and where you start doing things, like creating the first window, initializing services, etc. But hey, Mozilla is a browser, and JavaScript only gets loaded from <script> tags! So, if you want to run anything, create a window. Want to print "hello world" to the console? Create a window.

What if a particular JavaScript "library" (a .js file) needs to access another .js? Well, you have to include that <script> tag in the window too. Does that mean I have to know and list all the dependencies whenever I want to use a particular .js? Yep, that's it. Of course, you can write a function to import a .js file, and in fact there's jslib, which already does it and solves some of the problems. And what about namespace conflicts? Well, I'm feeling lucky, and that doesn't happen often, does it?

Now, suppose that your first window creates another window. Well, the imported JavaScript only exists in the context of the first window. Obvious, it's a browser! You wouldn't want one page's JavaScript in another one! No problem, you can fix it. Just reference the previous window to access that code if you need it (make sure you disable the same-origin security policy if you are accessing a remote page from a local one, obviously). You could even pass an object with all the needed code as a parameter to the second window. But if you do that, don't think about closing the first one, because all the rest of its code and objects will get collected, and if you later try to access the object you passed, it might be missing a few needed functions. You get a similar problem if a window opens an async connection (for example, with an XmlHttpRequest object) and gets closed. Why, you ask me? I don't know; it's somewhat related to the LoadGroup of the HttpChannel object. Look at the Mozilla source code and stop bugging me.

Even so, JavaScript is a great language. People say that you can't do object-oriented programming with it just because they don't know how to do it. You can perfectly well simulate OOP by using prototypes, and even define your own "uber" function to call superclass methods or constructors! And it's even better, because "there's more than one way to do it". So don't look confused if you find things like:
function ConnectionListener () {...}
ConnectionListener.prototype = new Listener ();
function ConnectionListener_listen (socket) {...}
ConnectionListener.prototype.listen = ConnectionListener_listen;
ConnectionListener.prototype.listen = function (socket) {...};
ConnectionListener.inherits (Listener);
ConnectionListener.method ('listen', function (socket) {...});

which shows how, by writing a couple of helper functions, you can turn JavaScript into an object-oriented language! Just make sure the whole team uses the same conventions, or the code can get a bit obfuscated.

You can even have reflection in JavaScript:

if (Math.round(obj)==obj) return "int";
else return "double";

This guy (who honestly deserves all my respect for an awesome hack) even implemented attributes. Some guys also wrote a framework to do some unit testing (I did too, since I really missed it and didn't like theirs :)); it's great, but I couldn't integrate it into the build process (instead, I ran it and looked at the green/red bars). I suppose Mozilla developers have a way to do it anyway, but I wasn't able to find it (I didn't look at the source this time, my fault).

Enough of JavaScript. It's used mainly on HTML pages, an already forced abstraction; it's cute for dispatching events, but (being polite) lacks some needed constructs or standardized ways of doing things. What about Mozilla's C++ component platform, XPCOM?

Well, first, I admit I wanted to avoid using XPCOM directly. When trying to be productive and code things that I can maintain, I avoid using C++ (well, I avoid it always; I hate reading it, but loathe writing it). So my experience with XPCOM is not from writing C++ components, but from using them through XPConnect (which is what lets you access them from JavaScript). I don't suppose it makes much of a difference anyway.

Well, XPCOM is nice, except for the lack of good documentation (but you have the Mozilla source code to look at for examples). It's a good library, which shows an enormous effort and covers the basics and a bit more. Even so, I did run into a couple of SEGFAULTs, though I was guilty of all of them. For example, creating an object as an instance when it was a service (a service is similar to a singleton, and should only be created once by the application). That was really easy to figure out by looking at the code and seeing that it used some static variables, since the name of the component didn't include "service" in it, and services have no feature that distinguishes them from instances. Of course, the SEGFAULT happened the third time I created the component, not the second, and without a warning.

Besides that, there's the fact that the RDF DataSource components don't complain about anything and silently ignore content they don't understand, use an outdated RDF specification (tell me if you've heard of rdf:instanceOf/rdf:nextValue before...), and create some awkward RDF (<bar: rdf:type="foo">; maybe I'm wrong, but what is an XML element without a local name???).

You also get a lot of assertion-failed messages from XPCOM and the Mozilla platform in general. But Mozilla keeps running nevertheless, so why would you give a damn about them if the platform doesn't? At least they do have nice and complete error reporting about unknown CSS attributes (some unknown to the platform, not to the documentation).

I won't even try to talk about RDF templates (the standard XUL way to create dynamic content based on an RDF document). I did use them and, in fact, I liked them in the end ;). However, they are limited in what you can do (conditional processing, especially) and hard to learn. And, of course, how to make them dynamic is also badly documented (the uri attribute is OK; ref isn't).

Just as Gmail is now used to show how you can build responsive, natural-look-and-feel applications with HTML, both Firefox and Thunderbird are the proof that XUL is also fit for developing large-scale client apps. Looking at Firefox, I always told myself: I must be getting something wrong; this thing must be good; I must be misunderstanding it. Later I realized that Thunderbird is not all that different in concept from Firefox, and that both are developed by the same foundation that develops the platform and core components (I didn't check whether by the same people). I'm not suggesting they are blinded by love for their own product; they probably find it genuinely great, because they know intimately how it works, because if they need a modification they make it, and because I bet the platform also evolves with the needs of those and other Mozilla Foundation products. It's just that I don't see it as a fit for other large-scale, "different from a browser" developments.

Maybe Mozilla developers never intended it to be that way, but I think that XUL and the Mozilla platform are being promoted as something they are not. I haven't seen XAML yet, but I doubt that XUL will be able to compete with it in the rich thin/fat client platform niche.

Because it's not general enough; because it's hackish; because it's too big to be documented from scratch; because XPCOM implementations are mostly in C++ (there are other options, apparently, but honestly I don't think they are being used intensively, otherwise Mozilla itself would start replacing that C++ codebase, kidding ;)); because JavaScript sucks and rich fat clients require more coding on the client side (and therefore more than event handling); and because a platform in which assertions are not enabled, because nobody knows whether the assertions should really be there, is just not reliable enough.

It's important for developers to have not only an enormous amount of content and features to play with, but also a standard way of doing common things. You shouldn't need to think about how to implement inheritance, or how to import source files. Nor about whether you must implement MVC by using commands, by calling actions from onclick events, by adding the onclick in the XUL instead of the .js, etc. Sometimes this lack of flexibility (which I don't think is a lack at all) will just make programmers feel from the start that they are doing it the right way, the way it's intended to be done, because TOOWTDI (there's only one way to do it). That is, in my opinion, also one of the purposes of a development platform.

I'm sorry, XUL, I did fight for you in this project (more than I should have). I'm starting to work with Eclipse RCP now, and everything looks fine so far. Maybe Mozilla XUL is still the best option for a particular target (and I'm not implying it should be a silver bullet), but even in that case, I fear it's more because of a lack of competition than real product quality.
