
J2EE and Weblogic Best Practices (2003)

This page contains some best practices for J2EE projects, along with some container-specific notes. The motivation for writing this page came as I was performing a post-mortem on yet another large and successful project. The project was a fairly large (20 KLOC) middle-tier-intensive system for B2B-style message persistence. The XML schema for the message was very complex (600+ elements) and domain-intensive, so the mapping to the relational model was the most difficult part to design. The pipeline went something like this: web service for message receipt, JMS queuing, Message-Driven Bean, XSLT, Castor XML binding, EJB persistence, status event propagation (sure makes six months seem like overkill).

J2EE Container/Weblogic Pitfalls

These are some of my recommendations for developing on J2EE containers, with an emphasis on BEA Weblogic Server 7.0:

  • Integrate early and often on a cluster. Clustered deployment and testing always surface issues that a single-server setup will not.
  • Test locally on at least an admin/managed-server configuration (as opposed to a standalone server). This configuration more closely mimics a cluster, with much less configuration hassle and much lower system requirements.
  • Test hot-deployment of your application regularly. Once hot-deployability breaks, it is very hard to win back.
  • Never create your own threads in the container; unmanaged threads can pin the old application's classloader and break hot-deployment (see the previous point).
  • Avoid Weblogic JMX, for the same reason.
  • Entity EJBs are great for transactional relational database updates and complex creates (with lots of relations), but for simple creates and massive queries expect to go with something else in the long term. Xdoclet (see Tools to Exploit below, and the sketch after this list) simplifies the entity EJB process enough that you can get by with entities in the meantime. That something else is probably not Castor JDO if your database schema has complex relationships (though it is otherwise recommended).
  • I have found that the only context in which I ever see the Weblogic ".wlnotdelete" folders is when I need to delete them. If you deploy a new version of an enterprise application but the old version seems to keep running, your best bet is to delete those folders. Having my Weblogic start scripts delete them along with the "stage" folder for my project has eliminated many of the Weblogic issues I have come across, though it of course makes the server start more slowly. I have also found it necessary to delete the contents of the "security" directory on occasion, particularly when the server simply refuses to start.
  • Messaging technologies allow for asynchronous processing, meaning processing at a later time than message creation. That alone provides neither high performance nor "real-time" results. JMS is a heavyweight transactional messaging API, and none of the JMS implementations I have tested supported high data rates with reliability features enabled. This includes clustered JMS in Weblogic, Tibco JMS (whose clustering does not seem mature), and IBM Websphere MQ (formerly MQSeries), which I cannot entirely recommend. I have heard good things about Sonic, but I do not know of any real projects using it.
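
To make the Xdoclet point concrete, here is a minimal sketch of a CMP 2.0 entity bean driven entirely by javadoc tags. The bean name, fields, and column names are illustrative, and finder/create methods are omitted; Xdoclet's ejbdoclet task generates the local interface and deployment descriptors from tags like these:

    import javax.ejb.EntityBean;
    import javax.ejb.EntityContext;

    /**
     * @ejb.bean name="Order"
     *           type="CMP"
     *           cmp-version="2.x"
     *           view-type="local"
     *           primkey-field="orderId"
     */
    public abstract class OrderBean implements EntityBean {

        /**
         * @ejb.interface-method
         * @ejb.pk-field
         * @ejb.persistence column-name="ORDER_ID"
         */
        public abstract String getOrderId();

        public abstract void setOrderId(String orderId);

        /**
         * @ejb.interface-method
         * @ejb.persistence column-name="STATUS"
         */
        public abstract String getStatus();

        /** @ejb.interface-method */
        public abstract void setStatus(String status);

        // Required EJB lifecycle callbacks; no-ops for a simple CMP bean.
        public void setEntityContext(EntityContext ctx) {}
        public void unsetEntityContext() {}
        public void ejbActivate() {}
        public void ejbPassivate() {}
        public void ejbLoad() {}
        public void ejbStore() {}
        public void ejbRemove() {}
    }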

I cannot stress enough how important it is to stick to the J2EE specification when writing for the container. Containers are not very tolerant of nonstandard behaviors. It is much better to build your transactional/persistence services for the J2EE container and leave the exotic behaviors to standalone processes.

When to Break the Rules

Of course, avoiding the above pitfalls is not always possible. It would be nice if straight J2EE could get everything done, but that is not very realistic; I rarely have the luxury of building a blueprint J2EE system myself. The following are cases in which I feel that breaking the rules is unavoidable.

  • I have yet to see a JMS implementation with high-performance reliable messaging. Whenever such requirements pop up, I test the JMS implementations available to me, then opt for a lower-level API like Tibco Rendezvous. With RV, I can get 1500+ certified messages per second over a mere 10-megabit network; I have yet to find a JMS implementation that can reach one tenth of that rate between two hosts on any network. Going with a non-JMS messaging solution currently means that you will need to create threads in the container to dispatch messages. This is set to change with the new connector API (JCA 1.5), which will allow you to use message-driven beans for your threading model. As a side note, Servlets are a fun replacement for message-driven beans, in that you can send in test messages via a web interface (see the sketch after this list).
  • Beyond high-performance messaging, any requirement for dynamic message addressing also requires non-blueprint use of J2EE, although JMS may still be used. The subscription model for message-driven beans is much too static for use in this context.
  • JMS in a cross-platform architecture? Good luck on that one. Heterogeneous Java systems are easy, but remove Java and JMS is a nightmare. A lot of whitepaper-oriented architects like to put .NET on the client with a good Java middle-tier on the server side, but JMS is not an ideal glue between those platforms. If you must use messaging, make sure your middleware supports both platforms. If your vendor's product speaks JMS in its native mode (like Sonic), it is much more likely that they will support this sort of architecture. In any case, make sure your architect understands that a vendor must be chosen before the architecture can be validated, because there is no magic in the JMS cloud on the architecture diagram that will make Microsoft and Sun hold hands.
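
As a rough illustration of the servlet trick: the servlet below accepts a test message over HTTP POST and hands it to the same code path an MDB's onMessage() would delegate to. The processMessage() stub is a hypothetical stand-in for your real pipeline entry point:

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class TestMessageServlet extends HttpServlet {

        protected void doPost(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            // An MDB would pull this payload from a javax.jms.TextMessage instead.
            String payload = request.getParameter("message");
            try {
                processMessage(payload);
                response.getWriter().println("Message accepted.");
            } catch (Exception e) {
                response.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR,
                                   e.getMessage());
            }
        }

        // Hypothetical stand-in for the real pipeline entry point
        // (XSLT, XML binding, EJB persistence).
        private void processMessage(String payload) {
            // ... invoke the same logic your onMessage() would call
        }
    }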

Tools to Exploit

These are some tools that have proven themselves invaluable, even though most of them are free:

  • Xdoclet removes most of the headaches of J2EE, regardless of the container you use. It takes care of interfaces and deployment descriptors for EJBs, web applications (even with Struts), and more (see the entity bean sketch above).
  • JetBrains IntelliJ IDEA is the best editor for Java projects. It has to be used to be believed; I don't touch JBuilder, TogetherJ, or emacs much anymore.
  • Altova XMLSpy is the best editor for XML schema and XSL transforms. Its visual schema designer is unparalleled. Its XSL debugger has no competition.
  • PushToTest TestMaker is a great Jython script IDE. Jython scripts are an excellent way to run functional tests in Java, and this suite provides a good starting point.
  • The Grinder is a great tool to perform load tests, also executing Jython scripts.
  • Apache Xalan has more than just plain XSLT to offer. Transforms can call into your Java classes (via Xalan's Java extension functions) for almost any purpose.
  • Ant, of course, is an unmatched build engine, even for large projects. In large projects it is often necessary to break the build into many scripts; make sure each of these scripts can execute independently, since it makes a developer's life much easier to be able to run exactly the target needed. I have found it useful to pull in XML fragments from other files (via entities, as sketched after this list) to provide shared environment and targets. A modular build is not impossible after all.
  • Castor is an essential XML-object binding framework until JAXB matures (a round-trip sketch follows this list). It can also do object-relational binding with the cleanest model I have ever used, but its performance suffers on extensive relationship creation and loading.
  • TortoiseCVS is a great supplement to Windows Explorer that allows for CVS integration.
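
The entity trick in Ant looks roughly like this. Here shared-targets.xml is an illustrative fragment (no XML declaration, just shared properties and targets, such as the assumed compile-shared target) that every module's build file pulls in:

    <?xml version="1.0"?>
    <!DOCTYPE project [
        <!ENTITY shared SYSTEM "shared-targets.xml">
    ]>
    <project name="module-a" default="build">

        <!-- Expands in place, as if the fragment were typed here. -->
        &shared;

        <target name="build" depends="compile-shared">
            <!-- module-specific work -->
        </target>
    </project>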
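For flavor, a minimal Castor XML round trip. Invoice is an illustrative bean; in practice the class would be generated from your schema or described by a Castor mapping file:

    import java.io.StringReader;
    import java.io.StringWriter;
    import org.exolab.castor.xml.Marshaller;
    import org.exolab.castor.xml.Unmarshaller;

    public class CastorRoundTrip {
        public static void main(String[] args) throws Exception {
            Invoice invoice = new Invoice();
            invoice.setNumber("INV-042");

            // Object -> XML
            StringWriter out = new StringWriter();
            Marshaller.marshal(invoice, out);

            // XML -> object
            Invoice copy = (Invoice) Unmarshaller.unmarshal(
                    Invoice.class, new StringReader(out.toString()));
            System.out.println(copy.getNumber());
        }
    }

    // Illustrative bean; by default Castor introspects get/set pairs.
    class Invoice {
        private String number;
        public String getNumber() { return number; }
        public void setNumber(String number) { this.number = number; }
    }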

Other Notes and Pitfalls

Here are other considerations for your projects:
  • If you are considering Starbase Starteam, make sure your evaluation takes into account the Starteam client's shortcomings with respect to file system folders. New folders on the server are not available to the client without a restart, and local folders that do not have counterparts on the server are completely invisible to the client.
  • An XML schema that is very complex (hundreds of elements) can be unmanageable with respect to persistence. It is better to keep an external schema that is easy to understand and integrate with, but transform to a simplified internal schema for manipulation and persistence. On this project, the simplified schema allowed an order-of-magnitude decrease in both the system's build time and the lines of code required to manipulate the data. If you can transform to a schema that mirrors your relational schema, it will save you even more work (a minimal transform driver is sketched below).
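
The external-to-internal transformation step needs only a few lines of TrAX (javax.xml.transform, backed by Xalan in a stack like this one). The stylesheet and file names are illustrative placeholders:

    import java.io.File;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;

    public class SimplifyMessage {
        public static void main(String[] args) throws Exception {
            Transformer transformer = TransformerFactory.newInstance()
                    .newTransformer(new StreamSource(new File("simplify.xsl")));
            // Complex external message in, simplified internal message out;
            // the result is what gets bound (e.g. via Castor) and persisted.
            transformer.transform(new StreamSource(new File("external-message.xml")),
                                  new StreamResult(new File("internal-message.xml")));
        }
    }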