J2EE design decisions

Learn how to discern which design patterns and frameworks would work best for your enterprise applications


Exposed Domain Model pattern

Another drawback of using a façade is that you must write extra code. Moreover, the code that enables persistent domain objects to be returned to the presentation tier is especially prone to errors: there is an increased risk of runtime errors caused by the presentation tier trying to access an object that the business tier did not load. If you are using JDO, Hibernate, or EJB 3, you can avoid this problem by exposing the domain model to the presentation tier and letting the business tier return persistent domain objects directly. As the presentation tier navigates relationships between domain objects, the persistence framework loads the objects it touches, a technique known as lazy loading. Figure 6 shows a design in which the presentation tier freely accesses the domain objects.

Figure 6. Using an exposed domain model

In the design in Figure 6, the presentation tier calls the domain objects directly without going through a façade. Spring AOP continues to provide services such as transaction management and security.

An important benefit of this approach is that it eliminates the need for the business tier to know which objects it must load and return to the presentation tier. However, although this sounds simple, there are some drawbacks. It increases the complexity of the presentation tier, which must now manage database connections. Transaction management can also be tricky in a Web application because transactions must be committed before the presentation tier sends any part of the response back to the browser.
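
To make the approach concrete, here is a minimal sketch of a presentation-tier fragment that navigates a lazily loaded relationship. The orderService, Order, and LineItem names are illustrative, Hibernate is assumed as the persistence framework, and an Open Session in View-style filter is assumed to keep the session open while the view renders:

    // Presentation-tier fragment: the business tier returns the Order
    // without pre-loading its line items.
    Order order = orderService.findOrder(orderId);

    // Navigating the relationship here triggers lazy loading: the persistence
    // framework issues the SQL for the line items on first access. This only
    // works if the persistence context (for example, the Hibernate Session)
    // is still open while the view renders, which is typically arranged by an
    // Open Session in View-style servlet filter.
    for (LineItem item : order.getLineItems()) {
        out.println(item.getProductName() + " x " + item.getQuantity());
    }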

Decision 3: Accessing the database

No matter how you organize and encapsulate the business logic, eventually you have to move data to and from the database. In a classic J2EE application, you had two main choices: JDBC (Java Database Connectivity), which required a lot of low-level coding, or entity beans, which were difficult to use and lacked important features. In comparison, one of the most exciting things about using lightweight frameworks is that you have some new and much more powerful ways to access the database that significantly reduce the amount of database access code that you must write. Let's take a closer look.

What's wrong with using JDBC directly?

The recent emergence of object/relational mapping frameworks (such as JDO and Hibernate) and SQL mapping frameworks (such as iBATIS) did not occur in a vacuum. Instead, they emerged from the Java community's repeated frustrations with JDBC. Let's review the problems with using JDBC directly in order to understand the motivations behind the newer frameworks. There are three main reasons why using JDBC directly is not a good choice for many applications:

  • Developing and maintaining SQL is difficult and time consuming—Some developers find writing large, complex SQL statements quite difficult. It can also be time consuming to update SQL statements to reflect changes in the database schema. You need to consider carefully whether the benefits of hand-written SQL are worth the maintenance cost.
  • There is a lack of portability with SQL—Because you often need to use database-specific SQL, an application that works with multiple databases must have multiple versions of some SQL statements, which can be a maintenance nightmare. Even if your application only works with one database in production, SQL's lack of portability can be an obstacle to using a simpler and faster in-memory database such as Hypersonic Structured Query Language Database Engine (HSQLDB) for testing.
  • Writing JDBC code is time consuming and error-prone—You must write lots of boilerplate code to obtain connections, create and initialize prepared statements, and clean up by closing connections and prepared statements. You also have to write the code that maps between Java objects and SQL statements. As well as being tedious to write, JDBC code is error-prone, as the sketch after this list illustrates.
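
To illustrate the last point, here is a sketch of the boilerplate that a single query requires with plain JDBC; the Order class and PLACED_ORDER table are hypothetical:

    import java.sql.*;
    import javax.sql.DataSource;

    class OrderDao {
        Order findOrder(DataSource dataSource, long orderId) throws SQLException {
            Connection con = null;
            PreparedStatement stmt = null;
            ResultSet rs = null;
            try {
                con = dataSource.getConnection();
                stmt = con.prepareStatement(
                    "SELECT ORDER_ID, CUSTOMER_ID, TOTAL FROM PLACED_ORDER WHERE ORDER_ID = ?");
                stmt.setLong(1, orderId);
                rs = stmt.executeQuery();
                if (!rs.next()) {
                    return null;
                }
                // Hand-written mapping from columns to object fields
                Order order = new Order();
                order.setId(rs.getLong("ORDER_ID"));
                order.setCustomerId(rs.getLong("CUSTOMER_ID"));
                order.setTotal(rs.getBigDecimal("TOTAL"));
                return order;
            } finally {
                // Cleanup must not be forgotten, or connections leak
                if (rs != null) try { rs.close(); } catch (SQLException ignored) {}
                if (stmt != null) try { stmt.close(); } catch (SQLException ignored) {}
                if (con != null) try { con.close(); } catch (SQLException ignored) {}
            }
        }
    }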

The first two problems are unavoidable if your application must execute SQL directly. Sometimes, you must use the full power of SQL, including vendor-specific features, in order to get good performance. Or, for a variety of business-related reasons, your DBA might demand complete control over the SQL statements executed by your application, which can prevent you from using persistence frameworks that generate the SQL on the fly. Often, the corporate investment in its relational databases is so massive that the applications working with the databases can appear relatively unimportant. Quoting the authors of iBATIS in Action, there are cases where "the database and even the SQL itself have outlived the application source code, or even multiple versions of the source code. In some cases, the application has been rewritten in a different language, but the SQL and database remained largely unchanged." If you are stuck with executing SQL directly, then fortunately there is a framework for doing so that is much easier to use than JDBC. It is, of course, iBATIS.

Using iBATIS

All of the enterprise Java applications I've developed executed SQL directly. Early applications used SQL exclusively, whereas the later ones, which used a persistence framework, used SQL in a few components. Initially, I used plain JDBC to execute the SQL statements, but later on I often ended up writing mini-frameworks to handle the more tedious aspects of using JDBC. I even briefly used Spring's JDBC classes, which eliminate much of the boilerplate code. But neither the homegrown frameworks nor the Spring classes addressed the problem of mapping between Java classes and SQL statements, which is why I was excited to come across iBATIS.

In addition to completely insulating the application from connections and prepared statements, iBATIS maps JavaBeans to SQL statements using XML descriptor files. It uses JavaBeans introspection to map bean properties to prepared statement placeholders and to construct beans from a ResultSet. It also supports database-generated primary keys, automatic loading of related objects, caching, and lazy loading. In this way, iBATIS eliminates much of the drudgery of executing SQL statements: instead of writing a lot of low-level JDBC code, you write an XML descriptor file and make a few calls to iBATIS APIs.
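
A rough sketch, based on the iBATIS 2 SqlMapClient API, gives the flavor; the statement name, Order class, and descriptor content are illustrative:

    // In the SQL map descriptor (referenced from sqlMapConfig.xml):
    //
    //   <select id="findOrder" parameterClass="long" resultClass="Order">
    //     SELECT ORDER_ID AS id, CUSTOMER_ID AS customerId, TOTAL AS total
    //     FROM PLACED_ORDER WHERE ORDER_ID = #value#
    //   </select>

    // In the Java code, iBATIS handles the connection, the prepared statement,
    // and the mapping between the ResultSet and the Order bean:
    Reader reader = Resources.getResourceAsReader("sqlMapConfig.xml");
    SqlMapClient sqlMap = SqlMapClientBuilder.buildSqlMapClient(reader);

    Order order = (Order) sqlMap.queryForObject("findOrder", new Long(orderId));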

Using a persistence framework

Of course, iBATIS cannot address the overhead of developing and maintaining SQL or its lack of portability. To avoid those problems you need to use a persistence framework. A persistence framework maps domain objects to the database. It provides an API for creating, retrieving, and deleting objects. It automatically loads objects from the database as the application navigates relationships between objects and updates the database at the end of a transaction. A persistence framework automatically generates SQL using the object/relational mapping, which is typically specified by an XML document that defines how classes are mapped to tables, how fields are mapped to columns, and how relationships are mapped to foreign keys and join tables.

EJB 2 had its own limited form of persistence framework: entity beans. However, EJB 2 entity beans have so many deficiencies, and developing and testing them is so tedious, that they should rarely be used. What's more, it is unclear to what extent EJB 3 will address those deficiencies.

The two most popular lightweight persistence frameworks are JDO, which is a Sun standard, and Hibernate, which is an open source project. Both provide transparent persistence for POJO classes: you can develop and test your business logic using POJO classes without worrying about persistence, and then map those classes to the database schema. In addition, both work inside and outside the application server, which simplifies development further. Developing with Hibernate or JDO is far more pleasant than working with old-style EJB 2 entity beans.
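
For example, here is a minimal sketch using the Hibernate 3 Session API; the Order class, its mapping, and the addLineItem() method are illustrative:

    // Order is a plain POJO; an XML mapping file (or EJB 3-style annotations)
    // tells Hibernate how it maps to its table.
    Session session = sessionFactory.openSession();
    Transaction tx = session.beginTransaction();
    try {
        Order order = (Order) session.get(Order.class, orderId);
        order.addLineItem(product, 3);   // just call a domain method

        tx.commit();   // Hibernate detects the change and generates the SQL
    } finally {
        session.close();
    }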

In addition to deciding how to access the database, you must decide how to handle database concurrency. Let's look at why this is important as well as the available options.

Decision 4: Handling concurrency in database transactions

Almost all enterprise applications have multiple users and background threads that concurrently update the database. It's quite common for two database transactions to access the same data simultaneously, which can leave the database inconsistent or cause the application to misbehave. Because most applications must handle concurrent access to the same data, concurrency can affect the design of the business and persistence tiers.

Applications must, of course, handle concurrent access to shared data regardless of whether they are using lightweight frameworks or EJB. However, unlike EJB 2 entity beans, which required you to use vendor-specific extensions, JDO and Hibernate directly support most of the concurrency mechanisms. What's more, using them is either a simple configuration issue or requires only a small amount of code.

In this section, you will get a brief overview of the different options for handling concurrent updates in database transactions, which are transactions that do not involve any user input. In the next section, I briefly describe how to handle concurrent updates in longer application-level transactions, which are transactions that involve user input and consist of a sequence of database transactions.

Isolated database transactions

Sometimes you can simply rely on the database to handle concurrent access to shared data. Databases can be configured to execute database transactions that are, in database-speak, isolated from one another. Don't worry if you are not familiar with this concept; for now the key thing to remember is that if the application uses fully isolated transactions, then the net effect of executing two transactions simultaneously will be as if they were executed one after the other.
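
With plain JDBC, for example, full isolation is requested per connection; a sketch:

    // Ask the database to run this transaction as if it were the only one.
    // SERIALIZABLE is the strongest standard isolation level; most databases
    // default to something weaker, such as READ COMMITTED.
    connection.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
    connection.setAutoCommit(false);
    try {
        // ... read and update the shared data ...
        connection.commit();
    } catch (SQLException e) {
        connection.rollback();
        throw e;
    }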

On the surface this sounds extremely simple, but the problem with fully isolated transactions is that, because of how databases implement isolation, they can cause a reduction in performance that is sometimes unacceptable. For this reason, many applications avoid them and instead use what is termed optimistic or pessimistic locking, described a bit later.

Optimistic locking

One way to handle concurrent updates is to use optimistic locking. Optimistic locking works by having the application check whether the data it is about to update has been changed by another transaction since it was read. One common way to implement optimistic locking is to add a version column to each table, which is incremented by the application each time it changes a row. Each UPDATE statement's WHERE clause checks that the version number has not changed since it was read. An application can determine whether the UPDATE statement succeeded by checking the row count returned by PreparedStatement.executeUpdate(). If the row has been updated or deleted by another transaction, the application can roll back the transaction and start over.
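
Concretely, the check looks something like this with plain JDBC; the PLACED_ORDER table and its VERSION column are illustrative:

    // The WHERE clause checks that the version read earlier is still current;
    // the row count tells us whether another transaction got there first.
    PreparedStatement stmt = connection.prepareStatement(
        "UPDATE PLACED_ORDER SET TOTAL = ?, VERSION = VERSION + 1 " +
        "WHERE ORDER_ID = ? AND VERSION = ?");
    stmt.setBigDecimal(1, newTotal);
    stmt.setLong(2, orderId);
    stmt.setInt(3, versionReadEarlier);

    int rowsUpdated = stmt.executeUpdate();
    if (rowsUpdated == 0) {
        // Another transaction updated or deleted the row: roll back,
        // re-read the data, and start over (or report the conflict)
        connection.rollback();
    }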

It is quite easy to implement an optimistic locking mechanism in an application that executes SQL statements directly. But it is even easier when using persistence frameworks such as JDO and Hibernate because they provide optimistic locking as a configuration option. Once it is enabled, the persistence framework automatically generates SQL UPDATE statements that perform the version check.
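
In Hibernate, for example, adding a <version> element to the class mapping enables the check, and a failed check surfaces as an exception at flush or commit time. A sketch, with an illustrative Order class and version property:

    // Mapping file: <version name="version" column="VERSION"/> inside the
    // <class> element tells Hibernate to maintain and check the version.
    Transaction tx = session.beginTransaction();
    try {
        Order order = (Order) session.get(Order.class, orderId);
        order.setTotal(newTotal);
        tx.commit();   // the generated UPDATE includes "AND VERSION = ?"
    } catch (StaleObjectStateException e) {
        // The version check failed: another transaction changed the row.
        tx.rollback();
        throw e;       // or retry the whole transaction
    }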

Optimistic locking derives its name from the fact that it assumes concurrent updates are rare and that, instead of preventing them, the application detects and recovers from them. An alternative approach is to use pessimistic locking, which assumes that concurrent updates will occur and must be prevented.

Pessimistic locking

An alternative to optimistic locking is pessimistic locking: a transaction acquires locks on the rows when it reads them, which prevents other transactions from accessing those rows. The details depend on the database, and unfortunately not all databases support pessimistic locking. Where it is supported, it is quite easy to implement a pessimistic locking mechanism in an application that executes SQL statements directly. However, as you would expect, using pessimistic locking in a JDO or Hibernate application is even easier: JDO provides pessimistic locking as a configuration option, and Hibernate provides a simple programmatic API for locking objects.
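
For example, with plain SQL the row lock is typically requested with SELECT ... FOR UPDATE, and in Hibernate 3 the equivalent is a one-line lock request; a sketch, with an illustrative Order class:

    // Plain SQL: the row stays locked until the transaction commits or rolls back
    //   SELECT ORDER_ID, TOTAL FROM PLACED_ORDER WHERE ORDER_ID = ? FOR UPDATE

    // Hibernate 3: request the same row-level lock when loading the object
    Order order = (Order) session.get(Order.class, orderId, LockMode.UPGRADE);
    // Other transactions that try to lock this row now block until we finish
    order.setTotal(newTotal);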

In addition to handling concurrency within a single database transaction, you must often handle concurrency across a sequence of database transactions.

Decision 5: Handling concurrency in long transactions

Isolated transactions, optimistic locking, and pessimistic locking only work within a single database transaction. However, many applications have use-cases that are long running and that consist of multiple database transactions that read and update shared data. For example, suppose a use-case describes how a user edits an order (the shared data). This is a relatively lengthy process, which might take as long as several minutes and consists of multiple database transactions. Because data is read in one database transaction and modified in another, the application must handle concurrent access to shared data differently. It must use the Optimistic Offline Lock pattern or the Pessimistic Offline Lock pattern, two more patterns described by Fowler in Patterns of Enterprise Application Architecture.

Optimistic Offline Lock pattern

One option is to extend the optimistic locking mechanism described earlier and check in the final database transaction of the editing process that the data has not changed since it was first read. You can, for example, do this by using a version number column in the shared data's table. At the start of the editing process, the application stores the version number in the session state. Then, when the user saves their changes, the application makes sure that the saved version number matches the version number in the database.
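
A sketch of that flow, with hypothetical orderService, Order, and StaleOrderException names:

    // Start of the edit: remember the version the user saw
    Order order = orderService.findOrder(orderId);
    httpSession.setAttribute("orderVersion", new Integer(order.getVersion()));

    // ... several requests and database transactions later, when the user saves ...

    int versionWhenEditingBegan =
        ((Integer) httpSession.getAttribute("orderVersion")).intValue();
    Order current = orderService.findOrder(orderId);
    if (current.getVersion() != versionWhenEditingBegan) {
        // Someone else changed the order during the edit: reject the save
        // and ask the user to start over
        throw new StaleOrderException(orderId);   // hypothetical application exception
    }
    orderService.applyChanges(current, editedFields);   // final database transaction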

Because the Optimistic Offline Lock pattern only detects changes when the user tries to save, it works well only when starting over is not a burden on the user. For use cases where the user would be extremely annoyed at having to discard several minutes' work, a much better option is the Pessimistic Offline Lock pattern.

Pessimistic Offline Lock pattern

The Pessimistic Offline Lock pattern handles concurrent updates across a sequence of database transactions by locking the shared data at the start of the editing process, which prevents other users from editing it. It is similar to the pessimistic locking mechanism described earlier except that the locks are implemented by the application rather than the database. Because only one user at a time is able to edit the shared data, they are guaranteed to be able to save their changes.
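
One common implementation is a lock table managed by the application. A rough sketch follows; the EDIT_LOCK table and lockManager API are hypothetical, and in a real Web application the acquire and release would happen in separate requests rather than one method:

    // The EDIT_LOCK table records who is editing what. Acquiring the lock is a
    // single INSERT that fails with a duplicate-key error if someone else
    // already holds it:
    //   INSERT INTO EDIT_LOCK (ENTITY_TYPE, ENTITY_ID, OWNER, ACQUIRED_AT)
    //   VALUES ('ORDER', ?, ?, ?)

    // Start of the edit: claim the application-level lock
    lockManager.acquireLock("ORDER", orderId, currentUserId);
    try {
        // ... the user edits the order across several requests and database
        //     transactions; no one else can start editing it in the meantime ...
        orderService.applyChanges(orderId, editedFields);
    } finally {
        // Release the lock when the user saves or cancels; a timeout should
        // also reclaim locks abandoned by users who simply walk away
        lockManager.releaseLock("ORDER", orderId, currentUserId);
    }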

Chris Richardson is a developer, architect, and mentor with more than 20 years of experience. He runs a consulting company that helps development teams become more productive and successful by adopting POJOs and lightweight frameworks. Richardson has been a technical leader at a variety of companies, including Insignia Solutions and BEA Systems. He holds a BA and MA in computer science from the University of Cambridge in England. He lives in Oakland, California, with his wife and three children.

This story, "J2EE design decisions" was originally published by JavaWorld.

Copyright © 2006 IDG Communications, Inc.
