An ounce of prevention: Avoid J2EE data layer bottlenecks

Best practices for tackling data bottlenecks within J2EE environments

  • Complex business entities: Portfolios, positions, and security data, with cardinalities in the hundreds, thousands, and tens of thousands, respectively
  • Real-time position updates: Any change to a model portfolio causes automatic changes to its child portfolios, each of which is subject to a set of business rules
  • Large numbers of users: Hundreds of portfolio managers use the application to actively manage the portfolios and/or to model "what-if" scenarios

To meet these business requirements, any Java server-based solution must provide:

  • Near real-time performance guarantees
  • Guaranteed high levels of business data integrity and 24-7 operation
  • Ability to scale along with increased business loads

In building the portfolio management application, a best-practices data services layer first provides a model-driven approach for defining the object model and specifying how it should map to the database for maximum efficiency. Using native database APIs and database-specific performance features helps optimize mapping performance.

For example, data services layer tools can accept object-modeling input from a data dictionary, Rational Rose, or Eclipse, and then use that information to generate standard entity beans that run transparently inside the J2EE server.
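To make the model-driven approach concrete, here is a minimal, hypothetical sketch of the kind of domain classes such a tool might generate from the portfolio object model. The class and field names are invented for illustration and are not the output of any real generator; a real tool would also emit the deployment descriptors that map each field to a database column:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative generated domain classes for the portfolio model.
class Position {
    private final String securityId;
    private final double quantity;

    Position(String securityId, double quantity) {
        this.securityId = securityId;
        this.quantity = quantity;
    }

    String getSecurityId() { return securityId; }
    double getQuantity() { return quantity; }
}

class Portfolio {
    private final String id;
    private final List<Position> positions = new ArrayList<>();

    Portfolio(String id) { this.id = id; }

    String getId() { return id; }
    void addPosition(Position p) { positions.add(p); }
    List<Position> getPositions() { return positions; }
}
```

In a container-managed setup, each of these classes would correspond to an entity bean, with the container handling the load and store of each field behind the accessors.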

On deployment of the portfolio management application, a best-practices data services layer caches local copies of frequently accessed business data to maximize throughput, and cooperates with the other caches to synchronize data changes, so that every server instance sees a consistent copy of the data. The data services layer effectively provides a data virtualization model for J2EE architectures, greatly reducing the complexity of scaling a J2EE application while simultaneously enhancing its robustness and integrity.
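The caching behavior described above can be sketched in plain Java. This is a simplified, hypothetical read-through cache with explicit invalidation; a real data services layer would also propagate invalidations across server instances and participate in J2EE transactions:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Simplified read-through cache: serve hits locally, load misses
// from the backing store, and let peers invalidate stale entries.
class ReadThroughCache<K, V> {
    private final Map<K, V> entries = new ConcurrentHashMap<>();
    private final Function<K, V> loader; // stands in for a database query

    ReadThroughCache(Function<K, V> loader) {
        this.loader = loader;
    }

    V get(K key) {
        // Load from the backing store only on a cache miss.
        return entries.computeIfAbsent(key, loader);
    }

    // Called when another server instance reports a change,
    // so the next read reloads the fresh value.
    void invalidate(K key) {
        entries.remove(key);
    }
}
```

After committing a change, the writing instance would notify its peers (over JMS or a similar transport) to invalidate the affected keys, keeping the cached copies synchronized.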

For distributed applications, a data services layer can provide even greater value. Each geographically distributed, independent server instance uses a local cache as its data layer. The local cache handles read access at these remote sites, greatly accelerating performance, while writes go through to the primary database, preserving centralized data management.
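The remote-site pattern just described — local reads, centralized writes — can be sketched as follows. This is a hypothetical illustration in which a `Map` stands in for both the local cache and the primary database; the class name is invented for this example:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a remote site's data layer: reads are
// served locally when possible, while writes always go through
// to the central store and refresh the local copy on the way back.
class RemoteSiteDataLayer {
    private final Map<String, String> localCache = new HashMap<>();
    private final Map<String, String> centralStore; // stands in for the primary database

    RemoteSiteDataLayer(Map<String, String> centralStore) {
        this.centralStore = centralStore;
    }

    String read(String key) {
        String value = localCache.get(key);
        if (value == null) {
            value = centralStore.get(key); // cache miss: fetch from the center
            localCache.put(key, value);
        }
        return value;
    }

    void write(String key, String value) {
        centralStore.put(key, value); // write-through to the primary database
        localCache.put(key, value);   // keep the local copy current
    }
}
```

Because every write reaches the primary database, the center remains the single source of truth even as each site answers most reads from its own cache.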

Summary and conclusions

Under the J2EE specification, the container manages the mapping between Java components and the underlying database schema. This approach provides a clean, managed component architecture, but has the inherent limitation that each data access operation results in one or more physical disk I/Os. The 50/50 rule of thumb gives architects and developers an easy way to assess the potential for data bottlenecks.

Even for market-leading application servers such as WebLogic and WebSphere, the standard J2EE architecture leads to data bottlenecks that can be addressed only with significant hand-coding or the adoption of a third-party data services layer. A best-practices approach to a data services layer must fit transparently within J2EE and integrate mapping and caching with J2EE transactions.

Christopher Keene is CEO of Persistence Software. He has appeared as a featured expert on real-time data convergence in numerous publications and conferences. Keene is a frequent presenter on such topics as the future of software development, real-time information management, and data convergence through the virtual data layer. Keene earned an MBA from the Wharton School and holds a bachelor's degree with honors in mathematics from Stanford University.


This story, "An ounce of prevention: Avoid J2EE data layer bottlenecks" was originally published by JavaWorld.

Copyright © 2004 IDG Communications, Inc.
