Web-scouring robots anchor Kapow

Kapow's screen scraper assembles your portal by integrating its information with data from other online sites

For the past few years, all the talk about Web 2.0 has centered on helping Web sites knit themselves together in sophisticated remixes. Although most of the hype concerns the use of JavaScript to add intelligence to the browser client, a quieter group has been adding this power to the central server.

Kapow’s Web Integration Platform version 6.0 is one of the best examples of these central-server solutions. The suite is a big, automatic screen scraper that aggregates information from many different sites and assembles it into a portal that makes the data easy for users to absorb.

Although the hype about doing the work on the client with JavaScript is exciting, there will always be advantages to a central service. Kapow’s solution doesn’t need to be debugged on a wide variety of browsers, and it can integrate with databases to store past information and give pages some historical context.

Robot results

The Web Integration Platform could be a hit with big IT shops that build information portals for employees and clients. I’ve seen a number of cases where portal projects bog down because one division doesn’t want to open up its databases and systems. One simple, easy-to-use connection system would be wonderful, but that means getting all parts of a company to support this central vision.

Kapow’s solution avoids the politics by offering a system of code-capturing robots that operate at the lowest common denominator: text marked up with HTML. These robots are experts at extracting information from internal and external Web pages, and they usually require little cooperation from the source.

The central server schedules the robots and aggregates their results. If someone goes to a portal page, the server will fire up the right robots to clip the correct information before bundling it together. This information can be cached temporarily or stored in a database for a long-term view.
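
As a rough illustration of that cache-or-scrape pattern, here is a minimal sketch in Python. The names (run_robot, build_portal_page) and the TTL value are my own assumptions, not part of Kapow's API; each "robot" is just a callable that returns scraped content.

    import time

    CACHE_TTL = 300  # seconds to keep a result before re-running the robot

    _cache = {}  # robot name -> (timestamp, scraped result)

    def run_robot(name, robot):
        """Return a cached result if it is still fresh; otherwise fire the robot."""
        now = time.time()
        entry = _cache.get(name)
        if entry is not None and now - entry[0] < CACHE_TTL:
            return entry[1]
        result = robot()  # the robot scrapes its source site
        _cache[name] = (now, result)
        return result

    def build_portal_page(robots):
        """Aggregate the output of every robot a portal page needs."""
        return {name: run_robot(name, robot) for name, robot in robots.items()}

Swapping the in-memory dictionary for a database table is what turns the temporary cache into the long-term historical view described above.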

The robots are blessed with a sophisticated language for understanding HTML. If you’ve ever done any screen scraping, you’ve probably said things like, “I’m looking for the second row of the table nested inside the second row of the main table.” Kapow’s internal nomenclature takes care of that by imitating the JavaScript DOM; in this case, the answer is: html.body.table.tr[2].td.table.tr[2].
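
Kapow's notation lines up almost one-for-one with standard XPath, so readers who want to experiment can sketch the same extraction with an ordinary parser. A hypothetical example using Python's lxml library (my illustration, not part of Kapow's suite):

    from lxml import html

    # Pull out the cell Kapow would address as
    # html.body.table.tr[2].td.table.tr[2]. XPath indexes are 1-based,
    # so tr[2] is the second row, matching Kapow's notation. This assumes
    # the source markup nests rows directly under <table>, with no <tbody>.
    page = html.fromstring(open("example.html").read())
    cells = page.xpath("/html/body/table/tr[2]/td/table/tr[2]")
    if cells:
        print(cells[0].text_content())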

Most users won’t need to worry about this language because Kapow includes a sophisticated workstation for taking Web sites apart. After you provide the URL, the Kapow suite loads the Web site and displays it in a section of the RoboMaker UI. You can then start snipping and cutting from the site by pointing and clicking on the parts you want. The HTML and the language for extracting it appear in a window alongside the Web site.

The robot instructions are at the top of the UI; they’re built with a fairly traditional visual language, and you can add loops and branches. The result looks like a standard flowchart, although there are many special features tuned to the nature of HTML; one loop command, for instance, will extract all but the top row of a table.
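
Expressed in ordinary Python with lxml instead of Kapow's visual language, that "all but the top row" loop amounts to something like this sketch (again my own illustration, not Kapow's code):

    from lxml import html

    # Grab every row of a table, skip the header row, and process the rest.
    page = html.fromstring(open("example.html").read())
    rows = page.xpath("/html/body/table/tr")
    for row in rows[1:]:  # [1:] drops the top (header) row
        values = [cell.text_content().strip() for cell in row.xpath("td")]
        print(values)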

Event-handling upgrades

The biggest addition to Web Integration Platform 6.0 is its capability to handle JavaScript events. When Kapow started building robots, most Web pages were quite static and JavaScript-free, making it easy to specify the data location. When sites started embedding JavaScript for checking forms and rewriting the data, however, things started to break.

Adding JavaScript awareness to the robots rescued the server by giving it the power to execute the JavaScript. The robots now extract the data from the distant Web site and strip away the JavaScript before passing the information on to the portal user. The JavaScript code isn’t ignored; it’s quietly simulated by Kapow’s server. It’s a complicated dance, but Kapow needed this feature if the robots were to deal with the new AJAX world.
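
Kapow's server does this with its own internal engine, but the general technique can be reproduced with open-source tools. Here is a sketch using Selenium's Python bindings to let a real browser run the page's scripts before scraping; the URL is hypothetical, and this illustrates the technique rather than Kapow's internals:

    from selenium import webdriver

    # Let a real JavaScript engine run the page's scripts, then scrape
    # the finished HTML rather than the raw source.
    driver = webdriver.Firefox()
    try:
        driver.get("http://example.com/ajax-page")
        rendered = driver.page_source  # the DOM after JavaScript has run
        print(rendered[:500])
    finally:
        driver.quit()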

The new features still won’t work on the most extreme Web sites, however. I’ve written AJAX pages that will calculate and rewrite tables after the user clicks a button; this type of page can’t be scraped easily.

I tested Kapow’s platform by building several robots and sending them off to collect information. The visual robot-building tool is surprisingly simple, yet powerful enough to handle many of the standard extraction jobs that it will be given.

Although it is nominally written in Java (Kapow has a partnership with BEA and also distributes a .Net version), most users will be able to build robots without knowing any Java. I suspect that some experienced programmers will be frustrated at times when they want to do something like produce odd Unicode characters, but average users will be able to develop much of the portal without help.

Kapow’s Web Integration Platform will find its greatest traction in two places: large shops with many legacy systems and centers of corporate intelligence. The developers in charge of linking the legacy systems will like the fact that they can scrape a screen without reprogramming that system. It may not be elegant to leave all of the old code in the path, but it could be a speedy integration solution.

Groups responsible for producing corporate dashboards and assembling intelligence will also appreciate Kapow’s wide-ranging site-scraping abilities. I could see someone in the hotel business using a system like this to watch the prices of competitors’ hotel rooms.

Web Integration Platform version 6.0 is a well-polished mechanism for extracting data. If you need to gather the results from many different Web sites, this may be the fastest way to get your job done.

InfoWorld Scorecard: Kapow RoboSuite 6.0

Value (10.0%)                9.0
Capability (30.0%)           8.0
Performance (15.0%)          9.0
Ease of development (30.0%)  8.0
Documentation (15.0%)        8.0
Overall Score (100%)         8.3