The best CTOs of 2010

A major recession didn't stop these technology leaders from upping the ante on their technology or using it to survive tough times

Page 4 of 6

Switching to open source for lower costs, increased flexibility

Mark Friedgan
CIO, Enova Financial

Over the past 18 months Enova Financial CIO Mark Friedgan has moved much of the company's technology from proprietary systems to open source ones. For example, he replaced a call center platform without significantly changing the user experience, so the company didn't have to retrain the call center staff. The switch in workstations from Windows to Linux also let Friedgan reuse his existing PC hardware, deploying a single boot image despite the use of several types of PCs. Furthermore, the switch to Linux lets Friedgan's team update and change workstations in real time over the network, only rarely requiring a reboot.

Enova now also uses an open source software PBX, which eliminates per-seat licensing fees. Plus, Enova can now use features such as least-cost routing, voicemail, and statistical tracking that would cost extra on a traditional PBX. And because of the PBX's open source nature, Enova has been able to write its own applications to interface with it and provide new functionality such as call recording and automated dialing.
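The article doesn't say how Enova's routing logic works, but the least-cost routing feature it mentions is conceptually simple: for a dialed number, pick the carrier with the cheapest matching rate, preferring the longest prefix match within each carrier's rate table. A minimal sketch of that idea, with carrier names and per-minute rates invented purely for illustration:

```python
# Least-cost routing sketch. The carriers and per-minute rates below
# are invented for illustration; a real PBX loads these from its config.
RATES = {
    "carrier_a": {"1": 0.02, "1212": 0.015, "44": 0.05},
    "carrier_b": {"1": 0.018, "44": 0.06},
}

def least_cost_route(number: str) -> tuple[str, float]:
    """Return (carrier, rate): longest prefix match per carrier, lowest rate overall."""
    best = None
    for carrier, table in RATES.items():
        # Within one carrier, the longest matching prefix is the applicable rate.
        prefixes = [p for p in table if number.startswith(p)]
        if not prefixes:
            continue
        rate = table[max(prefixes, key=len)]
        if best is None or rate < best[1]:
            best = (carrier, rate)
    if best is None:
        raise ValueError(f"no route for {number}")
    return best
```

So a Manhattan number (prefix 1212) would route via carrier_a's cheaper specific rate even though carrier_b is cheaper for US calls in general.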

The key to this project was choosing technologies that both satisfied the business needs of the users and prevented vendor dependence while keeping maintenance and deployment easy.

Parallelizing NFS file sharing

Garth Gibson
CTO, Panasas

Garth Gibson has been instrumental to the instigation, incubation, and adoption of Parallel NFS (pNFS) into version 4.1 of NFS, an IETF industry standard for file sharing. NFS v4.1 was offered to the IETF by the Network File System Working Group in late 2008, then approved and published as RFC 5661-5664 in January 2010.

NFS 4.1 introduces into the NFS standard mechanisms for parallel access, enabling a cluster of servers (exporting file, object, or block services) to satisfy client data requests in parallel without store-and-forward copying through an NFS metadata server. Known as Parallel NFS, or pNFS, parallel access enables an NFS service to scale single-system performance to meet the needs of large collections of high-performance clients.
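The core performance idea can be sketched in a few lines: a metadata server hands the client a layout mapping stripe units of a file to data servers, and the client then fetches those stripes directly and in parallel rather than funneling every byte through one server. This is a toy in-memory model, not the pNFS protocol itself; the stripe size, server count, and function names are all invented for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

STRIPE = 4  # bytes per stripe unit (tiny, for illustration)

# Toy "data servers": byte strings standing in for storage nodes.
data_servers = [b"", b"", b""]

def write_file(data: bytes) -> list[tuple[int, int]]:
    """Metadata-server role: stripe data round-robin across data servers.
    Returns the layout, mapping each stripe unit to (server_index, offset)."""
    global data_servers
    data_servers = [b"", b"", b""]
    layout = []
    for i in range(0, len(data), STRIPE):
        srv = (i // STRIPE) % len(data_servers)
        layout.append((srv, len(data_servers[srv])))
        data_servers[srv] += data[i:i + STRIPE]
    return layout

def read_file(layout: list[tuple[int, int]]) -> bytes:
    """Client role: given the layout, fetch every stripe directly, in parallel."""
    def fetch(entry):
        srv, off = entry
        return data_servers[srv][off:off + STRIPE]
    with ThreadPoolExecutor() as pool:
        return b"".join(pool.map(fetch, layout))
```

With real storage nodes, each fetch would be an RPC to a different server, which is where the aggregate bandwidth of a large client population comes from.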

Gibson has been a driving force behind pNFS since the idea was born in 2003 in a conversation among Gibson, Gary Grider of Los Alamos National Laboratory, and Lee Ward of Sandia National Laboratories. As a grad student at the University of California at Berkeley in 1988, Gibson did the groundwork research and cowrote the seminal paper on RAID.

With pNFS now incorporated into the NFS standard, Gibson is focused on gaining widespread adoption, which depends on the availability of client code in popular client operating systems. Gibson and his Panasas team continue to lead the development of a reference Linux implementation and its adoption into the Linux kernel. pNFS is expected to be deployed in Linux distributions and offered by multiple vendors by 2011. Getting pNFS into the NFS standard required a lengthy process involving a community of storage technology leaders, including Panasas, IBM, EMC, Network Appliance, Sun Microsystems, and the University of Michigan's Center for Information Technology Integration (CITI).

Restoring a company's faith in technology -- and in IT

Kris Herrin
CTO, Heartland Payment Systems

CTO Kris Herrin began transforming IT at Heartland Payment Systems from a startup-style operation into a mature, ITIL-oriented service organization during his earlier tenure as CSO, when he drove the response to the criminal intrusion into Heartland's card processing environment.

When Herrin took over as CTO in August 2009, he laid out three core principles for the IT service delivery and operations teams: security, reliability, and excellent service delivery. As fate would have it, within two weeks of Herrin's taking on the CTO role, Heartland experienced a core network switch hardware failure that cascaded into the main data center and took the major revenue-generating systems offline.

Herrin set out a bold goal for his teams to rally behind: He announced that in November, he would personally pull the plug on a core switch to simulate the catastrophic failure. The project aimed to ensure the security and reliability of the company's revenue-generating processing platforms and to validate IT's ability to deliver excellent service. On November 17, two months after announcing the mission, Herrin did as promised and pulled the plug on the key switch.

This time, there was no disaster, because the IT team had executed on the plan Herrin set in motion just three months prior: the analysis, design, and implementation of a new active/passive real-time processing environment, from the network layer through the many critical applications, designed to ensure that card processing availability would meet the stringent needs of the business. The dramatic procedure helped restore the morale of the IT service teams, who had been worn down by years of unmanaged growth, a major security breach in fall 2008, and the switch failure in August 2009. It also demonstrated to both the IT teams and the rest of the corporation the importance of the work IT does every day to plan and execute initiatives essential to the company's ongoing operations.

Just one year to put an IT infrastructure in place -- and cut costs in half

Dennis Hodges
CIO, Inteva Products

The challenge for Dennis Hodges, CIO of Inteva Products, began when automotive supplier Inteva was spun out from Delphi as an independent company in 2008 and Hodges had to figure out how to structure its information systems and data management to support 17 facilities in six countries across three continents. Hodges was faced with leading a complete overhaul of the company's IT environment and its many different systems.

To complicate matters, the transition negotiated when Inteva was spun off from Delphi gave the new company just 12 months to migrate its entire infrastructure and application environment away from the former parent. And the company needed to reduce IT costs dramatically: from 2 percent of revenue to less than 1 percent.

One part of that effort involved implementing a single ERP system (Plex) across the company that provides a unified view of enterprise resources and financials. Hodges' team also launched a new quality management system that drives continuous improvement by emphasizing defect prevention and the reduction of variation and waste throughout the supply chain. The company has improved inventory management, streamlined purchase orders, improved product control and logistics functionality, and automated tool tracking.

The project will have paid for itself within five years, and Hodges cut IT expenses by more than management requested.

Improving automation testing

Jason Huggins
Co-founder, Sauce Labs

Jason Huggins is the original creator of Selenium, an open source tool with 2.6 million users that provides platform-independent automation testing. Over the last 18 months, Huggins has devoted much of his effort to supporting the release of Selenium 2.0. Its primary new feature is the integration of the WebDriver API into Selenium RC, which addresses a number of Selenium 1.0 limitations and provides an alternative programming interface.

A main challenge Huggins constantly faces is that Selenium can be slow, and functional tests are always slower than unit tests. Until browsers can launch faster, there will always be speed issues. Parallel testing can solve some of them, so Huggins is actively investigating this area to improve Selenium further.
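The payoff of parallel testing is easy to illustrate with standard-library tools. In the sketch below, each "test" is a stub that sleeps to stand in for browser startup and page loads (the test names and timings are invented); running the suite across a worker pool cuts wall-clock time roughly by the worker count:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_test(name: str) -> str:
    """Stub functional test; the sleep stands in for browser startup and page loads."""
    time.sleep(0.2)
    return f"{name}: passed"

TESTS = [f"test_{i}" for i in range(8)]

def run_serial() -> list[str]:
    # One browser session at a time: total time is the sum of all tests.
    return [run_test(t) for t in TESTS]

def run_parallel(workers: int = 8) -> list[str]:
    # Many sessions at once: total time approaches the slowest single test.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_test, TESTS))
```

Since browser-bound tests spend most of their time waiting on I/O rather than the CPU, threads are enough here; the same shape scales out to a grid of remote browsers, which is the gap a hosted service can fill.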

Recognizing the adoption chasm between beginner and advanced users (and thus between the Selenium IDE and Selenium RC versions), Huggins developed a cloud service called Sauce OnDemand to bridge that gap for cross-browser testing.
