As the processor revolution marches ever onward, we see significant gains in compute power year over year, with newer CPUs dwarfing the capabilities of their recent ancestors. We see core counts spiraling ever upward, even if we don't see massive gains in clock speeds. We see architectural advances such as NUMA delivering dramatic speedups under many workloads. In fact, we have never seen processing power plateau.
Alongside that growth, software has expanded dramatically in both features and bloat. We can do things with software now that were unthinkable even a few years ago, as the processing power necessary to make them happen simply wasn't available. This thread runs the gamut of technology, from TV set-top boxes through smartphones and straight on into the data center.
What used to require custom ASICs and tightly coupled hardware and software can now be done with commodity components without a second thought. This is how we can easily pump massive amounts of data through software firewalls and crunch through huge data sets in minutes where it once took hours. The never-ending growth of processing power opens up new landscapes of technology, every step of the way.
Still, as any old-school computer jock knows, we could get a lot more out of that processing power. This is due to many factors, not the least of which is the rush to get new features and frameworks to market. That push generally leads to working with software stacks that are fairly wasteful and not especially tuned to their task. When you have a massive amount of processing power, you don't care so much about cleaning up the edges or optimizing as many processes as you can. If the software does more and runs faster than the last version on newer hardware, it's a win. So it is, and so it shall be.
But there exist some projects and tools that buck this trend, doing things with limited processing power that might shock you. A case in point would be iZ Technology's RADAR units, specifically the older ones. The RADAR units are essentially digital tape decks, originally designed to be drop-in replacements for 2-inch tape in professional recording studios. They function just like analog tape, but do so with digital audio.
Back in 1994, when the first unit appeared, this was a big deal. That original unit was capable of recording and playing back 24 tracks of 16-bit, 48kHz audio simultaneously to a single drive -- not too shabby with the computing technology of the day. But take a look at a later model, the RADAR 24. This system was released in 2000 and serves the same function as the original, but the capabilities were dramatically increased.
The RADAR 24 is essentially a standard PC mainboard with an Intel processor, standard RAM, and standard video and networking, surrounded by a host of analog and digital I/O options. It offered just about every possible way to get analog audio into digital form, and vice versa. The AD and DA converters were of stellar quality, integrated with the system via a custom PCI card that doubled as the SCSI card with 64MB of cache. The RADAR 24 booted from a standard IDE hard drive and boasted a CD/DVD burner. All audio was recorded to a generic SCSI disk in a hot-swap slot. This unit could even record and play back audio at sample rates up to 192kHz. That's way more than most professional studios would use, yet this unit could do it.
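To put those specs in perspective, the sustained disk bandwidth uncompressed multitrack audio demands is easy to work out: tracks × bytes per sample × sample rate. Here's a back-of-the-envelope sketch (my own illustrative numbers, not figures from iZ Technology's documentation):

```python
def audio_throughput_mb_s(tracks, bit_depth, sample_rate_hz):
    """Sustained bandwidth (MB/s) needed for uncompressed multitrack PCM audio."""
    bytes_per_second = tracks * (bit_depth // 8) * sample_rate_hz
    return bytes_per_second / 1_000_000

# The original 1994 RADAR: 24 tracks of 16-bit, 48kHz audio to one drive
print(audio_throughput_mb_s(24, 16, 48_000))    # ≈ 2.3 MB/s sustained

# A hypothetical 24-track session at 24-bit, 192kHz for comparison
print(audio_throughput_mb_s(24, 24, 192_000))   # ≈ 13.8 MB/s sustained
```

Roughly 2.3 MB/s of guaranteed, glitch-free sustained writes was a real feat for a 1994-era drive subsystem; even the high-rate case stays modest by modern standards, which is exactly the article's point about how far commodity hardware has come.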