Sometimes it's OK to treat people like numbers

Customers' own Web activity fuels analytic models, but there's a caveat: the data can grow too complex

One of the most hackneyed complaints in the modern world is the notion that being "treated like a number" is always a bad thing.

This isn't to say I'm disparaging the importance of treating customers as individuals, personalizing engagement with each of them, maintaining a 360-degree view of each, and keeping the human touch when possible. But given the focus of big data and its myriad applications on statistical analysis, most modern organizations have bet the business on treating every customer and everything else "like a number" (aka leveraging statistical data analysis).


The concern has always been that big institutions are supposedly hiding behind opaque, abstract, impersonal "numbers" that prevent them from seeing each customer as a unique individual. However, today's irony is that the business world increasingly leverages the numbers (aka powerful statistical models and other advanced analytics) to drive fine-grained personalization of every customer recommendation, interaction, transaction, and experience.

Nonetheless, the "impersonal numbers" concern still retains a kernel of validity in the era of next-best actions powered by big data analytics. Inline recommendation engines (aka next-best action or decision automation infrastructure), which drive much of the personalization, often rely on extraordinarily complex analytical models, business rules, and other embedded logic.

As I noted in a recent LinkedIn post, the increasingly intricate yarn ball of logic that drives personalization can make it difficult to produce a full, transparent accounting of all the context, data, rules of thumb, and assumptions upon which specific automated decisions were made.

Just as problematic, personal responsibility for those decisions grows blurrier as the roster of data scientists and subject-matter experts who build and tune that logic grows more crowded.

Furthermore, these smart people collaborate to produce logic that drives highly personalized, situation-specific recommendations and guidance across one or more engagement channels: portal, call center, smartphone interface, and so on. The specific recommendations and next-best actions their models drive cannot always be anticipated by any one human in advance.

My feeling is that it's OK to treat people "like numbers" as long as they are the correct numbers, drive desirable outcomes, and are available for deep introspection and clear explanation. These thoughts came back to me in spades as I read an excellent recent article on "deconstructing recommender systems." The piece, authored by Joseph A. Konstan and John Riedl, discusses a statistical modeling approach -- singular value decomposition (SVD) -- used by Amazon, Netflix, and others in their personalized recommendation engines. It discusses these and other modern-day recommendation engines within the context of industry innovations dating back to the early 1990s.
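To make the SVD idea concrete, here is a minimal sketch of a truncated-SVD recommender using NumPy. This is purely illustrative: the toy ratings matrix is invented, and real systems like those Konstan and Riedl describe operate at vastly larger scale with more sophisticated factorization and tuning.

```python
import numpy as np

# Toy user-item ratings matrix (0 = unrated). Invented data for illustration;
# rows are users, columns are items.
ratings = np.array([
    [5.0, 4.0, 0.0, 1.0],
    [4.0, 5.0, 1.0, 0.0],
    [0.0, 1.0, 5.0, 4.0],
    [1.0, 0.0, 4.0, 5.0],
])

# Factor the matrix with SVD, then keep only the top k singular values.
# The low-rank reconstruction produces predicted scores for unrated cells.
U, s, Vt = np.linalg.svd(ratings, full_matrices=False)
k = 2
predicted = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# For each user, recommend the highest-scoring item they haven't rated.
for user in range(ratings.shape[0]):
    unrated = np.where(ratings[user] == 0)[0]
    best = unrated[np.argmax(predicted[user, unrated])]
    print(f"User {user}: recommend item {best}")
```

The truncation to rank k is what generalizes: it smooths the ratings into a small number of latent taste dimensions, so a user's predicted score for an unseen item is inferred from users with similar patterns rather than read off directly.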
