Reconciling political beliefs with career ambitions

Data professionals face personal dilemmas as AI contributes to ethical issues such as reduced privacy, algorithmic bias, and advanced military weapons.


Today’s political environment is increasingly hostile to data management as a profession. If you’ve made your career in machine learning, data mining, predictive modeling, or related fields, these controversies may have you second-guessing your decision to pursue this line of work. The principal flashpoints center on issues of data privacy, algorithmic bias, and AI weaponization.

Prioritizing privacy over data-driven marketing

Data management professionals face growing scrutiny over privacy violations, surveillance, and other intrusive impacts of the applications they’re responsible for building and managing.

One of the most disturbing new themes is the ideological framing of how technology enables “surveillance capitalism.” This term, coined by Harvard Business School professor Shoshana Zuboff, essentially stigmatizes the collection, ownership, processing, and use of customers’ PII (personally identifiable information). This notion regards any enterprise use of customer PII—such as microsegmentation, contextual offer targeting, and cross-channel ad optimization—as a form of monitoring and control.

Concerns over data-driven CRM aren’t limited to academics. It’s clear from recent congressional hearings that this perspective aligns with popular opinion regarding the practices of Google, Facebook, Amazon, and other 21st century digital businesses. The public is increasingly uneasy about privacy encroachment, as big brands compete to see who can acquire the most comprehensive range of intrusive data about every aspect of our lives, including our inner thoughts, sentiments, and predilections.

Consequently, many data professionals are facing a crossroads in their careers. On the one hand, their employers have built successful businesses fueled by predictive targeting, one-to-one personalization, and multichannel engagement. On the other hand, data professionals are growing squeamish about how deeply they mine customer PII to fuel AI-driven CRM programs, such as predictive personality profiling, next-best-action targeting, and real-time behavioral pricing.

None of this is particularly strange, sinister, or shameful. These practices are central to how business is done these days. If you have ideological misgivings about these or other data-driven CRM methodologies, enterprise data management and modern marketing probably aren’t the careers for you. If you refuse to work on a program simply because it implements these modern customer-engagement methodologies, you won’t get much sympathy from your employers; in fact, they are likely to show you the door.

But that doesn’t mean you have to stand idly by while your employer runs amok with customer data. You can become your company’s foremost data-privacy advocate, for example. If nothing else, you can make sure your firm complies rigorously with the European Union’s General Data Protection Regulation and similar privacy laws elsewhere.
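To make that advocacy concrete, one small practical step is to pseudonymize and minimize customer records before they ever reach an analytics or machine learning pipeline. The sketch below is purely illustrative, assuming Python and hypothetical field names rather than any particular vendor’s tooling or any specific regulator’s definition of compliance:

```python
# Illustrative sketch only: one way a data team might pseudonymize direct
# identifiers before records reach an analytics or ML pipeline.
# Field names and the salt-handling scheme here are hypothetical.
import hashlib
import hmac

SECRET_SALT = b"rotate-and-store-this-in-a-secrets-manager"  # hypothetical key

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed, hard-to-reverse token."""
    return hmac.new(SECRET_SALT, value.encode("utf-8"), hashlib.sha256).hexdigest()

def minimize_record(record: dict) -> dict:
    """Drop fields the use case doesn't need and tokenize the ones it does."""
    return {
        "customer_token": pseudonymize(record["email"]),
        "segment": record.get("segment"),
        "last_purchase_amount": record.get("last_purchase_amount"),
        # Raw name, email, and address are deliberately not carried forward.
    }

raw = {"name": "Jane Doe", "email": "jane@example.com",
       "address": "1 Main St", "segment": "loyal", "last_purchase_amount": 42.50}
print(minimize_record(raw))
```

A keyed hash (rather than a plain one) keeps tokens stable enough to join data sets while making them harder to reverse without the secret; whether that satisfies GDPR’s pseudonymization standard depends on how the key is managed and on the rest of the privacy program.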

Ridding our lives of data-driven algorithmic biases

Data has been on the front lines of recent culture wars, owing to accusations that algorithms perpetrate, in whole or in part, racial, gender, and other forms of socioeconomic bias.

Algorithmic biases have become a hot-button issue in global society, a trend that has spurred many jurisdictions and organizations to institute a greater degree of algorithmic accountability in AI practices. Data scientists who’ve long been trained to eliminate biases from their work now find their practices under growing scrutiny from government, legal, regulatory, and other circles.

Eliminating bias in the data and algorithms that drive AI requires constant vigilance, not only from data scientists but from people up and down the corporate ranks. As Black Lives Matter and similar protests have pointed out, data-driven algorithms can embed serious biases that harm demographic groups (defined by race, gender, age, religion, ethnicity, or national origin) in various real-world contexts.

Much of the recent controversy surrounding algorithmic biases has focused on AI-driven facial recognition software. Biases in facial recognition applications are especially worrisome when the technology is used to direct predictive policing programs, or when it is open to abuse by law enforcement in urban areas with large disadvantaged minority populations.

Many AI solution vendors have seen extensive grassroots efforts among their own employees to take a strong stand against police abuse of facial recognition. In June, as the Black Lives Matter protests heated up, employees at Amazon Web Services called on the firm to sever its police contracts. More than 250 Microsoft employees published an open letter demanding that the company end its work with police departments.

This is an AI application domain in which practitioners will have to take their lumps as biases continue to surface and associated legal and regulatory penalties follow close behind. So far there is little consensus on viable frameworks for regulating the use, deployment, and management of facial recognition programs.

Nevertheless, AI practitioners know that the opportunities in facial recognition are too numerous and lucrative to forgo indefinitely. Embedding facial recognition into iPhones and other devices will ensure that this technology is a key tool in everybody’s personal tech portfolio. More businesses are incorporating facial recognition into internal and customer-facing applications for biometric authentication, image/video autotagging, query by image, and other valuable uses. Social distancing has made many people more receptive to facial recognition as a contactless option for strong authentication to many device-level and online services.

Indeed, a recent Capgemini global survey found that adoption of facial recognition is likely to continue growing among large businesses in every sector and region, even as the pandemic recedes.

Bias isn’t an issue that can be fixed once and for all. Where decision-making algorithms are concerned, organizations must always make biases as transparent as possible and attempt to eliminate any that perpetuate unfair societal outcomes. In addition, ongoing auditing of AI biases—not just in facial recognition but in all other socially impactful application domains—must become a standard task in AI devops workflows.
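What might such an audit look like in practice? The following is a minimal, hypothetical sketch of one check a team could wire into its deployment pipeline: a demographic parity gap computed over a model’s decisions. Real audits would combine several fairness metrics and domain-specific thresholds; the column names and the 0.1 cutoff here are assumptions made for illustration.

```python
# Illustrative sketch of one fairness check a team might add to an AI devops
# pipeline: the demographic parity gap (difference in positive-decision rates
# across groups). Column names and the 0.1 threshold are assumptions, not a
# standard; real audits would use multiple metrics and domain-specific limits.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, decision_col: str) -> float:
    """Return the max difference in positive-outcome rates across groups."""
    rates = df.groupby(group_col)[decision_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical audit data: each row is one scored individual.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0],
})

gap = demographic_parity_gap(decisions, "group", "approved")
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.10:  # tolerance chosen for illustration only
    raise SystemExit("Bias audit failed: gap exceeds tolerance; block deployment.")
```

The point is less the specific metric than the workflow: the check runs automatically, produces an auditable number, and can block a release the same way a failing unit test would.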

All of this raises the possibility that more data scientists will have to decide whether to participate in such projects and to what extent. Many may take the limited, near-term option of opting out of contributing to law enforcement applications of facial recognition. As a longer-term sustainable approach, data scientists who aren’t ideologically opposed to facial recognition will need to redouble efforts to eliminate biases that are baked into these models and the facial-image data sets that train them.

Opting out of AI-driven weapons development

Data is central to modern warfare. Data-driven algorithms, especially deep learning, give weapons systems the capability of seeing, hearing, sensing, and adjusting real-time strategies far better and faster than most humans.

AI is the future of warfare and of defenses against algorithmic weapon systems. Future battles will almost certainly have casualty counts that are staggering and lopsided, especially when one side’s arsenal is almost entirely composed of autonomous weapons systems equipped with phalanxes of 3D cameras, millimeter-wave radar, biochemical detectors, and other ambient sensors.

If you’re a career-minded data scientist, you’ll be tempted to lend your talents to military projects, which tend to be the most exciting, cutting-edge, R&D-driven initiatives in AI. The shortage of highly qualified AI professionals practically ensures that if you have what it takes, you’ll fetch a high salary with numerous perks. Also, the amount of VC money flowing into startups in this sector ensures that many data scientists who’ve cut their teeth on AI-centric weapon programs will become quite wealthy and powerful.

AI professionals are highly ambivalent about participating in military projects. In a recent survey cited here, U.S. AI specialists reported more favorable than unfavorable attitudes about working with the Department of Defense, though a large plurality are neutral on the topic. Respondents reported being more favorably inclined to accepting DoD grants for basic research than applied research. In the survey, AI professionals’ most cited reason for taking DoD grants was to work on “interesting problems.” “Discomfort with how DoD will use the work” was the most frequently cited downside. Approximately three-quarters of those surveyed had negative attitudes about battlefield applications of AI.

AI professionals may think that hitching their career to a commercial cloud provider such as Amazon Web Services, Microsoft, Google, or IBM will help them avoid pressure to work on military projects. That’s just not so. Many tech firms have been applying their innovations to DoD projects for several years. In fact, Big Tech continues to cultivate close ties with the U.S. military, as shown in this recent study.

If you think you can limit your contributions to defensive and back-office AI applications in the military—per the “ground rules” that former Google CEO Eric Schmidt proposed a few years ago—you’re in for a rude awakening. No such rules for commercial AI vendors’ military engagement can realistically stop the underlying approaches from being used in weapons systems for offensive purposes. In fact, that outcome is all the more likely because many military projects’ underlying AI technologies, including open-source modeling software and unclassified image data, are freely available.

In the unlikely scenario that all AI companies walk away from projects with the United States’ and other nations’ military establishments, that would still leave an opportunity for universities and nonprofit research centers to pick up the work. Considering how much money the military is likely to funnel into such contracts, this could easily reverse the brain drain that’s causing the best and brightest AI researchers to leave academia and seek their fortunes in the private sector.

None of this means you can’t opt out of participating in the cyber-industrial complex that’s sprung up around militarized AI. If you’re morally opposed to this sort of work, you can spend your entire data career without ever having to compromise your principles. AI specialists can find plenty of humanitarian or other unobjectionable uses for their talents.

Alternately, you might consider working on technological countermeasures designed to neutralize an adversary’s AI-powered weaponry. One of the most promising new professional opportunities is building AI-driven counterdrone defenses. In recent years, militaries all over the world have deployed drones successfully as a fast-strike, low-cost, AI-driven alternative to conventional warfare tactics such as armored vehicles and fighter jets. For example, Azerbaijan used drones to great effect in a recent war against Armenia, deploying the miniature unmanned aerial vehicles to destroy tanks and other armored fighting vehicles.

Counterdrone defenses are a hot focus of R&D and startup activity. Many such projects use AI to automate detecting drones, pinpointing their locations, and predicting their likely flight paths. Many use AI to automate classification of approaching drones by model, operator, and threat profile, taking care to minimize false positives and false negatives. They also rely on machine learning to automatically trigger security alerts and activate the mechanisms to physically destroy, disable, distract, or otherwise neutralize weaponized drones.
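As a purely illustrative sketch of the classification step in such a pipeline, the toy example below scores a radar contact as hostile or benign from a few invented sensor features. The features, training data, and alert threshold are fabricated for illustration; a production system would train on large labeled sensor corpora and fuse multiple detection modalities.

```python
# Toy illustration of the classification step in a counterdrone pipeline:
# score a radar contact as "hostile drone" vs. "benign" and raise an alert.
# Features, training data, and the alert threshold are invented for this sketch.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training features: [radar_cross_section_m2, speed_mps, altitude_m]
X_train = np.array([
    [0.02, 18.0, 120.0],   # small, fast quadcopter -> hostile drone
    [0.03, 22.0, 150.0],   # small, fast quadcopter -> hostile drone
    [0.50,  9.0, 300.0],   # large, slow bird-like return -> benign
    [0.45,  7.0, 250.0],   # benign
])
y_train = np.array([1, 1, 0, 0])  # 1 = hostile drone, 0 = benign

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

detection = np.array([[0.025, 20.0, 130.0]])   # new radar contact
threat_probability = model.predict_proba(detection)[0, 1]

if threat_probability > 0.8:                   # alert threshold, illustration only
    print(f"ALERT: probable hostile drone (p={threat_probability:.2f})")
else:
    print(f"Contact logged as benign (p={threat_probability:.2f})")
```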

Finding a middle road

In a politically polarized cultural landscape, data professionals may find it difficult to keep their leanings under wraps. They may also have pangs of conscience that deter them from engaging in projects whose objectives contravene their deep convictions.

Before you opt out of an objectionable data-centric project, consider whether you can help implement effective controls, such as privacy protection mechanisms, debiasing processes, and automated countermeasures, that mitigate its more objectionable aspects. That could allow you, on some level, to reconcile your political convictions with your professional ambitions.

In the process, you would be doing your part to make the world a better place.

Copyright © 2020 IDG Communications, Inc.
