Tips for building an effective AI ecosystem

As AI becomes more prevalent, organizations must make it easier for developers to unlock AI’s potential

Across business use cases and verticals, engineers and leaders are constantly discussing the value AI can bring; often, the opportunities seem endless. It can predict your interests, the people you know, or your next job.

However, we often overlook the steps that must be taken to execute AI-powered systems at scale. Deploying AI can be costly in terms of talent, compute resources, and time, and to fully unleash the wave of innovation that AI promises, developers must be properly empowered and equipped. In fact, many of the key elements needed for successful AI implementation have less to do with algorithm particulars and more to do with the tooling and processes in place around them.

Several of these tools and processes revolve around standardizing the most frequent workflows. This can take the form of something as simple as a spreadsheet listing common features, or as sophisticated as a full AI developer platform. As we’ve scaled our AI efforts at LinkedIn, we’ve gradually built toward the latter, creating our “Productive Machine Learning” (“Pro-ML” for short) program to improve developer productivity and efficiency.

Here are a few key takeaways and tips that we’ve accumulated through this work, applicable to organizations of any size.

Clean data in, smart insights out

A prerequisite to the process of deploying AI is having a thorough understanding of your data. The performance of an AI model is intrinsically tied to the data it’s trained on, so it’s important to know you have clean data to work with. Then, in choosing which datasets to use for training, it’s helpful to collaborate with your business partners to understand what the ultimate business goal is. For instance, if you want to “increase engagement” with a news feed, do you measure that by the click-through rate for articles and posts, or the rate of “likes” or comments on posts? By jointly determining the best data to use to support clear business goals, you’ll design a more effective model.
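
For illustration, here is a minimal Python sketch of how the two candidate engagement metrics above might be computed from raw interaction events. The event schema and field names are hypothetical, not an actual LinkedIn schema.

# Hypothetical sketch: comparing two candidate engagement metrics from raw
# interaction events so the team and its business partners can agree on a target.
from collections import Counter

events = [
    {"member_id": 1, "item_id": "post_a", "action": "impression"},
    {"member_id": 1, "item_id": "post_a", "action": "click"},
    {"member_id": 2, "item_id": "post_a", "action": "impression"},
    {"member_id": 2, "item_id": "post_a", "action": "like"},
]

counts = Counter(e["action"] for e in events)
impressions = counts["impression"] or 1  # guard against division by zero

click_through_rate = counts["click"] / impressions                  # candidate metric 1
reaction_rate = (counts["like"] + counts["comment"]) / impressions  # candidate metric 2

print(f"CTR: {click_through_rate:.2f}, reaction rate: {reaction_rate:.2f}")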

Another factor to consider when selecting training data is how it is labeled. Does the data have sufficient context to be fed directly into a model, or does it require annotation? In the case of the latter, it’s important to create a “code book” or “run book” that sets standards for how data should be classified. I once worked with a small team of experts seeking to label a dataset by hand, and when we evaluated the finished product, we realized that the agreement rate among them was less than 0.2. This means the expert annotators were hardly agreeing with each other, and there is no reason to expect that a model trained on such data will perform acceptably. If experts can’t agree on how data should be labeled, it’s unrealistic to expect that annotators from a service like CrowdFlower (now Figure Eight) will be able to do so effectively.
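
One common way to quantify inter-annotator agreement is Cohen’s kappa, which corrects raw agreement for chance; a value near zero means annotators agree about as often as chance alone would predict. The sketch below uses scikit-learn with made-up labels, and is an illustration rather than the exact statistic we computed.

# Illustrative only: measuring agreement between two annotators with Cohen's kappa.
from sklearn.metrics import cohen_kappa_score

annotator_a = ["spam", "ok",   "spam", "ok", "ok",   "spam"]
annotator_b = ["ok",   "spam", "spam", "ok", "spam", "ok"]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa: {kappa:.2f}")
if kappa < 0.2:
    print("Agreement is too low; revisit the labeling code book before training.")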

The key takeaway: remove ambiguity and headaches down the line by being very clear up front about your standards for data labeling.

Make features standardized and repeatable

Across the different product lines of LinkedIn, different teams are using AI to solve different problems (optimizing the feed, identifying recruiter-candidate fit, and suggesting courses for your next career move, to name a few). Each team uses different pipelines to produce the desired features of their machine learning models, as each use case is distinct. Yet across these teams, we saw similar features pop up again and again, and decided the process needed to be streamlined.

We created our feature marketplace, Frame, which helps address this issue by allowing teams to leverage existing features and knowledge. Frame acts as a common repository for teams to share, find, and manage features for their respective machine learning models. Its key innovation is separating how a feature is anchored, that is, where and how its underlying data is sourced, from its name and semantics. This allows all teams to start from the same, standardized feature template, and then customize it further as needed for their particular pipelines or environments. As teams work on different types of projects, the marketplace prevents duplicate work, saving time and resources.
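
As a toy illustration of the idea, not Frame’s actual API, a shared registry might separate a feature’s stable name and semantics from the per-environment anchors that decide how its data is actually read:

# Toy sketch only (names and structure are hypothetical, not Frame internals).
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class FeatureDefinition:
    name: str                                    # stable, shared feature name
    description: str                             # semantics all teams agree on
    anchors: Dict[str, Callable[[int], float]]   # environment -> data source binding

registry: Dict[str, FeatureDefinition] = {}

def register(feature: FeatureDefinition) -> None:
    registry[feature.name] = feature

def resolve(name: str, environment: str, entity_id: int) -> float:
    # Teams look up the feature by name; the anchor for their environment
    # decides how the value is actually fetched (offline store, online store, ...).
    return registry[name].anchors[environment](entity_id)

register(FeatureDefinition(
    name="member_skill_count",
    description="Number of skills listed on a member profile",
    anchors={
        "offline": lambda member_id: 12.0,  # stand-in for a batch/warehouse read
        "online": lambda member_id: 12.0,   # stand-in for a key-value store read
    },
))

print(resolve("member_skill_count", "online", entity_id=42))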

Be proactive in model maintenance

Models degrade over time; it’s an inevitable part of the machine learning lifecycle. We overcome this at LinkedIn by taking a proactive approach to model maintenance. From the very start, when we’re building models, we do so in a way that we know will make retraining easier. The models that we create and test are not viewed as throw-away experiments, but as production-quality, code-reviewed artifacts. That way, when the time comes to retrain a model, we have a solid definition to follow.
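
As a rough sketch of what such a checked-in definition can look like (the structure, dataset path, and parameters below are placeholders, not our actual tooling):

# Placeholder example: a small, code-reviewed artifact that captures everything
# needed to re-run training, so a retrain is a re-run rather than a reconstruction.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class TrainingDefinition:
    model_name: str
    training_data: str            # versioned dataset path or table (placeholder)
    features: tuple               # feature names pulled from the shared marketplace
    hyperparameters: dict = field(default_factory=dict)

feed_ranker = TrainingDefinition(
    model_name="feed-ranker",
    training_data="warehouse.feed_training_examples@2019-06-01",  # placeholder snapshot
    features=("member_skill_count", "article_age_hours"),
    hyperparameters={"learning_rate": 0.05, "num_trees": 200},
)

# Retraining later means re-running this same definition against a newer data snapshot.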

We also engage in “scheduled retrainability,” enforcing a set schedule for when we retrain models. This helps take some cognitive load off modeling teams, and also ensures that we discover any model weaknesses before the model stops working entirely. We’ve also invested in performance monitoring tools for health assurance. While any degree of monitoring is better than none, a good goal to work towards is having automated monitoring that sends alerts when certain metrics exceed preset thresholds.
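
As a minimal sketch of that kind of threshold-based check (the metric names and limits here are purely illustrative):

# Illustrative thresholds; in practice these values come from the owning team.
current_metrics = {"auc": 0.71, "p95_latency_ms": 180.0, "feature_null_rate": 0.09}

alert_thresholds = {
    "auc": ("min", 0.75),               # alert if the model's AUC drops below 0.75
    "p95_latency_ms": ("max", 250.0),   # alert if scoring latency exceeds 250 ms
    "feature_null_rate": ("max", 0.05), # alert if too many feature values are missing
}

def check_alerts(metrics, thresholds):
    alerts = []
    for name, (kind, limit) in thresholds.items():
        value = metrics.get(name)
        if value is None:
            continue
        if (kind == "min" and value < limit) or (kind == "max" and value > limit):
            alerts.append(f"{name}={value} breached {kind} threshold {limit}")
    return alerts

for alert in check_alerts(current_metrics, alert_thresholds):
    print("ALERT:", alert)  # in practice this would page or notify the owning team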

Carrying out an AI deployment may only require certain elements—GPUs, models, data, etc.—but successfully implementing AI across an organization at scale requires a sturdy supporting toolkit that empowers developers. By equipping developers with best practices and tools surrounding AI work, we’re scaling our ability to apply AI in the best ways possible.
