Spectrum Labs

Spectrum Labs achieves better alignment with clients and makes nuanced model adjustments faster with Galileo.

Spectrum Labs keeps up with nuanced language in production faster than ever before, thanks to Galileo.

Company: Spectrum Labs
Industry: Content Moderation

Galileo just helps us identify data drift and other critical data errors much more quickly and much more intuitively. It’s the ability to skip to the next step of updating our model which makes our process much faster.

Jonathan Prunell

VP of Data Science at Spectrum Labs

Project Objective

Type of data

(unstructured, structured, etc)

  • Unstructured - text & audio (NLP)

Source of data

(real-world data, from clients, from open datasets, etc)

  • Client data (dating, social, and e-commerce platforms)
  • Public datasets, e.g., Kaggle (Jigsaw Unintended Bias)
  • Various global & regional social networks

Task Type

  • Text classification: does the text contain hate speech, insults, or threats?

Stage of model development

(training, production, etc.)

  • Training and production

ML Stack

Frameworks used

  • PyTorch & TensorFlow; scikit-learn (not in production, but used as an internal tool for training)

Services used

(storing data/models, serving models, labeling data)

  • Largely built in-house: a Java-backed, cloud-based production system

Spectrum Labs’ mission is to build a better, more inclusive internet using AI. The main idea is to use ML models to recognize and respond to behavior, so moderators have signals they can act on. We help teams cultivate the communities on their platforms by building models that recognize detracting user behaviors. Teams can then respond to those behaviors and build a more positive, affirming community that respects their published community guidelines.

The performance of our models and our solutions is directly tied to the value that we’re giving to our clients. That’s what initially drew me to Spectrum Labs: the ability to apply my expertise in ML for social good in a significant way. The model is not an add-on or a secondary feature; it’s the primary feature of the business. It’s one of those few cases where ML and AI are core to the product and service that we’re providing.

One of our main challenges is alignment. In part this is about general alignment with stakeholders, but to a larger extent it’s about responding quickly to how fast language changes in these communities and platforms. People often mask what they are trying to say or maneuver around guidelines, and we need to help our clients be prepared to respond.

Galileo helps us with this core problem because it takes the manual work out of identifying potential data errors and shows where we may need to make adjustments in our production models. In the end, we were also able to use the visualization tools to identify similar patterns across languages in a specific project, so we got more value and insight from Galileo than we had expected.



Gain visibility into the data we are using to inform our production models and support frequent nuanced changes across models.

As I mentioned earlier, one of our challenges is alignment, partly with stakeholders and partly with how quickly language changes. For example, if I’m delivering a speech detection system for one client, how do I design and support it in a way that’s sustainable and useful for multiple other clients? Each platform has its own nuance and its own particular audience, and a lot of the behaviors we’re working on carry their own ambiguity in how they’re defined. So aside from the AI challenges inherent in NLP, there’s also the challenge of defining the problem in a way that can be consistently aligned with what the clients are trying to achieve.

This is also a challenge because the nature of language just evolves, right? New slang and new terms arise all the time.

In particular, think about some of the more extreme forms: one thing we help with is identifying premeditated violence, in the sense of an organization or group that intends disruption or violence. These groups adversarially try to code or hide their language, right? So there are a lot of dynamic, real-time challenges and a higher demand for quick iteration. Being able to quickly understand your data and detect data drift when these things happen means you can respond quickly and get changes into production.
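As one illustration of the kind of drift check described here, a Population Stability Index (PSI) over model score distributions can flag when production traffic has moved away from the reference data. This is a generic sketch of one common drift metric, not Spectrum Labs' or Galileo's actual implementation:

```python
import math
from typing import Sequence


def psi(reference: Sequence[float], production: Sequence[float],
        bins: int = 10) -> float:
    """Population Stability Index between two score distributions.

    A common rule of thumb: PSI < 0.1 means little drift, 0.1-0.25
    moderate drift, and > 0.25 significant drift worth investigating.
    """
    lo = min(min(reference), min(production))
    hi = max(max(reference), max(production))
    width = (hi - lo) / bins or 1.0  # avoid zero-width bins

    def histogram(xs: Sequence[float]) -> list:
        counts = [0] * bins
        for x in xs:
            # Clamp the top edge into the last bin.
            counts[min(int((x - lo) / width), bins - 1)] += 1
        total = len(xs)
        # Smooth empty bins so the log term stays finite.
        return [max(c / total, 1e-6) for c in counts]

    ref, prod = histogram(reference), histogram(production)
    return sum((p - r) * math.log(p / r) for r, p in zip(ref, prod))
```

In practice the inputs would be, say, last week's toxicity scores versus this week's; a PSI spike is a cue to inspect the underlying text for new slang or coded language.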



Remove manual data debugging processes with Galileo to proactively highlight mislabeling and data that needs attention.

In our work, there’s a lot of feedback. A lot of our iteration is on identifying trends in language that usually have some underlying theme and are either new, outside of our training data, or still in the process of being labeled. We might be trying to identify systemic bias in our users, right? This might come from how our lexicons were defined, or it could come from how they were interpreted by the labelers. In all these cases, we need to do some analysis and go into the data. Usually that means looking through the data to try to find those trends; once we have an idea, we’ll look for specific phrases and so on.

A lot of this is manual, or lives in Python notebooks. What Galileo gives us is that combination of visualizing the data, being able to explore and quickly find similar and related data, and having a signal that proactively highlights potential mislabeling or difficult data that needs more attention.
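The "proactive signal" idea can be approximated with a simple self-confidence heuristic: samples where a trained model assigns low probability to their given label are worth a human look. A minimal sketch, in which the function name and threshold are illustrative rather than Galileo's method:

```python
def flag_suspect_labels(probs, labels, threshold=0.3):
    """Rank samples whose model puts low probability on their assigned label.

    probs: per-sample list of class probabilities from a trained classifier.
    labels: the (possibly noisy) assigned class index for each sample.
    Returns sample indices sorted most-suspect first. The 0.3 threshold is
    an arbitrary starting point, not a recommended value.
    """
    suspects = [(i, p[y]) for i, (p, y) in enumerate(zip(probs, labels))
                if p[y] < threshold]
    return [i for i, _ in sorted(suspects, key=lambda t: t[1])]
```

The ranked indices replace the notebook-driven hunt with a prioritized review queue: labelers start with the examples the model disagrees with most.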

Galileo just helps us identify data drift and errors much more quickly and much more intuitively. It’s the ability to skip to the next step of updating our model which makes our process much faster.



Galileo saved us a lot of manual labor and provided deeper insights into our data to improve model performance.


We also got more insights than we originally expected. During the pilot, I was looking at Galileo to help us with a profanity detection solution. One of the challenging things about our solution is that it’s multilingual, and there are only so many languages that our company speaks natively.

So we sometimes have to rely on language experts. Also, because the content is toxic, a lot of the off-the-shelf translation tools aren’t trained for it and tend to fall over and make bad translations.

So when I was working on the profanity project, not only was I able to find some good examples to dig into on the English side, but because of Galileo’s similarity-clustering algorithms in the embedding visualization, I also found a lot of related terms in other languages. It was really great to see. It helped us verify that, yes, the labeling is consistent across languages as well as behaviors.
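The cross-lingual lookup described above can be sketched with plain cosine similarity over multilingual term embeddings: if the encoder maps related terms in different languages near each other, the nearest neighbors of an English profanity surface its counterparts elsewhere. This is a toy illustration, not Galileo's clustering algorithm, and the two-dimensional vectors are hypothetical stand-ins for real multilingual encoder output:

```python
import math


def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0


def nearest_terms(query_vec, term_vectors, k=3):
    """Return the k terms whose embeddings are most similar to query_vec."""
    ranked = sorted(term_vectors.items(),
                    key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [term for term, _ in ranked[:k]]
```

With real embeddings, querying an English term would rank its translations and coded variants highly, giving reviewers a starting list to verify label consistency across languages.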

Overall, we saved a lot of time and can improve our model performance faster with Galileo.

Sign up today to start using
Galileo for free