Finding Clusters of Fake Accounts at Scale

Fake accounts are a preferred means for malicious users of online social networks to send spam, commit fraud, or otherwise abuse the system. In order to scale, a single malicious actor may create dozens to thousands of fake accounts; however, any individual fake account may appear to be legitimate on first inspection, for example by having a real-sounding name or a believable profile.

In this talk we will describe LinkedIn’s approach to finding and acting on clusters of fake accounts. We divide our approach into two parts: clustering, i.e., assembling groups of accounts that share one or more characteristics; and scoring, i.e., classifying each cluster as benign or malicious. We will describe different scoring mechanisms, propose some general classes of features used for scoring, and show how our modular approach allows us to scale to handle many different types of fake account clusters. We will also discuss tradeoffs between offline and online implementations of the algorithms.
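To make the two-part approach concrete, here is a minimal sketch in Python. All field names, the cluster key (signup IP), and the scoring weights are hypothetical illustrations, not LinkedIn's actual features: accounts sharing a characteristic are grouped into clusters, and each cluster is scored on aggregate features such as size and profile thinness.

    from collections import defaultdict
    from dataclasses import dataclass
    from statistics import pstdev

    @dataclass
    class Account:
        account_id: str
        signup_ip: str               # hypothetical shared characteristic used as the cluster key
        connections: int
        profile_completeness: float  # 0.0 (empty profile) to 1.0 (fully filled out)

    def cluster_by(accounts, key_fn):
        """Clustering step: group accounts that share one characteristic."""
        clusters = defaultdict(list)
        for acct in accounts:
            clusters[key_fn(acct)].append(acct)
        return {k: v for k, v in clusters.items() if len(v) > 1}  # singletons are uninteresting

    def score_cluster(members):
        """Scoring step: a toy rule-based score. Large, homogeneous clusters of
        thin profiles look suspicious; the weights are illustrative only."""
        size = len(members)
        avg_completeness = sum(a.profile_completeness for a in members) / size
        conn_spread = pstdev(a.connections for a in members)
        return (0.4 * min(size / 50.0, 1.0)
                + 0.4 * (1.0 - avg_completeness)
                + 0.2 * (1.0 if conn_spread < 2.0 else 0.0))

    accounts = [
        Account("a1", "203.0.113.7", 3, 0.2),
        Account("a2", "203.0.113.7", 2, 0.1),
        Account("a3", "198.51.100.4", 250, 0.9),
    ]
    for key, members in cluster_by(accounts, lambda a: a.signup_ip).items():
        print(key, round(score_cluster(members), 2))

Because clustering and scoring are decoupled, a new cluster key or a different scorer (for example, a trained model) can be plugged in without changing the rest of the pipeline, which is what makes the modular approach scale across cluster types.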

Measuring Performance of Fake Account Detection and Remediation

Effective fake account defense systems are essential for preventing spam without impeding product growth. This presentation will discuss some of the methods Facebook uses to measure the performance of fake account detection and remediation, using a bottom-to-top operational approach to drive improvement.
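The abstract does not specify Facebook's metrics, but as a hedged illustration of what measuring detection performance can mean in practice, one common approach is to draw a human-labeled sample of accounts and compute precision and recall of the detection system over it:

    def detection_metrics(labeled_sample):
        """labeled_sample: (flagged_by_system, is_fake_per_human_review) pairs
        drawn from a hypothetical human-labeling pipeline."""
        tp = sum(1 for flagged, fake in labeled_sample if flagged and fake)
        fp = sum(1 for flagged, fake in labeled_sample if flagged and not fake)
        fn = sum(1 for flagged, fake in labeled_sample if not flagged and fake)
        precision = tp / (tp + fp) if (tp + fp) else 0.0  # how often enforcement is correct
        recall = tp / (tp + fn) if (tp + fn) else 0.0     # how much fake activity is caught
        return precision, recall

    sample = [(True, True), (True, False), (False, True), (True, True)]
    print(detection_metrics(sample))  # (0.666..., 0.666...)

Precision protects product growth (few legitimate accounts remediated), while recall tracks how much fake activity slips through, so both sides of the tradeoff are visible.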

Using Weighted Sampling to Understand the Prevalence of Spam

To fight spam effectively, we need an unbiased estimate of how much bad content there is in the ecosystem and where it resides. In this presentation we discuss sampling schemes for identifying the small percentage of bad content viewed, across both user-generated content and commercially motivated content such as ads and sponsored posts. These methods employ ML-derived classifiers to weight the sampling, increasing the volume of bad content in the samples. With more bad content in the sample, we can segment it further, measuring the prevalence of bad material within particular segments or under particular policies.
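A minimal sketch of the idea, assuming a classifier score per item and a hypothetical human-review label item["bad"] (the estimator is a standard inverse-probability-weighting construction; the abstract does not specify the exact scheme): sample in proportion to the classifier score so bad content is over-represented, then reweight each draw by the inverse of its selection probability to keep the prevalence estimate unbiased.

    import random

    def estimate_prevalence(items, scores, n_draws, seed=0):
        """Classifier-weighted sampling with inverse-probability reweighting.
        scores must be strictly positive so every item has a chance of selection."""
        rng = random.Random(seed)
        total = sum(scores)
        probs = [s / total for s in scores]  # per-draw selection probability
        idx = rng.choices(range(len(items)), weights=probs, k=n_draws)
        n = len(items)
        # Weighting each draw by 1 / (n * p_i) undoes the oversampling of
        # high-score items, so the prevalence estimate stays unbiased.
        return sum(items[i]["bad"] / (n * probs[i]) for i in idx) / n_draws

    # 2% of a population of 5,000 items is bad; the classifier scores bad items higher.
    population = [{"bad": 1 if i % 50 == 0 else 0} for i in range(5000)]
    scores = [0.9 if item["bad"] else 0.05 for item in population]
    print(estimate_prevalence(population, scores, n_draws=1000))  # approximately 0.02

Because most draws land on likely-bad items, the sample contains enough bad content to support further segmentation (by surface, policy, and so on) that a uniform sample of the same size could not.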

How to Utilize User-Generated Feedback to Fight Spam

When used in aggregate, user reporting can provide valuable indicators that supplement automated systems. Algorithmic systems rely on classes of signals that have previously been shown to correspond to spam attacks; spammers, however, continuously work to obfuscate those signals with new techniques, and user reports can be a clear signal of new attack vectors. Facebook developed a robust and systematic approach to leveraging trends in user-generated reports by geography, method of attack, location of the spam (e.g., News Feed, Groups), and type of content (e.g., photos, shares) to identify spam attacks. Attacks identified through this system are categorized based on current and new signals, which are fed back into the algorithmic systems to prevent further attacks using these techniques.
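A minimal sketch of the aggregation idea, with hypothetical report fields ("geo", "surface", "content_type") standing in for the dimensions named above: bucket reports by segment and flag segments whose volume spikes against a trailing baseline.

    from collections import Counter

    def find_report_spikes(reports, baseline, min_ratio=3.0, min_count=20):
        """reports: dicts with hypothetical keys 'geo', 'surface', 'content_type',
        one per user report in the current window. baseline: a Counter over the
        same segment keys from a trailing window. Returns spiking segments."""
        current = Counter((r["geo"], r["surface"], r["content_type"]) for r in reports)
        spikes = []
        for segment, count in current.items():
            expected = baseline.get(segment, 1)  # treat unseen segments as baseline 1
            if count >= min_count and count / expected >= min_ratio:
                spikes.append((segment, count, expected))
        # Largest relative jump first; these are candidates for new attack vectors.
        return sorted(spikes, key=lambda s: s[1] / s[2], reverse=True)

Segments surfaced this way can then be reviewed by analysts, and any new signals found fed back into the algorithmic systems, closing the loop the abstract describes.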