Research Blog: Federated Learning: Collaborative Machine Learning without Centralized Training Data

Standard machine learning approaches require centralizing the training data on one machine or in a datacenter. And Google has built one of the most secure and robust cloud infrastructures for processing this data to make our services better. Now for models trained from user interaction with mobile devices, we’re introducing an additional approach: Federated Learning.

Federated Learning enables mobile phones to collaboratively learn a shared prediction model while keeping all the training data on device, decoupling the ability to do machine learning from the need to store the data in the cloud. This goes beyond the use of local models that make predictions on mobile devices (like the Mobile Vision API and On-Device Smart Reply) by bringing model training to the device as well.
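
To make the idea concrete, here is a minimal sketch of the core federated averaging loop: each device computes a model update on its own data, and only the updated weights (never the raw data) are sent to the server, which combines them into the shared model. This is a toy one-parameter linear model, not Google's production implementation; all names and the learning-rate choice are illustrative.

```python
# Sketch of federated averaging: devices train locally on private data;
# the server only ever sees weights, never the (x, y) examples.

def local_update(weights, data, lr=0.01):
    """One on-device gradient step for a 1-D linear model y = w * x."""
    w = weights
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_round(global_w, device_datasets):
    """Server averages locally computed weights, weighted by dataset size."""
    updates = [local_update(global_w, d) for d in device_datasets]
    sizes = [len(d) for d in device_datasets]
    return sum(w * n for w, n in zip(updates, sizes)) / sum(sizes)

# Three devices, each holding private (x, y) pairs drawn from y = 3x:
devices = [[(1, 3), (2, 6)], [(3, 9)], [(4, 12), (5, 15)]]
w = 0.0
for _ in range(50):
    w = federated_round(w, devices)
print(round(w, 2))  # converges toward the true slope 3.0
```

The real system adds secure aggregation, client sampling, and compression on top of this loop, but the division of labor is the same: gradients are computed where the data lives.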

Source: https://research.googleblog.com/2017/04/federated-learning-collaborative.html?m=1

CONIKS: Bringing Key Transparency to End Users

We present CONIKS, an end-user key verification service capable of integration in end-to-end encrypted communication systems. CONIKS builds on transparency log proposals for web server certificates but solves several new challenges specific to key verification for end users. CONIKS obviates the need for global third-party monitors and enables users to efficiently monitor their own key bindings for consistency, downloading less than 20 kB per day to do so even for a provider with billions of users. CONIKS users and providers can collectively audit providers for non-equivocation, and this requires downloading a constant 2.5 kB per provider per day. Additionally, CONIKS preserves the level of privacy offered by today’s major communication services, hiding the list of usernames present and even allowing providers to conceal the total number of users in the system.
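
The efficient self-monitoring described above rests on Merkle-tree inclusion proofs: a user can check that the provider's published root commits to their own name-to-key binding by verifying a logarithmic-size path of sibling hashes. CONIKS itself uses a Merkle *prefix* tree with additional privacy machinery (VRFs, commitments); the following is a simplified plain-Merkle-tree sketch of the verification step only, with all names hypothetical.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Binary Merkle tree over the leaves; returns all levels, root last."""
    level = [h(leaf) for leaf in leaves]
    levels = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def inclusion_proof(levels, index):
    """Sibling hashes from the leaf at `index` up to the root."""
    proof = []
    for level in levels[:-1]:
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))  # (hash, is_left)
        index //= 2
    return proof

def verify(leaf, proof, root):
    """Recompute the root from a claimed binding and its proof path."""
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

bindings = [b"alice:pubkey-A", b"bob:pubkey-B", b"carol:pubkey-C", b"dave:pubkey-D"]
levels = build_tree(bindings)
root = levels[-1][0]          # the value the provider publishes each epoch
proof = inclusion_proof(levels, 1)
print(verify(b"bob:pubkey-B", proof, root))     # True: bob's binding is present
print(verify(b"bob:pubkey-EVIL", proof, root))  # False: a swapped key fails
```

Because the proof is only a hash per tree level, monitoring cost grows logarithmically with the user count, which is what makes the sub-20 kB/day figure plausible even at billions of users.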

Source: https://eprint.iacr.org/2014/1004.pdf

Building a RAPPOR with the Unknown: Privacy-Preserving Learning of Associations and Data Dictionaries

Techniques based on randomized response enable the collection of potentially sensitive data from clients in a privacy-preserving manner with strong local differential privacy guarantees. A recent such technology, RAPPOR, enables estimation of the marginal frequencies of a set of strings via privacy-preserving crowdsourcing. However, this original estimation process relies on a known dictionary of possible strings; in practice, this dictionary can be extremely large and/or unknown. In this paper, we propose a novel decoding algorithm for the RAPPOR mechanism that enables the estimation of “unknown unknowns,” i.e., strings we do not know we should be estimating. To enable learning without explicit dictionary knowledge, we develop methodology for estimating the joint distribution of multiple variables collected with RAPPOR. Our contributions are not RAPPOR-specific, and can be generalized to other local differential privacy mechanisms for learning distributions of string-valued random variables.
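
The randomized-response primitive underlying RAPPOR is simple to state: each client flips its true bit with some probability before reporting, and the aggregator inverts the known noise distribution to recover the population frequency. RAPPOR proper layers Bloom filters and two-stage (permanent plus instantaneous) randomization on top; this sketch shows only the core mechanism, with an illustrative truth probability.

```python
import random

def randomize(bit, p=0.75):
    """Report the true bit with probability p, else the flipped bit."""
    return bit if random.random() < p else 1 - bit

def estimate(reports, p=0.75):
    """Unbias the observed frequency: E[obs] = p*f + (1-p)*(1-f)."""
    obs = sum(reports) / len(reports)
    return (obs - (1 - p)) / (2 * p - 1)

random.seed(0)
true_bits = [1] * 3000 + [0] * 7000   # true frequency is 0.30
reports = [randomize(b) for b in true_bits]
print(round(estimate(reports), 2))    # close to the true 0.30
```

No individual report reveals much (any single client's bit is plausibly deniable), yet the aggregate estimate concentrates around the true frequency as the population grows, which is the trade-off local differential privacy formalizes.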

Source: https://research.google.com/pubs/pub45382.html

Privacy-Preserving Network Flow Recording

Network flow recording is an important tool with applications that range from legal compliance and security auditing to network forensics, troubleshooting, and marketing. Unfortunately, current network flow recording technologies do not allow network operators to enforce a privacy policy on the data that is recorded, in particular how this data is stored and used within the organization. Challenges to building such a technology include the public key infrastructure, scalability, and gathering statistics about the data while still preserving privacy.

We present a network flow recording technology that addresses these challenges by using Identity Based Encryption in combination with privacy-preserving semantics for on-the-fly statistics. We argue that our implementation supports a wide range of policies that cover many current applications of network flow recording. We also characterize the performance and scalability of our implementation and find that the encryption and statistics scale well and can easily keep up with the rate at which commodity systems can capture traffic, with a couple of interesting caveats about the size of the subnet that data is being recorded for and how statistics generation is affected by implementation details. We conclude that privacy-preserving network flow recording is possible at 10 gigabit rates for subnets as large as a /20 (4096 hosts).
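
The "statistics before storage" pattern the abstract describes can be illustrated with a toy recorder: aggregate counters are updated on the fly while individual flow records are stored only in a pseudonymized form. This sketch substitutes a keyed hash for the paper's Identity Based Encryption, so it only hints at the architecture; the key, addresses, and field names are all made up for illustration.

```python
import hashlib
import hmac
import ipaddress
from collections import Counter

SITE_KEY = b"per-deployment secret"  # stand-in; the paper uses Identity Based Encryption

def pseudonymize(addr: str) -> str:
    """Keyed hash of an address, so stored records carry no raw IPs."""
    return hmac.new(SITE_KEY, addr.encode(), hashlib.sha256).hexdigest()[:16]

def record_flows(flows, subnet="10.0.0.0/24"):
    """Store protected flow records while computing statistics on the fly."""
    net = ipaddress.ip_network(subnet)
    stored, stats = [], Counter()
    for src, dst, nbytes in flows:
        if ipaddress.ip_address(src) in net:
            stats["flows"] += 1        # aggregate stats: computed pre-protection
            stats["bytes"] += nbytes
            stored.append((pseudonymize(src), pseudonymize(dst), nbytes))
    return stored, stats

flows = [("10.0.0.5", "93.184.216.34", 1500),
         ("10.0.0.9", "192.0.2.7", 4200),
         ("172.16.0.1", "192.0.2.7", 9000)]  # outside the monitored /24
stored, stats = record_flows(flows)
print(stats["flows"], stats["bytes"])  # 2 5700
```

The ordering matters: statistics are derived before the records are protected, so the operator keeps operational visibility without retaining raw identifiers it could later be compelled or tempted to misuse.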

Because network flow recording is one of the most serious threats to web privacy today, we believe that developing technology to enforce a privacy policy on the recorded data is an important first step before policy makers can make decisions about how network operators can and should store and use network flow data. Our goal in this paper is to explore the tradeoffs of performance and scalability vs. privacy, and the usefulness of the recorded data in forensics vs. privacy.

Source: https://www.cs.unm.edu/~crandall/dfrws2011.pdf

Differential Privacy for Collaborative Security

Fighting global security threats with only a local view is inherently difficult. Internet network operators need to fight global phenomena such as botnets, but they are hampered by the fact that operators can observe only the traffic in their local domains. We propose a collaborative approach to this problem, in which operators share aggregate information about the traffic in their respective domains through an automated query mechanism. We argue that existing work on differential privacy and type systems can be leveraged to build a programmable query mechanism that can express a wide range of queries while limiting what can be learned about individual customers. We report on our progress towards building such a mechanism, and we discuss opportunities and challenges of the collaborative security approach.
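
The basic building block such a query mechanism relies on is a differentially private aggregate, e.g. a counting query answered with calibrated Laplace noise. The sketch below shows that primitive in isolation, not the paper's programmable query system; the traffic records and epsilon value are illustrative.

```python
import math
import random

def laplace_noise(scale):
    """Sample from Laplace(0, scale) via the inverse CDF."""
    u = random.random() - 0.5
    return -scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))

def dp_count(records, predicate, epsilon):
    """Counting query (sensitivity 1) answered with Laplace(1/epsilon) noise,
    so no single customer's record changes the answer by much."""
    return sum(1 for r in records if predicate(r)) + laplace_noise(1 / epsilon)

random.seed(42)
# One operator's local view: flows to port 25 might indicate spam bots.
local_traffic = [{"dst_port": p} for p in [25, 80, 25, 443, 25, 25]]
noisy = dp_count(local_traffic, lambda r: r["dst_port"] == 25, epsilon=1.0)
print(round(noisy, 1))  # true count is 4; the shared answer is 4 plus noise
```

Each operator shares only such noisy aggregates; summing them across domains yields a global estimate (e.g. of botnet activity) while the noise bounds what any query can reveal about an individual customer.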

Source: https://www.cis.upenn.edu/~ahae/papers/collaborative-eurosec2010.pdf

Replacing Sawzall — a case study in domain-specific language migration

In a previous post, we described how data scientists at Google used Sawzall to perform powerful, scalable analysis. However, over the last three years we’ve eliminated almost all our Sawzall code, and now the niche that Sawzall occupied in our software ecosystem is mostly filled by Go. In this post, we’ll describe Sawzall’s role in Google’s analysis ecosystem, explain some of the problems we encountered as Sawzall use increased which motivated our migration, and detail the techniques we applied to achieve language-agnostic analysis while maintaining strong access controls and the ability to write fast, scalable analyses.

Source: http://www.unofficialgoogledatascience.com/2015/12/replacing-sawzall-case-study-in-domain.html