You’ve Got Vulnerability: Exploring Effective Vulnerability Notifications

Security researchers can send vulnerability notifications
to take proactive measures in securing systems at scale.
However, the factors affecting a notification’s efficacy
have not been deeply explored. In this paper, we report
on an extensive study of notifying thousands of parties
of security issues present within their networks, with an
aim of illuminating which fundamental aspects of notifications
have the greatest impact on efficacy. The vulnerabilities
used to drive our study span a range of protocols
and considerations: exposure of industrial control
systems; apparent firewall omissions for IPv6-based services;
and exploitation of local systems in DDoS amplification
attacks. We monitored vulnerable systems for
several weeks to determine their rate of remediation. By
comparing with experimental controls, we analyze the
impact of a number of variables: choice of party to contact
(WHOIS abuse contacts versus national CERTs versus
US-CERT), message verbosity, hosting an informational
website linked to from the message, and translating
the message into the notified party’s local language. We
also assess the outcome of the emailing process itself
(bounces, automated replies, human replies, silence) and
characterize the sentiments and perspectives expressed in
both the human replies and an optional anonymous survey
that accompanied our notifications.
We find that various notification regimens do result
in different outcomes. The best observed process was
directly notifying WHOIS contacts with detailed information
in the message itself. These notifications had
a statistically significant impact on improving remediation,
and human replies were largely positive. However,
the majority of notified contacts did not take action, and
even when they did, remediation was often only partial.
Repeat notifications did not yield further patching. These
results are promising but ultimately modest; it behooves the
security community to investigate more deeply how to
improve the effectiveness of vulnerability notifications.
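
The best-performing regimen, direct notification of WHOIS abuse contacts, presumes you can find those contacts for a given vulnerable host. As a minimal sketch (not the authors' tooling), the following Python fragment shells out to the standard whois command and scrapes abuse-related email addresses from the response; registry output formats vary widely, so the regex is a heuristic rather than a parser.

```python
import re
import subprocess

EMAIL = re.compile(r"[\w.+-]+@[\w.-]+\.[A-Za-z]{2,}")

def abuse_contacts(ip: str) -> set[str]:
    """Heuristically extract abuse-contact emails for an IP via the whois CLI.

    Prefers emails on lines mentioning 'abuse'; falls back to any email
    found in the record if none are explicitly labeled.
    """
    record = subprocess.run(["whois", ip], capture_output=True, text=True).stdout
    labeled = {email
               for line in record.splitlines() if "abuse" in line.lower()
               for email in EMAIL.findall(line)}
    return labeled or set(EMAIL.findall(record))

if __name__ == "__main__":
    print(abuse_contacts("192.0.2.1"))  # TEST-NET address, purely illustrative
```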

Source: https://www.usenix.org/system/files/conference/usenixsecurity16/sec16_paper_li.pdf

Hiring Site Reliability Engineers

Operating distributed systems at scale requires an unusual set of skills (problem solving, programming, system design, networking, and OS internals) that are difficult to find in one person. At Google, we have developed ways to hire Site Reliability Engineers who blend software and systems skills, and to keep a high standard for new SREs across our many teams and sites, including a standardized interview format and the unusual practice of making hiring decisions by committee. Adopting similar practices can help your SRE or DevOps team grow by consistently hiring excellent coworkers.

Source: https://www.usenix.org/system/files/login/articles/login_june_07_jones.pdf

Apples and Oranges: Detecting Least-Privilege Violators with Peer Group Analysis

Clustering software into peer groups based on its apparent functionality
allows for simple, intuitive categorization of software that
can, in particular, help identify which software uses comparatively
more privilege than is necessary to implement its functionality. Such
relative comparison can improve the security of a software ecosystem
in a number of ways. For example, it can allow market operators
to incentivize software developers to adhere to the principle of
least privilege, e.g., by encouraging users to use alternative, less-privileged
applications for any desired functionality. This paper introduces
software peer group analysis, a novel technique to identify
least-privilege violations and rank software by the severity of
the violation. We show that peer group analysis is an effective tool
for detecting and estimating the severity of least-privilege violations.
It provides intuitive, meaningful results, even across different
definitions of peer groups and security-relevant privileges. Our evaluation
is based on empirically applying our analysis to over a million
software items, in two different online software markets, and on a
validation of our assumptions in a medium-scale user study.
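
The core idea invites a short sketch. The following Python fragment is a hypothetical, minimal illustration of peer group analysis as the abstract describes it: software is grouped by apparent functionality, and an item is scored by how rare its requested privileges are among its peers. The group names, privilege sets, and scoring rule are invented for illustration; the paper's actual definitions of peer groups and privileges differ.

```python
from collections import defaultdict

# Hypothetical app records: (name, peer group, requested privileges).
APPS = [
    ("FlashlightA", "flashlight", {"CAMERA"}),
    ("FlashlightB", "flashlight", {"CAMERA"}),
    ("FlashlightC", "flashlight", {"CAMERA", "READ_CONTACTS", "FINE_LOCATION"}),
    ("WeatherA", "weather", {"FINE_LOCATION", "INTERNET"}),
    ("WeatherB", "weather", {"FINE_LOCATION", "INTERNET"}),
]

def violation_scores(apps):
    """Score each app by how unusual its privileges are within its peer group.

    A privilege requested by few peers contributes more to the score; an app
    whose every privilege is common among its peers scores near zero.
    """
    members = defaultdict(list)
    for _, group, privs in apps:
        members[group].append(privs)

    scores = {}
    for name, group, privs in apps:
        peers = members[group]
        # Rarity of a privilege = fraction of peers that do NOT request it.
        rarity = [1 - sum(p in peer for peer in peers) / len(peers) for p in privs]
        scores[name] = sum(rarity)
    return scores

if __name__ == "__main__":
    # FlashlightC scores highest: contacts and location are rare among flashlights.
    for name, score in sorted(violation_scores(APPS).items(), key=lambda kv: -kv[1]):
        print(f"{name}: {score:.2f}")
```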

Source: https://arxiv.org/pdf/1510.07308v1.pdf

Corporate Learning at Scale: Lessons from a Large Online Course at Google

Google Research recently tested a massive online class model
for an internal engineering education program, with machine
learning as the topic, that blended theoretical concepts and
Google-specific software tool tutorials. The goal of this training
was to foster engineering capacity to leverage machine
learning tools in future products. The course was delivered
both synchronously and asynchronously, and students had the
choice between studying independently or participating with
a group. Since all students are company employees, unlike in
most publicly offered MOOCs, we can continue to measure the
students' behavioral change long after the course is complete.
This paper describes the course, outlines the available
data set, and presents directions for analysis.

Source: https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/42855.pdf