For Good Measure: Remember the Recall

Sherlock’s statement is most often quoted to imply that uncommon
scenarios can all be explained away by reason and logic. This is missing
the point. The quote’s power is in the elimination of the impossible
before engaging in such reasoning. The present authors seek to expose
a similar misapplication of methodology as it exists throughout information
security and offer a framework by which to elevate the common Watson.

Source: https://www.usenix.org/system/files/login/articles/login_summer18_11_geer.pdf


A Comprehensive Formal Security Analysis of OAuth 2.0

The OAuth 2.0 protocol is one of the most widely deployed authorization/single sign-on (SSO) protocols
and also serves as the foundation for the new SSO standard OpenID Connect. Despite OAuth's popularity,
analysis efforts so far have mostly targeted bugs in specific implementations, and have either been
based on formal models that abstract away many web features or provided no formal treatment at all.
In this paper, we carry out the first extensive formal analysis of the OAuth 2.0 standard in an expressive
web model. Our analysis aims at establishing strong authorization, authentication, and session
integrity guarantees, for which we provide formal definitions. Our formal analysis covers all four
OAuth grant types (authorization code grant, implicit grant, resource owner password credentials
grant, and client credentials grant), even when they run simultaneously in the same or different
relying parties and identity providers, and it also considers malicious relying parties, identity
providers, and browsers. Our modeling and analysis of the OAuth 2.0 standard assumes that security
recommendations and best practices are followed in order to avoid obvious and known attacks.
While proving the security of OAuth in our model, we discovered four attacks which break its
security. The vulnerabilities can be exploited in practice and are also present in OpenID Connect.
We propose fixes for the identified vulnerabilities, and then, for the first time, actually prove the
security of OAuth in an expressive web model. In particular, we show that the fixed version of OAuth
(with security recommendations and best practices in place) provides the authorization, authentication,
and session integrity properties we specify.

Source: https://arxiv.org/pdf/1601.01229.pdf
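
As a concrete anchor for the grant types listed above, here is a minimal sketch of the relying
party's side of the authorization code grant, including the `state` check that underpins the
session-integrity property the paper formalizes. The endpoint URL, client identifier, and redirect
URI below are placeholders, not values from the paper.

```python
import secrets
from urllib.parse import urlencode

# Placeholder values; real deployments obtain these from the identity
# provider's configuration.
AUTHZ_ENDPOINT = "https://idp.example/authorize"
CLIENT_ID = "my-client"
REDIRECT_URI = "https://rp.example/callback"

def build_authorization_url(scope: str) -> tuple[str, str]:
    """Step 1 of the authorization code grant: redirect the browser to the
    identity provider. The `state` value binds the eventual callback to this
    session, the standard defense against session-integrity (CSRF) attacks."""
    state = secrets.token_urlsafe(16)
    params = {
        "response_type": "code",   # selects the authorization code grant
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": scope,
        "state": state,
    }
    return f"{AUTHZ_ENDPOINT}?{urlencode(params)}", state

def validate_callback(returned_state: str, expected_state: str) -> bool:
    """Step 2: before exchanging the returned code for a token, the relying
    party must check that `state` matches the value it issued."""
    return secrets.compare_digest(returned_state, expected_state)

url, state = build_authorization_url("openid profile")
```

Skipping or mis-implementing the `state` check is exactly the kind of per-implementation bug the
authors contrast with their whole-standard analysis.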

Cross-Origin Infoleaks

Browsers do their best to enforce a hard security boundary on an origin-by-origin basis. To vastly
oversimplify, applications hosted at distinct origins must not be able to read each other’s data or
take action on each other’s behalf in the absence of explicit cooperation. Generally speaking,
browsers have done a reasonably good job at this; bugs crop up from time to time, but they’re
well-understood to be bugs by browser vendors and developers, and they’re addressed promptly.
The web platform, however, is designed to encourage both cross-origin communication and
inclusion. These design decisions weaken the borders that browsers place around origins, creating
opportunities for side-channel attacks (pixel perfect, resource timing, etc.) and server-side
confusion about the provenance of requests (CSRF, cross-site search). Spectre and related attacks
based on speculative execution make the problem worse by allowing attackers to read more
memory than they’re supposed to, which may contain sensitive cross-origin responses fetched by
documents in the same process. Spectre is a powerful attack technique, but it should be seen as a
(large) iterative improvement over the platform’s existing side-channels.
This document reviews the known classes of cross-origin information leakage, and uses this
categorization to evaluate some of the mitigations that have recently been proposed (CORB,
From-Origin, Sec-Metadata / Sec-Site, SameSite cookies and Cross-Origin-Isolate). We attempt to
survey their applicability to each class of attack, and to evaluate developers’ ability to deploy them
properly in real-world applications. Ideally, we’ll be able to settle on mitigation techniques which
are both widely deployable, and broadly scoped.

Source: https://www.arturjanc.com/cross-origin-infoleaks.pdf
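
Of the mitigations surveyed, the Sec-Metadata / Sec-Site proposal (which later shipped in browsers
as the Fetch Metadata request headers, e.g. `Sec-Fetch-Site`) lets a server reason about the
provenance of a request before acting on it. A minimal sketch of such a server-side check follows;
the header names are real, but the allow-list policy is an invented illustration:

```python
def is_request_allowed(headers: dict[str, str]) -> bool:
    """Illustrative resource-isolation policy based on Fetch Metadata."""
    site = headers.get("Sec-Fetch-Site")
    if site is None:
        # Older browsers don't send the header; rejecting them outright
        # would break legitimate traffic, so fall through.
        return True
    # Same-origin, same-site, and user-initiated ("none") requests pass.
    if site in ("same-origin", "same-site", "none"):
        return True
    # Cross-site requests are allowed only as top-level navigations,
    # blocking the subresource vector behind CSRF and cross-site search.
    return headers.get("Sec-Fetch-Mode", "") == "navigate"
```

This kind of check addresses the "server-side confusion about the provenance of requests"
discussed above without requiring per-endpoint tokens.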

PREDATOR: Proactive Recognition and Elimination of Domain Abuse at Time-Of-Registration

Miscreants register thousands of new domains every day to launch
Internet-scale attacks, such as spam, phishing, and drive-by downloads.
Quickly and accurately determining a domain’s reputation
(association with malicious activity) provides a powerful tool for mitigating
threats and protecting users. Yet, existing domain reputation
systems work by observing domain use (e.g., lookup patterns, content
hosted)—often too late to prevent miscreants from reaping benefits of
the attacks that they launch.
As a complement to these systems, we explore the extent to which
features evident at domain registration indicate a domain’s subsequent
use for malicious activity. We develop PREDATOR, an approach that
uses only time-of-registration features to establish domain reputation.
We base its design on the intuition that miscreants need to obtain
many domains to ensure profitability and attack agility, leading to
abnormal registration behaviors (e.g., burst registrations, textually
similar names). We evaluate PREDATOR using registration logs of
second-level .com and .net domains over five months. PREDATOR
achieves a 70% detection rate with a false positive rate of 0.35%, thus
making it an effective—and early—first line of defense against the
misuse of DNS domains. It predicts malicious domains when they
are registered, which is typically days or weeks earlier than existing
DNS blacklists.

Source: http://www.icir.org/vern/papers/predator-ccs16.pdf
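
The two registration-time intuitions the abstract names — burst registrations and textually similar
names — can be sketched as simple features. This is illustrative only, not PREDATOR's actual feature
set; the window sizes, thresholds, and example domains are invented.

```python
from datetime import datetime, timedelta

def in_burst(timestamps: list[datetime], window: timedelta, threshold: int) -> bool:
    """True if `threshold` or more registrations fall inside any `window`,
    a simple proxy for the burst-registration behavior of bulk abusers."""
    ts = sorted(timestamps)
    lo = 0
    for hi in range(len(ts)):
        while ts[hi] - ts[lo] > window:
            lo += 1
        if hi - lo + 1 >= threshold:
            return True
    return False

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance as a proxy for textual similarity between
    registered names (e.g. 'paypa1-login' vs 'paypal-login')."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]
```

Because both features are computable from the registration log alone, a classifier built on them can
flag a domain before it ever serves content or appears in DNS lookups — the paper's central point.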

BeyondCorp 1 – A New Approach to Enterprise Security

Virtually every company today uses firewalls to enforce perimeter
security. However, this security model is problematic because, when
that perimeter is breached, an attacker has relatively easy access to a
company’s privileged intranet. As companies adopt mobile and cloud technologies,
the perimeter is becoming increasingly difficult to enforce. Google
is taking a different approach to network security. We are removing the
requirement for a privileged intranet and moving our corporate applications
to the Internet.

Source: https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/43231.pdf

BeyondCorp 2 – Design to Deployment

The goal of Google’s BeyondCorp initiative is to improve our security
with regard to how employees and devices access internal applications.
Unlike the conventional perimeter security model, BeyondCorp
doesn’t gate access to services and tools based on a user’s physical location
or the originating network; instead, access policies are based on information
about a device, its state, and its associated user. BeyondCorp considers both
internal networks and external networks to be completely untrusted, and
gates access to applications by dynamically asserting and enforcing levels, or
“tiers,” of access.
We present an overview of how Google transitioned from traditional security infrastructure
to the BeyondCorp model, along with the challenges we faced and the lessons we learned in the process.
For an architectural discussion of BeyondCorp, see [1].

Source: https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/44860.pdf
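
The tiered-access idea can be sketched as a policy function whose inputs are device state and user
identity and which deliberately never consults the source network. The fields, tier numbers, and
thresholds below are invented for illustration; they are not BeyondCorp's actual schema.

```python
from dataclasses import dataclass

@dataclass
class Device:
    managed: bool        # enrolled in the device inventory
    disk_encrypted: bool
    os_patched: bool

def access_tier(device: Device) -> int:
    """Higher tier = more trust. Note that location and originating
    network are deliberately absent from the inputs."""
    if not device.managed:
        return 0
    tier = 1
    if device.disk_encrypted:
        tier += 1
    if device.os_patched:
        tier += 1
    return tier

def can_access(app_required_tier: int, device: Device) -> bool:
    """Gate an application on the dynamically computed tier, not on
    whether the request arrives from a 'privileged' intranet."""
    return access_tier(device) >= app_required_tier
```

Because tiers are recomputed from current device state, a laptop that falls behind on patches loses
access to high-tier applications without any network reconfiguration.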