Andromeda: Performance, Isolation, and Velocity at Scale in Cloud Network Virtualization

This paper presents our design and experience with Andromeda,
Google Cloud Platform’s network virtualization
stack. Our production deployment poses several challenging
requirements, including performance isolation among
customer virtual networks, scalability, rapid provisioning
of large numbers of virtual hosts, bandwidth and latency
largely indistinguishable from the underlying hardware,
and high feature velocity combined with high availability.
Andromeda is designed around a flexible hierarchy of
flow processing paths. Flows are mapped to a programming
path dynamically based on feature and performance
requirements. We introduce the Hoverboard programming
model, which uses gateways for the long tail of low bandwidth
flows, and enables the control plane to program
network connectivity for tens of thousands of VMs in
seconds. The on-host dataplane is based around a high-performance
OS bypass software packet processing path. CPU-intensive per-packet
operations with higher latency targets are executed on coprocessor
threads. This architecture
allows Andromeda to decouple feature growth from
fast path performance, as many features can be implemented
solely on the coprocessor path. We demonstrate
that the Andromeda datapath achieves performance that is
competitive with hardware while maintaining the flexibility
and velocity of a software-based architecture.

Source: https://www.usenix.org/system/files/conference/nsdi18/nsdi18-dalton.pdf
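The Hoverboard idea is the part that is easiest to picture in code. As a rough sketch of the model the abstract describes (the controller API, names, and threshold below are mine, not Andromeda's): every flow is initially routed through a shared gateway, and the control plane programs a direct host-to-host path only for the few flows whose reported bandwidth crosses a threshold.

```python
# Illustrative sketch of the Hoverboard model, assuming a usage-report
# driven controller; identifiers and the threshold are hypothetical.

OFFLOAD_THRESHOLD_BPS = 20_000_000  # assumed cutoff for "high bandwidth"

class HoverboardController:
    def __init__(self):
        # (src_vm, dst_vm) -> directly programmed next hop
        self.offloaded = {}

    def route(self, src_vm, dst_vm):
        """Next hop for a flow: direct if offloaded, else the gateway."""
        return self.offloaded.get((src_vm, dst_vm), "hoverboard-gateway")

    def report_usage(self, src_vm, dst_vm, bps):
        """Hosts periodically report per-flow bandwidth estimates."""
        if bps >= OFFLOAD_THRESHOLD_BPS:
            # Only the small set of high-bandwidth flows gets a direct
            # host-to-host route; the long tail stays on the gateway,
            # so the control plane never programs O(VMs^2) flows.
            self.offloaded[(src_vm, dst_vm)] = f"direct:{dst_vm}"

ctrl = HoverboardController()
print(ctrl.route("vm-a", "vm-b"))       # hoverboard-gateway
ctrl.report_usage("vm-a", "vm-b", 5e7)
print(ctrl.route("vm-a", "vm-b"))       # direct:vm-b
```

The payoff is provisioning time: the default gateway route covers every pair of VMs up front, so connectivity for tens of thousands of VMs does not wait on per-pair programming.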

PREDATOR: Proactive Recognition and Elimination of Domain Abuse at Time-Of-Registration

Miscreants register thousands of new domains every day to launch
Internet-scale attacks, such as spam, phishing, and drive-by downloads.
Quickly and accurately determining a domain’s reputation
(association with malicious activity) provides a powerful tool for mitigating
threats and protecting users. Yet, existing domain reputation
systems work by observing domain use (e.g., lookup patterns, content
hosted)—often too late to prevent miscreants from reaping benefits of
the attacks that they launch.
As a complement to these systems, we explore the extent to which
features evident at domain registration indicate a domain’s subsequent
use for malicious activity. We develop PREDATOR, an approach that
uses only time-of-registration features to establish domain reputation.
We base its design on the intuition that miscreants need to obtain
many domains to ensure profitability and attack agility, leading to
abnormal registration behaviors (e.g., burst registrations, textually
similar names). We evaluate PREDATOR using registration logs of
second-level .com and .net domains over five months. PREDATOR
achieves a 70% detection rate with a false positive rate of 0.35%, thus
making it an effective—and early—first line of defense against the
misuse of DNS domains. It predicts malicious domains when they
are registered, which is typically days or weeks earlier than existing
DNS blacklists.

Source: http://www.icir.org/vern/papers/predator-ccs16.pdf
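Two of the registration-time signals the abstract alludes to (burst registrations and textually similar names) are easy to sketch. This is only an illustration of the kind of feature involved, not PREDATOR's actual feature set:

```python
# Hypothetical registration-time features: how bursty the registration
# is, and how textually close the new name is to recent registrations.

from difflib import SequenceMatcher

def burst_count(recent_timestamps, new_ts, window_s=300):
    """Registrations that landed in the window just before this one."""
    return sum(1 for t in recent_timestamps if 0 <= new_ts - t <= window_s)

def max_similarity(recent_domains, new_domain):
    """Highest string similarity to a recently registered domain."""
    if not recent_domains:
        return 0.0
    return max(SequenceMatcher(None, d, new_domain).ratio()
               for d in recent_domains)

recent = ["cheap-meds-01.com", "cheap-meds-02.com", "cheap-meds-03.com"]
print(burst_count([10, 20, 30], new_ts=40))                    # 3
print(round(max_similarity(recent, "cheap-meds-04.com"), 2))   # ~0.94
```

Features like these feed a classifier trained on registration logs; the point of the paper is that such signals are available at registration time, days or weeks before the domain ever appears on a blacklist.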

Systematic Generation of Fast Elliptic Curve Cryptography Implementations

Widely used implementations of cryptographic primitives
employ number-theoretic optimizations specific to large
prime numbers used as moduli of arithmetic. These optimizations
have been applied manually by a handful of experts,
using informal rules of thumb. We present the first
automatic compiler that applies these optimizations, starting
from straightforward modular-arithmetic-based algorithms
and producing code around 5X faster than with off-the-shelf
arbitrary-precision integer libraries for C. Furthermore, our
compiler is implemented in the Coq proof assistant; it produces
not just C-level code but also proofs of functional
correctness. We evaluate the compiler on several key primitives
from elliptic curve cryptography.

Source: https://people.csail.mit.edu/jgross/personal-website/papers/2018-fiat-crypto-pldi-draft.pdf
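To make the "number-theoretic optimizations specific to large prime numbers" concrete: for a pseudo-Mersenne prime such as p = 2^255 - 19 (Curve25519's modulus), 2^255 ≡ 19 (mod p), so the high half of a product can be folded back in with a multiply-by-19 instead of a general division. The toy Python below shows the trick itself; it is not the compiler's generated C code.

```python
# Toy illustration of prime-specific reduction for p = 2^255 - 19.
import random

P = 2**255 - 19

def reduce_p25519(x):
    """Reduce x mod 2^255 - 19 without a general division."""
    while x >= 2**255:
        lo, hi = x & (2**255 - 1), x >> 255
        x = lo + 19 * hi          # valid because 2^255 ≡ 19 (mod p)
    return x - P if x >= P else x

a, b = random.randrange(P), random.randrange(P)
assert reduce_p25519(a * b) == (a * b) % P
```

The compiler's contribution is generating, and proving correct, fixed-width C code that exploits exactly this kind of structure, rather than relying on an arbitrary-precision library at runtime.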

A Mathematical Theory of Communication

The recent development of various methods of modulation such as PCM and PPM which exchange
bandwidth for signal-to-noise ratio has intensified the interest in a general theory of communication. A
basis for such a theory is contained in the important papers of Nyquist and Hartley on this subject. In the
present paper we will extend the theory to include a number of new factors, in particular the effect of noise
in the channel, and the savings possible due to the statistical structure of the original message and due to the
nature of the final destination of the information.

Source: http://math.harvard.edu/~ctm/home/text/others/shannon/entropy/entropy.pdf
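The "exchange bandwidth for signal-to-noise ratio" in the opening sentence is made quantitative later in the paper by the capacity of the band-limited Gaussian channel, C = W log2(1 + S/N). A quick numerical check of the trade-off (the numbers below are just an example):

```python
# Channel capacity C = W * log2(1 + S/N): trading bandwidth for SNR.
from math import log2

def capacity_bps(bandwidth_hz, snr_linear):
    return bandwidth_hz * log2(1 + snr_linear)

print(capacity_bps(1e6, 15))     # 4.0 Mbit/s at 1 MHz,  SNR 15  (~11.8 dB)
print(capacity_bps(0.5e6, 255))  # 4.0 Mbit/s at 0.5 MHz, SNR 255 (~24.1 dB)
```

Halving the bandwidth keeps the same capacity only if 1 + S/N is squared (16 becomes 256 here), which is the kind of exchange PCM and PPM make.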

Blocking-resistant communication through domain fronting

Abstract: We describe “domain fronting,” a versatile
censorship circumvention technique that hides the remote
endpoint of a communication. Domain fronting
works at the application layer, using HTTPS, to communicate
with a forbidden host while appearing to communicate
with some other host, permitted by the censor.
The key idea is the use of different domain names at
different layers of communication. One domain appears
on the “outside” of an HTTPS request—in the DNS request
and TLS Server Name Indication—while another
domain appears on the “inside”—in the HTTP Host
header, invisible to the censor under HTTPS encryption.
A censor, unable to distinguish fronted and non-fronted
traffic to a domain, must choose between allowing
circumvention traffic and blocking the domain entirely,
which results in expensive collateral damage. Domain
fronting is easy to deploy and use and does not require
special cooperation by network intermediaries. We
identify a number of hard-to-block web services, such as
content delivery networks, that support domain-fronted
connections and are useful for censorship circumvention.
Domain fronting, in various forms, is now a circumvention
workhorse. We describe several months of deployment
experience in the Tor, Lantern, and Psiphon circumvention
systems, whose domain-fronting transports
now connect thousands of users daily and transfer many
terabytes per month.

Source: https://www.bamsoftware.com/papers/fronting.pdf
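The "different domain names at different layers of communication" is simple to demonstrate. In the sketch below, the front domain is what the censor sees in DNS and in the TLS SNI, while the Host header, encrypted inside TLS, names the real backend. The domain names are placeholders, and whether a given CDN still permits the mismatch is a matter of its policy:

```python
# Minimal domain-fronting illustration: the URL host (DNS + SNI) is a
# permitted front; the HTTP Host header names the hidden service.
import requests

FRONT = "allowed-front.example"        # visible to the censor
HIDDEN = "forbidden-service.example"   # visible only inside the TLS tunnel

resp = requests.get(
    f"https://{FRONT}/",
    headers={"Host": HIDDEN},  # the CDN routes on this, not on the SNI
    timeout=10,
)
print(resp.status_code)
```

This is also why the censor's choice is so stark: from the outside, this request is indistinguishable from ordinary traffic to the front domain.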

Examining the costs and causes of cyber incidents

In 2013, the US President signed an executive order designed to help secure the nation’s critical infrastructure from cyberattacks. As part of that order, he directed the National Institute of Standards and Technology (NIST) to develop a framework that would become an authoritative source for information security best practices. Because adoption of the framework is voluntary, it faces the challenge of incentivizing firms to follow along. Will frameworks such as that proposed by NIST really induce firms to adopt better security controls? And if not, why? This research seeks to examine the composition and costs of cyber events, and attempts to address whether or not there exist incentives for firms to improve their security practices and reduce the risk of attack. Specifically, we examine a sample of over 12 000 cyber events that include data breaches, security incidents, privacy violations, and phishing crimes. First, we analyze the characteristics of these breaches (such as causes and types of information compromised). We then examine the breach and litigation rate, by industry, and identify the industries that incur the greatest costs from cyber events. We then compare these costs to bad debts and fraud within other industries. The findings suggest that public concerns regarding the increasing rates of breaches and legal actions may be excessive compared to the relatively modest financial impact to firms that suffer these events. Specifically, we find that the cost of a typical cyber incident in our sample is less than $200 000 (about the same as the firm’s annual IT security budget), and that this represents only 0.4% of their estimated annual revenues.

Source: https://academic.oup.com/cybersecurity/article/2/2/121/2525524/Examining-the-costs-and-causes-of-cyber-incidents

Cutting the Cord: a Robust Wireless Facilities Network for Data Centers

Today’s network control and management traffic is limited by its
reliance on existing data networks. Fate sharing in this context
is highly undesirable, since control traffic has very different availability
and traffic delivery requirements. In this paper, we explore
the feasibility of building a dedicated wireless facilities network for
data centers. We propose Angora, a low-latency facilities network
using low-cost, 60GHz beamforming radios that provides robust
paths decoupled from the wired network, and flexibility to adapt to
workloads and network dynamics. We describe our solutions to address
challenges in link coordination, link interference and network
failures. Our testbed measurements and simulation results show
that Angora enables a large number of low-latency control paths to
run concurrently, while providing low-latency end-to-end message
delivery with high tolerance for radio and rack failures.

Source: https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/43860.pdf