In-Datacenter Performance Analysis of a Tensor Processing Unit™

Many architects believe that major improvements in cost-energy-performance must now come from domain-specific hardware. This paper evaluates a custom ASIC, called a Tensor Processing Unit (TPU), deployed in datacenters since 2015 that accelerates the inference phase of neural networks (NN). The heart of the TPU is a 65,536 8-bit MAC matrix multiply unit that offers a peak throughput of 92 TeraOps/second (TOPS), and a large (28MiB) software-managed on-chip memory. The TPU’s deterministic execution model is a better match to the 99th-percentile response-time requirement of our NN applications than are the time-varying optimizations of CPUs and GPUs (caches, out-of-order execution, multithreading, multiprocessing, prefetching, …) that help average throughput more than guaranteed latency. The lack of such features helps explain why, despite having myriad MACs and a big memory, the TPU is relatively small and low power. We compare the TPU to a server-class Intel Haswell CPU and an Nvidia K80 GPU, which are contemporaries deployed in the same datacenters. Our workload, written in the high-level TensorFlow framework, uses production NN applications (MLPs, CNNs, and LSTMs) that represent 95% of our datacenters’ NN inference demand. Despite low utilization for some applications, the TPU is on average about 15X–30X faster than its contemporary GPU or CPU, with TOPS/Watt about 30X–80X higher. Moreover, using the GPU’s GDDR5 memory in the TPU would triple achieved TOPS and raise TOPS/Watt to nearly 70X the GPU and 200X the CPU.
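As a sanity check on the headline figure: each 8-bit MAC performs two operations per cycle (a multiply and an add), and the paper's stated clock rate is 700 MHz, which together with the 256x256 MAC array reproduces the quoted ~92 TOPS peak. A minimal sketch of the arithmetic:

```python
# Back-of-the-envelope peak throughput for the TPU's matrix unit.
# The 65,536-MAC array and 92 TOPS figure come from the abstract;
# the 700 MHz clock is the operating frequency stated in the paper.
macs = 256 * 256      # 65,536 8-bit MAC units
ops_per_mac = 2       # one multiply + one add per cycle
clock_hz = 700e6      # 700 MHz

peak_tops = macs * ops_per_mac * clock_hz / 1e12
print(f"peak: {peak_tops:.1f} TOPS")  # ~91.8, quoted as 92 TOPS
```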

Source: https://drive.google.com/file/d/0Bx4hafXDDq2EMzRNcy1vSUxtcEk/view


Cutting the Cord: a Robust Wireless Facilities Network for Data Centers

Today’s network control and management traffic is limited by its reliance on existing data networks. Fate sharing in this context is highly undesirable, since control traffic has very different availability and traffic-delivery requirements. In this paper, we explore the feasibility of building a dedicated wireless facilities network for data centers. We propose Angora, a low-latency facilities network using low-cost, 60GHz beamforming radios that provides robust paths decoupled from the wired network, and flexibility to adapt to workloads and network dynamics. We describe our solutions to address challenges in link coordination, link interference, and network failures. Our testbed measurements and simulation results show that Angora enables a large number of low-latency control paths to run concurrently, while providing low-latency end-to-end message delivery with high tolerance for radio and rack failures.

Source: https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/43860.pdf

Flexible Network Bandwidth and Latency Provisioning in the Datacenter

Predictably sharing the network is critical to achieving high utilization in the datacenter. Past work has focused on providing bandwidth to endpoints, but often we want to allocate resources among multi-node services. In this paper, we present Parley, which provides service-centric minimum bandwidth guarantees that can be composed hierarchically. Parley also supports service-centric weighted sharing of bandwidth in excess of these guarantees. Further, we show how to configure these policies so services can get low latency even at high network load. We evaluate Parley on a multi-tiered oversubscribed network connecting 90 machines, each with a 10Gb/s network interface, and demonstrate that Parley is able to meet its goals.
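To make the policy concrete: each service first receives its minimum guarantee, and any remaining capacity is divided among still-unsatisfied services in proportion to their weights. The sketch below is a toy, single-link illustration of that "guarantee plus weighted excess" policy, not Parley's distributed enforcement mechanism; all names are ours.

```python
# Toy illustration (not Parley's implementation) of service-centric
# sharing: each service gets its minimum guarantee, then leftover link
# capacity is water-filled among still-unsatisfied services by weight.

def allocate(capacity, services):
    """services: dict name -> (guarantee, weight, demand), in Gb/s."""
    # Step 1: satisfy minimum guarantees (assumed admissible: sum <= capacity).
    alloc = {s: min(g, d) for s, (g, w, d) in services.items()}
    leftover = capacity - sum(alloc.values())
    # Step 2: split the excess by weight, never exceeding a service's demand.
    active = {s for s, (g, w, d) in services.items() if alloc[s] < d}
    while leftover > 1e-9 and active:
        total_w = sum(services[s][1] for s in active)
        share = leftover / total_w
        for s in list(active):
            g, w, d = services[s]
            give = min(w * share, d - alloc[s])
            alloc[s] += give
            leftover -= give
            if alloc[s] >= d - 1e-9:
                active.discard(s)
    return alloc

# Example: 10 Gb/s link; guarantees 2 and 3 Gb/s; weights 1 and 2.
# A ends up with 5 Gb/s, B with 5 Gb/s (B's 5 Gb/s demand caps it).
print(allocate(10.0, {"A": (2.0, 1.0, 8.0), "B": (3.0, 2.0, 5.0)}))
```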

Source: https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/43871.pdf

Trumpet: Timely and Precise Triggers in Data Centers

As data centers grow larger and strive to provide tight performance and availability SLAs, their monitoring infrastructure must move from passive systems that provide aggregated inputs to human operators, to active systems that enable programmed control. In this paper, we propose Trumpet, an event monitoring system that leverages CPU resources and end-host programmability to monitor every packet and report events at millisecond timescales. Trumpet users can express many network-wide events, and the system efficiently detects these events using triggers at end-hosts. Using careful design, Trumpet can evaluate triggers by inspecting every packet at full line rate even on future generations of NICs, scale to thousands of triggers per end-host while bounding packet processing delay to a few microseconds, and report events to a controller within 10 milliseconds, even in the presence of attacks. We demonstrate these properties using an implementation of Trumpet, and also show that it allows operators to describe new network events such as detecting correlated bursts and loss, identifying the root cause of transient congestion, and detecting short-term anomalies at the scale of a data center tenant.
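The core abstraction here is a trigger: a packet-match predicate plus an aggregate evaluated over a short time window, fired when the aggregate crosses a threshold. Below is an illustrative end-host trigger in that spirit; the class, field names, and the burst-detection example are our assumptions, not Trumpet's actual API.

```python
# Illustrative sketch of an end-host trigger: match packets against a
# predicate, aggregate bytes over a short window, and report when a
# threshold is crossed. A simplification, not Trumpet's implementation.
from dataclasses import dataclass

@dataclass
class Trigger:
    predicate: callable        # packet -> bool (e.g., match a flow)
    threshold: int             # fire when windowed byte count exceeds this
    window_ms: float = 10.0    # evaluation window
    _bytes: int = 0
    _window_start: float = 0.0

    def on_packet(self, pkt, now_ms):
        # Reset the aggregate when the window rolls over.
        if now_ms - self._window_start >= self.window_ms:
            self._bytes, self._window_start = 0, now_ms
        if self.predicate(pkt):
            self._bytes += pkt["len"]
            if self._bytes > self.threshold:
                return f"event: {self._bytes} B in window"  # report upward
        return None

# Detect a burst from 10.0.0.1: more than 100 KB within a 10 ms window.
trig = Trigger(lambda p: p["src"] == "10.0.0.1", threshold=100_000)
for t in range(8):
    evt = trig.on_packet({"src": "10.0.0.1", "len": 15_000}, now_ms=t)
    if evt:
        print(evt)
```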

Source: http://www.cs.yale.edu/homes/yu-minlan/writeup/sigcomm16.pdf

Onix: A Distributed Control Platform for Large-scale Production Networks

Computer networks lack a general control paradigm, as traditional networks do not provide any network-wide management abstractions. As a result, each new function (such as routing) must provide its own state distribution, element discovery, and failure recovery mechanisms. We believe this lack of a common control platform has significantly hindered the development of flexible, reliable, and feature-rich network control planes. To address this, we present Onix, a platform on top of which a network control plane can be implemented as a distributed system. Control planes written within Onix operate on a global view of the network, and use basic state distribution primitives provided by the platform. Thus Onix provides a general API for control plane implementations, while allowing them to make their own trade-offs among consistency, durability, and scalability.
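Concretely, Onix exposes the global view through its Network Information Base (NIB), a graph of network entities that applications read, write, and subscribe to while the platform handles state distribution. The sketch below imitates that pattern with a toy in-memory NIB; the method names are illustrative assumptions, not Onix's actual (C++) API.

```python
# Rough sketch of control logic over a NIB-like global network view.
# Entity and method names are illustrative; in Onix the platform, not
# the application, keeps this view consistent across controllers.

class NIB:
    """Toy stand-in for a Network Information Base: a keyed store of
    network entities with change notifications."""
    def __init__(self):
        self.entities = {}     # id -> attribute dict
        self.listeners = []    # callbacks fired on updates

    def register(self, callback):
        self.listeners.append(callback)

    def update(self, entity_id, **attrs):
        self.entities.setdefault(entity_id, {}).update(attrs)
        for cb in self.listeners:          # platform-driven notification
            cb(entity_id, self.entities[entity_id])

# A control application reacts to view changes instead of implementing
# its own discovery, state distribution, and failure recovery.
def on_change(entity_id, attrs):
    if attrs.get("type") == "link" and attrs.get("status") == "down":
        print(f"recompute routes around failed {entity_id}")

nib = NIB()
nib.register(on_change)
nib.update("link:sw1-sw2", type="link", status="down")
```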

Source: https://www.usenix.org/legacy/event/osdi10/tech/full_papers/Koponen.pdf

Google Infrastructure Security Design Overview

This document gives an overview of how security is designed into Google’s technical infrastructure. This global-scale infrastructure is designed to provide security through the entire information processing lifecycle at Google. This infrastructure provides secure deployment of services, secure storage of data with end-user privacy safeguards, secure communications between services, secure and private communication with customers over the internet, and safe operation by administrators.
Google uses this infrastructure to build its internet services, including both consumer services such as Search, Gmail, and Photos, and enterprise services such as G Suite and Google Cloud Platform.
We will describe the security of this infrastructure in progressive layers starting from the physical security of our data centers, continuing on to how the hardware and software that underlie the infrastructure are secured, and finally, describing the technical constraints and processes in place to support operational security.

Source: https://cloud.google.com/security/security-design/resources/google_infrastructure_whitepaper_fa.pdf