Securing Clouds

Notes on the “Lessons Learned from Securing Google and Google Cloud” talk by Niels Provos

Summary

  • Defense in Depth at scale by default
    • Protect identities by default
    • Protect data across full lifecycle by default
    • Protect resources by default
  • Trust through transparency
  • Automate best practices and prevent common mistakes at scale
  • Share innovation to raise the bar, support and invest in the security community.
  • Address common cases programmatically
  • Empower customers to fulfill their security responsibilities
  • Trust and security can be the accelerant


Protecting resources behind an authenticating proxy

Today, we’re putting our core web services behind the protections provided by U2F and Google’s account-takeover and anomaly-detection systems. The authenticating proxy gives us phishing-resistant authentication, and authorization comes through IAM roles assigned to the user’s Google account.

Prerequisites:

  • Google account
  • A U2F YubiKey enrolled and enforced for the users/groups that will be accessing the application.
  • An hour or so.
  • A global cloud that has been operating at billions of rps for decades. (Beyond the scope of this article.)


Titan: a custom TPM and more

I listened to a podcast and cut out the chit-chat, so you don’t have to:

Titan is a tiny security co-processing chip used for encryption, hardware authentication, and service authentication.

Purpose

Every piece of hardware in Google’s infrastructure can be individually identified and cryptographically verified, and any service using it mutually authenticates to that hardware. This includes servers, networking cards, switches: everything. The Titan chip is one of the ways to accomplish that.

The chip certifies that hardware is in a trusted good state. If this verification fails, the hardware will not boot, and will be replaced.

Every time a new BIOS is pushed, Titan checks that the code is authentic Google code before allowing it to be installed. It then checks, each time that code is booted, that it is still authentic before allowing boot to continue.

‘Similar in theory to the U2F security keys: everything should have an identity, hardware and software, and everything’s identity is checked all the time.’

There were suggestions that it plays an important role in hardware-level data encryption, key management systems, etc.

Hardware

Each chip is fused with a unique identifier. Fusing is done sequentially, so each chip can be verified as part of the inventory sequence.

Three main functions: an RNG, a crypto engine, and a monotonic counter. The first two are self-explanatory. The monotonic counter protects against replay attacks and makes logs tamper-evident.
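How a monotonic counter makes logs tamper-evident: chain each entry to its predecessor by hash and stamp it with the counter, so deleting or replaying an entry breaks the chain or leaves a gap in the sequence. A toy sketch (not Titan’s actual log format):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

type entry struct {
	counter uint64   // value of the hardware monotonic counter
	prev    [32]byte // hash of the previous entry
	msg     string
}

// hashEntry binds the counter, the previous hash, and the message together,
// so any alteration of an earlier entry changes every later hash.
func hashEntry(e entry) [32]byte {
	return sha256.Sum256([]byte(fmt.Sprintf("%d|%x|%s", e.counter, e.prev, e.msg)))
}

func main() {
	var prev [32]byte
	var counter uint64
	for _, msg := range []string{"boot", "bios verified", "key loaded"} {
		counter++ // the hardware counter can never go backwards
		e := entry{counter, prev, msg}
		prev = hashEntry(e)
		fmt.Printf("entry %d: %s\n", e.counter, e.msg)
	}
}
```

An auditor who knows the latest counter value and chain head can detect both rollback (counter gap) and rewriting (hash mismatch).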

Sits between ROM and RAM, providing signature validation of the first 8 KB of the BIOS on installation and at boot.

Production

Produced entirely within Google, with the design and process chosen to ensure provenance. They have used other vendors’ security coprocessors in the past, but want to understand the whole chip, end to end.

The Google folks were unaware of any other cloud that uses TPMs, etc., to verify every piece of hardware and software running on it.

GCP Ping times, updated

The new us-west region has been added, and this run covers every zone in every region. It’ll be interesting to add Japan later this year. (Numbers in ms.)

[Screenshot: inter-zone ping times in ms, 2016-07-24]

The methodology is to spin up micro instances in each zone and ping the single instance in each zone from all the other zones. A fairly short Go binary makes this much easier, but it could be better: the IPs of the spun-up instances are hardcoded, and I have to SSH into each instance to run the binary, collate the responses, etc. This could all be automated through the APIs.


Google network

Update: both the Council Bluffs and, soon, The Dalles locations have multiple footprints. I’ve added the *much* larger IA location (partially up but under further expansion), and an under-construction icon for The Dalles footprint slightly north of the primary site.

Update: Found precise location of Chile DC.

Will try to keep it up to date as the new regions planned for this year (Oregon and Japan), and ten others next year, come online.

This grew out of a small experiment to test the latency between the different GCP regions, one day’s result below. Working on a script to expand to an all-zone version.

Edit: New region US-West, will have to try that one too!

[Image: inter-region ping latency results]

Why Google Stores Billions of Lines of Code in a Single Repository | July 2016 | Communications of the ACM

A really cool overview of the tools they use to keep 2 billion lines of code up to date at a ridiculous churn rate – all made possible by usable tooling and autonomous systems.

Source: Why Google Stores Billions of Lines of Code in a Single Repository | July 2016 | Communications of the ACM
