SREcon17 Asia/Australia: SRE Your gRPC—Building Reliable Distributed Systems Illustrated with gRPC

Grainne Sheerin and Gabe Krabbe, Google

Distributed systems have sharp edges, and we have a wealth of experience cutting ourselves on them. We want to share our experience with SREs elsewhere, so they can skip making the same mistakes and join us making exciting new ones instead!

We will share practical suggestions from 14 years of failing gracefully:

– In a distributed service, every component is a frontend to another one down the stack. How can it deal with backend failures so that the service as a whole does not go down?
– In a distributed service, every component is a backend for another one up the stack. How can it be scaled and managed, avoiding overload and under-use?
– In a distributed service, latency is often the biggest uncertainty. How can it be kept predictable?
– In a distributed service, it is hard to attribute availability, processing cost, and latency to individual components. When things (inevitably) go wrong, which components are to blame? When they work, where are the biggest opportunities for improvement?

We will cover best and worst practices, using specific gRPC examples for illustration.
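
To make the flavor of those examples concrete, here is a minimal sketch, not taken from the talk, of a gRPC client in Python that puts a deadline on every call and retries only transient backend failures. The echo_pb2/echo_pb2_grpc modules and the Echo method are hypothetical stand-ins for whatever stubs your own service definition generates.

    import time
    import grpc
    # echo_pb2 and echo_pb2_grpc are hypothetical generated modules; substitute
    # the stubs produced from your own .proto service definition.
    import echo_pb2
    import echo_pb2_grpc

    RETRYABLE = {grpc.StatusCode.UNAVAILABLE, grpc.StatusCode.DEADLINE_EXCEEDED}

    def call_with_deadline(stub, request, timeout_s=0.5, attempts=3):
        # A per-call deadline keeps latency predictable: one slow backend can
        # never stall the caller longer than timeout_s per attempt.
        for attempt in range(attempts):
            try:
                return stub.Echo(request, timeout=timeout_s)
            except grpc.RpcError as err:
                if err.code() not in RETRYABLE or attempt == attempts - 1:
                    raise  # non-retryable, or retry budget spent: fail upstream
                time.sleep(0.1 * 2 ** attempt)  # back off to avoid retry storms

    channel = grpc.insecure_channel("localhost:50051")
    stub = echo_pb2_grpc.EchoStub(channel)
    reply = call_with_deadline(stub, echo_pb2.EchoRequest(message="ping"))

Capping the retry budget and backing off exponentially is what keeps a retrying frontend from amplifying an already-overloaded backend's problems, which is exactly the overload concern in the list above.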

Sign up to find out more about SREcon at https://srecon.usenix.org

via YouTube https://youtu.be/eoy9z0UlaII

Darren Bilby: A Decade of Lessons in Incident Response

A 10-year veteran at Google, Bilby was the tech lead for Google’s Global Incident Response Team for six years, managed Google’s European detection team in Zürich for two years, and has also worked as a software engineer building out Google’s security tools. He was also the founder and a core developer of the open source GRR Incident Response project.

In his lecture, Bilby discussed the key lessons he learned in incident response at Google over the past 10 years, particularly those learned the hard way, and what other security practitioners can take from them.

Read the full summary at https://www.first.org/blog/20170613-DarrenBilby_keynote

via YouTube https://youtu.be/6qssVEHrpWo

How to Speed up a Python Program 114,000 times.

Optimizations are one thing — making a serious data collection program run 114,000 times faster is another thing entirely.

Leaning on 30+ years of programming experience, David Schachter goes over all the optimizations he made to his (secret) company’s data-collecting program to get such massive performance gains. In doing so, he might be able to teach you a thing or two about optimizing a Python program.
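
The talk’s code is not public, but the first move in any such effort is the same: profile before optimizing, then attack the measured hot spot. A minimal sketch, with invented toy data, showing cProfile plus one classic fix, replacing a list membership test with a set:

    import cProfile
    import random

    data = [random.randrange(100_000) for _ in range(2_000)]
    wanted_list = list(range(0, 100_000, 7))
    wanted_set = set(wanted_list)  # O(1) membership instead of O(n)

    def slow_count():
        # Scanning a list inside a loop: a classic hidden O(n*m) hot spot.
        return sum(1 for x in data if x in wanted_list)

    def fast_count():
        # Same answer; each membership test is now constant time.
        return sum(1 for x in data if x in wanted_set)

    cProfile.run("slow_count()")  # the profile shows where the time really goes
    cProfile.run("fast_count()")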

Want to learn more about Python? Check out more Marakana videos here:

https://marakana.com/s/tags/python

via YouTube https://youtu.be/e08kOj2kISU

LISA16: Zero Trust Networks: Building Systems in Untrusted Networks

Speaker: Evan Gilman, PagerDuty, Inc.

Abstract: Let’s face it—the perimeter-based architecture has failed us. Today’s attack vectors can easily defeat expensive stateful firewalls and evade intrusion detection systems. Perhaps even worse, perimeters trick people into believing that the network behind them is somehow “safe,” despite the fact that chances are overwhelmingly high that at least one device on that network is already compromised.

It is time to consider an alternative approach. Zero Trust is a new security model, one which considers all parts of the network to be equally untrusted. Taking this stance dramatically changes the way we implement security systems. For instance, how useful is a perimeter firewall if the networks on either side are equally untrusted? What is your VPN protecting if the network you’re dialing into is untrusted? The Zero Trust architecture is very different indeed.

In this talk, we’ll go over the Zero Trust model itself, why it is so important, what a Zero Trust network looks like, and what components are required in order to actually meet the challenge.
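
The talk stays at the architectural level, but the canonical Zero Trust building block is mutual TLS between every pair of endpoints: each side proves its identity regardless of which network segment it sits on. A minimal client-side sketch in Python; the certificate file names and the service.internal host are illustrative placeholders:

    import http.client
    import ssl

    # The client verifies the server against a private CA *and* presents its
    # own certificate, so trust derives from identity, not network location.
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile="ca.pem")
    ctx.load_cert_chain(certfile="client.pem", keyfile="client.key")

    conn = http.client.HTTPSConnection("service.internal", 8443, context=ctx)
    conn.request("GET", "/healthz")
    print(conn.getresponse().status)  # a Zero Trust server rejects cert-less clients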

Full Program: https://www.usenix.org/conference/lisa16/conference-program

via YouTube https://youtu.be/TI9Y1LWxjt4

Keynote: Cloud Native Networking - Amin Vahdat, Fellow & Technical Lead for Networking, Google

About Amin Vahdat
Google Fellow & Technical Lead for Networking
Amin Vahdat is a Google Fellow and Technical Lead for networking at Google. He has contributed to Google’s data center, wide area, edge/CDN, and cloud networking infrastructure, with a particular focus on driving vertical integration across large-scale compute, networking, and storage. Vahdat has published more than 150 papers in computer systems, with fundamental contributions to cloud computing, data consistency, energy-efficient computing, data center architecture, and optical networking. In the past, he served as the SAIC Professor of Computer Science and Engineering at UC San Diego and the Director of UCSD’s Center for Networked Systems. Vahdat received his PhD from UC Berkeley in Computer Science, is an ACM Fellow, and is a past recipient of the NSF CAREER award, the Alfred P. Sloan Fellowship, and the Duke University David and Janet Vaughn Teaching Award.

via YouTube https://youtu.be/1xBZ5DGZZmQ

USENIX Enigma 2017 — Adversarial Examples in Machine Learning

Nicolas Papernot, Google PhD Fellow at The Pennsylvania State University

Machine learning models, including deep neural networks, have been shown to be vulnerable to adversarial examples: subtly modified malicious inputs, often indistinguishable from the originals to humans, crafted to compromise the integrity of a model’s outputs. Adversarial examples thus enable adversaries to manipulate system behavior. Potential attacks include attempts to control the behavior of vehicles, have spam identified as legitimate content, or have malware identified as legitimate software.

In fact, the feasibility of misclassification attacks based on adversarial examples has been shown for image, text, and malware classifiers. Furthermore, adversarial examples that affect one model often affect another model, even if the two models are very different. This effectively enables attackers to target remotely hosted victim classifiers with very little adversarial knowledge.

This talk covers adversarial example crafting algorithms operating under varying threat models and application domains, as well as defenses proposed to mitigate such attacks. A practical tutorial will be given throughout the talk, allowing participants to familiarize themselves with adversarial example crafting.
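
As a taste of what such crafting algorithms look like, here is a minimal sketch of the fast gradient sign method (one well-known crafting algorithm, not necessarily the one demonstrated in the talk) against a toy logistic-regression model; the weights and inputs are invented for illustration:

    import numpy as np

    # Toy logistic-regression "victim" with invented weights.
    w = np.array([0.8, -1.2, 0.5])
    b = 0.1

    def predict(x):
        return 1.0 / (1.0 + np.exp(-(w @ x + b)))  # P(class = 1)

    def fgsm(x, y_true, eps=1.5):
        # Step each feature by eps in the direction that increases the loss.
        # For cross-entropy on a logistic model, dLoss/dx = (p - y) * w.
        grad = (predict(x) - y_true) * w
        return x + eps * np.sign(grad)

    x = np.array([1.0, -1.0, 1.0])
    print("clean:      ", predict(x))              # ~0.93, class 1
    print("adversarial:", predict(fgsm(x, 1.0)))   # ~0.24, flipped to class 0

The toy example needs a large eps because it has only three features; on high-dimensional inputs such as images, the per-feature steps accumulate, so a perturbation far too small for a human to notice can be enough to flip the prediction.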

Sign up to find out more about Enigma conferences: https://www.usenix.org/conference/enigma2017#signup

Watch all Enigma 2017 videos at: http://enigma.usenix.org/youtube

via YouTube https://youtu.be/hUukErt3-7w