
Martin Görner

From Google

Martin Gorner, Google Developer Relations. Martin is passionate about science, technology, computing, algorithms and everything related to them. After graduating as an engineer from Mines Paris-Tech, he began his career in the "computer architecture" group at ST Microelectronics. He then spent the next 11 years in the nascent field of electronic books, first with the start-up mobipocket.com, which later became the software part of Amazon's Kindle and its mobile versions. He joined Google in 2011.

Blog: https://plus.google.com/+MartinGorner

Web, Mobile & UX

Cloud endpoints, Polymer, material design: the Google stack, infinitely scalable, positively beautiful

Hands-on Labs

Google has been pushing the web forward for several years and designing cloud architectures for as long as it has existed. Now it all comes together. In this lab you will use material design elements to design, develop and deploy an end-to-end web application, front-end and back-end, ready to scale to millions of users.

You will learn to use the following technologies:
- Google Cloud Endpoints (Java) and Cloud Datastore (used here with a web front-end, but this part is equally applicable to Android and iOS development) - see the back-end sketch after this list
- Polymer and Web Components (for mobile and desktop)
- The Paper Elements for Polymer (material design)
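To give a flavour of the back-end part, here is a minimal sketch of a Cloud Endpoints API class backed by the App Engine Datastore. The "greetings" API, the Greeting bean and its message property are hypothetical placeholders, not the lab's actual starter code:

    import com.google.api.server.spi.config.Api;
    import com.google.api.server.spi.config.ApiMethod;
    import com.google.appengine.api.datastore.DatastoreService;
    import com.google.appengine.api.datastore.DatastoreServiceFactory;
    import com.google.appengine.api.datastore.Entity;

    // A hypothetical Endpoints API: the annotations are the whole contract,
    // Endpoints generates the REST interface and the client libraries.
    @Api(name = "greetings", version = "v1")
    public class GreetingsEndpoint {

      // Stores one Greeting in the App Engine Datastore.
      @ApiMethod(name = "greetings.insert", httpMethod = ApiMethod.HttpMethod.POST)
      public void insertGreeting(Greeting greeting) {
        DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();
        Entity entity = new Entity("Greeting");           // "Greeting" is the entity kind
        entity.setProperty("message", greeting.message);  // schemaless property
        datastore.put(entity);                            // persisted and indexed by the Datastore
      }

      // Plain bean used as the JSON request body.
      public static class Greeting {
        public String message;
      }
    }

Annotating a plain Java class this way is the whole contract: Endpoints exposes it as a REST API and generates client libraries, which is what makes the same back-end usable from web, Android and iOS clients.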

Mandatory Installs prior to lab:

+ JDK 7 or 8 (200MB)
+ Eclipse (4.4 - Luna, 160MB)
+ "Google Plugin for Eclipse" and "Google App Engine SDK", to install into Eclipse through "Help > Install New Software ..." from source "http://dl.google.com/eclipse/plugin/4.4" (157MB)
+ Bower (optional but recommended)
+ Lab starter code from Git: "git clone https://github.com/martin-gorner/endpoints-polymer-material-tutorial/" (90MB)

Big Data & Analytics

"No one at Google uses MapReduce anymore" - Cloud Dataflow explained for dummies

Conference

Warning: this is an algorithmics talk, and it also involves parallel processing.

The MapReduce paper, published by Google 10 years ago (2004!), sparked the parallel processing revolution and gave birth to countless open source and research projects. We have been busy since then, and the MapReduce model is now officially obsolete. The new data processing models we use are called Flume (for the processing pipeline definition) and MillWheel (for the real-time dataflow orchestration). We are releasing them as a public tool called Cloud Dataflow, which allows you to specify both batch and real-time data processing pipelines and have them deployed and maintained automatically - and yes, Dataflow can deploy lots of machines to handle Google-scale problems.
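To make the model concrete, here is a minimal word-count pipeline written in the style of the Cloud Dataflow Java SDK as announced; the gs:// paths are hypothetical placeholders. Note that there is no cluster management anywhere in the code - the service handles deployment and scaling:

    import com.google.cloud.dataflow.sdk.Pipeline;
    import com.google.cloud.dataflow.sdk.io.TextIO;
    import com.google.cloud.dataflow.sdk.options.PipelineOptionsFactory;
    import com.google.cloud.dataflow.sdk.transforms.Count;
    import com.google.cloud.dataflow.sdk.transforms.DoFn;
    import com.google.cloud.dataflow.sdk.transforms.ParDo;
    import com.google.cloud.dataflow.sdk.values.KV;

    public class MinimalWordCount {
      public static void main(String[] args) {
        // The pipeline is a deferred execution graph: nothing runs until run().
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        p.apply(TextIO.Read.from("gs://my-bucket/input-*.txt"))    // hypothetical input
         .apply(ParDo.of(new DoFn<String, String>() {              // element-wise "Map" step
            @Override
            public void processElement(ProcessContext c) {
              for (String word : c.element().split("[^a-zA-Z']+")) {
                if (!word.isEmpty()) { c.output(word); }
              }
            }
          }))
         .apply(Count.<String>perElement())                        // the "Reduce": group and count
         .apply(ParDo.of(new DoFn<KV<String, Long>, String>() {    // format the results
            @Override
            public void processElement(ProcessContext c) {
              c.output(c.element().getKey() + ": " + c.element().getValue());
            }
          }))
         .apply(TextIO.Write.to("gs://my-bucket/wordcounts"));     // hypothetical output

        p.run();  // the service optimizes the graph, then deploys and scales the workers
      }
    }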

What is the magic behind the scenes? What is the post-MapReduce dataflow model? What are the flow optimisation algorithms? Read the papers or come for a walk through the algorithms with me.
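As one illustration of what those optimisation algorithms do: producer-consumer fusion, described in the FlumeJava paper, collapses chained element-wise steps into a single worker operation so that intermediate collections are never materialised. A sketch in the same SDK style, again with hypothetical paths:

    import com.google.cloud.dataflow.sdk.Pipeline;
    import com.google.cloud.dataflow.sdk.io.TextIO;
    import com.google.cloud.dataflow.sdk.options.PipelineOptionsFactory;
    import com.google.cloud.dataflow.sdk.transforms.DoFn;
    import com.google.cloud.dataflow.sdk.transforms.ParDo;

    public class FusionExample {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        p.apply(TextIO.Read.from("gs://my-bucket/input-*.txt"))  // hypothetical input
         // Step 1: trim each line (the "producer" ParDo).
         .apply(ParDo.of(new DoFn<String, String>() {
            @Override
            public void processElement(ProcessContext c) {
              c.output(c.element().trim());
            }
          }))
         // Step 2: lowercase each line (the "consumer" ParDo).
         // The optimizer can fuse steps 1 and 2 into one worker operation,
         // so the intermediate collection between them is never written out.
         .apply(ParDo.of(new DoFn<String, String>() {
            @Override
            public void processElement(ProcessContext c) {
              c.output(c.element().toLowerCase());
            }
          }))
         .apply(TextIO.Write.to("gs://my-bucket/clean"));        // hypothetical output

        p.run();
      }
    }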