Welcome to our liveblog for Google IO 2019. We’re at the event for the keynote at 10am PDT / 1pm EDT / 6pm BST, and our Google IO liveblog is your source for real-time keynote updates from the annual developer conference, straight from Mountain View, California.
It’s the only place you need to be to catch everything that’s happening – the latest news and updates from the show, flashed from our fingers right as they happen.
We’re on the ground today (we picked up our badges yesterday), ready to deliver minute-by-minute updates about Android Q, the Pixel 3a, and possible updates to Nest and Google Home devices.
And, of course, we’ll also be the first to report on any other surprises. Sure, you can always check out the Google IO livestream video, but for people at work (supposedly working), this is where you’ll want to stay locked in for all the latest live updates.
[ul]
[li]How to watch Google IO 2019[/li][/ul]
Google IO liveblog: real-time updates
All times in Pacific Daylight Time
11:13am: On to Google Home with more talk about… you guessed it: Google Assistant. It’s the software that Google is pushing out to every device. It’s also talking about respecting your privacy in big, bold letters. It means it, everybody!
11:12am: No Android Q release date (although we’re fairly certain it’ll arrive in early August, as usual) and no official name (your guess is as good as ours).
11:10am: Android Q beta 3 is available on 21 devices from 12 OEMs. We’re uploading a photo of all of the third-party company logos right now. The list includes OnePlus and Nokia – definitely an improvement on last year’s seven beta participants.
[IMG alt="4WWWLaWcDPuBe4KGEC7cuU" width="666px" height="500px"]https://cdn.mos.cms.futurecdn.net/4WWWLaWcDPuBe4KGEC7cuU.jpg[/IMG]
11:08am: Google’s just announced Focus mode. It’s like an advanced Do Not Disturb mode, and it’s coming to both Android P and Q devices ‘this fall’ – so likely around August or September.
[IMG alt="Fgt8AM6DrDQzyEwcSf5hkJ" width="666px" height="500px"]https://cdn.mos.cms.futurecdn.net/Fgt8AM6DrDQzyEwcSf5hkJ.jpg[/IMG]
11:07am: Digital Wellbeing and privacy are what Google is talking about right now. You’ll have more control over your location: when you order pizza, you can let the app know where you are, but it won’t follow you around indefinitely.
11:02am: Dark Theme is coming to Android Q. It’s officially among the more than 50 features of Android Q, and it’ll light up fewer pixels, which saves battery on OLED screens. It can be accessed from the notification shade (shade has a whole new meaning) via the Dark Theme toggle or Battery Saver mode. You heard it here first on our Google IO 2019 liveblog.
[IMG alt="imJafdK7uxvAeuFYcbZxHk" width="666px" height="500px"]https://cdn.mos.cms.futurecdn.net/imJafdK7uxvAeuFYcbZxHk.jpg[/IMG]
[IMG alt="ubz6yVTpHvcSHt4X795yCV" width="690px" height="388px"]https://cdn.mos.cms.futurecdn.net/ubz6yVTpHvcSHt4X795yCV.jpg[/IMG]
11:01am: Here’s another Live Caption demo – this time the twist is that Google’s speech recognition technology works offline (the demo was done in airplane mode). On-device machine learning protects user privacy, too.
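There’s no public Live Caption API yet, but if you want a feel for on-device speech recognition today, Android’s RecognizerIntent already has an offline preference flag (EXTRA_PREFER_OFFLINE, available since API 23). A rough sketch – everything beyond that flag is illustrative:

[CODE]
import android.content.Intent
import android.speech.RecognizerIntent

// Illustrative only: asks the platform recognizer to stay on-device,
// the same privacy-preserving idea behind the Live Caption demo.
fun buildOfflineRecognizerIntent(): Intent =
    Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH).apply {
        putExtra(
            RecognizerIntent.EXTRA_LANGUAGE_MODEL,
            RecognizerIntent.LANGUAGE_MODEL_FREE_FORM
        )
        // Added in API 23: prefer offline recognition, so no audio
        // needs to leave the phone.
        putExtra(RecognizerIntent.EXTRA_PREFER_OFFLINE, true)
    }
[/CODE]

You’d hand this intent to a SpeechRecognizer or startActivityForResult as usual; the only change is the offline hint.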
10:59am: We knew about this Android Q feature ahead of our Google IO liveblog: Continuity is coming to prepare for foldables. When you go from the folded state to the unfolded state, the app you’re using adjusts seamlessly. Samsung built this into its Android Pie phones, but it’ll come pre-packaged with Android Q.
[IMG alt="RjiMnDA7vxoT99zET7gRJ8" width="666px" height="500px"]https://cdn.mos.cms.futurecdn.net/RjiMnDA7vxoT99zET7gRJ8.jpg[/IMG]
10:57am: Android Q is next, and Google is announcing that there are over 2.5 billion active Android devices – from 180 device makers around the world. And now foldable phones are coming to Android OEMs.
[IMG alt="kgz633mYRZ8wWxW8kephid" width="666px" height="500px"]https://cdn.mos.cms.futurecdn.net/kgz633mYRZ8wWxW8kephid.jpg[/IMG]
10:55am: Live Transcribe, Live Caption, Live Relay and Project Euphonia are among the accessibility features Google is working on in 2019.
10:52am: Live Relay is a similar on-device feature: it lets you take a phone call by reading and typing text while the phone speaks on your behalf.
10:51am: “It’s such a simple feature, but has such a big impact on me,” said a hearing-impaired person in a Google video. The use cases for the Live Caption feature are groundbreaking for the deaf. And Google says it’s useful for those who can hear, too – anyone riding a noisy subway, for example.
10:49am: Live Caption is new – with one click you can turn on captions for a web video, a podcast, or even a video you record at home.
10:45am: Federated Learning is what Google uses to anonymize data and improve its global models. Here’s an example: take Gboard, the Google keyboard. When new words become popular – as people start typing BTS or YOLO – after thousands (or, in the case of BTS, millions) of people type them in, Gboard can learn from that data without touching individual users’ privacy. It sounds a lot like Apple’s ‘Differential Privacy’ approach.
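To make the Gboard example concrete, here’s a toy sketch of the federated-averaging step at the heart of the technique – not Google’s code, just the textbook aggregation, where only locally computed model weights (never raw keystrokes) reach the server:

[CODE]
// Toy federated averaging: the server never sees raw typing data,
// only each client's locally trained weight vector and sample count.
data class ClientUpdate(val weights: DoubleArray, val sampleCount: Int)

fun federatedAverage(updates: List<ClientUpdate>): DoubleArray {
    val totalSamples = updates.sumOf { it.sampleCount }.toDouble()
    val dim = updates.first().weights.size
    val global = DoubleArray(dim)
    for (update in updates) {
        val share = update.sampleCount / totalSamples // more data, more say
        for (i in 0 until dim) {
            global[i] += share * update.weights[i]
        }
    }
    return global // becomes the next round's global model
}
[/CODE]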
10:43am: Incognito Mode is coming to Google Maps (so it’ll be on Chrome, YouTube and Maps in 2019), along with one-tap access to your Google account from Chrome, Search, YouTube, and Maps.
10:40am: Google is talking about privacy and consumer control. Incognito mode in Chrome is 10 years old, and Google Takeout is a valuable service for exporting data, but Google says it knows its work isn’t done. It’s making all privacy settings easy to access from your profile: you can view and manage your recent activity and even change your privacy settings. This goes along with the auto-delete tracking controls (3 months or 18 months) it announced last week. It’s all rolling out in the coming weeks.
10:36am: Breaking news: now you can ask a Google Home speaker to turn off an alarm without saying “Hey Google” first. Just shout “Stop” and the annoying alarm will shut off. So helpful. It’s coming to English-speaking locales starting today, according to Google, so look for it soon.
[IMG alt="VNTCLwvZjpbUJ8QdfYMFik" width="690px" height="388px"]https://cdn.mos.cms.futurecdn.net/VNTCLwvZjpbUJ8QdfYMFik.jpg[/IMG]
10:35am: Driving Mode will be available this summer on any Android phone with Google Assistant – that means over one billion devices in over 80 countries. Google’s stated mission: to build the fastest, most personal way to get things done.
10:33am: Google Assistant is coming to Waze in the coming weeks, and there’s going to be a driving mode in the Assistant app, bringing personalized suggestions and shortcuts (like directions to dinner, top contacts, or a podcast you want to resume in the car). Phone calls and music appear in a low-profile way, so you can get things done without leaving the navigation screen.
10:32am: Google is expanding the Assistant’s abilities with “personal references.” It’ll understand “Hey Google, what’s the weather like at Mom’s house this weekend?” You’re always in control of what it knows about you, your family and your personal information. Google suggests this will be very helpful on the road with Android Auto.
10:30am: The next-generation Google Assistant is coming to newer Pixel phones later this year.
10:29am: Google showed off a more complex speech-to-text scenario in which the Assistant could send an email – and tell the difference between a command to complete an action (opening an email and sending it) and dictation to transcribe. Right now, Google Assistant can’t do that without you tapping the screen. The future is touchless.
10:27am: Now onto a more practical (and speedy) demo: sending messages, looking up a photo to send to a friend, and replying with that photo. It’s all done without the need to touch, and includes multi-tasking.
10:26am: So far, the speed of the Assistant seems to have improved a fair bit – though, really, this is just how we’ve always imagined it should work. It’s getting a lot of applause from the crowd (who probably don’t find today’s Google Assistant to be instant).
[IMG alt="HZ34Ws6o6fxd3VMe3ey7iP" width="690px" height="388px"]https://cdn.mos.cms.futurecdn.net/HZ34Ws6o6fxd3VMe3ey7iP.jpg[/IMG]
10:24am: Time for the next-generation Google Assistant. The bold vision: what if the Assistant were so fast that tapping to operate your phone almost seemed slow? Google wants it to be 10x faster.
10:22am: Importantly, Duplex on the web doesn’t require any work from businesses – it works automatically, according to Google CEO Sundar Pichai. It’s the company’s way of building a more helpful Assistant.
[IMG alt="QkRHWCfd8sruqgbTrgFPGh" width="690px" height="388px"]https://cdn.mos.cms.futurecdn.net/QkRHWCfd8sruqgbTrgFPGh.jpg[/IMG]
10:21am: Google is tackling painfully slow online reservations. Say “Book a National Car Rental for my trip” and Google will start filling out the details for you. It’s acting on your behalf, although you’re always in control of the flow: make, model, color, whether or not you want a car seat. It’s a lot less typing and selecting – you just modify where you need to.
10:20am: Sundar is back on stage talking about last year’s big surprise: Google Duplex, the AI voice assistant that calls restaurants for reservations. Google is now extending Duplex to tasks on the web.
10:20am: The updates for Google Lens will roll out later this month, so you should see them by the end of May.
10:17am: At Google IO 2019, Google Lens is getting new language translation functions. If you see a sign in a foreign language, it’ll translate it and even read the words aloud. The coolest part: it’ll highlight the foreign words as it pronounces them, so you can follow along (and maybe learn a bit).
[IMG alt="pbUjjDN2Q8GxVFdHa9sRG3" width="690px" height="388px"]https://cdn.mos.cms.futurecdn.net/pbUjjDN2Q8GxVFdHa9sRG3.jpg[/IMG]
10:16am: Wondering what a dish even looks like? Google Lens will be able to pop up a picture based on the words it sees on a menu. No word on how it gets this picture (whether it’s based on actual photos from the restaurant or a generic image of the dish).
10:14am: Another Google Lens example: point your phone camera at a menu in a restaurant, and it’ll point out the popular dishes and tell you what other people are saying about them on Google Maps (which seems to be where the reviews are sourced). Lens can help you pay for the meal, too, even going as far as calculating the tip.
10:13am: People have used Lens more than a billion times so far. Google says it’s indexing the physical world, much like Search indexes the billions of pages on the web.
10:11am: There’s a 3D shark on the Google IO 2019 stage. How? The AR shark – layers of teeth, with the audience visible behind it – came from a Google search: tap a 3D model in the results and, using the phone’s camera, place it in the real world, merging 3D objects with your surroundings.
http://cdn.mos.cms.futurecdn.net/eiw...cg9p8eCs3G.jpg
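Google didn’t say how Search pulls this off under the hood, but ARCore’s Sceneform library shows the general shape of placing a 3D asset in the camera view. A hedged sketch (ModelRenderable is a real Sceneform API; the shark asset name is made up):

[CODE]
import android.content.Context
import android.net.Uri
import com.google.ar.sceneform.rendering.ModelRenderable

// Illustrative: load a 3D model so an ARCore scene can anchor it
// in the camera view, like the shark demo on stage.
fun loadShark(context: Context, onReady: (ModelRenderable) -> Unit) {
    ModelRenderable.builder()
        .setSource(context, Uri.parse("shark.sfb")) // hypothetical asset
        .build()                                    // CompletableFuture
        .thenAccept(onReady)
        .exceptionally { error ->
            // Asset missing or device incompatible; fail gracefully.
            null
        }
}
[/CODE]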
10:09am: The camera is coming to Google Search. Search for ‘muscle flexion’, for example, and you can see 3D models in the search results and place them in your own apartment.
http://cdn.mos.cms.futurecdn.net/TtH...Zg5iTwhhW5.jpg
10:08am: Google wants to “surface the right information in the right context.” This may be code for “we’re doing our job to defeat fake news and hoaxes.”
10:05am: Sundar says: “Our goal is to build a more helpful Google for everyone. And when we say helpful, we mean giving you the tools to improve your knowledge, success, health and happiness.” So far, it’s been a recap of what Google does now. We’re still waiting to hear what it’s doing next.
10:03am: Google is using AR in its official Google IO 2019 app to help users better navigate this outdoor developer conference.
10:03am: Sundar just took the stage. “I would like to say welcome to the language that all of our users speak, but we want to keep this keynote under two hours.”
10:00am: Google is starting with a video: a retrospective of technology, from the original cell phone to the N64 controller and what looks to be Star Trek with Leonard Nimoy.
9:59am: Final minutes before go time, and my teammates are contemplating liveblogging my liveblogging. “He looks serious and stressed. Oh, wait, he just crinkled his forehead.” Yes, I’m in game mode.
9:57am: Predictions for the first thing that will be announced at Google IO 2019: lots of numbers about the success of the company and how developers are gravitating toward coding for Android.
9:50am: We’re 10 minutes from the Google IO keynote and here’s your team on the ground (left to right): Matt Swider, David Lumb and Nick Pino. We’re ready to ace this liveblog with real-time updates of everything announced.
http://cdn.mos.cms.futurecdn.net/TCU...4JJNyqKsE9.jpg
9:29am: I recall two human DJs at the last Google IO I was at (left). With Google’s AI DJ (right), I wonder if the human DJs are still getting gigs in this increasingly autonomous economy.
[IMG alt="Z53bqTvv7mxvQCzrMhNbSF" width="690px" height="222px"]https://cdn.mos.cms.futurecdn.net/Z53bqTvv7mxvQCzrMhNbSF.jpg[/IMG]
9:27am: Google’s AI DJ had to reset, because while the future is autonomous, it’s not yet perfect. It’s up and spinning again with what the kids would call sick beats.
9:22am: On the Google IO keynote stage, Google has an AI DJ playing music, accompanied by a human DJ (mainly there to put the record on the turntable). It’s automatically adjusting the tempo.
http://cdn.mos.cms.futurecdn.net/ius...t8f9Wdg43Z.jpg
9:00am: We’re in our seats for Google IO 2019, just one hour from the start of the developer conference. We’re ready to tackle the keynote.
http://cdn.mos.cms.futurecdn.net/JD9...HRfPezAyTN.jpg
8:30am: Here’s the new Google IO signage for 2019.
May 7, 1:09am: We’ve wrapped up our Google IO 2019 planning and are updating the liveblog one last time. Expect early-morning updates soon.
http://cdn.mos.cms.futurecdn.net/ej9...BEyqpkDQZd.jpg
Yesterday, 3pm PT: I have my Google IO badge. David Lumb and Nick Pino are also joining me to provide liveblog commentary. Only a few hours left before the keynote starts.
[IMG alt="ghjXFFx8FNj4AdSXpYAy9M" width="375px" height="500px"]https://cdn.mos.cms.futurecdn.net/ghjXFFx8FNj4AdSXpYAy9M.jpg[/IMG]
Image credit: TechRadar
10:30am PT: Why is it that the Google Maps time estimate for ridesharing apps is always way off (especially for Lyft)? Maybe this is something the company can fix at Google IO (in addition to quicker Android updates and Messaging).
http://cdn.mos.cms.futurecdn.net/x2C...vAKd7mYXNP.jpg
Image credit: TechRadar
10am PT: We’re 24 hours from Google IO and have successfully made it from New York City to Mountain View, California (via a flight to nearby San Francisco).
You can really see the difference in weather here in the Bay Area:
http://cdn.mos.cms.futurecdn.net/GEe...hBf2far7jP.jpg
Image credit: TechRadar
Refresh for more Google IO 2019 coverage as the event begins at 10am PT.