Vint Cerf, Google's chief Internet evangelist.
Making smart glasses isn't Google's primary goal. Glass is a vehicle for its software platform, turning contextual data for each user into digital assistants that are as beloved as a favorite pet and as essential as food. Google has a plan: eventually it wants to get into your brain. "When you think about something and don't really know much about it, you will automatically get information," Google CEO Larry Page said in an interview with Wired. "Eventually you'll have an implant, where if you think about a fact, it will just tell you the answer."
Google is a long way from inhabiting your brain, but the company is building wearable computers and investing heavily in artificial intelligence to move in that direction. Currently, Google Glass is expensive, geeky, and forces you to look up and to the right.
But it makes many of your smartphone's functions hands-free. And with Google Now, the company already has a good idea of what's on your mind if you use its products. The digital assistant can tell you about your next appointment and how long it will take to get there, but it can't yet book your family vacation. Still, Google has big plans for the two products, which are core to Page's long-term goal of automatically and instantly sending people information as they are thinking about something.
With his deep historical perspective, Vint Cerf, Google's chief Internet evangelist and one of the fathers of the Internet, has been exploring the possibilities of Glass. "You begin to see what can happen with a computer in the sensory environment you are in," he told the media in May. "It's the early days of this thing. By 2014, we should have a good idea of what people will want to do with Glass."
The thousands who are test-driving Glass indicate that beyond accessorizing and performing some of the functions of a smartphone, it's being adapted to augmented reality and to vertical applications such as financial trading, education, and navigation. For example, an orthopedic surgeon transmitted live video of a knee operation from Google Glass via a Google Hangout to a colleague and students.
Glass could read the text on signs, such as the name of a building, and automatically display additional information, or show related data while you watch TV. And with location awareness, Glass could lead you to a restaurant offering a dinner special at half off, and generate some revenue for the company.
Dr. Christopher Kaeding performs ACL surgery and transmits the progress to a Google Hangout.
In an interview with The Next Web, Cerf gave an example of how Glass might work between a blind German speaker and a deaf American Sign Language user.
The German speaker speaks in German. The Google Glass of the deaf user hears German, translates it into English and then shows it as captions in the Google Glass for the deaf person. The deaf person responds with sign language which the blind guy can't see but his Google Glass does, translates the American sign language into English and then translates the English into German and then speaks German using the bone conduction audio system of the Google Glass that the blind person is wearing. Now we can do all of that except for the sign language interpretation which is actually pretty hard. But it's not completely out of the question, with image processing and the like advancing as time goes on.
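Cerf's scenario amounts to a chain of recognition, translation, and output steps. The Python sketch below shows one way that chain might be wired together; every helper it calls (speech recognition, translation, sign-language recognition, speech synthesis) is a hypothetical placeholder for services that would have to exist, not a real Glass or Google API.

    # Hypothetical sketch of the two-way pipeline Cerf describes.
    # None of these helpers are real Glass or Google APIs; each is a
    # placeholder for a recognition, translation, or output service.

    def recognize_speech(audio, language):
        # speech -> text, e.g. German audio to German text
        raise NotImplementedError("placeholder speech recognizer")

    def translate(text, source, target):
        # text -> text between languages
        raise NotImplementedError("placeholder translation service")

    def recognize_sign_language(video):
        # video of signing -> English text; the step Cerf calls "pretty hard"
        raise NotImplementedError("placeholder sign-language recognizer")

    def synthesize_speech(text, language):
        # text -> audio for the bone-conduction speaker
        raise NotImplementedError("placeholder text-to-speech")

    def german_speech_to_captions(audio):
        # German speech becomes English captions on the deaf user's Glass.
        german_text = recognize_speech(audio, language="de")
        return translate(german_text, source="de", target="en")

    def signed_reply_to_german_audio(sign_video):
        # The signed reply becomes spoken German for the blind user's Glass.
        english_text = recognize_sign_language(sign_video)
        german_text = translate(english_text, source="en", target="de")
        return synthesize_speech(german_text, language="de")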
Google's high-profile promotion of Glass, including a spread in Vogue magazine, is paving the way for a transition from handheld to head-mounted device, which will eventually transform how humans interface with computers and the cloud.
Making smart glasses isn't Google's primary goal, however. Glass is a vehicle for its software platform, turning the contextual data that it captures for each user, via 100 billion search queries per month as well as from more than half a billion e-mail and map users, into supersmart digital assistants that are as beloved as a favorite pet and as essential as food.
Of course, people are free to extract their data from Google if they don't like the service, but it could be difficult to repurpose it in a useful way. Nor will it be easy to aggregate the contextual data from various platforms, or walled gardens, that people use. Facebook, Twitter, Foursquare, Yelp, and Google are not sharing your contextual data with each other.
As Cerf told The Next Web, "I think that it might be hampering in the sense that if Google has information about your calendar, or searches that you have done, or the e-mail that you have, or the documents that you have, you probably would not want Google to arbitrarily and without your consent share any of that data with anyone else. So to the extent that that means the various businesses that are trying to provide service to you can't aggregate everything that is known about you by everybody, that's probably in your best interest."