The connection between Google Street View and driverless cars

Written by Adrian Holovaty on June 5, 2012

I’m in the middle of taking the Coursera Machine Learning class -- which has been amazingly good -- and it recently covered how one could implement a machine-learning algorithm to power driverless cars.

Here’s how you might do it. You put a camera on the front of your car, and you set it to capture frequent images of the road ahead of you while you drive. At the same time, you capture all the data about your driving -- steering-wheel movement, acceleration and braking, speed -- plus stuff like weather conditions and number/distance of cars near you.

Putting that together, you’ve got some nice training data: a mapping between “situation the car is in” and “how the human driver responded.” For example, one thing the system might learn is: “when the car’s camera sees the road curve in this particular direction, then turn the steering wheel 15 degrees to the left and decelerate to 35 mph.”
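
To make that concrete, here's a minimal sketch of that training setup in Python. It's purely illustrative -- not Google's system, and not from the Coursera class -- and the features, numbers, and choice of scikit-learn's ridge regression are all stand-ins for whatever a real pipeline would use:

    import numpy as np
    from sklearn.linear_model import Ridge

    # Hypothetical training data: one row per captured camera frame,
    # with numeric features extracted from the image and the sensors.
    # Features: [road_curvature, speed_mph, num_nearby_cars, is_raining]
    X = np.array([
        [0.30, 45.0, 2, 0],   # gentle left curve, dry, light traffic
        [0.00, 60.0, 5, 0],   # straight highway, heavier traffic
        [-0.55, 30.0, 1, 1],  # sharp right curve in the rain
    ])

    # Targets: what the human driver actually did at that moment.
    # [steering_wheel_degrees (positive = left), target_speed_mph]
    y = np.array([
        [15.0, 35.0],
        [0.0, 60.0],
        [-25.0, 22.0],
    ])

    # Fit the "situation -> response" mapping. Ridge regression handles
    # multiple outputs and keeps the sketch simple; a real system would
    # need far more data and a far more expressive model.
    model = Ridge(alpha=1.0).fit(X, y)

    # Given a new situation, predict how a human driver would respond.
    steering, speed = model.predict([[0.28, 50.0, 3, 0]])[0]
    print(f"turn {steering:.1f} degrees, adjust speed to {speed:.1f} mph")

With enough logged miles, the learned mapping approximates "what would a human driver do here?"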

Of course, the world is a messy, complicated place, so if you want your self-driving car to be able to handle any road you throw at it, you need to train it with a lot of data. You need to drive it through as many potential situations as possible: gravel roads, narrow alleys, mountain switchbacks, traffic-heavy city expressways. Many, many times.

Which brings us to Google Street View.

For years now, Google has been sending Street View cars around the world, collecting rich data about streets and the things alongside them. At first, this resulted in Street View imagery. Then it was the underlying street geodata (i.e., the precise longitude/latitude paths of streets), enabling Google to ditch Tele Atlas and make its maps out of data it obtained from Street View vehicles.

Now, I’m realizing the biggest Street View data coup of all: those vehicles are gathering the ultimate training set for driverless cars.

I’m sure this is obvious to people who have followed it more closely, but the realization has really blown my mind. With the goal of photographing and mapping every street in the world, Street View cars must encounter every possible road situation, sort of by definition. The more situations the driverless car knows about, the better the training data, the better the machine-learning algorithms can perform, the more likely it is that the driverless car will work. Brilliant.

When I originally heard about Google’s driverless car experiment, I assumed one big reason it was being developed was to make Street View data collection more efficient. No need to pay humans to drive those cars around the world if you can automate it, right? But it’s likely the other way around -- Street View cars inform the driverless cars.

The next question: as Street View data improves the driverless cars, will the driverless cars get good (and legal) enough to eventually gather Street View data without humans, which will then lead to more driving experience, which will lead to smarter driverless cars, which will lead to more efficient Street View data gathering, in a virtuous cycle of driving and learning?

I’m curious about the direction of Google’s strategic thinking. Did it start with “Let’s take photos of every street in the world” and lead to “Hey, we might as well collect data for self-driving cars while we’re making all this effort”? Or was getting data for driverless cars a goal from the get-go, with Street View imagery being a convenient diversion from the real plan (and a way to justify the effort to shareholders skeptical of such expensive R&D)? If the latter, it’s especially brilliant.

More broadly, this inspires me to think more “meta” about data collection. If you have an opportunity to collect data, there’s the value of the data itself, but there’s also the value of the data about the collection of the data. What data does a journalist tangentially (and maybe even unknowingly) collect as she goes about her reporting business? It might be more valuable than the stuff she set out to collect in the first place.

UPDATE: I changed the title of this post from “wolf in sheep’s clothing,” as that was a lame metaphor that didn’t actually make sense.

Comments

Posted by Michal Migurski on June 5, 2012, at 6:07 a.m.:

It’s the kind of “omg” connection that Google’s been really good at in the past. Another example: setting up a free 411 phone service five years ago to gather phonemes for voice recognition. Lately I feel like they’re still making the connections but not wiring them up to the money-printing machine, in either direction. I’m curious how much speculative data collection journalists engage in, watching patterns to see what might ultimately become interesting before the story is clear.

Posted by bobx on June 5, 2012, at 6:07 a.m.:

While your premise seems fairly obvious, Google Street View does not (according to Google itself and the design of the self-driving car) contain nearly enough data to allow for self-driving cars using Google Maps alone.

I recommend you read up on the design of Google's autocar to see for yourself that Google has to collect much, much more data about any given route than Street View provides before sending the car from point A to point B.

Remember that the self-driving car, although functional now, requires not only collision avoidance and navigation but must also respond to changing traffic patterns and situations. You won't see one on your street for a while.

As for the wolf, well, I think that's kind of a dumb metaphor.

Posted by mic on June 5, 2012, at 6:16 a.m.:

@bobx
Data can be collected without being published to the world.
The data the OP is talking about is not necessarily visual.

Posted by Nick Thompson on June 5, 2012, at 6:21 a.m.:

Indeed, the "streetview" cars have been collecting LIDAR for a while now, e.g.
http://www.beussery.com/blog/index.php/2010/04/google-streetview-lidar/

Posted by mike on June 5, 2012, at 6:26 a.m.:

@bobx you didn't understand this article...

Posted by Jesse Dhillon on June 5, 2012, at 7:02 a.m.:

A classifier like the one proposed here would be the wrong way to do it. No classifier is perfect, and when this classifier fails, it could have fatal results -- think accelerating when it should be braking, or changing to the left lane when it should change to the right.

A safe and reliable approach would require computer vision, scanning (with LIDAR, per the previous comment), and other techniques to build up a discrete awareness of the car's surroundings. Driving itself is a rule-based activity -- don't run into other cars and stay on the road being the two main rules.
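
Here's a minimal sketch of that layered idea (my illustration, not Jesse's code): a learned policy proposes controls, and hard-coded rules override it. The model is the hypothetical one sketched earlier in the post, and every threshold and sensor name here is invented:

    # Hypothetical safety layer: the learned policy proposes controls,
    # but hard-coded rules get the final say. Conventions match the
    # earlier sketch: positive steering = left. Assume a positive
    # lane_offset_m means the car has drifted right of the lane center.
    def safe_controls(model, features, lidar_min_gap_m, lane_offset_m):
        """Run the learned policy, then apply rule-based overrides."""
        steering, speed = model.predict([features])[0]

        # Rule 1: don't run into other cars. If the nearest obstacle
        # reported by LIDAR is too close, brake regardless of the model.
        if lidar_min_gap_m < 10.0:
            speed = 0.0

        # Rule 2: stay on the road. If we've drifted too far from the
        # lane center, steer back toward it regardless of the model.
        if abs(lane_offset_m) > 1.5:
            steering = 30.0 if lane_offset_m > 0 else -30.0

        return steering, speed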

Posted by bobx on June 5, 2012, at 7:18 a.m.:

Sorry, you have no idea how the Google car works. It has to be driven several times along a particular route, collecting highly granular data using lasers and GPS data that is more accurate than what the Google Street View car collects. It's simply not safe otherwise.

Before assuming Street View is used for something, learn how the implementation of the self-driving car actually works.

A more advanced system might be able to use Street View data collection. The current one cannot safely drive without collecting a ton more data.

Posted by Noetic Jun on June 5, 2012, at 7:31 a.m.:

This is really just a tinfoil-hat kind of claim. A system that simply used Google Maps as something like a massive multi-dimensional lookup table to direct the car's movement would be far inferior, in terms of reliability and otherwise, to actual machine learning.

Posted by egdk on June 5, 2012, at 7:31 a.m.:

Street View data is collected at a snail's pace. If you want a driverless car that drives like that, you could use the data for machine learning. Otherwise, you will get driverless cars that drive like 90-year-old ladies.

Posted by Zoki on June 5, 2012, at 7:36 a.m.:

@bobx: And why do you think they aren't doing that with the Google Street View cars? Why do you assume they're collecting material only to create maps?

Posted by joe on June 5, 2012, at 7:50 a.m.:

Adrian, you are spot on, of course.

Who was behind the driverless car?

http://en.wikipedia.org/wiki/Sebastian_Thrun

Who was behind Street View?

http://en.wikipedia.org/wiki/Sebastian_Thrun

Posted by bobx on June 5, 2012, at 9:04 a.m.:

@Zoki, because I KNOW it's not, except for the base mapping route, which requires further refinement. It's called knowledge.

Posted by Kredyt on June 5, 2012, at 9:56 a.m.:

A very provocative article, though it is probably not true.

Posted by regington on June 5, 2012, at 9:57 a.m.:

joe nailed it... some other clueless comments here

Posted by 0x7bc.com on June 5, 2012, at 10:28 a.m.:

Yeah, it could work... until the first person gets killed by a driverless car.

Posted by Jaggs on June 5, 2012, at 10:33 a.m.:

@bobx: I wouldn't assume that just because the current system isn't advanced enough for something, Google isn't running parallel development on version X, which WILL be advanced enough.

There are too many points of connection between Street View and driverless cars (including the engineering personnel) for it to be random, I would suggest.

I genuinely see a real possibility of the two technologies converging in a driverless public-transport system that delivers people door to door using Street View image recognition (notice how Google Navigation always ends your journey with a Street View image?) along with driverless technology. A Robo-Bus, maybe?

It may take a few years to perfect the technology, but Google has proved that it has patience as well as deep pockets, and this kind of tech could be perfect for weaning us off our desperate addiction to cars as oil becomes more expensive.

Posted by Gareth on June 5, 2012, at 11:28 a.m.:

@bobx: I don't think anyone is saying that Google's driverless cars work *solely* from the data gathered by Street View.

The article is pointing out that if Google wanted to get hold of a large set of recorded data for the kinds of situations cars encounter in normal (and not-so-normal) conditions -- e.g., proximity data, behaviour patterns of nearby cars -- then maybe they could do it by sending out cars to drive around the world's major cities. Oh wait, looks like they already *did*, when they were collecting Street View data!

Now, if they were going to collect that kind of data, there'd actually be no need to stick a 360-degree camera on top of the car and build Street View, or even tell anyone about it at all. But maybe they just wanted a cover, so that people wouldn't ask too many questions about what all those Google cars were doing all over the world.

Posted by Emellaich on June 5, 2012, at 12:20 p.m.:

... And so many commenters don't understand the article. It is not saying that the Maps cars can be driverless cars. It's saying that the data the Maps cars collect can be a valuable training aid for building situational awareness in a driverless car.

Don't know if that's the plan or not, but it is an interesting thought.

Posted by cristian on June 5, 2012, at 1:01 p.m.:

Google does nothing without a purpose. Maybe this article lacks arguments or proof, but if Google is working on driverless cars, they ARE working on driverless cars. And whether or not they are using Street View, we'll find out... that's for sure!

Otherwise, I liked the article...

Posted by Paul on June 5, 2012, at 1:02 p.m.:

That's a fair point; it would be a good data set for training their models.

But it's all well and good until Google Maps gets things completely wrong. Here's an example of how it recently told me to go around a roundabout/rotary: https://twitter.com/paulsavage/status/209992805806911488/photo/1

Posted by joey on June 5, 2012, at 1:50 p.m.:

I took Thrun's Udacity course on programming driverless cars. Having data about the environment up front is a big help. This article is entirely plausible to me.

I suspect the StreetView cars are collecting LIDAR data, but even if they aren't, various groups have demonstrated 3D models built from collections of photos.

Posted by darren on June 5, 2012, at 2:07 p.m.:

This is a very insightful observation: machine learning plus Google Street View car data could be used to train self-driving cars. Whether or not this is how Google Street View started, it could evolve to become this.

All you short-sighted naysayers need to spend some time watching James Burke's "Connections" series.

Posted by WTPayne on June 5, 2012, at 2:28 p.m.:

I suspect there are other items of interest (all related to location-based services) that Google could glean from Street View data: the locations and names of small businesses can inform and enrich local search offerings; classifying areas into categories can inform and improve location-based targeting of ads, etc.

All of these are closer and more immediately relevant to Google's revenue streams than supplying training data for automatic cars.

Having said that, I really hope Google is able to successfully commercialize its automatic-car technology. Given the number of people who die on our roads every day, the potential for this technology to reduce that bloody toll by taking human hands off the steering wheel is tremendous.

Posted by jerel on June 5, 2012, at 2:32 p.m.:

This is a fascinating article to me as I worked in the automotive industry for a number of years before taking up programming. I remember explaining to my wife a few years ago how I saw our highways working in the future.

Something I don't see talked about much is vehicle networking, and I'm guessing it's going to be a major component. Imagine vehicles that request entrance to a freeway, and the traffic already on the freeway responds by adjusting its speed ever so slightly to make space for them to enter (humans currently brake too hard, change lanes unnecessarily, etc.). That feature by itself would increase the efficiency of our highways many times over. Of course, the networking data would always be subject to the input of the physical sensors, since networked data could be corrupt. And there's the problem of backwards compatibility with old cars on the road, which is why we need to start installing this in new cars as soon as possible.

Posted by Chris on June 5, 2012, at 2:40 p.m.:

Paul, I'm not sure if you're looking at it wrong or trolling, but if you're heading south (MA-28 S), then that would be correct.

Posted by Fred Penner on June 5, 2012, at 3:05 p.m.:

Have you ever wondered how Google can pinpoint your exact house location when you click the little "my location" button in Google Maps? WiFi is a great way to broadcast your location. I noticed that following the Google Street View release in Canada, the location accuracy for my house was precise. Kind of chilling, actually.

But on another note, if we didn't have Google Street View, we would miss out on these entertaining images: http://www.streetviewfunny.com. We would also miss out on the high-quality Google maps generated from Google Street View GPS tracks.

Posted by max kesin on June 5, 2012, at 3:11 p.m.:

Another thought I had about data collection: Google Glass, camera-equipped eyewear, sends tons of data back to the mother ship (for mapping, face recognition, etc.). Map the sh*t out of everything, inside and outside of buildings, forests, whatever.

Posted by Brian Cardarella on June 5, 2012, at 4:35 p.m.:

Great article, thank you for the detailed research.

Posted by Sam uekert on June 5, 2012, at 7:51 p.m.:

While the Street View cars might provide some info to the self-driving-car platform, the critical data comes from the pilot cars that travel the road decked out with LIDAR before the automated cars can travel that route. But imagine the virtuous cycle when data from the automated cars themselves is uploaded to the system: when each subscriber to the data (the driverless cars) also contributes updates, including information about exceptional events (sudden stops, driver intervention, etc.), the quality of the information ramps up quickly. With just a handful of automated vehicles, the network effect takes over and outstrips the utility of the pilot cars.

I've been avidly following advances in this field for more than a decade. After the DARPA Grand Challenge, I mused, "What the hell happened to that tech? You only have to invent it once." So I was thrilled when I found out that some of the challenge team members had joined Google for their project.

The social implications of this tech are staggering. Think of small driverless delivery vehicles bringing packages to your door, or being able to send an empty car to pick up your 13-year-old from soccer practice. And the elderly will no longer be abandoned to obscurity when their vision and reflexes flag. And why should I have a garage at my home and a garage spot downtown to store my car 21 hours a day? We spend an enormous share of our income on car ownership and maintenance, only to pay to store our cars the vast majority of the time. It makes much more sense to pay for fractional ownership or a car service, provided you don't have to waste a seat and pay a salary for the driver.

Posted by Ya Knygar on June 5, 2012, at 10:27 p.m.:

@Adrian Holovaty

Hello from Ukraine :)

The point of “data about the collection of the data,” when I think about it in the Augmented Reality kind of realm, inevitably leads me through questions of how much ongoing surveillance societies tolerate, the debates over complete vs. consumer-grade transparency, confidentiality vs. privacy, and the Web-related, everyday-life-data-generation legislation processes that courts are facing now more than ever.

The "value of" the raw Data from all the sources, on the market, comes down to the value of the collected then processed Data, and here appeared "vicious" data-flows as the corporations profiles became "everything" doing "everything" with the users data, covering "everything" with one-click Agreements which users tend to barely read the title of. I'll elaborate.

The free web courses and FLOSS software that Stanford and others introduce do provide a degree of anti-monopolization and decentralization of current high tech. While the corporations have built effective proprietary processing clouds, the variety and might of personal processing abilities has increased; mesh networking and distributed computation were developed to let individuals share processing power, enabling communities of the https://en.wikipedia.org/wiki/Big_and_Ugly_Rendering_Project kind. As WebCL-type tech emerges, Web-powered crowdsourcing is being introduced along with the webification of money systems and flows. Together this re-introduces the case for Internet, and therefore Web, “neutrality.”
https://en.wikipedia.org/wiki/Internet_neutrality (potential tl;dr)

The value of the corporate systems surfaces in copyrights and patents. Crowdsourcing increasingly produces copyleft, open source, and commons.

In the middle of it all, the data flows; by data I mean the resource itself: “natural”-language information, machine-readable information, sensor-produced data.

Here are a few data-flow examples from Google:

“The Company's position is straightforward. The Wiretap Act provides, "It shall not be unlawful under this chapter or chapter 121 of this title for any person ... to intercept or access an electronic communication made through an electronic communication system that is configured so that such electronic communication is readily accessible to the general public."”
From the “Federal Communications Commission NOTICE OF APPARENT LIABILITY FOR FORFEITURE (NAL/Acct. No.: 201232080020)”: http://www.wired.com/images_blogs/threatlevel/2012/05/unredactedfccgoog.pdf

1. Websites: AFAIK the company crawls and copies the content of websites to its own servers, processes that content regardless of the copyright stated on the material, then shows parts of it on the so-called SERP along with its own advertisements.

My rights: you have to explicitly exclude the so-called Googlebot in robots.txt to avoid the crawling. In general, other results relating to the website (like those gathered from some surfer's referring URL) would not be excluded; the answer is “before we remove a page from our search results, the owner has to change it or take it down.” There are exceptions for https://www.google.com/transparencyreport/governmentrequests/
There were attempts to push copyright protection further onto crawled data and its processing, but who knows where they stand now; the pattern is there.

2. Wi-Fi spots: the company searches for and copies networking IDs (at least the SSID and MAC address).
“The inclusion of your WiFi access point in the Google Location Server enables applications like Google Maps to work better and more accurately.”
I don't understand the “No. We do not use publicly broadcast WiFi information to identify the owner of an access point,” since I think Google uses the user's Google ID to provide Latitude-like services, and the current privacy policy seems to merge IDs across services.

My rights: “You can opt out by changing the SSID of your WiFi access point (your wireless network name) so that it ends with “_nomap””
Although Google says “A MAC address tells you nothing about the owner or user of the equipment,” the MAC address “is a unique identifier assigned to network interfaces,” and as such it could be viewed like an IMEI on phone devices or similar identification schemes.
http://support.google.com/maps/bin/answer.py?hl=en&answer=1725632

3. Street View/Maps, etc.: the company searches for and copies the... surrounding space itself, and enables increasingly high-precision location services on it, with A-GPS (with the help of the previous step), IPS (indoors), “smart sensors,” etc.
“we collect 3D geometry data with low power lasers (similar to those used in retail scanners) which help us improve our maps. NavTeq also collects this information in partnership with Bing. As does TeleAtlas.”
http://googlepolicyeurope.blogspot.com/2010/04/data-collected-by-google-cars.html

“Several years ago, Google cofounder Larry Page drove around the San Francisco Bay Area and recorded several hours of video footage using a camcorder pointed at building facades. His motivation: Google’s mission is to organize the world’s information and make it universally accessible and useful, and this type of street-level imagery contains a huge amount of information. Larry’s idea was to help bootstrap R&D by making this kind of imagery truly useful at a large scale. His passion and personal involvement led to a research collaboration with Stanford University called CityBlock (http://graphics.stanford.edu/projects/cityblock) that soon thereafter became Google Street View (www.google.com/streetview). The project illustrates two principles that are central to the way Google operates. One is the hands-on, scrappy approach to starting new projects: instead of debating at length about the best way to do something, engineers and executives alike prefer to get going immediately and then iterate and refine the approach. The other principle is that Google primarily seeks to solve really large problems. Capturing, processing, and serving street-level imagery at a global scale definitely qualifies.”
http://research.google.com/pubs/pub36899.html
As you may see, https://en.wikipedia.org/wiki/Google_Street_View_privacy_concerns doesn't include any issues about the rather high-precision, relatively long-range (compared to Kinect :) laser scanning of my personal ecosystem.

My rights: oh well... see https://en.wikipedia.org/wiki/Google_Street_View_privacy_concerns, and that is without the lasers.

(tl;dr now or it would be too late)

I need to mention that I am biased: I think the latest incarnation of the Street View camera rig could be something like http://www3.elphel.com/eyesis-4pi plus LIDAR from the likes of SICK AG (which alone could be no less than 20 grand for a pair; that company was mentioned in another post) plus some RADAR, and all of that makes me jealous... I think it costs more than my car itself, and they don't even pay me for these “pictures.”

So: the companies and corporations like Google or Microsoft (Kinect) as the giant examples, the military, institutions, and individuals like me scan the surrounding space and people with all the possible “retail” cameras (light scanners, whatever), which enables the autonomous cars, robots, whatever, and the AR/singularity/whatever is coming?
For sure.

But what happens with that data?
I am afraid that is the worst part of this story.
We (I mean you, me, and the IT being reading this blog) have led the WWW into the Web of Data, and now it is being led into the Web of Things, the so-called “smart cities.”

At the very least, even without adding the other cool sensors, it could be much like Minority Report, in the sense that the people in the water are the expensive neuro-processors, a kind of processing grid, decentralized or not, but with the lasers built into everything.

What's bad about it?
In that movie it is obvious (Spielberg owns me).

“If I look at enough of your messaging and your location, and use Artificial Intelligence, we can predict where you are going to go...”
“...show us 14 photos of yourself and we can identify who you are. You think you don't have 14 photos of yourself on the internet? You've got Facebook photos! People will find it's very useful to have devices that remember what you want to do, because you forgot...But society isn't ready for questions that will be raised as result of user-generated content..."
“...The only way to manage this is true transparency and no anonymity...”
“...In a world of asynchronous threats, it is too dangerous for there not to be some way to identify you.”
and finally, the worst one on my list: the data comes down to
“Governments will demand it.” ...And usually, as with IT systems, such demands are satisfied even if there is no legislation or consensus for them, you know...
Eric Emerson Schmidt
http://techonomy.typepad.com/blog/2010/08/google-privacy-and-the-new-explosion-of-data.html
Forbes Techonomy.
I am learning English from citations; there are tons of such intimate quotes from every big corporation, for all kinds of cases.

For me, the problems are:
* I can't see when the line was crossed such that some company may film my house, my car, and eventually my life in some 4D, without me signing one of those clever 100-page agreements.
The statement that my face and license plate are blurred doesn't work for me; I want my whole body mesh, my whole ninja-batmobile-styled car's mesh, and my whole house and neighborhood deleted, along with every place my friends and I might ever have gone for a walk.

* I can see that many otherwise principled people already regard this kind of legal Rubicon as crossed, and that looks depressing. Anyway, there are many countries, and we will see which ones endorse such technology evolution and which remain “purists”; the question is for how long, given the current tendency toward globalization.

* I think such valuable data, already collected, should be in the commons, since I often see the crawling, wardriving, or laser scanning as a clear intrusion on private property and privacy. I also see any other data contribution, even one required by law, as an intrusion; sometimes, however, it is justified for me by open-government data initiatives that then share that data with the general public, so I am not alone but part of some funny system.

* If somebody “intercepts” my house with a laser, for free, the result should at least be Free as in FSF: “readily accessible to the general public.” I mean the raw 4D point clouds, not that Google Earth experience near my house, thanks.

* That was largely the single-corporation side; I also want an easy way to deal with the annoying neighbor who would often accidentally scan me into some other corporation's silos.

I am dreaming, of course. I just want my digitized ID under my full control, with a trusted possibility of revoking my AR choices nonetheless.

So, here is what I am trying to do to help this situation:

1. I am trying to ensure the Web way of AR (here I simply think of AR as “technology connected to the natural senses by interfaces”), so that the sensing standards end up as open standards in the generous Web community, not with some gruesome proprietary separatists. I am seeking help at w3.org/community/ar.

2. I'm looking for a modern, openly standardized and openly implemented (so that one can comment on it and change it) ID system that can deal with data of such unprecedented sensitivity. So far, no luck, but I keep researching.

3. I am watching the work of TheFNF.org (yes, I cannot even trust the current ISPs with those data flows), FreedomBoxFoundation.org, and other folks, hoping to jump in with some clever distributed storage and processing solution for one's own AR data processing, ID storage, and all. Your own cloud, yes.

4. (I think Google will put up its AR cloud in a few years or less.) I am looking for someone who may want to build a non-commercial, Web-scale referendum platform so that the people themselves can ease the forthcoming legislation processes. I think there are some great lawyers (GIS-, data-capture-, and Web-related) and attorneys, in the USA at least, who could think prospectively and make for a brighter future. There are better laws for data handling in the EU in some cases now, but I think your (USA) precedent-based legal system and Bill of Rights still show some promise in some cases.

Suggestions are very welcome!

PS: I don't use email or other contact methods at the moment; please share your opinion here or on the W3C AR CG public list with some meaningful [tag], if you don't mind.

PPS: someone could copy this message into the YCombinator article comments, if they don't mind, thanks.

Posted by Willy Wonka on June 5, 2012, at 10:31 p.m.:

We need to somehow get a bunch of Finnish rally drivers inputting data into the database. Then the Google robocars will dive into corners like, well, Finnish rally drivers.

Posted by Ya Knygar on June 5, 2012, at 10:32 p.m.:

@joe: also, he is one of the Google Project Glass leaders, AFAIK:
https://en.wikipedia.org/wiki/Sebastian_Thrun

Posted by Someone on June 5, 2012, at 11:39 p.m.:

Interesting, but I think you are misguided. The Street View images would not be helpful at all in real time for automated cars: raster data is too large and too hard to process to be useful. Exact coordinates may be useful, but they are much more cheaply obtained from satellite. The real reason for Street View is the scanning of storefronts: to identify, classify, and map commercial locations. Google has even provided plaques that a store can put on the front of its building to make sure it gets scanned and identified.

Posted by Someone on June 5, 2012, at 11:48 p.m.:

Also... Google Maps has recently begun to provide maps of the insides of buildings, such as malls. This is to make sure you can make your purchase at the store. Google, of course, will get a click charge, since it provided the search result end to end and then navigated you there. The store will pay Google for the sale.

Posted by Ya Knygar on June 6, 2012, at 9:28 a.m.:

@Someone
You are right that real-time processing of the raster imagery could be too expensive on modern stand-alone hardware, but consider the official statements.

Google Street View vehicles and Google driverless cars are using 3D Flash LIDAR:

"The term 3D Flash LIDAR refers to the 3D point cloud and intensity data that is captured by a 3D Flash LIDAR imaging system. 3D Flash LIDAR enables real-time 3D imaging, capturing 3D depth and intensity data characterized by a lack of platform-motion distortion or scanning artifacts. When used on a moving vehicle or platform, blur-free imaging is expected. This is the result of using single laser pulses to illuminate the field of view (FOV), creating an entire frame. Because the integration time is fast (e.g. at 100 meters range, capture requires 660 nanoseconds), the ability to produce real-time 3D video streams consisting of absolute range and co-registered intensity (albedo) for use in autonomous applications is a natural fit."

The driverless cars may simply use scanners with better precision, along with the radar on the front, which is effectively the same as LIDAR in other frequency bands.

People have driven with a Kinect on the roof (basically a laser system that tracks light-beam distortion, as an alternative to the relatively expensive time-of-flight detection of LIDAR and RADAR), connected to Android GPS, achieving reasonable point-cloud precision (around 7 cm, I think) for robots as big as these driverless (or autonomous) cars, at least for test usage. For safer real-time/prime-time use they would need a longer-range real-time point cloud than Kinect's IR offers, of course, just like those $20,000+ units.

Furthermore, further processing of the high-quality raster imagery, say 64 MP, where "When driving at 50 mph (22.3 m/s) and recording at 5 FPS a full panorama is captured every 4.46 meters," could well improve the accuracy of the point cloud, leading to under-1 cm precision as a norm. (I think that with Google's computing power this is not a problem: they already do the processing for license-plate and face blurring, and look at the "Tour the World: building a web-scale landmark recognition engine" paper, where they "utilized" 21.4 million images to build a landmark visual model during the research. Microsoft also has a PaaS for such tasks, so I think the commercial case for such services could be justified; it is even a computable task for a modern PC.) It would also assist the free scanning from your own car into the Google silos, and for the ads, of course.

-
Regarding indoors: http://bit.ly/indoornavigation has a list of the competitors and tech. I should mention that to show Google Ads through their Glasses or on handheld devices, they would need not only IPS-precision geolocation but also laser scanning from the eyewear (which they have patented in the USA, along with the car, and recently along with rain-buy-umbrella "sensoring," with a disappointingly, uncompetitively "all-encompassing" description, in my view) or from the phones (which I think will take vendors time, because of the patents on these otherwise pretty cheap IR cameras, so I think the glasses will lead). Such scanning would work at, let's say, 0.5 cm precision and visually augment content exactly over the point of interest, in 3D, not the 2D that is currently used only for navigation.

Listening to what Sergey Mikhaylovich Brin and others from that team have said, I think the AR system is supposed to go into production around the summer of 2014, if the competitors don't rush things and if legislation for, or toleration of, IR scanning of the surrounding ecosystem gets passed... wait, it seems to have passed already?

"Google Maps with Street View lets you explore places .. You can .. go inside restaurants and small businesses .."

Posted by Ya Knygar on June 6, 2012, at 9:43 a.m.:

@Adrian Holovaty

“Wolf in sheep’s clothing,” in the sense of Google's aggressive space-scanning (environment-crawling :) politics -- see the German privacy cases, at least.

That was justified with an attitude like
"Street View, we drive exactly once." "So, you can just move, right?"
http://articles.marketwatch.com/2010-10-22/industries/30715298_1_google-street-view-alan-eustace-chief-executive-eric-schmidt

And then it goes around again for HD raster, and again with better lasers, and again... so the metaphor isn't lame and does make sense, if you ask me.

Posted by kioopi on June 9, 2012, at 8:49 p.m.:

There is a course on udacity.com taught by the aforementioned Sebastian Thrun, called "Programming a Robotic Car," in which he aims to explain the algorithms and heuristics driving the Google driverless cars:

http://www.udacity.com/overview/Course/cs373/CourseRev/apr2012

It seems to be very "bottom-up": there is no "grand picture" lecture at the beginning. Also, the examples are painfully unpythonic. Nevertheless it's interesting, so I thought I'd mention it here.

Comments have been turned off for this page.