Friday, October 24, 2014

'Citizenfour' Review: Quiet Moments in a Hong Kong Hotel Room as Edward Snowden, Journalists Fight to Save Democracy



Hong Kong has been ground zero this year in the fight for freedom, with students and Occupy leaders battling police for control of the streets in a desperate campaign to maintain the Chinese territory’s relative autonomy from erosion by the central Beijing government.

But the city hosted much quieter freedom fighters a year earlier, not on the streets but in the confines of an international hotel room. When journalist Glenn Greenwald and documentary filmmaker and journalist Laura Poitras responded to emails from an intelligence community member who identified himself at first only as “Citizenfour,” little did they know how deep the rabbit hole would go, or that unraveling history’s largest spying operation – a worldwide mass dragnet by the NSA that targets essentially everyone on earth – would mean traveling to Hong Kong and debriefing Snowden in a hotel room from which none of them would emerge unchanged.

Nor, it seems likely, will the audience of Citizenfour, which takes the viewer inside that hotel room via footage Poitras shot as the interviews with Snowden unfolded. Even though we now know, thanks to the courage of Snowden, and of Greenwald, Poitras and their colleagues, that the NSA’s warrantless programs hoover up everything from everyone, domestic and foreign – emails, telephone calls, metadata and apparently contents as well – it’s nonetheless gripping to watch the interplay between source and journalists as the latter learn the details and attempt to figure out what to do about it.

Snowden, young, handsome, sincere, is outwardly calm, willing to give up his comfortable life, family ties, and perhaps his freedom to slow the U.S.’s slide towards a nation of subjects rather than citizens. But his Adam’s apple betrays him. Take a close look and you can see he’s swallowing hard, even as he insists he’s made his peace with the choice to sacrifice his own comfort for the greater good. Later, when he rearranges his hair to muddle his identity as he prepares to slip underground, his curses at an errant cowlick underscore the anxiety he feels.

As a journalist, I’ve worked with anxious sources, tried to uncover truths, published secret documents and laid bare the backroom dealings of closed-door meetings. I’ve never worked on anything as momentous as Poitras and Greenwald’s brief, nor put my life (cf. war reporters) or freedom at risk, but Citizenfour nonetheless felt eerily familiar in its duet between source and reporter, as documents and thumbdrives change hands and broad contexts and telling details emerge.

As a former math geek and computer scientist, I’ve long been interested in the NSA, and knew that the nascent intelligence community during World War II had monitored every international telephone call and telegram from or to the U.S. As far back as the early 1980s – when I worked for the military-contractor think tank that created the Internet’s predecessor for the Defense Department – I figured that computerized monitoring of international communications was taking place (though I never worked on such projects), so the Snowden revelations at one level were unsurprising. But they chilled nonetheless last year, and they chill again when seen in the film.

Not all of the documentary takes place in the hotel room. Particularly unnerving is footage of earthmovers in the Utah mountains relentlessly clawing at the landscape as they construct a massive NSA data center to store the intercepts that Snowden says the agency is drowning in – so much so, he says, that the intelligence community has lost the ability to find the valuable information in its own hoard. Dogged in their pursuit of bedrock, the steamshovels’ unforgiving appetite for raw earth and their inexorable attack on the landscape become in Poitras’s sure hands visual signifiers of the greed for information – and thus power – that characterizes the security apparatus that has flowered since 9/11 under both Bush and Obama.

The tension between national security and freedom has seldom been tauter, nor the balance harder to strike in a world where every bit, byte and packet is subject not just to interception but also to weaponization. With his actions, Snowden ignited a debate that will long continue, and Citizenfour, the third film of a post-9/11 trilogy – the other films landed Poitras on a secret watch list from which she found no escape and drove her to relocate to Berlin –  will stand as an urgent and gripping record. See it this weekend. The NSA probably already has.



Check out “The New Zealand Hobbit Crisis,” available on Amazon in paperback, Kindle and audiobook. Visit my website (jhandel.com), follow me on Twitter or friend me on Facebook or LinkedIn. If you work in tech, take a look at my book How to Write LOIs and Term Sheets

Thursday, August 14, 2014

An Actor's Cautionary Tale: Cancer Diagnosis and a Drawn-Out Battle Over Residuals


Actors often complain about late residuals checks, although SAG-AFTRA has cut processing delays lately. But few stories compare to the battle waged by Alex Doe (a pseudonym), a voice actor who was diagnosed with cancer in 2012 and endured a 3-1/2-year residuals runaround from Warner Bros. and SAG-AFTRA that ultimately threatened Doe's health insurance.

(Residuals are royalties that are paid to actors, writers, directors and musicians when movies and TV shows are rerun or are released in other media such as DVD or the Internet. They're not small potatoes: residuals can amount to 40 percent of an actor's income, and total about $2 billion per year.)

How did this happen? Boomerang, an offshoot of Time Warner's Cartoon Network, failed to report thousands of reruns of the actor's show for several years, and the Warner Bros. residuals department resisted the union's contrary data. The actor filed a claim with SAG in February 2011, and the union and studio began arguing about the number of reruns and whether Doe had been overpaid on a DVD release.

Warners repeatedly promised more information -- surprisingly, collective bargaining agreements don't require that any particular data be provided -- and months often passed between emails and phone calls. In 2012, the head of the union's residuals claims department referred the matter to a legal department attorney.

But even with both departments involved, the delays continued. For more details, and the surprising resolution, see The Hollywood Reporter.


Thursday, August 7, 2014

Review: NudeAudio Super-M and aiia SSSSSpeaker on Kickstarter




Two new Bluetooth speakers offer a great reason to jet on over to Kickstarter.

The NudeAudio Super-M ($99, campaign ending on August 15) offers great sound in a package thin enough to slip in a back jeans pocket. During my recent visit to the company’s South of Market offices, a head-to-head comparison showed that the unit delivered deeper bass and higher volume than the Jambox Mini without loss of fidelity, and proved more rugged as well.


NudeAudio is the same company that offers the delightful Move S, Move M (which I reviewed last year) and Move L speakers and the Studio 5 Lightning Dock with Bluetooth (found in hotel rooms at The W). How does the company manage to deliver attractive products at aggressive prices?

As chief design officer Peter Riering-Czekalla answered that question, I found myself distracted by the highly tactile silicone-sleeved speakers in pleasing yet subdued colors arrayed around his workspace. But in essence, his philosophy, honed by years of experience at design powerhouse IDEO, is to focus on acoustics rather than unnecessary features, keep product construction simple and cost-effective, and refrain from spending millions on advertising that positions the speakers as costly status symbols.



NudeAudio’s home – San Francisco’s SoMa district – is an expected place to find innovation, but Kiev is not. Yet Kiev – yes, the one in Ukraine – is the origin of another interesting speaker on Kickstarter, the SSSSSpeaker ($29, campaign ends tomorrow) from aiia. This one looks even more nude than the NudeAudio offerings: it’s literally just a speaker mechanism and a silicone speaker cone. But there’s a twist: the cone is collapsible, like those camping cups you might have had as a kid. Bright, collapsible, portable and light, the speaker produces enough volume for a pup tent and weighs very little.

Disclosure: The companies provided product for this review.



Tuesday, July 1, 2014

How Do We Know Driverless Cars Are Safe? Google Says ‘Trust Us’




Driverless cars are on the road now – Google’s fleet has logged about 700,000 miles of autonomous driving – and the California DMV will be issuing regulations in a matter of weeks allowing self-driving cars to be sold to the public, possibly setting the regulatory pattern for the rest of the country (video). Google has predicted 2017 for first commercial availability, while Nissan and Mercedes say it will be 2020. The cars are highly complex systems whose sheer quantity of software surely exceeds the hundred million lines of source code in today’s non-autonomous vehicles. They weigh thousands of pounds and hurtle down public roads – robots with human payloads. But how will we know they’re safe?
Google’s answer at DMV workshops in March 2014 and May 2013 was, essentially, “trust us” (video, video, video). The company’s representative, Ron Medford, said Google would “self-certify” the car’s roadworthiness and then the DMV could take it for a test drive (video) akin to what a human driver undergoes today (video, video, video, video). “That the company … [is] ready to submit its vehicle for [DMV] testing is kind of the proof that it’s ready,” he said, and argued against any more detailed scrutiny than that (video, video).
This is the same Google that so betrayed customers’ trust that it’s under twenty years of outside privacy monitoring, imposed by a consent decree which it later allegedly violated (leading to a record multi-million dollar settlement). Yet the company’s answer is to supplement “trust us” with nothing more than a simple road test – even though computer experts agree that such testing is no way to verify safety-critical software.
A Chrysler representative, Steve Siko, also endorsed self-certification (video). “It’s not like any manufacturer is trying to skirt around some safety rules,” said another Chrysler rep, Ross Good (video). From VW, Barbara Wendling also sang the self-certification tune (video), and said she had “no worries that anybody in the industry is going to put an unsafe product into California or any other state” (video).
Really?
That comment ignores GM’s defective ignition switch that may have killed 309 people (the company says it’s 13) – almost 15 million vehicles recalled to-date amid criminal probes and possible cover-ups – and Toyota’s deadly accelerator pedals (89 fatalities, according to the National Highway Traffic Safety Administration), to name only two of the unsafe products automakers have in fact put on the market in recent years.
Nevada accepted Google’s “trust us” gambit (pdf, p. 13) but like that state’s slot machines, it’s a bad bet. Google’s approach is not good enough, because the tech company and automakers have already demonstrated they’re not trustworthy and because experts acknowledge that complex software can’t be adequately verified simply by testing it. That’s especially true of safety-critical hard real-time systems. “Hard” means that the system must respond in timely fashion to avoid devastating consequences, like an automobile crash.
Indeed, a mere “road test” is not how validation works in other safety-critical fields like medical device software or avionics hardware and software.
The Right Way to Validate Software
As the Food and Drug Administration says, “Typically, testing alone cannot fully verify that software is complete and correct. In addition to testing, other verification techniques and a structured and documented development process should be combined to ensure a comprehensive validation approach. Except for the simplest of programs, software cannot be exhaustively tested. Generally it is not feasible to test a software product with all possible inputs, nor is it possible to test all possible data processing paths that can occur during program execution.”
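The FDA’s point about exhaustive testing is easy to quantify with a back-of-the-envelope calculation (my numbers, not the FDA’s): even a trivial function taking just two 32-bit inputs has 2^64 possible input pairs, far too many to ever enumerate.

```python
# Rough illustration of why exhaustive testing is infeasible: counting
# how long it would take to try every pair of 32-bit inputs, assuming
# an optimistic one billion tests per second.
SECONDS_PER_YEAR = 365 * 24 * 3600

inputs = 2 ** 64              # every combination of two 32-bit values
rate = 1_000_000_000          # hypothetical tests per second

years = inputs / rate / SECONDS_PER_YEAR
print(f"{years:.0f} years")   # roughly 585 years
```

And that is for two integer inputs, with no real-time sensor streams at all; a driverless car’s input space is incomparably larger.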
And who should do the verifying? The FDA addresses this too: “Validation activities should be conducted using the basic quality assurance precept of ‘independence of review.’ Self-validation is extremely difficult. When possible, an independent evaluation is always better, especially for higher risk applications.”
Not surprisingly, the agency requires pre-market submission of very detailed flowcharts, design documents and the like (apparently including source code) for devices that present a risk of death or major injury.
Avionics is even more stringent. In the U.S., knowledgeable reviewers called Designated Engineering Representatives or DERs are certified and appointed by the Federal Aviation Administration. Some DERs (Consultant DERs) are independent contractors while others, although they work for the manufacturers (Company DERs), are part of a culture where they are deemed to owe their loyalty to the FAA rather than their employers; in fact, DERs are appointed to represent the FAA and are legally required to maintain objectivity (pdf, p. 21). Europe uses outside testing labs.
In both jurisdictions, detailed documents govern the certification of in-the-air avionics software and hardware (in particular, custom devices like FPGAs, PLDs and ASICs) as well as ground-based avionics software. Test plans, source code, design documents and more are required to be submitted, and the programming process itself has to meet certain standards. Every line of safety-critical software code has to be justified – traced back to its rationale – and has to be executed during testing.
The difficulty with the avionics approach is that it enormously increases costs and product development time. In the automotive sphere, that would mean people continuing to die from human-caused accidents – which account for about 95% of the 32,000 annual fatal accidents today (pdf, see pp. 24-25) – while we waited for (hopefully-safer) driverless cars to emerge from testing. It’s a real issue, but there has to be some compromise between “trust us” and hyper-punctilious verification.
You don’t have to be a high-tech expert to understand the concept. Self-certification is not even the way it works when you want to add a roof deck to your house. At least in urban areas, you have to show your plans to the city or county building department first (and the plans are usually drawn up by an architect or engineer who passed a state licensing exam; not so with software). There’s a physical inspection after the deck is built.
Google wants to skip the first step and jump to the second, but why should we regulate robocars less rigorously than roof decks?
Certification of driverless car software and hardware by an independent testing company, coupled with DMV review and multiple test drives, is the best way to give the public confidence that these cars are as safe as can be. I’m well aware – as a former programmer and as a lawyer who has represented software vendors – that source code is sensitive. But so is public safety. Ultimately, meaningful certification will be good for the driverless car industry as well as for the public.
At the DMV sessions, a Bosch representative named Soeren Kammel endorsed third-party certification and re-certification when a product is updated (video, video). A consumer advocate, John Simpson of Consumer Watchdog, focused on privacy issues (video), which are also critical, but he didn’t explore certification.
Google declined to comment for this article and the DMV told me the issue would be “addressed in the regulations when we release them … later this summer.” They’re writing the regulations now: you can weigh in by clicking here to email the DMV (and cc me).
Bugs are Inevitable
Bear in mind, there will be bugs in the software and flaws in the hardware, as even Google admits (“should it fail … we want to reduce the impact”; “there will be some failures,” video). That’s the nature of electronic hardware and especially of software: we’ve been designing and building physical structures for thousands of years – bridges, for instance, for 3300 years – whereas even the term “software engineering” is less than 50 years old and the actual discipline of rigorous programming, in any meaningful sense, is younger still.
As the FDA says, “because of its complexity, the development process for software should be even more tightly controlled than for hardware, in order to prevent problems that cannot be easily detected later in the development process.” (They put that in bold, not me.)
Programming is still a dicey business, in other words, as is obvious to anyone with a smartphone that crashes, overheats or drains its battery without reason. (And that’s not to mention the prospect of your phone being hacked.) I have just such a phone, a Samsung Galaxy S4, which initially worked quite well. Now, not so well. It runs on Android software written by Google.
Google engineers and programmers are smart, but they can’t escape humankind’s collective immaturity of knowledge around the task of engineering software that is safe, reliable, error free and resistant to malicious attack. These cars will kill people, albeit hopefully many fewer than the number of people killed today by human drivers.
And sometimes when they do maim or kill, the cars will have to make ethical decisions: if three pedestrians dash across the road one after the other right in front of a car, should the car jam on the brakes knowing it will hit and kill two of them, or should it swerve and kill only the third one instead? In the latter case, fewer lives will be lost but the pedestrian it kills wouldn’t have died had the car not affirmatively decided to swerve. Passively kill two people or deliberately kill a different person? Kill one pedestrian or swerve into another car and perhaps only injure its occupants?
There are many variants of this so-called “trolley problem” and the car’s software will have to make hard choices. These are moral dilemmas that society – government regulators – should at least have input into. They’re not decisions that Google programmers should make alone at their white boards.
Those programmers and their colleagues – engineers, designers, businesspeople and others – are brilliant, and they appear to have the most ambitious goals – and be the furthest along – of any company working on autonomous cars. Google has brought the world some great products, and a commercially available self-driving car, although it will bring enormous dislocations, will bring enormous benefits (pdf) as well, if it’s safe.
Why Google and Automakers Can't be Trusted
Notwithstanding Google’s old “don’t be evil” motto, the  company has overreached and been slapped for it just like any other large company. In 2010, it deployed privacy-busting features in Google Buzz, which resulted in the FTC imposing an astonishing twenty years of outside privacy audits starting in 2011. Yet the following year, Google allegedly violated that already-unprecedented consent decree and ended up paying a record $22.5 million settlement. And in 2008-2010, the company made an allegedly “intentional mistake“ by sniffing personal data from WiFi routers as its cars photographed the streets, and then apparently attempted to conceal this breach of people’s privacy. The FCC fined Google $25,000 for impeding its investigation.
Google is a great company, but it’s a big company, and there isn’t a day since the era of 19th century robber barons that big companies haven’t overstepped and run roughshod, absent determined regulation.
Like most large companies, Google aggressively looks after its own interests even at the consumer’s expense. That makes “self-certify” a pretty thin answer when it comes to Google car safety – or, for that matter, privacy, since driverless cars are “data guzzlers” that will be acquiring 30 to 130 MB of external visual and other sensor data per second when in operation. (Even now, the race to control the dashboard is on.) And that’s not to mention the GPS data associated with the passengers (i.e., Google will know where you go in the physical world, not just on the Internet) and the likely cameras inside the vehicles, at least for those owned by rideshare services such as Uber, in which Google invested over a quarter-billion dollars last year.

That Uber deal is no outlier. As the Internet morphs into the Internet of Things, the company is transforming itself from an Internet giant into an enterprise with access to – and control over – the physical world. In the last seven months, Google has bought both the smart-device maker Nest and seven robotics companies including Boston Dynamics, which makes robots that can gallop at 28 mph or carry heavy payloads. One of them is even in humanoid form. Six feet tall and weighing 330 lbs., it looks like the Terminator, minus the skin. So while Google’s newest driverless cars – which omit steering wheel and pedals – were deliberately designed to look “friendly” and “cute,” this is not your father’s Google.

And Google, of course, is not the only company developing driverless cars. It's an industry-wide project, yet many automakers have an appalling disrespect for public safety: they fought seat belts, set private detectives to harass and intimidate consumer advocate Ralph Nader, and lobbied against air bags for years, delaying a mandate until 1998.
Meanwhile, also in the 1990’s, over 200 people died in crashes linked to failure of Firestone tires, but they weren’t recalled until 2000. Consumer advocates charged that both Firestone and Ford (in whose vehicles a majority of the deaths occurred) covered up the problem and failed to inform NHTSA.
Present-day behavior is no better: In March of this year, Toyota agreed to pay a record $1.2 billion criminal penalty stemming from charges that it intentionally hid information about safety defects from the public and made deceptive statements to protect its brand image. That was in addition to an earlier $48.8 million civil penalty and a litigation settlement of over a billion dollars. All were in connection with unintended acceleration: ultimately, in two separate recalls in 2009 and 2010 totaling nearly 8 million vehicles, Toyota acknowledged that its gas pedals could get trapped under floor mats or that the pedals themselves could simply stick, leading to out-of-control vehicles. Deaths and injuries were the result.
Even more ominously for driverless cars, a jury in 2013 found Toyota liable for unintended acceleration stemming from a third cause, alleged software errors. An expert on embedded systems (real-world machinery that incorporates software), Michael Barr, charged that he found 80,000 violations of software reliability programming rules in Toyota’s code (pdf, p. 29), even after an investigation by NASA (on behalf of NHTSA) had found no software cause for unintended acceleration but not ruled out the possibility either (pdf, p. 17; pdf, p. 13); a NASA engineer charged that its team was taken off the investigation prematurely. The jury awarded $3 million in compensation and was set to consider punitive damages when the case settled.
Toyota is not the only automaker with unintended acceleration problems. Just last weekend, NHTSA disclosed that it’s investigating 360,000 Nissan cars for the same problem, in this case allegedly caused by improper design of a trim panel. Last month, Ford recalled 1.4 million vehicles, for defects including loss of power steering and (as with Toyota) floor mats that can interfere with the accelerator.
That followed a recall by Ford of about 700,000 vehicles for software bugs that could cause airbags to delay inflation and – a separate problem on the same vehicles – doors that could come open while the car is moving. Also last month, Chrysler recalled almost 500,000 SUVs because cracks in a circuit board could lead to a faulty signal that caused the transmission to shift by itself from Park into Neutral. Toyota too had a software recall, in 2010, involving anti-lock brake software and another last month, involving airbag electronic control unit software.
Also on the airbag front, chemical and mechanical defects have been disclosed in the last several weeks in airbags from a company called Takata, resulting in recalls or halted sales by eight car companies: Ford, Chrysler, Honda, Mazda, Nissan, Toyota, BMW – and GM, a company which this year has recalled over 28 million cars in about 30 separate recalls.
The most noted of those recalls, of course, have been for an ignition switch problem. In the last five months the company has recalled about 14.8 million vehicles (6.6 million in the last several months plus 8.2 million yesterday) from model years 2003-2011 because the ignition switch can turn off while the car is in motion, disabling the power steering, power brakes and airbags. According to the government, GM knew of the problem since at least November 2009, but didn’t tell NHTSA for almost 4-1/2 years. The New York Times reports that GM’s knowledge goes much further back: a GM engineer approved the faulty design in 2002 before the car was released, even though the part maker said the switch didn’t meet specifications. GM investigated later complaints, but closed the inquiry in 2005 because “none of the solutions represents an acceptable business case.”
Four months later came the first death tied to the ignition switch.
The following year, the same GM engineer allegedly signed off on a redesigned switch intended to be safer, but there was no recall for another eight years. Last year, the engineer claimed under oath in a wrongful death suit that he had not authorized the change (notwithstanding a signed document to the contrary); he told Congressional investigators in May that he didn’t remember.
Also last month, the company paid a record $35 million civil penalty even as the Secretary of Transportation said “what G.M. did was break the law.” According to NHTSA, GM employees were actually trained in how to obscure safety problems. Documents released last week by a Congressional panel reportedly show that a current top GM executive knew of the problem as far back as 2005. Criminal, congressional, SEC and state investigations are ongoing. Hundreds of lawsuits have been filed, and a compensation plan was unveiled yesterday that could cost the company billions.
In addition, a second GM ignition switch problem has emerged: in many of the same cars with switches that could turn off while driving, it was also possible to remove the ignition key while the car wasn’t in Park or while the engine was running. A GM report said that the company knew about this in the early 2000’s but didn’t recall the cars until April of this year. Meanwhile, although NHTSA received the report in April, it wasn’t posted on the agency’s website until a reporter asked for it in June.
These are the sorts of companies that would self-certify their driverless cars.
What About the Regulators?
Not only did NHTSA delay posting the safety report, it seems to have fumbled the GM ignition switch issue from the beginning. The agency received the first complaint in 2003 but didn’t propose an investigation until 2007. Even at that, no recall occurred until 2014. Instead, despite receiving an average of two complaints per month, NHTSA repeatedly responded that “there was not enough evidence of a problem to warrant a safety investigation,” according to the New York Times. Now Congress is investigating NHTSA as well as GM.
NHTSA doesn’t only handle recalls; it’s the federal agency that issues automobile safety rules. Under California’s self-driving car law, any NHTSA regulations on the subject will preempt whatever the DMV is doing now.
Commenting on the agency’s approach to software, embedded systems expert Barr said, “NHTSA … needs to step up on software regulation and oversight. For example, FAA and FDA both have guidelines for safety-critical software design within the systems they oversee. NHTSA has nothing.” (See also pdf, p. 44.) There are techniques to reduce automotive software risks, but NHTSA doesn’t require them.
And the agency is not currently regulating driverless cars at all.
That leaves the field to the less technically experienced, resource-poor state DMVs. It’s not hard to play one state off against another, and that looks like what Google did with California. Although a Stanford report says that self-driving cars were “probably legal” already, the company presumably wanted more certainty, so it got Nevada to pass a driverless car law to its liking in 2011, then Florida, and then used those statutes to incite California legislators to act. Sacramento, fearful that Google’s car operation would decamp to the home of legalized gambling, passed legislation requiring the DMV to issue regulations by January 1, 2015, legalizing the cars on California roads. Gov. Jerry Brown signed the bill at Google headquarters in September 2012.
But a section of the law, CVC 38750(d)(2), requires that the regulations include “any testing, equipment, and performance standards ... that the department concludes are necessary to ensure the safe operation of autonomous vehicles on public roads, with or without the presence of a driver inside the vehicle.”
The only way to even have approximate confidence in the safety of a complex embedded system is for a knowledgeable independent third-party certification authority to review (under confidentiality agreement) the software and firmware source code, custom chip (ASIC) designs, hardware specs, design documents, flowcharts, and other technical documentation. 
The Problem with Road Testing
If you’re not a programmer, you may still be wondering why road testing is inadequate for driverless cars. Once again, the FDA explains, “One of the most significant features of software is branching, i.e., the ability to execute alternative series of commands, based on differing inputs. This feature is a major contributing factor for another characteristic of software – its complexity. Even short programs can be very complex and difficult to fully understand.”
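The FDA’s observation about branching compounds quickly: each independent if-statement roughly doubles the number of possible execution paths, so path counts explode exponentially. A toy sketch (the branch counts are illustrative, not drawn from any real vehicle codebase):

```python
# Each independent two-way branch doubles the number of distinct
# execution paths through a program.
def path_count(num_branches: int) -> int:
    return 2 ** num_branches

for n in (10, 50, 100):
    print(f"{n} branches -> {path_count(n)} paths")
```

Ten branches already give over a thousand paths; a hundred give more paths than any test campaign could ever sample, which is why structural review, not just testing, is needed.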
It is hard enough to test garden variety software (say, Microsoft Word), whose operation is deterministic and does not require sophisticated judgments, involve real-time sensor inputs or perform safety-critical functions. A word processing program is comparatively straightforward, yet MS Word, for instance, still manages to be buggy.
A robocar’s software is non-deterministic, in that (a) it makes high level decisions (brake? accelerate? swerve? turn? etc.) in real time and (b) it does so in response to huge quantities of extremely complex sensory inputs representing dozens or hundreds of stationary and moving objects that are never the same twice. No matter how precisely you try to replicate a road test, something different will always happen:
● One time the sun will glint off the windshield of an opposing car and blind a sensor, perhaps causing the car to mis-steer – or maybe the sun will be at a slightly different angle that day and the problem will remain latent, ready to affect an unwary consumer.
● Maybe a splash of mud mixed with oil and gravel will fly up past the radar in the front bumper, leading the car to swerve away from an imagined obstacle – or maybe the road will be dry that day, or the mix of oil, mud and gravel just different enough that the software properly processes and ignores it.
● Maybe a journey will be long enough and the number of objects encountered large enough that a heavily-used area of memory referred to as the “stack” will overflow, with potentially catastrophic consequences. Stack overflows can be hard to debug or predict. But perhaps the road test will be too short to trigger the bug, again leaving latent a problem that will only show up when a car crashes in real-world operation.
● Perhaps one day the car will pass by a strong electrical field – say, from a power plant or even a downed wire – that induces an electric current that modifies memory or triggers a false sensor reading, again leading to a dangerous malfunction not detected during a DMV road test.
● How vulnerable are the car’s sensors to snow, ice and mud? A fair-weather road test won’t answer that question – and we already know from the crash of Air France Flight 447 off the coast of Brazil just how deadly the consequences of sensor malfunction can be.
● Maybe one day the car will hit exactly 32 mph – a binary number that looks like this, 100000 – at precisely the same time a car in the next lane is decelerating and hits 16 mph (10000) and that confluence of zeros and velocity changes triggers some quirky bug that only appears if GPS signal is momentarily lost and a train crosses in front of the car. Of course this is a contrived example (here’s one from the real world), but the point is that software can fail for strange reasons in weird and sometimes catastrophic ways.
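The example is contrived, but the shape of such a bug is easy to sketch. Everything below is hypothetical, including the deliberately planted defect:

```python
def relative_speed_bucket(own_mph: int, other_mph: int) -> int:
    """Hypothetical planner helper that buckets closing speed.

    Contains a deliberately planted bug: a bogus bitwise 'fast path'
    that fires only on rare bit-pattern coincidences, such as
    own_mph == 32 (0b100000) and other_mph == 16 (0b10000).
    """
    if own_mph & other_mph == 0 and own_mph == other_mph * 2:
        return -1  # wrong bucket: the planted bug
    return (own_mph - other_mph) // 10

# Invisible at nearly every speed pair a road test would sample:
print(relative_speed_bucket(30, 15))  # 1, correct
# But it fires on the rare exact combination:
print(relative_speed_bucket(32, 16))  # -1, wrong, and latent until now
```

No plausible road-test schedule samples that exact pair of speeds; only source-code review or exhaustive simulation would expose it.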
As the DMV assistant general counsel conducting the driverless car sessions, Brian Soublet, pointed out, there are a large number of traffic maneuvers – in his words probably too many to test (video). Even worse, there are an unbounded number of driving scenarios – combinations of maneuvers, pedestrians, cars, trucks, motorcycles, cyclists, road geometry, stop signs, traffic lights, weather conditions and more. You can’t road test all of them or even any large fraction because the test space is effectively infinite. The only way to probe them is to examine and audit the source code, and also to run the software through simulators – under the watchful eyes of DMV or third-party experts.
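A back-of-envelope count shows why the test space is effectively infinite. The category sizes below are invented for illustration, and deliberately modest:

```python
from math import prod

# Invented, conservative category counts for one short driving scenario;
# a real taxonomy would be far larger.
factors = {
    "maneuver":        20,   # left turn, merge, U-turn, ...
    "road_geometry":   15,
    "weather":          8,
    "lighting":         4,
    "nearby_vehicles": 50,   # count and configuration, coarsely binned
    "pedestrians":     20,
    "signage_state":   10,
}
scenarios = prod(factors.values())
print(f"{scenarios:,}")  # 96,000,000 coarse combinations
# Even at one road test per minute, nonstop, that is over 180 years:
print(scenarios / (60 * 24 * 365))
```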
As a teenager, I managed to halt a “crash-proof” computer by directing it to take input from the display screen and send output to the keyboard, both of which were obviously impossible. It was a simple prank that the programmers somehow hadn’t anticipated. What pranks and scenarios will driverless car companies overlook as they turn their machines loose?
Cybersecurity and More
Yesterday’s pranks are today’s cybersecurity threats, of course, and these cars will be vulnerable. Even today, cars are at risk. Yet, John Tillman of Mercedes, Chrysler’s Good and VW’s Wendling all urged the DMV not to regulate driverless car cybersecurity (video). They didn’t feel that DMV was the right agency for the task, but had no alternative proposals.
There will be a lot of pressure for driverless car software upgrades to be transmitted wirelessly over the air (OTA), but this should only be allowed with extraordinary security, because OTA mechanisms represent an enormous attack surface. Moreover, even under the best of circumstances, software updates are problematic, especially when the initial code is complex. It’s easy to “break something” when you’re in the process of fixing something else or adding capabilities. The FDA found that 79 percent of software-related recalls in medical devices were due to defects introduced in updates, not in the original code.
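That failure mode, a fix that quietly breaks something else, takes only a few lines to illustrate. Both functions below are hypothetical:

```python
def clamp_speed_v1(speed_mph: float) -> float:
    """Original code: clamp a requested speed to the range [0, 65]."""
    return min(max(speed_mph, 0.0), 65.0)

def clamp_speed_v2(speed_mph: float) -> float:
    """An OTA 'improvement' adds an early return for low speeds,
    and in doing so accidentally skips the lower-bound clamp."""
    if speed_mph < 25.0:
        return speed_mph  # regression: negative values now leak through
    return min(speed_mph, 65.0)

print(clamp_speed_v1(-5.0))  # 0.0, the original behavior
print(clamp_speed_v2(-5.0))  # -5.0, the update broke the clamp
```

A regression suite run before every update, under independent oversight, is the standard defense against exactly this.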
There are also legal issues, because adhering to the Vehicle Code isn’t always straightforward. Soublet highlighted a left turn issue which Google acknowledged ruefully (“I can’t believe you’re bringing that up,” said Medford; video). Soublet also mentioned that obeying the speed limit on a freeway isn’t always the right thing, since it can lead to vehicles traveling too slow for traffic conditions. Here again, the decisions should be made by the DMV, not Google – and certainly not Google alone.
(Other legal issues – who’s liable for accidents, who gets the ticket and the points when a self-driving car breaks a traffic law, how will insurance work – were discussed at the DMV hearings but don’t relate to the point of this article.)
Adding to the seriousness of all of these issues – safety, ethics, privacy, cybersecurity, legal – is software monoculture, the presence of the same software in many vehicles. This means that a single failure will affect numerous people, just as a bug in MS Word affects millions of users.
Can we trust Google or car companies to protect our privacy, ensure our safety, guard against bugs, prevent cars from being weaponized into self-driving human-free suicide bombs, make ethical decisions when crashes are unavoidable, and all the rest? No, not without rigorous oversight and meaningful independent certification. 

Finally, we also have to put these cars in context. They're the first safer-yet-dangerous autonomous robots the public will encounter. There will be more -- eventually Honda's ASIMO robot will be released from the lab, ATLAS will come untethered and BigDog and other Boston Dynamics (i.e., Google) robots will be set loose, whether as helpers, packbots, monitors, security bots or more. Will we get the same answer then -- "this technology is a net good, regulators are self-interested, auditors wouldn't catch the bugs, so let the company self-certify and be done with it"?

Google plays a long game and a large one. They're aware that they may be setting a precedent for the way robots are regulated, or not.
The California DMV regulations – which may set a national and even international template (video) – will be out soon. Hopefully, they’ll be adequate to the task. Click here to tell the DMV what you think (and cc me).
Jonathan Handel (jhandel.com) is an entertainment/technology attorney and journalist in Los Angeles. He does not have clients related to self-driving cars. The opinions expressed here are his own. Images via Wikipedia and do not imply endorsement or affiliation.

Wednesday, June 18, 2014

2030 May Be The Year They’ll Take Your Driver's License Away



I was stuck in traffic yesterday, which I didn’t really mind because I have a fun little yellow convertible, and I was thinking about Uber ($17 billion! – that’s the company’s valuation, not the price of a ride) and Google’s driverless cars (development cost unknown), and I decided it was time to connect the dots: once a car learns to drive, there’s no need to own it and there’s no need for a driver. That’s because the car can come when called, take you to your destination, then go off and pick up someone else.
That sounds great and I’m hardly the first to connect those particular dots, but there’s a corollary that seems to have gone largely (though not entirely) unnoticed: when driving oneself becomes unnecessary, it will eventually become more expensive, less convenient and – ultimately – unlawful, because the cars will do it a lot better than we can. Traditional cars will find themselves in a death spiral, and they’ll be gone in less than – well, not less than 60 seconds, but sooner than you think.
In other words, Google is engineering all of us right out of the driver’s seat. If they manage to get self-driving cars on the market by 2020 – as they’ve said they hope to – then I’d give human drivers another ten years before we all get our licenses pulled and registrations revoked.
Welcome to the Jetsons era of driverless Cars as a Service, or d-CaaS if you will.
Improbable? Not really. Consider how the laws changed around smoking on aircraft. Until 1973, you could smoke anywhere on an airplane. Next came smoking in designated sections and then, in 1988 – just 15 years later – an outright ban on most flights. It was all motivated by health concerns. Today, with safety issues, shoe bombers and explosive underwear added to the mix, no sane person would touch a cigarette while aloft. What once was commonplace is now understood to be reckless, and punishable accordingly.
Or consider horses. Once common on city streets such as LA’s Sunset Blvd., they disappeared in the twinkling of an eye. (Well, not completely.) Who at the time could have imagined?
Already, of course, the elderly lose their right to drive when they can’t do so safely enough. But my point is that the definition of “safely enough” will shift: when self-driving cars are safer than human drivers, at some point none of us humans will be deemed safe enough to be worthy of a license.
Let’s back up and look at where we are now. Google’s fleet has already logged nearly 700,000 autonomous miles. How smart are the cars? Consider this, from a 2013 New Yorker article:
[Google lead programmer Dmitri Dolgov] was riding through a wooded area one night when the car suddenly slowed to a crawl. “I was thinking, What the hell? It must be a bug,” he told me. “Then we noticed the deer walking along the shoulder.”
The deer wasn’t even on the road, but was alongside it. In the dark, in a fraction of a second (one presumes), the car had detected an object, recognized it as a deer, inferred that it could leap onto the roadway unpredictably, released the gas and applied the brakes. Now that’s a smart car, and the night vision is just icing on the cake.
Maybe the deer was the only thing out there that night. Perhaps the car itself turns into a deer in the headlights when faced with complexity? No. Said Google’s project director Chris Urmson in a recent blog post, “our software … can detect hundreds of distinct objects simultaneously –  pedestrians, buses, a stop sign held up by a crossing guard, or a cyclist making gestures that indicate a possible turn.”
Now, there is some “trickery” involved: the car is preloaded with extremely high-def maps of the areas it drives in. Far more detailed than Google Maps, these maps are models of the physical world, with information like the height of traffic signals, the position of curbs and more. Those models make it easier for the computer to process sensor inputs, because it knows some of what to expect.
Of course, you and I use the same trick when we drive on familiar roads. And Google is hardly going to be deterred by the need to map the world’s roadways.
That deer story is already more than six months old. The software has probably moved on to recognizing cats and raccoons by now. Meanwhile, on the hardware front, the latest Google cars don’t have steering wheels or pedals. The driver is just another passenger along for the ride. And guess what: in most states, that’s probably legal already. Anyway, those cars are intentionally limited to 25 mph. That may sound kind of limiting, but it’s actually the same as the proposed New York City speed limit.
The Google team are very clever folks, and they’re not the only ones working on these kinds of vehicles, though they seem to be the furthest ahead. Readers who hate government should note that much of this was spurred by a U.S. government sponsored competition; indeed, Google’s Urmson, then at Carnegie Mellon University, was the technology leader of the team that won the government’s 2007 Urban Challenge.
In any case, it’s just a matter of time and determined engineering before autonomous cars become better drivers than people are. (By some measures, says Google, they already are.) That will lead to commercial availability. When that happens, the conversation will begin to shift from “is it ok to let cars drive themselves?” to “is it ok to let people drive themselves?”
And the answer we’ll arrive at, soon enough, will be no.
That’s because (as the New Yorker article points out) people are terrible drivers. They’re always distractible, and often tired, preoccupied, drunk, drugged, on the phone or sending a text. They’re emotional, error-prone, drive too fast and react too slow. They have blind spots, can’t see in the dark and usually can’t be trusted to use both feet at once, meaning that in an unexpected crisis even the best driver loses seconds shifting his right foot from the accelerator to the brake. A well-programmed computer, kitted out with cameras, lasers and radars, will be able to do much better.
When that happens, the social cost of human driving will no longer be simply unfathomable – in the U.S. alone, 33,000 deaths, almost 2.5 million injuries, $277 billion in economic losses and $871 billion in total social harm per year – it will become unacceptable. Over 95% of that cost is attributable to driver error (pdf; see pp. 24-25).
Divide those dollar figures (2010 data, the latest available) by the adult population and you’ll see that human driving costs the nation the equivalent of $1200 to $3500 per adult per year. Some of that is accounted for by auto insurance rates, but too many people are uninsured or underinsured, and the insurance system arguably doesn’t do a good job of reflecting the true cost of accidents.
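For the curious, here’s the arithmetic behind that range, assuming roughly 235 million U.S. adults in 2010 (the population figure is my estimate):

```python
# Rough 2010 figures; the adult-population count is an assumption.
ADULT_POPULATION = 235_000_000
economic_losses  = 277e9   # annual economic losses cited above
total_harm       = 871e9   # annual total societal harm cited above

low  = economic_losses / ADULT_POPULATION
high = total_harm / ADULT_POPULATION
print(f"${low:,.0f} to ${high:,.0f} per adult per year")
# roughly $1,179 to $3,706, in the ballpark of the figures above
```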
So money is probably where the regulation will start. Somewhere between perhaps 2025 to 2030, you might have to pay an annual “human driver” fee if you want to keep driving the old fashioned way. It could be a hefty add-on to present-day license fees – probably not as much as the true social cost at first, but it might increase over time. Or it might take form as an increase in car insurance rates for those who insist on driving themselves.
Of course, self-driving cars are likely to be more expensive than the old fashioned kind, at least at first. Would the poor and middle class suffer, forced to choose between paying more for a license and insurance or paying more for a driverless car or a retrofit kit? Not at all. They’re likely to do neither, and instead use d-CaaS – driverless cars that appear when summoned, ready to whisk you to work, play, restaurants or one of the diminishing number of retail outlets not rendered superfluous by Amazon (with or without flying drones).
And who will provide those cars? Why Uber, of course (there’s a reason Google invested a quarter-billion dollars in the company), and Lyft, Sidecar, ZipCar (owned by Avis), Car2Go, RelayRides, Zimride (owned by Enterprise), and Enterprise, Hertz, Avis, Dollar, Hailo, Taxi Magic, Flywheel, local taxi and limo companies, and, gosh, have I left anybody out? They’ll all be in competition with each other. Some will own fleets, while others will provide financing or monetization for individuals who do buy autonomous cars or retrofit their existing vehicles.
But really, why buy? Americans spend roughly 3 hours a day in their cars. That means that about 21 hours a day, the vehicle sits idle. What a waste of money to buy a self-driving car when instead you can use one when you need it and the car can go about its business when you don’t. Do you really want to be in the business of owning a self-driving car, always keeping up with hardware and software updates, cleaning up spilled soda from the backseat and restocking the candy dish? Uber is not going to be about human drivers forever.
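The utilization math behind that claim is simple enough to show (the shared-fleet figure is my assumption):

```python
# The article's figure: about 3 hours a day behind the wheel.
hours_in_use = 3
utilization = hours_in_use / 24
print(f"{utilization:.1%} utilized")          # 12.5% utilized
print(f"{1 - utilization:.1%} sitting idle")  # 87.5% sitting idle

# If a shared autonomous car could be kept busy even half the day
# (an assumption), one could in principle replace several owned cars:
shared_busy_hours = 12
print(shared_busy_hours / hours_in_use)  # 4.0
```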
So, a far smaller number of self-driving cars could fill people’s needs than the number of cars owned today. During surge periods – primarily, rush hours – commuting in autonomous cars could involve ride-sharing (at a reduced cost per person) and/or multimodal trips (the car takes you to the train or subway station). Prefer your privacy? That will be available too, but at a cost in money and time, since you won’t get to use the commuter lane on the freeway.
Fewer cars and less time sitting idle mean less need for garages, lots and on-street spaces, and little or no time spent looking for parking. Fewer cars, tighter clustering and greater efficiency translate to less congestion. There will be more mobility for the elderly and disabled. And driverless cars will probably all be electric so that they can just sidle up to a charging station and rejuice without human intervention. So, less pollution.
What happens to all those existing traditional cars? I think the government will get them, and your license too.
That may make you think of gun rights – “they’ll pry my steering wheel from my cold, dead hands” –  but there’s no Second Amendment for cars, and even today, driving is considered a privilege, not a right.
Anyway, the government probably wouldn’t take your car from you at first. Instead, they’d buy it. The advantages of getting traditional cars off the road will be so great that some states might decide to pay owners to scrap and recycle them – just as California today pays owners of cars that fail smog check $1000 to $1500 to have their cars dismantled.
And, of course, the other thing they can do is regulate your car’s value away, by pursuing policies that favor d-CaaS. As driving oneself becomes more and more a pastime of the rich, the price of licenses, registration, insurance and gasoline will likely increase, further narrowing the user base – and political constituency – for traditional cars. As demand lessens, gas stations will close up shop, and owning a non-autonomous gasoline vehicle will become unfeasible. That will drive another nail in the traditional car’s coffin – or in its tire. You’ll wish you’d sold it to the state when you had the chance.
Meanwhile, densely-packed cities like New York, Boston and San Francisco might outlaw traditional cars altogether, or levy a heavy use fee, just as London and a few other cities impose congestion pricing today, with plans afoot in New York.
Mothers Against Drunk Driving might rebrand as “Mothers Against Drivers Driving” and maybe Car & Driver magazine will become Car & Operating System, if it doesn’t fold altogether.
Increasingly, those who choose to drive themselves will bear more of the consequences of their risky behavior. The damages awarded in the event of an accident may skyrocket, not just because the average driver will be wealthy, but also because the accident will be seen as easily avoidable. When the police show up at an accident scene, they might charge a fee, just as ambulances do. Ultimately, the decision to drive oneself might be reclassified as what the law calls “ultrahazardous,” making liability stricter and penalties more severe.
Of course, to take off the rose-colored Google Glass for a moment, it will be a weird and different world when nobody drives, and it’s bound to bring a loss of privacy. There have already been fights over driving data; there will be more. Inside the car, you’ll be in a public space: even now, the Google cars have cameras inside as well as out, and we can expect more of that, not less, as time goes on. And talk about a captive audience: at present, there are no ads in Google’s self-driving cars, but that won’t last.
There will no doubt be a loss of autonomy as well. Chinese cars will probably be programmed to avoid sensitive places like ongoing demonstrations, or Tiananmen Square on the anniversary of the massacre. Call it the Great Traffic Cone of China. (Yet countries like China and India might leapfrog the U.S. in the application of d-CaaS technology.) Even in the U.S., people on probation or subject to protective orders will have their mobility reduced. Teenagers might find that self-driving cars refuse to stop at liquor stores and head shops, or won’t stay out after midnight (“Chad,” the car will text an errant teen, “I’m leaving in five minutes.”). Any one of us might be offered a discount on our ride by Google if we let the car take us to a restaurant or store the company favors. Today, our attention is bought and sold in the form of advertising; tomorrow, our physical presence will be for sale as well. We might even have to agree to stay at the store for at least 15 minutes in order to get that ride discount.
Then there are the technical challenges. Yes, Google engineers are smart – but how smart? My Android-powered Galaxy S4, which initially worked so well, now tends to freeze, overheat and run down the battery. And that’s not to mention the prospect of it being hacked. Programming is still a dicey business: we’ve been designing bridges for 3300 years but the very term “software engineering” is less than 50 years old. As a rigorous discipline, it’s even younger – and it’s subject to little regulation.
One kind of program code that is regulated is medical device software, and in that context, the FDA makes a point that applies equally well to driverless cars: “Because of its complexity, the development process for software should be even more tightly controlled than for hardware, in order to prevent problems that cannot be easily detected later in the development process.”
Soon, another exception may be mapping apps: the National Highway Traffic Safety Administration wants the authority to regulate those for safety reasons.
So the road ahead for self-driving cars is likely to be studded with potholes. Will we need outside software auditors to reduce error and guard against the sort of cover-up that seems to have afflicted the GM ignition switch team and that company’s management? Probably so. Google itself, after all, already operates under the scrutiny of outside privacy audits imposed by a 2011 consent decree, and the company later paid a record $22.5 million fine for violating that consent decree.
Can we trust Google or car companies to protect our privacy, ensure our safety, guard against bugs, prevent cars from being weaponized into self-driving human-free suicide bombs, make ethical decisions when crashes are unavoidable, and all the rest? Not without oversight. Federal regulators are still figuring out the issues (see pdf of May 2013 NHTSA policy paper), but thanks to Google’s lobbying, California is moving ahead on driverless cars even as it struggles with Uber. Whoever brings us smart cars – whether Detroit, foreign car companies, Google or some combination – will be doing the world (and themselves) a great service, but will also need to be regulated with more technological precision than we usually apply to software. Self-driving cars, after all, are robots with human payloads, and are far more dangerous than Roombas.
Even with those caveats, self-driving cars are on their way and human drivers on their way out. And even with all the efficiencies gained, I’ll miss my yellow convertible, I will. But I expect that by 2030 I’ll have more than a Roomba to console me. I’ll be holding out for a robocat.
Jonathan Handel (jhandel.com) is an entertainment/technology attorney at TroyGould in Los Angeles and a contributing editor at The Hollywood Reporter.