Augmented Reality Law, Privacy, and Ethics

Law, Society, and Emerging AR Technologies

Brian D. Wassom

Allison Bishop, Technical Editor

AMSTERDAM • BOSTON • HEIDELBERG • LONDON • NEW YORK • OXFORD • PARIS • SAN DIEGO • SAN FRANCISCO • SINGAPORE • SYDNEY • TOKYO

Syngress is an imprint of Elsevier

To the two mentors who had the most meaningful influence on the first 15 years of my professional career. The Honorable Alice M. Batchelder personifies integrity and excellence, and taught me to respect the legal system. Herschel P. Fink, Esq. taught me to love the law I practice, and to practice the law I love. Both gave me amazing opportunities to serve in ways that fundamentally shaped my career. I hope to pay forward to others all that I can never repay to them.

Endorsements

“Any techie that follows Augmented Reality knows that AR continues to surge under Moore’s law. Brian Wassom is the indisputable, top legal expert in the realm of Augmented Reality. His perspective and legal lens continues to focus on AR and its journey to revolutionize technology. This book is a must read for any person looking to delve into the augmented world and absorb the rapidly changing legal and ethical landscape of cutting-edge high technology and its influence on society. This book is an unmatched AR resource that yields a powerful comprehension of an evolving mass medium.”

-Joseph Rampolla - Cyber-crime expert/Augmented Reality Dirt Podcast creator/Co-author of Augmented Reality: An Emerging Technologies Guide to AR book

“Brian D. Wassom is my go-to resource on anything having to do with how Augmented Reality and emerging technology relates to legal issues. His writing is clear, impactful, and highly accessible regarding complex legal and technical issues. His book Augmented Reality Law, Privacy, and Ethics provides compelling evidence as to why Augmented Reality will drastically change culture over the next few years and how people need to prepare for what lies ahead. Brian’s wisdom, humor, and insights make Augmented Reality Law, Privacy, and Ethics a pleasure to read, and a must-have resource for anyone wishing to understand how our vision of the future will be perceived through the lens of Augmented Reality.”

-John C. Havens, Contributing writer for Mashable, Slate, and author of Hacking H(app)iness - Why Our Personal Data Counts and How Tracking it Can Change the World.

“We’re at the precipice of the next, visual era, with smart glasses that will forever change how we look at the world. More than just a comprehensive look at the related legal, social, and ethical issues, this book will get you thinking about the full impact of what’s to come.”

-Dave Lorenzini, CEO of Arc

“As the mass-media industries adapt to the newest mass medium, Augmented Reality, the combined abilities of digital, mobile, social, and virtual, all produce a quagmire of challenges and threats - as well as opportunities. Brian’s groundbreaking book is an invaluable guide to the treacherous ground that media owners, content creators, talent, news organizations, and others will face as they rush to stake their claims in the AR world. A must-read and invaluable resource for the next ten years.”

-Tomi T. Ahonen, author of 12 books including “Mobile as 7th of the Mass Media.”


“Brian Wassom is the world’s leading expert on augmented reality (AR) law. Wassom’s research pioneered the field of AR law and currently defines the way it is understood by developers. His writing points out the heart of the salient issues facing the rapidly growing field of AR. Wassom’s texts are required reading for my Mobile AR graduate course at NYU.”

- Mark Skwarek, a full-time faculty member at New York University (NYU) and the director of the Mobile AR Lab at NYU.

“Brian Wassom brilliantly illuminates some of the tricky issues of privacy, law, and ethics that will determine whether Augmented Reality results in an enhanced or degraded future for humanity.”

-Tish Shute, Head of Product Experience at Syntertainment and cofounder and chief content officer of Augmented World Expo.

“Wassom thoroughly highlights most of the key issues facing AR today while establishing a clear path for analysis in the future. From advertising, to smart cars, to the augmented criminal organisations of the future, Augmented Reality Law, Privacy, and Ethics is a must-read for anyone looking to become deeply involved in AR over the next decade.”

-Brendan Scully, Senior Business Development Manager, Metaio, Inc.

Author Biography

Brian D. Wassom litigates disputes and counsels clients from Fortune 50 companies to startups concerning copyright, trademark, publicity rights, privacy, and related intellectual property and advertising issues. He is a partner in the law firm of Honigman Miller Schwartz and Cohn LLP, and chairs the firm’s Social, Mobile and Emerging Media Practice Group. Brian authors a popular blog on emerging media at Wassom.com that features the section Augmented Legality®, the first regular publication devoted to the law governing augmented reality. Brian presents regularly to industry groups, legal education seminars, and conferences across the country on intellectual property, digital media, and related topics.

Technical Editor Biography

Allison Bishop is a highly experienced criminal-justice reviewer with 10 years’ experience reviewing cases, criminal research, and criminal activity. Allison also works as a paralegal, performing tasks such as briefing cases, reviewing case law, and working within the legal process. As an editor, Allison enjoys reviewing and editing works on criminal justice, legal studies, security topics, and cybercrime. When Allison is not entrenched in studies, she loves to exercise, cook, and travel.

CHAPTER 1

What is “Augmented Reality Law,” and Why Should I Care?

INFORMATION IN THIS CHAPTER:

  • The “horizontal” nature of studying augmented reality law

  • The inevitability of augmented reality technology

  • The economic significance of augmented reality technology

WHAT IS “AUGMENTED REALITY LAW”?

One of the joys of writing the first book on a topic is having the freedom to frame the discussion however seems best to me. The topics of discussion in the following chapters are the ones that I find the most important to explore based on my own experience practicing law and spending time with members of the augmented reality (or “AR”) industry.

But there are also downsides to a project like this. Among those is the need to justify the book’s existence before convincing anyone to read it. In the case of AR law, I am often required to explain to listeners what “AR” even is before I can broach the subject of why the law governing it is distinct and significant enough to require its own book.

That is the function of this chapter and the next. Here, I will attempt to persuade you that the AR industry is one to take seriously, and that it will be important to understand (and to help shape) the law governing the use of AR technology. Assuming that you remain sufficiently open to these conclusions to follow me to Chapter 2, I will explain in greater detail the nature of AR and its related technologies. From that foundation, the rest of the book will survey a number of different legal and ethical topics that are likely to be, or are already being, implicated by AR.

I hope you will stick with me to the end, and agree that it was worth the ride.

A HORIZONTAL STUDY

If you are a student, then you are likely accustomed to studying one concept - such as contract law, chemistry, or grammar - at a time. Even in professional settings, individuals and entire companies often find themselves, consciously or unconsciously, thinking and operating within defined tasks, categories, or industrial segments, to the exclusion of all other subjects. We frequently refer to these areas of concentration as “silos” or “verticals,” implying that the people inside them may build up quite a bit of knowledge of, or experience in, the given topic, but have relatively little idea how that topic relates to anything else. For example, an automotive engineer may spend years, even an entire career, immersed in the inner workings of a particular subsystem of a car, with no understanding or concern as to how that subsystem relates to or affects the rest of the vehicle. Similarly, many legal and medical professionals develop highly specialized (and expensive) skills in a niche practice area, but would not have the first clue how to help a random client who walks in off the street with a basic, everyday problem.

This is not such a study. “Instead of a deep ‘vertical’ look at one legal doctrine, this [book] will survey several disparate topics ‘horizontally.’”1 In the current professional vernacular, it cuts across several verticals. Put another way, this book takes as its starting point one particular industry - the companies and innovators developing AR and related technologies - and surveys the various legal issues that members of that industry are likely to encounter. This approach has the advantage of being enormously more useful for the members of that industry and the professionals (like me) who would serve them, but it can be a bit disorienting (at least at first) for students accustomed to more abstract analysis.

That is not to say, by any means, that vertical studies of legal principles do not have their place in academia, or that students should avoid reading a book like this one. To the contrary, courses in basic legal doctrines provide the building blocks necessary for applying the law to complex problems. Horizontal exercises like this one can be ideal vehicles for transitioning from book learning to the ability to counsel clients in real-life situations. That is one reason why horizontal studies like this one are not uncommon during the third year of law school.

Perhaps the most direct audience for this book, however, is the growing ranks of those business people and technological dreamers who are out there, even now, literally building a new world around us all by means of what we currently call “AR” or “augmented world” technology. I have been privileged to meet and interact with scores of these innovators who are rapidly forming an industry out of concepts that were pure science fiction mere months earlier. They have the foresight to recognize just how much our world will change when we finally master the art of interweaving our digital and physical means of experiencing the world.

When I speak at AR conferences and events or counsel clients in this industry - usually after the audience has already heard from several entrepreneurs who cast grandiose visions of what can be done with the technology - I sometimes joke that it is my job as the lawyer in the group to crush their dreams and bring them back down to earth. Yet my actual intention (both there and here) is quite the opposite.

1I borrow this description from another exercise in horizontal legal analysis, the excellent e-casebook Advertising & Marketing Law: Cases & Materials, by Rebecca Tushnet and Eric Goldman. Rebecca Tushnet & Eric Goldman, Advertising & Marketing Law: Cases & Materials, i (July 2012) (available at: Announcing a New Casebook: “Advertising & Marketing Law: Cases & Materials” by Tushnet & Goldman - Technology & Marketing Law Blog).

These innovators’ dreams are so inspiring because they actually have a chance at being realized. But if AR entrepreneurs are going to successfully bring their visions to fruition, they need informed guidance from advisors who understand the realities and requirements of the legal and business worlds. These advisors must shepherd the innovators through the tricky landscapes and potential pitfalls of regulatory checklists, investment deals, IP protection, and all of the minutiae on which visionaries ought not spend too much of their time. I want more members of this industry to recognize their need for such guidance, and the legal services industry to be better prepared to provide it.

This leads to two more currents that are important for me to mention at the outset. First, this book cannot, and does not attempt to, provide legal advice. Consult a lawyer directly before making business decisions. Second, the laws discussed herein are almost exclusively those of the United States. Although the AR community is truly worldwide and many legal and ethical principles cross national boundaries, it is the American legal system in which I practice and that forms the context for my analysis.

THE LAW OF THE HORSE

Today, there is almost no one who could honestly be called an “AR lawyer.” This will remain true for some time, even as the industry begins to mature. One reason for this is that “AR law” is a concept much like a term I learned in my law school days: “the law of the horse.” This phrase illustrates the difference between vertical and horizontal legal studies. The idea behind it is that there is no such thing as “horse law.” Rather, if I own a horse and have a problem with the jockey, for example, I would seek counsel from an employment lawyer. If my shipment of hay doesn’t arrive, I should consult a commercial transactions attorney. And if my neighbor complains about the smell of my horse ranch, I might consult an attorney experienced in nuisance law.

Each of these lawyers would be practicing some aspect of “horse law,” in some colloquial sense, but you would not call any of them a “horse lawyer,” because lawyers do not usually hold themselves out in that manner. Lawyers typically market their services according to particular categories of legal doctrine or practice. Historically, relatively few lawyers have packaged their services according to the needs of a particular industry, even though it might be more efficient for our hypothetical horse owner to find a lawyer or law firm specializing in “horse law” than to seek counsel from different specialists on each issue.

I first heard this “law of the horse” analogy applied to “Internet law,” to make the point that there was no such thing. Rather, the Internet and its use implicate virtually every legal vertical, depending on the context. “Internet law” is a horizontal subject (and thus not worthy of study in a law school, or so was the implication when I first heard the term used).

In the same way, “AR law” is also like the law of the horse. Defined literally, “AR law” encompasses all of those fields of legal practice that AR companies will encounter - including corporate, tax, intellectual property, real estate, litigation, and personal injury, among others. Indeed, if AR reaches even half of its potential, it is poised to revolutionize society at least as much as the Internet itself has done. It is inevitable, therefore, that such a sea change in how we conduct ourselves on a daily basis would also influence the laws governing that behavior and how they are applied. Yet, even today, we still see relatively few lawyers marketing their expertise in “Internet law,” and virtually none have yet grasped the significance of AR as such.

Today, more lawyers and law firms recognize the value of organizing their services according to clients’ needs rather than by traditional categories of practice. This, in part, is why many law firms have assembled “industry teams” focused on the needs of particular types of companies and comprising a number of specialists from relevant legal disciplines. Practice groups like these are one way in which legal professionals can more comprehensively and efficiently serve the needs of horse owners or any other given industry. Working within a general practice firm composed of lawyers working in dozens of different focus areas is another.

As only one such example, I help to lead my firm’s “Social, Mobile and Emerging Media Practice Group,” so named to encompass both the social media that presents today’s most pressing digital media issues and tomorrow’s emerging media such as AR. For several years now, I and other members of this team have gotten to know professionals within the AR industry and - together with other members of our general practice firm - helped them solve the issues they encounter across a broad spectrum of legal disciplines. It gratifies me to say that I am not personally aware of any other legal practice group as focused on the AR sector. As the inevitability of AR becomes more apparent, however, I expect that we’ll see more such teams intended to serve this important industry.

WHY STUDY AR LAW?

If you are not already as enamored of the AR industry as I am, you may not yet be convinced that AR law is worth your time to study. In that case, allow me to recount some of the reasoning that led me to conclude that this field will be so important.

INEVITABILITY

In this chapter I have already used the word “inevitable” to describe the increasing prominence and impending ubiquity of AR. That is because I see AR not so much as a brand-new concept that will someday suddenly emerge onto the scene, but rather as a medium that has existed for decades and that is beginning to manifest itself with increasing speed as we finally see the development of the technology that can make it happen.

There are dozens of factors fueling the inevitability of widespread AR. Since consonance makes things more memorable, however, I will summarize them as the three C’s of convenience, creativity, and capability.

Convenience

Not many years ago, humanity’s best means of reading and recording data was on two-dimensional pieces of paper, which we stitched together and stored in books. When that data began to migrate onto computers, we displayed it on monitor screens, and books became files and folders. Over time, the screens became incrementally more aesthetically pleasing - flatter, higher-resolution, and more mobile - and even displayed some digital images that had the illusion of three-dimensionality. But the context in which these displays have appeared - the computer screen - has always been a two-dimensional rectangle.

AR is a unique step forward in the way we experience digital data, because it liberates that data from its two-dimensional box to make it truly appear (as far as human senses can perceive) to be three-dimensional. Granted, there will almost always be some medium (such as eyewear, a window, or a mobile device) through which we experience the display, and those media will remain two-dimensional for the foreseeable future. But AR creates the illusion that the display is present among, and even interacting with, our physical surroundings. Perception is reality, as the saying goes, and it is the perception of this illusion that we call “AR.”

One fundamental reason that there will always be an impetus to experience data in this format is that physicality is intuitive to us. As children we have to learn to read and write, but playing with physical objects comes naturally. The less work our brains need to do in order to translate and process data, the more readily our minds will embrace it.

Take, for example, the yellow line of scrimmage and the blue first-down line that appear in most televised football broadcasts these days (Fig. 1.1). The technology to create this illusion is actually one of the earliest forms of AR in mass media. Today it is even more sophisticated, with all manner of game statistics appearing as if they were on the football field itself. And the images themselves are so high-resolution and rendered so fluidly that the illusion of physicality is complete. The result has been to make it significantly easier for viewers not schooled in the rules of the game to comprehend the action. It’s one thing to say “the offense needs to carry the ball 15 more yards to the 30-yard line”; it’s another thing entirely to say “they need to reach that blue line.” One statement takes significantly less mental processing to understand, which, for some viewers, is the difference between enjoying the broadcast and changing the channel. Indeed, I have heard from several people who attended their first live football game and were disappointed by the experience of trying to follow the game without the digital overlays on the field. For some children who have never watched a game on television without those overlays, the effect is jarring; they had never considered the fact that the lines weren’t actually there!

FIGURE 1.1

NFL broadcasts contained some of the earliest examples of mass-market AR.

FIGURE 1.2

The Iron Man films are among the most popular depictions of AR.

For the same reasons, there is a certain level of understanding about a thing that we as humans cannot reach unless we experience the thing physically. In my line of work, when young litigation attorneys are arguing a case involving a specific place or product, they learn the value of actually visiting the place or holding the product in their hands. That experience does not always reveal more quantifiable data about the thing, but there is a qualitative level of understanding that the attorney gains. They feel as if they understand the thing better, and are therefore often better able to form and express arguments about it.

The Iron Man movies offer another example of the same truth. In each of the four films in which Robert Downey, Jr.’s version of the Tony Stark/Iron Man character has appeared to date, we see him use AR to design complex machinery, architecture, or landscapes (Fig. 1.2).1 Whatever it is that he’s studying, Stark views digital renderings that are projected into the space in front of him. By means of poorly explained but fantastically acute holographic and motion-sensing equipment, he physically grasps, manipulates, and alters the data as easily as he would a physical object. (Actually, it’s even easier, since a real physical object would offer resistance and could not hang motionless in empty space.) When Stark needs to study an object more closely, he sweeps his arms in broad gestures to expand the display to hundreds of times its original size. If Stark needs to walk among the digital objects as if they were surrounding him on all sides, he can do that.2 Each such cinematic sequence comes at a point in the plot in which Stark needs to overcome a design problem or gain new insight that he could not grasp merely by reading lines of code or digital images on a computer screen. And each time, it works.

FIGURE 1.3

Elon Musk acknowledged Iron Man as the inspiration for his own AR system.

Despite all of the entertaining, fast-paced action and gee-whiz effects of the Iron Man movies, these AR design sequences have so stirred viewers’ imaginations that they remain some of the most memorable scenes in the films. Perhaps that is because this way of interacting with data just feels so natural to so many people - and also so tantalizingly plausible that we wonder why we don’t already have such devices in our own offices and living rooms.

No less than Elon Musk feels the same way. Musk is the billionaire entrepreneur behind Tesla Motors, SpaceX, and the proposed Hyperloop train that could carry passengers from Los Angeles to San Francisco in about half an hour. As such, he is already the closest thing that our actual reality has to Tony Stark. He cemented that parallel on August 23, 2013, when he tweeted: “Will post video next week of designing a rocket part with hand gestures & then immediately printing it in titanium.” Iron Man director Jon Favreau responded, “Like in Iron Man?” Musk replied, “Yup. We saw it in the movie and made it real. Good idea!” (See Fig. 1.3.)

The next week Musk followed through, demonstrating on YouTube how SpaceX engineers were combining such devices as the Leap Motion gesture sensor, the Oculus Rift virtual reality headset, and a 3D projector to design rocket parts more or less exactly the way that Tony Stark would.

The point of this exercise was not merely to emulate Iron Man (or any of the other Hollywood films that depict AR being used in such a utilitarian manner, such as Terminator, Serenity, Mission: Impossible - Ghost Protocol, and G.I. Joe: The Rise of Cobra, just to name a few). Rather, Musk explained in his YouTube video that designing three-dimensional objects using “a variety of 2-D tools ... doesn’t feel natural. It doesn’t feel normal, the way you should do things.”3 Interacting with digital objects that appear to be real, on the other hand, only requires a designer to “understand[] the fundamentals of how the thing should work, as opposed to figuring out how to make the computer make it work.”4 “Then,” Musk said, “you can achieve a lot more in a lot shorter period of time.”5 In the terminology of this chapter, the AR experience becomes a more convenient way to interact with the data.

Notice the importance of feelings in Musk’s explanation of the technology. His premise is that if an interaction feels normal and natural on an intuitive level, it will be a more efficient and effective interaction. And that is a difficult premise with which to argue. The fact that interacting with data in this manner just feels right is one reason that humanity will inevitably design its technology to function in precisely that manner.

Creativity

Another fundamental characteristic of human nature is the need to express ourselves as individuals. The unique potential of AR to fuel such creative expression also contributes to the technology’s inevitability.

When a medium of expression is more convenient and intuitive to use - in other words, when we don’t have to think about how to use it, but can focus more on what we want to do with it - the medium will be an effective means of expressing ourselves. At the same time, the depth of what we can express is also limited in many ways by our chosen medium. For example, coloring, pointing, screaming, and grunting all come to young children more naturally than actual words. But one reason kids soon turn to language is because they quickly reach the limits of how much they can express with these other forms of communication. On the other end of the spectrum, I have a good friend who is a master violinist. His instrument gives him a “voice” that can express emotion to a depth that mere words cannot reach. But only through years of rigorous training did that means of expression become natural enough for him that he could use the violin to express actual music, as opposed to the painful shrieks the instrument would emit if I tried to use it.

In a similar way, digital imagery has become a rich medium for creative expression. And although two-dimensional rendering still requires a significant amount of training and skill to do well, the means to create it is becoming cheaper and easier to use all the time. As we add more dimensions to those images, the potential for creative expression goes up, but so do the practical barriers to entry. High-quality three-dimensional imagery is still difficult to do well; just witness the difficulties that movie companies faced getting audiences to accept 3D movies, despite the constant pressure to make them commercially viable. Taking those three-dimensional images and making them appear to be physical objects that persist and adapt to human interaction over time - what some in the AR industry refer to as “4D” - remains an even tougher nut to crack.

The cornucopia of creative expression that awaits when the public at large is able to experience AR is a big part of what keeps innovators working on the technology. To illustrate the qualitative difference between creative expression in standard 2D versus 3D or 4D, picture (the original) General Zod and his Kryptonian cohorts taking bodily form again as they escaped their two-dimensional “Phantom Zone” prison in the 1980 movie Superman II (Fig. 1.4). Or, even more aptly, consider the “Space Liberation Manifesto” advocated by science fiction author and Wired columnist Bruce Sterling at the 2011 Augmented Reality Event in Santa Clara, California. There, as part of his keynote address, Sterling arranged for a group of “rebels” dressed in faux-futuristic jumpsuits to “hijack” the speech to spread flyers advocating a populist agenda for this new “blended reality.” The manifesto - which Sterling promptly published in his Wired blog - read, in part:

FIGURE 1.4

The two-dimensional Phantom Zone prison in Superman II.

The physical space we live in has been divided, partitioned and sold to the highest bidder, leaving precious little that is truly a public commons. The privatization of physical space brings with it deep social, cultural, legal and ethical implications. Private ownership of physical space creates zones of access and trespass, participation and exclusion. Private use of physical space becomes an appropriation of our visual space, through architecture, so-called landscape design, and ubiquitous advertising whose goal is to be seen well beyond the boundaries of privately owned property. Simultaneously, private space becomes the preferred canvas for street artists, graffiti writers and other cultural insurgents whose works seek the reclamation [sic] of our visual space, the repurposing of private political and commercial space for their alternative cultural messages.

The nature of SPACE is changing. In the past, space primarily meant physical space - the three dimensional cartesian world of people and places and things. Networked digital computing brought us the notion of cyberspace - an ephemeral “consensual hallucination” that nonetheless appeared to have an almost physical sense of place, a separate and parallel universe alongside the physical world. Today, as computing and connectivity become pervasive and embedded into the world and digital information infuses nearly every aspect of the physical environment, space has become an enmeshed combination of physical and digital - a ‘blended reality’. Cyberspace has everted; reality is enspirited.

This new physical+digital SPACE brings new characteristics, new affordances, new implications for culture. Its physical dimensions are finite, measurable, subject to ownership and control, but its digital dimensions are essentially infinite, subjective, and resistant to centralized control or governance. The new SPACE opens tremendous opportunities for access, expression and participation, but also for commercialism, propaganda, and crushing banality.6

Prolixity aside, this passage does a good job of foretelling the “tremendous opportunities” for creative expression in a world where the digital and physical can be combined in a meaningful, perceptible way. The manifesto’s example of graffiti illustrates the point well. When people are limited to physical means to express themselves, one person’s artistic appropriation of a given object (such as a brick wall) necessarily conflicts with the interests of others who would use that object for different purposes (such as the landowner). With AR, a potentially infinite number of people could superimpose their own expression on the same physical wall without changing anything about the wall in “real” space. As Sterling notes, this explosion of creative democracy will, over time, have profound implications not only for our art but for our culture as well.

Capability

Before society at large can experience the medium of AR, it first needs the technological capability to do so. The fact that we are now beginning to cross that practical threshold is what makes the future potential of AR an important consideration for the present. Sterling’s manifesto is right to note that “computing and connectivity [have] become pervasive and embedded into the world,”7 because it is that development that will lay the groundwork for ubiquitous AR. The sheer amount of computational ability that we all carry with us each day has reached a critical mass that enables some truly amazing experiences. And by application of Moore’s law, which holds that processing power doubles roughly every 2 years, we can expect that potential to grow exponentially.
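To put some rough numbers on that claim: if processing power doubles every 2 years, two decades of the same trend yields roughly a thousandfold increase. The short Python sketch below is my own illustration of the arithmetic; the baseline of 1.0 and the chosen horizons are arbitrary assumptions, not figures from any cited source:

    # Back-of-the-envelope Moore's law projection: processing power assumed
    # to double roughly every 2 years. Baseline and horizons are illustrative.

    def projected_power(baseline: float, years: float, doubling_period: float = 2.0) -> float:
        """Projected processing power after `years`, relative to `baseline`."""
        return baseline * 2 ** (years / doubling_period)

    for years in (2, 4, 10, 20):
        print(f"After {years:2d} years: {projected_power(1.0, years):,.0f}x")
    # After 20 years: 1,024x -- the exponential curve described above.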

We are at the point where each step forward in computational ability promises an entire new layer of digital-physical interaction. Brian Mullins, president and cofounder of the industry-leading company Daqri, has often noted in his public presentations that it was the addition of a gyroscope and enhanced processing power to the iPhone 4 - on top of the compass and accelerometer already present in earlier models - that allowed AR apps to make the jump from simply detecting QR codes and other 2D markers to recognizing three-dimensional objects and overlaying data onto them in four dimensions.

That device hit the market in June 2010 - 3 short years before this writing. Now Apple considers the iPhone 4 too antiquated to sell any longer. Virtually all of the devices that Elon Musk used in his Iron Man-esque YouTube video - e.g., the Leap Motion sensor and the Oculus Rift headset - have been introduced in the interim. If we have gone from relatively simple iPhone apps to gesture-controlled rocket design in 3 years, what will be possible in another year? In 3? In 10?

FOLLOW THE MONEY

The progression of digital technology to date and the multiple visions of our augmented future from people who understand the technology are persuasive evidence of AR’s imminence. But these are not the only indicators. Investors and market watchers are also increasingly placing their bets on AR.

SmarTech Markets Publishing’s revenue forecasts

In 2013, SmarTech Markets Publishing released a report called Opportunities for Augmented Reality: 2013-2020. Despite identifying a number of practical hurdles that must still be overcome, “SmarTech believes that there is enough in this analysis to suggest a strong and profitable future for AR.”8 Some of the reasons SmarTech offered for this conclusion include that:

  • The “mobile industry” is already “huge” and “sophisticated.”

  • “AR is already out there as a deployed technology to some extent.”

  • “It also fits in well with other important trends such as the rise of tagging/RFID, NFC, location-based services, image recognition, and visual search.”

  • “Strong business cases can be made for AR using today’s technology.”

  • “Many of today’s backers are firms with deep pockets.”9

The report lists several well-known companies in the AR industry that had already received recent venture capital investments of between $1 million and $14 million, including Layar, Tonchidot, Total Immersion, Ogmento, Ditto, Wallit, Flutter, GoldRun, CrowdOptic, Blippar, and Wikitude.10 In June 2013, shortly after the release of SmarTech’s report, Daqri announced $15 million in private investment to support its own AR platform.11

These funds are called “investments” for a reason; the people making the investments expect a return on their money. The SmarTech report gives good reason to expect one. It forecasts revenue in the AR industry to exceed $2 billion by 2020, and to surpass $5 billion just 2 years later.12
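For perspective, the growth rate implied by that forecast is easy to work out; the arithmetic below is mine, not SmarTech’s:

    # Implied compound annual growth rate of SmarTech's AR revenue forecast:
    # $2 billion in 2020 growing to $5 billion by 2022 (figures from the report).
    cagr = (5 / 2) ** (1 / 2) - 1  # (end / start) ** (1 / years) - 1
    print(f"Implied annual growth, 2020-2022: {cagr:.0%}")  # roughly 58% per year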

Tomi Ahonen’s predictions on AR usage

SmarTech is not the only prognosticator to talk in numbers of this magnitude. Tomi Ahonen is an oft-quoted consultant and author of 12 books on the mobile industry. He characterizes AR as the “Eighth Mass Medium” of human expression, following print, recordings, cinema, radio, television, Internet, and mobile.13 He has followed the growth rates of these technologies, and come up with his own forecasts for the rate at which society will adopt AR. Ahonen predicts 1 billion users of AR across the globe by 2020, with that number climbing to 2.5 billion by 2023 (Fig. 1.5). Translated into revenue, these figures are more optimistic than SmarTech’s prediction.

Even more notable, however, is the similar exponential growth curve in both charts. Whether expressed in terms of dollars or users, both forecasters see the technology catching on first with a core market, and then taking off like wildfire from there. The experience of the Internet and mobile industries over the last few decades lends credence to these predictions.

Gartner’s estimations concerning workplace efficiency

In November 2013, the information technology research and advisory company Gartner estimated that digital eyewear had the potential to net field service companies $1 billion in savings by 2017.14 “The greatest savings in [this field] will come from diagnosing and fixing problems more quickly and without needing to bring additional experts to remote sites,”15 it said. But the report also saw “potential to improve worker efficiency in vertical markets such as manufacturing, field service, retail and healthcare.”16 Numbers like these are certain to catch the attention of professionals from several industries.

FIGURE 1.5

Tomi Ahonen’s predictions on AR usage.

CONCLUSION

If you’ve read this entire chapter, the chances are good that you are becoming as convinced as I already am that AR is poised to be a major force in American and global industry over the coming decade. If so, then follow me to Chapter 2 to learn in a little more detail what the “AR” medium looks like.

CHAPTER 2

A Summary of AR Technology

INFORMATION IN THIS CHAPTER:

  • Defining terms

  • Augmenting each of the senses

  • Supporting technology

  • Levels of adoption

INTRODUCTION

The goal of the previous chapter was to persuade you that augmented reality (AR), and the multiple ways in which it will intersect the law, are important. This chapter is meant to help you understand what AR actually is. After all, most good legal analyses begin by defining their terms.

More precisely, I want you to understand what I and the sources I will cite mean when we talk about AR, because the term means significantly different things to different people. Indeed, there are many within the AR community who do not care for the term “augmented reality” at all, believing that it sounds too stilted and artificial to ever be meaningful to the general public. Others have invented slightly different terms to refer to specific types of, or approaches to, what could be considered AR.

Further complicating this discussion is the fact that AR is not an island unto itself. If technologies that squarely fit within the definition of “augmented reality” are ever going to fully manifest themselves in society, they will require support from, and need to work in harmony with, a panoply of related technologies that do not, in and of themselves, fit entirely within the “AR” box. A proper discussion of AR’s role in society, therefore, must take these technologies into account as well.

Recognizing this fact, the industry’s leading conference changed its name in 2013, from the “Augmented Reality Event” to the “Augmented World Expo.” Following that lead, this book will likewise use the term “augmented world” to mean the full range of devices and technologies that work together to digitally enhance everyday life.


DEFINING OUR TERMS

AUGMENTED REALITY

Let’s begin with the phrase “augmented reality”. The subject of the phrase is “reality”. That’s the thing being “augmented” by AR technology. So, what do we mean by “reality” in this context? Obviously, we could answer that question in several ways. For example, when asked recently to give an example of “augmented reality” that the general public could easily understand, one commentator responded (perhaps jokingly): “drugs”.

That’s not what the emerging AR industry has in mind. It doesn’t encompass the dream worlds of such films as Inception or Sucker Punch, or a drug-enhanced vision quest. Poetic license aside, we’re not talking about mental, emotional, spiritual, or metaphysical “reality” when we discuss the latest AR app. Instead, we mean the actual, physical world we all inhabit.

What, then, does it mean to “augment” that reality? Starting again with what it doesn’t mean, it’s important to note the distinction between AR and virtual reality, or VR. This more familiar term describes a completely self-contained, artificial environment. Think Tron or The Lawnmower Man, or the web-based worlds of Second Life and World of Warcraft. The Oculus Rift headset that debuted in 2013 is another example of virtual reality because the display completely covers the user’s eyes (Fig. 2.1).

FIGURE 2.1

Oculus Rift is a contemporary example of virtual, not augmented, reality.1

1© Flickr user Sergey Galyonkin, used under CC BY-SA 2.0 license. See https://creativecommons.org/licenses/by-sa/2.0/

ONLY FIVE SENSES?

Technically, biologists identify anywhere between nine and 21 separate physical “senses” in humans. Those outside the classical five-sense understanding include pressure, itch, proprioception (body part location), nociception (pain), equilibrioception (balance), thirst, hunger, magnetoception, and time, among others.59 An exploration of how these senses could also be digitally augmented would (and likely will) be fascinating, but is beyond the scope of this book.

59 “How Many Human Senses are There?” wiseGEEK, available at http://www.wisegeek.org/how-many-human-senses-are-there.htm (last visited September 13, 2014).

The user’s actual, physical surroundings don’t enter into the experience. (That said, several developers have begun equipping the Oculus Rift with external, forward-facing sensors capable of incorporating the user’s surroundings into their virtual environment, thereby enabling a true AR experience through the device.17)

AR, then, is a blend of VR with plain old physical reality. The American Heritage Dictionary defines the verb “augment” as “to make (something already developed or well under way) greater, as in size, extent, or quantity.”18 That’s what AR does. It uses digital information to make our experience of actual, physical reality “greater.” It doesn’t create a brand new, standalone plane of existence; it simply adds to the information we already process in the physical world. (This is an objective description, of course; whether AR makes our experience subjectively “greater” promises to be a fascinating and very context-specific debate.) This book will frequently use the word “virtual” to describe the digital information displayed by AR devices. This is an accurate use of the word “virtual” - which means “existing or resulting in essence or effect, though not in actual fact, form, or name”19 - because AR often creates the illusion that digital information exists in, and interacts with, physical reality. But do not confuse the usage with “virtual reality.”

Tying this understanding of “augment” to the word “reality” shows why it’s important to define our terms. How does this technology increase the “size, extent, or quantity” of our physical reality? To answer that question, we need to recall how it is that we experience the physical world. And the answer, of course, is through our five senses: sight, smell, touch, taste, and hearing. “AR,” therefore, is a technology that gives us more to see, smell, touch, taste, or hear in the physical world than we would otherwise get through our non-augmented faculties.

Again, it is important to recognize that even this definition of AR does not command universal consensus. When I first proposed the foregoing formulation in March 2011, Bruce Sterling, the science fiction author and Wired columnist who has headlined several AR industry conferences and earned the nickname “The Prophet of AR,” responded that he preferred a definition first articulated by Dr. Ronald Azuma, who earned his Ph.D. in computer science at the University of North Carolina at Chapel Hill. This definition “formally insists on some real-time interactivity with an augment in a registered 3D space.” Otherwise, Sterling explained, “you get into trouble with adjunct technologies like 3D movies, digital billboards or projections. They have the AR wow factor, but they’re not using the core techniques of the field i.e. real-time processing of real 3D spaces.”20

Sterling’s point is a fair one. If we define our subject matter so broadly as to encompass too many commonplace technologies, then we dilute the significance of our conversation about AR, and we detract from the innovation currently underway in the AR field. At the same time, however, Sterling also admitted that, “in practice, these academic distinctions aren’t gonna slow anybody down much.”21 We have laid out a sufficient understanding of what AR is to appreciate what makes it special.

SYNONYMS

As I mentioned, some of the most prominent names in the AR industry do not care for the term “augmented reality” at all. For some, the concern is the one expressed above - that the phrase encompasses too many disparate technologies to be meaningful. To others, the term is such a mouthful that it shuts down conversation. Still others find “augmented reality” too reminiscent of “virtual reality,” which they feel already lacks mainstream credibility, or else has too much of a science fiction ring to it. Underlying all of these views is the fear that, by using the wrong terminology, the AR industry will scare off too many potential consumers and unnecessarily stunt the technology’s growth and its mainstream adoption.

For example, some commentators use such terms as “enhanced reality” or “reality with benefits.”22 As time goes by, assuming that AR experiences continue to become more prevalent, we may just call it another aspect of “reality,” and leave it at that. After all, twenty years ago, it was in vogue to refer to the internet as the “information superhighway,” and the verbal imagery of on- and off-ramps was everywhere. No one speaks in such terms today.

VARIATIONS ON THE THEME

Some phrases that sound similar to “augmented reality” actually have a sufficiently different meaning to merit discussion. One such example is “augmediated” or simply “mediated” reality, terms preferred by University of Toronto professor and wearable computing pioneer Dr. Steve Mann. “[C]onsidered by many to be the world’s first cyborg,”23 Mann has worn one version or another of his EyeTap digital glasses for more than 20 years. In a 2012 interview, Mann said that “augmented reality doesn’t make sense. Augmented reality just throws things on top, and you get a certain amount of information overloaded. We call it mediated reality.” (See Fig. 2.2.)24,25

FIGURE 2.2

Steve Mann.

Mann expounded on the difference during his keynote address at the “2013 Augmented World Expo.” As opposed to simply adding digital content, Mann’s “augmediated reality” enhances things that are dark and filters out bright lights to allow the user to see the physical world more completely. For example, he used this technology to weld steel without wearing a conventional mask. He also gave the real-life example of standing in the headlight beams of an oncoming car and being able to see both the license plate and the driver’s face. Of course, both examples fall neatly within our definition of augmented reality as “making greater” one’s visual perception of the world, even though the additional content that Mann perceived was physical rather than digital. His perception was nevertheless “augmented,” and it was accomplished by digital means. Nevertheless, some people do use “AR” in the same narrow sense that Mann does, and it is valuable to understand the distinctions. If anyone has earned the right to be heard on the subject of how we speak about digital eyewear, it is Steve Mann.

Moving in the opposite direction, we encounter the phrase “diminished” or “decimated” reality. These terms refer to the use of AR technology to decrease the amount of content we perceive. Mann has used this term when describing a feature of his “EyeTap” device that filters out visual ads for cigarettes.26 This, too, fits our working definition of “AR,” if we think of such filters as making our perception of the world qualitatively greater. Will Wright, creator of the SimCity, Sims, and Spore video game franchises and another keynote speaker at the “2013 Augmented World Expo,” spoke favorably of diminished reality as a means of lessening the amount of unwanted visual distractions in our lives. He encouraged the developers present to “add beauty to the world rather than using it as another way to browse the internet.” Diminished reality will play a large role in this book’s discussion of trademark law and civil society, among other things.

RELATED VOCABULARY

A few terms pop up often and uniquely enough in discussions of AR that it is worthwhile to point out their meanings here. I’ll introduce several more during the course of the book, but a few are worth defining at the outset.

Chief among these is the concept of “immersion” or “immersiveness.” This refers to the degree to which a user’s mind subconsciously accepts a digital illusion as being physically real. Even when the user objectively understands that the digital representation is not tangible, their visceral experience of the content can still feel that way. The more immersive an “AR” experience is, the more effectively it has done its job. (Little wonder, then, that one of the oldest companies in the AR field is named “Total Immersion.”)

“Geolocation” is a fairly self-explanatory term, but “geofencing” may not be. This refers to the establishment of invisible boundaries in real space, the crossing of which triggers a digital response. For example, the owners of a sports stadium may establish a “geofence” around the stadium’s perimeter, allowing only those who cross inside to access exclusive content about the game. Advertisers establish “geofences” within a certain radius around a particular store to trigger advertisements to passersby.
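Under the hood, a basic geofence is nothing more than a distance test: if the user’s reported coordinates fall within a set radius of the fence’s center, the app fires its response. Here is a minimal sketch in Python; the stadium coordinates, the 400-meter radius, and the unlock message are hypothetical stand-ins of my own invention:

    import math

    def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
        """Great-circle distance in meters between two (latitude, longitude) points."""
        r = 6_371_000  # mean Earth radius in meters
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlam = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def inside_geofence(user: tuple, center: tuple, radius_m: float) -> bool:
        """True if the user's (lat, lon) lies within radius_m of the fence center."""
        return haversine_m(*user, *center) <= radius_m

    # Hypothetical stadium fence: 400 m around a made-up coordinate.
    STADIUM = (42.3400, -83.0456)
    if inside_geofence((42.3410, -83.0450), STADIUM, radius_m=400):
        print("Unlock exclusive in-game content")  # the triggered "digital response"

Production apps typically rely on the mobile operating system’s own geofencing services rather than computing distances by hand, but the triggering logic is the same.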

“Digital eyewear” is a generic term I prefer, but one that is not yet widely adopted in the mainstream press. “Smart glasses” has become a more popular synonym. Both terms are meant to include all forms of head-worn devices that directly intersect and digitally alter a user’s field of view. Such generic language also sidesteps the debate about whether a particular device is truly “AR,” and avoids devoting a lopsided amount of attention to any one particular manufacturer or product. This is especially helpful now, when most mainstream press uses “Google Glass” as a stand-in for, or segue to, the entire AR field. Google Glass is ahead of the pack in terms of getting to market and in stirring conversation. But there is disagreement over whether it is truly “AR” (it isn’t marketed as such), and it is only one limited expression of what digitally enhanced eyewear can achieve.

A TECHNOLOGY FOR ALL SENSES

Although we usually think of the visual sense when discussing and creating AR, our definition of the term encompasses digital enhancement of all five senses. Therefore, to round out our understanding of AR, let’s survey some examples of how each sense could be, or is being, augmented.

VISION

A picture is worth a thousand words, and by some estimates, the brain processes visual imagery up to 60,000 times faster than text alone.27 So, it is not surprising that most AR research has focused on how to augment the sense of sight. Yet this may also be the most difficult sense to augment well. Because our eyes perceive so acutely, it is difficult to trick them into accepting digital content as physically real. That is especially true because, with the exception of true holographic projections, digital displays must remain two-dimensional, and rely on some form of intermediary filter to create the illusion of three- (or four-, if you like) dimensionality.

AR developers are experimenting with several media to create this illusion. The earliest and simplest was the television screen. Leaving aside the academic debate of whether this truly fits the definition of AR, the digital scrimmage and first down lines that appear on the screen in every NFL broadcast are some of the earliest and most effective examples of digital content intermixed with physical reality in a way that our minds accept as real. As mentioned in Chapter 1, I personally know several people (most, but not all, of whom were children) who attended a professional football game in person, for the first time, only to feel cheated and disappointed that the lines were not actually there on the field. Their habit of reliance on this digital enhancement made it much harder to follow the action of the non-augmented game.

One of the earliest ways that true AR was distributed was through computer webcams. Programs running on desktops or laptops use the webcam to recognize a certain “target” - usually a printed code or special image, although it could also include particular body parts such as a hand or face - and then display digital content atop that target on the computer screen. Early examples included a promotion for the Transformers movies called We Are Autobots, which superimposed a robot’s head over the user’s, moving the image along with the user in real-time video to create the illusion that he or she was really wearing the mask (Fig. 2.3). Retailers have used a similar line of “virtual try-on” websites to allow users to see themselves on-screen wearing such products as rings, watches, eyeglasses, or clothes. In fact, as discussed in Chapter 5, these applications have been some of the first to spur patent infringement disputes related to AR.
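A bare-bones version of this webcam pattern can be sketched in a few lines of Python using OpenCV. To be clear, this is not the We Are Autobots code (which was proprietary); it simply detects a face in the live video feed with OpenCV’s bundled Haar cascade and keeps a crude stand-in overlay locked onto it, frame by frame:

    import cv2  # pip install opencv-python

    # OpenCV ships with pretrained Haar cascades; this one detects frontal faces.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    cap = cv2.VideoCapture(0)  # open the default webcam
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Find face bounding boxes in the current frame.
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in faces:
            # Stand-in "robot mask": a filled rectangle that follows the face.
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 200, 255), thickness=-1)
        cv2.imshow("webcam AR sketch", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
            break

    cap.release()
    cv2.destroyAllWindows()

Swapping the rectangle for a transparent robot-head image, scaled to the detected bounding box, yields the masklike effect described above.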

The Lego company installed this same technology into in-store kiosks to promote its toys. A customer holding a Lego® box up to such a kiosk would see animated versions of the toys that could be constructed using the bricks in the box, moving around in three dimensions as if they were physically standing atop the actual box. Larger versions of the same concept have been installed on billboard-sized screens to promote various goods or causes.

Most AR today exists in apps available for download and use on a mobile device. These perform a variety of functions, from displaying digital content on two-dimensional images, to augmenting physical objects recognized in the real world, to displaying walking directions directly atop the sidewalk a user sees through their device’s video camera.

FIGURE 2.3

Navigation is also a prominent motivator for augmenting vehicle windshields, another potential medium for augmented displays. Pioneer has already launched its first version of an AR navigational aid, and more are sure to come. Several automotive companies have announced that they are developing technology for displaying driving directions over a driver’s field of view, or for enhancing safety by highlighting roads in foggy weather or calling attention to road signs. A scene from Mission Impossible: Ghost Protocol even shows a car windshield with a proximity detector that displays the heat signatures of pedestrians in the vehicle’s path. Toyota has also discussed the concept of augmenting the view for backseat passengers with all manner of entertainment content.28 These applications are discussed more fully in Chapter 7.

The technology known as “projection mapping” also deserves mention here. This is the use of precisely positioned and timed digital projections to create the illusion of altering the illuminated object.29 There are many who would not consider it AR, including Bruce Sterling in the above-mentioned quote. But, when done well (YouTube hosts a number of fine examples), projection mapping creates as complete an illusion of digital reality as any device I have seen. What is more, a projection mapping system currently under development at the University of Tokyo is capable of tracking fast-moving objects so precisely that it can project an advertisement onto a bouncing tennis ball without missing a frame.30 I believe it accurate to describe such technology as “augmented reality.”

Digital eyewear, however, is the real holy grail of visual AR. Users can only hold a mobile device out in front of themselves to see the augmented overlay in its video feed for so long before developing the feeling of exhaustion that AR developer Noah Zerkin aptly calls “gorilla arms.”31 Webcams and windshields are effective media, but not portable. Combining the best of both worlds requires a medium that is always in our field of vision, but that doesn’t require any extra work or thought to stay there. Only digital eyewear fits that bill.

Unsurprisingly, developers have been working on such eyewear for at least three decades - as Steve Mann demonstrated in 2013 through his traveling exhibit of 30 years’ worth of AR headwear. (Fig. 2.4) Now, several companies are poised to begin selling digital eyewear that is aesthetically acceptable and affordable enough to be enjoyed by the public at large - including Meta, a company Mann helps run. Most of the digital eyewear devices announced as of this writing are aimed at the consumer market. In September 2014, however, Daqri announced the “DAQRI Smart Helmet,” a hardhat-like device designed to be used in industrial settings. “It has a transparent visor and special lenses that serve as a heads-up display, along with an array of cameras and sensors that help users navigate and gather information about their en-vironment.”32 Once it hits the market, this device could help field service and similar

FIGURE 2.4

Steve Mann’s traveling exhibit showcasing 30 years’ worth of digital eyewear.

industries begin to realize the cost savings predicted by the Gartner report discussed at the end of Chapter 1 (Fig. 2.5).

TOUCH

Technology that enhances the sense of touch is better known as “haptic.” After visual technology, this field has the most promise for delivering meaningful AR experiences. The Finnish company Senseg has been developing technology to turn touch screens into “feel” screens that generate any number of artificial textures, edges, or vibrations. It accomplishes this illusion through the use of touch pixels, or “Tixels™,” that employ Coulomb’s force, the principle of attraction between electrical charges. By passing an ultra-low electrical current into an insulated electrode, Senseg says, its “Tixels” create a small attractive force on the skin of a finger. Modulating this attractive force generates artificial sensations.33
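Senseg has not published its control scheme, but the modulation idea itself can be sketched: map the finger’s position on the screen to a drive level for the electrode, so that stronger attraction coincides with the “ridges” of a virtual texture. Every number and function below is an illustrative assumption, not Senseg’s implementation.

```python
import numpy as np

def drive_level(finger_x_mm, ridge_period_mm=2.0, base=0.2, depth=0.8):
    """Normalized electrostatic drive as a function of finger position.
    Peaks in the drive signal feel like periodic ridges under the finger."""
    phase = 2 * np.pi * finger_x_mm / ridge_period_mm
    return base + depth * 0.5 * (1 + np.sin(phase))

# A finger sweeping 4 mm across the screen samples the texture profile:
for x in np.linspace(0, 4, 9):
    print(f"x = {x:4.1f} mm -> drive {drive_level(x):.2f}")
```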

Scientists at the University of Tokyo have gone one step further and developed “a flexible sensor thinner than plastic wrap and lighter than a feather.”34 When overlaid

FIGURE 2.5

The DAQRI smart helmet.

on human skin, it creates an “e-skin” that is as persistent and imperceptible as modern AR eyewear is to the eye.

At 2013’s ISMAR and Inside AR conferences, Disney introduced a different sort of haptic technology called “AIREAL.” It uses precisely timed puffs of air to create physical sensations at defined points in open space.35 And on September 9, 2014, Apple’s announcement of the Apple Watch advanced the general public’s understanding of what is possible with haptic feedback. “Right now, phones can provide physical feedback in one way: they buzz. But the Apple Watch can provide different kinds of haptic feedback and buzzing directionally to provide subtle directions or tapping lightly when a friend wants to say hello ... .”36 Apple also promised an eccentric feature that transmits the wearer’s heartbeat to another person.

HEARING

Aural AR gets less attention, and would appear to hold less potential for innovation. Hearing aids, after all, have been digitally enhancing our aural sense for many years.

Yet there are those doing important work in this field. One of the most interesting-looking projects, run by Dr. Peter B.L. Meijer, is called “vOICe.” This technology aims to adapt sound waves into “synthetic vision through auditory video representations”37 that allow totally blind individuals effectively to “see.”

Others are working on next-generation hearing aids for use even by people with normal hearing. These would allow users to focus on a particular conversation in a crowded room, or automatically block sudden, unexpected loud noises. As of this writing, a start-up called Intelligent Headset is preparing to ship a pair of headphones by the same name featuring a type of aural AR it calls “3D sound,” together with a variety of AR apps designed for gaming, education, and tourism.38
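The vendors have not disclosed their methods, but the classic way to “focus on a particular conversation” with a microphone array is delay-and-sum beamforming: delay each microphone’s signal so that sound arriving from the chosen direction lines up, then average. The sketch below assumes a simple linear array and far-field sound, and is offered only as an illustration of the technique.

```python
import numpy as np

def delay_and_sum(mic_signals, mic_positions_m, angle_deg, fs, c=343.0):
    """Steer a linear microphone array toward angle_deg (0 = broadside).
    mic_signals: array of shape (n_mics, n_samples); mic_positions_m: each
    mic's offset along the array axis in meters; fs: sample rate in Hz."""
    steer = np.sin(np.radians(angle_deg))
    out = np.zeros(mic_signals.shape[1])
    for sig, x in zip(mic_signals, mic_positions_m):
        delay = int(round(x * steer / c * fs))   # per-mic delay, in samples
        out += np.roll(sig, -delay)              # np.roll wraps; fine for a sketch
    # Speech from angle_deg adds coherently; off-axis noise partially cancels.
    return out / len(mic_positions_m)
```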

Meanwhile, some developers in the “quantified self” effort to collect ever-more-detailed information about individuals’ physical health have discovered that the ear is a much better location on the body for taking measurements than the wrist, where most present-day fitness devices are worn.39 If this spurs manufacturers to sell, and users to wear, more ear-based devices, it is logical to expect increased demand to give those devices additional capabilities that actually affect, and augment, our hearing.

TASTE AND SMELL

Neither of these two related senses has ever offered much room for digital enhancement, but some work toward this end has been done. “The ‘Tongueduino’ is the brainchild of MIT Media Lab’s Gershon Dublon. It’s a three by three electrode pad that rests on your tongue ... [and] connects to one of several environmental sensors. Each sensor might register electromagnetic fields, visual data, sound, ambient movement - anything that can be converted into an electronic signal. In principle, this could allow blind or deaf users to ‘see’ or ‘hear’ with their tongues, or augment the body with extra-human senses.”40

The most prominent work in augmented taste is being done by Dr. Adrian Cheok and his students at the Mixed Reality Lab at Keio University. “Smell and taste are the least explored areas because they usually require chemicals,”41 Dr. Cheok told an interviewer. But “we think they are important because they can directly affect emotion, mood, and memory, even in a subconscious way. But, currently it’s difficult because things are still analog. This is like it was for music before the CD came along.”42 Eventually, Dr. Cheok hopes to simulate smells in a digital manner, such as through magnetic stimulation of the olfactory bulb. The Mixed Reality Lab has even

FIGURE 2.6

Daqri’s MindLight.

solicited papers for a workshop dedicated to augmenting the lesser-explored senses of touch, taste, and smell.43

Not far from this program, at the National University of Singapore, another team of researchers is “trying to build a ‘digital lollipop’ that can simulate taste.”44 The group’s leader, Dr. Nimesha Ranasinghe, says “a person’s taste receptors are fooled by varying the alternating current from the lollipop and slight but rapid changes in temperature.”45 If the technology were perfected for commercial use, “[a]dvertisers might include the taste of a product in an ad on your computer or television. Movies could become more interactive, allowing people to taste the food an actor is eating. [P]eople with diabetes [could] taste sugar without harming their actual blood sugar levels.”46 The team even hopes video game designers will offer players taste-based rewards and penalties in response to gamers’ performance.47

EXTRA-SENSORY AR

Just when I thought I understood the boundaries of AR, I encountered the latest technology being developed by Daqri. The New York Times recently covered an application Daqri calls “MindLight,” which detects a user’s brainwaves using wireless EEG sensors connected to digital eyewear.48 When the sensors detect that the user is concentrating on a light bulb, the bulb turns on or off. Think “The Clapper,” but with brainwaves (Fig. 2.6).
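Daqri has not published MindLight’s detection algorithm. A common approach in hobbyist brain-computer-interface work, sketched here, is to threshold the ratio of beta-band to alpha-band power in the EEG signal, since beta activity tends to rise with focused concentration. The sample rate, frequency bands, threshold, and the light_bulb object are all illustrative assumptions.

```python
import numpy as np
from scipy.signal import welch

def is_concentrating(eeg_window, fs=256, threshold=2.0):
    """Crude attention detector for one EEG channel: compare beta-band
    (13-30 Hz) power to alpha-band (8-12 Hz) power over a short window."""
    freqs, power = welch(eeg_window, fs=fs, nperseg=fs)
    beta = power[(freqs >= 13) & (freqs <= 30)].mean()
    alpha = power[(freqs >= 8) & (freqs <= 12)].mean()
    return beta / alpha > threshold

# "The Clapper," with brainwaves: toggle the bulb whenever a one-second
# window of sensor data crosses the threshold.
# if is_concentrating(latest_second_of_eeg):
#     light_bulb.toggle()
```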


FIGURE 2.7

Device interactivity through the Daqri Smart Helmet.

Flipping light switches, however, is the least impressive application of this revolutionary development. Daqri envisions “[t]his system increas[ing] the efficiency of industrial processes many fold by pointing workers toward targets for action in a specific sequence, measuring their concentration at critical junctures, and enabling pre-visualization of each action to reduce mistakes.”49 At the 2013 Augmented World Expo, CEO Brian Mullins demonstrated that the system could learn the brainwaves associated with many different images, allowing the images to be displayed as soon as the user thinks of them. The potential ramifications of this technology are simply astounding.

SYNTHETIC SYNTHESIS

Of course, the greatest potential in these separate means of augmentation lies in their combination. Just as we need all five senses working together in order to fully appreciate our physical reality, so too will the effect and utility of AR increase exponentially when multiple senses are augmented simultaneously. Such augmented experiences would simply feel far more real.

The initial teaser videos for the Daqri Smart Helmet provide a glimpse of one type of this interactivity between devices. The company intends that “users will have the ability to touch and control the interface through integration with new form factors, such as smart watches.”50 The video shows a worker viewing an augmented display hovering over his wrist, through which he can scroll by swiping his watch (Fig. 2.7).

SUPPORTING, OR “AUGMENTED WORLD,” TECHNOLOGIES

Any discussion or implementation of AR also necessarily involves a variety of technologies that are not, in and of themselves, AR, but without which an augmented experience would not be possible. As explained above, I will refer to these as “augmented world” technologies.

MESH NETWORKING AND THE PANTERNET

I have come to believe that certain types of augmentation will not be practical on a mass scale until devices become much more autonomous, capable of sensing and interacting with each other, rather than relying on a single cloud server - or even a single internet - to provide all of the necessary data. This leads into a discussion of what is called the Internet of Things (“IOT”), an already-emerging ecosystem in which digitally networked physical devices talk to each other and register their geopositions in real time. At current rates of growth, it will not be long before virtually every physical object we encounter on a daily basis is hooked into the IOT.

But the concept I am aiming at here is also broader than the IOT. It also includes mesh networks - digital infrastructures “in which each node (called a mesh node) relays data for the network [and a]ll nodes cooperate in the distribution of data in the network.”51 Applied to AR, this means that an individual’s wearable devices will be able to perceive and interact with networked devices the person encounters without having to rely on a connection to a central internet. This sort of infrastructure would greatly shorten the distance that data has to travel in order to become available to the user, and increase the number of pathways that data can take to get to the user, thus reducing potential lag time in transmission and eliminating devices’ reliance on a steady Wi-Fi or LTE signal. In this environment, augmented user interfaces would be more reliable, and hence more likely to be adopted.
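A toy simulation makes the relaying behavior concrete. In the sketch below, each node re-broadcasts a message once to its neighbors, so data reaches the user over short, redundant hops with no central server involved; the five-device topology is invented for illustration.

```python
from collections import deque

mesh = {                           # node -> directly reachable neighbors
    "headset": ["lamp", "watch"],
    "lamp": ["headset", "thermostat"],
    "watch": ["headset", "thermostat"],
    "thermostat": ["lamp", "watch", "door"],
    "door": ["thermostat"],
}

def flood(source, message):
    """Every node relays the message once; all nodes cooperate to spread it."""
    seen, queue = {source}, deque([source])
    while queue:
        node = queue.popleft()
        for neighbor in mesh[node]:
            if neighbor not in seen:
                print(f"{node} -> {neighbor}: {message}")
                seen.add(neighbor)
                queue.append(neighbor)

flood("headset", "query: nearby augmentable objects?")
```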

The slowly unfurling infrastructure to support networked automobiles is likely to spur development of this sort of technology. Networked vehicles will be expected to interact with fixed, roadside nodes in order to exchange data with a traffic control system. They will also be designed to communicate with other vehicles in order to reduce traffic accidents. Neither type of interaction can afford to be dependent on a strong Wi-Fi signal or a central cloud server.

Another step in this direction came in the form of the “goTenna,” a phone-sized device that emits a wireless signal to create its own closed network, allowing participating devices to connect with each other. After another decade or so of miniaturization, one could imagine a similar capability being built into tiny, perhaps even microchip-sized devices and distributed broadly, allowing every business, family, or social group to create its own ad hoc network independent of the internet.52 Already, in September 2014, Sequans Communications and Universal Scientific Industrial announced their plans to release to market an all-in-one modem module capable of equipping IOT devices with the ability to transmit an LTE wireless signal.53

A related development is the steady progression toward universal, high-speed access to the internet at all points on Earth - what I will call the “Panternet.” Both Facebook (through its Internet.Org project and a fleet of “drones, lasers and satellites”) and Google (through “Project Loon, [an] effort [to] launch[] Internet-beaming antennas aloft on giant helium balloons”), among others, are working toward this goal.54 That sort of infrastructure would also alleviate many of the connectivity issues that stand in the way of an always-on, instantly responsive infrastructure for the exchange of data in augmented form. It would still rely on a central network, and thus would not be as robust and responsive as mesh networking, but it could support, and serve as a backstop to, such networks.

MECHANICAL VISION AND SENSORS

Mechanical vision is obviously important to the performance of AR eyewear because the devices must be able to detect that something is there before they can augment it. By the same token, the more advanced the eyewear becomes, the better it will need to be at tracking the movements of the user’s eyes. Orienting displays so that they appear to overlap a particular physical object is notoriously difficult. To do it well, the device will need to know where the user is looking.

Location-sensing data will also be important to the delivery of augmented content. Today’s devices are mainly limited to using GPS signals, but those are accurate only to within a few feet and do not travel well through walls. Newer devices use near-field communication (NFC) or Bluetooth Low Energy (BLE) sensors to detect location more precisely, including indoors.
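For the curious, the usual way a BLE beacon’s signal strength is turned into a distance estimate is the log-distance path-loss model, sketched below. The calibrated one-meter power and path-loss exponent are illustrative defaults; real deployments measure both for the environment at hand.

```python
def ble_distance_m(rssi_dbm, tx_power_dbm=-59, path_loss_exp=2.0):
    """Estimate range to a BLE beacon from received signal strength.
    tx_power_dbm: calibrated RSSI measured at 1 m from the beacon.
    path_loss_exp: ~2.0 in free space; higher indoors, through walls."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

print(ble_distance_m(-75))   # about 6.3 m under these assumptions
```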

TAGGANTS FOR PINPOINT-ACCURATE PERCEPTION

My time involved with the AR industry has educated me on the enormous difficulty that computer vision applications have in precisely identifying the exact location, edges, depth, and identity of a three-dimensional object in uncontrolled environments, especially when the object (or the sensor) is moving or is poorly lit. That is why even the most impressive vision-based AR devices rely on controlled environments, ample lighting, and a high number of pre-programmed details about the image or object being recognized. These make for impressive displays under the right conditions, but not necessarily for robust AR applications suited for everyday use.

Nor is this a minor hurdle. Some of the best computer vision scientists in the world have been working on this problem for decades, and the technology still has an awfully long way to go in this respect.

This is why I have suggested that digital eyewear and other vision-based AR devices will soon rely on micro- or even nano-scale taggants to assist in locating physical objects.55 As counter-intuitive as it may seem at first, it may actually turn out to be less practical to design computer vision sensors capable of perceiving the world as accurately as the human eye does than it would be to simply paint the entire world with location-aware dots that a machine could locate much more easily. “Taggants,” according to one company that makes them, “are microscopic or nano materials that are uniquely encoded and virtually impossible to duplicate - like a fingerprint. They can be incorporated into or applied to a wide variety of materials, surfaces, products and solutions.”56 Some of the taggants available today can be detected from a distance; it is logical to expect that methods of remotely locating taggants will only continue to diversify and miniaturize.

Applied in a way designed to enhance visual AR, taggants might work in a manner analogous to present-day radio frequency identification (RFID) tags, but at a much smaller scale. RFID tags already “are tracking consumer products worldwide,” reports the website HowStuffWorks.57 “Many manufacturers use the tags to track the location of each product they make from the time it’s made until it’s pulled off the shelf and tossed in a shopping cart. Outside the realm of retail merchandise, RFID tags are tracking vehicles, airline passengers, Alzheimer’s patients and pets. Soon, they may even track your preference for chunky or creamy peanut butter.”58 A British design student even called for “[i]ncorporating small, edible RFID tags embedded in your food.”59 Such a system would allow food products to be tracked along the entire food chain, from production to digestion, and would even enable such devices as “smart plates” that scan your meal via Bluetooth and alert you to potential food allergens.

According to a 2011 L.A. Times article, “the Air Force asked for proposals on developing a way to ‘tag’ targets with ‘clouds’ of unseen materials sprayed from quiet, low-flying drones.”60 The paper quoted the president of one company that’s developing such nano taggants as saying that tagging, tracking and locating “is a hot topic in government work. It isn’t easy tracking somebody in a crowded urban environment like what is seen in today’s wars.”61

According to that company’s website, its “nano crystal taggants are deployable in solvents, inks/paints, and aerosols, allowing them to be easily integrated into various [military] applications ... and customized for the unique needs of other operations [as well].”62 It already makes “nano crystal security inks that can be incorporated directly into clear laminates, plastics, or appliques[,] ... and dye- and pigment-based inks (including black inks) for use in banknotes, concert tickets, lottery tickets, or CDs - and even in varnishes and lacquer finishes.” The “nanophotonic” taggants are optically clear, but can be designed to respond to a specific range of UV radiation.

Add these trends together, and what do you get? A technology capable of literally painting the world with AR markers. Micro- or nano-scale taggants baked into paint, plastics, asphalt, ink, or even dust would be invisible to the naked eye, but capable of marking all manner of 3-D objects in a way that appropriately equipped AR optics could be designed to recognize.

These technologies are especially exciting for those developing what AR enthusiasts call a “clickable world,” in which a person can physically interact with a physical object and get a digital response. Just as a real estate developer needs an infrastructure of water pipes and power lines in order to build a subdivision of houses, so too will software developers need an infrastructure of taggants, geolocation-aware sensors, and similar technologies in place before the augmented world truly takes shape.

HAND AND GESTURE TRACKING

One of the most important augmented world technologies is gesture tracking. Hand gestures and other physical movements are likely to become the most common way of interacting with digital objects that appear to be physical, if only because that is the most natural way to interact with physical things. As but one example, in 2012 Google obtained U.S. Patent No. 8,179,604 B1, titled “Wearable Marker for Passive Interaction.” The patent describes “[a] wearable marker [in] the form of a ring, a bracelet, an artificial fingernail[, a fingernail decal,] or a glove, among other possible wearable items.”63 Sensors in a user’s digital eyewear would “function together to track position and motion of the wearable marker via reflection, and by doing so can recognize known patterns of motion that correspond to known hand gestures.”64 For many, this means of interacting with digital data calls to mind scenes from Minority Report and similar sci-fi films (Fig. 2.8).
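The patent’s “known patterns of motion” can be illustrated with a toy recognizer: normalize the marker’s tracked path for position, scale, and sampling rate, then pick the closest stored template. This is my own simplified sketch, not Google’s method; the gesture names and traces are invented.

```python
import numpy as np

def normalize(path, n=32):
    """Resample a tracked marker path to n points, center it on its mean,
    and scale it, so shapes compare independent of position, size, or speed."""
    pts = np.asarray(path, dtype=float)
    t_in = np.linspace(0, 1, len(pts))
    t_out = np.linspace(0, 1, n)
    pts = np.column_stack([np.interp(t_out, t_in, pts[:, i]) for i in (0, 1)])
    pts -= pts.mean(axis=0)
    scale = np.abs(pts).max()
    return pts / scale if scale else pts

def classify(path, templates):
    """Return the name of the template whose normalized shape is closest."""
    p = normalize(path)
    return min(templates, key=lambda name: np.mean(
        np.linalg.norm(p - normalize(templates[name]), axis=1)))

templates = {                              # invented reference traces
    "swipe_right": [(x, 0) for x in range(10)],
    "circle": [(np.cos(a), np.sin(a)) for a in np.linspace(0, 2 * np.pi, 20)],
}
print(classify([(x, 0.05 * x) for x in range(8)], templates))  # -> swipe_right
```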

Closely related to tracking gestures is the ability of digital eyewear to register where the user’s hand touches. In May 2014, German AR company Metaio, one

FIGURE 2.8

Illustration from Google’s patent on a gestural interface device.

of the most prominent companies in the industry, demonstrated a prototype of the “Thermal Touch” technology it is developing.65 “Consisting of an infrared and standard camera working in tandem and running on a tablet PC, the prototype registers the heat signature left by a person’s finger when touching a surface.”66 Digital eyewear equipped with such technology “could turn any surface into a touch-screen.”67 The technology remains in the early stages of R&D, however; Metaio projects that it will be ready for widespread use in 5 to 10 years.68
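Metaio has not detailed Thermal Touch’s pipeline, but the basic trick - finding the warm spot a fingertip leaves behind - can be sketched as a comparison between the current thermal frame and a pre-touch reference. The resolution, temperatures, and threshold below are invented for illustration.

```python
import numpy as np

def detect_touch(thermal_frame, ambient, delta=1.5):
    """Find the residual heat blob a fingertip leaves on a surface.
    thermal_frame/ambient: 2-D arrays of temperatures (deg C); delta is
    an illustrative threshold for 'warmer than the surface was.'"""
    hot = (thermal_frame - ambient) > delta
    if not hot.any():
        return None
    ys, xs = np.nonzero(hot)
    return int(xs.mean()), int(ys.mean())      # blob centroid = touch point

ambient = np.full((120, 160), 22.0)            # pre-touch reference frame
frame = ambient.copy()
frame[40:44, 60:64] += 4.0                     # warm fingerprint residue
print(detect_touch(frame, ambient))            # -> (61, 41)
```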

FACIAL RECOGNITION

Facial recognition technology will be another important augmented world technology. To date, industry-leading companies have shown remarkable restraint in implementing such features. Google has disallowed facial recognition apps on its Glass headset, and Facebook has refrained from rolling out the technology to the degree that it could. But, as AR hardware proliferates, it will be impossible to keep this genie in the bottle. The potential commercial applications are simply too numerous and profitable to expect such restraint from all service providers. AR concept videos are chock full of examples in which digital data - links to a person’s social media profiles, dating service information, even whether he or she is a registered sex offender - hovers in the air over that person’s head. As noted in Chapter 8, the law enforcement community is particularly eager to implement this technology. Facial recognition is by far the easiest and most direct means of associating such displays with a particular person.

LEVELS OF ADOPTION

The analysis in this book largely presumes a world that does not quite yet exist. Although the pieces for realizing a fully augmented world are either in place or about to be, there is still progress to be made before the technology penetrates all levels of society and reaches its full potential. I will occasionally contrast these various levels of adoption when discussing AR’s legal ramifications, but it might be helpful to consider at the outset what these various stages might look like. These are only predictions, of course, and they get fuzzier the further out we go. But they are based on years of interaction with the people at the forefront of this industry.

NOW: EMERGENCE

The real world at the time of this writing is one in which AR is beyond its infancy, but not quite yet at its adolescence. There are hundreds of AR apps available for our mobile devices, and thousands of marketing campaigns have used the technology. Dozens of start-up companies are touting various AR innovations, and a few of those have received significant funding. Tech columnists write eager words about what’s just around the corner in this field. Although AR concepts have shown up in mainstream entertainment for decades, the public is just starting to grasp the idea that this technology is real, thanks in large part to the buzz about Google Glass and a handful of other products.

THE NEAR FUTURE: LEGITIMACY

One of the primary themes at the 2014 Augmented World Expo was the industry’s shift in emphasis from the consumer market to enterprise applications. Large companies like Daqri, Raytheon, and the Newport News Shipbuilding Company, among many others, have shown that AR can solve real problems and generate revenue. Investment from this sector will allow the technology to improve without being limited by the aesthetic whims and cost constraints of the consumer market. Within the next few years, AR technology should be able to cross the threshold from gimmicky marketing technique to an everyday method of consuming data used by a large segment of the general public.

FIGURE 2.9

The Gartner Hype Cycle.69

At the time of this writing, the most recent prediction of when we may see this transition is contained in the 2014 Gartner Hype Cycle, a graphic representation of the maturity and adoption of technologies and applications.70 Gartner’s chart on emerging technologies tracks the progression of various innovations through the phases of development that experience and collective wisdom have shown virtually all technologies to pass through. These have come to be known as the “Innovation Trigger,” the “Peak of Inflated Expectations,” the “Trough of Disillusionment,” the “Slope of Enlightenment,” and the “Plateau of Productivity.” In the annual chart released in August 2014, augmented reality ranked within the middle phase, the “Trough of Disillusionment.” In this phase, the buzz of expectation begins to wear off as some promising early innovations fail to deliver, and unproductive start-ups begin to fold. “Investments continue only if the surviving providers improve their products to the satisfaction of early adopters.”71 (Fig. 2.9)

As the graph suggests, however, this is an inevitable, even healthy period for any new technology to pass through. Next comes the “Slope of Enlightenment,” in which “more instances of how the technology can benefit the enterprise start to crystallize and become more widely understood. Second- and third-generation products appear from technology providers. More enterprises fund pilots; conservative companies remain cautious.”72 Finally, on the “Plateau of Productivity,” “[m]ainstream adoption starts to take off. Criteria for assessing provider viability are more clearly defined. The technology’s broad market applicability and relevance are clearly paying off.”73 Gartner’s 2014 chart includes an additional notation that it will likely be 5 to 10 years before the technology reaches the Plateau of Productivity.

Once AR begins to approach the Plateau of Productivity, digital eyewear will have become a mainstream product category, available from a half-dozen manufacturers or more. Indeed, these devices will be the centerpiece of the already-emerging ecosystem of connected, wearable devices, including subtle gestural controls. Sales of mobile phones will begin to decrease as more consumers come to expect the data they consume to exist in midair rather than on a flat screen.

In this stage of AR’s development, entertainment and advertising companies will have begun creating content intended to be consumed in augmented form. Consumers will expect to discover content this way, and the real world will begin to look like a vast, unused canvas ready to be digitally painted. That canvas will include people, as facial recognition and other biometric data become widespread means of identifying each other and associating people with digital content. There will be plenty of debate about when it is appropriate to augment certain people, places, and things, and who has the right to do so. Privacy, intellectual property and obscenity debates will be common (and are previewed in subsequent chapters of this book).

AR will also have become an indispensable tool in a variety of industrial settings, where AR can cut down on production and worker training costs. For the same reasons, augmented methods of teaching will be the hottest trend in educational circles as well. The word “augmented,” though, will become less common, as people begin to think of AR as the natural way to consume digital data. But the technology will still have its kinks, in light of how difficult it is to precisely augment moving physical objects. People will complain of motion sickness, and visual augmentations will still rely on various forms of targets and assists to improve image quality, such as location sensors, tags, and projection mapping.

THE MEDIUM TERM: UBIQUITY

At this stage, no one uses “phones” anymore, and two-dimensional screens of any type are used only in rare, special-purpose applications. This will be celebrated as an aesthetic advancement, since physical signage will be less necessary, but the virtually nil cost of digital advertising will mean that there is far more clutter in our field of view. AR will have become reliable enough that, especially in combination with now-commonplace self-driving cars, even traffic signs will have begun to go digital-only.

Most social interaction and consumer experiences will have an augmented digital component to them. Visual augmentation will finally have gotten to an acceptable level of acuity, and companies will be experimenting with prototypes of augmented contact lenses.

There will also be a significant digital divide between those using the latest AR technology and those who cannot afford it or who are prevented from enjoying it by physical disabilities. We already hear talk of a “digital divide” today, but the implications of the inequality will grow into a full-fledged social justice issue as reliance on the augmented medium grows. Chapter 10 explores this issue from the perspectives of ethics and social science.

THE LONG TERM: MATURITY

At this stage - decades from now - AR is old hat. All digital data comes in augmented form, and we are so accustomed to receiving input by digital means through all five senses that to communicate by any other means will seem quaint. By then, society will be on to the next big thing, whatever that might be. Meanwhile, the ability to interact in this manner will have shaped our societal norms and ethics to a degree that is difficult to foresee. Our society may not be one that people living today would recognize or be comfortable in.

But enough about the far-flung future. Let us now begin to examine the legal principles that will govern the use of AR technology over the next few years.

1

See, e.g., Iron Man (Paramount Pictures, 2008).

2

See, e.g., Tony Stark Makes an Atom, YouTube.com (July 22, 2010), available at: http://www.youtube.com/watch?v=6W8Q6wJ_TT8.

3

The Future of Design, YouTube.com, available at: http://www.youtube.com/watch?v=xNqs.S-zEBY#t=18.

4

Id.

5

Id.

6

Bruce Sterling, Augmented Reality: Space Liberation Manifesto, Wired.com (May 19, 2011), available at: http://www.wired.com/beyond_the_beyond/2011/05/augmented-reality-space-liberation-manifesto/.

7

Id.

8

Smartech Markets Publishing, Opportunities for Augmented Reality: 2013-2022, at [page number] (2013).

9

Id.

10

Id. at [page number].

11

Nick Summers, DAQRI Raises $15M to Develop Its Augmented Reality Platform, Will Support Google Glass at Launch, TheNextWeb.com (June 4, 2013), available at: http://thenextweb.com/insider/2013/06/04/daqri-raises-15m-to-develop-its-4d-augmented-reality-platform-will-support-google-glass-at-launch/.

12

Smartech Markets Publishing, supra, note 8, at [page number].

13

Tomi Ahonen, Augmented Reality—The 8th Mass Medium, TED Talks (June 12, 2012), available at: http://tedxtalks.ted.com/video/TEDxMongKok-Tomi-Ahonen-Augment;search%3Atag%3A%22tedxmongkok%22.

14

“Gartner Says Smartglasses Will Bring Innovation to Workplace Efficiency,” Gartner Newsroom, November 6, 2013.

15

Id.

16

Id.

17

See, e.g., wizapply, “Augmented reality device for the Rift: Trial production,” Oculus VR Developer Center, June 27, 2013, available at https://developer.oculusvr.com/forums/viewtopic.php?f=29&t=2042.

18

American Heritage Dictionary (4th ed. 2000).

19

Id.

20

Brian Wassom, Defining Terms: What is Augmented Reality?, Wassom.com (March 30, 2011) http://www.wassom.com/defining-terms-what-is-augmented-reality.html

21

Id.

22

See KTP Radhika, Reality, With Benefits, Financialexpress.com (May 3, 2013), http://computer.financialexpress.com/features/1272-reality-with-benefits.

23

Nick Bilton, One on One: Steve Mann, Wearable Computing Pioneer, The New York Times, August 7, 2012, available at < http://bits.blogs.nytimes.com/2012/08/07/one-on-one-steve-mann-wearable-computing-pioneer/?smid=tw-share&_r=0 >.

24

Id.

25

© Steve Mann. Used under CC BY-SA 3.0 license. See http://commons.wikimedia.org/wiki/File:SteveMann_self_portrait_for_LinkedIN_profile_picture_from_dsc372b.jpg.

26

"Id.

27

Media Education Center, Using Images Effectively in Media, available at <http://oit.williams.edu/files/2010/02/using-images-effectively.pdf> (citing research by 3M).


28

See Dave Banks, Toyota’s “Window to the World” Offers Backseat Passengers Augmented Reality, Wired (July 29, 2011) available at http://www.wired.com/geekdad/2011/07/toyotas-window-to-the-world-offers-backseat-passengers-augmented-reality/

29

See Definition of Projection Mapping, VJForums (January 17, 2012), http://vjforums.info/threads/definition-of-projection-mapping.37607/, for a healthy debate on the meaning of this phrase and its relationship to AR.

30

See <http://www.wimp.com/trackingcamera/>.

31

Dan Farber, The Next Big Thing in Tech: Augmented Reality, CNET (June 7, 2013), http://www.cnet.com/news/the-next-big-thing-in-tech-augmented-reality/.

32

Don Clark, “Augmented Reality Experts Unveil Hardhat 2.0,” Wall Street Journal September 5, 2014, available at http://blogs.wsj.com/digits/2014/09/05/augmented-reality-experts-unveil-hardhat-2-0/.

33

See A New Solution for Haptics, Senseg.com, http://senseg.com/solution/senseg-solution (last visited May 27, 2014).

34

John Pugh, New E-Skin Brings Wearable Tech to the Next Level, PSFK (August 14, 2013).

35

See Rajinder Sodhi et al., AIREAL: Interactive Tactile Experiences in Free Air (2013), available at http://www.disneyresearch.com/project/aireal/.

36

Alexis Madrigal, “What Apple’s New Products Say About the Future,” The Atlantic, September 9, 2014, available at http://m.theatlantic.com/technology/archive/2014/09/what-apples-new-products-say-about-the-future/379907/.

37

Peter B.L. Meijer, Augmented Reality for the Totally Blind, available at http://www.seeingwithsound.com/ (last visited October 5, 2013).

38

See Jabra and BlueParrott Developer Program.

39

Rachel Metz, “Using Your Ear to Track Your Heart,” MIT Technology Review, August 1, 2014, available at http://www.technologyreview.com/news/529571/using-your-ear-to-track-your-heart/

40

Tim Carmody, Trick Out Your Tongue and Taste the Sensory-Augmented World with Tongueduino, The Verge (February 21, 2013), available at http://www.theverge.com/2013/2/21/4014472/trick-out-your-tongue-taste-the-world-with-tongueduino.

41

Rick Martin, The Next Step in Augmented Reality: Electrify Your Taste Buds, SD Japan (June 21, 2013).

42

Id.

43

Bruce Sterling, Augmented Reality: Touch, Taste, & Smell: Multi-Sensory Entertainment Workshop, Wired (August 17, 2013), available at http://www.wired.com/beyond_the_beyond/2013/08/augmented-reality-touch-taste-smell-multi-sensory-entertainment-workshop/.

44

Nick Bilton, “Getting to the Bottom of a Digital Lollipop,” New York Times, November 22, 2013, available at http://bits.blogs.nytimes.com/2013/11/22/getting-to-the-bottom-of-a-digital-lollipop/.

45

Id.

46

Id.

47

Id.

48

Quentin Hardy, Thinking About the Next Revolution, New York Times (September 4, 2013) available at http://bits.blogs.nytimes.com/2013/09/04/thinking-about-the-next-revolution/?_r=2.

49

Gaia Dempsey, Controlling Objects Through Thought: 4D and EEGs, Daqri Blog, http://daqri.com/2013/09/mindlight-controlling-objects-through-through-4d-and-eegs/#.UlG5LIash8E (last visited October 5, 2013).

50

Daqri, Smart Helmet Features, available at http://hardware.daqri.com/smarthelmet/features.

51

“Mesh networking,” Wikipedia, available at http://en.wikipedia.org/wiki/Mesh_networking (last visited September 13, 2014).

52

Jordan Crook, “The GoTenna Will Let You Communicate Without Any Connectivity,” Tech Crunch, July 17, 2014, available at http://techcrunch.com/2014/07/17/the-gotenna-will-let-you-communicate-without-any-connectivity/.

53

“New LTE Module for IoT,” Connected World, September 11, 2014, available at http://connectedworld.com/new-lte-module-for-iot/.

54

“Facebook launches lab to bring Internet everywhere,” Yahoo! News, March 27, 2014.

55

“A Trillion Points of Light? Taggants as Ubiquitous AR Markers - Part 1,” Wassom.com (June 2, 2011), http://www.wassom.com/a-trillion-points-of-light-taggants-as-ubiquitous-ar-markers-part-1.html.

56

Microtrace, Taggant Technologies, http://www.microtracesolutions.com/taggant-technologies/ (last visited November 29, 2013).

57

Kevin Bonsor & Wesley Fenlon, How RFID Works, http://electronics.howstuffworks.com/gadgets/high-tech-gadgets/rfid.htm (last visited June 8, 2014).

58

Id.

59

Kyana Gordon, NutriSmart: Edible RFID Tags That Track Food Down the Supply Chain (June 1, 2011).

60

W.J. Hennigan, Pentagon Seeks Mini-Weapons for New Age of Warfare, Los Angeles Times (May 30, 2011).

61

Id.

62


63

U.S. Patent No. 8,179,604 B1 (issued May 15, 2012).

64

Id.

65

Metaio, “Press Release: Metaio unveils thermal imaging R&D for future use in wearable augmented reality headsets,” May 22, 2014, available at http://www.metaio.com/press/press-release/2014/thermal-touch/.

66

Id.

67

Id.

68

Id.

69

Used under CC BY-SA 3.0 license. See http://creativecommons.org/licenses/by-sa/3.0/.

70

Gartner, “Hype Cycles,” available at http://www.gartner.com/technology/research/methodologies/hype-cycles.jsp (last visited September 13, 2014).

71

Wikipedia, “Hype Cycle,” available at http://en.wikipedia.org/wiki/Hype_cycle (last visited September 13, 2014).

72

Id.

73

Id.
