Great episode from Radiolab on the vast underground networks that link trees together. Does that sound dull? It’s a testament to the Radiolab team’s skill that this remains one of the most riveting podcast episodes I’ve heard in months.
Pokémon GO has been all over the news since its launch, talked up everywhere from NPR and Forbes to regional and local news sites. Even my old law school blockmates have been posting about it on Facebook (Law students agog over a gaming app! Imagine that.), which isn’t surprising considering the numbers the game has been pulling in: Time calls it a worldwide phenomenon, and SurveyMonkey’s numbers indicate it’s “the biggest mobile game in US history.” (Lately the numbers have tapered off, but that’s something to gab about another day.)
It’s an impressive run for a game that’s built on a pretty simple mechanic: superimposing random creatures on the real world for players to catch. And it’s a mechanic I’ve been thinking a lot about since listening to a recent Naked Scientists podcast which had a segment dedicated to EuroHaptics 2016.
Augmented reality (which is what fuels Pokémon GO) tends to rely heavily on sight and, to a lesser extent, sound to add information to users’ perceived reality. Just look at this overview from LiveScience, which starts by saying, “Augmented reality is using technology to superimpose information on the world we see.” (emphasis mine) The article’s brief list of examples highlights how AR’s precursors mainly involved adding information to one’s field of view, a predilection that continues in more recent developments like 2013’s Google Glass and today’s phone and tablet apps. (Like, you know, Pokémon GO.) Even the sole mention of finger sensors, in the MIT SixthSense project, mainly involves the manipulation of projected images.
In other words, AR as we usually know it creates visual realities. Touch is supplementary, if not derived completely from the existing physical aspects of the user’s environment. This is all well and good — for those of us who aren’t visually impaired.
Which brings us to haptics, or human-computer interaction using touch and bodily movements. Hearing about the research presented at EuroHaptics 2016, I got to thinking about how expanding what constitutes mainstream AR could help a lot of users whose needs aren’t always accommodated by common tech interfaces. Deeper integration of haptic technology — going beyond haptics as a supplementary mechanism for sight — could provide more accessible means of navigating and controlling various tech, which could, in turn, make it easier to interact with the real world in general.
One project from EuroHaptics, for example, looked at the effectiveness of haptic feedback mechanisms for pilots landing aircraft at night or in featureless environments. Haptic cues proved helpful in countering the “black hole illusion” that arises from such visual conditions. It’s easy to imagine these mechanisms, with some tweaking and additional sensors, being useful not just in instances with unfavorable visual input, but also in ones where there are no visual cues at all.
This isn’t a new idea, of course. I mean, EuroHaptics has been convening since 2006, and we’ve heard a lot about various technologies helping people with a range of disabilities. However, there are still many avenues for improvement when it comes to integrating richer sensory input with commonly available tech, particularly in AR, and I point this out because there doesn’t seem to be as much of a push in these directions.
I can’t speak for other people, but the futures I grew up dreaming about were often distinguished by vision-centric developments: glassy holograms; light-based interfaces dancing on smooth worktops; vast networks accessed through complicated headsets. As our technology progresses, some of those developments inch closer to being part of our everyday: Google Glass happened, Microsoft now ships the HoloLens, the world is agog over Pokémon GO. However, the question remains as to how many people that “our” actually includes.
True, a lot of that technological progress has also given us robotic limbs (even exoskeletons!) and other impressive solutions to various conditions that limit people’s abilities to interact with both the real and technological worlds. But as this Al Jazeera article points out, a gap remains in the everyday spaces — and that gap raises important questions about what goals we intend these technologies to achieve vis-à-vis disability, and what views of disability those goals and attitudes imply.
In that same article, the filmmaker Regan Brashear asks illuminating questions about the perceptions of disability that come to inform the development of assistive technologies:
“Is it a valuable part of human life that will always be with us, or is it a problem to be fixed or eliminated? These perspectives lead us towards very different futures. One is about fighting for inclusion on all levels of society, ending stigma and developing useful and needed assistive technologies to enhance quality of life in conversation with the intended users. The other perceives disability as an inherent negative to be ‘fixed’ at all costs.”
Augmented reality seeks to enhance our experience of the real world by overlaying important information and expanded controls over our interactions with that world. So far that enhancement has taken primarily visual forms, at least in AR’s mainstream implementations. But conferences like EuroHaptics 2016 tell us that this limitation can’t exactly be called a consequence of technological deficit: there’s a lot of research and development involving senses other than the visual, and it’s happening (and available!) right now.
So why aren’t we seeing more extensive, dynamic deployments of this research in everyday technologies like smartphone-based AR? What’s keeping us from using advances in fields like haptics to implement more widespread — and better — instances of, as that Al Jazeera article says, the “simple technologies and accommodations” that enable persons with disabilities to participate more fully in society? Or to put it in Pokémon GO terms: while droves of us can now head out to try and catch them all, not all of us can take part in the catching, and it’s worth thinking about why.
Augmented reality, as with other kinds of tech, develops towards goals that we set our sights on. Considering the dominance of the visual in these technologies’ current iterations, wouldn’t it be ironic if we missed out on more inclusive technological commons because of a lack of vision?
A couple of months ago, the BBC reported new findings on puquios, which are spiralling holes scattered across Peru’s Nasca region. Through satellite imagery, a team of Italian researchers deduced the purpose of the once-mysterious holes: based on their placement and proximity to settlements, puquios seem to be part of a complex water retrieval and distribution system.
The BBC report carries a standout quote from the lead researcher:
“What is clearly evident today is that the puquio system must have been much more developed than it appears today,” says Lasaponara.
There are a lot of other notable quotes regarding this breakthrough, but that one dredged up a memory from one of the anthropology classes I took in college.
Two interesting stories popped up on my various feeds this week, and it just so happened that they were about beer.
First up: an article from Science Alert about the discovery of the oldest known brewery in China. Two points in particular caught my attention. The first is this:
According to McGovern, the brewery processes unearthed at Mijiaya reveal a ritual that has changed little in the millennia since. “All indications are that ancient peoples, [including those at this Chinese dig site], applied the same principles and techniques as brewers do today,” he told Madeline K. Sofia at NPR.
Of course, beer — both its drinking and its brewing — has been a fairly common part of everyday life for a while, but I can’t help but wonder how the craft brewers out there will react to this discovery, if at all. Craft brewing has been in vogue for a couple of years now (the boom started sometime in 2012, if Google search trends are any indication), and its popularity is such that even geek icon Wil Wheaton and local food blogs like Pepper have gotten involved somehow. I’d imagine there are enough brewing communities around now that this discovery might draw more than passing interest, especially since the Science Alert article goes on to discuss how residue in some of the unearthed pottery reveals a “surprising beer recipe.” As projects like the Inn at the Crossroads and the unique beer brews mentioned in that same article show, the urge to “recreate” things from seemingly unreachable or irretrievable sources is not new, and there’s no reason craft brewing would be immune to it. Should we expect ancient Chinese flavors on tap soon?
It’s not like the ingredients will be hard to get. The archaeologists behind the discovery highlight the presence of barley in the brewery’s residual stock — a detail which carries some fascinating implications:
“Barley was one of the main ingredient[s] for beer brewing in other parts of the world, such as ancient Egypt,” Wang told NPR. “It is possible that when barley was introduced from Western Eurasia into the Central Plain of China, it came with the knowledge that the crop was a good ingredient for beer brewing. So it was not only the introduction of a new crop, but also the movement of knowledge associated with the crop.”
Aside from ferrying information on how to use the grain, the introduction of barley could also have had profound cultural consequences, with the hip ingredient playing a part in helping to define social hierarchies inside China.
Here we have a fine example of how science and the humanities can mix better than their widespread (and, might I point out, false) dichotomy would have us believe. Too often we’re told to envision an irreconcilable divide between the supposedly purely quantitative work of science and the supposedly purely qualitative work of the humanities, which is a damn shame. There are a lot of ways in which these seemingly disparate fields can and do intersect, as demonstrated by the use of this archaeological dig’s chemical findings to extrapolate cultural history.
This reminds me of another, more recent article from Science Alert, actually. Just yesterday, the site also reported on a scientific study that points to a possible explanation for the Mongol Empire’s abandonment of its attempt to conquer Europe. Climate was likely to blame, claims the study, and the evidence was in the tree rings. As the article notes, the sparseness of primary Mongolian accounts had left many historians at a loss; the study answers that problem by digging up another kind of record. Like the speculation spun from the Chinese brewery discovery, this study serves as a good illustration of the effectiveness of applying the tools and methods of science and the humanities to questions that lie beyond their many sub-fields’ usual purview.
In less “serious” news, our second beer tale for today comes courtesy of Kotaku and Overwatch fever. TIL that one of the game’s main sound effects was essentially generated by opening a beer.
“Another extremely challenging sound is the ‘hit-pip.’ When you hit someone, you need to know you made contact. The sound needs to cut through the mix but not feel like it comes from any hero. It went through tons of iteration. Finally, one night I thought, ‘It should be satisfying to hit an enemy.’ Just think about what’s satisfying: beer. So I literally opened a beer bottle. Pssht. The sound is reversed and tweaked a little, but that sound is our hit-pip.”
The excerpt above, culled from the Overwatch Virtual Sourcebook, gives us a nice peek into the sound design process, especially the kind of thinking that guides the choices that have to be made in that field. Take these lines in particular: “The sound needs to cut through the mix but not feel like it comes from any hero. … ‘It should be satisfying to hit an enemy.’” Sound is a practical element in Overwatch, as in any game, and sound design supervisor Paul Lackey tells us that each sound is crafted to conform to certain specifications and perform certain functions. In this case, the practical requirement is to alert players to a hit, and to do so effectively.
But take a look at that second line, that thought that led to the beer bottle sound: It should be satisfying to hit an enemy. It still implies a function for the sound to perform, but now that function moves beyond the strictly practical (i.e., the alert) into the realm of the emotional. Sure, Overwatch might not exactly fall under the same category as “prestige/legacy games” like Mass Effect and Uncharted, but it’s still shaped by the recent gaming landscape that (quite like TV, at least to my barely-a-gamer eyes) envisions games not just as entertainment but as an immersive, if not meaningful, experience.
Games these days want us to be invested — more so, I think, than ever before.
And hence, of course, Overwatch using the satisfying pssht of a fresh beer.
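For the curious, the “reversed and tweaked a little” trick Lackey describes can be sketched in a few lines of code. This is a purely illustrative toy, not Blizzard’s actual pipeline: the decaying noise burst stands in for the recorded bottle-opening sample, and the 5 ms fade-out is an invented “tweak” to keep the clipped end from clicking.

```python
import math
import random

SAMPLE_RATE = 44100  # samples per second
DURATION = 0.1       # 100 ms, roughly the length of a bottle "pssht"
n_samples = int(SAMPLE_RATE * DURATION)

# Stand-in for the recorded sample: white noise under an exponential
# decay envelope, i.e. a burst that starts loud and trails off.
random.seed(0)
sample = [random.gauss(0.0, 1.0) * math.exp(-60.0 * i / SAMPLE_RATE)
          for i in range(n_samples)]

# Reversing the sample turns that decay into a swell that ends in a
# sharp transient -- the part that "cuts through the mix."
reversed_sample = sample[::-1]

# "Tweaked a little": a short linear fade-out (about 5 ms) so the
# abrupt end of the reversed sample doesn't produce an audible click.
fade_len = int(0.005 * SAMPLE_RATE)
hit_pip = list(reversed_sample)
for k in range(fade_len):
    hit_pip[n_samples - fade_len + k] *= 1.0 - (k + 1) / fade_len
```

In a real workflow the sample would come from a recording and the tweaks would be done by ear in an audio editor, but the core move is exactly this simple: flip the buffer, smooth the edges.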