Hard of Hearing
“I Can’t Hear You” by Anonymous.  Creative Commons License.

I haven’t written a lot on this blog for a while.  Partly that has been due to the way a lot of my posts seemed to get inexorably drawn into the black hole of the vitriolic shambles that passes for politics in the US these days (or, it has to be said, increasingly the world over, as the forces of populism flex their muscles).

The lack of posting has also been because a major focus of this blog in the past has been the implications of the design of our social media environments and with that in mind, I was worried that pretty much anything I wrote over the last two years would be a variant of a giant “I fucking told you so!”  Looking back over those posts, some of them from years prior to the election, I’m struck by the fact that pretty much everything I was worried about–the potential for privacy violations, people mistaking communication for community, arseholery for activism, the seemingly purpose-designed suitability of social media for stalking, harassment, doxing–all pretty much came to pass.  And while I received my fair share of ribbing for being a tech curmudgeon, those pieces now read as being, in effect, too timid, not remotely pessimistic enough to countenance a Cambridge Analytica, or Twitter playing whack-a-mole as it wiped out tens of millions (!) of fake accounts.

If I wrote about all that again I’d also have to face the sad fact that for some of my friends all this has made virtually no difference.  Despite even tech monopolies themselves admitting that maybe their products are not entirely healthy for us, too many people I know are so thoroughly invested in the myth of social media as a civic-minded community building enterprise that their denial is as armor-plated as that of any President Pennywise supporter.  On the rare occasions I log into Facebook anymore it is the same sad parade of people shopping their kids, tired memes from a couple of years back, the same old people posting every new rumor and outrage without any fact-checking.  And photos of food.  Always photos of food.

A more positive reason for the lack of contributions is that my gaming interests have shifted somewhat in the last few years, and there will, I hope, be more about that from this point on.

But the focus of this blog has always been artificial intelligence in all its various definitions.  Including art and artifice.  Including intelligence in the data-gathering sense.  Including, on not a few occasions, artificial people who think they are intelligent.  Lately I’ve been thinking a lot about why people seem to be so passionately in love with one form of AI in particular, a love made all the more extraordinary by the fact that it really doesn’t work all that well.  Or sometimes at all.

Use Your Words

When I’m alone, I talk to my car.  And the conversation often goes something like this.

CAR: Please say a command.

ME: Play playlist Moody Bastard Music

CAR: I’m sorry, I didn’t hear you say anything.  Please say a command.

ME: Play playlist Moody Bastard Music

CAR: I’m sorry, I didn’t hear you say anything.  Please say a command.

ME: I SAID PLAY THE FUCKING MOODY BASTARD PLAYLIST YOU STUPID FUCKING BITCH!!!!!

There are three important points to be made about this exchange.  First, yes, I do have a playlist with that title.  Second, while I will stipulate that our vehicle has pretty much the first generation of voice-activated communication tech, it has received a software upgrade since then.  However, even newer-generation systems that I’ve experienced in other people’s vehicles don’t seem to work much more reliably.  Third, what is perhaps most noteworthy here is the entirely unreasonable and disproportionate degree of fury directed at a hapless non-sentient system (especially when you consider how many actual sentient beings are so much more deserving of righteous indignation).

Perhaps I just have anger management issues.  But in this respect I’m hardly alone.  This kind of irrational explosion of anger is actually pretty common in people’s interactions with their car systems, and even more so with their “smart” phones and “smart” speaker systems.  Many people will also be more than familiar with this explosive anger directed at their own computers; while few of us now have to regularly suffer the infamous BSOD crashes as we did back in the day (uphill both ways), improvements in computer technology keep generating new ways to fail, and those failures often reduce us to speechless (or, more usually, volubly profane) anger in response.

The question that has been niggling at me is this: why the anger?  Why does the failure of a dumb object to do our bidding unleash such an emotional outburst?

Bleeding Out

There’s a fundamental truth about the digital assistant revolution that is hidden in plain sight.  Anyone who has a device featuring one of the four major DA packages–Alexa, Siri, Cortana, or Google Assistant–to say nothing of the second-tier offerings, knows this truth.

These devices don’t work very well.

Anyone who uses a device with a digital assistant can only shake their head in rueful recognition at parodies like the following:

[Embedded video: digital assistant parody]

Moreover, let’s have a show of hands–be honest now–for the number of times you’ve been part of a group trying to find the answer to a question, and after one person has confidently waved their $1000 Apple WonderPhone and tried asking Siri for help half a dozen times, someone else finds the answer in ten seconds by typing in the request the good old-fashioned way?  (Thumbs.  They are just so 2010, man.)

The sheer number of ways digital assistants can screw up is astonishing.  Sometimes the results can be hilarious.  I have a friend whose car system reads out texts and then will send texts based on voice input.  The results often make your standard auto-correct fails look bush league.  So often does the result appear to be based on translation from another language that the system–we’ve taken to calling her Fembot–seems to be offering her own commentary.  Her attempts to parse incoming texts are often equally hilarious (my favorite: an angry “Grrrrr!” in a text is translated into a very sexy growl).

These systems didn’t work that well when they were first marketed.  But people bought them anyway.  Why?  We are well past the “early adopters” phase of this technology, where the select few, chasing the cool factor, are willing to put up with half-baked ideas and half-arsed execution.  And these systems still don’t work all that well.  More people than ever are buying them.  Why?  In fact, these devices have a set of very specific problems that makes them only marginally useful for precisely those people who make up an increasing percentage of the population in the US (not to mention around the world): people who speak English with an accent.  This was already a well-known problem with digital assistants when a 2017 article in Wired announced that “Voice is the Next Big Platform, Unless You Have an Accent.”  The following year the Washington Post highlighted this problem, noting that it wasn’t simply an inconvenience, but was contributing to a technology gap between various social groups.  And yet people keep buying them.  Why?

Now some of this is undoubtedly due to a few people wanting to feel like they are on the cutting edge of it all.  A lovely phrase that, one that only the naive will interpret to mean that they are doing the cutting.  In fact, the edge of innovation usually cuts both ways for a good long while.  In this particular instance we are hemorrhaging steadily but don’t seem to mind.  Perhaps this is also due to decades of brainwashing by sci-fi movies and TV shows where voice-controlled everything is shown to be the norm.

Some of this willingness to put up with a product that is basically still in beta is because of the almost universal belief in one of the core myths of innovation in general and infotech-based innovation in particular: it will get better.  Therefore you will find no shortage of people admitting that yes, DA tech doesn’t work all that well at the moment, but just you wait!  The more people who get on board, the more the people who train the AI will have to work with, and the better it will get!  You’ll see!  I’ll have more to say about the provision of training materials in a second, but the people who are crafting the DA tech already have access to vast voice libraries and multiple training algorithms, both stand-alone and crowd-sourced.  This cluster of issues associated with reliable speech recognition (especially in a household context where systems need to be accessed by more than one person) is also very, very difficult to solve.

It is too much to hope that designers will exercise social responsibility when crafting products (even though there are more than a few designers and ethicists who are arguing that designers need to do just that).  The current ethos (or rather lack of one) in the info-tech sector, as everywhere else, is: if it can be done, it should be done.  From the point of view of professional practice there is no reason to expect that anyone designing a DA would have felt the need to hold back a half-baked product from the market.  In a trend that I would argue got its start in the world of digital gaming, an entrenched belief in the interconnected nature of everything and the inherent drive toward awesomeness means that the tech sector is one of the few areas (weather forecasting and political punditry being two other notable ones) where you can consistently produce a crap product and no one holds you responsible.  They just hope/believe that it will be patched and upgraded to the state of awesome they devoutly believe it will attain.

But again, why?  Why are so many people apparently so in love with a technology that doesn’t work that well and isn’t getting demonstrably better?  Why exactly are they so deeply invested in voice control?  And why, then, do they react so badly when a thing that obviously doesn’t work well in fact doesn’t work well?

Everyone wants to live in Downton Abbey

2016 should have disabused most US citizens of the idea that their nation is characterized by an inherent democratic yearning.  This shouldn’t have been any surprise.  Events across the world are indicating that many people are profoundly sick of the extraordinary burden of having to think for themselves and are yearning for a stern Daddy figure to tell them what to do.  One only has to look at the way so many Americans go absolutely bloody ga ga over the Queen, British Royal Weddings, Royal Babies, distant heirs to the Royal throne, etc. to realize that there is a monarchist lurking just beneath the surface of many democratic citizens.

The 2016 election should have taught us something else, however, a fact that is routinely obscured by inept media reporting and cheap punditry that talks about an alienated electorate, or the struggles of rural areas, or the abandonment of the US working class by both parties.  All of those things are true, but they miss the real reason for the appeal of Pennywise.  People like him because they want to be like him.  It is as simple as that.  This is why constantly pointing out that our President is a wealthy, selfish, entitled arsehole has no effect on his supporters.  They like that about him.  They themselves want to be wealthy, selfish, entitled arseholes.

Pennywise won big, however, not because he addressed a minority desire.  Rather, he tapped into an aristocratic yearning that seems to be a core American value.  Some people have been surprised at how fragile democracy in the US seems now, and that this weakness seems to have appeared so suddenly.  But the fact that so many US citizens of all social classes appear to harbor aristocratic yearnings has been telegraphed quite clearly.  Consider the love affair with VOUSs (Vehicles of Unusual Size) and McMansions: even if it is cheaply built, even if you can’t afford it, even if it requires you to live 30 miles away from where you work, there is literally nothing that people won’t do to have a two-storey-high entrance-way and a bathroom per person.  According to the US Census Bureau, between 1973 and 2016 the square footage of the average US home increased over 60 per cent (by 1000 square feet), while the average family size plunged, meaning that the space per person effectively increased.

And once you have the trophy spouse, and the trophy house, and the trophy car, what is missing to complete the aristocratic fantasy?

Servants.

This is why people are prepared to put up with a technology that is in such a BS state as that of digital assistants.  Even if you don’t have the shoddily built McMansion, you can still live out your own little Downton Abbey fantasy (or Upstairs Downstairs for the oldies among us) with an all-purpose servant to do your bidding.  This makes the fact that all these digital assistants are “naturally” female even more disturbing.  I’m hardly the first to point this out, but the fact that in a supposedly “woke” age of #MeToo and Powerful Political Women we are all of us in our own homes happily embracing a culture of efficient, subservient women is something that even my most liberal friends seem content to overlook.

Why?

Because, Servants!

You can’t really be an aristocrat unless you have lackeys to do your bidding.  And look at all that these digital lackeys can do!  Order us stuff, organize our schedules, monitor and adjust our ambient environment for light and temperature, answer our most trivial inquiries, summon Royal Entertainers to present themselves before us.  At least, they would do all of this if they didn’t misunderstand us half the time.

This, then, also explains our fury when our servants screw up.  Because these are entities over which, like real servants, we are supposed to have control.  We own them: life and limb (or tantalum and tungsten).  We paid (and in many cases continue to pay) them to do our bidding.  They should do what they are told to do, when they are told to do it.  On those rare occasions when people do acknowledge reality and exchange rueful stories with one another about the way their DAs have screwed up, it is hard to escape the impression that you are listening to people from a former era complaining about the “help” and how hard it is to hire “good people.”

But the lesson that we should have learned from Upstairs Downstairs and Downton Abbey is that servants have minds of their own.  And they have all sorts of ways of making life miserable for unreasonable Lords and Ladies.

“Anyone with one of these devices knows they go rogue.”

I had written most of this blog post when Geoffrey Fowler, technology critic for the Washington Post, published a piece on the quantity of data that DAs are collecting on us.  Because in addition to the blatant sexism of these devices, the other thing being ignored by people rushing to embrace their new virtual servants is the amount of our private lives that these devices are recording.  If we were honest about the fact that what really excites us about these DAs is the prospect of having virtual servants to boss about, we would probably be a little more cognizant of this.  Human servants are always in a position to oversee and overhear.

Fowler listened to four years of his Alexa audio archive and while he not unexpectedly found a lot of random trivia, he also found numerous instances where Alexa had triggered without the “wake” word and a few sensitive conversations that had been recorded.

There’s no reason for companies to be collecting this stuff.  The rationale that all the makers of DAs use is, as I noted above, that all this material is being used to “improve the AI.”  That is crap.  They already have a considerable quantity of voice data that they can use.  This information is being collected–this is why Big Tech is collecting most of the data on us–simply because companies can.  It is technically feasible and there is minimal regulation or legislation to control how they collect it, how they store it, and what they do with it.  Most companies don’t in fact have an actual use for the data that they collect, as Fowler notes.  They are collecting it solely on the basis that it might become usable one day.  It is a giant fishing expedition.  Or, since that sounds so quaint, it is fishing as practiced by a massive fleet of industrial ships using drift nets.  This, as Fowler notes, in the words of an Illinois assemblyman trying to introduce legislation to roll back this massive data grab, is the age of “Surveillance Capitalism.”

Of course, the makers of these DAs are taking a page out of the Facebook playbook and claiming to provide tools that “give users control” over their data: tools which are in fact so arduous, time-consuming, and opaque to use that no sane person would devote a decent chunk of their life to using them.

The bottom line is that they shouldn’t even be necessary.  It is probably too much to hope that Americans in particular will stop buying these evil little devices because, you know, Aristocrats need their servants.  But not collecting any data at all on users should be the tech default.  If users so desire, they should be given the option to opt in to whatever nefarious data-mining scheme the company has in mind.

Now you may think all this stuff about servants is far-fetched.  But, as I say, I had written most of this post when I came to the final paragraph of Fowler’s piece:

We want to benefit from AI that can set a timer or save energy when we don’t need the lights on. But that doesn’t mean we’re also opening our homes to tech companies as a lucrative source of data to train their algorithms, mine our lives and maybe lose in the next big breach. This data should belong to us.

What we lack is a way to understand the transformation that data and AI are bringing to our homes.

Think of “Downton Abbey”: In those days, rich families could have human helpers who were using their intelligence to observe and learn their habits, and make their lives easier. Breakfast was always served exactly at the specified time. But the residents knew to be careful about what they let the staff see and hear.

Fast-forward to today. We haven’t come to terms that we’re filling our homes with even nosier digital helpers.