Here’s a piece I contributed to Dialogue magazine recently. Huge thanks to Kirsten Levermore for diligently making sense of my slightly dyslexic waffle and shaping it into what’s emerged.
When we woke up our computers we gave them superpowers. Now we have to decide how to use them, writes Pete Trainor
The world is different today than it was yesterday, and tomorrow it will be different again. We’ve unleashed a genie from a bottle that will change humanity in the most unpredictable of ways. Which is ironic, because the genie we released has a talent for making almost perfect predictions 99% of the time.
We have given machines the ability to see, to understand, and to interact with us in sophisticated and intricate ways. And this new intelligence is not going to stop or slow down, either. In fact, as the quantities of data we produce continue to grow exponentially, so will our computers’ ability to process and analyse – and learn from – that data. According to the most recent reports, the total amount of data produced around the world was 4.4 zettabytes in 2013 – set to rise enormously to 44 zettabytes by 2020. To put that in perspective, 44 zettabytes is equivalent to 44 trillion gigabytes (about 22 trillion tiny 2GB USB sticks). Across the world, businesses collect our data for marketing, purchases and trend analysis. Banks collect our spending and portfolio data. Governments gather data from census information, incident reports, CCTV, medical records and more.
With this expanding universe of data, the mind of the machine will only continue to evolve. There’s no escaping the nexus now.
We are far past the dawn of machine learning
Running alongside this new sea of information collection is a subset of Artificial Intelligence called ‘Machine Learning’, autonomously sifting through and, yes, learning from, all that data. Machine learning algorithms don’t have to be explicitly programmed for every task – they refine their own internal models as they process more data, improving all by themselves.
The philosophical and ethical implications are huge on so many levels.
On the surface, many people believe businesses are only just starting to harness this new technological superpower to optimise themselves. In reality, however, many of them have been using algorithms to make things more efficient since the late 1960s.
In 1967 the “nearest neighbour” algorithm was written, allowing computers to begin recognising very basic patterns. It was originally used to map routes for travelling salesmen, ensuring they visited every stop on a route while keeping the overall trip short. It soon spread to many other industries.
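To make the idea concrete, here is a minimal sketch of that nearest-neighbour route heuristic in Python – the stops and their coordinates are invented purely for illustration:

```python
import math

# Hypothetical stops on a salesman's route: name -> (x, y) coordinates.
STOPS = {
    "Depot": (0, 0),
    "A": (2, 3),
    "B": (5, 1),
    "C": (6, 4),
    "D": (1, 6),
}

def distance(p, q):
    """Straight-line distance between two points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def nearest_neighbour_route(start="Depot"):
    """Greedy heuristic: from each stop, travel to the closest unvisited stop."""
    unvisited = set(STOPS) - {start}
    route = [start]
    while unvisited:
        here = STOPS[route[-1]]
        nearest = min(unvisited, key=lambda stop: distance(here, STOPS[stop]))
        route.append(nearest)
        unvisited.remove(nearest)
    return route

if __name__ == "__main__":
    print(" -> ".join(nearest_neighbour_route()))
```

It visits every stop and usually produces a short – though not necessarily optimal – trip, which was exactly the kind of ‘good enough’ efficiency businesses latched onto.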
Then, in 1981, Gerald DeJong introduced the world to Explanation-Based Learning (EBL). With EBL, computers could now analyse a data set and derive a pattern from it all on their own, even discarding what they deemed ‘unimportant’ data.
Machines were able to make their own decisions. A truly astonishing breakthrough, and something we still take for granted in many services and systems, such as banking, to this day.
The next massive leap forward came a decade or so later, in the early 1990s, when Machine Learning shifted from a knowledge-driven approach to a data-driven one, giving computers the ability to analyse large amounts of data and draw their own conclusions – in other words, to learn – from the results.
The age of the everyday supercomputer had truly begun.
Mind-reading for the everyday supercomputer
The devil lies in the detail, and it’s always the devil we would rather avoid than converse with. There are things lurking inside the data we generate that many companies would rather not acknowledge – at least, not publicly. We are not kept in the dark because they are all malicious or evil corporations, but more often because of the huge ethical and legal concerns attached to the data and the processes that lie in the shadows.
Let’s say a social network you use every single day is sitting on top of a vast set of data generated by tens of millions of people just like you.
The whole system has been designed from the outset to get you hooked, extracting information such as your location, travel plans, likes and dislikes, and status updates (both passive and active). From there, the company can tease out the sentiment of your posts, your browsing behaviours, and many other fetishes, habits and quirks. Some of these companies also have permission (which you grant them in those lengthy terms and conditions) to scrape data from other, seemingly unrelated apps and services on your phone, too.
One of the social networks you use every day even has a patent to “discreetly take control of the camera on your phone or laptop to analyse your emotions while you browse”.
Using all this information, a company can build highly sophisticated, intricate models that explicitly predict your outcomes and reactions – including your emotional and even physical states.
Most of these models use your ‘actual’ data to predict, or extrapolate, the value of an unseen, not-yet-recorded data point – in short, they can predict whether you’re going to do something before you’ve even decided to do it.
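To make that less abstract, here is a deliberately tiny, hypothetical sketch of the basic pattern using scikit-learn – the feature names and data are invented, but the shape of the exercise is the same: fit a model on behaviour that has been recorded, then score a behaviour that hasn’t happened yet.

```python
# A toy illustration, not any company's real model: predict whether a user
# will click a piece of content from a handful of behavioural signals.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented training data: [minutes_on_app_today, posts_liked, hour_of_day]
X = np.array([
    [5,  1,  9],
    [45, 20, 23],
    [12, 3,  14],
    [60, 35, 22],
    [8,  0,  10],
    [50, 25, 1],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = clicked a similar item in the past

model = LogisticRegression().fit(X, y)

# Score a behaviour that hasn't happened yet: a late-night, heavy-usage session.
tonight = np.array([[55, 30, 23]])
print("Predicted probability of a click:", model.predict_proba(tonight)[0, 1])
```

Swap ‘click’ for mood, churn or vulnerability – any label you can attach to historical data – and the same extrapolation applies.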
The machines are, in effect, reading our minds using predictive and prescriptive analytics
A consequence of giving our data away without much thought or due diligence is that we have never really understood its value and power.
And, unfortunately for us, most of the companies ingesting our behavioural data only use their models to predict what advert might tempt us to click, or what wording for a headline might resonate because of some long forgotten and repressed memory.
All companies bear some responsibility to care for their users’ data, but do they really care for the ‘humans’ generating that data?
That’s the big question.
Usernames have faces, and those faces have journeys
We’ve spent an awfully long time mapping the user journey or plotting the customer journey when, in reality, every human is on a journey we know nothing about.
Yes, the technical, legal and social barriers are significant. But what about commercial data’s potential to improve people’s health and mental wellbeing?
It’s hit home for me even harder over the last few years, because I’ve lost close friends to suicide.
Suicide is the biggest killer of men under 45 in the UK, and one of the leading causes of death in the US.
It’s an epidemic.
Which is why I needed to do something.
“Don’t do things better, do better things” – Pete Trainor
Companies can keep using our data to pad out shiny adverts or they can use that same data and re-tune the algorithms and models to do more — to do better things.
The emerging discipline of computational psychiatry uses the same powerful data analysis, machine learning and artificial intelligence techniques as commercial entities – but instead of working out how best to keep you on a site or sell you a product, computational psychiatrists use data to explore the underlying factors behind the extreme and unusual conditions that make people vulnerable to self-harm and even suicide.
The SU project in action
The SU project: a not-for-profit chatbot that learned how to identify and support vulnerable individuals.
The SU project was a piece of artificial intelligence that attempted to detect when people were vulnerable and, in response, actively intervened with appropriate support messages. It worked like an instant messaging platform – SU even knew to talk with people at the times of day they were most at risk of feeling low.
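SU’s actual models aren’t spelled out here, so the sketch below is purely illustrative – the phrases, thresholds and wording are all invented – but it shows the general shape of the idea: the same signals used to decide which advert to show you could instead decide when to check in on you.

```python
# Illustrative only: a drastically simplified stand-in for the kind of logic
# a supportive chatbot might use. Phrases, thresholds and messages are invented.
from datetime import datetime

RISK_PHRASES = {"can't cope", "no point", "hate myself", "want it to end"}
LATE_NIGHT_HOURS = range(0, 5)  # hours assumed most associated with low mood

def risk_score(message: str, sent_at: datetime) -> float:
    """Crude score combining wording and time of day."""
    score = sum(1.0 for phrase in RISK_PHRASES if phrase in message.lower())
    if sent_at.hour in LATE_NIGHT_HOURS:
        score += 0.5
    return score

def respond(message: str, sent_at: datetime) -> str:
    """Reply with a supportive check-in if the message looks like a red flag."""
    if risk_score(message, sent_at) >= 1.0:
        return ("It sounds like things are really heavy right now. "
                "Do you want to talk about it? I'm here, and so are real people "
                "at the Samaritans on 116 123.")
    return "Thanks for checking in - how has your day been?"

if __name__ == "__main__":
    print(respond("I can't cope with any of this", datetime(2018, 3, 2, 2, 30)))
```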
We had no idea the data SU would end up learning from was the exact same data being mined by other tech companies we interact with every single day.
We didn’t invent anything ground-breaking at all, we just gave our algorithm a different purpose.
AI needs agency. And often, it’s not asked to do better things, just to do things better – quicker, cheaper, more efficient.
Companies, then, haven’t moved quite as far from 1967’s ‘nearest neighbour’ as we might like to believe.
The marketing problem
For many companies, the subject of suicide prevention is too contentious to provide a marketing benefit worth pursuing. They simply do not have the staff or psychotherapy expertise, internally, to handle this kind of content.
Where the boundaries blur and the water gets murky is that, to save a single life, you would likely have to monitor us all.
Surveillance.
The modern Panopticon.
The idea of me monitoring your data makes you feel uneasy because it feels like a violation of your privacy.
Advertising, however, not so much. We’re used to adverts. Being sold to has been normalised; being saved has not – which is a shame, when so many companies benefit from keeping their customers alive.
Saving people is the job of counsellors, not corporates – or something like that. It is unlikely that ‘data mining for good’ projects like SU would ever be granted universal dispensation, since the nature and boundaries of what is ‘good’ remain elusive and subjective.
But perhaps it is time for companies that already feed off our data to take up the baton? To practise a sense of enlightenment rather than entitlement?
If the 21st-century public’s willingness to give away access to their activities and privacy through unread T&Cs and cookies is so effective it can fuel multi-billion dollar empires, surely those companies should act on the opportunity to nurture us as well as sell to us? A technological quid pro quo?
Is it possible? Yes. Would it be acceptable? Less clear.
– Pete Trainor is a co-founder of US. A best-selling author with a background in design and computers, his mission is not just to do things better, but to do better things.