It is a truism that we are living in a “digital age”. It would be more accurate to say that we are living in an algorithmically curated age – that is, a time when many of our choices and preferences are shaped by machine-learning algorithms that nudge us in directions preferred by those who employ the programmers who write the code.
These algorithms are generally known as recommender engines. They track your digital trail and note what you are interested in – as revealed by what you have browsed or bought online. Amazon, for example, offers me daily recommendations for products that are “based on your browsing history.”
It also gives me a rundown of what people who bought the item I’m considering also bought. YouTube’s engine records what kinds of videos I’ve watched – and how much of each I’ve viewed before clicking away – and then displays an endlessly scrolling list of videos on the right-hand side of the screen that might interest me, based on what I’ve just seen. There were few, if any, of these engines in the early days of the internet, but they became increasingly common from 2001 onwards and are now almost ubiquitous. Their spread was driven by several factors.
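The “people who bought this item also bought” logic described above can be sketched as a toy item-to-item co-occurrence counter. This is only an illustration of the general idea – the purchase data here is invented, and Amazon’s actual system is proprietary and far more sophisticated:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical purchase histories: each list is one customer's basket.
baskets = [
    ["kettle", "teapot", "mugs"],
    ["kettle", "teapot"],
    ["kettle", "toaster"],
]

# Count how often each pair of items appears in the same basket.
co_counts = defaultdict(lambda: defaultdict(int))
for basket in baskets:
    for a, b in combinations(set(basket), 2):
        co_counts[a][b] += 1
        co_counts[b][a] += 1

def also_bought(item, top_n=2):
    """Items most often bought alongside `item`, most frequent first."""
    ranked = sorted(co_counts[item].items(), key=lambda kv: -kv[1])
    return [name for name, _ in ranked[:top_n]]

# "teapot" was bought with "kettle" twice, so it ranks first.
print(also_bought("kettle"))
```

Real recommenders use far richer signals – dwell time, ratings, collaborative filtering over millions of users – but the core move is the same: mine correlations in past behaviour to predict what you will want next.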
One was the need to help users cope with the flood of information that came with the web: recommendation engines would sift through the torrent and produce a custom distillation just for you. But the primary driving force was the business model we now call surveillance capitalism – logging our online activity in order to make ever more refined inferences about our desires and likely future needs, which could then be sold to advertisers eager to sell us things. In the early days of social media, every user’s news feed was a simple chronological list of what their friends had shared.
All this changed on Facebook in September 2011, when a machine-learning algorithm began to “curate” users’ news feeds. Mark Tonkelowitz, then an engineering manager at Facebook, described the curated news feed like this: “When you pick up a newspaper after not reading it for a week, the front page quickly clues you into the most interesting stories. In the past, News Feed hasn’t worked like that. Updates slide down in chronological order so it’s tough to zero in on what matters most. Now, News Feed will act more like your own personal newspaper. You won’t have to worry about missing important stuff. All your news will be in a single stream with the most interesting stories featured at the top.”
It turned out that some of those “interesting stories” were of great commercial value to Facebook, because they prompted users to engage with the content – and so the algorithm prioritized them. Since 2016, we have become increasingly aware of how this algorithmic curation can be used to sell us not just products and services but also ideas: misinformation, disinformation, conspiracy theories and hoaxes.
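The shift Tonkelowitz described – from a chronological stream to one ranked by predicted interest – can be illustrated with a toy scoring function. The posts, weights and scoring formula here are entirely invented for illustration; Facebook’s real ranking model is vastly more complex and not public:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    age_hours: float   # how long ago it was posted
    engagement: int    # likes + comments + shares so far

def chronological(feed):
    """The old news feed: newest posts first."""
    return sorted(feed, key=lambda p: p.age_hours)

def curated(feed):
    """The post-2011 style feed: high-engagement ('interesting')
    posts first, with only a mild penalty for being older."""
    return sorted(feed, key=lambda p: p.engagement - 0.5 * p.age_hours,
                  reverse=True)

feed = [
    Post("alice", age_hours=1, engagement=2),
    Post("bob",   age_hours=8, engagement=90),   # older, but viral
    Post("carol", age_hours=3, engagement=10),
]

print([p.author for p in chronological(feed)])  # newest first
print([p.author for p in curated(feed)])        # the viral post jumps to the top
```

The point of the sketch is the incentive it encodes: once the feed is sorted by an engagement signal rather than by time, whatever provokes the most reaction – accurate or not – rises to the top.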
For years I fondly imagined that this curation of ideas was confined to social media. But an article last year by Renée DiResta, a leading specialist in online disinformation, suggested that the trend goes well beyond Facebook et al. Running a quick keyword search for “vaccine” in the top-level books section of Amazon, she found “anti-vax literature prominently marked as ‘#1 Best Seller’ in categories ranging from Emergency Pediatrics to History of Medicine to Chemistry. The first pro-vaccine book appears 12th in the list. Bluntly named Vaccines Did Not Cause Rachel’s Autism, it’s the only pro-vaccine book on the first page of search results.”
Over in Amazon’s oncology category, DiResta found a book with a bestseller tag touting juice as an alternative to chemotherapy. Searching more generally for “cancer”, she noted that The Truth About Cancer, “a hodgepodge of claims about, among other things, government conspiracies,” had 1,684 ratings (96 per cent of them five-star) and enjoyed front-page placement. Last week, out of curiosity, I tried a search for “cancer cure” in the Amazon.co.uk books section. Of the first 11 titles that came up, only one appeared to be about conventional, science-based treatment. The rest were about herbs, oils and “natural cures they don’t want you to know about.”
This is not because Amazon has a grudge against modern medicine, but because there is something about unorthodox books in this field that its machine-learning algorithm picks up on – perhaps the reviews posted by evangelists for non-scientific approaches. (DiResta speculated that this might be the explanation; Amazon did not confirm it.) It is also possible that in highly contentious – and currently topical – areas such as vaccination, organized anti-vaxxer reviewing campaigns are effective at gaming the algorithms. And Amazon has been accused in the past of being “a giant purveyor of medical quackery.”