
Our information ecosystem is in trouble. Here’s how we can fix it. Part III
By Tessa Sproule
THE WAY WE TALK ABOUT artificial intelligence is full of hype. We’re really only at the beginning of the beginning of AI and its intersection with humanity. It is doing some incredible things right now. It will do absolutely remarkable things in the future, I’m sure. But right now it’s about as smart as an earthworm. (That’s a favourite analogy of AI researcher Janelle Shane, author of the hilarious and telling book, “You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It’s Making the World a Weirder Place”, which documents how AI can do some ridiculously awful things, while showing us who we are in the process.)
In my couple of decades as a journalist, and as a human who grew up as the computer went from a “business machine” (like the first one I used in my grade 6 ‘computer lab’) to a personal butler in your pocket, companion and master, all in less than three decades, I’ve learned that technology really only moves as fast as we do. But sometimes it feels like we’re not moving in the same direction. We all need to pay attention and notice when our paths diverge, especially if you’re already using AI in your newsroom or as a tool for content recommendation.
AI doesn’t “know” what you’re talking about
It’s difficult, but not impossible, to come up with signals about complex and evolving news stories. But we need humans, journalists, to teach the machines when it comes to information content.
This has become the core issue of our lives today, because the unexpected is our new normal. The world has gone from being complicated to being complex. There are patterns (which machines can spot and identify), but they don’t repeat themselves with regularity (confounding those same machines).
Much of our world defies forecasting now. Maybe Iran will retaliate against the USA, but we don’t know when, or whether it will be physical, cyber or something else. Climate change is real, but we can’t predict how Australia’s bush-fire crisis will unfold, or what the impact will be when climate migrants begin to move in significant numbers. Brexit may finally happen. Or not. And we’re all at the mercy of one guy’s Twitter quips from the White House (remember when “microblogging” sounded cute?).
Uncertainty rules the day. The “news”, what is happening and what might happen next, defies forecasting. Efficiency doesn’t help us here; it actively undermines and erodes our capacity to adapt and respond.
“We do the work that AI needs most now.”
I have had this feeling before. Maybe you have too. To me it feels exactly like the days in and around 9/11, and I feel a warning coming on: when we abdicate responsibility for understanding the complex issues of our day to technology, it makes mistakes. We make mistakes. Like Google’s own algorithm that began recommending stories about 9/11 to citizens watching footage of Paris’ Notre Dame cathedral ablaze on April 15, 2019.
What irony — watching Google’s own tagging training set provide the basis for inaccurate and outright stupid recommendations on Google’s own video platform, at a time when people need access to reliable, factual information.
But hey — It’s going to be okay. That’s why we’re in this business. We journalists roll with uncertainty. We literally work to find the facts amid ambiguous noise to help our fellow citizens understand what’s going on today. That’s our job. We do the work that AI needs most now. Structured, reliable, dependable data that a machine can learn from: it is the foundation of AI.
And the great news is that when it comes to “ambiguous” information content, that structured data belongs to all of us.
We just have to keep it that way.
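What does that structured, human-made data look like in practice? Here is a minimal sketch, with invented field names and values, of the kind of annotation a journalist might attach to a news video; records like this, labelled by humans, are what a machine can actually learn from.

```python
# A minimal sketch of human-made structured data: a journalist annotates a news
# video, and those labels become training examples for a machine. The schema and
# values here are invented for illustration, not any platform's actual format.
annotated_clip = {
    "video_id": "notre-dame-2019-04-15",
    "headline": "Fire engulfs Notre-Dame cathedral in Paris",
    "event_date": "2019-04-15",
    "topics": ["fire", "Paris", "Notre-Dame", "heritage"],
    "verified_by": "newsroom editor",
    "misleading": False,
}

# Many such records, annotated by humans, form the training set; without them the
# machine has no notion of "reliable" or "related" at all.
training_set = [annotated_clip]
```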
A call for collaboration between AI and journalism
Our world is awesome, chaotic and confusing. The complexity of human life is more than the 1s and 0s a machine can understand. When it works well, AI can very quickly perform repetitive, narrow and well-defined tasks.
When you’re working with AI, it’s not like working with another human, it’s more like working with some weird force of nature. It is really easy to give AI the wrong problem to solve. We as humans aren’t always great at defining a narrow problem, because our brains are wildly complex. Our brains do a lot of really broad, advanced problem solving without us even noticing.
“Is playing a game of chess more complex than doing the laundry?”
Chess is a complicated game. But it’s also based on rules, logic and probability. Machine learning can handle that, and while it surprised many when a supercomputer named Deep Blue beat world chess champion Garry Kasparov in 1997, it makes perfect sense that a machine could learn from our moves and mistakes and ultimately beat the pants off us in an even more complex game, Go, in 2016.
Is playing a game of chess more complex than doing the laundry? You might say yes, but let’s dive in for a moment. What about the different fabrics? Can they all be washed the same way? Sure, you might be super high-tech with your smart-labelled clothes, but what about the items that aren’t? What about the colours? Your kid’s tie-dyed shirt from camp, can that go in? Where did that other blue sock go?
What we might consider the simple chore of doing laundry is actually a much more complicated task than it appears at first glance. (Incidentally, I would be remiss if I didn’t take a moment to flag that there are some problems tech just doesn’t need to solve for us. We need to get better at deciding when that is.)
This is why it’s so hard to design a problem that AI can understand and make dependable predictions and recommendations on. This problem gets infinitely more complicated when we’re dealing with video.
The AI used to recommend video content on YouTube, and now used by some media publishers for their own information video, is optimized in favour of clicks and views. Popularity signals are the main drivers of recommendations, because more clicks and views mean more exposure to advertisements, the revenue source of most content publishers and of Big Tech.
But here’s something we know about humans: content that is sensational, content that makes us angry, really fires us up. We click, we comment, we share; we give that content a lot of our attention. Our engagement behaviour around that content, in turn, provides signals to the machines recommending it, amplifying its spread. This is why, within a few clicks, you’ll likely be recommended misinformation, conspiracy theories, and worse. The AI itself doesn’t have a concept of what this content is, or what the consequences of recommending it might be. It’s just recommending what we’ve told it to.
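To make that bias concrete, here is a minimal illustrative sketch, not any platform’s actual code, of an engagement-driven ranker. The signals, weights and example videos are all invented; the point is simply that the score counts clicks, shares and watch time and never looks at whether the content is accurate.

```python
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    clicks: int          # engagement signals the platform can measure
    shares: int
    watch_minutes: float
    accurate: bool       # something this ranker never looks at

def engagement_score(v: Video) -> float:
    # Popularity is the only input: more clicks, shares and watch time mean a higher rank.
    return 1.0 * v.clicks + 3.0 * v.shares + 0.5 * v.watch_minutes

videos = [
    Video("Calm, factual explainer", clicks=1_200, shares=40, watch_minutes=900, accurate=True),
    Video("Outrage-bait conspiracy clip", clicks=9_500, shares=800, watch_minutes=4_000, accurate=False),
]

# The conspiracy clip outranks the explainer, because nothing in the score
# knows or cares what the content actually is.
for v in sorted(videos, key=engagement_score, reverse=True):
    print(f"{engagement_score(v):>10.1f}  {v.title}")
```

Swap in whatever weights you like; as long as popularity is the only signal, the content that harvests the most engagement rises to the top.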
(This is part three of four. Click here for part one and here for part two. Next time we’ll explore a narrow opportunity we have at this very moment to bolster the world’s information ecosystem, putting us, curious, thoughtful human thinkers at the centre again.)
Tessa Sproule is the co-founder and co-CEO of Vubble, a media technology company based in Toronto and Waterloo, Canada. Vubble helps media and educational groups (like CTV News, Channel 4 News, Let’s Talk Science) by cloud-annotating news video, building tools for digital distribution and generating deeply personalized recommendations via Vubble’s machine-learning platform.