Cable / Telecom News

Canada got a good deal with Google, and now has to deal with AI, say panelists at Prime Time


By Christopher Guly

OTTAWA — The federal government might have scored a win by getting agreement from the world’s most popular search engine to compensate news publishers and broadcasters for posting their online content, but it now has to address the impact of artificial intelligence on what Canadians read, see and hear. That was the message from panelists at the Canadian Media Producers Association’s (CMPA’s) annual Prime Time conference in Ottawa on Feb. 1, who discussed the balancing act of safeguarding national interests while promoting growth amid the challenges of digital and AI technology.

As Meta blocked Canadian news from Canadian users of its Facebook platform as pushback against the federal Online News Act, or Bill C-18, Google was “a good partner” and “thoughtful” in its approach when it negotiated an agreement with the federal government to provide $100 million annually to Canadian news organizations, said Paul Deegan, president and CEO of News Media Canada, a national public-policy advocacy association for the country’s print and digital media industry.

Under the Google cash deal, $63 million will go to news publishers, $30 million to broadcast news outlets, and the remaining $7 million to CBC/Radio-Canada.

The amounts represent about $20,000 per journalist, and the agreement has no end date, putting Canada in a “good spot” with Google, offered Deegan.

“The government did something brilliant. It negotiated most-favoured-nation status. If there is a better deal in another jurisdiction, the government can go back and renegotiate,” he said.

“By international standards, this is a very good deal,” said Deegan.

With C-18, the federal government “had to make sure that the news-media industry is protected” and “to ensure that we helped as much as possible to level the playing field,” said British Columbia Liberal Member of Parliament Taleeb Noormohamed, who serves as parliamentary secretary to Heritage Minister Pascale St-Onge.

A former technology executive who attended Princeton, Harvard and Oxford universities, Noormohamed said that he has “seen exactly what Big Tech could do, unchecked, and what can happen when small and medium-sized businesses are genuinely trying to do good things and are completely at the mercy of Google and Facebook, when it comes to advertising.”

“At the end of the day, the question some of these companies have to ask themselves is, ‘Is there value in what I am doing?’”

On AI, Canada is behind internationally, said Charles Morgan, national co-leader of the cyber/data group at the national law firm McCarthy Tétrault LLP.

The European Union is poised to introduce the world’s most comprehensive AI law after EU countries endorsed on Feb. 2 a political deal reached last December on a regulatory framework the European Commission proposed in April 2021.

Once the European Parliament votes on the Artificial Intelligence Act this spring, the law – which will require generative AI tools such as ChatGPT to disclose that content was AI-generated – could enter into force this summer.

Morgan said that, by contrast, the view in the United States is that AI is “too important economically” to involve legislators, and the country has left it to the courts to impose “regulation through litigation.”

The most significant case, in his opinion, is The New York Times’ lawsuit against OpenAI and Microsoft over their use of the Times’ copyrighted work, in which the newspaper of record claims that millions of its articles were used to train chatbots that now compete with it.

Canada, said Morgan, “sits in between” innovators and regulators.

In 2022, the federal government proposed the Artificial Intelligence and Data Act (AIDA), which it said “would ensure that AI systems deployed in Canada are safe and non-discriminatory, and would hold businesses accountable for how they develop and use these technologies.”

The following year, Innovation, Science and Industry Minister François-Philippe Champagne announced a Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems.

“AIDA is an EU AI Act light,” said Morgan.

He explained that Canada’s law “would require entities to indicate data sources with some form of watermarking technology.”

Morgan also said that the voluntary code “provides some guidance while the regulatory framework gets into place and gives some tea leaves as to what the government is thinking about what the regulatory framework is going to look like once it’s ultimately in place.”

But he noted the code has attracted relatively few signatories – about 20, including BlackBerry and IBM.

Noormohamed acknowledged “that there is skepticism around the voluntary code – but that’s the starting point – [and] there is frustration that government hasn’t done more to regulate.”

“But,” as he added, “what exactly are you trying to regulate?”

The balancing act, as Noormohamed outlined, is to not interfere with the creative process while stopping the spread of “false information at such pinpoint accuracy that it’s almost impossible to judge what is real and what is fake.”

“That is a massive, massive concern,” he said.