In a recent report, the Canadian Security Intelligence Service (CSIS), Canada’s primary national intelligence agency, sounded the alarm on the escalating threat posed by disinformation campaigns that use artificial intelligence (AI) deepfakes. The agency identified the combination of increasingly realistic deepfakes and the difficulty of detecting them as a potential menace to Canadians.
The CSIS report highlighted instances where deepfakes were employed to harm individuals, emphasizing the broader threat to democracy as certain actors leverage uncertainty or disseminate ‘facts’ based on synthetic or falsified information. The agency underscored the risk of governments being unable to definitively prove the authenticity of their official content, further exacerbating the challenges posed by advanced AI technologies.
Citing media coverage of deepfake scams that have targeted crypto investors with fabricated videos of Elon Musk, CSIS noted that bad actors have used increasingly sophisticated techniques since 2022, deceiving unsuspecting investors into parting with their funds on the basis of misleading information.
Privacy violations, social manipulation, and bias were identified as additional concerns associated with AI, prompting the CSIS to advocate for the evolution of governmental policies, directives, and initiatives to align with the increasing realism of deepfakes and synthetic media. The agency cautioned that if governments fail to address AI issues promptly and independently, their interventions may become obsolete in the face of rapidly advancing technology.
CSIS recommended collaborative efforts among partner governments, allies, and industry experts to address the challenge of distributing legitimate information globally. Canada’s commitment to addressing AI concerns on an international scale was underscored by its participation in the Group of Seven (G7) agreement on an AI code of conduct for developers on October 30. The code, comprising 11 points, aims to promote safe, secure, and trustworthy AI worldwide while acknowledging and mitigating the associated risks.