While artificial intelligence (AI) can help boost productivity by automating some day-to-day tasks and processing data, advisors should verify its output and be mindful of client privacy risks, a law and technology expert told FP Canada’s annual online conference Thursday.

Generative AI is “supercharging and augmenting” financial professionals’ capabilities, said Abdi Aidid, an assistant professor in the University of Toronto’s faculty of law, who researches and teaches privacy, law and technology, and civil adjudication. It can automate repetitive tasks so advisors can focus on solving their clients’ problems.

For example, AI can help advisors review a higher volume of accounts, interact with clients in online portals, perform preliminary analyses, create documents as part of due diligence requirements and provide clients with summaries and instant reports.

AI thrives in financial services since “there’s a ton of data” to work with, Aidid said.

However, AI is also prone to so-called hallucinations, and can expose advisors to privacy risks when it operates in an uncontrolled environment, he warned.

Though an AI-generated answer may sound compelling, it may not be correct. Large language models like ChatGPT aren’t great research tools, Aidid said.

Advisors can use domain-specific tools to instruct an AI system to draw its answers only from a limited pool of trusted information, such as reference books, to prevent incorrect or misleading results. But every factual claim made by AI should still be double-checked by a human, Aidid said.
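
For illustration only, the sketch below shows one way that kind of constraint might look: a prompt built from a whitelisted pool of reference passages, plus a crude helper that flags claims whose key terms never appear in those passages so an advisor knows to double-check. The passage contents, function names and keyword check are invented for this example and do not describe any specific vendor’s tool or Aidid’s own setup.

```python
# Hypothetical, vendor-neutral sketch of restricting an AI assistant to a
# whitelisted pool of trusted reference passages, with a rough screen that
# flags answers needing human review.

# Placeholder passages; in practice these would be the firm's approved
# reference books, manuals or policy documents.
TRUSTED_SOURCES = {
    "Reference book A": "Placeholder text describing contribution rules.",
    "Compliance manual": "Placeholder text describing annual review requirements.",
}

def build_grounded_prompt(question: str) -> str:
    """Assemble a prompt that tells the model to answer only from the
    supplied passages and to say it does not know otherwise."""
    context = "\n".join(f"[{name}] {text}" for name, text in TRUSTED_SOURCES.items())
    return (
        "Answer using ONLY the passages below. If the answer is not in them, "
        "reply 'I don't know.'\n\n"
        f"Passages:\n{context}\n\n"
        f"Question: {question}"
    )

def claim_needs_review(claim: str) -> bool:
    """Very rough screen: returns True when none of the claim's longer words
    appear in any trusted passage, signalling a human should double-check."""
    corpus = " ".join(TRUSTED_SOURCES.values()).lower()
    keywords = [w.strip(".,") for w in claim.lower().split() if len(w) > 4]
    return not any(w in corpus for w in keywords)

if __name__ == "__main__":
    print(build_grounded_prompt("What are the annual review requirements?"))
    print(claim_needs_review("Clients need an annual review."))   # False: supported by a passage
    print(claim_needs_review("The penalty rate is 12 percent."))  # True: flag for human review
```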

As well, when AI is used to summarize information, people should confirm the summary is complete. AI often pulls information from the beginning and end of a document or transcript as it looks for topic sentences. Advisors can mitigate this by feeding information into the AI piece by piece, Aidid said.
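
As a rough illustration of that piece-by-piece approach, the sketch below splits a long document into fixed-size chunks and summarizes each chunk separately before combining the results. The chunk size is an arbitrary choice for the example, and `summarize_with_model` is a stand-in for whatever firm-approved AI tool is actually in use.

```python
# Rough sketch of piece-by-piece summarization: split a long transcript into
# chunks, summarize each chunk on its own, then combine the partial summaries
# so material from the middle of the document is not skipped.

from typing import List

def split_into_chunks(text: str, max_words: int = 500) -> List[str]:
    """Break a document into chunks of at most `max_words` words."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def summarize_with_model(chunk: str) -> str:
    """Stand-in for a call to a firm-approved AI tool; here it just truncates
    the chunk so the sketch runs without any external service."""
    return chunk[:120].rstrip() + "..."

def summarize_piece_by_piece(transcript: str) -> str:
    """Summarize each chunk separately and combine the results.
    A human should still review the combined summary for completeness."""
    partial_summaries = [summarize_with_model(c) for c in split_into_chunks(transcript)]
    return "\n".join(f"- {s}" for s in partial_summaries)

if __name__ == "__main__":
    sample = "word " * 1200  # placeholder for a long meeting transcript
    print(summarize_piece_by_piece(sample))
```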

Don’t trust free browser-based AI tools to keep information confidential, he added. Instead, advisors can purchase enterprise plans with additional privacy protections.

The same privacy concerns existed with Outlook and Gmail when they were first released, Aidid said. “The challenge is that we don’t have the same level of latent trust [with] AI.”