A lawyer shares insights and best practices for employers
Published on September 25, 2023
Note: This post was written by a human.
Writing anything about AI is sort of like building a plane while flying it. Except imagine that the plane suddenly morphs into something that doesn’t even exist yet.
That is, as soon as you put your virtual pen to paper, most of what you’re going to write is likely already out of date. As April Gougeon, an associate in the Privacy and Data Protection group at Toronto-based firm Fogler, Rubinoff LLP, says, “AI is chaotically evolutionary.”
But the good news for employers, she adds, is that unlike an editorial piece, an AI policy is supposed to be regularly updated to keep it current.
We asked April for some best practices for employers when it comes to drafting a corporate AI Acceptable Use policy and have summarized her tips and insights below—food for thought when developing your AI policy.
Note: This post is intended for general information purposes only; it should not be relied upon as legal advice or step-by-step guidance for creating your policy.
Every organization is different and will use a variety of AI-powered tools in a myriad of ways and to varying degrees—from not at all to all the time. And those tools will change over time, as new tools and capabilities come to market.
This means there’s no such thing as a one-size-fits-all AI Acceptable Use policy.
Whatever your organization looks like, though, the first step before writing your policy is to understand what AI is, what its risks are, and how you intend to mitigate those risks.
Tips from April:
“Deploying AI tools means assessing the tension between the risks and benefits of using AI,” says April. Once you’ve made that assessment, your policy should spell out which levels of risk are acceptable and which are not.
AI tools can be used in many ways by all teams across an organization. You should know how you intend to use AI in your workplace, why you’re using it, and what benefits you expect it to deliver.
Tip from April: Consider adding an appendix to your policy with detailed use cases outlining in plain language how each team in your organization can use AI-powered tools responsibly. Always avoid vague language because it leaves you open to regulatory risk.
Your AI policy is only one piece of the puzzle when it comes to your “culture of AI management,” as April puts it.
That is, in addition to outlining how AI can be used in your organization, and the risks and benefits of doing so, your policy could set out the governing principles for AI and who is responsible for putting those principles into practice at your company. For example, consider specifying which team members can answer questions about the policy or provide training on your company’s use of AI.
Tips from April:
Another best practice is to include built-in, regular checks (ideally, quarterly) to ensure your policy stays as current and effective as possible.
These checks run the gamut from staying current with relevant legislation to monitoring the terms of service of all tools you currently use to see if they’ve changed.
At the time of writing, there is no regulatory framework in Canada specific to AI, and no approach to ensure that AI systems address systemic risks such as bias or discrimination during their design and development. However, the Artificial Intelligence and Data Act (AIDA), which was tabled in June 2022 as part of Bill C-27, has gone through two readings so far.
If passed, AIDA would:
- require organizations responsible for “high-impact” AI systems to identify, assess and mitigate risks of harm and biased output;
- establish an AI and Data Commissioner to support the administration and enforcement of the Act; and
- create new criminal prohibitions on reckless or malicious uses of AI that cause serious harm.
While AIDA likely won't become law until at least 2025, in the interim, the government has issued a voluntary generative AI Code of Practice aimed at ensuring "developers, deployers, and operators of generative AI systems are able to avoid harmful impacts, build trust in their systems, and transition smoothly to compliance with Canada's forthcoming regulatory regime."
Tips from April:
Be prepared to explain to clients and customers how you use AI at your organization. Consider drafting a public-facing statement like ours, highlighting which tools you use, and how you use them in an ethical, responsible way.
“I think people would appreciate knowing that a company is using AI in a trustworthy and responsible manner,” says April. However, the key with a public statement is to keep it current, just like your policy, updating it whenever your organization adopts new AI tools or practices, or as new legislation comes into effect that applies to your industry.
✓ Do your research to understand what AI is and how it applies to your business.
✓ Create an AI taskforce or working group.
✓ Know your AI tools—which ones you can use, what they are and how you will use them.
✓ Develop an AI risk assessment for your organization.
✓ Develop a transparent, clear and concise policy, outlining the acceptable use of AI at your organization.
✓ Develop use cases for teams across your organization.
✓ Develop specific training that empowers people to use AI in the workplace.
✓ Monitor the tools and legislation and build in periodic reviews of your AI policy.
April Gougeon is an associate in the Privacy and Data Protection group at Toronto-based firm Fogler, Rubinoff LLP, where she focuses her practice on helping clients prepare and implement appropriate privacy policies and privacy management programs, conduct privacy audits, and respond to privacy complaints and investigations.
Prior to joining Foglers, April was a Senior Advisor and Senior Strategic Policy and Research Analyst with the Office of the Privacy Commissioner of Canada (OPC), where she advised on the Personal Information Protection and Electronic Documents Act (PIPEDA) and other international and domestic privacy-related policy and legislation.