As AI continues to reshape our world, it's creating new challenges for lawyers. Below, we've broken down some of the key issues landing on the desks of Australian lawyers.
Intellectual Property and AI
This is a very hot topic: countless articles have been written about how copyright law applies to the inputs and outputs of AI. For the purposes of these musings, we'll highlight two of the most pertinent issues. The first is that for something to be protected by copyright, it must have been authored by a human. This raises obvious issues for the protection of content authored by AI. The second is that training an AI model usually requires feeding copyright-protected content into the model so that it can learn. This raises concerns of wide-scale copyright infringement, where creators often don't earn a cent from their works being used by global behemoths such as OpenAI.
Disclaimers and AI
With AI companies increasingly scraping online content for training data, businesses and creators are having to find creative ways to protect their original works. Recently, publishing giant Penguin Random House added language to its copyright notices explicitly prohibiting the use of its content for AI training.
While a disclaimer alone won't prevent web scraping, contract law may offer a solution. For instance, if users agree to terms stating that content cannot be used to train AI, an AI company that ignores those terms could be liable for breach of contract.
Privacy and AI
AI models are often trained on vast amounts of data – including information about you. In privacy circles, we call this “personal information” or “personal data”. These systems scrape and store personal information – including your location data, medical history or social media pictures – often without your knowledge or consent.
In Australia, the Office of the Australian Information Commissioner advises against inputting any personal information into publicly available AI models, as seemingly innocuous data can be pieced together to reveal comprehensive, invasive and sometimes blatantly false information about people. Take, for example, Microsoft’s AI tool, Copilot, which recently ‘hallucinated’ a false biography for a German journalist.
As AI technology outpaces existing legislation, companies covered by the Privacy Act 1988 (Cth) will need to carefully consider how to manage privacy risks.
Confidentiality and AI
There are two kinds of modern-day office workers – those who use AI at work, and those who lie about using AI at work. As AI systems are tasked with handling commercially sensitive information, maintaining confidentiality has become increasingly tricky.
Many workers don't realise that entering information into a chatbot or large language model is itself a disclosure. Companies need to take steps to ensure their employees understand not to enter confidential or commercially sensitive information into any AI tool. This is especially critical in industries like healthcare and finance, where mishandling confidential information can have serious repercussions.