The Legal Implications of Artificial Intelligence: Data Privacy, Intellectual Property and Contracting Considerations
Imagine this: You’ve created artificial intelligence (AI) software to revolutionize your clients’ operations. With AI seamlessly integrated into their workflows, productivity soars, and success seems inevitable.
But a legal minefield might be lurking beneath the surface.
In the latest episode of the Founder Shares podcast, my law partner, Lucas Beal, and I dive into the potential legal implications of AI.
“AI can be an amazing tool, and it can create an amazing product that could be a commercial success,” Lucas said. “But we’ve just got to get those foundations before we can build upon it.”
There are two broad types of AI: traditional and generative. Traditional AI, which has been around for a long time, involves computers performing specific tasks based on predefined rules and algorithms, often relying on labeled data to make decisions and predictions. Think of an old chess computer or a calculator.
In contrast, generative AI learns patterns from data and creates wholly new outputs, such as text, images, or even software code, in response to prompts, mirroring or simulating human-like creativity. Think ChatGPT.
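For the technically curious, here is a minimal sketch of that distinction in Python. It is purely illustrative: the loan-screening rule and the tiny word-prediction model are hypothetical stand-ins, not anyone's actual product. But it shows the difference between behavior that is written out in advance and behavior that is learned from example data:

```python
import random

# Traditional ("rules-based") AI: the behavior is fully spelled out in advance.
# Hypothetical example: a simple loan pre-screening rule.
def rule_based_decision(credit_score: int, income: int) -> str:
    if credit_score >= 700 and income >= 50_000:
        return "approve"
    return "refer to underwriter"

# Generative AI (toy version): learn patterns from example data, then produce
# new output that never appeared verbatim in the training set. A tiny Markov
# chain over words stands in here for a large language model.
def train(corpus: str) -> dict:
    words = corpus.split()
    model: dict = {}
    for current, nxt in zip(words, words[1:]):
        model.setdefault(current, []).append(nxt)  # record which words follow which
    return model

def generate(model: dict, start: str, length: int = 8) -> str:
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))  # pick a learned continuation
    return " ".join(out)

if __name__ == "__main__":
    print(rule_based_decision(720, 60_000))  # same inputs, same answer, every time
    model = train("the model learns the patterns and the model writes new text")
    print(generate(model, "the"))            # novel, pattern-driven output
```

The rule-based function gives the same answer every time it sees the same inputs; the generative toy can string together word sequences its training text never contained, which is exactly why questions about where the training data came from matter so much.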
But legally, under the EU’s recently adopted AI Act, the definition of AI is a bit broader.
“The EU Act defines AI as a machine-based system that is designed to operate with varying levels of autonomy,” Lucas said. “And that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments.”
The National Institute of Standards and Technology (NIST), on the other hand, has proposed understanding AI as “an interdisciplinary field, usually regarded as a branch of computer science, dealing with models and systems for the performance of functions generally associated with human intelligence, such as reasoning and learning.”
The bottom line is that AI aims to get machines to mimic human intelligence, and how a system produces its output can carry different legal implications.
And doing so requires a lot of data.
“The output that the AI generates, is that considered personal data?” Lucas said. “And I think the general feeling in the data privacy world now is that that does not equal personal data. I think that’s still a question to be determined in the future.”
It can look a lot like personal data, though, can’t it? But because it isn’t attached to an actual human being, there’s no identifiable individual behind it.
But when someone is training that AI to create that output, they probably are using personal data. That is one of the major concerns: Where are these training data sets coming from? Does the company have the appropriate rights to use them? And if someone exercises their individual rights under one of our privacy statutes, can the company actually fulfill those requests from users?
To find out the answers to these questions and so much more, tune in to the latest episode of Founder Shares, available wherever you get your podcasts.
This blog content should not be construed as legal advice.