
Who Owns Copyright on AI-Generated Content, and Who Is Liable When It Errs?


Mastering the Legal Maze of AI: A Boardroom Perspective on IP Ownership and Liability

As an Attorney, Intellectual Property (IP) Expert, Certified Board Director, Master Trainer in Neuro-Linguistic Programming (NLP), and holder of Executive Certifications from MIT and Harvard in Leadership, Digital Business, and Strategy and Innovation, I have witnessed firsthand how rapidly Artificial Intelligence (AI) is transforming industries. From streamlining customer experiences to automating creative outputs, AI has moved from experimental to essential. With these innovations, however, comes a rising tide of legal and ethical complexities, particularly in the areas of intellectual property rights and legal liability.

One critical question keeps recurring across boardrooms and legal consultations alike: Who owns the work created by AI, and who is liable when it errs?

These questions are not just theoretical. They strike at the heart of how businesses operate, how creators are protected, and how trust is maintained in the digital age. Allow me to take you on a journey through two real-world cases that illustrate these issues vividly, while also unpacking the legal and governance implications that every business leader, innovator, and advisor must now consider.

The Ownership Dilemma: Can AI Be an Author?

As AI systems grow more autonomous, writing articles, composing music, coding software, and designing artwork, questions about ownership of these outputs become pressing. Under traditional IP law, authorship and ownership are reserved for humans. AI, lacking consciousness, intent, and legal personhood, cannot hold copyrights or patents.

Let’s be clear: AI cannot create under the law, not in the way humans do. The U.S. Copyright Office has reinforced this by denying protection to works created by AI without significant human input. This leaves us with two main contenders for ownership:

  • The AI Developer: The party who programmed and trained the AI may claim rights over its outputs, especially if the model operates under a proprietary license.
  • The AI User: If a user provides meaningful direction or prompts that influence the AI’s output, they may argue ownership, though courts are still navigating what counts as “meaningful” in legal terms.

This tension between automation and authorship requires urgent clarity, especially for sectors like digital media, entertainment, design, and software development. Until legislation evolves, companies must define these ownership rights clearly in contracts, internal policies, and licensing terms.

The Monkey Selfie Precedent: A Lesson from Nature

Before AI, the legal world faced a curious case involving a different kind of non-human creator: a monkey.

In the now-famous Naruto v. Slater case, a macaque named Naruto took selfies using a photographer’s unattended camera. The photographer, David Slater, claimed copyright, but the court ruled that a non-human cannot hold authorship rights. However artistic the selfie, it lacked the crucial element: human creativity.

This precedent has been cited extensively in AI debates, reinforcing the idea that non-human agents, even highly advanced ones, cannot hold IP rights. The implication is profound: AI may produce the content, but a human must always own the rights, or no one can.

The Air Canada Chatbot Case: Liability in the Age of AI

Let’s pivot to the second pressing issue: legal liability.

In 2024, Air Canada’s AI-powered chatbot made headlines when a tribunal ordered the airline to honor a discount the bot had incorrectly offered a customer. The airline argued it wasn’t responsible; the bot made the mistake. But the tribunal saw it differently.

Here’s why this case matters:

  1. AI as an Agent: The chatbot was acting on behalf of the company. Its digital nature did not negate the fact that it served as an agent of Air Canada, binding the company to its representations.
  2. Unregulated Authority: The bot wasn’t properly constrained or supervised. It lacked controls to prevent unauthorized commitments, an oversight that exposed the company to legal risk.
  3. Reasonable Expectation: From a consumer protection standpoint, the customer had every reason to believe the chatbot’s offer was valid. That’s a key principle in contract law, and it applies whether the promise comes from a human or a machine.

Rulings like this signal that corporations cannot hide behind the “autonomy” of AI. When AI makes a mistake, liability flows back to the business, as it should, in the absence of legal personhood for AI.

Five Legal and Strategic Imperatives for Boards and Businesses

Whether you’re a board director, legal advisor, or tech entrepreneur, these takeaways must inform your approach to AI integration:

  1. Clarify Ownership from Day One: Define, document, and communicate who owns what when AI is involved, especially in creative or high-stakes environments.
  2. Acknowledge and Manage Liability: AI systems represent your brand, your services, and your commitments. Establish internal policies that govern AI authority and human oversight.
  3. Redesign Governance Models for AI: Traditional governance frameworks must evolve. Boards should receive AI risk training and ensure accountability mechanisms are in place across departments.
  4. Ensure Transparency in AI Interactions: Let customers know when they’re interacting with a machine. Full disclosure not only builds trust, but it can also help avoid lawsuits.
  5. Advocate for Regulatory Reform: Business leaders and legal professionals must actively participate in shaping future AI laws. Lobby for clarity, consistency, and fairness in how AI ownership and liability are defined.

The Path Forward: Leadership in the Age of Algorithms

AI is not just a technological shift; it’s a governance revolution. As a board director, I urge my fellow leaders to approach AI adoption with both innovation and intentionality. As a lawyer and IP expert, I emphasize the importance of contractual clarity, risk management, and legal foresight. And as an Emotional Intelligence (EI) and NLP Master Trainer and consultant, I encourage all of us to remain emotionally intelligent, flexible, and responsive in how we lead in this complex new landscape.

We must ask not only what AI can do for us, but also what frameworks we need to ensure it serves humanity, lawfully and ethically.

The cases of Air Canada’s chatbot and Naruto the monkey are more than legal oddities; they are beacons, pointing us toward a future where AI will test our laws, challenge our systems, and redefine accountability.

Let’s be ready, not just as technologists or businesspeople, but as responsible leaders of the digital age. Let’s build an AI-powered future that is not only intelligent, but also accountable, human-centered, and legally sound.

Feel free to connect with me or share your perspectives on how your organization is managing AI ownership and risk. The conversation is just beginning.