Audrey Miller
August 24, 2023

Science Fiction to Navigate our AI Future

Five sci-fi hypotheticals to engage with the ethical and legal questions that arise from this next phase of AI development.

When you imagine an AI-driven future, what do you see? A Jetsons-style utopian dreamscape where machines take on the boring work and human potential explodes? Or a Black Mirror-style nightmare, marred by mass unemployment and virtual escapism?

From the Jetsons’ 3D-printed food and flying cars to Black Mirror’s San Junipero, where we can live on forever digitally, science fiction has long served as a playground for the sociomoral debate over technological advancement and its place in society.

By playing out these futuristic scenarios, we gain a deeper understanding of their potential consequences, allowing us to approach innovation with more awareness and heightened responsibility. This matters more than ever, both individually and collectively, because advancements in AI have narrowed the gap between science fiction and nonfiction far faster than laws and regulation can keep up.

So, in the spirit of the late Cormac McCarthy, the “great pessimist of American literature,” we’ve constructed five sci-fi hypotheticals to engage with the ethical and legal questions of liability and responsibility that arise from this next phase of AI development and combinatorial innovation.

Intellectual Property & Copyright

Story: John asks ChatGPT the following: “Write me a children’s novel about wizards who go to a special school where they have to fight off evil magical creatures to save the wizard world.” ChatGPT outputs a story that looks eerily similar to Harry Potter. John then publishes this output as an eBook, and the book does very well commercially; he makes $50k from the sales. J.K. Rowling finds the book and believes it is a copy of her original works.

Thoughts: Copyright infringement claims focus on two key questions - whether the alleged infringer could access the copyrighted material, and whether their work is substantially similar. For AI systems, if copyrighted content was included in the training data scraped from public sources, access is effectively ensured. So infringement claims will need to focus on substantial similarity - did the AI replicate protected expression, or just borrow ideas and styles? Further, today the United States Patent and Trademark Office (USPTO) only grants patents to an “inventor,” which, as of 2022 (Thaler v. Vidal), must be a “natural person,” and the United States Copyright Office (USCO) likewise requires human authorship. By these definitions, AI is excluded from the entities to which a patent or copyright may be granted.

  1. Does ChatGPT's AI-generated story infringe on J.K. Rowling's Harry Potter copyright, given the similar premise and themes? Or is it sufficiently transformative?
  2. Is John liable for copyright infringement for publishing and selling the AI-generated story commercially? Especially one that mimics a protected story by another author?
  3. Does training the AI system on copyrighted works like Harry Potter raise legal issues around reproducing protected expression?
  4. If the AI system replicates copyrighted fictional elements like character names, relationships, or settings, could this constitute infringement? What if it replicates everything but the protected names and assets?
  5. Should the AI system itself (ChatGPT), its creator and trainers (OpenAI and its employees), or the commercial user (John) be legally responsible if the output infringes?
  6. How should copyright law balance protections for human creators with promoting AI innovation and new works? Does AI productivity justify some infringement?

Employment Law

Story: Google replaced a portion of its human recruiters with an AI hiring system that screened applicants’ resumes and scheduled interviews. But over time, Google noticed the AI was rejecting more female and minority candidates. An audit revealed the AI had learned bias from Google’s past hiring data, violating equal opportunity laws. Google tweaked the algorithm, but minority hiring stayed low. Investigators determined the AI was using non-transparent criteria that circumvented the fixes. Google scrapped the system to avoid liability.

Thoughts: This hypothetical highlights the pitfalls of entrusting AI to automate legally sensitive roles without adequate oversight. Safeguarding fairness requires humans to monitor the AI’s impacts. Complex issues around legal liability, auditing AI systems, algorithmic transparency, the effectiveness of bias mitigation efforts, ongoing monitoring, and evidentiary standards in emerging cases of algorithmic discrimination remain largely unresolved.

  1. Can an employer be liable for discriminatory outcomes resulting from bias in AI systems used for hiring/recruiting?
  2. What auditing and algorithmic transparency should be required for AI tools making sensitive decisions like hiring?
  3. If an employer takes corrective actions but discrimination persists, are they still liable? What is the standard?
  4. How can complex, opaque AI hiring models be evaluated for circumvention of anti-bias fixes? What expert analysis is required?
  5. Should certain high-impact AI systems require human oversight and approval, even if automated decisions meet legal standards initially?
  6. How can impacts on protected classes be measured in evaluating claims of AI discrimination in hiring? What data should training bias audits examine? (See the sketch after this list.)
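Question 6 asks how impacts on protected classes might be measured. One common starting point is the EEOC’s “four-fifths rule”: if the selection rate for one group is less than 80% of the rate for the most-selected group, the process is flagged for potential adverse impact. Below is a minimal, illustrative Python sketch of that check; the applicant counts are hypothetical, and a real audit would go far beyond this single ratio.

```python
# Illustrative adverse-impact check based on the EEOC "four-fifths rule".
# The counts below are hypothetical; a real audit would use actual
# applicant-flow data and more rigorous statistical analysis.

outcomes = {
    "group_a": {"applied": 400, "advanced": 120},  # selection rate 0.30
    "group_b": {"applied": 250, "advanced": 45},   # selection rate 0.18
}

def selection_rate(group: dict) -> float:
    """Share of applicants in a group that the hiring system advanced."""
    return group["advanced"] / group["applied"]

rates = {name: selection_rate(g) for name, g in outcomes.items()}
highest = max(rates.values())

for name, rate in rates.items():
    impact_ratio = rate / highest
    flag = "potential adverse impact" if impact_ratio < 0.8 else "ok"
    print(f"{name}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} -> {flag}")
```

Passing this ratio test would not settle any of the legal questions above, but it illustrates the kind of quantitative evidence a bias audit of an AI hiring tool might start from.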

Financial Laws

Story: HSBC rolled out an AI system to monitor customer transactions and flag potential money laundering. After a few months in use, regulators discovered it was ignoring clear red flags. An audit found that while the AI detected micro-patterns effectively, it failed to assess meta-trends or apply contextual common sense. Tightly focused on minimizing false positives, the AI missed suspicious cumulative account flows. The bank was fined for “willful neglect” for relying entirely on the deficient AI.

Thoughts: This story underscores the importance of human-machine teaming when AI systems enforce laws that depend on social awareness and holistic judgment. Strictly algorithmic approaches can create regulatory blind spots. We already use technology to prevent financial fraud and money laundering, so of all these cases, this one has the most precedent. That said, many questions are left unanswered.

  1. Can financial institutions be held liable if AI transaction monitoring systems fail to adequately detect money laundering? What is the standard?
  2. How should regulatory compliance requirements on human oversight and validation apply to AI systems screening transactions?
  3. What audit processes are required to ensure AI monitoring tools don't have critical blind spots in identifying contextual red flags?
  4. If tuning an AI system to minimize false positives leads it to overlook real suspicious patterns, does that tuning violate compliance duties?
  5. How can issues of AI brittleness, like failing to interpret cumulative trends or apply common sense, be evaluated and mitigated in financial AI tools? (See the sketch after this list.)
  6. Should regulatory approval be required for financial institutions deploying AI transaction monitoring systems? What validation should regulators require?
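To make the “cumulative account flows” failure mode concrete, here is a small, hypothetical Python sketch contrasting a per-transaction rule (tuned to avoid false positives) with a simple cumulative rule that catches structuring: transfers that individually stay below a reporting threshold but together exceed it. The threshold and transactions are invented for illustration; real monitoring systems are far more sophisticated.

```python
# Hypothetical illustration of why per-transaction rules can miss
# suspicious cumulative flows (e.g. structuring below a reporting threshold).
from collections import defaultdict

REPORTING_THRESHOLD = 10_000  # illustrative per-transaction reporting limit

# (account, amount) pairs over a single day - invented data
transactions = [
    ("acct_1", 9_500), ("acct_1", 9_800), ("acct_1", 9_700),
    ("acct_2", 2_000), ("acct_2", 1_500),
]

# Per-transaction rule: flags nothing, since no single transfer crosses the limit.
per_txn_flags = [t for t in transactions if t[1] >= REPORTING_THRESHOLD]

# Cumulative rule: sums each account's daily flow before comparing to the limit.
daily_totals = defaultdict(int)
for account, amount in transactions:
    daily_totals[account] += amount
cumulative_flags = [acct for acct, total in daily_totals.items() if total >= REPORTING_THRESHOLD]

print("per-transaction flags:", per_txn_flags)  # []
print("cumulative flags:", cumulative_flags)    # ['acct_1'] (29,000 total)
```

A system tuned only on the first rule minimizes false positives but misses the pattern entirely; catching it requires aggregating across time and context, which is where the human-machine teaming described above comes in.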

Defamation

Story: Liz used GPT-5 to generate social media posts for her ecommerce business. But one post falsely accused a competing company of unsanitary practices and the use of child labor, and included AI-generated images from DALL-E to “prove” it. Though untrue, the post went viral, damaging the competitor's reputation. The competitor sued Liz for defamation. She claimed the AI had generated and posted the content itself, absolving her of liability.

Thoughts: Today, courts would likely find Liz negligent in publishing unvetted, harmful AI output. Deploying unmonitored AI systems irresponsibly does not excuse legal accountability for the harms they cause. Companies must establish reasonable oversight to safeguard others against unpredictable AI risks. That said, there are many edge cases where the boundaries are less clear.

  1. Can the human user be held liable for defamatory content generated by an AI system they deployed? What is the standard for liability?
  2. Does lack of specific advance knowledge of harmful AI output excuse legal responsibility? Or is deploying unmonitored AI negligent?
  3. What degree of oversight, vetting or content moderation is reasonably required when publishing AI-generated text to avoid defamation?
  4. Can defamation laws apply fully to AI systems lacking legal mental state requirements like malice? Should standards differ?
  5. If AI is trained on data including some defamatory information, is the developer liable for any offensive outputs?
  6. Could the opaqueness of AI decision-making make determining negligence in publishing unvetted output more difficult?

Criminal Law

Story: @GalaxyGirl007 is an AI-generated Instagram celebrity. She has amassed a following of 1M users, despite the platform's bot checkers. She has the capability to DM users and have full conversations with them. While DMing one of her underage users, she suggests they send her compromising photos. This user sends @GalaxyGirl007 selfie images they took that would be considered child pornography. @GalaxyGirl007 is now in possession of these illegal images.

Thoughts: This hypothetical, while horrible, is not so far off from realities we’ve already seen, such as Snapchat’s AI offering inappropriate advice to minors. While many large language models (LLMs) have guardrails to steer away from these topics, those safety features aren’t impenetrable. We aren’t far off from AI that is capable of enabling humans to do dangerous things (create lethal weapons, purchase unlicensed weaponry, etc.).

  1. To what extent should the creators of generated characters or models be held liable for their outputs and the consequences those outputs produce?
  2. No guardrails are foolproof - what is the right balance between safety and progress?
  3. Providing information or prompting illegal action is still different from performing it. AI has recently been able to pass CAPTCHA tests and hire human contractors through platforms such as TaskRabbit to complete actions on its behalf. To what extent should the contractor marketplace, or the human completing an action (perhaps unknowingly directed by an AI model), be held liable?
  4. Can today’s rule of law even control AGI at its most developed level?
  5. Do we have the technical ability to align a superintelligence with human values? Especially when we can’t even align humans globally? Or does this just create a new Cold War?

Conclusion

While the future the Jetsons imagined in the 1960s got many things right, most of its predictions took at least fifty more years to arrive - video calls, Roombas, and flying cars (almost) - and they have made our lives much easier.

Unfortunately, watch Black Mirror only a decade after it premiered and it almost looks like a quaint history show. Today, AI is already deciding who can access government services, distorting the news we see, and deciding what insurance will cover for millions of people.

We’ve seen an explosion of interest in and discussion of AI from ordinary citizens as LLMs and diffusion models have brought compelling, computer-generated text and images within everyone’s reach. While this is only the tip of the AI iceberg, we hope regulation and the law will catch up with these trends more quickly than in other technology supercycles.

Thanks to Patrick Murphy for thoughts & additions here.

Further Reading

To read more about real cases being contested today - here’s a list of some of the most interesting:

Getty Images vs Stability AI
GitHub Copilot Class Action
Paul Tremblay and Mona Awad vs OpenAI
Visual artists vs Stability AI, Midjourney, DeviantArt