Introduction
Artificial intelligence and its many iterations, such as neural networks and machine learning, are not new to video games. Since as early as the 1950s, games have used procedurally generated responses to the choices of human players.
In recent months, however, AI has become much more widely available to the public and has advanced remarkably. From here, it will only become more sophisticated.
In this article, we’ll look at two forms of AI: “generative AI” and what we’ll refer to as “traditional AI.” Generally, “generative AI” refers to forms of AI that create new content, such as images, text, audio, and video, based on user prompts. “Traditional AI” goes by many names, but for the purposes of this article, we’re using the term to describe artificial intelligence that performs specific tasks and makes predictive or automated decisions, rather than creating “artistic” or expressive works. Within the video game industry, traditional AI may be used in a wide variety of contexts; here, we’ll focus on the complexities of using it to limit access to certain in-game content based upon a player’s age.
Across the gaming industry, generative and traditional AI pose new and complicated legal issues, many of which have no clear solutions. Below, we’ll explore some legal issues game companies should consider before booting up their favorite AI tools.
Using Generative AI to Create Video Games
The State of Play
When people think of AI today, content creation is typically top of mind. What if I told you this entire article was generated by ChatGPT? Well, it wasn’t.
Generative AI poses a variety of intellectual property issues, particularly with regard to copyright protection and infringement. The Congressional Research Service (CRS) released (and subsequently updated on May 11, 2023) a Legal Sidebar examining the role of copyright law as applied to outputs from generative AI.
As the CRS notes, copyright protection hinges in part on the concept of “authorship” and on artistic manipulation of the work. Authorship generally requires that a work was “created by a human being.” The Copyright Office further clarified this point in recent guidance, denying copyright protection to works that have been created “without any creative contribution from a human actor.” In its recent Zarya of the Dawn decision, the Copyright Office granted protection to the human-created text of an AI-generated comic book, but refused protection to the images that had been created by Midjourney.
Additionally, whether generative AI inputs and outputs can give rise to copyright infringement remains an open question. As of this writing, the Copyright Office has yet to address whether using copyrighted works in generative AI constitutes infringement. We may, however, get an answer in the form of a court decision.
In January 2023, a group of artists filed a class action lawsuit against Stability AI, Midjourney, and DeviantArt, claiming that these businesses used protected artwork to train generative AI models without permission from the artists. Getty Images similarly filed a lawsuit against Stability AI for allegedly copying “at least 12 million copyrighted images” from Getty’s website to train Stability’s Stable Diffusion model.
Notably, these lawsuits allege that both the act of using copyrighted works as an input and the subsequent outputs generated in response constitute copyright infringement.
The Nemesis System Example
When thinking about copyright protection and infringement, video game companies need to consider how they’ve used AI-generated content in their games and where the underlying content originated.
Regarding copyright protection, it will be key for developers to document and think through how certain generative AI outputs have been incorporated into a game. The Nemesis System within the game Middle-Earth: Shadow of Mordor is a perfect example.
The Nemesis System is a form of so-called ‘artificial intelligence’ that generates non-player character behaviors and attributes based upon in-game hierarchies and individual player responses. Simply put, the Nemesis System learned and adapted to players’ interactions within the game, generating unique enemy orcs that seemed to know you and would exploit your weaknesses.
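To make the concept concrete, here is a minimal, hypothetical Python sketch of how a nemesis-style system might procedurally derive an enemy’s traits from logged player interactions. The event names, traits, and functions are invented for illustration and do not reflect Warner Bros.’ actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Orc:
    name: str
    rank: int = 1
    strengths: set = field(default_factory=set)
    weaknesses: set = field(default_factory=set)

# Hypothetical mapping from logged player actions to the trait an
# adaptive enemy might gain (strength) or reveal (weakness) in response.
ADAPTATIONS = {
    "player_used_fire": ("immune_to_fire", None),
    "player_fled_battle": ("taunts_cowards", None),
    "orc_killed_by_stealth": (None, "fears_ambush"),
}

def evolve_orc(orc: Orc, player_events: list[str]) -> Orc:
    """Procedurally adjust an orc's traits based on prior encounters with the player."""
    for event in player_events:
        strength, weakness = ADAPTATIONS.get(event, (None, None))
        if strength:
            orc.strengths.add(strength)
        if weakness:
            orc.weaknesses.add(weakness)
    orc.rank += 1  # survivors climb the in-game hierarchy
    return orc

# Example: an orc that survived the player's fire attack returns stronger.
grub = evolve_orc(Orc("Grub the Survivor"), ["player_used_fire"])
print(grub.rank, grub.strengths)  # 2 {'immune_to_fire'}
```

Even in this toy version, note that the orc’s final traits are determined mechanically by the mapping and the player’s logged actions, which is precisely the kind of purely procedural output the Copyright Office’s guidance calls into question.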
While Warner Brothers was first granted a patent for the Nemesis System in 2021, its underlying code is likely separately subject to copyright protection. Yet, in light of the Copyright Office’s guidance regarding AI-generated works, certain ‘outputs’ of the Nemesis System may not be copyrightable if they are purely procedurally generated.
However, the Nemesis System is integrated into a larger game, which, separately and as a whole, contains protectable elements. Additionally, human artistic development or manipulation of these orcs and their responses could also transform the outputs into copyrightable works.
Let’s expand this example to copyright infringement. Hypothetically, what if the Nemesis System used several beautiful (and copyrighted) portraits of orcs to generate its outputs without permission of the artists?
If the lawsuits referenced above are successful, both the training input and the resulting output could constitute copyright infringement. Because of this legal ambiguity, game developers should be cognizant of the risks of training models and creating outputs based upon copyrighted works without a proper license in place with the original creators.
Using Traditional AI for Age-Assurances
The State of Play
Like artificial intelligence, data privacy has become a rapidly evolving area of interest to the public and regulators alike. Although the United States has yet to pass a harmonized federal data privacy law, ten States (and counting) have enacted separate comprehensive data privacy laws with varying requirements. Regulators have also been paying particular attention to children’s privacy and harmful online content.
In early June, the FTC announced a $20 million settlement with Microsoft for allegedly violating the Children’s Online Privacy Protection Act (COPPA) through Microsoft’s Xbox Live services. This marked the third action within a month taken by the FTC focused on children’s privacy, coming on the heels of the Edmodo and Amazon settlements. Notably, in the Edmodo settlement, the FTC required the deletion of “Affected Work Product,” which included models or algorithms developed using information collected unlawfully from children.
At the State level, California passed a new data privacy law, the Age-Appropriate Design Code Act (AADC), effective July 1, 2024. We explored in a blog post earlier this year what impact the AADC may have on the video game industry. In response to these mounting concerns, some companies have begun implementing artificial intelligence tools to improve their data practices.
One intersection of data privacy and artificial intelligence is “automated decision-making” (ADM), the process of using algorithms or machines to make decisions with little or no direct human involvement. A number of States already have ADM laws, such as California’s Bolstering Online Transparency Act (BOT), which requires companies to disclose the use of automated bots when attempting to incentivize or influence consumers.
There are a variety of pitfalls gaming companies should consider before employing artificial intelligence in their data practices. However, this article explores one common and growing concern for gaming companies: age-assurance methods.
Age-Assurance Considerations
Under the California AADC, game companies will need to begin estimating players’ ages within a reasonable level of certainty. Once ages have been determined, game companies will have to use that information to either (1) prevent children from accessing potentially harmful in-game content (through so-called “age-assurances”) or (2) design games that are child-friendly.
The California AADC was modeled closely after the UK Age-Appropriate Design Code, referred to as the “Children’s Code.” In early 2023, the UK Information Commissioner’s Office released guidance specifically for video game designers, which might shed light on how California regulators may enforce the AADC. In light of this guidance, game companies implementing AI-powered ADM should carefully consider how they will estimate ages with a sufficient level of certainty without running afoul of data protection laws.
Game companies may choose to implement traditional AI to automate their age-assurance methods, such as by verifying government IDs or scanning players’ faces to estimate their ages. Among the risks of age-gating through traditional AI are the proper handling of children’s data and the mitigation of potential biases in such AI tools.
For example, the FTC-Microsoft order requires Microsoft to separately handle children’s information collected prior to parental consent, subject to a separate 2-week deletion period. However, this FTC decision dealt with COPPA, not the California AADC. Under the AADC, there is no verifiable parental consent contemplated. As such, children’s data that is collected for age-assurance purposes needs to be handled with particular care, as no parental consent can permit other uses. Rather, game companies must establish players’ ages while still complying with data protection laws.
If game companies deploy traditional AI for age-assurances, children’s data cannot be used for any purpose other than verifying the child’s age. Using that data for AI development, training, or improvement likely falls outside the permitted scope. Any other information about child players gleaned by a traditional AI tool should be maintained separately and not used to improve any AI tools.
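As a rough illustration of that purpose limitation, the sketch below shows one way an age-gate might be structured so that the data used for estimation is used for a single decision and never routed into model training or analytics. The function names, threshold, and data flow are hypothetical assumptions, not a real vendor API or a compliance-approved design.

```python
from dataclasses import dataclass

ADULT_AGE = 18  # hypothetical threshold; the relevant cutoff depends on the applicable law

@dataclass
class AgeGateResult:
    is_minor: bool  # the only fact the game retains from the check

def estimate_age(face_scan: bytes) -> int:
    """Placeholder for a vendor or in-house age-estimation model (not a real API)."""
    raise NotImplementedError

def run_age_gate(face_scan: bytes) -> AgeGateResult:
    """Use the scan for one purpose only: deciding whether to treat the player as a minor."""
    estimated_age = estimate_age(face_scan)
    result = AgeGateResult(is_minor=estimated_age < ADULT_AGE)
    # Purpose limitation: the local references to the scan and the exact estimate
    # are dropped here, and nothing is written to a training set, analytics log,
    # or model-improvement queue.
    del face_scan, estimated_age
    return result
```

The design choice to return only a minor/non-minor flag, rather than the raw scan or exact age, is the point: the less children’s data that persists past the age check, the less there is to be repurposed in violation of the principles described above.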
Additionally, game companies may wish to estimate players’ ages through facial scanning. Without getting into the intricacies of biometric health data laws (looking at you, Illinois and Washington), age-assurances that rely on AI-powered facial recognition pose a variety of risks, such as producing inaccurate results. A recent study found that AI technology suffers from exaggerated biases when attempting to estimate a person’s age.
This circles back to the issue above regarding the improper use of children’s data. Game companies cannot scan children’s faces both for age-assurance purposes and to improve the accuracy of their traditional AI. Rather, children’s data gathered for estimation purposes can only be used for estimation. If a traditional AI tool is not operating as it should in automated age-assurances, game companies need to be prepared to address and mitigate those risks.
Before using traditional AI in age-assurance mechanisms, game companies need to think through the vast and complicated risks presented. As AI regulation develops, game companies should be ready to provide regulators with documentation showing how their traditional AI has been trained and how it makes its decisions. In particular, game companies should be able to demonstrate that their AI age-assurance methods are not improperly using children’s data.
Conclusion
This article is not, by any means, an exhaustive list of issues that those in the gaming industry may face in using artificial intelligence. Nor is it meant to discourage or encourage the use of AI. Rather, AI is a powerful tool that will undoubtedly be wielded for years to come, and the video game industry had employed AI long before it gained public traction.
As game companies continue to implement generative and traditional AI, they need to be aware of the complicated and interwoven tapestry of laws and regulations that govern artificial intelligence.