The rise of generative AI models that can convincingly replicate the unique visual style of Studio Ghibli has sparked fresh debates in creative and legal circles. AI-generated “Ghibli-style” portraits — where users upload personal images and receive reimaginings in the studio’s iconic aesthetic — are going viral. But behind this playful trend lies a serious concern: what happens when a machine can mimic an artist’s soul without asking permission?
At the core of the debate lies a legal paradox: copyright protects the expression of ideas—not the ideas themselves. And critically, it doesn’t cover an artist’s “style,” even if it’s as recognizable and distinctive as that of Hayao Miyazaki, the legendary co-founder of Studio Ghibli. While styles like ‘expressionism’, ‘dada’ or ‘disco’ are open to inspiration and reuse, the ability of GenAI to replicate them with uncanny precision complicates this legal grey zone.
A Signature Style at Risk
Studio Ghibli’s visual aesthetic is iconic—soft, immersive worlds, richly layered characters, and a dreamlike charm. This makes it a prime target for GenAI, which can now generate images mimicking this visual tone with astonishing accuracy.
The concern is no longer just about copying a particular artwork but about AI learning the entire language of an artist’s creative universe.
The impact is already being felt by working artists. Digital painter Greg Rutkowski, whose ethereal fantasy style has been invoked in more than 93,000 AI image-generation prompts, has warned that the trend threatens artists’ very careers. He is not alone. Thousands of creators, including Nobel Prize-winning author Kazuo Ishiguro and Radiohead’s Thom Yorke, have signed open letters condemning the unlicensed use of creative works to train generative AI models. Miyazaki himself, after being shown an AI-generated animation demo, said he was “utterly disgusted” and called it “an insult to life itself.”
Legal Limbo: Copyright, Style, and Personality Rights
Copyright law has long held that while expression is protected, ideas, and by extension artistic styles, are not. But what happens when imitation becomes indistinguishable from the original? And, worse, when it begins to dilute the artistic and economic value of that original?
Despite these concerns, not all artists are opposed to AI. Some embrace it as a creative partner — a collaborator in visualizing data, exploring new forms, or even meditating on the relationship between human and machine. But such partnerships differ significantly from the unauthorized use of copyrighted content in AI training — a practice many view as ethically murky and legally suspect.
The Studio Ghibli phenomenon raises particularly complex questions for intellectual property law. While copyright protects original works, it doesn’t necessarily shield an artist’s visual persona unless it can be tied directly to an existing copyrighted work. For instance, if AI is used to transform user photos into Ghibli-style images — drawing directly from the studio’s past works — there’s a stronger case for copyright infringement. In India, some courts have gone a step further, recognizing such stylistic elements under the doctrine of “personality rights,” offering protection via tort laws such as passing off and misappropriation.
Still, enforcement remains tricky. Proving that a particular style constitutes a unique, identifiable artistic persona — and linking it directly to one creator — is no easy task. These rights must also be claimed and enforced by the individual artist, not necessarily the studio or company that owns the underlying copyright.
Subhash Bhutoria, Founder & Principal at LAW SB, believes change is coming. “Organisations like WIPO are working towards GenAI creating work by holding conversations relating to IP and frontier technologies, and we are hopeful that a global policy, which clearly defines the rights, rules and responsibilities relating to AI-generated work, would see light of the day very soon,” he said.
Copyright Law in India
The legal implications in India are particularly pointed. Under the Copyright Act of 1957, using copyrighted material without permission for the commercial training of GenAI models could amount to direct infringement. Section 14 of the Act protects reproduction rights, including electronic storage — a category into which AI training squarely falls. The core issue is that most AI models are trained on massive datasets scraped from the internet, often without any licenses or permissions.
“Artists and studios who’ve invested years creating original IP are now afraid their work will be reverse-engineered or devalued,” said Prachi Shrivastava, Founder of Vakil Vetted. “Startups need to innovate fast but don’t know where the legal line begins, especially with models trained on the open web. The law hasn’t caught up — but the risk is very real. And no founder wants to build a business that’s a legal time bomb.”
GenAI models work by learning statistical patterns and features from vast datasets. When trained on copyrighted images, these models can then reproduce remarkably similar outputs — sometimes nearly indistinguishable from the originals. That’s what raises moral and economic concerns for creators, particularly when AI-generated content in a unique style like Ghibli’s starts flooding the web.
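To make that mechanism concrete, below is a deliberately simplified sketch, in PyTorch, of the kind of training loop the paragraph above describes. It is an illustration, not any vendor’s actual pipeline: the tiny autoencoder, the folder name training_images/ and the hyperparameters are hypothetical stand-ins, and production image generators (diffusion models and the like) are orders of magnitude larger. What the sketch shows is the point that matters for the copyright debate: the training images are read, their statistical regularities are absorbed into the model’s weights, and the images themselves are then set aside, yet those weights are enough to reproduce strikingly similar visuals.

```python
# Illustrative sketch only: a toy model that "learns" the statistical patterns
# of whatever images it is shown. The folder name "training_images/" and all
# settings are hypothetical placeholders, not any real system's configuration.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Load a (hypothetical) local folder of images and scale pixels to [0, 1].
transform = transforms.Compose([
    transforms.Resize((64, 64)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("training_images", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

class TinyAutoencoder(nn.Module):
    """Compresses each image into a small feature map, then reconstructs it."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(5):
    for images, _ in loader:
        # The loss measures how closely the model can reproduce its training
        # images; minimising it pushes their visual patterns into the weights.
        reconstruction = model(images)
        loss = loss_fn(reconstruction, images)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```

Because nothing is copied verbatim into the model, yet its outputs can closely resemble the inputs, courts are left to decide where the “reproduction” occurs: at the data-acquisition stage, inside the weights, or only when an image is generated.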
Even though current copyright law does not protect style per se, the act of reproducing the essence of a creator’s persona — particularly without consent — might violate both reproduction rights and moral rights. In many cases, this imitation devalues the artist’s labor, distorts their identity, and can harm their ability to commercialize their work in the future.
Importantly, Section 52 of the Copyright Act provides limited exceptions under “fair dealing,” none of which clearly apply to text and data mining (TDM) — a crucial part of GenAI training. Large-scale commercial AI projects often fall far outside the permissible scope, especially when their outputs compete with the original works they mimic.
Global Legal Pushback and the ANI v. OpenAI Case
Globally, artists are pushing back, through the open letters mentioned above and, increasingly, through litigation demanding safeguards against unlicensed AI training. In India, the ongoing ANI v. OpenAI case before the Delhi High Court is addressing similar concerns. An amicus curiae in the case highlighted how both the initial stage of training (data acquisition) and the final stage (model output) could qualify as infringement.
This case is particularly significant as it could define how Indian courts deal with training data transparency, the distinction between expression and style, and whether AI outputs are mere imitations or unlawful reproductions.
Creative Disruption Meets Economic Opportunity
Meanwhile, the potential disruptions of GenAI are profound. On the one hand, the technology promises to boost GDP, productivity, and innovation across industries. On the other, it poses direct risks to creative sectors like film, design, and publishing — not by replacing artists, but by flooding the market with derivative works that undercut them economically and culturally.
Rather than eliminating jobs, GenAI is expected to transform them. Human skills like creativity, critical thinking, and emotional intelligence will become even more valuable — especially when paired with AI tools. But this also demands a major investment in upskilling and education. AI will force us to rethink not only how content is made, but how talent is valued and protected.
Disruption, Not Destruction: The GenAI Paradox
Ethical concerns are mounting as well. Issues around transparency in training data, algorithmic bias, deepfakes, misinformation, and accountability are now front and center. These aren’t fringe concerns — they’re foundational to the trust we place in digital content.
For startups, MSMEs, and their legal advisors, three emerging trends are becoming increasingly urgent:
1. Legal ambiguity at scale — Founders are uncertain whether AI-generated logos, marketing copy, or visual assets are truly “theirs,” or whether they’re opening themselves up to future litigation. “Can I use this AI logo?” “Is this blog really mine?” These are no longer hypothetical questions.
2. The rise of micro-litigation — Smaller creators and studios are beginning to push back, not necessarily through lawsuits but via takedown requests, cease-and-desist notices, and licensing enforcement on platforms.
3. Legal becoming strategic — Smart founders are now treating legal clarity like product hygiene — something to be built into the business model from day one, not just handled when something goes wrong.
AI also brings undeniable benefits. It helps creators produce content faster, experiment more widely, and reduce production costs. But it also threatens to blur the lines between homage and theft, creativity and mimicry. When AI models are trained on the hard-won work of human artists without permission, it cheapens both the art and the artist. The volume of AI-generated content flooding the market makes it harder for original voices to stand out, raises questions about ownership and authenticity, and could ultimately devalue the artistic ecosystem itself.
What Can the Government Do?
While the Indian government has maintained a “pro-innovation” stance, many believe that existing legal frameworks may be inadequate to address the complexity of AI in creative industries. Legislative amendments to the Copyright Act could be one solution — possibly by adding licensing protocols for data mining, clarifying AI authorship rights, or building specific liability frameworks for AI misuse.
Swati Sharma, Partner and Head of Intellectual Property at Cyril Amarchand Mangaldas, said, “Mandating enhanced transparency concerning training datasets, requiring conspicuous labeling of AI-generated content, and implementing technical standards for content provenance represent further options. Strengthening the protection of personality rights against AI-driven misuse, considering the creation of specific AI-related laws, and ensuring alignment with national data governance frameworks (such as the DPDP Act) are also significant considerations.”
The question remains: Is it ethical — or even legal — to use creative works without permission to train AI?
From a purist’s legal perspective, the answer is clear: using copyrighted content without consent undermines the exclusive rights granted to creators. Just because something is online doesn’t mean it’s free to use. Artists like those behind Studio Ghibli have spent decades crafting their distinct visual language — allowing AI to absorb and reproduce that without acknowledgment or compensation feels like a breach not just of law, but of trust.
Ankit Sahni, Partner at Ajay Sahni & Associates, put it bluntly: “Governments must now play a proactive role — through clear regulation, ethical standards for AI training, and mechanisms to ensure artists are compensated and credited. Copyright law needs to evolve, but its core purpose — to protect creativity — must not be compromised in the rush toward innovation.”
Chirag Ahluwalia, Advocate at the Delhi High Court, echoed the sentiment. “Governments have a critical role to play — not just in updating copyright frameworks, but in establishing licensing norms, promoting transparency in AI training datasets, and fostering mechanisms that ensure artists are not left behind in the AI revolution.”
Ultimately, the ability of AI to mimic the unique signature styles of artists like Studio Ghibli reflects both the promise and peril of the technology. While it opens new creative frontiers, it also raises profound questions about ownership, authorship, and fairness. Striking a balance between innovation and integrity has never been more important — or more urgent.