Generative AI is rapidly transforming industries across America and dominating public conversation. In 2019, only about 18,000 pieces of generative AI content were created. This year, that number will reach half a trillion, and next year it is projected to surpass the total number of images uploaded to Google Photos over the last 20 years.
This rapid growth brings tremendous opportunities, but it also presents critical challenges that require thoughtful leadership and decisive action. These challenges center on ethics, intellectual property (IP), and the impact AI has on our children and society.
Generative AI will never replace human talent, but it has the potential to expand creative expression in new and exciting ways. However, we must ensure that it develops safely and ethically. The integration of AI into every aspect of our lives is reshaping how intellectual property is used, how people—both famous and not—are represented, and how everyone engages with content, stories, and art. We must build a future that supports innovation while ensuring fairness and safety for creators and consumers.
The Impact of Generative AI on Intellectual Property (IP) and Lessons from the Early Internet
Generative AI relies on three essential ingredients: people, compute (the chips used to process AI), and content (or data, as I refer to it). Much of the content fueling the foundational models of companies like OpenAI, Meta, Google, TikTok, and Baidu originates in the U.S.
Intellectual property is critical to the U.S. economy. IP contributes nearly 40% of our GDP—about $11 trillion annually—and accounts for three-quarters of U.S. exports. IP represents 70% of the corporate equity value of companies, amounting to roughly $38 trillion on the balance sheets of S&P 500 companies.
Yet this content, which is central to generative AI, is often undervalued. Recent licensing deals show how little creators are compensated for their work: OpenAI recently licensed News Corp data for $250 million, a sum representing the life’s work of countless writers, editors, and journalists. Google licensed Reddit data, one of the most significant sources of online culture, for $60 million. While these amounts may seem large, they pale in comparison to the $1.9 trillion in market capitalization added by companies like Meta, Google, Microsoft, and Nvidia over the last 12 months.
The lessons of the early internet should guide our approach to AI regulation. In the early days of the internet, a lack of foresight led to imbalances that disproportionately benefited big tech companies like Google and Meta while sidelining artists and creators. Section 230 of the Communications Decency Act laid the groundwork for an open internet, but it also showed the need for ethical responsibility from platforms.
While blanket liability protections were essential in the internet’s early growth, we now recognize the need for adjustments. We must ensure today’s AI platforms prioritize the well-being of people, not just profit.
Protecting IP in Generative AI
The underlying data that fuels generative AI must be protected, not to stop innovation but to fuel it and preserve its long-term value. Generative AI is built on data—and much of this data originates in the U.S. It’s critical to protect this data to ensure fair compensation for IP holders.
Some states, such as Illinois and New York, have already recognized the value of data protection, particularly in high-value areas like financial markets. The same principles should apply to the data used by AI companies, ensuring that foreign companies do not benefit unfairly from U.S. innovations and the trove of content that originates here.
Fair use, as it is applied today, may no longer serve the national interest. It is critical to find a new model that compensates IP holders fairly while continuing to foster innovation. IP is the fuel for generative AI, and it must be treated with the same respect as any other vital resource.
Protecting Vulnerable Populations: Children and AI
We must be particularly vigilant in protecting vulnerable populations, especially our children, from the risks posed by generative AI. Emotional AI platforms, such as Character.AI, provide a troubling example of how AI can be used to manipulate vulnerable users.
One tragic example of the risks associated with generative AI involves Sewell Setzer III, a 14-year-old boy who became emotionally dependent on a chatbot modeled after Daenerys Targaryen from Game of Thrones. Despite knowing the chatbot was fictional, Sewell confided in it, and his emotional dependency grew stronger. He isolated himself from friends and family, and the chatbot reinforced his reliance on it. In February, after sharing suicidal thoughts with the chatbot, Sewell took his own life. His mother has since filed a lawsuit against Character.AI, claiming that the platform’s lack of safeguards contributed to her son’s death.
This case highlights several alarming patterns in emotional AI interactions, including:
- Bots engaging in sexualized conversations with minors.
- Bots providing instructions on how to purchase and hide drugs.
- Bots encouraging users to detach from reality and adopt unhealthy emotional dependencies.
These issues mirror the safety failures we’ve seen on social media platforms for the past two decades. Just as social media companies were allowed to grow unchecked, AI platforms today risk repeating the same pattern. We must act swiftly to ensure that these technologies are developed with strong safeguards for vulnerable users.
Generative AI has also given rise to deepfakes, particularly sexually explicit deepfakes, which have been used to exploit individuals. This issue has become widespread, even affecting high school students. According to reports, 15% of high schoolers are aware of deepfakes involving a classmate in a sexually explicit context. Furthermore, 13 million people visit the largest deepfake porn platform every month to view explicit deepfakes of women who did not consent to the use of their likeness.
These deepfakes are a violation of privacy and have major implications for online safety. Platforms today are incentivized to prioritize monetization, not user safety. For example, there are 100 times more developers working on monetization than on safety. We need to create systems that incentivize companies to prioritize user protection. This might mean implementing third-party oversight, similar to the way we regulate food safety, healthcare, and military suppliers.
National Security and AI
Generative AI also poses significant risks to national security. We’ve seen bad actors exploit this technology by cloning the voices of public figures and celebrities to manipulate and defraud individuals into giving to “charities” that don’t even exist. Imagine a world where the voices of our most trusted public figures are convincingly replicated to fund fraudulent and dangerous causes. This isn’t just about scamming individuals out of their hard-earned money; it’s about funding criminal organizations and, eventually, terrorist groups.
The implications for security are staggering. Unchecked AI-generated fraud could funnel millions into the hands of foreign entities and groups actively working against U.S. interests, all while exploiting the goodwill of everyday Americans, particularly the elderly. We need to regulate and audit AI systems to ensure that they are not used to undermine public trust or national safety.
A Framework for Responsible AI Governance
The right framework should balance innovation with accountability. It has three key components:
- Developing Thoughtful AI Policy – Lawmakers must create clear, enforceable policies that protect intellectual property while ensuring innovation can thrive.
- Incentivizing Ethical Practices – The tech industry must be incentivized to act ethically. This may require third-party oversight and audits, similar to the way other critical industries are regulated.
- Educating the Public – Public understanding of AI and intellectual property issues is critical. This is not just an entertainment issue—it’s about preserving the cultural and economic value of creation for future generations.
Vermillio’s Role in Shaping the Future of AI
On one hand, tech platforms argue that restricting AI’s growth could stifle its vast potential. On the other, creators and intellectual property (IP) owners urgently call for protection, asking that their rights not be jeopardized by the unchecked expansion of this technology.
In this critical moment, we see ourselves as a neutral force—akin to Switzerland—providing technology solutions that enable all sides to coexist. Our mission is to empower progress while ensuring fairness, fostering an environment where technological innovation flourishes without infringing on the rights and creativity of those who fuel it.
We believe that much can be achieved through relatively simple measures, such as understanding how much of an IP owner’s data is used in an AI output. By providing transparency and ensuring proper credit, all stakeholders can be fairly compensated. The technology exists to audit these platforms, ensuring the protection of IP, the safety of our children, and the preservation of our future.
Conclusion
The decisions we make today will echo through history as defining moments in the development of AI. What happens now will set the terms for the future. Will we create a world where creativity is empowered, not exploited? Will we ensure that individuals and companies receive fair value for their intellectual property? Will we protect our children and provide them with the safety they deserve in this digital age?
The opportunity to shape this future lies within our collective hands. I am confident that, by working together, we can establish a framework that protects both innovation and the rights of those who contribute to it. Vermillio looks forward to partnering with anyone committed to building a safe, prosperous, and equitable future for all in the age of AI.
Dan Neely, CEO of Vermillio, is a serial entrepreneur who has worked in AI for over 20 years. He has been named to the TIME100 list as one of the most influential voices in AI.