AI Ph.D.s are flocking to Big Tech. Here’s why that could be bad news for open innovation

The current debate over whether open or closed advanced AI models are safer or better is a distraction. Rather than focus on one business model over the other, we must embrace a more holistic definition of what it means for AI to be open. That means shifting the conversation to the need for open science, transparency, and equity if we are to build AI that works for and in the public interest.

Open science is the bedrock of technological advancement. We need more ideas, and more diverse ideas, that are more widely available, not less. The organization I lead, Partnership on AI, is itself a mission-driven experiment in open innovation, bringing together academic, civil society, and industry partners, along with policymakers, to work on one of the hardest problems: ensuring the benefits of technology accrue to the many, not the few.

When it comes to open models, we cannot forget the influential upstream roles played by public funding of science and the open publication of academic research.

National science and innovation policy is crucial to an open ecosystem. In her book, The Entrepreneurial State, economist Mariana Mazzucato notes that public funding of research planted some of the IP seeds that grew into U.S.-based technology companies. From the internet to the iPhone and the Google AdWords algorithm, much of today’s AI technology received a boost from early government funding for novel and applied research.

Likewise, the open publication of research, peer reviewed and subject to ethics review, is crucial to scientific advancement. ChatGPT, for example, would not have been possible without access to openly published research on transformer models. It is concerning to read, as reported in the Stanford AI Index, that the share of AI Ph.D. graduates taking jobs in academia has declined over the last decade while the share going to industry has risen, with more than twice as many going to industry as to academia in 2021.

It’s also important to remember that open doesn’t mean transparent. And while transparency may not be an end unto itself, it is a must-have for accountability.

Transparency requires timely disclosure, clear communications to relevant audiences, and explicit standards of documentation. As PAI’s Guidance for Safe Foundation Model Deployment illustrates, steps taken throughout the lifecycle of a model allow for greater external scrutiny and auditability while protecting competitiveness. This includes transparency with regard to the types of training data, testing and evaluations, incident reporting, sources of labor, human rights due diligence, and assessments of environmental impacts. Developing standards of documentation and disclosure is essential to ensuring the safety and responsibility of advanced AI.

Finally, as our research has shown, it is easy to recognize the need to be open and to create space for a diversity of perspectives to chart the future of AI, and much harder to do it. It is true that, with fewer barriers to entry, an open ecosystem is more inclusive of actors from backgrounds not traditionally seen in Silicon Valley. It is also true that, rather than further concentrating power and wealth, an open ecosystem sets the stage for more players to share in the economic benefits of AI.

But we must do more than just set the stage.

We must invest in ensuring that communities disproportionately impacted by algorithmic harms, as well as those from historically marginalized groups, are able to fully participate in developing and deploying AI that works for them while protecting their data and privacy. This means focusing on skills and education, but it also means rethinking who develops AI systems and how those systems are evaluated. Today, through private and public sandboxes and labs, citizen-led AI innovations are being piloted around the world.

Ensuring safety is not about taking sides between open and closed models. Rather, it is about putting in place national research and open innovation systems that advance a resilient field of scientific innovation and integrity. It is about creating space for a competitive marketplace of ideas to advance prosperity. It is about ensuring that policymakers and the public have visibility into the development of these new technologies so they can better interrogate their possibilities and perils. It is about acknowledging that clear rules of the road allow all of us to move faster and more safely. Most importantly, if AI is to attain its promise, it is about finding sustainable, respectful, and effective ways to listen to new and different voices in the AI conversation.

Rebecca Finlay is the CEO of Partnership on AI.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.
