By the People and For the People: An Urgent Call to Democratize the Future of AI

[Image: brain graphic with digital circuitry. Source: Braintrust, under CC BY-NC 4.0 license]

Mary Beth Collins, J.D., M.A., Executive Director of the UW-Madison Center for Community and Nonprofit Studies (the “CommNS”)

Dr. Corey B. Jackson, Assistant Professor, the Information School at UW-Madison; CommNS Affiliate

The recent and seemingly inevitable explosion of artificial intelligence (“AI”) in our lives has evoked both concern and excitement across communities, institutions, and enterprises. The abrupt integration – in just a couple of years – of an entirely new level of technological power that most of us do not really understand is unnerving, and yet already an accepted part of daily life. Throughout history we have experienced rapid technological shifts that forever changed the human experience, but with the seemingly unlimited applications of AI, we arguably now face the most formative technological leap of the “great acceleration”. Meanwhile, many of us feel a sense of futility in grappling with this shift, daunted by the complexity of AI technology and the rapid pace at which it increasingly impacts our world. But AI, in whatever form it takes, is a defining feature of our current and future world, and we must not acquiesce in how it is shaped, and how it shapes us. Indeed, we face an urgent opportunity to prevent the worst potential harms of AI and to push its development toward the greater good, and we must take it. It is time for individuals, communities, and purpose-driven organizations to mobilize to democratize the development of an AI future by the people and for the people.

To date, a grave power imbalance lies at the heart of the development and shaping of AI. For-profit actors competing in a feverish AI arms race have been the most powerful drivers of the technology’s development. The public sector, across jurisdictions, has not kept pace in establishing guidance or guardrails for AI and its impact on our communities and lives. Meanwhile, civil society, or the “third sector” – purpose-driven organizations and community-based efforts – is stirring to claim a role in the future of this sweeping technology and the way it will change us. It is this third-sector influence that must be strengthened and grown to democratize the shaping of AI, drawing on the best possible knowledge and wisdom to ensure that human thriving is the end game for this powerful force. Through an array of timely and conscious efforts by individual users and consumers, community groups, purpose-driven organizations, institutions of higher education, and, in turn, a responsive public sector, we have a chance to steer AI in the direction promised to us by the visionaries developing the technology – that it will amplify, elevate, and expand human power for the greater good. Without such a broad-based civil society effort to win democratized control over the future of AI, however, our worst fears about AI’s harms may become our future reality.

As of today, for-profit entities’ dominant role in shaping AI, and the way it affects us, appears practically unfettered. The Stanford Institute for Human-Centered Artificial Intelligence’s (HAI) 2025 AI Index Report finds that “business is ‘all-in’ on AI” and that private investment in AI surpassed $100 billion in 2024, a notable increase over prior years, with continued growth in 2025. An “AI arms race” of companies competing for market dominance and first-mover advantage – lacking meaningful inquiry into impacts, safety, and other values-driven considerations, and the internal processes that would monitor and check them – is driving the development of AI. Stanford HAI reports that while “AI-related incidents are rising sharply,” “standardized RAI [responsible AI] evaluations remain rare among major industrial model developers.” Notwithstanding AI companies’ overtures about beautiful possibilities for the future, the competitive, profit-driven dynamic behind AI development pushes prototypes into deployment, experiments upon us, leaves communities to bear the consequences, and creates immediate and long-term risks.

As AI increasingly slips into our daily lives in myriad ways, recent studies show that Americans are already much more likely to think AI will harm them (43 percent) than help them (24 percent). Regardless of whether Americans’ opinions on the cost-benefit calculus of AI are well founded, it would be naive to expect profit-driven enterprises, reined in essentially only by corporate self-regulation, to adequately address the complexities of AI and its impact on us at the expense of promises made to shareholders and investors. Rapid deployment prevails over careful assessment; voluntary commitments lack teeth. OpenAI’s recent restructuring usefully illustrates how profit prevails over purpose in the AI race: the main thrust of the restructuring was to recalibrate its original, lauded, purpose-driven nonprofit structure to allow for more investment with profit-bearing potential.

With this active experiment on our society ongoing, the risks and concerns about AI’s impact are not simply theoretical or fear-based. As AI is integrated more deeply into our daily activities with each passing day, we see innumerable illustrations of potentially harmful implications. AI-generated imagery runs rampant through our social media, and in the U.S. we are experiencing the first presidential administration that regularly releases AI-generated and AI-altered images – opening up new levels of potential manipulation and making discernment of information more dubious. Communities are already coping with the effects of the social media and smartphone era on human interaction, behavior, wellness, consumption, and civic engagement; now, new acute risks have surfaced with the predominance of AI: more vivid and believable misinformation; AI-generated pornographic images released into the information landscape; the replacement of human relationships with AI interactions; and alleged links between youth use of AI personal assistants and risk of suicide. Experts warn of the myriad effects that outsourcing intellectual and relational functions to AI may have upon human development, relationships, attention, and cognitive capabilities. Expanded AI applications risk exacerbating existing chronic injustices and social ills through already-identified pitfalls such as algorithmic bias in hiring and lending, discriminatory outcomes in criminal justice risk assessment, vivid misinformation that undermines democratic discourse and fuels polarization, and insidious violations of privacy and civil liberties. Yet, to date, the for-profit scramble to develop AI lacks adequate review and expertise from domains integral to responsible AI development, including ethicists, social scientists, community advocates, and domain specialists in education, healthcare, and criminal justice. Rapid prototyping of AI results in the release and integration of models, and the build-out of ubiquitous infrastructure, that lack crucial context, present serious risks to our safety, and are likely to have disproportionate effects on certain populations, exacerbating age-old inequities that tend to make the world a less stable place.

In addition to impacts on our individual lives, safety, and rights, the all-gas-no-brakes approach to AI development and infrastructure poses other systemic risks: economic volatility from a potential “AI financial bubble” built on zealous investment; national security threats created by our integration of and dependence on AI in major systems, and by the hardware and resource imports required to sustain AI platforms; and the alarming environmental and resource demands of AI data centers.

Meaningful regulation from international governing bodies, nation-states, and local jurisdictions to stem the worst potential threats lags behind. The recent executive order from the U.S. president, aiming to render state-based AI regulation unlawful, demonstrates that even where meaningful and apt frameworks for government regulation exist, the political will to restrain the race is not a given. The irony in the United States is that everyday people’s tax dollars are subsidizing the AI arms race – a compelling consideration that should further motivate us to play a role in how this all-powerful technology is shaped, and shapes us.

The good news is that there are profound opportunities, and many mechanisms, for democratized and purpose-driven development and governance of AI that can and must be advanced on behalf of, and accountable to, the general public. Now is the time for individuals, communities, civil society organizations, and institutions of higher education to take the reins in shaping AI, with a north star of human thriving for all. A range of civil society efforts – including individual action, grassroots organizing, advocacy, large-scale collaborations, and multinational institutions – can properly inform and advocate for the right kind of AI future for our human community.

A constellation of such efforts has already begun to emerge. Around the globe, grassroots community efforts have responded to and influenced proposals for data centers, transcending political divides and citing concerns about water use, energy demand, land use, and long-term environmental and infrastructure impacts. Large philanthropic organizations are also stepping up. In October 2025, a group of prominent philanthropies announced the launch of Humanity AI, a collaborative effort “dedicated to making sure people have a stake in the future of artificial intelligence (AI)”, backed by a $500 million investment. The Partnership on AI (PAI) convenes academic institutions, civil society organizations, industry partners, and media entities to discuss solutions that advance positive outcomes for people and society. At the international level, the Global Partnership on AI (GPAI) operates as a voluntary, multi-stakeholder initiative that convenes working groups of experts from industry, government, civil society, and academia to advance responsible AI through shared research, policy recommendations, and guidance for aligning AI with human rights, democratic values, and inclusive economic growth. In 2024, GPAI partnered with the Organisation for Economic Co-operation and Development (OECD) to coordinate global efforts on trustworthy AI. Various labor organizations have already worked to mitigate the impacts of AI on their members.

Beyond these existing efforts, much more is possible. Nonprofit and civil society organizations that work with communities and youth should deliver trusted programming and initiatives that support intentional usage and discernment of AI, connecting to efforts that support education, civic participation, and other essential elements of healthy individuals, families, and communities. Existing environmental and community-organizing networks can support communities’ self-determination about AI infrastructure. Medical and mental health providers should weigh in on AI’s impacts on human health and the safeguards it requires. Issue advocacy and political organizations should ensure their agendas and platforms include AI policy. Building upon existing civil society assets, we must continue to develop a more comprehensive and diversified network of efforts to ensure the AI technological shift is well informed and shaped for the best aims – by the people and for the people.

Individual action also matters. As powerful as the tidal wave of AI may seem, we each have agency in, and avenues for, shaping AI. As consumers, we can flex our influence through our purchasing power and product selection, choosing AI products and services with governance practices and safety and quality protocols that align with our values, and boycotting those without them. To exert our influence, we must educate ourselves and each other about AI, tapping credible, purpose-driven tools to help us in this pursuit. For example, the Alan Turing Institute has published publicly accessible courses to help individuals learn about various AI functions (e.g., Data Science and AI), principles (e.g., SAFE-D), and implications (e.g., AI Ethics and Governance). Other groups, like the Algorithmic Justice League, provide valuable public resources around AI harms and advocacy. Publicly available platforms like the AI Incident Database allow for collective cataloguing of examples of AI harm experienced by the public and raise awareness of the risks and the need for adjustment. With widespread use of such platforms, we can develop and share personal precautions and standards, and organizations and groups can develop scorecards or rating systems to assess AI systems and further assist consumers in navigating their choices, as sketched below.
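To make the scorecard idea concrete, here is a minimal sketch in Python. Everything in it – the criteria, the weights, and the product name – is hypothetical, chosen only to illustrate how a community group might encode and share such ratings; a real scorecard would derive its criteria from community deliberation and published evidence.

```python
from dataclasses import dataclass

# Hypothetical criteria a community group might rate an AI product on
# (0 = no evidence, 1 = fully meets the standard). All names are illustrative.
CRITERIA_WEIGHTS = {
    "publishes_safety_evaluations": 0.3,
    "discloses_training_data_sources": 0.2,
    "offers_incident_reporting_channel": 0.2,
    "supports_independent_audits": 0.3,
}

@dataclass
class AIProductScorecard:
    product: str
    ratings: dict  # criterion name -> score in [0, 1]

    def overall_score(self) -> float:
        """Weighted average across the criteria, for easy comparison."""
        return sum(CRITERIA_WEIGHTS[c] * self.ratings.get(c, 0.0)
                   for c in CRITERIA_WEIGHTS)

# Example: a fictional product with partial transparency practices
card = AIProductScorecard(
    product="ExampleAssistant",  # fictional product name
    ratings={
        "publishes_safety_evaluations": 1.0,
        "discloses_training_data_sources": 0.5,
        "offers_incident_reporting_channel": 1.0,
        "supports_independent_audits": 0.0,
    },
)
print(f"{card.product}: {card.overall_score():.2f}")  # -> ExampleAssistant: 0.60
```

A weighted average is the simplest possible aggregation; published scorecards could just as well report each criterion separately, so that a strong headline number does not hide a weakness on any single standard.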

Community-based oversight mechanisms are another promising and plausible path for democratizing the influence and information central to AI development. Compelling efforts to involve community members and residents in the governance and regulation of AI are emerging worldwide. In the United States this kind of oversight is less prevalent, but across Europe, participatory audit processes and citizen assemblies are being piloted to hold AI systems accountable. These initiatives bring together non-experts with subject-matter experts and policy-makers to assess the efficacy and impacts of AI tools, identify problematic systems, gather evidence of their effects, and translate those insights into actionable governance and design recommendations – turning lived experience into action. One notable example is the Eticas Foundation, a Barcelona-based nonprofit whose community-led AI audits work directly with the communities affected by AI systems to reverse-engineer and audit them. Deliberative processes that allow citizens to lend their insights and expertise can be a powerful step in shaping AI policy and development – for the greater good.

Higher education institutions can and must also take a leading role in shaping the AI future, providing critical knowledge, evaluation, and analysis. With important research in fields like human ecology, computer science, psychology, sociology, information science, and law, universities house the multidisciplinary expertise needed to ensure that AI is developed responsibly. Thankfully, this work is already underway. Social scientists are documenting the psychological effects of AI companionship on young people’s mental health and emotional development, including risks associated with attachment and inappropriate responses in AI chatbots interacting with minors. Computer scientists, ethicists, and medical researchers are examining how AI decision-support systems in healthcare raise ethical, legal, and regulatory challenges even as they enhance diagnostics and personalized treatment, requiring careful study of patient safety, privacy, fairness, and regulatory frameworks. Interdisciplinary research teams are also investigating the implications of AI in criminal justice systems – for example, how algorithmic risk assessment and facial recognition tools can produce biased and discriminatory outcomes that affect individual rights. These use cases show the benefits of AI while underscoring the need for sociological, legal, and data science perspectives to ensure AI supports our values and aims for society and does not exacerbate existing problems. Higher education institutions also house a mature oversight ecosystem for research and development standards, with established ethical oversight structures (e.g., Institutional Review Boards) and conflict-of-interest policies that have evolved over decades. This existing infrastructure is well positioned to address many of the challenges of AI product development in the for-profit setting.

Some higher education institutions are establishing AI-focused centers such as the Stanford Institute for Human-Centered AI, which brings together researchers across domains to study and shape the societal, ethical, and policy implications of AI and to ensure that technological innovation aligns with human values and public well-being. Stanford HAI’s annual report (the 2025 AI Index Report) describes global trends in AI research, investment, performance, policy, and societal impact. Universities are also exploring technical governance mechanisms, such as hosting local, open foundation models that distribute oversight and experimentation beyond corporate platforms. Initiatives such as the BigScience collaboration’s BLOOM model and Stanford’s Center for Research on Foundation Models offer publicly accessible AI systems, documentation, evaluation frameworks, and auditing tools that serve as governance infrastructure. These efforts provide templates and infrastructure for technical capacity, transparency, and accountability that could extend beyond academic research to other stakeholders. They often involve collaboration with for-profit companies, illustrating the potential for cross-sector collaboration and highlighting the opportunity for university-led governance and science to inform responsible AI development.
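As a minimal sketch of what hosting a local, open foundation model can look like in practice, the following assumes the Hugging Face transformers library and the small, publicly released bigscience/bloom-560m checkpoint from the BigScience collaboration; larger open models follow the same pattern with more hardware.

```python
# Minimal sketch: running an open foundation model on local hardware,
# assuming the Hugging Face `transformers` library is installed
# (pip install transformers torch).
from transformers import pipeline

# The open BLOOM weights are downloaded once and cached; inference then
# runs entirely locally, keeping prompts and outputs off corporate
# platforms and leaving the model itself open to inspection.
generator = pipeline("text-generation", model="bigscience/bloom-560m")

result = generator(
    "Community oversight of AI systems means",
    max_new_tokens=40,
    do_sample=True,
)
print(result[0]["generated_text"])
```

Because the weights, tokenizer, and documentation are public, universities and civil society groups can inspect, evaluate, and modify the model itself – a kind of transparency that closed commercial APIs do not afford.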

Appropriate and effective legislation of AI is a critical building block of a hopeful future, and while it will be no easy accomplishment, civil society can and should have an active role in leveraging its special expertise to advocate for sensible policies. As with any epic technological shift, we may not be able to anticipate and understand the full range of policies and consequences at issue. But civil society can harness an incredibly valuable and robust knowledge base for iterative policy-making, drawing from the experiences of people and communities actually using and being impacted by AI in real time. Civil society organizations and groups can help collect, organize, and harness this critical on-the-ground information – the experiences of users from a broad range of pluralistic perspectives – to understand, analyze, and weigh in on AI’s impacts on an ongoing basis.

Even in a fluid and rapidly changing situation, we can work toward sensible and widely agreed-to policies. U.S. states have served as testing grounds for some such approaches. On September 29, 2025, California enacted Senate Bill 53, the Transparency in Frontier Artificial Intelligence Act, making California the first U.S. state to create a comprehensive transparency and accountability framework specifically targeting the development and deployment of advanced AI models. The law requires companies that use large amounts of computing power and earn over $500 million in annual revenue to disclose their safety best practices and assess catastrophic risks. In Colorado, the AI Act (SB 24-205) focuses on “high-risk” AI systems that could influence consequential decisions in areas including employment, housing, education, and healthcare; it requires developers to mitigate bias risks and implement transparency measures. In Texas, the Responsible AI Governance Act (TRAIGA) prohibits the intentional development or deployment of AI systems that discriminate, impair constitutional rights, or incite harmful acts, and requires businesses to provide information about their AI systems to the Attorney General, creating both an inventory of systems in use and an accountability mechanism. Other states, like Illinois, and cities, like New York, have enacted legislation requiring employers to disclose their use of AI and prohibiting employment discrimination arising from it. However, all state and local AI regulations in the U.S. may now face legal challenge under the president’s December 2025 executive order, which is intended to challenge state laws deemed inconsistent with federal AI policy objectives. Meanwhile, other jurisdictions are promulgating their own regulations; the European Commission, for example, has recently opened investigations of AI platforms under its Digital Services Act. Given AI’s inherently borderless nature, however, a truly global network and movement will be required to mitigate the worst possible consequences and work together toward this technology’s most hopeful possibilities.

Alongside additional regulation and oversight, a parallel set of industry-specific demands, standards, frameworks, and commitments from companies is needed to guide the responsible development of AI. Indeed, some companies are voluntarily embracing the transparency advocated above. Salesforce, for instance, publishes AI Impact Assessment results to inform stakeholders about the principles, testing and mitigation processes, and potential ethical and societal impacts of its AI systems, including how they are reviewed and governed in practice. These assessments are part of a broader effort to align with the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework, a voluntary set of guidelines to help organizations identify and manage AI-related risks. Other companies, such as Microsoft, Google, and IBM, also voluntarily publish their responsible AI practices, impact assessments, and tooling to help stakeholders understand how they identify, measure, and mitigate AI risks. Companies have also contributed foundational frameworks and tools for responsible AI. Early evaluation and auditing frameworks, such as IBM’s AI Fairness 360 toolkit and Google’s Model Cards, provide criteria to test, document, and assess models for issues including bias, performance, and intended use before deployment (see the sketch below). Professional certification programs also exist to foster specialized expertise and professional standards among employees who may advocate for responsible AI within organizations (e.g., the Artificial Intelligence Governance Professional certification and the AI Security & Governance Certification). We should expect companies building AI to set, publicly state, adhere to, and report on meaningful standards for the ethical and responsible development of AI.
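To illustrate the kind of pre-deployment bias check that toolkits like AI Fairness 360 formalize, here is a minimal, self-contained sketch of one widely used criterion, the disparate impact ratio. The decision data is invented purely for illustration; real audits use actual model outputs and many complementary metrics.

```python
# Minimal sketch of one common fairness check, the disparate impact ratio:
# the rate of favorable outcomes for an unprivileged group divided by the
# rate for a privileged group. Toolkits like AI Fairness 360 implement this
# and many other metrics; the toy data below is purely illustrative.

def selection_rate(outcomes: list[int]) -> float:
    """Fraction of favorable (1) outcomes among a group's model decisions."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(unprivileged: list[int], privileged: list[int]) -> float:
    """Ratio of selection rates; values below ~0.8 (the EEOC's
    four-fifths rule of thumb) are a common red flag."""
    return selection_rate(unprivileged) / selection_rate(privileged)

# Hypothetical hiring-model decisions (1 = recommended for interview)
group_a = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]  # unprivileged group: 30% rate
group_b = [1, 1, 0, 1, 1, 0, 1, 0, 1, 0]  # privileged group: 60% rate

ratio = disparate_impact(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # -> 0.50, well below 0.8
```

A single number like this cannot establish fairness on its own, which is why such toolkits pair it with documentation practices (like Model Cards) describing a model’s intended use, evaluation data, and known limitations.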

We – the global community, purpose-driven organizations, and scientific experts – must collectively shape the future of AI and its influence on the human experience through a broad, network-based approach: engaging, collecting information, and advocating for a future AI reality that supports human thriving. We have the means and levers to do this, and we must begin engaging now to counterbalance the currently outsized influence of for-profit companies and shareholders and the lagging response of the public sector. The first step is getting clear-eyed about how AI is unfolding, then taking steps as individuals and organizations to gather and share information, consume responsibly, and advocate for the iterative policy and practice responses needed. A sustainable, broad, and multifaceted effort – by and for the people – can serve as a safety net against the worst potential risks, and as a springboard for the best possibilities of our AI future.

A Call to Action

  • Individuals:

    • Learn as much as we can about the status and impacts of AI, the way AI shapes and affects us, and the way our own use of AI shapes it.
    • Expect and demand reliable, credible sources of information.  
    • Use AI ethically and carefully, minimizing harm due to our own use, and reporting incidents of harm caused.  Expect and demand ethical and responsible use of AI from others.
    • Make thoughtful choices in our AI consumption – boycotting those platforms that are least responsible with ethics and impact, and consciously choosing those making good efforts to put appropriate guardrails and policies in place.
    • Express our expectations for AI standards and apply pressure on markets and policy-makers for appropriate internal and external regulation of AI products. 
    • Share knowledge and perspectives with others to uphold quality and community standards.
    • Support candidates and policy platforms which address responsible AI development and guardrails.
  • Community, Civil Society and Philanthropic Organizations:

    • Support and conduct proactive programming, teaching, research, dissemination, and cross-sector participation to help individuals and communities engage responsibly and knowledgeably with AI.
    • Help organize people to hold companies and policy-makers accountable for minimizing harmful impacts and responsibly developing AI.
    • Work together toward community self-determination while navigating AI’s impact on social, environmental, and labor issues.
    • Support and conduct the collection and distillation of information gathered from community engagement with AI – publishing guides and scorecards, reporting incidents, and sharing information and tools for AI literacy.
  • Institutions of Higher Education:

    • Contribute multidisciplinary knowledge and science to the process of responsible AI development; study impacts of AI and share and publish findings with communities, policy-makers, and companies.  
    • Lend established approaches to research oversight, governance, and conflict-of-interest avoidance to entities conducting AI research and development.
    • Teach learners how to responsibly engage with AI.
  • Policymakers:

    • Promulgate sensible legislation and regulation of the industry – including transparency, harm minimization, and support for communities making knowledgeable decisions about AI infrastructure.
    • Take into account feedback and concerns from community members and civil society organizations, and learn from research promulgated by institutions of higher education.
  • AI Companies:

    • Conduct cross-sector collaboration to ensure AI development and infrastructure are informed by community input and knowledge generated from credible higher education and civil society organizations.  
    • Create and adhere to governance structures that will ensure some level of oversight of company executives’ decision-making.