By Andreas Voniatis
Ethics of Generative AI Statistics: USA 2025
Generative AI has evolved from a buzzword to a business tool, and the conversation is shifting from "Can we use it?" to "Should we?" Questions about fairness, transparency, and accountability are becoming more frequent, and business leaders are giving careful consideration to the role ethics should play in AI adoption.
To find out what 2,389,645 US business leaders think about the ethics of generative AI, we used AI-driven audience profiling to synthesize insights from online discussions over the year ending 14 July 2025, to a high level of statistical confidence. Their insights offer a clear picture of the values, concerns, and expectations shaping how AI is being used in real-world business settings.
Index
- 27% of business leaders say using AI in their organization is a potential worry due to ethical misuse
- 32% of business leaders consider regular tool evaluations an important part of ensuring teams use AI responsibly
- 47% of business leaders consider data privacy policies absolutely crucial when choosing a generative AI provider
- Transparent reporting is a crucial step in addressing bias in AI for 67% of business leaders
- 23% of business leaders agree that technology platforms should be crucial leaders in ethical AI development
- Originality and creativity are important considerations in AI-driven content for 28% of business leaders
- Intellectual property theft is somewhat concerning for 24% of business leaders using generative AI
- 14% of business leaders agree that end users should have a crucial influence over AI ethics
- 26% of business leaders often use case-by-case discretion when disclosing the use of AI-generated content
- 38% of leadership teams are somewhat involved in actively leading strategies for AI use policies
- 48% of business leaders agree that reliable vendor ethics make an AI tool absolutely trustworthy
- 16% of business leaders say encouraging responsibility to use AI ethically is absolutely essential
- 29% of business leaders say live demos and Q&As are essential for training teams to adopt AI ethically
- 23% of business leaders say it’s somewhat difficult to maintain accountability when using AI tools
- Methodology
What Is Your Greatest Concern About Using Generative AI in Your Organization?
27% of business leaders say using AI in their organization is a potential worry due to ethical misuse
Concerns about AI vary widely, but a few recurring themes are starting to take shape:
Concerns around using AI in the workplace tap into some big-picture questions about ethics, accuracy, and accountability. While just 1% of business leaders in our audience see ethical misuse as a major concern, 27% say it’s a potential issue, another 18% feel it’s not a big worry, and 1% have no concern at all.
The risk of misinformation stands out a little more, with 4% calling it a major concern and 25% saying it’s a potential issue, while 3% aren’t too worried about it. These patterns line up with insights from a recent study on the ethical implications of AI in business, which points to transparency, bias, and accountability as key ethical themes in business AI.
Further down the list, regulatory uncertainty is a potential issue for 3% and not a big worry for 8%. Lack of transparency is flagged by 7%, and 1% mention job displacement as a potential issue. While few business leaders see AI ethics or misinformation as major concerns, many acknowledge them as potential issues, pointing to growing awareness around transparency, bias, and accountability in workplace AI.
How Do You Ensure Your Team Uses Generative AI Tools Responsibly?
32% of business leaders consider regular tool evaluations an important part of ensuring teams use AI responsibly
There’s no single approach to responsible AI use, but some strategies are gaining traction:
Making sure AI tools are used responsibly means putting the right structures in place, and different teams approach this in different ways. The Harvard Business Review recommends starting with clear expectations and refining usage over time as part of a four-phase framework for implementing AI responsibly. That kind of thinking shows up in our audience: 13% of business leaders absolutely ensure regular tool evaluations, 32% consider them important, 9% find them somewhat necessary, and 11% don't prioritize them.
Mandatory training programs are absolutely ensured by 20%, considered important by 5%, somewhat necessary by 1%, and not prioritized by 2%. Clear internal guidelines are absolutely ensured by 1% and considered important by 5%.
Interestingly, no opinions were recorded regarding third-party oversight or team-level accountability, suggesting that while some foundational practices like training and tool evaluations are gaining traction, broader governance measures may still be underexplored or evolving in many organizations.
What Do You Consider Most Important When Choosing A Generative AI Provider?
47% of business leaders consider data privacy policies absolutely crucial when choosing a generative AI provider
When business leaders size up AI providers, one priority rises to the top:
In choosing an AI provider, business leaders in our audience weigh a mix of factors, with one clearly dominating. Data privacy policies lead the pack by a wide margin, with 47% of business leaders saying they are absolutely crucial, 12% calling them an important consideration, and 3% seeing them as somewhat relevant. That lines up with wider AI privacy best practices, which call for risk assessments, consent management, and safeguards for sensitive data.
A provider’s commitment to safety is absolutely crucial for 18% and an important consideration for 1%, while ethical certifications are absolutely crucial for 8% and an important consideration for 2%. Bias mitigation processes are absolutely crucial for 4% and an important consideration for 5%, while no one voiced an opinion on model transparency, suggesting it isn’t yet a talking point.
How Should Bias In AI-Generated Content Be Addressed?
Transparent reporting is a crucial step in addressing bias in AI for 67% of business leaders
One approach to AI bias mitigation clearly leads the conversation:
In tackling bias in AI-generated content, most business leaders agree that transparency matters most. A systematic review of 555 AI models found that 83.1% were rated as high risk for bias, which helps explain why 67% of our audience say transparent reporting is a crucial step, and 14% consider it an important consideration.
Open-source oversight is viewed as a crucial step by 12% and an important consideration by 6%. Diverse testing teams rank much lower, with just 1% calling them a crucial step and another 1% viewing them as important, reinforcing where priorities lie.
Which Industry Should Lead The Way In Ethical AI Development?
23% of business leaders agree that technology platforms should be crucial leaders in ethical AI development
Opinions are mixed on who should be out in front in ethical AI development:
Our audience of US business leaders has strong views on which industry should take the lead in developing AI responsibly. Tech platforms are seen as crucial leaders by 23%, with another 6% saying they should take initiative, 4% saying they’re not the best fit, and 5% stating they should not lead. That level of trust reflects real-world practices, with brands like Adobe being applauded for their long-standing AI impact assessments before launching any new features.
Business communities are next, with 15% seeing them as crucial leaders, 7% saying they should take initiative, 6% calling them not the best fit, and another 6% saying they should not lead. Academic institutions are seen as crucial leaders by 7%, with 6% saying they should take the initiative, 1% calling them not the best fit, and 1% saying they should not lead. Government regulators are viewed as crucial leaders by 7% and as needing to take initiative by 5%, while public advocacy groups are named as crucial leaders by just 1%.
From these statistics, it’s evident that business leaders place the greatest trust in tech platforms to lead responsible AI development, while support for other sectors like business communities, academia, and regulators remains more cautious and divided.
What Value Should AI-Driven Content Always Prioritize?
Originality and creativity are important considerations in AI-driven content for 28% of business leaders
Business leaders are selective about which values AI content really needs to reflect:
Among the values AI-driven content should prioritize, business leaders place the greatest weight on originality and accuracy. As reports confirm, AI struggles with factual accuracy and originality, which helps explain why these two qualities stand out. Originality and creativity are seen as absolutely essential by 13%, an important consideration by 28%, less critical by 7%, and not a priority by 8%. Truthfulness and accuracy are absolutely essential for 13% and an important consideration for 8%.
Further down the list of priorities, brand alignment is absolutely essential for 1%, important for 4%, less critical for 6%, and not a priority for 3%, while audience trust is absolutely essential for 2%, important for 4%, and less critical for 2%. Interestingly, inclusivity and fairness did not come up at all, despite ongoing concerns about AI bias.
What Ethical Challenge Worries You Most When Using Generative AI?
Intellectual property theft is somewhat concerning for 24% of business leaders using generative AI
Some ethical challenges spark stronger reactions than others as business leaders navigate AI adoption:
Concerns about intellectual property theft top the list of ethical worries about using generative AI, with 24% of our audience saying it’s somewhat concerning, although 20% say it’s not a big issue, and 1% are not concerned at all. These worries align with findings in Policy & Society, which point out how many generative AI models are trained on copyrighted content scraped from the internet without permission or attribution. This likely contributes to the 4% who feel a loss of author ownership is somewhat concerning, while 2% say it’s not a big issue.
Amplification of stereotypes is somewhat concerning for 11% of US business leaders, although 20% view it as not a big issue, and 3% don’t see it as a concern at all. Undisclosed AI usage draws 6% who find it somewhat concerning, compared to 2% who do not see it as a big issue. Deepfake risks register as somewhat concerning for 3%, while another 3% say they’re not a big issue.
This suggests that while intellectual property theft stands out as the leading ethical concern around generative AI, many business leaders remain divided on other issues, revealing a broader need for clearer guidelines and shared standards in responsible AI use.
Which Stakeholder Group Should Influence AI Ethics Most?
14% of business leaders agree that end users should have a crucial influence over AI ethics
No single stakeholder group can carry the load on AI ethics alone:
Deciding which stakeholder group should influence AI ethics most reveals a range of opinions among business leaders. While 14% say end users should have a crucial influence and another 18% believe they have an important role, 9% see their impact as limited, and 9% say it should be minimal.
Legal professionals are seen as crucial by 9%, important by another 9%, and limited by 1%. Human rights advocates are rated as crucial by 10%, important by 2%, with 1% calling their impact limited and 2% saying it should be minimal. Industry experts receive slightly lower support, with 6% calling their influence crucial, 7% calling it important, and 2% saying it is limited.
According to a recent analysis on ethical AI, this diversity of opinion makes sense. Ethical AI is a shared responsibility, with business leaders, policymakers, academics, civil society, and end users each playing a vital role in shaping how AI is developed and used.
How Do You Disclose AI-Generated Content To Your Audience?
26% of business leaders often use case-by-case discretion when disclosing the use of AI-generated content
AI disclosure is actioned in different ways:
In 2024, Utah became the first state to mandate certain consumer disclosure requirements relating to AI with its Artificial Intelligence Policy Act. Other states, including California and Colorado, have since implemented similar far-reaching AI use policies, and more are following.
For our audience of business leaders, this is an obvious consideration: 26% often use case-by-case discretion when disclosing AI-generated content, 5% often or strictly use clear labelling on all channels, 6% often and 2% strictly disclose in the fine print, and 12% disclose only when required.
However, a large number of business leaders are less transparent about their use of AI-generated content: 15% rarely use case-by-case discretion, 7% strictly use case-by-case discretion, 8% rarely disclose the use of AI unless it’s required, and 4% strictly disclose only when required, depending on the use case. Another 4% never use case-by-case discretion, 5% rarely disclose in the fine print, and 1% never use clear labelling on all channels or do so only when required.
With such contrasting opinions, it will be interesting to see how regulations reshape the landscape in the next few years as AI use and the disclosure thereof become more regulated.
How Involved Is Your Leadership Team In Shaping AI Use Policies?
38% of leadership teams are somewhat involved in actively leading strategies for AI use policies
Some leaders are all-in on AI policy, while others barely show up to the conversation:
How involved leadership teams are in shaping AI use policies varies significantly across organizations. While 17% of business leaders say their leadership team is deeply involved in actively leading strategy, 38% report they are somewhat involved, 10% note limited involvement, and 2% say there is no involvement at all.
In terms of consulting stakeholders, only 2% describe their leaders as deeply involved, while 14% say they’re somewhat involved and 1% note limited engagement. Delegating to tech leads sees even lower participation, with just 1% reporting deep involvement, 3% citing some, and 1% indicating limited input. Some leaders are not currently involved, with 3% naming limited involvement and another 3% saying there is none. Only a small group reviews AI policies quarterly, with 1% deeply engaged and 4% somewhat involved.
This shows that while some leadership teams are actively shaping AI strategy, overall involvement remains limited, pointing to a gap between strategic intent and hands-on governance in many organizations.
What Makes An AI Tool Trustworthy?
48% of business leaders agree that reliable vendor ethics make an AI tool absolutely trustworthy
Trust in AI tools centers around a single, standout priority:
When business leaders think about what makes an AI tool truly trustworthy, one factor stands out by far. A full 48% say reliable vendor ethics make a tool absolutely trustworthy. Another 11% also trust AI tools when the vendor’s ethics are sound, while 11% are cautiously optimistic, calling it somewhat trustworthy.
This emphasis on vendor ethics aligns with broader concerns in the market. A recent Cisco study found that 27% of organizations have banned the use of generative AI due to data privacy and security risks. That stat reinforces just how seriously many leaders take the ethical stance of their providers.
Other trust factors carry weight too, though not to the same degree. Consistent human review is seen as absolutely trustworthy by 4%, trusted by 11%, and somewhat trustworthy by 8%, though 5% do not trust it at all. Peer-reviewed development earns an “absolutely trustworthy” vote from just 1%, and another 1% view it as not trustworthy at all. However, no business leaders in our audience mentioned clear output controls as part of their trust criteria, indicating they may still be underrecognized or undervalued in current trust frameworks.
What Ethical Principle Do You Most Emphasize Internally?
16% of business leaders say encouraging responsibility to use AI ethically is absolutely essential
Business leaders are drawing clear lines around which ethical principles matter most in their use of AI:
When it comes to the ethical principles business leaders emphasize most internally, responsibility and harm reduction emerge as the top priorities. 16% say encouraging responsibility is absolutely essential, 9% say it is important to consider, 1% view it as somewhat relevant, and another 1% say it is not a priority.
For the principle of “do no harm,” opinions are almost identical, with 15% saying it’s absolutely essential, 9% saying it is important to consider, 2% seeing it as somewhat relevant, and 1% saying it is not a priority. This strong focus on harm reduction aligns with UNESCO’s global ethical guidance, which places “proportionality and do no harm” as the first of ten core principles in its human rights-based approach to AI.
The principle of protecting data rights is considered absolutely essential by 12% of business leaders, important to consider by 7%, and somewhat relevant by 1%, while maintaining transparency is viewed as essential by 10% and important to consider by 5%. The emphasis on promoting equality is lower, with 5% rating it as absolutely essential, 4% as important to consider, and 3% as somewhat relevant.
What Kind Of Training Would Help Your Team Adopt AI Ethically?
29% of business leaders say live demos and Q&As are essential for training teams to adopt AI ethically
One AI ethics training format stands out as the go-to:
Live demos and Q&A sessions are the clear favorite training format for helping teams adopt AI more ethically. A full 29% of business leaders say this kind of hands-on format is essential, while 48% consider it helpful. Only 8% view it as not a priority. The popularity of this format suggests that business leaders are looking for interactive, real-world exposure where teams can ask questions, see AI in action, and better understand its implications.
Other formats received far less traction. Just 5% say scenario-based workshops are essential, with another 4% finding them helpful. Peer collaboration programs were considered essential by only 4%, and role-specific courses saw the least support, with only 1% rating them essential.
What Ethical Value Is Hardest To Maintain With Generative AI Tools?
23% of business leaders say it’s somewhat difficult to maintain accountability when using AI tools
Maintaining core ethical standards is easier said than done when AI enters the picture:
Business leaders are finding that certain ethical values are more difficult to uphold when working with AI tools. Accountability stands out as the biggest hurdle, with 23% calling it somewhat difficult and another 8% saying it’s manageable. Accuracy presents another challenge, with 16% calling it somewhat difficult and 10% manageable, and just 1% saying it’s not a concern.
Fairness also raises flags, with 15% saying it’s somewhat difficult and 1% saying it’s manageable. Consent follows a similar pattern, with 19% finding it somewhat difficult and 1% manageable. Transparency appears to be less of a struggle, with 3% saying it’s somewhat difficult and 3% viewing it as manageable.
These concerns mirror broader workforce anxieties. McKinsey’s AI in the Workplace Report 2025 found that 50% of US employees see inaccuracy as a key risk of generative AI, while 30% highlight equity and fairness. While AI adoption is growing, trust in its ability to uphold core values remains fragile, especially around human judgment, fairness, and verifiable truth.
It’s obvious that generative AI has quickly become a powerful force across industries, but the question of how to use it responsibly remains front and center. These opinions of more than 2 million US business leaders reveal a clear push for accountability, transparency, and ethical leadership at every stage of adoption. As organizations continue to explore what AI can do, they’re also shaping what it should do.
Methodology
Sourced using Artios from an independent sample of 2,389,645 United States business leaders’ opinions across X, Reddit, TikTok, LinkedIn, Threads, and BlueSky. Responses are collected at a 95% confidence level with a 5% margin of error. Results are derived from opinions expressed online, not actual questions answered by people in the sample.
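For context on what those figures imply, here is a minimal sketch of the standard margin-of-error formula for an estimated proportion. It is an illustration of the underlying arithmetic only, not a description of the Artios pipeline; the use of p = 0.5 as the worst case is our assumption.

```python
import math

# Margin of error for an estimated proportion p from a sample of size n.
# z = 1.96 is the two-sided critical value for a 95% confidence level.
def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    return z * math.sqrt(p * (1 - p) / n)

# Sample size needed for a 5% margin of error in the worst case (p = 0.5):
n_required = math.ceil(1.96**2 * 0.25 / 0.05**2)
print(n_required)  # 385

# At this study's sample size, the worst-case margin of error (in percent):
print(round(margin_of_error(0.5, 2_389_645) * 100, 4))  # ~0.0634
```

At a sample of 2,389,645, the worst-case margin of error works out to well under 1%, which is consistent with the high level of statistical confidence described in the introduction.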
About the representative sample:
- 51% of business leaders are between the ages of 35 and 64.
- 53% identify as female and 47% as male.
- 34% earn between $200,000 and $500,000 annually.
- The highest number (20%) is based in the Pacific US.