Pew Research Center – By Lee Rainie, Cary Funk, Monica Anderson and Alec Tyson – March 17, 2022
Americans’ Openness Is Tempered by a Range of Concerns. Public views are tied to how these technologies would be used and what constraints would be in place
Developments in artificial intelligence and human enhancement technologies have the potential to remake American society in the coming decades. A new Pew Research Center survey finds that Americans see promise in the ways these technologies could improve daily life and human abilities. Yet public views are also defined by the context of how these technologies would be used, what constraints would be in place, and who would stand to benefit – or lose – if these advances become widespread.
Fundamentally, caution runs through public views of artificial intelligence (AI) and human enhancement applications, often centered on concerns about autonomy, unintended consequences, and the amount of change these developments might mean for humans and society. People think economic disparities might worsen as some advances emerge and that technologies like facial recognition software could lead to more surveillance of Black or Hispanic Americans.
This survey looks at a broad arc of scientific and technological developments – some in use now, some still emerging. It concentrates on public views about six developments that are widely discussed among futurists, ethicists, and policy advocates. Three are part of the burgeoning array of AI applications: the use of facial recognition technology by police, the use of algorithms by social media companies to find false information on their sites, and the development of driverless passenger vehicles.
The other three, often described as types of human enhancements, revolve around developments tied to the convergence of AI, biotechnology, nanotechnology, and other fields. They raise the possibility of dramatic changes to human abilities in the future: computer chip implants in the brain to advance people’s cognitive skills, gene editing to greatly reduce a baby’s risk of developing serious diseases or health conditions, and robotic exoskeletons with a built-in AI system to greatly increase strength for lifting in manual labor jobs.
The current report builds on previous Pew Research Center analyses of attitudes about emerging scientific and technological developments and their implications for society, including opinions about animal genetic engineering and the potential to “enhance” human abilities through biomedical interventions, as well as views about automation and computer algorithms.
As Americans make judgments about the potential impact of AI and human enhancement applications, their views are varied and, for portions of the public, infused with uncertainty.
Americans are far more positive than negative about the widespread use of facial recognition technology by police to monitor crowds and look for people who may have committed a crime: 46% of U.S. adults think this would be a good idea for society, while 27% think this would be a bad idea and another 27% are unsure.
By narrower margins, more describe the use of computer algorithms by social media companies to find false information on their sites as a good rather than bad idea for society (38% vs. 31%), and the pattern is similar for the use of robotic exoskeletons with a built-in AI system to increase strength for manual labor jobs (33% vs. 24%).
By contrast, the public is much more cautious about a future with the widespread use of computer chip implants in the brain to allow people to far more quickly and accurately process information: 56% say this would be a bad idea for society, while just 13% think this would be a good idea. And when it comes to the much-discussed possibility of a future with autonomous passenger vehicles in widespread use, more Americans say this would be a bad idea (44%) than a good idea (26%).
Still, uncertainty is among the themes seen in emerging public views of AI and human enhancement applications. For instance, 42% are not sure how the widespread use of robotic exoskeletons in manual labor jobs would impact society. Similarly, 39% say they are not sure about the potential implications for society if gene editing is widely used to change the DNA of embryos to greatly reduce a baby’s risk of developing serious diseases or health conditions over their lifetime.
Ambivalence is another theme in the survey data: 45% say they are equally excited and concerned about the increased use of AI programs in daily life, compared with 37% who say they are more concerned than excited and 18% who say they are more excited than concerned.
A survey respondent summed up his excitement about the increased use of artificial intelligence in response to an open-ended question, saying:
“AI can help slingshot us into the future. It gives us the ability to focus on more complex issues and use the computing power of AI to solve world issues faster. AI should be used to help improve society as a whole if used correctly. This only works if we use it for the greater good and not for greed and power. AI is a tool, but it all depends on how this tool will be used.” – Man, 30s
Another respondent explained her ethical concerns about the increased use of AI this way:
“It’s just not normal. It’s removing the human race from doing the things that we should be doing. It’s scary because I’ve read from scientists that in the near future, robots can end up making decisions that we have no control over. I don’t like it at all.” – Woman, 60s
It is important to note that views on these specific applications do not constitute the full scope of opinions about the growing number of uses of AI and the proliferating possible advances being contemplated to boost human abilities.
The survey was built around six vignettes, to root opinion in a specific context and allow for a deeper exploration of views. Thus, our questions about public attitudes about facial recognition technology are not intended to cover all possible uses but, instead, to measure opinions about its use by police. Similarly, we concentrated our exploration of brain chip implants on their potential to allow people to far more efficiently process information rather than on the use of brain implants to address therapeutic needs, such as helping people with spinal cord injuries restore movement.
The survey findings underscore how public opinion is often contingent on the goals and circumstances around the uses of AI and human enhancement technologies. For example, in addition to exploring views about the use of facial recognition by police in-depth, the survey also sought opinions about several other possible uses of facial recognition technology. It shows that more U.S. adults oppose than favor the idea of social media sites using facial recognition to automatically identify people in photos (57% vs. 19%) and more oppose than favor the idea that companies might use facial recognition to automatically track the attendance of their employees (48% vs. 30%).
Some of the key themes in the survey of 10,260 U.S. adults, conducted in early November 2021:
Americans believe the emerging technologies of this new era should be held to higher standards for assessing their safety.
The survey sought public views about how to ensure the safety and effectiveness of the four technologies still in development and not widely used today. Across the set, there is strong support for the idea that higher standards should be applied, rather than the standards that are currently the norm. For instance, 87% of Americans say that higher standards for testing driverless cars should be in place, rather than using existing standards for passenger cars.
And 83% believe the testing of brain chip implants should meet a higher standard than is currently in use to test medical devices. Eight-in-ten Americans say that the testing regime for gene editing to greatly reduce a baby’s risk of serious diseases should be higher than that currently applied to testing medical treatments; 72% think the testing of robotic exoskeletons for manual labor should use higher standards than those currently applied to workplace equipment.
Sharp partisan divisions anchor people’s views about possible government regulation of these new and developing technologies. As people think about possible government regulation of these six scientific and technological developments, which prospect gives them more concern: that the government will go too far, or that it will not go far enough, in regulating their use?
Majorities of Republicans and independents who lean to the Republican Party say they are more concerned about government overreach, while majorities of Democrats and Democratic leaners worry more that there will be too little oversight.
For example, Republicans are more likely than Democrats to say their greater concern is that the government will go too far in regulating the use of robotic exoskeletons for manual labor (67% vs. 33%). Conversely, Democrats are more likely than Republicans to say their concern is that government regulation will not go far enough.
People are relatively open to the idea that a variety of actors – in addition to the federal government – should have a role in setting the standards for how these technologies should be regulated. Across all six applications, majorities believe that federal government agencies, the creators of the different AI systems and human enhancement technologies, and end-users should play at least a minor role in setting standards.
Less than half of the public believes these technologies would improve things over the current situation.
One factor tied to public views of human enhancement is whether people think these developments would make life better than it is now, or whether reliance on AI would improve human judgment or performance. On these questions, less than half of the public is convinced improvements would result.
For example, 32% of Americans think that robotic exoskeletons with built-in AI systems to increase strength for manual labor would generally lead to improved working conditions. However, 36% think their use would not make much difference and 31% say they would make working conditions worse.
In thinking about a future with the widespread use of driverless cars, 39% believe the number of people killed or injured in traffic accidents would go down. But 27% think the number killed or injured would go up; 31% say there would be little effect on traffic fatalities or injuries.
Similarly, 34% think the widespread use of facial recognition by police would make policing fairer; 40% think that it would not make much difference, and 25% think it would make policing less fair.
Another concern for Americans is tied to the potential impact of these emerging technologies on social equity. People are far more likely to say the widespread use of several of these technologies would increase rather than decrease the gap between higher- and lower-income Americans. For instance, 57% say the widespread use of brain chips for enhanced cognitive function would increase the gap between higher- and lower-income Americans; just 10% say it would decrease the gap. There are similar patterns in views about the widespread use of driverless cars and gene editing for babies to greatly reduce the risk of serious disease during their lifetime.
Even for far-reaching applications, such as the widespread use of driverless cars and brain chip implants, there are mitigating steps people say would make them more acceptable.
A desire to retain the ability to shape their own destinies is a theme seen in public views across AI and human enhancement technologies. For even the most advanced technologies, there are mitigating steps – some of which address the issue of autonomy – that Americans say would make the use of these technologies more acceptable.
Seven-in-ten Americans say they would find driverless cars more acceptable if such cars were required to be labeled as driverless so they could be easily identified on the road, and 67% would find driverless cars more acceptable if these cars were required to travel in dedicated lanes. In addition, 57% say their use would be more acceptable if a licensed driver were required to be in the vehicle.
Similarly, about six-in-ten Americans think the use of computer chip implants in the brain would be more acceptable if people could turn on and off the effects, and 53% would find the brain implants more acceptable if the computer chips could be put in place without surgery.
About half or more also see mitigating steps that would make the use of robotic exoskeletons, facial recognition technology by police, and gene editing in babies to greatly reduce the risk of serious disease during their lifetime more acceptable.
A map of this report
The chapters that follow cover a broad terrain.
How Americans think about artificial intelligence: Chapter 1 looks at people’s views about the increasing use of AI in everyday life and summarizes their written responses to an open-ended question about their concerns and excitement. It identifies some of the potential uses of AI that prompt more excitement than concern from the public – for instance, AI systems that can help with household chores. And it highlights some applications that would concern the public, including the potential of AI programs to know people’s thoughts and behaviors or make important life decisions for people. The chapter also looks at the common themes and demographic differences in how Americans think about the three specific contexts for AI in the survey.
Public more likely to see facial recognition use by police as good, rather than bad for society: Some 21% of Americans say they have heard or read a lot about this use of technology, 58% have heard a little and 20% have heard nothing at all. A plurality (46%) believes it is a good idea for society. Still, a 57% majority says that if widespread use of facial recognition by police occurs, crime would stay about the same. And 66% say police definitely or probably would use facial recognition to monitor Black and Hispanic neighborhoods much more often than other neighborhoods.
Mixed views about social media companies using algorithms to find false information: About a quarter (24%) of Americans have heard or read a lot about this, 51% have heard a little and 24% have heard nothing at all. Many social media users have seen information on these sites that has been flagged or labeled as false. Seven-in-ten think the widespread use of algorithms to find false information is leading to censorship of political viewpoints, and 69% say it’s leading to news and information being wrongly removed from the sites.
Americans cautious about the deployment of driverless cars: About a quarter of U.S. adults (26%) have heard a lot about driverless cars, compared with 62% who have heard a little and 12% who have heard nothing at all. Some 45% would be not too or not at all comfortable sharing the road with them, and more say they would not want to ride in a driverless vehicle themselves than say they would want to do this (63% vs. 37%).
What Americans think about possibilities ahead for human enhancement: Chapter 5 looks at how people anticipate a future where scientific and technological advances could bring fundamental shifts in human abilities. Americans are more enthusiastic about possibilities that could bring therapeutic benefits to people, such as allowing increased movement for people who are paralyzed. There is generally far less enthusiasm for using these technologies to enhance human abilities in ways that don’t address a clear need. Across possible uses, men are generally more supportive of potential changes to human abilities than women. Those with higher levels of religious commitment often express concern and are more likely to see such changes as meddling with nature, compared with those who have lower levels of religious commitment.
Public cautious about enhancing cognitive function using computer chip implants in the brain: A 62% majority foresees potential benefits for job productivity from brain chip implants for far faster and more accurate information processing. But most Americans (78%) say they, personally, would not want a brain chip implant if it were available. And 63% say widespread use of brain chips for cognitive enhancement would be meddling with nature and crossing a line we should not cross; far fewer (35%) say this would be in keeping with other ways humans have tried to better themselves over time.
Americans are closely divided over editing a baby’s genes to reduce serious health risks: On a personal level, about half of Americans say they would want gene editing for their own baby to greatly reduce the baby’s risks of developing a serious disease or health conditions, while roughly the same share say they would not want this (48% vs. 49%). At the same time, a majority (73%) think most parents would feel pressure to get this for their baby if the use of this technology becomes widespread.
Mixed views about a future with the widespread use of robotic exoskeletons to increase strength for manual labor jobs: Americans anticipate both benefits and downsides for workers from the possibility of widespread use of robotic exoskeletons with a built-in AI system to increase strength for manual labor jobs such as manufacturing or construction. About two-thirds (65%) see the potential for a wider array of people to fill such jobs, and 70% think the use of robotic exoskeletons would help prevent injuries on the job. At the same time, large majorities see this development as leading to worker layoffs (81%) and anticipate the loss of strength for workers who rely on these devices (73%).
How Americans think about artificial intelligence
Artificial intelligence (AI) is spreading through society into some of the most important sectors of people’s lives – from health care and legal services to agriculture and transportation. As Americans watch this proliferation, they are worried in some ways and excited in others.
In broad strokes, a larger share of Americans say they are “more concerned than excited” by the increased use of AI in daily life than say the opposite. Nearly half of U.S. adults (45%) say they are equally concerned and excited. Asked to explain in their own words what concerns them most about AI, some of those who are more concerned than excited cite worries about the potential loss of jobs, privacy considerations, and the prospect that AI might surpass human skills – and others say it will lead to a loss of human connection, be misused or be relied on too much.
But others are “more excited than concerned,” and they mention such things as the societal improvements they hope will emerge, the time savings and efficiencies AI can bring to daily life, and the ways in which AI systems might be helpful and safer at work. And people have mixed views on whether three specific AI applications are good or bad for society at large.
This chapter covers the general findings of the survey related to AI programs. It also runs through highlights from in-depth explorations of public attitudes about three AI-related applications that are fully explored in the three chapters after this. Some key findings:
How Pew Research Center approached this topic
The Center survey asked respondents a series of questions about three applications of artificial intelligence (AI):
- Facial recognition technology could be used by police to look for people who may have committed a crime or to monitor crowds in public spaces.
- Computer programs, called algorithms, are used by social media companies to find false information about important topics that appears on their sites.
- Driverless passenger vehicles are equipped with software allowing them to operate with computer assistance, and they are expected to be able to operate entirely on their own, without a human driver, in the future.
Other questions asked respondents their feelings about AI’s increased use, the way AI programs are designed, and a range of other possible AI applications.
This study builds on prior Center research including surveys on Americans’ views about automation in everyday life, the role of algorithms in parts of society, and the use of facial recognition technology. It also draws on insights from several canvassings of experts about the future of AI and humans.
Use of facial recognition by police: We chose to explore the use of facial recognition by police because police reform has been a major topic of debate, especially in the wake of the killing of George Floyd in May 2020 and the ensuing protests. The survey shows that a plurality (46%) thinks the use of this technology by police is a good idea for society, while 27% believe it is a bad idea and 27% say they are not sure. At the same time, 57% think crime would stay about the same if the use of facial recognition by the police becomes widespread, while 33% think crime would decrease and 8% think it would rise.
Moreover, there are divided views about how the widespread use of facial recognition technology would impact the fairness of policing. Majorities believe it is definitely or probably likely that widespread police use of this technology would result in more missing persons being found by police and crimes being solved more quickly and efficiently. Still, about two-thirds also think the police would be able to track everyone’s location at all times and that police would monitor Black and Hispanic neighborhoods much more often than other neighborhoods.
Use of computer programs by social media companies to find false information on their sites: We chose to study attitudes about the use of computer programs (algorithms) by social media companies because social media is used by a majority of U.S. adults. There are also concerns about the impact of made-up information and how efforts to target misinformation might affect the freedom of information. The survey finds that 38% of U.S. adults think that the widespread use of computer programs by social media companies to find false information on their sites has been a good idea for society, compared with 31% who say it is a bad idea and 30% who say they are not sure.
When asked about specific possible impacts, public views are largely negative. Majorities believe the widespread use of algorithms by social media companies to find false information is definitely or probably causing political views to be censored and news and information to be wrongly removed from the sites. And majorities do not think these algorithms are causing beneficial things to happen, like making it easier to find trustworthy information or allowing people to have more meaningful conversations. There are substantial partisan differences on these questions, with Republicans and those who lean toward the GOP holding more negative views than Democrats and Democratic leaners.
Driverless passenger vehicles: We chose to study public views about driverless passenger vehicles because they are being tested on roads now and their rollout on a larger scale is being debated. The survey finds that a plurality of Americans (44%) believe that the widespread use of driverless passenger vehicles would be a bad idea for society. That compares with the 26% who think this would be a good idea. Some 29% say they are not sure. A majority say they definitely or probably would not want to ride in a driverless car if they had the opportunity. Some 39% believe widespread use of driverless cars would decrease the number of people killed or injured in traffic accidents, while 31% think there would not be much difference and 27% think there would be an increase in these types of deaths or injuries.
People envision a mix of positive and negative outcomes from the widespread use of driverless cars. Majorities believe older adults and those with disabilities would be able to live more independently and that getting from place to place would be less stressful. At the same time, majorities also think many people who make their living by driving others or delivering things with passenger vehicles would lose their jobs and that the computer systems in driverless passenger vehicles would be easily hacked in ways that put safety at risk.
In their responses to survey questions about other possible developments in artificial intelligence, majorities express concern about the prospect that AI could know people’s thoughts and behaviors and make important life decisions for people. And when it comes to the use of AI for decision-making in a variety of fields, the public is more opposed than not to the use of computer programs (algorithms) to make final decisions about which patients should get a medical treatment, which people would be good candidates for parole, which job applicants should move on to the next round of interviews or which people should be approved for mortgages.
Still, there are some possible AI applications that draw public appeal. For example, more Americans are excited than concerned about AI applications that can do household chores. That is also the pattern when people are asked about AI apps that can perform repetitive workplace tasks.
There are patterns in views of three AI applications, but other opinions are unique to particular AI systems
The chapters following this one cover extensive findings of people’s views about three major applications of AI, including demographic differences and patterns that emerge.
Americans are split in their views about the use of facial recognition by police. Among these differences: While majorities across racial and ethnic groups say police would use facial recognition to monitor Black and Hispanic neighborhoods much more often than other neighborhoods if the technology became widespread, Black and Hispanic adults are more likely than White adults to say this. As for the way algorithms are being used by social media companies to identify false information, there are clear partisan differences in the public’s assessment of the use of those computer programs. And people believe that a mix of both positive and negative outcomes would occur if driverless cars became widely used.
When it comes to public awareness of these AI applications, majorities have heard at least a little about each of them, but some Americans have not heard about them at all, and awareness can relate to views of these applications. For instance, those who have heard a lot about driverless passenger vehicles are more likely than those who have not heard anything about such cars to believe they are a good idea for society. But when it comes to the use of facial recognition by the police, those who have heard a lot are more likely to say it is a bad idea for society than those who have not heard anything about it. Views about whether the use of algorithms by social media companies to detect false information on their sites is good or bad for society lean negative among those who have heard a lot, while among those who have heard nothing, over half are not sure how they feel about this practice.
In addition to awareness being a factor associated with Americans’ views about these AI applications, there are patterns related to education. Those with higher levels of education often hold different views than those who have less formal education. For example, those with a postgraduate education are more likely than those with a high school education or less to think the widespread use of algorithms by social media companies to root out false information on the platforms and the use of driverless vehicles are good ideas for society. The reverse is true for facial recognition – those with a postgraduate degree are more likely to think its widespread use by police is a bad idea for society than those with a high school diploma or less education.
Additionally, the views of young adults and older adults diverge at times when these three AI applications are assessed. For instance, adults ages 18 to 29 are more likely than those 65 and older to say the widespread use of facial recognition by police is a bad idea for society. At the same time, this same group of young adults is more likely than those 65 and older to think the widespread use of self-driving cars is a good idea for society.
The next sections of this chapter cover the findings from the survey’s general questions about AI.
Americans are more likely to be ‘more concerned than excited’ about the increased use of AI in daily life than vice versa
In this survey, artificial intelligence computer programs were described as those designed to learn tasks that humans typically do, such as recognizing speech or pictures. Of course, an array of AI applications are being implemented in everything from game-playing to food growing to disease outbreak detection. Synthesis efforts now regularly chart the spread of AI.
As these developments unfold, a larger share of Americans say they are “more concerned than excited” about the increased use of AI in everyday life than say they are “more excited than concerned” about these prospects (37% vs. 18%). And nearly half (45%) say they are equally excited and concerned.
There are some differences by educational attainment and political affiliation. For instance, a larger share of those who have some college experience or a high school education or less say they are more concerned than excited, compared with their counterparts who have a bachelor’s or advanced degree (40% vs. 32%). Republicans are more likely than Democrats to say they are more concerned than excited (45% vs. 31%). Full details about the views of different groups on this question can be found in the Appendix.
When those who say they are more excited than concerned about the increased use of AI in daily life are asked to explain in their own words the main reason they feel that way, 31% say they believe AI has the ability to make key aspects of our lives and society better.
As one man explained in his written comments:
“AI, if used to its fullest ‘best’ potential, could help to solve an unbelievable number of major problems in the world and help solve massive crises like world hunger, pollution, climate change, joblessness and others.” – Man, 30s
A woman made a similar point:
“[AI has] the ability to learn and create things that humans are incapable of doing. [AI programs] will have massive impacts to our daily life and will solve issues related to climate change and healthcare.” – Woman, 30s
Smaller shares of those who express more excitement than concern over AI mention its ability to save time and make tasks more efficient (13%), see it as a reflection of inevitable progress (10%), or cite the fact that it could handle mundane or tedious tasks (7%) as the main reasons why they feel enthusiastic about the prospect of AI’s increased presence in daily life.
Those who are excited about the increased use of AI in daily life also cite AI’s ability to improve work, their sense that AI is interesting and exciting, and the ability of AI programs to perform difficult or dangerous tasks: 6% of those more excited than concerned mention each.
In addition, 4% of those who are more excited say AI is more accurate than humans, while an identical share says they are excited because AI can make things more accessible for those who have a disability or who are older. Some 2% offer personal anecdotes of how AI has already been beneficial to their lives, and another 2% write that many of the fears about AI are misplaced due to what they believe to be unrealistic depictions of AI in science fiction and popular culture.
The 37% of Americans who are more concerned than excited about AI’s increasing use in daily life also mention a number of reasons behind their wariness. About one-in-five among this group (19%) express concerns that increased use of AI will result in job loss for humans. As a woman in her 70s put it:
“[AI programs] will eventually eliminate jobs. Then what will those people do to survive in life?” – Woman, 70s
Meanwhile, 16% of those who are more concerned about the increased use of AI say it could lead to privacy problems, surveillance or hacking. A woman in her 30s wrote of this concern:
“I am concerned that the increased use of artificial intelligence programs will infringe on the privacy of individuals. I feel these programs are not regulated enough and can be used to obtain information without the person knowing.” – Woman, 30s
Another 12% of these respondents are concerned about dehumanization, or the belief that human connections and qualities will be lost, while 8% each mention the potential for AI to become too powerful or for people to misuse the technology for nefarious purposes.
Some 7% who express more concern than excitement about AI offer that it would make people overly reliant on this technology, and 6% worry about the failures and flaws of the technology.
Small shares of those who are worried about the integration of AI also mention other concerns, ranging from what technology companies or the government would do with this type of technology, to human biases being embedded in these computer programs, to what they see as a lack of regulation or oversight of the technology and the industries that develop it.
Mixed views about some ways AI applications could develop: People are more excited about some, more concerned about others
In addition to the broad question about where people stand in terms of their general excitement or concern about AI, this survey also asked about a number of more specific possible developments in AI programs.
There are widely varying public views about six different kinds of AI applications that were included in the survey. Some prompt relatively more excitement than concern, and some generate substantial concern. For instance, 57% say they would be very or somewhat excited about AI applications that could perform household chores, but just 9% express the same level of enthusiasm for AI making important life decisions for people or knowing their thoughts and behaviors.
Nearly half (46%) would be very or somewhat excited about AI that could perform repetitive workplace tasks, compared with 26% who would be very or somewhat concerned about that. When it comes to AI that could diagnose medical problems, people are more evenly split: 40% would be at least somewhat excited and 35% would be at least somewhat concerned, while 24% say they are equally excited and concerned. More cautionary views are also evident when people are asked about AI that could handle customer service calls: 47% are very or somewhat concerned about this issue, compared with 27% who are at least somewhat excited.
It is important to note that on these issues, notable shares of Americans say they are equally excited and concerned about various possible AI developments. That share ranges from 16% to 27% depending on the possible development.
Some differences among groups stand out as Americans assess these various AI applications. Those with a high school education or less are more likely than those with postgraduate degrees to say they are at least somewhat concerned at the prospect that AI programs could perform repetitive workplace tasks (36% vs. 12%). Women are more likely than men to say they would be at least somewhat concerned if AI programs could diagnose medical problems (43% vs. 27%). A larger share of those ages 65 and older (82%) than of those 18 to 29 (63%) say they would be very or somewhat concerned if AI programs could make important life decisions for people.
Views of men, White adults are seen as better represented than those of other groups when designing AI programs
In recent years, there have been significant revelations about and investigations into potential shortcomings of artificial intelligence programs. One of the central concerns is that AI computer systems may not factor in a diversity of perspectives, especially when it comes to gender, race, and ethnicity.
In this survey, people were asked how well they thought that those who design AI programs take into account the experiences and views of some groups. Overall, about half of Americans (51%) believe the experiences and views of men are very or somewhat well taken into account by those who design AI programs. By contrast, smaller shares feel the views of women are taken into account very or somewhat well. And while just 12% of U.S. adults say the experiences of men are not well taken into account in the design of AI programs, about twice that share say the same about the experiences and views of women.
Additionally, 48% think the views of White adults are at least somewhat well taken into account in the creation of AI programs, versus smaller shares who think the views of Asian, Black or Hispanic adults are well-represented. Just 13% feel the views and experiences of White adults are not well taken into account; 23% say the same about Asian adults and a third say this about Black or Hispanic adults.
Still, there are about four-in-ten in each case who, when asked these questions, say they are not sure how the experiences and views of different groups are taken into account as AI programs are designed.
Views on this topic vary across racial and ethnic groups:
Among White adults: They are more likely than other racial and ethnic groups to say they are “not sure” how well the designers of AI programs take into account each of the six sets of experiences and views queried in this survey. For instance, 45% of White adults say they are not sure if the experiences and views of White adults are well accounted for in the design of AI programs. That compares with 30% of Black adults, 28% of Hispanic adults, and 21% of Asian adults who say they are not sure about this. Similar uncertainty among White adults appears when they are asked about other groups’ perspectives.
Among Black adults: About half of Black adults (47%) believe that the experiences and views of Black adults are not well taken into account by the people who design artificial intelligence programs, while a smaller share (24%) say Black adults' experiences are well taken into account. A similar share of Asian adults (39%) feel the experiences and views of Black adults are not well taken into account when AI programs are designed, while Hispanic adults (35%) and White adults (29%) are less likely than Black adults to hold this view.
Among Hispanic adults: About one-third of Hispanic Americans (34%) believe the experiences and views of Hispanic adults are well taken into account as the programs are designed. This is the highest share among the groups in the survey: 24% of Asian adults, 22% of Black adults, and 21% of White adults feel this way. Meanwhile, 36% of Hispanic adults say the experiences and views of Hispanic adults are not well taken into account as AI programs are designed. About three-in-ten Hispanic adults (29%) say they are not sure about this question.
Among Asian adults: Some 41% of Asian adults think that the experiences of Asian adults are well taken into account. Similar shares of Hispanic adults (42%) and Black adults (36%) say this about Asians’ views, versus a smaller share of White adults (29%) who think that is the case.
A plurality of Americans are not sure whether AI can be fairly designed
In addition to gathering opinions on how well various perspectives are taken into account, the survey explored how people judge AI programs when it comes to fair decisions. Asked if it is possible for the people who design AI to create computer programs that can consistently make fair decisions in complex situations, Americans are divided: 30% say it is possible, 28% say it is not possible, while the largest share (41%) say they are not sure.
Some noteworthy differences among different groups on this question are tied to gender. Men are more likely than women to believe it is possible to design AI programs that can consistently make fair decisions (38% vs. 22%), and women are more likely to say they are not sure (46% vs. 35%).