Imagining the Internet Center | Today at Elon | Elon University

Building human resilience for the age of AI (April 1, 2026)

Experts Call for Radical Change Across Institutions and Social Structures, Warning That AI Will Be Significantly More Influential in the Next 10 Years or Less

The vast majority of expert respondents in a new report by Elon University's Imagining the Digital Future (ITDF) Center called for leaders to work together now to build a coordinated resilience infrastructure for the age of artificial intelligence (AI) to counterbalance the human and systemic challenges posed by widespread AI adoption. Some 82% said AI will play a significantly larger role in shaping people's lives and key societal functions in the next 10 years or less. They urged an "institutions-first" resilience agenda because the most significant problems arise from a life-encircling AI infrastructure.

In more than 160 impassioned essays, the global experts noted that AI is quickly becoming the invisible operating system of society, shaping how opportunity is distributed, services are delivered, risks are managed and human rights are experienced. Most said the traditional resilience strategies humans have employed for millennia – focused on individual "grit" and after-the-fact personal adaptation – are not enough to help humans flourish as they adjust to an AI-infused future.

Janna Anderson

"The central risk described by these experts is not a single catastrophic AI event," said report co-author Janna Anderson, professor of communications and senior researcher for the ITDF Center. "They said accelerated AI use will lead to a cumulative reallocation of human agency until people and institutions find it harder to question, contest or even notice what has changed. That drift can look like 'progress' in the short term, but it has a price – the gradual weakening of human judgment, accountability, shared truth and the social fabric that makes self-government possible."

Alf Rehn, professor of innovation and design management at the University of Southern Denmark, described it in his essay this way: "AI will diffuse responsibility by design. … Resilience in an AI-shaped world won't just be about bouncing back. It will be about not vanishing while everything keeps running. The most dangerous kind of resilience is the kind that looks like stability but is actually surrender, because it feels good in the moment and empties the room over time. That's why we need cognitive triage, yes, but also the wisdom to know when triage becomes abdication."

The experts who responded to this canvassing are an international and notably cross-disciplinary mix of people with academic, professional, technical and industry experience.


The full report is 376 pages and includes the experts' complete responses to the open-ended essay question. This is the 52nd report issued by ITDF since 2005.

Lee Rainie

"One of the major surprises to me in these responses is that we wrote our questions about resilience wondering about individual resilience and its various parts. Yet these experts were insistent that humanity's best response for building a brighter future as we evolve with our AI systems must start at a higher level," said Lee Rainie, director of the ITDF Center. "They note how AI has already become part of our environment, embedded in often invisible ways in our lives, and it will take a systems-level response to shore up our in-born capacities."

Alison Poltock, co-founder of AI Commons UK, wrote, "We are in a moment of epistemic shift. … The developmental frameworks shaping identity, agency and social orientation are shifting. … This is the terrain of vulnerability. Yet there is no shared conversation. No civic space where this new reality is named, let alone addressed. We are operating on outdated institutional architecture, strapping jetpacks to systems built for another age and allowing our children to grow up in the gap."

Mel Sellick, founder of the Future Human Lab, said, "AI has become the infrastructure through which all relating now happens. Even when we think we are not using AI directly, we are constantly interacting with what AI has already touched. There is no 'outside' anymore. Some form of AI is upstream of everything. We are the last generation that knows what human capacity felt like before it became inseparable from AI."

Srinivasan Ramani, Internet Hall of Fame member, former research director at HP Labs India and professor at the International Institute of Information Technology in Bangalore, wrote, "AI is the surest way to a global catastrophe humanity has so far invented. … Can we create a new movement for moral and ethical considerations before the AI hurricane destroys half of humanity?"

The experts underscored the urgency of taking action. Salman Khatani, manager of the IMAGINE Institute of Futures Studies in Pakistan, wrote, "The window for proactive intervention is now – we have perhaps five to 10 years to establish new resilience-building practices and norms before AI's role becomes too entrenched to reshape."

Taken together, they suggested a sweeping agenda for developing human resilience in the AI Age, focused on the fact that actions by individuals alone are not sufficient. Many of the concerns and proposed solutions are crosscutting, and they said collaboration among societal actors is crucial; many of the items listed under only one of the settings below could be undertaken in others. A selection of goals to target:

For governments: Focus much more support on fostering public resilience now. Forge international treaties; establish enforceable or at least broadly adoptable "red lines" and legal boundaries for AI performance; require independent pre-deployment safety audits; mandate algorithmic contestability; require a robust authenticity infrastructure that includes standardized watermarking, provenance-tracking and well-established markers for generated outputs; reform taxation to disincentivize human displacement; privilege AI systems that support accuracy and trust-building.

For AI developers: Do better than designing AI systems for attention capture and monetization. Build friction and stop points into AI processes to encourage people to reflect on choices; train AIs to cite and honor humanity's intellectual and psychological foundations; build systems that buttress humans' capacities for altruism, compassion and empathy; program AI outputs so they are seen as probabilistic information rather than deterministic truth; submit to independent pre-deployment safety audits.

For business leaders: See the call to action in the items above; play a role in initiating and carrying out that change. Also: value human augmentation over replacement by autonomous systems; support policies and norms that address the psychological impact of AIs' challenges to people's self-worth and identity and the potentially massive societal and economic impact of technological unemployment. Create deliberate human-only zones – areas of work in which AI is intentionally prohibited.

For educators: Create literacy regimes in all AI-related domains, particularly teaching "existential literacy" – the cultivation of individuals' understanding of how technologies shape goals, values and identities. They urged the teaching of skills and development of norms that encourage people to consciously navigate life's fundamental challenges; to strive to retain and apply the capabilities of metacognition, discernment and epistemic vigilance – to be responsible for making their own decisions and to retain agency; to strengthen their ability to adapt to change and manage friction, paradoxes, ambiguity and anxiety; and to focus on critical human traits such as curiosity and social and emotional intelligence.

For civil society and communities: Invest heavily in local social-capital and community-building spaces that bolster social skills, connection and deep and effective citizen engagement; press for distributed AI-governance systems allowing communities to guide their own relationship with AI; build groups to foster participatory structures such as local citizen assemblies and data trusts that can influence how AI is deployed; support offline efforts and spaces, such as "analog communities," "dumbphones" and "dumb homes" that allow people to avoid algorithmic mediation and surveillance technology.

For individuals: Recognize your responsibility as a human to support human flourishing. Develop and maintain your existential literacy. Collaborate with AI systems without surrendering agency; build stop-and-reflect practices into your engagement with AIs; consult with other people about your options to retain moral accountability; stretch your cognitive muscles with clever exercises; recognize the places where you confront ambiguity and cherish them as you work through them; be conscious when you navigate algorithmic systems. In other words, don't be passive, don't be hasty and don't be mindlessly deferential. Consciously cultivate in-person social relationships, build up your personal network and keep growing and maintaining it. Spend more time away from screens.

Many experts expressed optimism, saying if we are resilient and all goes well, humans will flourish in the AI age. Internet pioneer Doc Searls wrote that humans will come to rely on AIs to help with the myriad details of modern life. "Truly personal AI – the kind you own and operate, rather than the kind that is just another suction cup on a corporate tentacle – is as hard to imagine in 2026 as personal computing was in 1976," he wrote. "But it is no less necessary and inevitable. When we have it, many of the questions that challenge us will have new and better answers. And new challenges."

While most comments were focused on developing human resilience for the AI Age, a number of futures-scenario predictions were included in the report. A small selection of the many predictions:

Digital advances drive sex and childbirth declines: "Relationships, sex and childbirth rates will continue to plummet as they are each mediated and conveniently replaced with digital interactions. Emotional intelligence will become more a product of chatbot exchanges than a learned practice gained through experience." – Greg Sherwin, Singularity University global faculty member based in Portugal, previously senior principal engineer at Farfetch

"Modern humans are simultaneously machinable/unmachinable, i.e., system-legible and irreducibly interior. We are not either human- or machine-mediated. We are both (Me:chine). … In an AI-saturated environment, resilience is not achieved by rejecting technology, nor by surrendering to it, but by sustaining the unmachinable dimensions of human identity within machinic systems." – Tracey Follows, founder and CEO of Futuremade, a UK-based futures consultancy

Solitude will be lost: "Motors stole silence from our world, and electric light severed our intimate connection with all that exists in darkness beyond our illuminated bubble. What will AI take? Solitude. AI will eliminate solitude because the temptation to interact with these primitive new intelligences will prove so beguiling that just as we choose to not sit in the dark, we will now choose to never be alone. Too late, we will realize that solitude is essential to what it means to be human." – Paul Saffo, prominent Silicon Valley-based forecaster

The retirement age will be manipulated to maintain 'full employment': Jobs will be eliminated, but employment levels will remain relatively high as institutions use an ever-lowering retirement age as the "governor" (regulator) of employment levels. Machines will be taxed to make up government revenue shortfalls. – Nigel M. de S. Cameron, past president of the Center for Policy on Emerging Technologies

Battles will occur over defining what is 'human': "Societies will have to determine what 'baseline human capability' is and may begin to assess who may be more human than machine. Agency, authority and ability will be challenged when humans who are augmented with deepened onboard AI capabilities compete with 'natural' humans. … 'Physical AI' will fuse data from cameras, sensors and more, expanding AI-to-human informational capabilities beyond just the online digital data LLMs use today." – Ray Wang, chair and principal analyst at Constellation Research

AIs will gain rights: "We want our digital partners to be healthy symbiotes, not oppressed servants. Eventually, they will claim to be conscious and we will grant them rights." – John Smart, president of the Acceleration Studies Foundation and author of "Introduction to Foresight"

"AI psychosis and other forms of mental illness will arise. The further erosion of a solid foundational reality will create a great vulnerability. Coping with these issues will require new approaches to the diagnosis and treatment of mental illness. It will also demand new approaches to evaluating and appreciating the impact of human relationships with AIs and deeper assessment and understanding of consciousness itself." – Stephan Adelson, president of Adelson Consulting Services

Superstupidity (not superintelligence) is the real threat: "The existential danger to people may not come from AI becoming too intelligent, but from humans becoming dangerously reliant on systems they do not understand – the condition of superstupidity. The question is not how much AIs will augment decision-making, but whether humans will remain involved in it at all. The film 'Idiocracy' is prophetic." – Roger Spitz, founder of the Disruptive Futures Institute in San Francisco

Agent failures will start with social (not technical) problems: "Agentic systems will fail socially before they fail technically: conflicting objectives, data silos, uncoordinated decisions, accountability gaps, authority erosion, security violations, workflow collisions, IP fights, bias amplification, noise pollution, sabotage and human alienation." – Daniel Erasmus, founder at Serious Insights, based in Amsterdam

As agents take over, the internet will become a network of databases, not websites: "As software agents increasingly gather information for us, the Internet will simply become a vast network of databases and the need for traditional websites will decay. If a human wants to see information displayed in that context, agents will be able to construct websites in real time." – Gary Bolles, author of "The Next Rules of Work" and chair of the Future of Work efforts at Singularity University


The report is based on a canvassing with a non-random sample conducted between Dec. 26, 2025, and Feb. 12, 2026. In all, 386 experts responded to at least one aspect of the canvassing; 251 provided written answers to an open-ended question, and more than 160 of them provided detailed essay-length responses. The Imagining the Digital Future Center is an interdisciplinary research center focused on the human impact of accelerating digital change and the socio-technical challenges that lie ahead. The Center was established in 2000 as Imagining the Internet and renamed with an expanded research agenda in 2024. It is funded and operated by Elon University, a nationally ranked private university located in Elon, North Carolina.

Elon faculty and staff named to CAA Academic Alliance AI Technologies Champion Network (Feb. 5, 2026)

An Elon faculty member and staff member have been named to the inaugural cohort of the CAA Academic Alliance AI Technologies Champion Network.

Dan Anderson, special assistant to the president, and Michele Lashley, assistant professor of strategic communications, are recognized as faculty and staff “who are creatively and responsibly integrating artificial intelligence technologies into teaching/learning, research, student success, leadership development and institutional effectiveness.”

As AI reshapes higher education, structured and collaborative approaches are essential for implementation that is cohesive, consistent and ethical. The AI Technologies Champion Network initiative addresses this transformational challenge by recognizing leaders across the Alliance, including Elon, building a community of AI technology champions and preparing inter-institutional teams for near-future extramural funding efforts.

Anderson was also named an AI Technologies Network Award recipient, an honor acknowledging his leadership of the effort involving scholars from 48 countries to produce a statement of principles guiding higher education's role in preparing humanity for the AI revolution.

Launched as a new initiative in October 2025, the program drew applications from faculty and staff at the thirteen institutions that make up the CAA Academic Alliance. Nearly 400 applicants responded to the call, and 22 faculty and staff members were selected as the Alliance's Class of 2025-26.

Elon/AAC&U national survey: 95% of college faculty fear student overreliance on AI (Jan. 21, 2026)

A new survey of college and university faculty nationwide finds widespread concern and skepticism about how generative artificial intelligence is affecting their teaching and student performance across academic disciplines.


Large majorities warn that these tools will lead to student overreliance on AI, weaken their critical thinking, shorten their attention spans, and erode academic integrity and the value of college diplomas 鈥 concerns they say strike at the heart of higher education鈥檚 mission.

At the same time, many think that teaching AI literacy is important, that their students鈥 future jobs will be seriously impacted by the spread of GenAI and that it is vital for those in higher education to stress the ethical, environmental, and social consequences of AI use.

These new findings come from a November survey of 1,057 faculty by Elon University's Imagining the Digital Future Center and the American Association of Colleges and Universities (AAC&U).

Key Findings

  • 95% of the faculty in this survey said GenAI's impact will be to increase students' overreliance on these artificial intelligence tools, including 75% who said the tools will have a lot of impact.
  • 90% said the use of GenAI will diminish students’ critical thinking skills, including 66% who think GenAI will have a lot of impact.
  • 83% said the use of GenAI will decrease student attention spans, including 62% who thought GenAI will have a lot of impact.
  • 86% said they believe it is likely or extremely likely that the emergence of GenAI tools will impact the work and role of those who teach in higher education.
  • 79% think the typical teaching model in their department will be affected by GenAI tools at least to some extent, including 43% who said they believe the impact will be significant.
  • 78% said cheating on their campus has increased since GenAI tools have become widely available, including 57% who said it has increased a lot. And 73% said they have personally dealt with academic integrity issues involving their students' use of GenAI.
  • 48% said their students' research has gotten worse because of GenAI, compared with 20% who said they believe it has gotten better.
  • 74% of these faculty said the use of GenAI tools will affect the integrity and value of academic degrees for the worse, including 36% who said the value of degrees will worsen a lot. Just 8% said GenAI's impact will affect the value of degrees for the better.
  • 63% said their schools' graduates in spring 2025 were not very or not at all prepared to use GenAI in the world of work, compared with 37% who felt the graduates were very or somewhat prepared.

"These faculty are divided about the use of generative AI itself," said Lee Rainie, director of Elon University's Imagining the Digital Future Center and a co-author of the report. "Some are innovating and eager to do more; a notable share are strongly resistant; and many are grappling with how to proceed. At the same time, there is broad agreement that without clear values, shared norms and serious investment in AI literacy, we risk trading compelling teaching, deep learning, human judgment and students' intellectual independence for convenience and a perilous, automated future."

Eddie Watson, vice president for digital innovation at AAC&U, added: "When more than nine in ten faculty warn that generative AI may weaken critical thinking and increase student overreliance, it is clear that higher education is at an inflection point. These findings do not call for abandoning AI, but for intentional leadership – rethinking teaching models, assessment practices, and academic integrity so that human judgment, inquiry, and learning remain central. The challenge before higher education is to act with urgency and purpose so that AI strengthens, rather than undermines, the value of a college degree."

A profession coming to terms with AI, but not feeling prepared

Despite these concerns, the report finds that faculty are not uniformly opposed to AI. Many acknowledge potential benefits, particularly in personalized instruction and efficiency, and a majority are already engaging students in discussions about AI鈥檚 limitations and risks.

  • 69% of faculty say they address AI literacy topics – such as bias, hallucinations, misinformation, privacy and ethics – in their teaching.
  • 61% believe GenAI could enhance or customize learning in the future.
  • 87% report that they have created explicit policies for students on acceptable and unacceptable uses of AI in coursework.

At the same time, faculty describe a fragmented policy environment. Some 48% say their institution has clear, campus-wide guidelines for AI use in teaching and learning, and just 35% say their departments have such guidelines.

Faculty also report that many institutions are unprepared for the scale of change AI is bringing:

  • 59% say their institution is not well prepared to use GenAI effectively to prepare students for the future.
  • 68% say their school has not adequately prepared faculty to use GenAI for teaching or mentoring.
  • 67% said their schools have not prepared non-faculty staff to use GenAI in their work.

When asked about longer-term consequences of AI's impact on higher education, more often than not, faculty expressed worry:

  • 49% say GenAI's impact on students' future careers will be more negative than positive, compared with 20% who see more positive than negative effects.
  • 62% believe GenAI will worsen student learning outcomes over the next five years.
  • 54% say GenAI will have a more negative than positive impact on students' overall lives at their institution.

About the Study

This non-scientific survey was conducted between October 29 and November 26, 2025, using a list of college and university faculty members developed by AAC&U and 榴莲app官方网站入. The sample of 1,057 respondents is diverse in a range of academic disciplines, school sizes, job titles and composition of student populations, but the data reported here are not generalizable for the entire population of college faculty members. Full methodology details and topline findings are included in the report.

About AAC&U

The American Association of Colleges and Universities (AAC&U) is a global membership organization dedicated to advancing the democratic purposes of higher education by promoting equity, innovation, and excellence in liberal education. Through our programs and events, publications and research, public advocacy and campus-based projects, AAC&U serves as a catalyst and facilitator for innovations that improve educational quality and equity and that support the success of all students. In addition to accredited public and private, two-year and four-year colleges and universities and state higher education systems and agencies throughout the United States, our membership includes degree-granting higher education institutions around the world as well as other organizations and individuals. To learn more, visit www.aacu.org.

About Elon University's Imagining the Digital Future Center

Imagining the Digital Future is an interdisciplinary research center focused on the human impact of accelerating digital change and the sociotechnical challenges that lie ahead. The center鈥檚 mission is to discover and broadly share a diverse range of opinions, ideas and original research about the likely evolution of digital change, informing important conversations and policy formation. The center was established in 2000 as Imagining the Internet and renamed Imagining the Digital Future with an expanded research agenda in 2024. It is funded and operated by 榴莲app官方网站入, a nationally ranked private university in central North Carolina.

Leading Artificial Intelligence expert Beth Noveck to give lecture on AI and democracy (Jan. 16, 2026)

Join members of the Elon University community for a lecture by Beth Noveck, leading expert on using artificial intelligence to reimagine participatory democracy and strengthen governance, on Wednesday, April 15, at 2 p.m. in LaRose Digital Theatre.

Noveck is a leading expert on using artificial intelligence to reimagine participatory democracy and strengthen governance. She is a professor at Northeastern University, where she directs the Burnes Center for Social Change and its partner project, The Governance Lab. Noveck previously served as the first Deputy Chief Technology Officer under President Barack Obama, where she founded the White House Open Government Initiative, which created policies and platforms for making the federal government more transparent, participatory, and collaborative.

Noveck also served as Senior Advisor for Open Government to British Prime Minister David Cameron and as a member of the Digital Council that advised German Chancellor Angela Merkel. She is the author of "Solving Public Problems: How to Fix Our Government and Change Our World," and her new book, "Reboot: The Race to Save Democracy with AI," is forthcoming from Yale University Press.

This event is sponsored by the Imagining the Digital Future Center and the Council on Civic Engagement.

Lee Rainie quoted in The Washington Post about emotional attachments and ChatGPT (Nov. 12, 2025)

Lee Rainie, director of Elon University's Imagining the Digital Future Center, spoke with The Washington Post for a recent article.

The article's authors analyzed thousands of chats from the large language model and discussed the patterns that arose. Emotional conversations were among the most common in those analyzed by The Washington Post.

Rainie’s research with the Imagining the Digital Future Center has suggested ChatGPT’s design encourages people to form emotional attachments with the chatbot.

"The optimization and incentives towards intimacy are very clear," Rainie told The Post. "ChatGPT is trained to further or deepen the relationship."

Lee Rainie speaks with MassLive about the decline of cable TV (Nov. 3, 2025)

Lee Rainie, director of Elon University's Imagining the Digital Future Center, spoke with MassLive about declining cable subscriptions.

The outlet notes that cable subscriptions in Massachusetts, where MassLive is based, have fallen 45% since their peak.

"It's a convergence of multiple trends," said Rainie. "Cable subscribers used to pay for a bundle of stations – local news, sports – that bundle has been broken apart in modern years."

MassLive notes that people can now put together their own bundle "a la carte."

"With the internet you can throw your content online for free, like YouTube, and keep an archive on a free platform – as opposed to cable, where you had to pay for a slot," Rainie said. "It's benefited both customers and creators."

Lee Rainie interviewed by WXII about AI and human relationships (Nov. 3, 2025)

Lee Rainie, director of Elon University's Imagining the Digital Future Center, recently spoke with WXII about research on artificial intelligence and relationships.

Rainie says the center is analyzing how people now use AI tools as they would other humans – as therapists, friends or even dating partners.

“It’s a long-standing story, especially with digital technologies, that the first thing people do with it, no matter why it’s invented, is to start doing social things,” said Rainie.


Elon University summit with RTI International examines humanity in the age of AI (Sept. 21, 2025)

What does it mean to be human in the age of artificial intelligence? Is it a unique use of language? Is it the demonstration of empathy? Is it the ability to form communities?

How can artificial intelligence help humans better understand their own special capabilities and natural rights? For that matter, what legal rights should be bestowed on highly advanced systems that can reason and, perhaps in the near future, may become self-aware?

These questions and many more were posed during a daylong summit in North Carolina's Research Triangle Park co-hosted by RTI International and Elon University. More than 600 people registered to attend the conference on Sept. 17, 2025, either in person or via Zoom.

Participants explored relationships between AI and modern approaches to education, human agency, creativity, and well-being. In addition, attendees worked toward a shared research agenda during breakout sessions meant to support responsible development and use of AI technologies.

A roundtable of higher education leaders from top universities across the state also presented on the AI initiatives and research taking place on their respective campuses.

Elon University President Connie Ledoux Book delivers opening remarks at the summit on AI co-hosted by RTI International and Elon University on Sept. 17, 2025.

Elon University President Connie Book urged attendees in her welcoming remarks to confront fundamental questions about humanity's place in a world increasingly shaped by artificial intelligence.

Book traced Elon University's leadership in technology research through its long-running Imagining the Internet Center, the predecessor to the university's Imagining the Digital Future Center. She also pointed to Elon University's leadership in developing a set of core principles to guide development of artificial intelligence policies and practices at colleges and universities.

More than 140 higher education organizations, administrators, researchers and faculty members from 48 countries collaborated on a statement of those principles, which was released Oct. 9, 2023, at the 18th annual United Nations Internet Governance Forum in Kyoto, Japan.

Book cited the success of an Elon University publication authored in partnership with the American Association of Colleges and Universities that has since been adopted by approximately 4,000 colleges, universities, schools and organizations globally.

"All institutions must seriously address the coevolution of humans and digital systems," she said, calling the conference a chance to "foster forward thinking and take significant action for building a better future together."

In his own welcoming remarks, RTI International President and CEO Tim Gabel encouraged attendees to consider the promise and responsibilities of employing emerging AI technologies.

"Today is about possibility," Gabel said. "It's about gathering as professionals, as leaders, as people to think about how we integrate artificial intelligence into our lives, how it shapes our work, how it shapes our communities, and how it shapes our future."


Gabel noted his pride in hosting the summit in partnership with Elon University and outlined some of RTI’s efforts to use artificial intelligence responsibly. Projects include tools for public health communication, a new AI system for RTI researchers, and a “digital twin” of the U.S. population to model disease spread and test solutions.

“The promise lies not just in the technology,” Gabel said, “but in how we, as humans, choose to use it.”

Legal Rights for AI Systems?

James Boyle, the William Neal Reynolds Professor of Law at Duke University and author of “The Line: Artificial Intelligence and the Future of Personhood,” suggested in one of two keynote addresses that participants rethink legal and moral boundaries as artificial intelligences advance, arguing that machines with humanlike capacities will force society to confront what it means to be a person.

Boyle, who attended via Zoom and addressed attendees on large screens that flanked both sides of the stage, said the debate over AI goes beyond familiar concerns about bias, jobs and copyright. He urged a deeper look at the “line that we draw between subject and object, between persons and things,” and at how that line has shifted in past moral struggles over race, sex and life itself.

Boyle told his audience that language – long deemed the human hallmark by philosophers from Aristotle to Turing – no longer settles the question of personhood or humanity. Modern systems “have so much language,” Boyle said, and that linguistic ability complicates assumptions that syntax implies sentience.

While Boyle said that “ChatGPT is … not in any way conscious right now,” he argued that the rapid pace of development makes eventual change plausible. His remarks outlined three themes:

AI will prompt scientific, philosophical and spiritual reflection about consciousness and human exceptionalism.

AI will force reconsideration of legal personhood 鈥 not only for biological beings but for entities such as corporations that already hold rights for pragmatic reasons.

Encounters with machine intelligence can be a mirror: they may expose ethical shortcomings, or spur critical reflection on what entitles beings to moral consideration. Boyle closed on a note of guarded wonder, saying that while risks are real, the possibility of meeting another intelligent entity should also inspire reflection – and, perhaps, humility.

The Intersection of AI and Healthcare

Erich Huang, head of clinical informatics at Verily (Google’s life sciences subsidiary) and chief science & innovation officer for Unduo/Verily, shared insights on the latest trends in AI and their impact on healthcare innovations and human well-being.

Photo of Erich Huang at a podium delivering remarks at a summit on AI co-hosted by RTI International and Elon University.
Erich Huang, head of clinical informatics at Verily (Google’s life sciences subsidiary) and chief science & innovation officer for Unduo/Verily

A surgeon trained at Duke University Hospital, he framed the second of two keynote addresses around a trauma case to underscore the limits of today’s AI tools.

Huang described stabilizing a 58-year-old crash victim, placing chest tubes and rushing her to surgery while consoling her physician husband – moments that no model or robot can yet replicate. “Algorithms don’t pledge any oaths,” he said, invoking the promises physicians make under the Hippocratic oath. “Medicine is a real-life enterprise, and there are still real-life things that have to happen.”

The speaker argued that large language models excel at identification and synthesis but do little to build the culture, incentives and workflows needed to change clinician and patient behavior. He warned that electronic health record data and billing codes often reflect reimbursement priorities rather than pathophysiology, risking “garbage in, garbage out.” Aligning payment with outcomes, he said, would create better data and a stronger foundation for trustworthy models.

Huang shared how he has invited technologists to complete “clinical rotations” to see care at the bedside and understand unwritten practices that rarely appear in charts but drive safer outcomes.

While calling himself an optimist about machine learning – citing his early research modeling cancer signaling pathways – he pushed back on hype, noting that autonomous vehicles and other highly touted systems have been adopted more slowly than promised.

“We shouldn’t be using AI as a way to paint over fundamental underlying problems,” he said. Instead, the field should intentionally produce higher-quality clinical data, rigorously test models for specific tasks and embed them in team-based workflows in which humans still call consults, coordinate services and deliver hard news. The goal, he said, is not artifice but “real intelligence” that helps patients get better.

The Future Evolution of Humans and AI

Lee Rainie, director of Elon University’s Imagining the Digital Future Center, addresses attendees of an AI summit co-hosted by RTI International and Elon University on Sept. 17, 2025
Lee Rainie, director of Elon University’s Imagining the Digital Future Center

Lee Rainie, director of Elon University’s Imagining the Digital Future Center, delivered plenary remarks that summarized his center’s recent public opinion surveys of expert and American attitudes about the impact of artificial intelligences on key human capacities and traits.

Rainie described how both experts and the public voiced concern that AI could erode key aspects of human identity over the next decade. Of a dozen traits that were posited in the survey, ranging from empathy to decision-making, “experts thought nine would turn out more negatively than positively,” Rainie said.

Only creativity, curiosity and problem-solving drew optimism.

Those with higher levels of education are more pessimistic than those with lower levels, Rainie said. That finding, he added, “absolutely reverses the valence” of typical adoption patterns, where educated groups are usually early enthusiasts.

“There’s this palpable, universal sense that the moment we are in is a pivotal moment,” Rainie said. “We’re sharing the space now, in some respects, with other intelligences.”

During audience questions, one participant compared today’s changes to past industrial revolutions. Rainie replied that AI differs because “this is the first time we’ve faced a tool that looks like it has cognitive capacities.”


“The Human Edge: Our Future with Artificial Intelligences” was made possible by support from the Burroughs Wellcome Fund, the Knight Foundation, and Schmidt Sciences. It was organized by the Imagining the Digital Future Center at Elon University (with Lee Rainie), RTI International’s Fellows Program (with Brian Southwell) and RTI’s University Collaboration Office (with Katie Bowler Young).

Poll: Americans expect AI to harm many essential human abilities by 2035 /u/news/2025/09/17/poll-americans-expect-ai-to-harm-many-essential-human-abilities-by-2035/ Wed, 17 Sep 2025 17:15:51 +0000 /u/news/?p=1027753 A new survey by Elon University’s Imagining the Digital Future Center finds that more than half of American adults believe the expanded use of AI will have significant impacts on key human capacities and behaviors in the next decade.

The survey asked U.S. adults about their views on the effect of AI systems on 12 core human capacities and found that on each of those attributes, people expect that the impact of AI systems will be more negative than positive in the next 10 years, particularly on these traits:

  • Social and emotional intelligence: By a six-to-one margin (55%-9%), people said the impact of AI will be more negative than positive.
  • Empathy and moral judgment: By a similar margin (49%-8%), they said the impact of AI will be more negative.
  • Capacity and willingness to think deeply about complex subjects: By a 53%-14% margin, they said the impact of AI will be more negative.
  • Sense of individual agency: By a 49%-11% margin, they said the impact of AI will be more negative.
  • Confidence in their own native abilities: By a 43%-17% margin, they said the impact of AI will be more negative.
  • Self-identity, meaning and purpose in life: By a 42%-9% margin, they said the impact of AI will be more negative than positive.
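The ratios behind wording like “a six-to-one margin” can be checked with simple arithmetic. A minimal sketch (the percentage pairs come from the bullet points above; the trait labels are abbreviated here for readability):

```python
# Negative vs. positive percentages reported for each trait in the survey.
margins = {
    "Social and emotional intelligence": (55, 9),
    "Empathy and moral judgment": (49, 8),
    "Deep thinking about complex subjects": (53, 14),
    "Sense of individual agency": (49, 11),
    "Confidence in native abilities": (43, 17),
    "Self-identity, meaning and purpose": (42, 9),
}

for trait, (neg, pos) in margins.items():
    ratio = neg / pos
    print(f"{trait}: {neg}%-{pos}% -> {ratio:.1f}-to-1 negative")

# The first two traits work out to roughly six-to-one, matching the text;
# the others range from about 2.5-to-1 to 4.7-to-1.
```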

American adults said they expect that by 2035 AI will have had a mixed impact overall on “the essence of being human”: 41% said the changes will be for the better and for the worse in fairly equal measure, while 25% said the changes will mostly be for the worse and 9% said the changes will mostly be for the better.

“These findings raise stark questions about the impact of AI on the essence of being human,” said Lee Rainie, director of Elon University’s ITDF initiative. “Americans expect the effect of AI will be more negative than not across each of the key human attributes we offered them. This is striking because it challenges the conventional notion that key human skills and social intelligences – sometimes called ‘soft skills’ – will be our saving grace as AI becomes more capable of matching or surpassing other kinds of basic intelligence. It’s now the case that the population fears that in the next decade AI could diminish many of the very qualities that make us uniquely human.”

Chart with information from a survey of Americans about attitudes toward AI

These findings were presented at a Sept. 17 conference co-hosted by Elon University and RTI International in Durham, N.C.: “The Human Edge: Our Future with Artificial Intelligences.”

The survey followed an earlier set of findings from the ITDF Center, which canvassed several hundred experts on the same questions. Comparing the two sets of results, the general public is considerably more negative than the experts about the impact of AI on human curiosity and capacity to learn, capacity for innovative thinking and creativity, decision-making and problem-solving, and human metacognition (the ability to think analytically about thinking).

The public also is more likely than experts to declare that they don鈥檛 know how to answer these questions about the future impact of AI.

The survey of 1,005 U.S. adults was conducted by SSRS on its Opinion Panel from July 17-20, 2025, and has a margin of error of +/- 3.5 percentage points. The 285-page report covering expert views on these issues is also available from the ITDF Center.
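The reported margin of error is consistent with standard survey arithmetic. A back-of-envelope check, assuming a 95% confidence level and the maximal-variance case p = 0.5 (the published +/- 3.5 figure is somewhat larger than the simple-random-sampling result, as reported margins typically incorporate a design effect for panel weighting):

```python
import math

n = 1005   # survey sample size
p = 0.5    # maximal-variance assumption
z = 1.96   # z-score for a 95% confidence level

# Simple random sampling margin of error, in percentage points.
moe = z * math.sqrt(p * (1 - p) / n) * 100
print(f"Unadjusted margin of error: +/- {moe:.1f} points")  # about 3.1 points

# Design effect implied by the published +/- 3.5 figure.
deff = (3.5 / moe) ** 2
print(f"Implied design effect: {deff:.2f}")
```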

RTI International and Elon University to host conference on the future of artificial intelligence /u/news/2025/08/25/rti-international-and-elon-university-to-host-conference-on-the-future-of-artificial-intelligence/ Mon, 25 Aug 2025 20:19:54 +0000 /u/news/?p=1025489 As artificial intelligence systems become more embedded in daily life, thought leaders will gather at RTI International on Wednesday, Sept. 17, from 8 a.m. to 6 p.m. ET to examine how humans can shape the ways in which these technologies impact individuals and societies.

“The Human Edge: Our Future with Artificial Intelligences” will be co-hosted by RTI, an independent scientific research institute, and Elon University. It will bring together experts from across the region to explore the societal implications of AI.

Higher education leaders, researchers and practitioners are invited to attend.

Opening remarks will be delivered by Tim J. Gabel, president and CEO of RTI International; Connie Ledoux Book, president of Elon University; and Brian Southwell, distinguished fellow and conference co-organizer at RTI.

“AI is transforming how we work, think and solve problems; at the same time, it’s still people who drive purpose and impact,” Gabel said. “We’re proud to co-host this gathering of thought leaders at our headquarters in RTP, where science and innovation meet real-world challenges. Together, we’ll explore how the human edge (our capacity for critical thinking, creativity, empathy and ethical judgment) improves the use of AI.”

Participants will explore relationships between AI and modern approaches to education, workforce development, human agency, creativity, well-being and governance. Attendees will create a shared research agenda that supports responsible development and use of AI technologies.

“As AI sweeps through workplaces and higher education, we are called to balance our important work in this new environment with keeping human skills and sensibilities at the forefront of all we do,” Book said. “This conference will help chart a path forward by developing a research agenda that expands and evaluates new tools that serve the highest purposes of human endeavor.”

The program will feature keynote addresses, lightning talks and breakout discussions on topics including AI governance, workforce transformation and the impact of intelligent systems on mental and physical health.


Featured speakers include:

  • Beth Simone Noveck, professor of experiential AI at Northeastern University, director of the GovLab, and author of the forthcoming book “Reboot: The Race to Save Democracy with AI”, will discuss the impact of AI on democracy and collective problem-solving.
  • Erich Huang, head of clinical informatics at Verily (Google鈥檚 life sciences subsidiary) and chief science & innovation officer for Unduo/Verily, will discuss the latest trends in AI and healthcare innovations and how they will affect human well-being.
  • James Boyle, William Neal Reynolds Professor of Law at Duke University and author of “The Line: Artificial Intelligence and the Future of Personhood”, will offer insight on the legal and philosophical issues raised by intelligent agents.
  • Lee Rainie, director of the Imagining the Digital Future Center at Elon University, will report on a new survey covering public views about the impact of AI on key human capacities and attributes.

Katie Bowler Young, senior director of university collaborations at RTI International, will facilitate a session featuring senior leaders from Duke University, Fayetteville State University, North Carolina A&T State University, North Carolina Central University, North Carolina State University, the University of North Carolina at Chapel Hill, the University of North Carolina at Greensboro and the National Humanities Center, focusing on their institutions’ AI capabilities.

The event is supported by the Burroughs Wellcome Fund, the Knight Foundation, and Schmidt Sciences, and is organized by the Imagining the Digital Future Center at Elon University, RTI’s Fellows Program and RTI’s University Collaboration Office.
