Credited Responses: The Best / Worst of Digital Future 2035

This page holds thousands of predictions and opinions expressed by experts who agreed to have their comments credited in a canvassing conducted from December 27, 2022, to February 21, 2023, by Elon University’s Imagining the Internet Center and Pew Research Center. These experts were asked to respond with their thoughts about what are the BEST AND WORST CHANGES likely to occur by 2035 in digital technology and humans’ uses of digital systems.

Results released June 21, 2023 – Internet experts and highly engaged netizens participated in answering a survey fielded by Elon University and the Pew Internet Project between December 27, 2022, and February 21, 2023. Some respondents chose to identify themselves, some chose to be anonymous. We share the for-credit respondents’ written elaborations on this page. Workplaces are attributed for the purpose of indicating a level of expertise; statements reflect personal views.

This page does NOT hold the full report, which includes analysis, research findings and methodology. Click here to read the full report. In order, this page contains only: 1) the research question in brief; 2) a brief outline of the most common themes found among both anonymous and credited experts’ remarks; 3) the submissions from respondents to this canvassing who agreed to take credit for their remarks. (Anonymous responses are found here.)

The Prompt: The best and worst of digital life in 2035: We seek your insights about the future impact of digital change. This survey contains three substantive questions about that. The first two are open-ended questions. The third asks how you feel about the future you see.

The first open-ended question: As you look ahead to 2035, what are the BEST AND MOST BENEFICIAL changes that are likely to occur by then in digital technology and humans’ use of digital systems? We are particularly interested in your thoughts about how developments in digital technology and humans’ uses of it might improve human-centered development of digital tools and systems; human connections, governance and institutions; human rights; human knowledge; and human health and well-being.

The second open-ended question: As you look ahead to the year 2035, what are the MOST HARMFUL OR MENACING changes that are likely to occur by then in digital technology and humans’ use of digital systems? We are particularly interested in your thoughts about how developments in digital technology and humans’ uses of it are likely to be detrimental to human-centered development of digital tools and systems; human connections, governance and institutions; human rights; human knowledge; and human health and well-being.

The third and final question: On balance, how would you say that the developments you foresee in digital technology and uses of it by 2035 make you feel? (Choose one option.)

  • More excited than concerned
  • More concerned than excited
  • Equally excited and concerned
  • Neither excited nor concerned
  • I don’t think there will be much real change

Results for the third question – regarding the respondents’ general mood about the changes they foresee by 2035:

  • 42% of these experts said they are equally excited and concerned about the changes in humans-plus-tech evolution they expect to see by 2035
  • 37% said they are more concerned than excited about the change they expect
  • 18% said they are more excited than concerned about expected change by 2035
  • 2% said they are neither excited nor concerned
  • 2% said they don’t think there will be much real change by 2035

Click here to read the full “Best and Worst Digital Change” report online

Click here to read anonymous responses to this research question

Common themes found among the experts’ qualitative responses:

Some 37% of these experts said they are more concerned than excited about coming technological change, and 42% said they are equally excited and concerned. They spoke of these fears:

* The future of human-centered development of digital tools and systems: The experts who addressed this fear wrote about their concern that digital systems will continue to be driven by profit incentives in economics and power incentives in politics. They said this is likely to lead to data collection aimed at controlling people rather than empowering them to act freely, share ideas and protest injuries and injustices. These experts worry that ethical design will continue to be an afterthought and digital systems will continue to be released before being thoroughly tested. They believe the impact of all of this is likely to increase inequality and compromise democratic systems.

* The future of human rights: These experts fear new threats to rights will arise as privacy becomes harder, if not impossible, to maintain. They cite surveillance advances, sophisticated bots embedded in civic spaces, the spread of deepfakes and disinformation, advanced facial-recognition systems and widening social and digital divides as looming threats. They foresee crimes and harassment spreading more widely, and the rise of new challenges to humans’ agency and security. A topmost concern is the expectation that increasingly sophisticated AI is likely to lead to the loss of jobs, resulting in a rise in poverty and the diminishment of human dignity.

* The future of human knowledge: They fear that the best of knowledge will be lost or neglected in a sea of mis- and disinformation, that the institutions previously dedicated to informing the public will be further decimated, and that basic facts will be drowned out by entertaining distractions, bald-faced lies and targeted manipulation. They worry that people’s cognitive skills will decline. In addition, they argued that “reality itself is under siege” as emerging digital tools convincingly create deceptive or alternate realities. They worry that a class of “doubters” will hold back progress.

* The future of human health and well-being: A share of these experts said humanity’s embrace of digital systems has already spurred high levels of anxiety and depression, and they predicted things could get worse as technology embeds itself further in people’s lives and social arrangements. Some of the mental and physical problems could stem from tech-abetted loneliness and social isolation; some could come from people substituting tech-based experiences for real-life encounters; some could come from job displacements and related social strife; and some could come directly from tech-based attacks.

* The future of human connections, governance and institutions: The experts who addressed these issues fear that norms, standards and regulation around technology will not evolve quickly enough to improve the social and political interactions of individuals and organizations. One overarching concern is the trend toward autonomous weapons and cyberwarfare and the prospect of runaway digital systems. They also said things could worsen as the pace of tech change accelerates. They expect that people’s distrust in each other may grow and their faith in institutions may deteriorate. This, in turn, could deepen already undesirable levels of polarization, cognitive dissonance and public withdrawal from vital discourse. They fear, too, that digital systems will be too big and important to avoid, and all users will be captives.

Some 18% of these experts said they are more excited than concerned about coming technological change and 42% said they are equally excited and concerned. They shared their hopes for beneficial change in these categories:

* The future of human-centered development of digital tools and systems: The experts who cited tech hopes covered a wide range of likely digital enhancements in medicine, health, fitness and nutrition; access to information and expert recommendations; education in both formal and informal settings; entertainment; transportation and energy; and other spaces. They believe that digital and physical systems will continue to integrate, bringing “smartness” to all manner of objects and organizations, and expect that individuals will have personal digital assistants that ease their daily lives.

* The future of human rights: These experts believe digital tools can be shaped in ways that allow people to freely speak up for their rights and join others to mobilize for the change they seek. They hope ongoing advances in digital tools and systems will give more people more access to resources, help them communicate and learn more effectively, and give them access to data in ways that will help them live better, safer lives. They urged that human rights must be supported and upheld as the internet spreads to the farthest corners of the world.

* The future of human knowledge: These respondents hope to see innovations in business models; in local, national and global standards and regulation; and in societal norms and digital literacy that will lead to the revival and elevation of trusted news and information sources in ways that attract attention and gain the public’s interest. Their hope is that new digital tools and human and technological systems will be designed to assure that factual information is appropriately verified, highly findable, well-updated and archived.

* The future of human health and well-being: These experts expect that the many positives of digital evolution will bring a healthcare revolution that enhances every aspect of human health and well-being. They emphasize that full health equality in the future should direct equal attention to the needs of all people while also prioritizing their individual agency, safety, mental health, privacy and data rights.

* The future of human connections, governance and institutions: The hopeful experts said society is capable of adopting new digital standards and regulation that will promote pro-social digital activities and minimize anti-social activities. They predict that people will develop new norms for digital life and foresee them becoming more digitally literate in social and political interactions. They said that, in the best-case scenario, these changes could steer digital life toward promoting human agency, security, privacy and data protection.

Responses from those preferring to take credit for their remarks. Some are longer versions of expert responses contained in shorter form in the survey report.

Following are the responses from survey participants who chose to take credit for their remarks; some are the longer versions of expert responses that are contained in shorter form in the official survey report. (Anonymous responses are published on a separate page.) The respondents were asked two qualitative questions: “What are the BEST AND MOST BENEFICIAL changes, and what are the MOST HARMFUL AND MENACING changes that are likely to occur by 2035 in digital technology and humans’ use of digital systems?”

Some of the experts answered only one of the two questions. Some answered both in one response rather than responding separately to each. Some respondents chose not to provide any written elaboration, responding only to the closed-ended question; those responses are not included here – only respondents’ written remarks are.

The statements are listed in random order. The written remarks are these respondents’ personal opinions; the names of their workplaces are published only to indicate the locus of their expertise and do not represent their employers’ point of view.

Beneficial
Alejandro Pisanty, Internet Hall of Fame member, longtime leader in the Internet Society and professor of internet and information society at the National Autonomous University of Mexico, predicted, “Improvement will come from shrewd management of the Internet’s own ways of making human conduct and motivation act through technology: mass scaling/hyperconnectivity; identity management; transjurisdictional arbitrage; barrier lowering; friction reduction; and memory+oblivion.

“As long as these factors are managed for improvement, they can help identify advance warnings of ways in which digital tools may have undesirable side effects. An example: phishing grows on top of all six factors, while increasing friction is the single intervention that provides the best cost/benefit ratio.

“Improvements come through human connections that may cross many borders between and within societies. They throw light on human rights and enhance them, while effecting timely warnings about potential violations, creating an unprecedented mass of human knowledge while getting multiple angles to verify what goes on record and correct misrepresentations (again a case for friction).

“Health outcomes are improved through the whole cycle of information: research, diffusion of health information, prevention, diagnostics and remediation/mitigation, considering the gamut of social determinants of health.

“Education may improve through scaling, personalization and feedback. There is a fundamental need to make sure the Right to Science becomes embedded in the growth of the Internet and cyberspace in order to align minds and competences with the age of the technology people are using. Another way of putting this: We need to close the gap – right now 21st-century technology is in the hands of people and organizations with 19th-century mentalities and competences, starting with the human body, microbes, electricity, thermodynamics and, of course, computing and its advances.”

Harmful
Alejandro Pisanty, Internet Hall of Fame member, longtime leader in the Internet Society and professor of internet and information society at the National Autonomous University of Mexico, commented, “The same set of factors that can map what we know of human motivation for improvement of humankind’s condition can help us identify ways to deal with the most harmful trends emerging from the Internet.

“Speed is included in the Internet’s mass scaling and hyperconnectivity, and the social and entrepreneurial pressure for speed leaves little time to analyze and manage its negative effects, such as unintended effects of technology, ways in which it can be abused and, in turn, ways to correct, mitigate or compensate for these effects.

“Human connection and human rights are threatened by the scale, speed and lack of friction in actions such as bullying, disinformation and harassment. The invasion of private life available to governments facilitates repression of the individual, while the speed of expansion of the Internet makes it easy to identify dissidents and to attack them with increasingly extensive, disruptive and effective damage that extends into physical and social space.

“A long-term, concerted effort in societies will be necessary to harness the development of tools whose misuse is increasingly easy. The effectiveness of these tools’ incursions continues to rest both on the tool and on features of the victim or the intermediaries, such as naïveté, lack of knowledge, lack of Internet savvy and the need to juggle too many tasks at the same time between making a living and acquiring dominion over cyber tools.”

Beneficial
James Hendler, director of the Future of Computing Institute at Rensselaer Polytechnic Institute, said, “We have reached the point where major approaches to the global challenges to humankind – climate change, fresh water, health and wellness, etc. – will require a new generation of computing which will include the integration of heterogeneous systems including supercomputing, specialized AI hardware and, by 2035, quantum computing. In addition, these challenges will require scaling in many new ways – billions of sensors contributing to distributed learning systems, reduced-precision devices that can scale computation without corresponding scaling of energy consumption, and many other new technologies. From a theoretical point of view, new foundations will be needed for researchers to understand the next generations of computational fabric that will allow these advances.

“I am encouraged by the growing realization in academic, industrial and, increasingly, government circles that research and development must go into this kind of interdisciplinary work, which will combine theory, engineering and social sciences (to understand the policy implications that new models bring). The notion of Ph.D. research tightly tied to departments will have to give way to increasingly interdisciplinary efforts focused on the grand challenges.

“If successful, I would expect that health technology will be one of the first areas to benefit, as the new computational approaches are well-suited to scaling genomic and proteomic research. While I am still pessimistic about major breakthroughs in climate change per se, I believe major work will be done on the impacts of climate change on infrastructure and the mitigations thereof.

“Finally, the new generation of AI technologies – which still are not living up to their hype – when coupled with humans who better understand their limitations and with the heterogeneous systems that will be needed to support ever-larger models, holds tremendous potential to help human scientists solve these problems with ever-larger data scale underlying the analytics.”

Harmful
James Hendler, director of the Future of Computing Institute at Rensselaer Polytechnic Institute, observed, “There are a number of well-known quotes from scientists who used to claim the key to controlling climate change was better modeling but now believe the issue is primarily political, beyond the edges of computation and the like. As I watch the evolution of powerful technologies, my optimism about the future of computing is counterbalanced (if not outweighed) by my cynicism about whether the political world will be able to control the negative impacts.

“In the more capitalist societies, the political power of the wealthy continues to grow, and thus those least impacted by the problems have the most power that could be wielded to solve them. In more authoritarian governments, we see oligarchs and power seekers controlling the very politics that are needed to solve the problems – solutions likely to come at a cost to themselves.

“We need to find new ways to teach technologists to speak to politicians and the powerful, we need people to understand that we have only one world to live in, and we need the political will such that, as scientific innovation is achieved, the will to implement it is concurrently developed. The new foundations of computing must include educating students in policy, public administration and implementation that focuses not on personal enrichment but on planetary good.

“The progress made in technology in the coming decade will only help solve the real problems if we can align the technical with the social and create a movement of scientists who can understand and explain the realities.

“What Rachel Carson did with ‘Silent Spring’ in raising awareness of pesticide dangers must become something valued among scientists. We can have impact, but not by living in ivory towers or working solely on wealth generation – we must train a generation of technologists who understand not just the science but the social impacts that go with it.

“Just as bioethics grew as an increasingly important part of the biological research world, motivated to a large degree by the horrors perpetrated in World War II, we must realize that we live in a time when the ethics of algorithms and technologies cannot be ignored.”

Beneficial
Bart Knijnenburg, associate professor and researcher on privacy decision-making and recommender systems at Clemson University, predicted, “I am hoping that the gap between AI’s appearance and capabilities will shrink, thereby improving the usability of our interactions with AI systems and making our interactions with complex digital systems more intuitive. In our current interactions with AI systems there is a mismatch between the appearance of the systems (very humanlike) and their capabilities (still lagging far behind those of real humans). People tend to use the human-likeness of these systems as a shortcut to infer their capabilities, which leads to usability issues. I am hoping that advanced AI systems will provide a more powerful and efficient interface to human knowledge. While we currently think of generative AI (e.g., GPT-4) as the key to the future, I would like to see a shift toward a more explicit goal of summarizing and integrating existing sources of human knowledge as a means to more robustly answer complex user queries.

“In terms of human rights, I hope that AI systems can increasingly free human workers from menial (mental) tasks. Ideally, teaming with AI systems would make human work more interesting rather than simply more demanding.

“In terms of human health and well-being, I would like to see AI systems that take a ‘digital twin’ approach to modeling the mental state of a human user, where the AI serves as an intuitive interface for the user to interpret and critically reflect upon their personal mental state.”

Harmful
Bart Knijnenburg, associate professor and researcher on privacy decision-making and recommender systems at Clemson University, said, “In terms of human-centered development, I am worried that the complexity of the AI systems being developed will harm the transparency of our interaction with these systems. We can already see this with current voice assistants: they are great when they work well, but when they don’t do what we want it is extremely difficult to find out why.

“In terms of human rights and human health/happiness, I worry that a capitalist exploitation of AI technology will increase the expectations of human performance, thereby creating an extra burden on human workers rather than reducing it. For example: While theoretically the support of an AI system can make the work of an administrative professional more meaningful, I worry that it will lead to a situation where one AI-assisted administrative worker will be asked to do the job of 10 traditional administrative workers.

“In terms of human knowledge, I worry that the products of generative AI will become indistinguishable from actual human-produced knowledge. This has severe consequences for data integrity (e.g., there have already been several situations where GPT-4 generates answers that look smart but are actually very wrong – will a human evaluator of AI answers be able to detect such errors?) and authenticity (e.g., how do we know for sure that this Pew survey is being answered by real humans rather than bots?).”

Beneficial
Mojirayo Ogunlana, principal partner at M.O.N. Legal in Abuja, Nigeria, and founder of the Advocates for the Promotion of Digital Rights and Civic Interactions Initiative, wrote, “Human-centered development of digital tools and systems will take place – safely advancing most human progress in these systems. There will be an increase in technological advancement, including a phenomenal rise in encryption and in technologies that would evade governments’ intrusion and detection.”

Harmful
Mojirayo Ogunlana, principal partner at M.O.N. Legal in Abuja, Nigeria, and founder of the Advocates for the Promotion of Digital Rights and Civic Interactions Initiative, predicted, “The internet space will become truly ungovernable. As governments continue to push harmful technologies to invade people’s privacy, there will also be an increase in the development of technologies able to evade governments’ intrusion, which will invariably leave power in the hands of people who may use this as a tool for committing crimes against citizens and their private lives. Digital and human rights will continue to be endangered as governments make decisions based on their own selfish interests rather than for the good of humanity. Consider the Ukraine/Russia war in this context.”

Beneficial
David Clark, Internet Hall of Fame member and senior research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory, wrote, “To have an optimistic view of the future you must imagine several potential positives coming to fruition to overcome big issues:

  • The currently rapid rate of change slows, helping us to catch up.
  • The Internet becomes much more accessible and inclusive, and the numbers of the unserved or poorly served become a much smaller fraction of the population.
  • Over the next 10 years the character of critical applications such as social media matures and stabilizes, and users become more sophisticated about navigating the risks and negatives.
  • Increasing digital literacy helps all users to better avoid the worst perils of the Internet experience.
  • A new generation of social media emerges, with less focus on user profiling to sell ads, less emphasis on unrestrained virality and more of a focus on user-driven exploration and interconnection.
  • And the best thing that could happen is that application providers move away from the advertising-based revenue model and establish an expectation that users actually pay. This would remove many of the distorting incentives that plague the ‘free’ Internet experience today.

“Consumers today already pay for content (movies, sports and games, in-game purchases and the like). It is not necessary that the troublesome advertising-based financial model should dominate.”

Harmful
David Clark, Internet Hall of Fame member and senior research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory, commented, “I fear that the next 10 years may see many negative trends in the Internet experience. The current abuse of social media for manipulative purposes is going to bring greater government attention to the experience, which may lead to a period of turbulent regulation with inconsistent character across the globe. The abuse of social media may lead to continued polarization of societies, which will have an uncertain but potentially dramatic effect on the nature of the Internet and its apps.

“The use of the Internet as a tool for inter-state conflict (and conflict between state and non-state actors) may have increasing real-world consequences. We may see increasing restriction of cross-border interaction at the application layer. Attacks and manipulation of online content may overwhelm the ability of defenders to maintain what they consider a factually grounded basis, and sites like Wikipedia may become less trustworthy.

“Those who view the Internet as a powerful tool for social action may come to realize that social movements have no special claim to the Internet as a tool – governments may have been slow to understand the power of the Internet but are learning how to shape the Internet experience of their citizens in powerful ways. The Internet can become either a tool for freedom or a tool for repression and manipulation, and we must not underestimate the motivation and capabilities of powerful organized actors to impose their desired character on the Internet and its users.”

Beneficial
S.B. Divya, author, editor, electrical engineer and Hugo and Nebula Award nominee for “Machinehood,” said, “By 2035, I hope to see good advances in areas of biotechnology – especially in terms of gene therapy and better treatments for viral infections – that arise as a result of better computational modeling. I also anticipate seeing alternatives to antibiotics for treating bacterial infections. Medical diagnostics will make greater use of noninvasive techniques like smart pills and machine intelligence-based imaging.

“I expect to see a wave of new employment in areas involving AI-based tools, especially for people to harness these tools and elicit useful results from them. I think we’ll continue to see rapid improvement in the capabilities of such tools, including new systems that integrate multiple modalities such as computer vision, audio and robotic motion. By 2035 we could see robots that can interact naturally with humans in service roles with well-defined behaviors and a limited range of motion, such as ticket-taking or checking people in at medical facilities.

“I hope to see the internet and social media being put to use to address climate migration and refugee challenges. Microloans, crowdfunding and other types of grassroots charity will continue to expand as the needs become greater and require more rapid and dynamic deployment. In terms of governance, we might start to see effective regulation qualifying the accuracy of digital information. This might also end up being decentralized, with crowdsourced metrics of ‘truth’ or ‘reliability’ for content across the web.”

Harmful
S.B. Divya, author, editor, electrical engineer and Hugo and Nebula Award nominee for “Machinehood,” commented, “By 2035, I expect that we will be struggling with the continued erosion of digital privacy and data rights as consumers trade ever-increasing information about their lives for social conveniences. We will find it more challenging to control the flow of facts, especially in terms of fabricated images, videos and text that are indistinguishable from reliable versions. This could lead to greater mistrust in government, journalists and other centralized sources of news. Trust in general is going to weaken across the social fabric.

“I also anticipate a wider digital divide – gaps in access to necessary technology, especially technology that requires a great deal of electricity and maintenance. This would show up more in commerce than in consumer usage. The hazards of climate change will exacerbate this burden, since countries with fewer resources will struggle to rebuild digital infrastructure after storm damage.

“Human labor will undergo a shift as AI systems get increasingly sophisticated. Countries that don’t have good adult-education infrastructure will struggle with unemployment, especially among older citizens and those who do not have the skills to retool. We might see another major economic depression before society adjusts to the new types of employment that can effectively harness these technologies.”

Beneficial
Glenn Grossman, a consultant in banking analytics at FICO, said, “Advances in AI and data-driven decision-making can lead to improvements in the quality of many sectors of our culture and economy. In our current state, technology complements many human-driven processes. With the appropriate use of data-driven technology, improved decisions can be achieved in all sectors. Healthcare decisions can deliver improved health, especially where access to care is today an essentially human-driven operation. Consider legal services, where those with fewer resources could obtain services that today are a far greater challenge to access.”

Harmful
Glenn Grossman, a consultant in banking analytics at FICO, commented, “Advances in AI and data-driven decision-making can, when built on biased data, cause harm to individuals. There is also a concern that many professions will be disrupted by new technologies. That may occur, but often we see that new technologies create new jobs. The transition can be difficult if some do not retrain.”

Beneficial
Satish Babu, a pioneering internet activist based in India and longtime participant in ICANN and IEEE activities, predicted, “The outstanding gains will be made in:

  • Digital communications – in mobile devices, such as battery capacity, direct satellite connectivity and more.
  • Health and well-being – in sensors and measurements, health data privacy, diagnosis and precision medicine.
  • Rights, governance and democracy – direct democracy, tracking of rights and the Right to Information.
  • Recreation – improvements in simulated reality, virtual reality, mixed reality and augmented reality.”

Harmful
Satish Babu, a pioneering internet activist based in India and longtime participant in ICANN and IEEE activities, said, “There will be many major concerns in the years ahead. Social media and fake news will become more of a problem, enabling the hijacking of democratic institutions and processes. There will continue to be insufficient regulatory control over Big Tech, especially for emerging technologies. There will be more governmental surveillance in the name of ‘national security.’ There will be an expansion of data theft and unauthorized monetization by tech companies. More people will become attracted by and addicted to gaming, and this will lead to self-harm. Cyber harassment, bullying, stalking and the abetment of suicide will expand.”

Beneficial and Harmful
Paul Jones, professor emeritus at UNC-Chapel Hill School of Information and Library Science, commented, "There is a specter haunting the internet – the specter of artificial intelligence. All the powers of old thinking and knowledge production have entered into a holy (?) alliance to exorcise this specter: frenzied authors, journalists, artists, teachers, legislators and, most of all, lawyers. We are still waiting to hear from the Pope.

"In education, we used to teach people how to use computers. Now we teach computers how to use people. By aggregating all that we can of human knowledge production in nearly every field, the computers can know more about humans, as a mass and as individuals, than we can know of ourselves.

"The upside is that these knowledgeable computers can provide, and will quickly provide, better access to health, education and, in many cases, art and writing for humans. The cost is a loss of personal and social agency at the individual, group, national and global levels.

"Who wouldn't want the access? But who wouldn't worry, rightly, about the loss of agency?

"That double desire is what makes answering these questions difficult. 'Best and most beneficial' and 'most harmful and menacing' are not opposites so much as conjoined – like conjoined twins sharing essential organs and blood systems. Unlike for some such twins, no known surgery can separate them.

"Just as cars gave us, over a short time, a democratization of travel and at the same time became major agents of death – immediately in wrecks, more slowly via pollution – AI and the infrastructure to support it will give us untold benefits and access to knowledge while causing untold harm.

"We can predict somewhat the direction of AI, but more difficult will be understanding the human response. Humans are now, or soon will be, conjoined to AI even if they don't use it directly. AI will be used on everyone, just as one need not drive or even ride in a car to be affected by the existence of cars.

"AI changes will emerge when it possesses these traits:

  • Distinctive presences (AKA voices, but also avatars personalized to suit the listener/reader in various situations). These will be created by merging distinctive human writing and speaking voices – say, Bob Dylan + Bruce Springsteen.
  • The ability to emotionally connect with humans (AKA presentation skills).
  • Curiosity. AI will do more than respond. It will be interactive and heuristic, offering paths that have not yet been offered – we have witnessed this AI behavior in the playing of Go and chess. AI will continue to present novel solutions.
  • A broad and unique worldview. Because AI can be trained on all digitizable human knowledge and can avail itself of information from sensors beyond those available to humans, AI will be able to apply, say, Taoism to questions about weather.
  • Empathy. Humans do not have an endless well of empathy. We tire easily. But AI can seem persistently and constantly empathetic. You may say that AI empathy isn't real, but human empathy isn't always either.
  • Situational awareness. Thanks to input from a variety of sensors, AI can and will be able to understand situations even better than humans.

"No area of knowledge work will be unaffected by AI and sensor awareness.

"How will we greet our robot masters? With fear, awe, admiration, envy and desire."

Beneficial
John Verdon, a retired Canada-based complexity and foresight consultant, said, "Imagine a federally funded foundation (the funding will be no issue because the population is becoming economically literate with Modern Monetary Theory). The foundation would be somewhat along the lines of DARPA. It would only seed and shape the development of open-source tools, devices and platforms in order to strengthen and fertilize a flourishing infrastructure of digital commons. Let your imagination run free, keeping in mind that every light casts shadows and every shadow is cast by some light."

Harmful
John Verdon, a retired Canada-based complexity and foresight consultant, commented, "The enclosure of the programmable lifeworld by private property rights, and the inevitable failures, inequalities, rapacious extractions of value and dampening of the response-abilities needed for the flourishing of our world."

Beneficial and Harmful
Tom Valovic, journalist and author, wrote, "AI and ChatGPT are major initiatives of a technocratic approach to culture and governance that will have profound negative consequences over the next 10 years. If there's one dominant theme that has emerged in my many years of research, it's the ingrained tension between the waning humanities and the rising technology regimes that absorb us.

"It's impossible to look at these trends and their effects on our social and political life without also including Silicon Valley's push toward transhumanism. We see this in the forward march of AI in combination with powerful momentum toward the metaverse. That's another contextual element that needs to be brought in. I see the limitations of human bandwidth and processing power as problematic. I worry about the implications of an organic, evolving, complex, adaptive, networked system that may route around slow human processors and take on an existence of its own. This is an important framework to consider when imagining the future.

"When we awake from this transhumanist fever dream of human perfection, which bears little resemblance to the actual world we've managed to create, I think steady efforts at preserving the core values of the humanities will have proved prescient. This massive and imposed technological infusion will be seen as a chimera. Perhaps we'll even learn how to use some of it wisely.

"I do think that AI is going to force some sort of omega point, some moment of truth past this dark age where the necessary balance between technology, culture and the natural world is restored. Sadly, it's a question of how much 'creative destruction' is needed to arrive at this point. With luck (and effort) I believe there will be a developing understanding that while hyper-technology appears to be taking us to new places, in the long run it's actually resurrecting older, less desirable paradigms – a kind of cultural sleight of hand (or enantiodromia?).

"I found this observation from Kate Crawford, founder of the AI Now Institute at NYU, to be useful along these lines: 'Artificial intelligence is not an objective, universal or neutral computational technique that makes determinations without human direction. Its systems are embedded in social, political, cultural and economic worlds shaped by humans, institutions and imperatives that determine what they do and how they do it. They are designed to discriminate, to amplify hierarchies and to encode narrow classifications.'

"If ChatGPT thinks and communicates, it's because programmers and programs taught it how to think and communicate. Programmers have conscious and unconscious biases and possibly, like any of us, faulty cognitive assumptions that necessarily get imported into platform development. However sophisticated that process becomes, it will remain capable of the unintended consequences of human error, even as those errors present to the end user as machine-based while sequences propagate. Such errors can be hidden and perpetuated in code. If, at some point, the system learns on its own (and I'm just not familiar enough with its genesis to know if that's already the case), then it will be fully capable of making and communicating its own errors. (That's the fascinating part.)

"In the current odd cultural climate, we're all hungry to go back to a world where the 'truth' was not so maddeningly malleable. The idea of truth as some sort of objective reality based on purely scientific principles is, in my opinion, a chimera and an artifact of our Western scientific materialism. And yet we still keep chasing it. As Thomas Kuhn pointed out in his books on the epistemology of science, scientific knowledge is to a large extent a social construct, and that's a fascinating rabbit hole to go down.

"As we evolve, our science evolves. In that sense, no machine, however sophisticated, will ever be able to serve as some kind of ultimate arbiter of what we regard as truth. But we might want to rely on these systems for their opinions and ability to make interesting connections (which is, of course, the basis for creative thinking) or to avoid leaving important elements of research out (which happens all the time in academic and scientific research, of course). The caution is not to be seduced by the illusion of these systems serving up true objectivity. The 'truth' will always be a multifaceted, complex, socially constructed artifact of our very own human awareness and consciousness.

"The use of sophisticated computer technology to replace white-collar and blue-collar workers has been taking place for quite a while now. It will become exponentially greater in scope and momentum going forward. The original promise of futurists back in the day (the 1960s and '70s) was that automation would bring about the four-day work week and eventually a quasi-utopian 'work less/play more' social and cultural environment. What they didn't factor in was the prospect of powerful corporations latching onto these new efficiencies to feather their nests to the exclusion of all else, and the lack of appropriate government oversight that results from the merging of corporate and government power."

Beneficial
Lawrence Lannom, vice president at the Corporation for National Research Initiatives, wrote, "The first and, from my perspective, most obvious benefit of improved digital technology in the world of 2035 will be improvements in both theoretical and applied science and engineering.

"We have gone from re-wiring patch panels in the 1940s, to writing assembly language, to higher-level languages, to low-code and no-code, and now on to generative AI writing code to improve itself. It has been 80 years since the arrival of the first real software, and the pace is accelerating. The changes are not just about increased computing power and ease of programming but, equally or even more importantly, about networking capability.

"We routinely use networked capabilities in all aspects of digital technology, such that we can now regard the network as a single computational resource. Combine compute and network improvements with those in storage capacity and, to a first level of approximation, we can expect that by 2035 all data will be available and actionable with no limits on computing power.

"A great many challenges remain, mostly in the areas of technical and semantic interoperability, but these problems are being addressed.

"All of this new ability to collect and correlate vast amounts of data, run large simulations and, in general, provide exponentially more powerful digital tools to scientists and engineers will result in step changes in many areas, including materials science, biology, drug development, climatology and, overall, our basic understanding of how the world works and how we can approach problems that currently appear insoluble.

"Collaboration will continue to improve as virtual meetings move from the flat screen to a believable sense of being around the same table in the same room, using the same whiteboard. AI assistants will be able to tap the collective resources of humankind to help guide discussion and research. The potential for improvements in the human condition is almost unimaginable, even at the distance of 10-12 years. The harder question is whether we are capable of applying new capabilities for our collective betterment."

Harmful
Larry Lannom, vice president at the Corporation for National Research Initiatives, observed, "In thinking about the potential harm that exponentially improved digital technologies could wreak by 2035, I find that I have two levels of concern.

"The first is the fairly obvious worry that advanced technologies could be used by malevolent actors – at the state, small-group or individual level – to cause damage beyond what they could achieve with today's tools. AI-based autonomous weapons, new pathogens, torrents of misinformation precision-crafted to appeal to recipients and total state-level intrusion into the private lives of the citizenry are just some of the worrying possibilities that are all too easy to imagine evolving by 2035.

"A more insidious worry, however, is the potential erosion of trust at all levels of society and government. More and more of our lives are affected by, or even lived in, the digital realm, and as that environment increases in size and sophistication it seems likely that its impact will increase. But digital reality is much more amenable to distortion and manipulation than even the worst human-level deception.

"The ability of advanced computing systems of all kinds to convincingly generate fake audio and video representations of any public figure, to generate overwhelming amounts of reasonable-sounding misinformation and to use detailed personal information, gathered legally or illegally, to craft precision messaging for manipulation beyond what can be done today could contribute to a complete lack of trust at all levels of society. Once trust is lost, it is difficult to reclaim."

Beneficial
Josh Calder, partner and founder at The Foresight Alliance, wrote, "Proliferating devices and expanding bandwidth will provide an ever-growing majority of humanity access to immense information resources. This trend's reach will be expanded by rapid improvements in translation beyond the largest languages. Artificial intelligence will enable startling new discoveries and solutions in many fields, from science to management, as patterns invisible to humans are uncovered."

Harmful
Josh Calder, partner and founder at The Foresight Alliance, predicted, "Access to quality, truthful information will be undermined by information centralization, AI-produced fakes and propaganda of all types, and the efforts of illiberal governments. Getting to high-quality information may take more effort and expense than most people are willing or able to expend. Centralized, cloud-based knowledge systems may enable distortion or rewriting of reality – at least as most people see it – in a matter of moments. Also key to the future is AI and automation's impact on people. A scenario remains plausible in which growing swathes of human work are devalued, degraded or replaced by automation, AI and robotics, without countervailing social and economic structures to counteract the economic and social damage that results. The danger may be even more acute in the developing world than in richer countries."

Beneficial
Jane Gould, founder of DearSmartphone, commented, "With the speed and rapid diffusion of information between academic researchers and scientists, the foundations of science and technology will grow rapidly. Even those who cannot contribute to this knowledge base will gain from the progress made in technological solutions. However, there is a lot of room for deception and misinformation. In less-scientific communities, we seem to be moving to a more image-based way of processing data. I am not an expert in cognitive learning, but I know that processing images takes less cognitive work than writing and reading. So the seeds for change will rest even more than they do today on an elite, well-educated, well-versed scientific community."

Harmful
Jane Gould, founder of DearSmartphone, responded, "We have been rewriting the concept of screen time and exposure. This trend began in the 2000s, but the introduction of mobility, iPhones and mobile apps in 2007 accelerated the change. We are rewriting childhood for youngsters ages 0 to 5, and not in healthy ways. All infants must go through discrete stages of cognitive and physical growth. There is nothing we can do to speed these up, nor should we. Yet from their earliest moments we put young babies in front of digital devices and use them to entertain, educate and babysit them. These devices use artifices like bright lights and colors to hold their attention, but they do not educate them in the way that thoughtful, watchful parents can. More than anything else, these electronics keep children from playing with the traditional hand-held toys and games that use all five senses to keep babies busy and engaged in play and two-way exchanges. Meanwhile, parents are distracted and pay less attention to their infants because they stay engaged with their own personal phones and touchscreens."

Beneficial
Michael Kleeman, a senior fellow at the University of California, San Diego, who previously worked for Boston Consulting and Sprint, predicted, "Basic connectivity will expand to many more people, allowing access to a range of services that in many places are available only to richer people. This will likely increase transparency, with a dual effect: greater pressure on governments to be responsive to citizens, and greater ability for those who know how to manipulate information to sway opinions with seeming truths."

Harmful
Michael Kleeman, a senior fellow at the University of California, San Diego, who previously worked for Boston Consulting and Sprint, responded, "AI-enabled fakes of all kinds are a danger. We will face the risk of these undermining the basic trust we have in remote communications, if not causing real harm in the short run. The flip side is that they will create a better-informed and more nuanced approach to interpreting digital media and communications, perhaps driving us more to in-person interactions."

Beneficial
Jeremy Pesner, senior policy analyst at the Bipartisan Policy Center, predicted, "In 2035 there will be more and better ways to organize and understand the vast amount of digital information we consume every day. It will be easier to export data in machine-readable formats, and there will be more programs to ingest those formats and display high-level details about them. Because AI will be so prevalent in synthesizing information, it will be much easier to execute a first and second pass at researching a topic, although humans will still have to double-check the results and make their own additions. The falling cost of technology will mean that most people are on fairly even footing with one another, computationally speaking, and are therefore able to play immersive games and create high-quality digital art. Digital inequalities will also be lessened, as high-speed broadband will be available nearly everywhere and just about everyone will know at least the basics of computing. Many will also know the basics of coding, even if they are not programmers, and will be able to execute basic scripts to organize their personal machines and even interface with service APIs. There will be more universal privacy laws, so it is less likely that people's personal information will be leaked through hacks and breaches, and more likely that they can manage their own health data."

Harmful
Jeremy Pesner, senior policy analyst at the Bipartisan Policy Center, wrote, "Most of the major technology services will continue to be owned and operated by a small number of companies and individuals.

"The gap between open-source and commercial software will continue to grow, such that there will be an increasing number of things the latter can do that the former cannot, and therefore almost no one will know how the software we all use every day actually works. These individuals and companies will also continue to make a tremendous amount of money on these products and services, without the users of those services having any way to make money from them.

"Countries like China and Russia will continue to censor their internet tremendously, if not outright disconnect it from the rest of the world.

"Because it is so much easier to publish content digitally than in any other format, people will constantly be glued to their screens and social media, with all of the health and psychological downsides we know those portend.

"There will continue to be a major dissonance between the way people act in person and the way they act on social media, and there will be no clear way to encourage or foster constructive, healthy conversations online when the participants have nothing concrete to gain from them.

"The world will start to run out of the raw metals used in technology manufacturing, prompting a mad dash to track down and recycle metal from any usable source."

Beneficial
Henning Schulzrinne, Internet Hall of Fame member, Columbia University professor of computer science and co-chair of the Internet Technical Committee of the IEEE, predicted, "Amplified by machine learning and APIs, low-code and no-code systems will make it easier for small businesses and governments to develop user-facing systems to increase productivity and ease the transition to e-government. Government programs and consumer demand will make high-speed (100 Mb/s and higher) home access, mostly fiber, near-universal in the United States and large parts of Europe, including rural areas, supplemented by low-Earth-orbit satellites covering the most remote areas. And we will finally move beyond passwords as the most common means of consumer authentication, making systems easier to use and eliminating many security vulnerabilities that endanger systems today."

Harmful
Henning Schulzrinne, Internet Hall of Fame member and co-chair of the Internet Technical Committee of the IEEE, warned, "The concentration of ad revenue and the lack of a viable alternative source of income will further diminish the reach and capabilities of local news media in many countries, degrading the information ecosystem. This will increase polarization, facilitate government corruption and reduce citizen engagement."

Beneficial
Kevin T. Leicht, professor and head of the department of sociology at the University of Illinois Urbana-Champaign, wrote, "Digital technology has vast potential to improve people's health and well-being in the next 20 years or so. Specifically, AI programs will help physicians diagnose so-called 'wicked' health problems – situations we all face as older people, in which several things are wrong with us, some serious and some less so, yet coming up with a holistic way to treat all of those problems and maximize quality of life has been elusive. AI and digital technologies can help sort through the maze of treatments, research findings and more to get to solutions specifically tailored to each patient."

Harmful
Kevin T. Leicht, professor and head of the department of sociology at the University of Illinois Urbana-Champaign, wrote, "On the human-knowledge front we have, as yet, no solution to the simple fact that digital technology gives any random village idiot a national or international forum. We knew that science and best practices didn't sell themselves, but we were completely unready for the alternative worlds people would create that seem to systematically negate the advancements the Enlightenment produced. People who want modern technology to back anti-science and pre-Enlightenment values are justifiably referred to as fascists. The digital world has produced a global movement of such people, and we will have to spend the next 20 years clawing and fighting back against it."

Beneficial and Harmful
Harold Feld, senior vice president at Public Knowledge, predicted, "Reliable, affordable high-speed broadband will become as ubiquitous in the world (including the developing world) as telephone service was in the United States in the late 20th century. The actual technology will vary greatly by country, and we will still see speed differences and other quality-of-service differences that will maintain a digital divide. But the combination of available communications technology and solar-operated systems will enable a wide range of benefits. These will include:

  • Far more efficient resource tracking and allocation, and far more efficient environmental monitoring, will enable dramatic increases in food and clean-water distribution where needed and will help to predict potential environmental disasters with greater accuracy and certainty.
  • Greater communication potential will enable vast improvements in distance learning and telemedicine. In countries where health professionals are scarce, or where travel is difficult, a wealth of diagnostic tools and a broadband connection will allow a handful of trained first responders to treat people locally under the guidance of experienced and more highly trained medical professionals. Necessary resources such as antibiotics will be delivered by drones, and local personnel guided in how to administer and provide follow-up care. As a last resort, doctors can order medical evacuations.
  • Children will have access to education in their native language. Artificial expenses such as uniforms will be eliminated as a requirement. Girls will be able to access equal education without fear of assault.

"Yet, here's the thing:

  • Widespread ubiquitous broadband could easily broaden ubiquitous surveillance, both for corporate purposes and to aid repressive governments.
  • Big data systems will be able to sort the noise from the signal and allow corporate or government interests to predict with incredible accuracy human behavior and how to shape it in ways that best serve their interests.
  • Widespread access to others will create pockets of intense culture shock as communities find their basic assumptions about how to organize society undermined.
  • Basic trust in institutions will be replaced not with healthy skepticism for engagement, but with either complete and fanatical belief in a trusted source or complete disbelief in any source.

"To slightly paraphrase William Butler Yeats: 'Mere anarchy is loosed upon the world … The ceremony of innocence is drowned. The best will lack all conviction, while the worst will be filled with passionate intensity.' Societies may become entirely paralyzed, caught by an inability to rely on facts for basic cooperation, or trapped between warring factions, or both.

"Copyright and technology to manage microtransactions will create huge gaps in knowledge between the haves and have-nots, as even basic educational material becomes subject to limitations on sharing and requirements for access fees.

"Ownership of books or other educational media will become a thing of the past, as every digital source of knowledge will be licensed rather than owned. Book printing will wither away, so that modern educational materials will be inaccessible to those who cannot afford them.

"For the same reason, innovation will slow and become the province of a privileged few able to negotiate access to the needed software tools. Even basic mechanical inventions will have digital locks and software to prevent any tinkering."

Beneficial and Harmful
Kelly Bates, president of the Interaction Institute for Social Change, observed, "We can transform human safety by using technology to survive pandemics, disease, climate shifts and terrorism through real-time communication, emergency response plans and resource sharing through apps and portals. We will harm citizens if there are no or limited controls over hate speech, political bullying, body shaming, personal attacks and the planning of insurrections on social media/online."

Beneficial
Kyle Rose, principal architect at Akamai Technologies, said, "The biggest positive change will be to relieve tedium: AI with access to the internet's knowledge base will allow machines to do 75 percent or more of the work required in creative endeavors, freeing humans to focus on the tasks that require actual intelligence and creativity."

Harmful
Kyle Rose, principal architect at Akamai Technologies, observed, "AI is a value-neutral tool; while it can be used to improve lives and human productivity, it can also be used to mislead people. The biggest tech-enabled risk I see in the next decade (actually, just in the next year, and only getting worse beyond that point) is that AI will be leveraged by bad actors to create very convincing fictions that are used to build popular support for actions premised on a lie. That is likely to take the form of deepfake audio-visual content that fools large numbers of people into believing in events that didn't actually happen. In an era of highly partisan journalism, without a trusted apolitical media willing to value truth over ideology, this will result in further bifurcation of perceived reality between left and right."

Beneficial and Harmful
Matt Moore, a knowledge-management entrepreneur with Innotecture, which is based in Australia, observed, "Human beings will remain wonderful and terrible and banal. That won't change. We'll see greater use and abuse of artificial intelligence. ChatGPT will seem just like the iPhone seems to us today – so 2007. Many mundane tasks will be undertaken by machines – unless we choose to do them for our own pleasure (artisanal drudgery). We will be more productive as societies. There will be more content, more connection, more everything. We will have ecological and climate-related technologies in abundance. We will have digital-twin ecosystems that allow us to model and manage our complex world better than ever. We'll probably have more bionic implants and digital medicine. A subset of society (the neo-Amish) will reject all that in different ways, as it can be overwhelming. We will use these technologies to hurt, exploit and persecute each other. We will surveil, wage war and seek to maximise profit. Parts of our ecosystem will collapse, and our technologies will both accelerate and mitigate that. Fertility will probably drop, as people not only opt out themselves but also opt out their potential children."

Beneficial
John Lazzaro, retired professor of electrical engineering and computer science at the University of California, Berkeley, wrote, 鈥淏y 2035, wireless barcode technology (RAIN RFID) will replace the printed barcodes that are ubiquitous on packaged goods. Fixed infrastructure will simultaneously scan hundreds of items per second, from tens of meters away, without direct line of sight. This sounds like a mundane upgrade. But it will facilitate an awareness of where every 鈥榯hing鈥 is, from the moment its manufacturing begins until its recycling at end of life. This sort of change in underlying infrastructure enables changes throughout society, just as container shipping infrastructure unleashed dozens of major changes in the second half of the 20th century.

鈥淲ireless barcodes let a store take complete inventory several times a day, with 95% accuracy. When the pandemic hit, retailers with this technology were able to pivot to omnichannel operation, so customers could shop online instead of in person, with the purchase being fulfilled from the inventory on the rack in a physical store. Those retailers became the retail winners of the pandemic, driving the rest of retail to put RFID on the fast lane. The leaders extended the use cases of RFID beyond inventory, to self-checkout and loss prevention.

鈥淪eeing this success, other verticals are now taking the first steps into RFID. The logistics giant UPS has stated its intention to put RFID on every package, and to add infrastructure throughout their logistics chain to take advantage of the technology.

鈥淗ealthcare systems are preparing implementations are well. When fully implemented, counterfeit drugs will be easy to detect, as RFID facilitates source authentication, and expired drugs can be identified. Adoption by grocery stores will probably happen last, but when it does, manually scanning items at the self-checkout stand will be replaced by wheeling a shopping cart past a radio gateway that scans all items in parallel, without taking them out of the cart.

“Each example above seems incremental. But looking back to the early commercialization of the Internet, each individual use case also seemed incremental, yet the collective weight of dozens of use cases elevates the incremental changes into a step-function change.”

Beneficial and Harmful
Rosanna Guadagno, associate professor of persuasive information systems at the University of Oulu (Finland), wrote, “By 2035, I expect that artificial intelligence will have made a substantial impact on the way people live and work. AI robotics will replace factory workers on a large scale, and AI digital assistants will also be used to perform many tasks currently performed by white-collar workers. I am less optimistic about AIs performing all of our driving tasks, but I do expect that driving will become easier and safer. These changes have the potential to increase people's well-being as we spend less time on menial tasks. However, these changes will also displace many workers. It is my hope that governments will have the foresight to see this coming and will help the displaced workers find new occupations and/or purpose in life. If this does not occur, these changes will be neither universally welcomed nor universally beneficial to human well-being.

“Emerging technologies taking people's jobs could lead to civil unrest and wide-sweeping societal change. People may feel lost as they search for new meaning in their lives. People may have more leisure time, which will initially be celebrated but will then become a source of boredom. AI technology may also serve to mediate our interpersonal interactions more so than it does now. This has the potential to cause misunderstandings as AI agents help people manage their lives and relationships. AIs that incorporate beliefs based on biases in algorithms may also stir up racial tensions as they display discriminatory behavior without an understanding of the impact these biases may have on humans. People's greater reliance on AIs may also open up new opportunities for cybercrime.”

Beneficial
Sarita Schoenebeck, associate professor in the School of Information at the University of Michigan and director of the Living Online Lab, said, “I'm hopeful that there will be better integration between the digital technologies we use and our physical environments. It is awkward and even disruptive to use mobile phones in our everyday lives, whether at work, at home, walking on the street or at the gym. Our digital experiences tend to compete with our physical environments rather than working in concert with them. I'm hopeful devices will get better at fitting our body sizes, physical abilities and social environments. This will require advances in voice-based and gesture-based digital technologies. This is important for accessibility and for creating social experiences that blend physical and digital experiences.”

Harmful
Sarita Schoenebeck, associate professor in the School of Information at the University of Michigan and director of the Living Online Lab, commented, “I am concerned about young people's exposure to misogynistic and racist content, as well as other kinds of harmful content. My concern is that the exposure may be subtle, tacit and indistinct. It will be difficult for parents or teachers to notice it on a day-to-day basis, and perhaps difficult even for experts to track. The famous adage from the 1964 Supreme Court case Jacobellis v. Ohio, in which Justice Stewart said of pornography, ‘I know it when I see it,’ loses its durability here. We may not know it when we see it. I do not want to restrict our young people from the Internet, but I do want us to better understand the ideas they are being exposed to before those ideas become entrenched and harmful.”

Beneficial and Harmful
John Hartley, a research professor in media and communications at the University of Sydney in Australia, predicted, “The most beneficial changes will come from processes of intersectional and international group-formation, whereby digital life is not propounded as a species of possessive individualism and antagonistic identity, but as a humanity-system, where individuality is a product of codes, meanings and relations that are generated and determined by anonymous collective systems (e.g., language, culture, ethnicity, gender, class).

“Just as we, the species, have begun to understand that we live in a planetary biosphere and geosphere, so we are beginning to feel the force of a sense-making semiosphere (Yuri Lotman's term), within which what we know of ourselves, our groups and the world is both coded and expressed in an open, adaptive, complex system, of which the digital is itself a technological expression.

“At present, the American version of digital life is the libertarian internet as a soft-power instrument of U.S. global cultural hegemony. The direction-of-travel of that system is toward the reduction of humanity to consuming individuals; digital affordances to an internet of shopping; and human relations to corporate decisions.

“Within that setup, users have, however, discovered their own interlinked identities and interests and have begun to proliferate across platforms designed for consumerism, not as market influencers but as intersectional activists.

“A paradigm example of what is necessarily a mixed environment is Greta Thunberg. Her climate activism could not have gone global without digital life. Fridays for Future and School Strike for Climate could not have mobilized 6 million demonstrators without digital organisation. A lone teenager, Thunberg showed the world that innovation can come from anywhere in a digital system, and that collective action is possible to imagine at planetary scale to address a human-made planetary crisis.

“Looking forward, ordinary users are becoming conscious of their own creative agency and are looking for groups in which world-building can be shared as a group-forming change agency. Thus, intersectionality, collective action, and planetary or species-level coding of the category of ‘we’ are what will be of great benefit in digital life, to address the objective challenges of the Anthropocene – not as a false and singular unity of identity, but as a systemic population of difference, each active in their own sphere to link with common cause at group-level.

“At the same time, users are becoming more conscious of their individual ignorance in the context of cultural, political and economic multiplicity. Digital literacy includes recognition of what you don't know. This is the self-consciousness of the expert, who seeks understanding of context, history and others in order to improve their models of knowledge. Knowledge is already riven by power and antagonism, and digital haters are probably better organized than activists for climate justice, but the developing understanding of how the system works both negatively and positively is another emergent benefit of digital literacy at humanity scale.

“The flip side: Incumbent powers, both political and commercial, are propagating stories in favour of conflict. These are now weaponized strategic forces, the continuation of warfare in the cultural realm, where audiences, viewers, players and consumers are encouraged to forget they are citizens – the public of humanity-in-common – and to cast themselves as partisans and enemies whose self-realization requires the destruction of others. The integration of digital life into knowledge, power and warfare systems is already far advanced. By 2035 it will be too late to self-correct without organized resistance.”

Beneficial and Harmful
Jon Stine, executive director of the Open Voice Network, wrote, “Three advances that we will welcome in 2035:

  • A narrowing of the digital and linguistic divide through the ubiquity of natural language understanding and translation. We'll be able to understand each other, if we choose to listen.
  • Rapid advances in early diagnosis in healthcare, achieved through the use of biomarker data and the application of artificial intelligence.
  • Ambient, ubiquitous conversational AI. We'll live in a world of billions of AIs, and every AI will be conversational. Welcome to the post-QWERTY world.

“However, the same digital advances create this 2035 scenario:

  • The hyper-personalized attention economy has continued to accelerate – to the financial benefit of major technology platforms – and the belief/economic/trust canyons of 2023 are now unbridgeable chasms. Concepts of truth and fact are deemed irrelevant; the tribes of the earth exist within their own perceptual spheres.
  • The technology innovation ecosystem – research academics, VCs and start-ups, dominant firms – has fully embraced software libertarianism and no longer concerns itself with ethical or societal considerations. If they can build it, they will (see above).
  • The digital divide has hardened and divided into three groups: the digerati, who create and deliver it out of self-interest; the consumptives, into whose maw is fed ever-more-trite and behavior-shaping messaging and entertainment; and the ignored – the old, the impoverished, those off the grid.”

Beneficial
George Lessard, information curator and communications and media specialist at MediaMentor.ca, responded, “The best thing that could happen would be that the U.S. law that protects internet corporations from being held liable for content posted on their platforms by users will be revoked and that they become as liable as a newspaper is for publishing letters to the editor. The second-best thing would be that internet platforms like Google and Facebook are forced to pay the journalism sources they distribute for that content, as they do in Australia and soon Canada. And the third-best thing that could happen is that sites/platforms like Flickr and YouTube will be required to share the revenue generated by the intellectual property users/members share on their platforms.”

Harmful
George Lessard, information curator and communications and media specialist at MediaMentor.ca, said, “The most harmful thing that will happen is that the intellectual property posted by users to platforms like Facebook, Flickr and YouTube will continue to create revenue for these sites well past the lives of the people who posted it, and their heirs will not be able to stop that drain of income from the creators' families/agents.”

Harmful (Did not respond to Benefits question)
Judith Donath, fellow at Harvard's Berkman Center, and the founder of the Sociable Media Group at the MIT Media Lab, wrote, “Persuasion is the fundamental goal of communication. But, although one might want to persuade others of something false, persuasiveness has its limits. Audiences generally do not wish to be deceived, and thus communication throughout the living world has evolved to be, while not 100% honest, reliable enough to function.

“In human society by 2035, this balance will have shifted. AI systems will have developed unprecedented persuasive skills, able to reshape people's beliefs and redirect their behavior. We humans won't quite be an army of mindless drones, our every move dictated by omnipotent digital deities, but our choices and ultimately our understanding of the world will be profoundly influenced by algorithmically generated media exquisitely tuned to our individual desires and vulnerabilities. We are already well on our way to this. Companies such as Google and Facebook have become multinational behemoths (and their founders, billionaires) by gathering up all our browsings and buyings and synthesizing them into behavioral profiles. They sell this data to marketers for targeting personalized ads and they feed it to algorithms designed to encourage the endless binges of YouTube videos and social posting, providing an unbounded canvas for those ads.

“New technologies will add vivid detail to those profiles. Augmented-reality systems need to know what you are looking at in order to layer virtual information onto real space: the record of your real-world attention joins the shadow dossier. And thanks to the descendants of today's Fitbits and Ouras, the records of what we do will be vivified with information about how we feel – information about our anxieties, tastes and vulnerabilities that is highly valuable for those who seek to sway us.

“Persuasion appears in many guises: news stories, novels and postings scripted by machine and honed for maximum virality; co-workers, bosses and politicians who gain power through stirring speeches and astutely targeted campaigns. By 2035, one of the most potent forms may well be the virtual companion, a comforting voice that accompanies you everywhere, her whispers ensuring you never get lost, never are at a loss for a word, a name or the right thing to say. If you are a young person in the 2030s, she'll have been your companion since you were small – she accompanied you on your first forays into the world without parental supervision; she knew the boundaries of where you were allowed to go, and when you headed out of them she gently yet irresistibly persuaded you to head home instead. Since then, you never really do anything without her. She's your interface to dating apps. Your memory is her memory. She is often quiet, but it is comforting to know she is there accompanying you, ensuring you are never lost, never bored. Without her, you really wouldn't know what to do with yourself.

“Persuasion could be used to advance good things – to promote cooperation, daily flossing, safer driving. Ideally, it would be used to save our overcrowded, overheating planet, to induce people to buy less, forgo air travel and eat lower on the food chain. Yet even if used for the most benevolent of purposes, the potential persuasiveness of digital technologies raises serious and difficult ethical questions about free will and about who should wield such power.

“These questions, alas, are not the ones we are facing. The accelerating ability to influence our beliefs and behavior is far more likely to be used to exploit us; to stoke a gnawing dissatisfaction assuageable only with vast doses of retail therapy; to create rifts and divisions, a heightened anxiety calculated to send voters to the perceived safety of domineering authoritarians. The question we face instead is: How do we prevent this?”

Beneficial
Marvin Borisch, chief technology officer at Red Eagle Digital, based in Berlin, wrote, “Since the invention of the ARPANET and the Internet, decentralization has been the driving factor of our modern digital life and communication in the background. The navigation and use of decentralized structures, on the other hand, have not been easy, but over the last decades the emerging field of user experience has evolved interfaces and made digital products easier to use.

“After an episode of centralized services, the rise of distributed ledger technology in the modern form of blockchains and of decentralized, federated protocols such as ActivityPub makes me believe that by 2035 more decentralized services and other digital goods will enhance our lives for the better, giving ownership of data back to the end user rather than to data silos and service providers. If our species strives for a stellar future rather than a mono-planetary one, decentralized services with local and federated states, along with handshake synchronization, would create a great basis for futuristic communication, software updates and more.”

Harmful
Marvin Borisch, chief technology officer at Red Eagle Digital, based in Berlin, commented, “The rise of surveillance technology is dangerously alarming. European and U.S. surveillance technology is hitting a never-before-seen level, which gets adapted and optimized by more autocratic nations all around the globe. The biggest problem is that such technology has always been around and will always be around. It penetrates people's privacy more and more, step by step. Karl-Hermann Flach, a journalist and politician, once said, ‘Freedom always dies centimeter by centimeter,’ and that goes for privacy, one of the biggest guarantees of freedom.

“The rise of DLT (distributed ledger technology) in the form of blockchains can be used for great purposes, but over-regulation driven by technological incompetence and fear will create a big step toward the transparent citizen and therefore the transparent human. Such deep transparency will enhance the already existing chilling effect and might cause a decline in individuality.

“Such surveillance will come in the form of transparent ‘central bank digital currencies,’ which are a cornerstone of social credit systems. It will come with the weakening of encryption through governmental mandatory backdoors, but also with the rise of quantum computing. The latter could, and probably will, be dangerous because of the cost of such technology.

“Quantum resistance might already be a thing, but its spread will be limited to those who have access to quantum computing. New technological gatekeepers will rise, deciding who has broader access to such technology.”

Beneficial and Harmful
Bob Frankston, internet pioneer and technology innovator, said, “The idea that meaning is not intrinsic is a difficult one to grasp. Yet this idea has defined our world for the last half-century. Electronic spreadsheets knew nothing about finance yet allowed financiers and others to leverage their knowledge. Unlike the traditional telecommunications infrastructure, the Internet does not transport meaning – only meaningless packets of bits. Each of us can apply our own meaning if we accept intrinsic ambiguity.

“It poses a challenge to those who want to do human-centered infrastructure. The idea that putting such intent into the ‘plumbing’ actually limits our ability to find our own meaning is counterintuitive. Getting past that and learning how to manage the chaos is key. Part of this is having an educational system that teaches critical thinking and how to learn.

“We need to accept a degree of chaos and uncertainty and learn to survive it, if we have the time.

“I might be expecting too much, but I can hope that some of those growing up with the new technologies will see the powerful ideas that made them possible and eschew the hubris of thinking they can define the one true future.”

“I worry about the hubris of those who think they can define the one true future and impose it on us. I see the danger in an appeal to authority or those who do not understand how AI works and thus trust it far too much. Just as we used to use steam engine analogies to understand cognition, we now use problematic computer analogies.

“We've spent thousands of years developing a society implicitly defined by physical boundaries. Today we must learn how to live safely in a world without such boundaries. How do we manage the conflicts between rights in a connected world?

“How will we negotiate a world that we understand is interconnected physically (with climate as an example) and more abstractly, as with the Internet?”

Beneficial
Jonathan Taplin, author of “Move Fast and Break Things: How Google, Facebook and Amazon Cornered Culture and Undermined Democracy,” said, “I am optimistic for the first time in 20 years that both the regulatory agencies and the Congress are serious about governing in the digital world. The FTC is seriously challenging the Big Tech monopolies, and the SEC seems intent on bringing crypto exchanges under its purview. Whether these changes can be enacted in the next two years will be a test of the Biden administration's willingness to take on the Silicon Valley donor class, which has become a huge part of Democratic campaign financing. At the Congressional level, I believe that Section 230 reform and some form of Net Neutrality applied to Google, Amazon, Meta and Apple (so they don't favor their own services) are within the realm of bipartisan cooperation. This also makes me optimistic.”

Harmful
Jonathan Taplin, author of “Move Fast and Break Things: How Google, Facebook and Amazon Cornered Culture and Undermined Democracy,” commented, “Wendell Berry once wrote, ‘It is easy for me to imagine that the next great division of the world will be between people who wish to live as creatures and people who wish to live as machines.’ This is my greatest fear. From the point the technological Singularity was first proposed, the marriage of man and machine has proceeded at a pace that worries even the boosters of artificial general intelligence (AGI).

“I understand that Peter Thiel would like to live to 200, but that possibility fills me with dread. And the notion that AI (DALL-E, GPT-3) will create great ORIGINAL art is nonsense. These programs assume that all the possible ideas are already contained in the data sets and that thinking merely consists of recombining them. Our culture is already crammed with sequels and knockoffs. AI would just exacerbate the problem.

“We are mired in a culture of escapism – crypto fortunes, living a fantasy life seven hours a day in the metaverse, colonies on Mars. The dreams of Elon Musk, Marc Andreessen, Peter Thiel and Mark Zuckerberg are ridiculous and dangerous. They are ‘bread and circuses’ put forth by hype artists at a time when we should be financing the transition to a renewable energy economy instead of spending $10 trillion on a pointless Martian space colony.”

Beneficial
Jonathan Kolber, author of “A Celebration Society,” said, “I believe that we will see multiple significant and positive developments in the digital realm by 2035. These include:

  • Widespread availability of immersive VR (sight, sound, touch, and even limited smell and taste) at a low cost. Just as cellular phones with high-resolution screens now serve most people on Earth, basic VR devices should be similarly available for, at minimum, sight and sound. Further, I expect a FULLY immersive Dreamscape-type theater experience to be widely available, with thousands of ‘channels’ for experiences of wonder, learning and play in 10-minute increments in many cities worldwide.
  • Wireless transmission of data will be fast enough and reliable enough that, in most cases, there will be the subjective experience of zero latency.
  • Courses will be taught this way. Families will commune at a distance. It will offer a new kind of spiritual/religious experience as well.
  • By 2035, I expect the prohibition on entheogens to have largely lifted and special kinds of therapy to be available in most countries using psilocybin, psychedelic cannabis and (in select cases, per Dutch research) MDMA and LSD. PTSD will be routinely cured in one or two immersive VR experiences using these medicines under therapeutic guidance.”

Harmful
Jonathan Kolber, author of “A Celebration Society,” commented, “Without the emergence of a ‘third way,’ such as the restored and enhanced Venetian Republic-based model, the world will continue to crystallize into democracies and Orwellian states.

“Democracies will continue to be at risk of becoming fascist, regardless of the name a regime claims. As predicted as far back as the ancient Greeks, strongmen will emerge in times of crisis and instability, and accelerating climate change and accelerating automation, with the attendant wholesale loss and disruption of jobs, will provide these in abundance.

“Digital tools will enable a level of surveillance and control in all types of systems far beyond Orwell's nightmares. Flying surveillance drones the size of insects, slaved to AI systems via satellite connections, will be mass-produced. These will be deployed individually or in groups according to shifting needs and conditions and the policy goals set by those whom Adam Smith called The Masters.

“In most cases, however, the drones will not be required for total surveillance and control of a populace. The ubiquitous phones and VR devices will suffice, with AIs discreetly monitoring all communication for signals deemed subversive or suspicious.

“Revolt will become increasingly difficult in such circumstances.

“We take universal surveillance as a given circa 2035. The only question becomes: surveillance by whom, and to what effect? Our celebration society proposal turns this on its head.”

Harmful (Did not respond to Benefits question)
Soraya Chemaly, an author, activist and co-founder of the Women's Media Center Speech Project, wrote, “Human-centered development of digital tools and systems – I'd like to say I am feeling optimistic about value-sensitive design that would improve human connections, governance, institutions, well-being, but, in fact, I fear we are backsliding.”

Beneficial
Zizi Papacharissi, professor and head of the communication department and professor of political science at the University of Illinois-Chicago, responded, “I see technologies improving communication among friends, family and colleagues. Personally mediated communication will be supported by technology that is more custom-made, easier to use, conversational-agent-supported and social-robot-enabled. I see technology advancing in making communication more immediate, more warm, more direct, more nuanced, more clear and more high-fidelity. I see us moving away from social media platforms, due to growing cynicism about how they are managed, and this is a good thing. The tools we use will be more precise, glossy and crash-proof – but they will not bring about social justice, heightened connection or healthier relationships. Just because you get a better stove does not mean you become a better cook. Getting a better car does not immediately make you a better driver.”

Harmful
Zizi Papacharissi, professor and head of the communication department and professor of political science at the University of Illinois-Chicago, said, “The lead motivating factor in technology design is profit. Unless the mentality of innovation is radically reconfigured, so as to consider innovative something that promotes social justice and not something that makes things happen at a faster pace (and thus better serves for-profit goals), tech will not do much for social justice. We will be making better cars, but those cars will not have features that motivate us to become more responsible drivers; they will not be accessible in meaningful ways; they will not be friendly to the environment; they will not improve our lives in ways that push us forward (instead of giving us different ways to do what we have already been able to do in the past).”

Beneficial
Mauro D. Ríos, an adviser to the eGovernment Agency of Uruguay and director of the Uruguayan Internet Society chapter, responded, “In 2035, advances in technology can and surely will surprise us, but they will surprise us even more IF human beings are willing to change their relationship with technology.

“Advances in technology will surprise us over the next 10 years; for example, we will possibly see the emergence of the real metaverse, something that does not yet exist. We will see a clear evolution of wearable tech, and we will also be surprised at how desktop computing undergoes a remake of the PC.

“But technological advances alone do not create the future, even as they will continue to advance unfailingly. The ways in which people use them are what matter. What should occupy us is to understand whether we and tech will be friends, lovers or a happy marriage. We have discovered, from the laws of robotics to the ethics behind artificial intelligence, that our responsibility as a species, as we create and dominate technology, is to generate a new social contract between it and us.

“The ubiquity of technology in our lives should lead us to question how we relate to it. Even back in the 1970s and 1980s it was very clear that the border between the human and the non-human was quite likely to blur soon. Today that border is blurry in certain scenarios. This is generating doubts, suspicions and concerns.

“By the year 2035, humans should have already resolved this discussion and have adapted and developed new, healthy models of interaction with technology. Digital technology is a permanent part of our world in an indissoluble way. It is necessary that we include a formal chapter on it in our social contract.”

Harmful
Mauro D. Ríos, an adviser to the eGovernment Agency of Uruguay and director of the Uruguayan Internet Society chapter, wrote, “2035 awaits us with more complex challenges than we can imagine. Technology incites us, provokes us, corners us and causes us to question everything.

“One of the biggest risks today is that the technology industry is resistant to establishing common standards. Steps like those taken by the European Community in relation to connectors are important, but technology companies continue to insist on avoiding standardization for economic gain. In the past most of the battles were hardware-related; today they are software-related.

“If we want to develop things like the true metaverse or the conquest of Mars, technology has to have common criteria in key aspects. Common standards should be established in artificial intelligence, automation, remote or virtual work, personal medical information, educational platforms, interoperability and communications, autonomous systems and other areas.”

Beneficial
Nandi Nobell, futurist designer and senior associate at CallisonRTKL, a global architecture, planning and design practice, wrote, “Whether physical, digital or somewhere in between, interfaces to human experiences are all we have and have ever had. The body-mind (consciousness) construct is already fully dependent on naturally evolved interfaces to both our surroundings and our inner lives, which is why designing more intuitive and seamless ways of interacting with all aspects of our human lives is both a natural and relevant step forward – it is crossing our current horizon to experience the next horizon.

“With this in mind, extended reality, the metaverse and artificial intelligence become increasingly important all the time, as there are many evident horizons we are crossing through our current endeavours simply by pursuing any advancement.

“Whether it is the blockchain we know of today or something more useful, more user- and environmentally friendly, and smoother to integrate – something that can allow simple instant contracts and permissionless activities of all sorts – this can enable our world to verify the source and quality of content, along with many other benefits.

“The best interfaces to experiences and services that can be achieved will influence what we can think and do, both as tools and services in everyday life, but also as the path to education, communication and so many other things. Improving interfaces, both physical and digital, makes the difference between having and not having superpowers as we advance.

“Connecting a wide range of technologies that bridge physical and digital possibilities grows the reach of both. This also means that thinking of the human habitat as belonging to all areas the body and mind can traverse is more useful than inventing new categories and silos to classify experiences by. Whatever the future versions of multifaceted APIs are, they have to be flexible, largely open and easy to use. Connectivity between the ways, directions, clarity, etc., of communication can extend the reach and multiplication of any possibilities – new or old.”

Harmful
Nandi Nobell, futurist designer and senior associate at CallisonRTKL, a global architecture, planning and design practice, commented, “First comes data – if the FAANGs of the world (non-American equivalents are equally bad) are allowed to remain even nearly as powerful as they are today, problems will become ever greater, as their strength as manipulators of individuals grows deeper and more advanced. Manipulation will become vastly more advanced and difficult to recognize.

“Artificial intelligence is already becoming so powerful and versatile that it can soon shape any imagery, audio, text or geometry in an instant. This means anyone with the computational resources and some basic tools can trick just about anyone into new thoughts and ideas. The owners of the greatest databanks of individuals' and companies' history and preferences can easily shape strategies to manipulate groups, individuals and entire nations into new behaviours.

鈥淲hy invest in anything if you will have it stolen at some point? Is some sort of perfect fraud-prevention system (blockchain or better) relevant in a future in which any ownership of any sort of asset class 鈥 digital or physical 鈥 is under threat of loss or distortion?

鈥淓xtended reality and the metaverse often gets a bit of a beating for how it can make people more vulnerable to harassment, and this is a real threat, but artificial intelligence is vastly more scalable 鈥 essentially it could impact every human with access to digital technology more or less simultaneously, while online harassment in an immersive context is not scalable in a similar sense.

鈥淪triking a comfortable and reasonable balance between safe and sane human freedom and surveillance technologies to keep a legit bottom line of this human safety is going to be hard to achieve. There will be further and deeper abuses in many cultures. This may create a digital world and lifestyle that branches off quite heavily from the non-digital counterparts, as digital lives can be expected to be surveilled while the physical can at least in principle be somewhat free of eavesdropping if people are not in view or earshot of a digital device.

"That being said, a state or company may still reward behaviour that trades away data of all sorts, including data about anything happening offline – as has been the case in dictatorships throughout history.

"The very use and manufacturing of technology may also cost the planet more than it provides to the human experience, and as long as the promises of the future drive the value of stocks and investments, we are not likely to understand when to stop advancing on a frontier that is on a roll.

"Healthcare will likely become both better and worse – the class divide will open greater gaps – but long-term it is probably better for most people. The underlying factors generally have more to do with individual human values than with the technologies themselves.

"There might be artificial general intelligence by 2035. Such AI may have great potential to be helpful. Perhaps one individual can create value for humanity or the planet that is a million times greater than the next person's contribution, but we do not know whether this value holds over time, or if it becomes just as bad as Nick Boström's 'paper clip' analogy. Most people are willing to borrow from the future, and at the same time children are meant to be this future. What do we make of it? Are children therefore multi-dimensional batteries?"

Beneficial and Harmful
Frank Kaufmann, president of Twelve Gates Foundation and Values in Knowledge Foundation, wrote, "I find all technological development good if it is developed and managed by humans who are good. The punchline is always this: To the extent that humans are impulsively driven by compassion and concern for others and for the good of the whole, there is not a single prospective technological or digital breakthrough that bodes ill in its own right. Yet, to the extent that humans are impulsively driven toward self-gain, with others and the good of the whole expendable in the equation, even the most primitive industrial or technological development is to be feared.

"I hold this view in its most extreme form – simple, fundamental and universal. For example, if humans were fixed in an inescapable makeup characterized by care and compassion, the development of an exoskeletal, indestructible, AI-controlled military robot that could anticipate my movements up to four miles away and morph to look just like my loving grandmother could be a perfectly wonderful development for the good of humankind. On the other hand, if humans cannot be elevated above the grotesque makeup in which others and the greater good are expendable in the pursuit of selfish gain, then the invention of a fork is a dangerous, even horrifying thing.

"The Basis to Assess Tech – Human Purpose, Human Nature: I hold that the existence of humans is intentional, not random. This starting point establishes for me two bases for assessing technological progress: How does technological/digital development relate to 1) human purpose and 2) human nature?

"Purpose: Two things are the basis for assessing anything: the purpose and the nature of the agent. This is the same whether we assess CRISPR gene editing or whether I turn left or right at a stoplight. The question in both cases is: Does this action serve our purpose? This tells us if the matter in question is good or bad. It simply depends on what we are trying to do (our purpose). If our purpose is to get to our mom's house, then turning left at the light is a very bad thing to do. If the development of CRISPR gene editing is to elevate dignity for honorable people, it is good. If it is to advance the lusts of a demonic corporation, or the career of an ego-insane medical monster, then likewise breakthroughs in CRISPR gene editing are worrisome.

"Unfortunately, it is very difficult to know what human purpose is. Only religious and spiritual systems recommend what that might be.

"Human Nature: The second basis for assessing things (including digital and technological advances) relates to human nature. This is more accessible. We can ask: Does the action comport with our nature? For simplicity I've created a limited list of what humans desire (human nature):

Original desires

1. To love and be loved

2. Privacy (personal sovereignty)

3. To be safe and healthy

4. Freedom and the means to create (creativity can be in several areas)

a. Ingenuity

b. Artistic expression

c. Sports and leisure, physical and athletic experience

Perverse and broken desires

1. Pursuit of and addiction to power

2. Willingness to indulge in conflict

Three Bases to Assess: In sum then, analyzing and assessing technological and digital development by the year 2035 should move along three lines of measure.

1. Does the breakthrough serve the reason why humans exist (human purpose)?

2. Which part of human nature does the breakthrough relate to?

3. Can the technology have built-in protections to prevent perfectly exciting, wonderful breakthroughs from becoming a dark and malign force over our lives and human history?

"All technology coming in the next 15 years sits on a two-edged sword according to the measures for analysis described above.

Likely Benign, Little Danger – Some coming breakthroughs are merely exciting, such as open-air gesture technology, prosthetics with a sense of touch, printed food, printed organs, space tourism, self-driving vehicles and much more.

Medium Danger – Some coming digital and tech breakthroughs raise medium levels of concern over social or ethical implications, such as hybrid-reality environments, tactile holograms, domestic service and workplace robots, quantum-encrypted information, biotechnology and nanotechnology, again, and much more.

Dangerous, Great Care Needed – Finally, there is a category of coming developments that should be put in the high-concern category. These include BCI and brain-implant technology, genome editing, cloning, selective breeding, genetic engineering, artificial general intelligence (AGI), deepfakes, people hacking, clumsy efforts to fix the environment through potentially risky geoengineering, CRISPR gene editing and, again, many others.

"Applying the three bases in assessing the benefits and dangers of technological advances in our time can be done rigorously, systematically and extensively on any pending digital and tech developments. They are listed here on a spectrum from less worrisome to potentially devastating.

"It is not the technology itself that marks it as hopeful or dystopic. This divergence is independent of the inherent quality of the precise technology itself. It is tied to the maturation of human divinity – ideal human nature."

Beneficial (Did not respond to Harms question)
Marc Rotenberg, founder and president of the Center for AI and Digital Policy, said, “Innovative developments in the energy sector, coupled with the use of digital techniques, will counter the growing impact of climate change as data models will provide political leaders and the public with a greater awareness of the risks of climate catastrophe. Improved modeling will also help assess the effectiveness of policy responses. AI models will spur new forms of energy reduction and energy efficiency.”

Beneficial
Kenneth A. Grady, futurist and consultant on law and technology and editor of The Algorithmic Society newsletter, predicted, "The best and most beneficial changes reside at the operational level. We will learn to do more things more efficiently and, most likely, more effectively through digital technology than we can through analog technology or current digital technology. Our current and near-term-future digital tools perform well when asked to answer simple questions, such as 'what is the pattern?' or 'what changed?' Tasks such as developing drugs, comparing images from various modalities and analyzing large, complex databases (e.g., weather information) leverage the current and past focus of digital-tool research. The potential move to quantum computing will expand our capabilities in these and similar areas."

Harmful
Kenneth A. Grady, futurist and consultant on law and technology and editor of The Algorithmic Society newsletter, said, "The most harmful or menacing change in digital life likely to occur by 2035 is the overuse of immature digital technology. The excitement over the apparent 'skill' of chatbots based on large language models (e.g., ChatGPT) tends to overwhelm the reality that such software is experimental. Those who create such software acknowledge its many limitations, but still they release it into the wild. Individuals without appreciation for the limitations start incorporating the software into systems that people will use in real-life, sometimes quite important, settings. The combination will lead to inevitable failures, which the overeager will chalk up to the cost of innovation. Neither the software nor society is ready for this step. History has shown us that releasing technologies into the wild too soon leads to significant harm. History has not taught us to show restraint."

Beneficial (Did not respond to Harms question)
Jeff Johnson, principal consultant at UI Wizards, Inc., and former chair of Computer Professionals for Social Responsibility, predicted, "Cars, trucks and buses will be improved in several ways. They will have more and better safety features, such as collision avoidance and accident-triggered safety cocoons. They will be mostly powered by electric motors, have longer ranges than today's electric cars and benefit from improved recharging infrastructure. In addition:

  • A significant proportion of AI applications will be designed in a human-centered way, improving human control and understanding.
  • Digital technology will improve humankind's ability to understand, sequence and edit genetic material, fostering advances in medicine, including faster creation of more-effective vaccines.
  • Direct brain-computer interfaces and digital body implants will, by 2035, begin to be beneficial and commercially viable.
  • Auto-completion in typing will be smarter, avoiding the sorts of annoying errors common with auto-complete today. Voice control and biometric control, now emerging, may replace keyboards, pointers and touch screens.
  • Government oversight and regulation of digital technology will be more current and more accepted.
  • Mobile digital devices will consume less power and will have longer-lasting batteries.
  • Robots – humanoid and non-humanoid, cuddly and utilitarian – will be more common, and they will communicate with people more naturally.

"Machine learning will continue to be used naively, however, and people will continue to rely on it, causing many poor decisions. Cryptocurrency will wax and wane but will continue to waste significant power, productivity and human mental and emotional energy. Bad actors will develop autonomous weaponry. It will be distributed worldwide by rogue nations and arms dealers, contributing to a rise in terrorism and wars and in the destruction caused by them."

Beneficial
Isabel Pedersen, director of the Digital Life Institute at Ontario Tech University, said, "The most beneficial changes in digital life are difficult to predict because people rarely have shared values on the concept of betterment or human well-being. Put another way, social values involving lifestyle betterment are diverse and oftentimes conflicting. However, there is one area that most people agree upon: The opportunity for dramatic change lies in medical industries and the goal to improve healthcare.

"Human-centric AI technologies that are embodied and augmentative could converge to improve human health in dramatic ways by 2035. With the advent of personal health technologies – those that are worn on or implanted in bodies and designed to properly respond to individuals through dedicated AI-based platforms – the opportunity exists to diagnose, treat, restore, monitor and care for people in improved ways.

"In this case, digital life will evolve to include healthcare not as a set of isolated activities (e.g., going to a doctor for diagnosis of a single health issue) but as an ongoing relationship whereby individual people interact with human doctors and caregivers (and their organizations) in relation to their own personalized biometric data. These types of utopian or techno-solutionist predictions have been made before; however, deployment, adoption and adaptation to these technologies will finally start to occur.

"Design cycles that promised convergence are finally transforming into actual deployment cycles. The risk is that the rise of these technologies will benefit only those who can afford to purchase them by 2035, leading to further socio-economic problems of the digital divide.

"Another risk is algorithmic bias leading to racism, ageism, ableism or gender discrimination in healthcare. To achieve mass adoption of these technologies by societies, governments will need to regulate them to ensure equity and invest in them in order to actually benefit all members of society. Without the shared value of human well-being for everyone, the dream of improved human health will be limited."

Harmful
Isabel Pedersen, director of the Digital Life Institute at Ontario Tech University, predicted, "Digital life technologies are on course to further endanger social life and extend socio-economic divides on a global scale by 2035. One cause will be the further displacement of legitimate news sources in the information economy. People will have even more trouble trusting what they read. The deprofessionalization of journalism is well under way, and technocultural trends are only making this worse.

"Along these lines, one technology that will harm people in 2035 is AI-based content-generation technology used through a range of deployments. Appropriate use of automated writing technologies seems unlikely, and their misuse will further impoverish digital life by unhinging legitimate sources of information from the public sphere.

"Text-generation technologies, large language models and more-advanced natural language processing (NLP) innovations are undergoing extensive hype now; they will progress to further disrupt information industries. In the worst instances, they will help leverage disinformation campaigns by actors motivated by self-serving or malicious reasons."

Beneficial
James S. O'Rourke IV, professor of management at the University of Notre Dame and author of 23 books on communication, predicted, "The best of what technology will have to offer will be in medicine, space flight, planetary defense against asteroids and space debris, interpersonal communication, data creation and storage and the mining of enormous data sets. Only the imagination will limit people's use of such inventions."

Harmful
James S. O'Rourke IV, professor of management at the University of Notre Dame and author of 23 books on communication, commented, "Let's explore some of the worst that technology will have to offer in regard to human rights by 2035.

"First, I and others have genuine concern about social media platforms, for several reasons. One is the sheer volume of messaging and video content. If 500 hours of video content are now posted to YouTube every minute, Google and Alphabet cannot possibly monitor the content.

"Facebook owner Meta says that AI catches about 90 percent of terms-of-service violations, many of which are the worst humanity has to offer – simply horrific. The remaining 10 percent have been contracted out to firms such as Accenture. Two problems seem apparent here. First, Accenture cannot keep employees on the content-monitoring teams longer than 45 to 90 days due to the heinous nature of the content itself. Turnover on those teams is 300% to 400% per annum. Second, the contract with Facebook is valued at $500 million per annum, and the Accenture board is unwilling to let go of it. Facebook says, 'Problem solved.' Accenture says, 'We're working on it.'

"The social media platforms are owned and operated either by billionaire entrepreneurs who may pay taxes but do not disclose operating figures, or by trillion-dollar publicly held firms that appear increasingly impossible to regulate. Annual income levels make it impossible for any government to levy a fine for misbehavior that would be meaningful. Regulating such platforms as public utilities would raise howls of indignation regarding First Amendment free-speech infringements. Other social media platforms, such as TikTok, are either owned or controlled by dictatorial governments that continue to gather data on literally everyone, regardless of residence, citizenship or occupation.

"Another large concern about digital technology revolves around artificial intelligence. Several programs have either passed or come very close to passing the Turing Test. ChatGPT is but one example. The day when such algorithms can think for themselves and evade the efforts of Homo sapiens to control them is honestly not far off. Neither legislators nor ethicists have given this subject the thought it deserves.

"Another concern has been fully realized. Facial recognition (FR) technology is now universally employed in the People's Republic of China to track the movements, statements and behavior of virtually all Chinese citizens (and foreign visitors). Racial profiling to track, isolate and punish the Uyghur people has proven highly successful. In the United States, James Dolan, who owns the New York Knicks and Rangers as well as Radio City Music Hall, is using facial recognition to exclude all attorneys who work for law firms that have sued him and his corporate enterprises. They cannot be admitted to the entertainment venues, despite paying the price of admission, simply because of their affiliation. Many people fear central governments, but private enterprises operated by unaccountably rich individuals have proven they can use FR and AI to control or punish those with whom they disagree."

Beneficial
Christopher Le Dantec, associate professor of digital media at Georgia Tech, said, "The big gains will be in medical breakthroughs from AI- and ML-assisted research."

Harmful
Christopher Le Dantec, associate professor of digital media at Georgia Tech, predicted, "The next industrial revolution, driven by AI and automation, will further advance wealth disparity and undermine stable economic growth for all. The rich will continue to get vastly richer. No one will be safe; everyone will be watched by someone or something. Every aspect of human interaction will be commodified and sold, with value extracted at each turn. The public interest will fall to private motivations of power, control and value extraction.

"Social media and the larger media landscape will continue to entrench and divide. This will continue to challenge political discourse, but science and medical advances will also suffer as a combination of outrage-driven revenue models and foreign actors advance mis- and disinformation to serve their interests.

"The tech sector will face a massive environmental/sustainability crisis as labor revolts spread through regions like China and India, as raw materials become more expensive and as the mountain of e-waste becomes unmanageable.

"Ongoing experiments in digital currency will continue to boom and bust, concentrating wealth in the venture and financial industries, further impoverishing late-coming retail investors and adding to a staggering energy and climate crisis.

"Activists, journalists and private citizens will come under increased scrutiny and threat through a combination of institutional actors working against them and other private individuals who will increasingly use social media to harass, expose and harm people with whom they don't agree."

Beneficial (Did not respond to Harms question)
John McNutt, professor of public policy at the University of Delaware, said, "Technology offers many new and wonderful possibilities, but how people adapt those technologies to their lives and the uplift of their societies is where the real genius occurs. Our challenge has always been how we use these tools to make life better and to prevent harm.

"The legal/lawmaking system has begun to take technology much more seriously, and while the first efforts have not been particularly impressive, the beginnings of new legal regimes have emerged. The nonprofit sector will rebalance away from the current bricks-and-mortar sector to a mix of traditional organizations, voluntary associations and virtual organizations. Many of the issues that plague the sector will be addressed by technology and the new forms of social organization it will allow. Communities will develop their own technology, which will supplement government."

Beneficial
Daniel Pimienta, leader of the Observatory of Linguistic and Cultural Diversity on the Internet, commented, "I hope to see the rise of the systematic organization of citizen education on digital literacy, with a strong focus on information literacy. This should start in the earliest years and carry forward through life. I hope to see the prioritization of the ethics component (including bias evaluation) in the assessment of any digital system. I hope to see the emergence of innovative business models for digital systems that are NOT based on advertising revenue, and I hope that we will find a way to give credit to the real value of information."

Harmful
Daniel Pimienta, leader of the Observatory of Linguistic and Cultural Diversity on the Internet, commented, "I fear the generalization of state-governed, comprehensive citizen-surveillance systems from birth to death, from home to work and in between. I fear the generalization of bias in uncontrollable digital systems that are designed to serve the objectives of surveillance capitalism."

Beneficial
Juan Carlos Mora Montero, coordinator of post-graduate studies in planning at the Universidad Nacional de Costa Rica, said, "The greatest benefit that I predict for 2035 related to the digital world is that technology will allow people to have access to equal opportunities both in the world of work and in culture, allowing them to discover other places, travel, study, share and enjoy spending time in real-life experiences."

Harmful
Juan Carlos Mora Montero, coordinator of post-graduate studies in planning at the Universidad Nacional de Costa Rica, wrote, "The most damaging change that can occur between now and 2035 is a deepening of inequities in access to communications tools and the further polarization of humanity between people who have access to the infinite opportunities that technology offers and people who do not. This situation would increase the social inequality in the economic sphere that exists today and would force it to spill over into other areas of life."

Harmful (Did not respond to Benefits question)
John McNutt, professor of public policy at the University of Delaware, observed, "Sadly, while technology empowers positive behavior, it also empowers anti-social behavior and government repression. Hate groups, terrorists and bad actors of all stripes can use what technology offers to do their bidding. In addition, there are unintended consequences. As technology becomes more sophisticated, those externalities will become more difficult to predict and prevent."

Harmful (Did not respond to Benefits question)
Llewellyn Kriel, retired CEO of a media services company based in Johannesburg, South Africa, wrote, "Human-centered issues will increasingly take a backseat to tyranny in Africa, parts of the Middle East and the Near East. This is due to the threat digital tech poses to inept, corrupt and self-serving governance. Digital tools will be exploited to keep populations under control.

"Already, governments in countries in sub-Saharan Africa are exploiting tech to ensure that populations in rural areas remain servile – denying connectivity, entrenching poverty and making connectedness a privilege rather than a right. This control will grow.

"Through control and manipulation of education and curricula, governments ensure that political policies are camouflaged as fact and truth. This makes real truth increasingly hard to identify. Digital growth and naïveté ensure that popularity and easy-to-manipulate majoritarianism become 'the truth.' This, too, will escalate.

"Health is the only sector that holds some glimmer of hope, though access to resources will remain a control screw to entrench tyranny. Already the African digital divide is being exploited and communicated as an issue of narrow political privilege rather than one of basic human rights.

"The impotence of developers to ensure equity in digital tech extends to the kind of new apartheid of which Israeli futurist Yuval Noah Harari warned. The ease with which governments can and do manipulate access and social media will escalate. For Africa, the next decade is very bleak.

"The fact that organised crime remains ahead of the curve will not only seriously raise the existing barrage of threats to individuals but also exacerbate suspicion, fear and rejection of digital progress in a baby-with-the-bathwater reaction.

"The gravest threat remains government manipulation. This is already dominant in sub-Saharan Africa and will grow simply because governments can, do and will control access. These responses are being written and formulated under precisely such extensive control by the ruling African National Congress and its myriad alliance proxies.

"While the technology will grow worldwide, so will tyranny and control – especially in the geographically greater rural areas, as is currently the case in the Southern African Development Community region, which includes 16 countries in southern Africa. Rulers ensure their security by denying access. This will grow because technology development's focus on profit over rights equates to majority domination, populist control and trendy, fashionable fads over equity, justice, fairness and balance."

Beneficial (Did not respond to Harms question)
Robin Allen, a UK-based legal expert in AI and machine learning and co-author of "Technology Managing People: The Legal Implications," wrote, "I expect to see really important steps forward from just a debate about ethical principles to proper regulation of artificial intelligence as it regards overall governance and impacts on both individuals and institutions. The European Union's AI Act will be a complete game changer. Meanwhile, steps will be taken to ensure that definitional issues will be addressed by CEN/CENELEC and IEEE."

Beneficial and Harmful
Warren Yoder, longtime director at the Public Policy Center of Mississippi, now an executive coach, said, "As the 21st century picks up speed, we are moving beyond a focus on the protocol-mediated computation of the Internet. The new focus is on computation that acts upon itself – not yet with autonomous agency, but certainly moving in that direction. Three beneficial changes stand out for the medium-term promise they offer: machine learning, synthetic biology and the built world.

"ChatGPT and other large language models command most of the attention at the moment because they speak our languages. Text, images and music are how we communicate with each other and, now, with computation. But machine learning offers much more. It promises to revolutionize math and science, disrupt the economy and change the way we produce and engage with information. Educators are rethinking how they teach. Many of the rest of us will soon realize that we must do the same.

"COVID vaccines arrived in the nick of time – a popular introduction to the potential of synthetic biology. Drug discovery, mRNA treatments for old diseases, modification of the immune system to treat autoimmune disorders and many other advances in synthetic biology promise dramatically improved treatments in the medium term.

"Adding computation to the built environment is generally called the Internet of Things. But that formulation does not at all prepare the imagination for the computational changes we are now experiencing in our physical world. Transportation, manufacturing, even the normal tasks of everyday life will see profound gains in efficiency.

"Haunting each of these beneficial changes are the specters of gross misuse, both for the entrepreneur class's vanity and for big-business profit. We could lose not only our privacy but also our freedom of voice and of exit.

"Our general culture is already adapting. Artists quickly protested the appropriation of their freely shared work to create the machine-learning tools that could replace them. We do not generally acknowledge the speed of culture change, which happens even faster than technology change. Culture slurps tech with its morning coffee.

"Governance, on the other hand, is a messy business. The West delegates initial governance to the businesses that own the tech. Only later do governments try to regulate the harmful effects of tech. The process works poorly, but authoritarian regimes are even worse. In the medium term, how well we avoid the most harmful effects of machine learning, synthetic biology and the built world depends on how well we cobble together a governance regime. The pieces are there to do an adequate job in the United States and the European Union. Success is anyone's guess."

Beneficial
Richard F. Forno, principal lecturer and director of the graduate cybersecurity program at the University of Maryland-Baltimore County, responded, "AI and machine-learning capabilities will continue to work their way into society, resulting in more-efficient workflows in many (likely mostly white-collar) industries. By extension, more-intelligent automation will likely result in significant shifts as task-oriented jobs are eliminated for labor cost savings. Along those lines, new fields of expression, such as AI-generated art, music and entertainment, will become mainstream attractions instead of AI/ML capabilities being used only to enhance traditional entertainment products (e.g., beyond 'de-aging,' SFX and creating fantasy landscapes)."

Harmful
Richard F. Forno, principal lecturer and director of the graduate cybersecurity program at the University of Maryland-Baltimore County, wrote, "Anything man creates, man can misuse. Technologies used to enable freedom of speech or expression can be constrained to restrict it. Technologies used to provide 'smart' medical assistance (i.e., pacemakers, drug dispensing) can be co-opted and used to cause harm.

"As a cybersecurity professor rooted in the humanities, I worry that, as with most new technologies, individuals and society will be more interested in the likely potential benefits, conveniences, cost savings and the 'cool factor' and will fail – or be unwilling – to recognize or even consider the potential risks or ramifications. Over time, that can lead to infosocial environments in which corruption, abuse and criminality thrive at the hands of a select few political or business entities, which in turn presents larger social problems requiring remediation."

Beneficial
Naveen Rao, a healthcare entrepreneur and founder and managing partner at Patchwise Labs, said, "Among the beneficial changes I see are:

  • More human-centered tech/digital development – reduction (but not elimination) of some systemic disparities in access to web/digital tools, via better rural broadband availability, more-intentional product design and tech/data policy at the organization/institutional level
  • Smoother government operations in areas of taxes, DMV, voting, civic/citizen engagement (e.g., census, public services)
  • Health – better (but not universal) access to care through the widespread availability of a single digital 'front door' experience with numerous self-serve options (check-ins, appointment scheduling, Rx refills, virtual visits, payment, etc.)
  • Knowledge and education – a shift to primarily digital textbooks in high schools and colleges, which removes the cost burden on students and enables real-time curriculum updates; a shift toward more group education
  • The 'experience' of digital engagement will evolve for the better, with more-integrated digital tools that don't require eyes to be glued to a screen (voice, AR/XR, IoT)."

Harmful
Naveen Rao, a healthcare entrepreneur and founder and managing partner at Patchwise Labs, responded, "Everything that's bad today is going to get worse as a direct result of the U.S. government's failure to regulate social media platforms: Cyberbullying, corporate-fueled and -funded misinformation campaigns, gun violence and political extremism will all become more pronounced and ingrained, deeply shaping the minds of the next generation of adults (today's grade schoolers).

"Adults' ability to engage in critical thinking – their ability to discern facts and data from propaganda – will be undermined by the exponential proliferation of echo chambers, calcified identity politics and the erosion of trust in government and social institutions. These will all become even more shrouded by the wool of digital life's ubiquity.

"The corporate takeover of the country's soul – profit over people – will shape product design, regulatory loopholes and the systemic extraction of time, attention and money from the population. I do think a cultural counterbalance will emerge (at what point I can't guess), toward less digital reliance overall, but this will be left to the individual or family unit to foment, rather than to policymakers, educators, civic leaders or other institutions."

Beneficial
Robert M. Mason, a University of Washington professor emeritus expert in the impact of social media on knowledge work, wrote, "I expect expanded accessibility to a wider range of digital technologies and applications through the use of natural language interfaces and greater use of improved graphics. This will be enabled by:

  • The 'democratization' of access to digital processes and services, including online information and online knowledge bases; the digitization of knowledge
  • Expanded scope of online knowledge
  • Higher-resolution graphics that enable realistic representations of images and presentation of complex data relationships and analytic findings such as statistical relationships
  • Improved functionality and expanded use of natural language interfaces with digital knowledge bases and applications

"I expect greater integration of functional applications. This will stimulate innovation and the creation of new services for transportation and logistics. Past examples include the combination of GPS, large-scale integration, image processing, the World Wide Web and Wi-Fi into a mobile phone, and further system integration to enable ride-sharing and delivery services."

Harmful
Robert M. Mason, a University of Washington professor emeritus expert in the impact of social media on knowledge work, said, "The erosion of trust and faith in human institutions is of concern. Expanded accessibility to a wider range of technologies and applications for storing and promoting falsehoods under the pretense of sharing information and knowledge is detrimental. Then there is also the growth in the number of 'influencers' who spread rumors based on false and incomplete information.

"In addition, the increased expectation of rapid access to information – and people's accompanying impatience with the delays or uncertainties associated with issues that require deeper research or analysis – is extremely troublesome.

"There continues to be an erosion of trust in the institutions that value and support critical thinking and social equity."

Beneficial and Harmful
Seth Finkelstein, principal at Finkelstein Consulting and Electronic Frontier Foundation Pioneer Award winner, wrote, "AI has arrived. I've seen many cycles of AI hype. I'm going out on a limb and saying that, this time, it's real. It now passes a key indicator that signals an actual technological advance – the Porn Test: Do people use this to create pornography, and are the results appealing? The outcome here isn't ringing a bell; it's blaring a siren. The technology has reached a point where consumer applications are being built. Further, another reliable key indicator is evident – the Lawyer Test: Are expensive corporate lawyers suing over this? When professional hired guns start shooting at each other, that usually indicates they're fighting over something significant.

"Now, this has nothing to do with the scary AI bogey beloved of writers who dress up a primal monster in a science-fiction skin. Rather, there have been major breakthroughs that have advanced the field and will ultimately be truly world-changing. And I have to reaffirm my basic realism that we won't be getting utopia (the Internet sure didn't give us that). But we will be getting many benefits that will advance our standard of living.

"To give just a few examples: Even as I type this, at the very start of the development, I'm seeing practical tools that significantly improve the productivity of programmers. I don't believe AI will replace programmers as a profession. But there's going to be a shift in which some bottom-level coding will be as obsolete as the old job of manually doing calculations.

"Entertainment is going to undergo another major improvement in production quality. I'm not going to make the silly pundit prediction of 'democratization,' because that never works, for economic reasons. But I will point out the way CGI (computer-generated imagery) changed movies and animation; AI will take that to another level.

鈥淲e’re already seeing experiments with new ways of searching. Search has always been an arms-race between high-quality information versus spammers and clickbait farms. That’ll never stop because it’s human nature. But the battlefield has just changed, and I think it’ll take a while for the attackers to figure out how to engage with heavy artillery being rolled out.

鈥淭here’s a whole set of scientific applications which will benefit. Medical diagnostics, drug discovery, anything to do with data analysis. Essentially, we can currently generate and store a huge amount of data. And now we have a new tool to help make sense of it all. While pundits who predict the equivalent of flying cars are not justified, that shouldn’t cause us to ignore that both flying (commercial air transportation) and cars (mass produced automobiles) had profound effects over the last century.

鈥淣owadays, I’m deeply troubled by how much just trying to keep my digital footprint low is starting to make me feel like I’m an eccentric character in a dystopian SF novel (quirkily using 鈥榖y Stallman’s beard!鈥 as an exclamation 鈥 a reference to Richard M. Stallman, who has been relentlessly arguing about freedom and technology for decades now). Every item I buy, every message I send, every physical place I go, every ebook I read, every website I browse, every video I watch … there’s a whole system set up to record it.

"When we think of the world of the book '1984,' I believe one aspect that has been lost over the years is how the idea of the telescreen was, for the time, extremely high-tech. Television wasn't even widespread when it was written. Who would have thought that when such technology arrived, people would be eager to have telescreens installed in their homes for the consumer benefits? We consider the phrase 'Big Brother' to be chilling. But in that fictional world, maybe to an apolitical person it has a meaning more like 'Alexa' or 'Siri.'

"There was a fascinating moment this year, just after the U.S. Supreme Court overturned nearly 50 years of federal protection of abortion rights, when the chattering class had a brief realization that all this surveillance could be extremely useful for the enforcement of anti-abortion laws. There's a principle that activists should try to relate global concerns to people's local issues. But it was very strange to me to see how this huge monitoring system could only be considered in terms of a 'hot take' in politics ('Here's this One Weird Trick' that could be used against pregnant women seeking an abortion). And then the glimmer of insight just seemed to disappear.

鈥淣ow, it’s not as if I’m the only person to ever notice the perils. There’s quite a bit of material on the dangers of 鈥榮urveillance capitalism.鈥 But doing anything about it runs into a problem of affecting present corporation profits for the benefit of safeguarding civil-liberties. And that’s just a very marginalized argument.

"I wish I knew more about how this is playing out in China or Singapore or other places that fully embrace such governmental population controls. The little I've read about the Chinese 'social credit' system seems to outline a practical collaboration of government and corporate power that is very disturbing.

鈥淏y Stallman’s beard, I worry!鈥

Beneficial
Pete Cranston, a pro bono UK knowledge consultant and former co-director of Euforic Services Ltd., said, "I expect an enhanced state of ubiquity of these technologies, enabling all global populations to participate equally in their own languages, without needing to learn any input mechanism other than speaking or any output other than the visual or auditory. Convergence of tech means this is likely to happen through handheld mobile devices that will be as cheap as pens, since there will be so many manufacturers. As we deal with the climate crisis, there will be real-time information – through the above ubiquitous, convergent tech – on how each individual is impacting the planet through their activities, including purchasing.

"There's hope for some progress in limiting surveillance capitalism. The level of control recently introduced by the European Union will be extended, and companies that harvest data will only be able to do so on the basis of informed consent."

Harmful
Pete Cranston, a pro bono UK knowledge consultant and former co-director of Euforic Services Ltd., wrote, "I see here the converse of my thoughts on positive trends. One major concern is that splinternets and commercial monopolies will prevent all global populations from participating equally in their own languages, without needing to learn any input mechanism other than speaking or any output other than the visual or auditory. Convergence of tech means this is likely to happen through handheld mobile devices that will be as randomly priced as at present, but where the highest level of security and control will be more expensive than the majority of people will (want to) afford.

"In regard to the climate crisis, greenwashed and false information will conceal the planetary impact of ubiquitous, convergent tech, with information on how an individual is impacting the planet through their activities – including purchasing and using tech – available only at a cost and requiring at least a first-degree educational level.

"In regard to surveillance capitalism, a poor outcome would be that the level of control recently introduced by the EU will not be extended and carried on, and companies that harvest data will continue to harvest and share personal data without informed consent."

Beneficial
Philippa Smith, communications and digital media expert, research consultant and commentator, said, "The best and most beneficial changes will result from advances in our decision-making abilities. More than 65 years after the first computer-to-computer communication occurred, we will be in good stead in the ongoing pursuit of beneficial changes for all peoples, based on the knowledge we have accumulated as the digital has become the norm.

"Drawing on our past experience and realisations about what has worked and what has not in our digital lives will enable a better mindset by 2035, one that thinks more critically and deeply about where we want to be in the future. Designers, investors and stakeholders will be more cognizant of the need to think about social responsibility, the ways that technology can be more inclusive when it comes to the chasm of digital divides, and how potential pitfalls might be averted – especially when it comes to AI, cybersafety, cybersecurity, negative online behaviours and so on.

"Researchers will continue to work across disciplines to delve deep in applying theory and practice in their investigations and to pursue new methods – questioning and probing and gaining new knowledge to guide us along the yellow brick road towards a better digital life. Ideally, governments, tech companies and civil society will work collaboratively in designing the best possible digital life – but this will require honesty, transparency and compassion. Hopefully that is not too much to ask."

Harmful
Philippa Smith, communications and digital media expert, research consultant and commentator, wrote, "It is unlikely that by 2035 existing harmful and menacing online behaviours – particularly those affecting human health and well-being, such as cyberbullying, abuse and harassment, scamming, identity theft, online hate, sexting, deepfakes, misinformation, the dark web, fake news, online radicalisation and algorithmic manipulation – will have faded from view. In spite of legislation, regulation and countermeasures, they will have morphed in more sinister ways as our lives become more digitally immersive, bringing new challenges to confront.

"Much will depend on the management of technology development. Attempts to predict new and creative ways in which negative outcomes can be circumvented will be required. My main concern for the future, however, is at a bigger-picture level: the effects that harmful and menacing changes in digital life will have on the human psyche and our sense of reality.

"Future generations may not necessarily be better off living a deeply immersive digital life, falling prey to algorithmic manipulation or conspiracy theories, or forgetting about the real physical world and all it has to offer. We will need to be careful what we wish for."

Beneficial and Harmful
Howard Rheingold, pioneering internet sociologist and author of "The Virtual Community," commented, "Large language models (LLMs), generative AI and machine learning are tingling my antennae a lot – the way the graphical user interface and the Web first did in their early days. But I think this evolution is going faster. Without getting into too many details I don't understand, the 'large language' part of it is that the models are based on very large collections of texts, images, sounds and code. So if it weren't for all of us putting everything online over the past three decades, there wouldn't be anything to apply machine learning to.

"If we honestly look back at the last decades of rapid technological change for hints about the decades to come, we're in for a world of hurt along with some really miraculous stuff. I sense that we are at an inflection point in the conduct of science as significant as the introduction of computers: the use of machine learning techniques as scientific thinking and knowledge tools. Proteins, for just one example, are topologically complex and can fold into a large number of possible shapes. Much of immune-system and anti-cancer therapy relies on matching the shape of proteins on the surface of a cell. Now AI can propose previously unknown proteins of medical significance.

"Machine learning (oversimplified) uses iterative computations modeled on the way neurons work. It can be applied to datasets other than the omniversal ones sought by large language models. LLMs don't 'know,' but the way significant knowledge can be parsed out of them is, in my opinion, impressive, although the technology is in its infancy. Yes, it swallows all the bull along with the good info; yes, it is unreliable and makes stuff up; and no, the models are not general intelligence – they are tools. They don't understand. They do statistics. Think of them as thinking-knowledge tools. As mathematics and computers come to enable human minds to go places they were previously unable to explore, I see a lot of change coming from this symbiosis of machine learning and human production of words, images, sounds and code.

"Computational biology is a good example of this two-edged miracle. Wanna get scary about the other edge of the AI sword? Generative AI once suggested 40,000 chemical weapons in just six hours. I recall that Bill Joy wrote a Wired magazine essay (23 years ago!) titled 'Why the Future Doesn't Need Us.' In that essay he mentioned affordable desktop wetlabs capable of creating malicious organisms. A good way to think about a proposed technology is to ask: What would 4chan do with it? Connecting computational biology to wetlab synthesizers is just a matter of money and expertise. What will 4chan do with LLM tools?"

Beneficial
Robert Y. Shapiro, professor and former chair of the political science department at Columbia University and faculty fellow at the Institute for Social and Economic Research and Policy, responded, "The changes to watch for – and this is being optimistic: First, I have great concern for the protection of data and individuals' privacy, and second, there have to be much more serious, concerted and thoughtful efforts to deal with issues of misinformation and disinformation. Unfortunately, these hopes could also be answers to a question about the worst and least-beneficial changes."

Harmful
Robert Y. Shapiro, professor and former chair of the political science department at Columbia University and faculty fellow at the Institute for Social and Economic Research and Policy, commented, "I repeat my earlier response. First, I have great concern for the protection of individuals' data and privacy, and second, there have to be much more serious, concerted and thoughtful efforts to deal with issues of misinformation and disinformation."

Beneficial (Did not respond to Harms question)
Bill Woodcock, executive director of the Packet Clearing House, said, "The foundation of all current digital technology is electricity, and the single largest beneficial development we're seeing right now is the shift from the consumption of environmentally destructive fossil fuels to the efficient use of the sun's energy. This is happening in several ways. First, unexpectedly large economies in photovoltaic panels and the consequent dramatic reduction in the cost of solar-derived electricity are making all less-efficient forms of electrical production comparatively uneconomical. Second, non-electrical processes are being developed with increasing rapidity to supplant previously inefficient and energy-consumptive processes for a wide range of needs, including cooling and water purification. Together, these effects are reducing the foundational costs of digital technology and equalizing opportunities to apply it. Combined with the broader distribution of previous-generation chip-making technologies and the further proliferation of open-source designs for hardware as well as software, I anticipate that a far greater portion of the world's population will be in a position to innovate, create and produce digital infrastructure in 2035 than today. They will be able to seize the means of production."

Harmful (Did not respond to Benefits question)
Stephen Abram, principal at Lighthouse Consulting, Inc., wrote, "ChatGPT was released only six weeks before I wrote this, and it is already changing strategic thinking. Our political and governance structures are not competent to comprehend the international, transformative and open challenge this technology offers, and regulation, if attempted, will fail. If we can invest in the conversations and agreements needed to manage the outcomes of generative AI – good, neutral and bad – and avoid the near-term potential consequences of offloading human endeavor, creativity, intelligence, decisions, nuance and more, we might survive the first wave of generative AI.

"As copycat generative AIs proliferate, this is a gold rush that will change the world. Misinformation, disinformation and political influence through social media: As the tools, including ChatGPT, allow for the creation of fake videos, voices, text and more, the problem is going to get far worse, and democracies are in peril. We have not made a dent in the role of bad actors and disinformation and the part they play in democracies. This is a big, hairy problem that is decades away from a framework, let alone a solution.

"TikTok has become somewhat transformational. Ownership of this platform aside, the role of fake videos and its strong presence in post-millennial demographics are of concern. Are any alternatives in place that are better? (Probably not.) Then there is the transformation of core tools – Google and search, the Microsoft suite, the Apple portfolio, etc. The massive investments of Microsoft, Alphabet/Google, Meta and Apple in generative AI tools, and their emergent integration with core workplace tools in the absence of a conversation and framework for protecting privacy, identity, etc., are a massive concern."

"ChatGPT will start with a 'let a thousand flowers bloom' strategy for a few years. As always, human adoption of the tools will go through a curve that takes years and results in adoption that can be narrow or broad, sometimes with different shares of usage in different segments. It is likely that programming and coding will adopt more quickly. Narrow tools, such as those for conversational customer service, art (sadly including publishing, video and visual art), writing (including all forms of writing – presentations, scripts, speeches, white papers) and more, will emerge gradually but quickly."

Beneficial and Harmful
William L. Schrader, advisor to CEOs, previously co-founder of PSINet, wrote, "I am disappointed with mankind and where it has taken the internet. I hope the dreams we old Internet folks had – dreams that kept us sleeping soundly after working 18 hours a day, seven days a week, to build the greatest communications system ever – do come true. So far there have been good and bad outcomes.

1) Health and scientific advances are moving twice or three times faster. This is not limited to big pharmaceuticals; it is focused on many massive improvements. One is fully remote surgeries in small towns without doctors, with only lightly trained medical assistants or one registered nurse on site. This would include all routine surgical procedures. For more complex surgeries, the patient would need to be flown or driven hundreds of miles and possibly die in the process. This would be global so that we all had access, not just the rich. THAT is what we imagined in 1985 and before. It only takes really outstanding robotic 3-D motion equipment installed in a surgical suite that is maintained by the local team; high bandwidth supporting the video for the expert surgeon in a big medical center and the robotic controls from the experts' location to the surgical site; and a team on both sides that is willing to give it a try and not get hung up on insurance risk. This must involve participants from multiple locations. This is not simply a business opportunity for a startup to assemble (the equipment is almost there, along with the software and the video). This is a lifesaver.

2) Truth beating fascism is now required. We built this commercial Internet to stop the government from limiting the information each of us could access. We imagined that only a non-government-controlled Internet would enable that outcome. Freedom for all, we thought. FALSE. Over the past decade or so, political operatives in various parts of the world have proven that social media and other online tools are excellent at promulgating fear and accelerating polarization. Online manipulations of public sentiment, rife with false details that spread fear and create divisiveness, have become a danger to democracy. I would like the Internet – the commercial Internet – to fight back with vigor. What Internet methods, what technologies, what timing: All remains to be seen. But people (myself included) understand it is time to build strong countermeasures. We want all sides to be able to talk openly.

3) Climate change and inflation receive a lot of attention in the press, on both Main Street and Wall Street. Looking at inflation, I trust our financial balancing system, with the Federal Reserve Board and the thousands of brilliant analysts worldwide who watch its movements using the latest online tools; other nations' central banks are just as in tune as ours, even if, like ours, they are a bit focused on their own country. Inflation will resolve itself. Climate change, however, will not be solved – not by politicians of any persuasion, not by the largest power companies, not by the latest gadgets in electric vehicles (EVs), not by carbon-capture technology and possibly not by anything. That could result in the end of the planet supporting Homo sapiens. Alternatively, the commercial Internet could encourage the 2 to 4 to 6 billion people who use it to not drive for one hour and to turn off all electricity for the same hour – essentially a unified strike to tell the elected officials, appointees, monarchs and autocrats in charge of the governments of all countries that the time has come to do something so our grandchildren can survive. Only the Internet can do this. Please, someone, start and support these movements.

4) Science tells us that we MUST expect more pandemics. Bill Gates has stated it clearly and funded activities that promise to help. We must stop listening to 'it's over' or 'it's not any worse than a cold' when our beloved grandparents have died or can expect to if they mingle with their children's children. In total, over 6.7 million people have died. In the last year, 85 percent of the dead were elderly (over 65) in all countries, rich and poor. If only the commercial Internet could band together to convince those people who don't believe in pandemics or don't care about their grandparents to stop voting or to die from COVID – or the next one that comes along. Yes, this is a positive statement. There is a way for the Internet to persuade naysayers to stay away from the elderly or to shop when the elderly do not.

5) The war between Russia and Ukraine will expand beyond Ukraine, whether Ukraine 'loses' or 'wins.' The Internet can continue to support the tens of thousands of Ukrainian voices – videos showing hundreds of indictable war crimes by the head of Russia, who started the war a year ago. The Internet can communicate from any one person to any other one person, or to millions. The truth matters. Lives are being lost hourly on all sides, all because we fail to say something or do something."

"There are many scenarios that may play out between now and 2035, but the worst is the following: The commercial Internet has created opportunities for the evil side of man to excel with great speed, impact and lack of accountability. I am not talking about spam email, phone calls or text messages. I am talking about this: At its next election, the United States, the best of any democracy, might come to be led by a fascist supremacist. If this happens, it is likely that that faction may also have control of the Supreme Court and both houses of Congress. This could be accomplished using manipulative tactics on the Internet that create fear, spread lies and polarize the populace. The next step could be the U.S. sending military support to Russia instead of Ukraine, wiping out the middle class. The Internet enables this. The broad – and sometimes far too silent – community of intelligent, caring citizens who prefer not to live in a fascist state must use the Internet to find a way to stop it."

Harmful (Did not respond to Benefits question)
June P. Parris, a member of the Internet Society chapter in Barbados and former member of the UN Internet Governance Forum Multistakeholder Advisory Group, wrote, "Human rights: Some developing countries are not aware of human rights or do not practice them. If they are not aware of human rights or misunderstand them, how can they put policies in place that will not harm citizens? What needs to take place is a standard across a set of policies and protocols that are followed by every government, every country and all citizens.

"Governments: They need to follow these policies, guidelines and protocols religiously, not the way they do things now; they need to be made accountable. The poor deserve the same opportunities as the rich. Institutions are not connected; systems should hold data and should monitor this data to prevent breaches and hacking.

"Human knowledge: Hacking – for example, a recent incident at a local hospital – should be explained and reported back. Experts should be brought in to fix the problems. Companies in developing countries do not always employ those qualified to do the job. Often these people are not up to date with what is going on in the developed world; there is a lack of up-to-date skills in the industry.

"Human health and well-being: All should have the same rights in this sector. All citizens should have access to IT, health treatment and education, and the cost of the internet should be affordable so that everyone has full access to health care, living essentials and education.

"Human connections: Some have access to information and some don't. Relying on hearsay is not an effective way to communicate. If you are not a member of the party, some of your rights are denied and information is not shared across the board. The elderly suffer as a result. Social policy is lacking, especially for the poor, the disabled, the elderly and children with problems. Charities are not always operating with guidelines. The right of speech and access to assistance are not always honored; complainers are victimized and disregarded.

"As I see it, humans seem resistant to technology. Despite several opportunities to use technology, not much has changed over the past 10 years, and that is unlikely to change with citizens, governments and technocrats.

"Governments have introduced online platforms in order to make things easier for citizens; however, especially in the developing world, the platforms are not easy to navigate and seem not to be maintained efficiently. Those of us who want to use tech are frustrated. In many instances websites are down, or Wi-Fi is not working properly or is too expensive.

"Technocrats are arrogant and misunderstand what is needed to give the public easy access to online tools. Sometimes the people just have to give up.

"Populations, even those who should know how to use technology, seem lazy, and their use of technology is not up to standard. Schools do not seem to be teaching students the use of technology."

Beneficial and Harmful
Ray Schroeder, senior fellow at the University Professional and Continuing Education Association, said, "The dozen years ahead will bring the maturing of the relationship between human and artificial intelligence. In many ways, this will foster equity through enhanced education, access to skill development and broader knowledge for all – no matter people's gender, race, where they live or their economic status.

鈥淓ducation will be delivered through AI-guided online adaptive learning for the most part in the first few years, and more radical 鈥榲irtual knowledge鈥 will evolve after 2030. This will allow global reach and dissemination without limits of language or disability. The ubiquity of access will not limit the diversity of topics that are addressed.

鈥淚n many ways, the use of AI will allow truths to be verified and shared. A new information age will emerge that spans the globe.

鈥淧erhaps the most impressive advances will come with Neuralink-type connections between human brains and the next evolution of the internet. Those without sight will be able to see. Those without hearing will be able to hear. And all will be able to connect to knowledge by just tapping the connected internet through their virtual memory synapses. Virtual learning will be instant. One will be able to virtually recall knowledge into the brain that was never learned in the ways to which we are accustomed. Simply think about a bit of information you need, and it will pop into your memory through the connected synapses. The potential for positive human impact for brain-implanted connectivity is enormous, but so too is the potential for evil and harm.

鈥淭he ethical control of knowledge and information will be of the utmost importance as we move further into uses of these digital tools and systems. Truth is at the core of ethics. Across the world today, there seems to be a lower regard for truth. We must change this trend before the power of instant and ubiquitous access to knowledge and information is released.

鈥淢y greatest concern is that politics will govern the information systems. This may lead to untruths, partial truths and propaganda being disseminated in the powerful new brain-connected networks. We must find ways to enable AI to make judgments of truth in content, or at least allow for access to the full context of information that is disseminated. This will involve international cooperation and collaboration for the well-being of all people.鈥

Beneficial
Philip J. Salem, a communications consultant and professor emeritus at Texas State University, said, “First, I think the most important changes will relate to climate change. There will be advances in storing energy, and the political system will move from mixed sources of energy to those that exclude fossil fuels. Furthermore, digital technologies will evolve to help manage personal energy consumption and to help diminish those behaviors that damage the climate. Second, there will be more mindful use of social media, especially among the newer generation of users. Social media will also be less dominating, with more fluid enrollment in a variety of sites. Third, governments will begin to restrict a variety of digital uses. This will vary from enforcement of monopoly laws to holding some organizations subject to libel and slander laws for misuse of their sites.”

Harmful
Philip J. Salem, a communications consultant and professor emeritus at Texas State University, wrote, “In regard to human wellness, I see three worrying factors. First, people will continue to prefer digital engagement to actual communication with others. They will use the technology to ‘amuse themselves to death’ (see Neil Postman) or perform for others, rather than engage in dialogue. Performances seek validation, and for these isolated people validation for their public performances will act as a substitute for the confirmation they should be getting from close relationships. Second, people will increase their predisposition to communicate with others who are similar to themselves. This will bring even more homogenous social networks and political bubbles. Self-concepts will lose more depth and governance will be more difficult. Third, communication competence will diminish. That is, people will continue to lose their abilities to sustain conversation.”

Beneficial and Harmful
Valerie Bock, principal at VCB Consulting, wrote, “We are going to go through a period of making serious mistakes as we integrate artificial intelligence into human life, but we can emerge with a more-sophisticated understanding regarding where human judgment is necessary to modify any suggestions made by our artificially intelligent assistants. Just as access to search engines and live mapping has made life better informed and more efficient for those of us privileged enough to have access to them, AI, too, will help people make better decisions in their daily lives.

“It is my hope that we will also become more sophisticated in our use of social networks. People will become aware of how they can be gamed, and they will benefit from stronger regulations around what untruths can be shared. We will also learn to make better use of our access to the strongest thinkers in our personal social circles and in the wider arenas in our societies.

“By 2035, I am hopeful that our social conventions will have adapted to the technological advances which came so quickly. Perhaps we will instruct our personal digital assistants to turn off their microphones when we are dining with one another or entertaining. We will embrace the basket into which our smartphones go when we are having face-to-face interactions at work and at home. There will be a whole canon of sound advice regarding when and under what circumstances to introduce our children to the tech with which they are surrounded. I’m hopeful that this will mean practicing respectful interaction, even with the robots, while understanding all the reasons why time with real people is important and precious.”

“I was once an avid fan of the notion that markets will, with appropriate feedback from consumers, adjust to serve human welfare. I no longer believe that to be true. Decades of weakening governmental oversight have not served us. Technology alone cannot serve humanity. We need people to look out for one another, and government is a more likely source of large-scale care than private enterprise will ever be.

“I fear that the tech industry ethos that allows new technologies to be released to the public without serious consideration of potential downsides is likely to continue. Humans are terrible at imagining how our brilliant inventions can go wrong. We must commit to regulation and adequately fund regulators in a way that allows them the capacity to keep abreast of developments and encourage industry to better pre-identify the unexpected harms that might emerge when new technologies are introduced to society. If not, we could see a nightmarish landscape of even worse profiteering in the face of real human suffering.”

Beneficial (Did not respond to Harms question)
Valdeane Brown, founder of NeurOptimal, predicted, “There will be fully autonomous vehicles embedded within comprehensive ecosystems that have a fundamental emphasis on safety, efficiency and ease of use. And there will be fully individualized and comprehensive health management systems with emphasis on empowering healthy living for each person.”

Beneficial
Sam S. Adams, artificial general intelligence researcher at Metacognitive Technology, previously a distinguished engineer with IBM, commented, “In regard to human-centered development, the trend will continue of increasingly sophisticated tool functionality with increasingly accessible and simplified interfaces, allowing a much larger number of humans to develop digital assets (software, content, etc.) without requiring years of specialized training and experience. There will be more on-demand crowdsourcing; the recent example of open-source intelligence (OSINT) in the Russian invasion of Ukraine demonstrates how large groups of volunteers can create valuable analysis from open-source information. This trend will continue, with large-scale crowdsourcing activities spontaneously emerging around topics of broad interest and concern.”

Harmful
Sam S. Adams, artificial general intelligence researcher at Metacognitive Technology, previously a distinguished engineer with IBM, commented, “In regard to human-to-human connections, the trend of increasing fragmentation of society will continue, aided and abetted by commercial and governmental systems specifically designed to ‘divide and conquer’ large populations around the world.

“There will continue to be problems with available knowledge. Propaganda and other disinformation will continue to grow, creating a balkanized global society organized around the channels, platforms or echo chambers people subscribe to. Postmodernism ends but leaves in its wake generations of adults with no common moral rudder to guide them through the rocks of future challenges.

“In regard to human well-being, I expect that digital globalization becomes a double-edged sword. There will be borderless communities with shared values around beauty and creativity on one side and echo chambers that justify and cheer genocide and imperial aggression on the other, especially in the face of the breakdown of economic globalization.”

Beneficial
Raquel Gatto, general consul and head of legal for the network information center of Brazil, NIC.br, wrote, “The best and most beneficial change by 2035 would be to achieve universal and meaningful connectivity. It is important to have everyone connected to the Internet, but also that each person has access to the same opportunities online, which includes digital literacy, basic skills, local content, and proper-quality connections and equipment, for example.”

Harmful
Raquel Gatto, general consul and head of legal for the network information center of Brazil, NIC.br, said, “The most harmful and menacing change by 2035 would be the overregulation that breaks the Internet. The risk of fragmentation that entails a misleading conceit of digital sovereignty is rising and needs to be addressed in order to avoid the loss of the open and global Internet that we know and value today.”

Beneficial
Sam Lehman-Wilzig, professor of communication at Bar-Ilan University, Israel, and author of “Virtuality and Humanity,” said, “As the possibility of mass human destruction and close-to-complete extinction becomes more of a reality, greater thought (and perhaps the start of planning) will be given to how to archive all of human knowledge in ways that will enable it to survive all sorts of potential mass disasters. This involves software and hardware. The hardware is the type(s) of media in which such knowledge will be stored to last eons (titanium? DNA? etc.); the software involves the type of digital code – and lexical language – to be used so that future generations can comprehend what is embedded (whether textual, oral or visual). Another critical question: What sort of knowledge to save? Only information that would be found in an expanded wiki-type of encyclopedia? Or perhaps everything contained in today’s digital clouds run by Google, Amazon, Microsoft, etc.? A final element: Who pays for this massive undertaking? Governments? Public corporations? Private philanthropists?”

Harmful
Sam Lehman-Wilzig, professor of communication at Bar-Ilan University, Israel, and author of “Virtuality and Humanity,” commented, “Digitally based artificial intelligence will finally make significant inroads in the economy, i.e., causing increasing unemployment. How will society and governments deal with this? We don’t know.

“I see the need for huge changes in the tax structure (far greater corporate tax; elimination or significant reduction of individual taxation). This is something that will be very difficult to execute, given political realities, including intense corporate lobbying and ideological stasis.

“What will growing numbers of people do with their increasing free time in a future where most work is being handled autonomously? Can people survive (psychologically) being unemployed their entire lives? Our educational system should already be placing far more emphasis on leisure education and on what used to be called the liberal arts. Like governments, educational systems tend to be highly conservative regarding serious change.

“Obviously, all this will not reach fruition by 2035 but much later; still, the trend will become obvious, leading to greater political turmoil regarding future-oriented policymaking (taxes, Social Security, corporate regulation, education, etc.).”

Beneficial
Simeon Yates, a professor expert in digital culture and personal interaction at the University of Liverpool, England, and research lead for the UK government’s Digital Culture team, wrote, “If digital tools can help with the climate crisis, this could be their greatest beneficial impact.

“Separate from that, I think there are two critical areas in which digital systems and media could have a beneficial impact: 1) Health and well-being – across everything from big data and genomics to everyday health apps, digital systems and media could have considerable benefits, BUT only if well managed and regulated.

“2) Knowledge production – this is obviously part of point 1 above. Digital systems provide unique opportunities to further human knowledge and understanding, but only if the current somewhat naive empiricism of ‘AI’ (= bad stats models) is replaced with far more thoughtful approaches. That means taking the computer scientists out of the driving seat and putting the topic specialists in charge again.”

Harmful
Simeon Yates, a professor expert in digital culture and personal interaction at the University of Liverpool, England, and research lead for the UK government’s Digital Culture team, wrote, “Digital systems and media are human societal products. The benefits or harms they engender are the products of our choices about how we as individuals, communities, organisations, governments and societies use and deploy them. They will have a mix of benefits and hazards. On current form – given the lack of societal regulation (though I note the EU is still at the forefront of trying to regulate), the continued ‘break things’ attitude of big tech, and the benefits that digital systems provide to both powerful (big corporates) and very authoritarian (e.g., China) actors – I worry that the harms will outweigh the benefits for most citizens.

“I worry, therefore, that tech may facilitate some quite draconian and unpleasant societal changes driven by corporate or political desire (or inaction) – limiting rights and freedoms, damaging civic institutions, etc. – while at the same time helping some live longer, more comfortable lives. The question should be: ‘What societal changes do we need to make to ensure we maximise the benefits and limit the harms of digital systems and media?’”

Beneficial
Robert Bell, co-founder of the Intelligent Community Forum, predicted, “AI will be the technology with the greatest impact as it works its way into countless existing applications and spawns completely new ones, like the much-heralded ChatGPT. The potential positives are huge: greater productivity in fields where IT has not produced progress, from education to healthcare; far deeper and broader analysis of our social and policy challenges to yield new solutions; and greater digital inclusion as platforms better anticipate our needs and communicate by voice and gesture. Getting the positives without the negatives, of course, will take huge skill and huge luck.

“A lesser-known advance will be in the digitization of our knowledge of Earth. The new fleets of Earth-observation satellites in space are not fundamentally about producing the pictures we see in the news. They are about producing near-real-time data with incredible precision and detail about the changing environment, the impact of public-sector and private-sector actions, and the resources available to us. Most important, the industry is collaborating to create a standards-based ecosystem in the cloud that makes this data broadly available and that enables non-data-scientists to put it to work.”

Harmful
Robert Bell, co-founder of the Intelligent Community Forum, commented, “The potential for AI to be used for evil is almost unlimited, and it is certain to be used that way to some extent. A relatively minor – if still frightening – example is the bots that pollute social media to carry out the agenda of angry minorities and autocratic regimes. Powerful AI will also give malign actors new ways to create a ‘post-truth’ society using tools such as deepfake images and videos. On the more frightening side will be weapons of unprecedented agility and destructive power, able to adapt to a battlespace at inhuman speed and, if permitted, make decisions to kill.

“Our challenge is that technology moves fast and governments move slowly. A company founder recently told me that we live in a 21st century of big, knotty problems but operate in an economy formed in the 20th century after the Second World War, managed by 19th-century government institutions. Keeping AI from delivering on its frightening potential will take an immense amount of work in policy and technology, and it must succeed in a world where a powerful minority of nations will refuse to go along.”

Beneficial
R Ray Wang, founder and principal at Constellation Research, predicted, “We will see a massive shift in how systems are designed: from persuasive technologies (the ones that entrapped us into becoming the product), to consensual technologies (the ones that seek our permission), to mindful technologies (which work toward the individual’s benefit, not the network nor the system).

“In our digital life, we will see some big technology trends:

  • Autonomous Enterprise – the move to whole-scale automation of our most mundane tasks to allow us to free up time to focus on areas we choose.
  • Machine scale vs. human scale – we have to make a conscious decision to build things for human scale, yet operate at machine scale.
  • The right to be disconnected (without being seen as a terrorist) – this notion of privacy will lead to a movement to ensure we can operate without being connected and retain our anonymity.
  • Genome editing – digital meets physical as we find ways to augment our genome.
  • Cybernetic implants – expect more human APIs connected to implants, bio-engineering and augmentation.”

Harmful
R Ray Wang, founder and principal at Constellation Research, said, “The biggest challenge will be the control that organizations such as the World Economic Forum and other powers that be have over our ability to have independent thinkers and independent thinking challenge the power of private-public partnerships with a globalist agenda. Policies are being created around the world to take away freedoms humanity has enjoyed and move us more toward the police state of China. Existing lawmakers have not created the tech policies to provide us with freedoms in a digital era.”

Beneficial
Steve Delbianco, president and CEO of NetChoice, wrote, “There will be great progress in health diagnostics. AI will enable fast and inexpensive diagnostics of health conditions, based on images, video, biometric measurements, self-reporting, etc. Generative AI will then translate the diagnostic info into actionable prose in a wide range of scripts and languages. Access to human knowledge will be greatly enhanced by generative AI, which provides answers in digestible chunks of prose in a wide range of scripts and languages.”

Harmful
Steve Delbianco, president and CEO of NetChoice, said, “Regulation designed to curb interest-based advertising will change the way that free online services work today. Ads that are not based on viewer interest command lower ad rates, meaning less ad revenue. With less ad revenue, services will need to show more ads that are less relevant, and/or cut investment in content and services. And many sites will erect paywalls to replace lost ad revenue. The detrimental effect will be to raise barriers for lower-income users when it comes to accessing knowledge and resources online.”

Beneficial
Rance Cleaveland, professor of computer science at the University of Maryland-College Park and former director of the Computing and Communication Foundations division of the National Science Foundation, said, “The primary benefits will derive from the ongoing integration of digital and physical systems (so-called cyber-physical systems).

“There will be a revolution in healthcare, with digital technology enabling continuous yet privacy-respecting individual health monitoring, personalized immunotherapies for cancer treatment, full digitization of patient health records and radically streamlined administration of health-care processes. The healthcare industry is loaded with low-hanging fruit. I still cannot believe, in this day and age, that I have to carry a plastic card around with me to even obtain care!

“There will be full self-driving vehicle support on at least some highways, with attendant improvements in safety, congestion and driver experience. The trick to realizing this involves the transition from legacy vehicles to new self-driving technology. I expect this to happen piecemeal, with certain roads designated as ‘self-driving only.’

“There will be much better telepresence technology to support hybrid in-person and virtual collaboration among teams. We have seen significant improvements in virtual meeting technology (Zoom, etc.), but hybrid collaborative work is still terribly disappointing. This could improve markedly with better augmented-reality technology.”

Harmful
Rance Cleaveland, professor of computer science at the University of Maryland-College Park and former director of the Computing and Communication Foundations division of the National Science Foundation, predicted, “The biggest harms all derive from the unfettered anonymity and lack of cross-checking of information on the internet. These problems already exist and are not likely to have been fixed by 2035. Specific problems include:

  • Cyberbullying and cyberharassment
  • Cybercrime, especially fraud (already a terrible scourge)
  • Disinformation and misinformation.”

Beneficial
Tim Bray, a technology leader who has worked for Amazon, Google and Sun Microsystems, wrote, “The change that is dominating my attention is the rise of the ‘Fediverse,’ including technologies such as Mastodon, GoToSocial, Pleroma and so on. It seems unqualifiedly better for conversations on the Internet to be hosted by a network of federated providers than to be ‘owned’ by any of the Big Techs. The Fediverse experience, in my personal opinion, is more engaging and welcoming than that provided by Twitter or Reddit or their peers. Elon Musk’s shenanigans are generating a wave of new voices giving the Fedisphere a try and (as far as I can tell) liking it. I’m also encouraged as a consequence of having constructed a financial model for a group of friends who want to build a sustainable, self-funding Mastodon instance based on membership fees. My analysis shows that the cost of providing this service is absurdly low, somewhere in the range of $1/user/month at scale. This offers the hope for a social-media experience that is funded by an extremely low monthly subscription or perhaps even voluntary contributions. It hardly needs saying that the impact on the digital advertising ecosystem could be devastating.”

Harmful
Tim Bray, a technology leader who has worked for Amazon, Google and Sun Microsystems, predicted, “The final collapse of the cryptocurrency/Web3 sector will be painful, and quite a few people will lose a lot of money – for some of them it’s money they can’t afford to lose. But I don’t think the danger will be systemic to any mainstream sector of the economy. Autocrats will remain firmly in control of China and Russia, and fascist-adjacent politicians will hold power in Israel and various places around Eastern Europe. In Africa and Southeast Asia, autocratic governments will be more the rule than the exception. A substantial proportion of the U.S. electorate will be friendly to anti-democratic forces. Large-scale war is perfectly possible at any moment should Xi Jinping think his interests are served by an invasion of Taiwan. These maleficent players are increasingly digitally sophisticated. So my concern is not the arrival of malignant new digital technologies but the lethal application of existing technologies to attack the civic fabric and defense capabilities of the world’s developed, democratic nations.”

Beneficial
Scott Marcus, an economist, political scientist and engineer who works as a telecommunications consultant, wrote, “AI will contribute to many aspects of life, including art and literature. Continuing improvements in the price/performance of digital equipment will drive global economic gains. The EU will continue to lead the way in the push for human-centric use of technology. There will be continued gains in health technology, including electronic health data systems.”

Harmful
Scott Marcus, an economist, political scientist and engineer who works as a telecommunications consultant, said, “Some of what follows on my list of worrisome areas may not seem digital at first blush, but everything is digital these days.

  • Armed conflict or the threat of conflict causes human and economic losses and further impedes supply chains.
  • Further decline in democratic institutions.
  • Continued health crises (antibiotic-resistant diseases, etc.).
  • The climate crisis leads to food crises/famine and migration challenges.
  • Further growth of misinformation/disinformation.
  • Massive breakdown of global supply chains for digital goods and (to a lesser degree?) services.
  • The U.S.-China trade war increasingly drives a U.S.-EU trade war.
  • Fragmentation of the internet due to geopolitical tensions.
  • Further breakdown of global institutions, including the World Health Organization and World Trade Organization.”

Beneficial
Susan Aaronson, director of the Digital Trade and Data Governance Hub at George Washington University, wrote, “I see an opportunity to 1) disseminate the benefits of data to a broader cross-section of the world’s people through new structures and policies, and 2) use sophisticated data analysis such as AI to solve cross-border wicked problems. Unfortunately, governance has not caught up to data-driven change.

“If public, private and non-governmental entities could protect and anonymize personal data (a big if) and share it to achieve public-good purposes, the benefits of data sharing for mitigating shared wicked problems could be substantial. Policymakers could collaborate to create a new international organization; for now, let’s call it the Wicked Problems Agency. It could prod societal entities – firms, individuals, civil society groups and governments – to share various types of data in the hope that such data sharing, coupled with sophisticated data analysis, could provide new insights into the mitigation of wicked problems.

“The Wicked Problems Agency would be a different type of international organization – it would be cloud-based and focused on mitigating problems. It would also serve as a center for international and cross-disciplinary collaboration and training in the latest forms of data analysis. It would rent useful data and compensate those entities that hold and control data. Over time, it may produce additional spillovers; it might inspire greater data sharing for other purposes and in so doing reduce the opacity of data hoarding. It could lead entities to hire people who can think globally and creatively about data use. It would also provide a practical example of how data sharing can yield both economic and public-good benefits.”

Harmful
Susan Aaronson, director of the Digital Trade and Data Governance Hub at George Washington University, commented, “Today’s trends indicate data governance is not likely to be improved without positive changes. Firms are not transparent about the data they hold (something that corporate-governance rules could address). They control the use/reuse of much of the world’s data, and they will not share it. This has huge implications for access to information. In addition, no government knows how to govern data comprehensively, understanding the relationships between algorithms protected by trade secrets and the reuse of various types of data. The power relationship between governments and giant global firms could be reversed again, with potential negative spillovers for access to information. In addition, nations/states now have rules allowing the capture of biometric data collected by sensors. If firms continue to rely on surveillance capitalism, they will collect ever more of the public’s personal data (including eye blinks, sweat, heart rates, etc.). They can’t protect that data effectively, and they will be incentivized to sell it. This has serious negative implications for privacy and for human autonomy.”

Beneficial
Peter Levine, professor of citizenship and public affairs at Tufts University, commented, “In the online ‘public sphere’ (settings where strangers come together to share ideas and generate public opinion), things might improve if the large for-profit social networks lose users to alternative platforms that are either decentralized – like Mastodon – or democratically governed. We might also see sustainable models for producing journalism and paying reporters.”

Harmful
Peter Levine, professor of citizenship and public affairs at Tufts University, said, “I am worried about substantial deterioration in our ability to concentrate, and especially to focus intently on lengthy and difficult texts. Deep reading allows us to escape our narrow experiences and biases and absorb alternative views of the world. Digital media are clearly undermining that capacity.”

Beneficial
Robert Atkinson, president of the Information Technology and Innovation Foundation, said, “There will be widespread robotic automation that boosts annual labor productivity rates by several percentage points.”

Harmful
Robert Atkinson, president of the Information Technology and Innovation Foundation, said, “One harm that will have significant impact is people’s continuing decline in reading long-form documents (articles/books).”

Beneficial
Steven Sloman, professor of cognitive, linguistic and psychological sciences at Brown University, responded, “Developments in AI will create effective natural language tools. These tools will make a broader range of human knowledge available to every person. Questions will be answered in a more context-dependent way that will have much more nuance than today’s search tools. People will be able to get specific answers to questions about their health, legal questions, engineering issues, tailored advice for books, movies, shows, etc. The questions and answers will be stated in natural language and will be tailored to the questioner’s specific needs and interests.”

Harmful
Steven Sloman, professor of cognitive, linguistic and psychological sciences at Brown University, said, “Developments in AI will create effective natural language tools. These tools will make people feel they are getting accurate, individualized information, but there will frequently be no way of checking. The actual information will be more homogeneous than it seems and will be stated with overconfidence. It will lead to large numbers of people obtaining biased information that will feed groundless ideology. Untruths about health, politics, history and more will pervade our culture even more than they already do.”

Beneficial and Harmful
Jim Spohrer, board member of the International Society of Service Innovation Professionals, previously a longtime IBM leader, wrote, “Many potential benefits lie ahead thanks to the possibilities raised by ongoing advances in humans’ uses of digital technology.

1) There will be a shift from ‘human-centered design’ to ‘humanity-centered design’ in order to build a safer and better world. This is an increasingly necessary perspective shift as people born of the physical realm push deeper into the digital realm, guided in part by ideas and ideals from the mathematical/philosophical/spiritual realms. This shift from ‘human-centered’ to ‘humanity-centered’ is required per Don Norman’s new 2023 book ‘Design for a Better World: Meaningful, Sustainable, Humanity-Centered.’ Safely advancing technologies increasingly requires a transdisciplinary systems perspective as well as awareness of overall harms, not just the benefits that some stakeholders might enjoy at the expense of harms to under-served populations. The service research community, which studies interaction and change processes, has been emphasizing the benefits of digital tools (especially value co-creation). It is now increasingly aware of harms to under-served populations (value co-destruction), so there’s hope for a broadening of the discussion to focus on harms and benefits as well as under-served and well-served populations of stakeholders. The work of Ray Fisk and the ServCollab team is also relevant regarding this change to service system design, engineering, management and governance.

2) There will be greater emphasis on how human connections via social media can be used to change conflict into deeper understanding, reducing polarization. It is hoped that there will be institutions and governance wise enough to eliminate poverty traps. An example of a policy to reduce poverty in coming decades is ‘Buy2Invest,’ which ensures that customers who buy are investing in their retirement accounts.

3) Responsible actors in business, tech and politics can work to invest more systematically and wisely in protecting human rights and enforcing human responsibilities. One way is via digital-twin technologies that allow prediction of harms and benefits for under-served and well-served populations. Service providers will not be replaced by AI, but service providers who do not use AI (and have a digital twin of themselves) will be replaced by those who do. Responsible actors (e.g., people, businesses, universities, cities, nations, etc.) are entities that give and get service (AKA service system entities) and carry human rights and responsibilities, harms and benefits. The world simulator will include digital twins of all responsible actors, allowing better use of complexity economics in understanding interaction and change processes. Note that large companies like Amazon, Google, Facebook, Twitter, etc. are building digital twins of their users/customers to better predict behavior patterns and create offers of mutual value/interest. Responsible actors will increasingly build and use AI digital twins of themselves.

4) There will be an increased emphasis on the democratization of open, replicable science, including the ability to rapidly rebuild knowledge from scratch and to allow the masses to understand and replicate important experiments. The future of expertise depends on people’s ability to rebuild knowledge from scratch. The world needs better AI models. To get the benefits of service in the AI era, responsible actors need to invest in better models of the world (science), better models in people’s heads guiding interactions (logics), better models of organizations guiding change (architecture) and better models of technological capabilities and limitations shaping intelligence augmentation (IA).

5) Thanks to AI’s advancing technological capabilities it is likely that we are entering a golden age of service that will improve human well-being, including in the area of confronting harms done to under-served populations.

6) Local energy infrastructure will be advanced via decarbonized, geothermal drilling breakthrough innovations. Universities are increasingly adding AI data centers on campuses and experimenting with geothermal. The systems at top universities in each city serve as examples of decarbonized local energy infrastructure powering AI systems.

“Many challenges are emerging due to the ongoing advances in humans’ uses of digital technology.

1) There is a lack of accountability for criminals involved in cybersecurity breaches/scams, which may slow the adoption of digital twins for all responsible actors. For example, Google and other providers are unable to eliminate all Gmail spam and phishing emails, even though their AI does a good job of filtering and identifying spam and phishing. The lack of ‘human-like dynamic, episodic memory’ capabilities in AI systems slows the adoption of digital-twin ownership by individuals and the development of AI systems with commonsense reasoning capabilities.

2) A winner-take-all mindset, rather than the type of balanced collaboration that is necessary, is dominant in all competitive and developmental settings in the business and geopolitics of the U.S., Russia, China, India and others.

3) A general resistance to welcoming immigrants by providing accelerated pathways to productive citizenship is causing increasing tensions between regions and wasting enormous amounts of human potential.

4) Models show that it is likely that publishers will be slow to adopt open-science disruptions.

5) It is expected that mental illness, anxiety and depression exacerbated by loneliness will become the number-one health challenge in all societies with elderly-dominant populations.

6) A lack of focus on geothermal solutions due to oil company interest in a hydrogen economy is expected to slow local energy independence.”

Beneficial
Greg Sherwin, a leader in digital experimentation with Singularity University, said, “A greater social and scientific awareness of always-on digital communication technologies will lead to more regulation, consumer controls and public sentiment toward protecting our attention. The human social immune system will catch up with the addictive novelty of digitally mediated attention-hacking through communications and alerts. Attention hijacking by these systems will become conflated with smoking and fast food in terms of their detrimental effects, leading to greater thoughtfulness and balance in their use and application. On the negative side, as with smoking and fast food, poorer and more-marginalized groups will be the last to see these benefits.”

Harmful
Greg Sherwin, a leader in digital experimentation with Singularity University, wrote, “Humans on the wrong side of the digital divide will find themselves with all of the harms of digital technologies and little or no agency to control them or push back. This includes everything from insidious, pervasive dark patterns that hijack attention and motivation to finding themselves on the wrong end of algorithmic decision-making with no sense of agency nor recourse. This will result in mental health crises, loneliness and potential acts of resistance, rebellion and violence that further condemn and stigmatize marginalized communities.”

Beneficial
Doc Searls, a contributor at the Ostrom Workshop at Indiana University and co-founder and board member at Customer Commons, said, “Business in general will improve because markets will be opened and enlarged by customers finally becoming independent from control by tech giants. This is because customers have always been far more interesting and helpful to business as free and independent participants in open markets than they are as dependent captives, and this will inevitably prove out in the digital world. This will also free marketing from seeking, without irony, to ‘target,’ ‘acquire,’ ‘own,’ ‘manage,’ ‘control’ and ‘lock in’ customers as if they were slaves or cattle. This convention persisted in the industrial age but cannot last in the digital one. However, I am not sure this will happen by 2035.

“Back when we published ‘The Cluetrain Manifesto: The End of Business as Usual’ (2000) and when I wrote ‘The Intention Economy: When Customers Take Charge’ (2012), many like-minded folk (often called cyberutopians) expected ‘business as usual’ to end and for independent human beings (no longer mere ‘users’) to take charge soon. While this still hasn’t happened, it will eventually, because the Internet’s base protocols (TCP/IP, HTTP, et al.) were designed to support full agency for everyone, and the Digital Age is decades old at most; it will be with us for decades, centuries or millennia to come.”

Harmful
Doc Searls, a contributor at the Ostrom Workshop at Indiana University and co-founder and board member at Customer Commons, observed, “The most harmful and menacing changes in digital life will be the same ones we’ve had since forever in the physical world and for the last three decades in the digital one: bad acting by creeps who are out to make trouble for fun, profit or both.

“An iron law of technology will also apply: What can be done will be done, until we experience the harms it causes and work to correct them, even as some of those harms continue. This has been the case with every technological development from stone tools to nuclear power, electronic communication, computing and AI.

“Thus, while we will experience the negative effects of new developments in digital life, we will also be working to prevent the worst of those. Same as it ever was.”

Beneficial
Jason Hong, professor of computer science at Carnegie Mellon’s Human-Computer Interaction Institute, wrote, “The combination of better sensors, better AI, cheaper smart devices and smarter interventions will lead to much better outcomes for healthcare, especially for chronic conditions that require changes in diet, exercise and lifestyle. Improvements in AI will also lead to much better software, in terms of functionality, security, usability and reliability, as well as how quickly we can iterate on and improve software. We’re already seeing the beginnings of a revolution in software development with GitHub Copilot, and advances will only get better from here. This will have significant consequences for many other aspects of digital life.”

Harmful
Jason Hong, professor of computer science at Carnegie Mellon’s Human-Computer Interaction Institute, said, “While AI will have many beneficial uses, there will also be many continuing negative consequences. Some of these will be unintentional (e.g., AI bias). Some will be deliberate; for example, more and better deepfakes, adaptive attacks on software and online services, fake personas online, fake discussion from chatbots meant to ‘flood the zone’ with propaganda or disinformation, and more. It’s much faster and easier for attackers to disrupt online activities than for defenders to defend them.”

Harmful (Did not respond to Benefits question; this contribution was shortened due to its length on one narrow topic)
Ashu M. G. Solo, principal R&D engineer at Maverick Trailblazers Inc., wrote, “Online defamation, doxing and impersonation are three of the major problems of the Internet age. These issues are a perfect example of regulation not keeping up with change. As technology advances, these become greater problems. Laws and platform policies should be updated to mitigate this. Internet defamation and doxing often harm people’s reputations; prevent them from getting gainful employment; ruin romantic relationships; cause depression, anxiety and distress; and lead to deeper mental health problems.

“The civil remedies for dealing with defamation or doxing are extremely inadequate. Lawyer fees for a defamation or doxing claim in the United States are typically in the range of $30,000 or more. The vast majority of defamation or doxing victims can’t afford the legal costs. Internet platform providers could take action but unfortunately do not. Freedom of speech was never meant to protect defamation. Among the steps that could be taken: platforms could require users to use their real names online and provide proof of their address, and record and keep the IP addresses of all users, then allow law enforcement appropriate access in appropriate situations. In addition, criminal laws against defamation should be enforced; they rarely are in the United States and Canada. And defamation or impersonation should be a criminal offense in every country.”

Beneficial (Did not respond to Harms question)
Terri Horton, work futurist at FuturePath, said, “Digital and immersive technologies and artificial intelligence will continue to exponentially transform human connections and knowledge across the domains of work, entertainment and social engagement. By 2035, the transition of talent acquisition, onboarding, learning and development, performance management and immersive remote work experiences into the metaverse, enabled by Web3 technologies, will be normalized and optimized. Work as we know it will be absolutely transformed. If crafted and executed ethically, responsibly and through a human-centered lens, transitioning work into the metaverse can be beneficial to workers by virtue of increased flexibility, creativity and inclusion. Additionally, by 2035, generative artificial intelligence (GAI) will be fully integrated across the employee experience to enhance and direct knowledge acquisition, decision-making, personalized learning, performance development, engagement and retention.”

Beneficial
Gary Marchionini, dean at the University of North Carolina-Chapel Hill School of Information and Library Science, responded, “I see strong trends toward more human-centered technical thinking and practice. The ‘go fast and break things’ mentality will be tempered by a marketplace that must pay for, or at least make transparent, how user data and activity are leveraged and valued. People will become more aware of the value their usage brings to digital technologies. Companies will not be able to easily ignore human dignity or ecological impact. Innovative and creative people will gravitate to careers of meaning (e.g., ecological balance, social justice, well-being). Tech workers will become more attentive to and engaged with knowledge and meaning as data hype attenuates. Human dignity will become as valued as stock options and big salaries. Some of these changes will be driven by government regulation, some will be due to the growing awareness and thoughtful conversations about socially grounded IT, and some will be due to new tools and techniques, such as artificiality detectors and digital prophylactics.”

Harmful
Gary Marchionini, dean at the University of North Carolina-Chapel Hill School of Information and Library Science, said, “I am old enough to recognize that we are in the third iteration of ‘AI will save the world,’ and this latest hype bubble will eventually yield to a more moderate but impactful Gartner Hype Cycle with real positive and negative outcomes. My main worry is that this moderating acceptance of generative algorithms and autonomous systems will have severe consequences for human life and happiness. Autonomous weapon systems, more openly used in today’s conflicts such as Ukraine-Russia, will foster the acceptance of space-based and other more-global weapon systems. Likewise, the current orgasmic fascination with generative AI will set us up for the development of a much more impactful generation of food, building materials, new organisms and modified humans through synthetic biology and 3D printing.”

Beneficial and Harmful
Pamela Rutledge, director of the Media Psychology Research Center, wrote, “All change, good and bad, relies on human choices. Technology is a tool; it has no independent agenda. There are tremendous opportunities in digital technologies for humans to enhance their experiences and well-being. Digital technologies can increase access to healthcare and fight climate change. They can change education by automating repetitive tasks and running adaptive-learning experiences, allowing teachers to focus on teaching soft skills like creative thinking and problem-solving. In art, literature and music, generative AI and imagery tools like DALL-E can enable cost-effective exploration and prototyping, facilitating innovation.

“The ubiquity of technology highlights the need for better media literacy training. Media literacy must be integrated into the educational curriculum so that we teach each generation to ask critical questions and develop the skills necessary to understand the design of digital tools and the motivations behind them, including the agendas of content producers. Young people need to learn smart practices in regard to privacy and data management, how to manage their time online and how to take action in the face of bullies or inappropriate content. These are skills transferable on- and offline, digital and in-person. A better-educated public will be better prepared to demand that Big Tech pull back the curtain on the structural issues of technology, including issues tied to black-box algorithms and artificial intelligence.

“Used well, these technologies offer tremendous opportunities to innovate, educate and connect in ways that make a significant positive difference in people’s lives. Digital technologies are not going away. A positive outcome depends on us leaning into the places where technology enhances the human experience and supports positive growth. As in strengths-based learning, we can apply the strengths of digital technologies to identify needs and solutions.

“There are challenges, however. The inherent tendency of humanity is to resist change as innovation cycles become more rapid, particularly when innovation is economically disruptive. The world will have to grapple with all of this in an atmosphere in which trust in institutions has been undermined and people have become hypersensitized to threat, making them more reactive to fear and heightening the tendency toward homophily and othering.

“The devaluation of information puts us at social and political risk. Bad actors and a lack of transparency can continue to increase distrust and drive wedges in society. Technology is persuasive. Structural decisions influence how people interact, what they access and how they feel about themselves and the world.

“The inability to think of digital life as a holistic human issue, rather than in segments like the blind men and the elephant, will hamper individual well-being, social progress and economic growth. Regulating an app or a behavior doesn’t solve the larger issue because it doesn’t identify the fundamentals of ‘why’ humans behave as they do. Regulations might divert the behavior, but they will not stop people from being curious, attracted by motion and sound or interested in creating and sharing content and seeing what other people are doing. Without education and training, this puts everyone, individually and collectively, at risk.”

Beneficial and Harmful
Deirdre Williams, an independent internet governance consultant, responded, “There will be a great saving of time as digital systems replace cumbersome paper-based systems. There will be better planning, facilitated by better records. Data collection will improve. Weather forecasting will become more precise and accurate. What we have here is an opportunity to advance global equity and justice, but, judging by what has happened in humanity’s past, it is unlikely that full advantage of the opportunity will be taken. In regard to human rights, digital technology will abet good outcomes for citizens. The question is: Which citizens, the citizens of where?

“Humanity is becoming more selfish and individualistic. Or rather, a portion of humanity is, and sadly, while it may be a minority, it has a loud and wide-ranging voice and a great deal of influence. More and more, people seem to live on ‘hype,’ an excitement which depends on neither fact nor truth but only on the extremity of the sensation. This is shared and amplified by the technology. It isn’t just a space that allows people individual freedom of expression; it is also a space in which some people encourage or seek homogenisation. The movement toward ‘binary thinking’ rules out the middle way, although there are and should be many middle ways, many ‘maybes.’ Computers deal with 1 and 0, yes and no, but people are not computers. Binary human thinking is doing its best to turn people into computers.

“Subtleties are being eroded, so that precise communication becomes less and less possible. Reviewing history, it is apparent that humanity is on a pendulum swinging between extremes of individualism and community. Sometimes it seems that the period of the swing is shortening; it certainly seems that we are getting closer to the point of return now, but it is difficult to stand far enough back to get a proper view of the time scale.

“When the swing reverses, I expect we’ll all be more optimistic because, as someone said during the Caribbean Telecommunications Union’s workshop on legislative policy for the digital economy last week, the PEOPLE are the heart, soul and everything in the digital world. Without the people, the technology has no meaning.”

Beneficial and Harmful
Charles Ess, emeritus professor of ethics at the University of Oslo, said, “In the best-case scenario, more ethically informed approaches within engineering, computer science and so on promise to be part of the package of developments that might save us from the worst possibilities of these emerging technologies. A brief paraphrase of the executive summary of the first edition of the IEEE paper: These communities should now recognize that the first priorities in their work are to design and implement these technologies for the sake of human flourishing and planetary well-being, protecting basic human rights and human autonomy, over the current focus on profit and GNP.

“On the dark side, however, this sort of endeavor also opens up every temptation for ‘ethics-washing,’ so critical eyes need to watch closely. On the other hand, there would be real grounds for optimism if these sorts of developments should catch further hold in other disciplines and approaches that have historically likewise divorced themselves from more humanistic foci. Time will tell.

“If such ethical shaping and informed policy development and regulation succeed in good measure, then the manifest benefits of AI/ML will be genuinely significant and transformative. Given how computational and network technologies are now the envelope and ecology in which most of us in the so-called developed countries live, the promises and likely benefits of these technologies range across just about every aspect of human existence, including, as the initial questions suggest, medicine and healthcare.

“All of this depends, however, on our taking to heart and implementing in praxis the clear lessons of the past 50 years or so. Human judgment must remain central in the implementation of any such system that impinges on human health, well-being and flourishing, rather than acquiescing to the pressures of profit and efficiency in seeking to offload such judgment to AI/ML systems.

“The technical details are especially important here, as they make very clear that such systems, however impressive and often genuinely useful their results may be, are simply very fancy statistical inference machines, i.e., probabilistic guessing based on literally mindless calculation. ‘The lights are on, but nobody’s home,’ as I like to say; i.e., there is no consciousness, much less the human sorts of intelligences that implicate empathy, care and especially the reflective judgment that we as human beings rely on for making our most difficult and often painful choices. As the 70 percent failure rate of current AI projects (so far) suggests, offloading this distinctively human work and responsibility to our machineries will often have devastating consequences for individuals and the larger society, as mindless statistical inference will sometimes result in a ‘decision’ that is manifestly mistaken (as well as impossible for anyone to explain, another set of problems).

“Even more problematic is how offloading human judgment and responsibility in these ways de-skills us; i.e., we become rusty. Worst case, we simply forget how to make such judgments on our own. Stated more generally, contra the understanding of such technologies as human augmentation: the more we engage with them, the more we become like them.

“Given these caveats, it is also manifest that these and related digital technologies will continue to have enormous impact in the domain of human knowledge, at least in those domains that thrive upon quantitative/calculative approaches, primarily mathematics and the natural sciences. This is to be lauded not only for its own sake, but specifically for the very utilitarian and utterly critical matter of addressing and hopefully mitigating climate change and at least its likely worst consequences…

“We have some 20+ years of debate over what ‘the digital’ may mean and, more recently, whether or not any distinction between the digital and the analogue even makes any sense or difference. My own take is that we have been sold, literally, on ‘the digital’ as the universal panacea for all of humankind’s ills, all too often at the cost of the analogue, the qualitative, the foundational experience of what it is and might mean to be a human being. This does not bode well for human/e futures for free moral agents capable of pursuing lives of flourishing in liberal-democratic societies, nor for the planet.

“The same holds for hopes of using these technologies in the name of greater democracy, freedom and equality, what many of us foregrounded as the ‘democratizing potentials of the Internet’ in its first 20 years or so. One can only hope that these uses will continue, expand and multiply. At the same time, however, the larger pattern is not promising. Rather, what is often called the rise of digital authoritarianism, amplified by actors such as China, which makes good money selling its surveillance systems to other regimes intent on keeping their populations under strict control, has been documented since at least 2012 and is a phenomenon that only gets worse from year to year.

“I have not addressed other prominent technologies, starting with virtual assistants and social robots. As these are primarily the offspring of AI/ML systems, much the same sort of comments would apply here. Ditto for the current excitement over ChatGPT and other Large Language Models (LLMs). A particular wrinkle has to be noted here, however, especially in the use of social robots and virtual assistants among very young children: again, a risk of deskilling, or never learning in the first place, such basic human/e elements as empathy, care and so on. So:

1) One threat is the risk of AI/ML systems displacing human judgment, autonomy and responsibility, accompanied by the ultimate risk of de-skilling should we fail to keep humans (specifically, our skills of empathy and judgment) ‘in the loop’ of the whole range of human development (specifically, for very young children, in terms of empathy and care) and decision-making that will be increasingly offloaded to these systems, with often catastrophic losses.

2) The larger pattern of displacing or eliminating humanistic studies and resources in favor of STEM, thereby eliminating a very great deal of the kinds of education and experiences needed precisely to foster more qualitative forms of judgment, empathy and so on.

3) The continued rise of ‘digital authoritarianism’; i.e., contra the emancipatory and democratizing potentials of digital technologies, more and more countries, including the nominally democratic ones, will make use of these technologies to reinforce and expand authoritarian control over their populations.

“The majority of our fascination with the majority of the applications of the majority of these digital technologies has robbed us of our ability to concentrate and to exercise critical reflection of a sustained and systematic sort. These technologies likewise appear to be reducing our central capacities of empathy, perseverance, patience, care and so on, all of which are required for basic communication, long-term friendships and the deep sorts of relationships necessary for parenting, and so on. Twenty years ago, the early warnings along these lines were dismissed as moral panics (if not worse). Pun intended: We should have paid better attention.

“The flaws of today’s automated processes as substitutes for humans doing the critical thinking are made evident in the work of data/AI legal philosopher Mireille Hildebrandt. Her research showed how AI/ML systems short-circuit the rights of the accused to contest evidence and accusations in court: When the accusation comes from an AI/ML system that statistically but mindlessly calculates that you are guilty, there is no way, not even for the system’s programmers and handlers, to explain just why this inference was made.

“The human/e loss will be enormous. These systems are built around models of behavior surveillance, modification and control rooted in Skinnerian Behaviorism, now a thousand times more sophisticated and thus more effective in measuring and modifying our behaviors. The upshot is thus primitively simple: Human beings are now nothing more than Skinner pigeons in Skinner cages of monitoring and control via positive, sometimes negative, reinforcement. A very worst-case scenario is that ‘We are the Borg’: We ourselves have become the makers and consumers of technologies that risk eliminating, if not simply preventing us from acquiring in the first place, that which is most central to living out free human lives of meaning and flourishing. Resistance may not be entirely futile, but somehow getting along without these technologies is simply not a likely or possible choice for most people.

“Somehow reshaping and redesigning our uses and implementations of these technologies offers some hope. But whether enough professional and business organizations undertake the sorts of changes needed; whether or not our legal and political systems will nudge/force them to do so; and, most of all, whether or not enough of us, the consumers and users of these technologies, will successfully resist current patterns and forces and insist on much more human/e directions of development and implementation remains to be seen.

“Failure to do so will mean that the human skills and abilities affiliated with freedom, empathy, judgment, care and all else required for lives of meaning and flourishing will be increasingly offloaded; it is always easier to let the machines do the dirty work. And, in the very worst case, fewer and fewer of us would notice or care, as all of that will be forgotten, lost (deskilled) or simply never introduced and cultivated in the first place.

“Manifestly, I very much hope that such worst cases are never realized, and there may be some good grounds for hoping that they will not be. But slowing down and redirecting the primary current patterns of technology development and diffusion will be very difficult indeed, I fear.”

Beneficial and Harmful
Oksana Prykhodko, director of INGO European Media Platform, an international NGO based in Ukraine, said, “I live in Ukraine, under full-scale, unprovoked aggression from Russia, and even now, after nearly 12 months of cyberattacks and the bombing of our citizens, ISPs, energy infrastructure and so on, I have an Internet connection.

“Before the war we had more than 6,500 different ISPs. Now nearly every large household, every office, every point of invincibility has its own Starlink satellite connection and a generator and shares its Wi-Fi with its neighbours. I am sure that the Ukrainian experience of ‘keeping Ukraine connected’ (with the help of many stakeholders from around the world) can help to ensure human-centered, government-decentralised Internet connection. I am hoping that by 2035 we will have several competitive, decentralised private satellite providers for connectivity and to improve our future social and political interactions with all democratic countries.

“I am not optimistic about the future of human rights, but perhaps there will be better awareness-raising in support of them in the next decade, and the establishment of litigation processes in support of rights that result in clear and practical outcomes. The Russians are doing their best to commit genocide against the Ukrainian people. We in Ukraine are extremely worried about our personal data protection and cybersecurity, the forced deportation of children to the aggressor country, fake referendums with fake lists of ‘voters,’ and acts of torture committed on people found on e-registries. These crimes will demand future investigation and the trial of those who must take responsibility.

“We in Ukraine fully support the multistakeholder model of Internet governance. Because we have free speech, fierce discussions often break out among our stakeholders as we excitedly discuss the big issues tied to the future of the Internet. Russians have no such rights, no multistakeholders, only the governing class. Ignoring the fact that there are no stakeholders in non-democratic countries undermines the full realization of the global multistakeholder model.

"In this war, Ukrainian schoolteachers have had to become e-teachers (very often against their own wishes and beyond their technical capabilities) because it became unsafe to stay in Ukrainian schools in areas targeted for Russian bombings. This is the worst way to further the development of e-learning."

Beneficial
Dan Hess, global chief product officer at NPD Group, commented, "Artificial intelligence, coupled with other digital technologies, will continue to have an astounding impact on advances in health care. For example, researchers have already used neural networks to mine massive samples of electrocardiogram (ECG) data for patterns that previously may have eluded detection. This learning can be applied to real-time inputs from devices such as wearable ECGs to alert providers to treatable health risks far faster and more completely than ever before.

"Similarly, imaging and processing technologies are driving a reduction in the cost and timing of DNA sequencing. Where once this process took weeks or months and millions of dollars, the application of new technologies will enable it to be done for less than $100 in less time than it takes to eat lunch. AI will interpret these results more thoroughly and quickly than ever, again resulting in early detection of health risks and the creation of new medications to treat them.

"The net result will be greater quality and length of life for humans – and, for that matter, countless other living creatures."

Harmful
Dan Hess, global chief product officer at NPD Group, wrote, "For all of the incredible positive impact that AI will have, it will also give rise to a vast range of dark issues that individuals, societies and our governments will need to confront.

"There is a very real probability of technological singularity. There isn't enough time or space here to tackle the implications of that, so here are a few challenges that we'll face until – and after – that day comes.

"In healthcare, such developments as AI-driven disease detection will drive ever-greater life expectancy. This in turn will drive further acceleration of population growth and all of its consequences for the environment, agriculture, trade and more.

"Machines will continue to replace humans in more jobs, including knowledge work such as scientific research. The use of AI across every aspect of life will have an impact on learning and development that eclipses what calculators, PCs and smartphones did to people's ability to write and do basic math. At the same time, a longer overall lifespan will force individuals to find ways to lead a longer and/or more intense working life to keep food on the table for many more years of post-work retirement."

Beneficial
Mary Chayko, sociologist, author of "Superconnected" and professor of communication and information at Rutgers University, said, "As communication technology advances into 2035, it will allow people to learn from one another in ever more diverse, multifaceted, widely distributed social networks. We will be able to grow healthier, happier, more knowledgeable and more connected as we create and traverse these networked pathways together. The development of digital systems that are credible, secure, low-cost and user-friendly will inspire all kinds of innovations and job opportunities. If we have these types of networks and use them to their fullest advantage, we will have the means and the tools to shape the kind of society we want to live in."

Harmful
Mary Chayko, sociologist, author of "Superconnected" and professor of communication and information at Rutgers University, commented, "Unfortunately, the commodification of human thought and experience online will accelerate as we approach 2035. People have long found it commercially viable to buy and sell ideas, knowledge, likenesses and experiential accounts – as suggested in the thriving worlds of fiction and nonfiction – but by 2035, this process may be out of our everyday control.

"Technology is already used not only to harvest, appropriate and sell our data, but to manufacture and market data that simulates the human experience, as with applications of artificial intelligence. This has the potential to degrade and diminish the specialness of being human, even as it makes some humans very rich.

"The extent and verisimilitude of these practices will certainly increase as technology permits the replication of human thought and likeness in ever more realistic ways. But it is human beings who design, develop, unleash, interpret and use these technological tools and systems. We can choose to center the humanity of these systems, and to support those that do so, and we must."

Beneficial
Alexander Halavais, associate professor of social data science at Arizona State University, said, "For some, new tools will allow for new ways of creating; a new and different kind of arts and crafts movement will emerge. We have already seen a corner of this, from Etsy to YouTube. But there will be a democratization of powerful software and hardware for creating, and at least some of the overhead in terms of specialized training will be handled by the systems themselves.

"We are likely to see increased monitoring of the use of resources: chiefly energy and water, but also minerals and materials. Whether the environmental costs will be priced into the economy remains to be seen, but we will have far better tools to determine which products and practices make the most efficient use of resources.

"Individualized medicine will mean better health outcomes for those with access to advanced healthcare. This does not mean 'an end to death,' but it does mean dramatically healthier older people and longer fruitful lifespans.

"Access to a core education will continue to become more universally available. While significant barriers to access will remain, they will continue to be eroded as geographically based schools and universities give way to more broadly accessible (and affordable) sources of learning.

"An outgrowth of distrust of platform capitalism will be a resurgence in networked and federated sociality – again, for some. This will carve into advertising revenues for the largest platforms, and there may be a combination of subscription and cooperative systems on a smaller scale for those who are interested.

"We will increasingly see conversations among AI agents arranging our schedules, travel, etc., and those working in these services will find themselves interacting with non-human agents more often.

"Across a number of professional careers, the ability to team with groups of mixed human and non-human actors will become a core skill."

Harmful
Alexander Halavais, associate professor of social data science at Arizona State University, responded, "Cyberwar is already here and will increase in the coming decades. The hopeful edge of this may appear to be a reduction in traditional warfighters, but in practice this means that the front is everywhere. Along with the proliferation of strong encryption and new forms of small-scale autonomous robotics, the security realm will become increasingly unpredictable and fraught.

"The divide between those who can make use of new, smart technologies (including robotics and AI) and those who are replaced by them will grow rapidly. It seems unlikely that political and economic patches will be easy to implement, especially in countries like the United States that do not have a history of working with labor. I suspect this means that in those countries technological progress may be impeded, and it will be increasingly difficult to avoid this long-standing divide coming to a head.

"I suspect that both universities and K-12 schools in the United States will also see something of a bifurcation. Those who can afford to live in areas with strong public schools and universities, or who can afford private tuition, will keep a relatively small number of 'winners' active, while most will turn to open and commodity forms of education. Khan Academy, for example, has done a great deal to democratize math education, but it also displaces some kinds of existing schools. At the margin there will be some interesting experimentation, but it will mean a difficult transition for much of the educational establishment. We will see a continued decline of small liberal arts colleges, followed by larger public and private universities and colleges. I suspect, in the end, it will follow a pattern much like that of newspapers in the U.S., with a few niche, high-reputation providers, several mega-universities and very few small, local/regional institutions surviving.

"The current bout of disinformation and misinformation is not unprecedented, of course, but it will require some significant global cultural shifts to cause it to recede. I see little hope, at present, of that happening. I suspect that the result will be a combination of populist leaders seeking to capitalize on such disinformation and others retreating from democratic structures in order to preserve technocratic and knowledge-based government. These paired tendencies are already visible, but if they become entrenched in some of the largest countries (and particularly in the United States), they will contribute to growing political and economic instability.

"We have already seen a bit of a pushback from both global institutions and the global economy. In some ways this is natural, as the damage of global transportation of goods is somewhat hidden. But the growth of the globalized economy has also closed some gaps between the global North and South over the last few decades. There will still be opportunities, especially in services, as more people embrace working from home and distanced teams. Nonetheless, new, stronger national borders will make international trade, as well as global cosmopolitanism, recede."

Beneficial
Daniel Castro, vice president and director of the Center for Data Innovation at the Information Technology and Innovation Foundation, said, "There are three main sectors where digital systems offer the most potential benefit: health, education and transportation.

"In health, I hope to see two primary benefits. First, using digital tools to bring down the cost of care, particularly through telehealth services and automation. For example, today's nurse intake interviews could be completed with voice chatbots, and some routine care could be provided by health care workers with significantly less medical training (e.g., a 2-year nurse technician versus a 10-year primary care physician). Second, using data to design more effective treatments. This should include designing and bringing new drugs to market faster, creating personalized treatments and better understanding population-level impacts of various medical interventions.

"In education, the big opportunity is personalized learning. Digital tools have the potential to give everyone educational opportunities that meet them at their level.

"And in transportation, the big opportunity is improving safety, i.e., minimizing deaths and significant injuries. Whether this comes from fully autonomous vehicles or simply vehicles with greater safety functions is not important. But the goal should be to create vehicles less likely to cause injury."

Harmful
Daniel Castro, vice president and director of the Center for Data Innovation at the Information Technology and Innovation Foundation, responded, "There has been a lot of work on closing the digital divide – helping to ensure everyone has access to the Internet, computers and basic digital literacy. But there is less thinking about addressing future digital inequities. In particular, there is the problem of the data divide, in which not enough data is collected about some individuals or their communities, leaving them unable to benefit fully from the digital economy. Addressing the data divide will be necessary to ensure that everyone benefits from digital progress."

Beneficial
Janet Salmons, an online research methodologist, wrote, "I have hope for human health and well-being due to the regulations emerging from the European Union (such as the DSA and DMA). They were written to protect people from cyberbullying and violent threats. I have hope for positive developments in human knowledge if people continue to reject book bans and content restrictions. Open-access and cross-border library access are important to stopping censorship."

Harmful
Janet Salmons, an online research methodologist, responded, "I have concerns about human rights and human health and well-being. Without regulations, the Internet becomes too dangerous to use, because privacy and safety are not protected. More walled gardens emerge as safe spaces. Digital tools and systems are based in greed, not the public good, with unrestricted collection, sale and use of data collected from Internet users."

Beneficial
Danny Gillane, an information science professional, commented, "We will begin to focus on privacy and security more seriously, and companies like Facebook (Meta), Google and Amazon will be edged out by more privacy-focused, less sell-user-information-focused companies like Apple (which is more interested in selling hardware) and DuckDuckGo and new companies. Well, okay, that's more of a hope than a prediction."

Harmful
Danny Gillane, an information science professional, wrote, "Companies are going to run ahead with AI without regard to safety. The genie cannot be put back into the bottle. Government will not act to regulate or provide safeguards, and consumers will suffer."

Beneficial
Corinne Cath, an anthropologist of Internet infrastructure governance, politics and cultures, wrote, "Tech is just another instantiation of the economic system. It's not magic. The rose-tinted glasses about the 'positive' impact of tech are coming off, and tech critique is getting stronger."

Harmful
Corinne Cath, an anthropologist of Internet infrastructure governance, politics and cultures, said, "Everything depends on the cloud computing industry – from critical infrastructure to health to electricity to government, as well as education and even the business sector itself. This concentrates power in an already centralized structure even further."

Beneficial
Gus Hosein, executive director of Privacy International, commented, "Direct human connections will continue to grow over the next decade-plus, with more local community-building and not as many global or regional or national divisions.

"People will have more time and a more sophisticated appreciation for the benefits and limits of technology. While increased electrification will result in the ubiquity of digital technology, people will use it more seamlessly rather than dividing life into online vs. offline.

"Human rights: Having been through a dark period of transition, a sensibility around human rights will emerge in places where human rights are currently protected and will find itself under greater protection in many more places, but not under the umbrella term of 'human rights.'"

Harmful
Gus Hosein, executive director of Privacy International, said, "Where and when human rights are disregarded, matters will grow worse over the next decade-plus. A new fundamentalism will emerge from the over-indulgences of the tech/information/free-market era, with at least some traditional values re-emerging, but also aspects of a cultural revolution, both requiring people to exhibit behaviours to satisfy the community. This will start to bleed into more-free societies and will pose a challenge to the term and symbolism of human rights.

"Loneliness will continue to rise, starting from early ages, as some do not make it through the end of the online-vs.-offline divide. Alongside the struggle between human rights and traditional values, more loneliness will result in people who are different being outcast from their physical communities and not finding ways to compensate.

"Human knowledge development will slow. As we learn more about what it is to be human and how we interact with one another, the fundamentalism and quest for simplicity will mean that we care less and less about discovery and will seek solace in natural solutions. This has benefits, for sure, but just as New Age wellbeing has some links to right-wing and anti-science ideologies, this will grow as we stop obsessing about technology as a driver of human progress and just see a huge replacement of pre-2023 infrastructure with electrification."

Beneficial
Michael Muller, a researcher for a top global technology company who is focused on human aspects of data science and ethics and values in applications of artificial intelligence, wrote, "We will learn new ways in which humans and AIs can collaborate. Humans will remain the center of the situation. That doesn't mean that they will always be in control, but they will always control when and how they delegate selected activities to one or more AIs."

Harmful
Michael Muller, a researcher for a top global technology company focused on human aspects of data science and ethics and values in applications of artificial intelligence, commented, "Human activities will increasingly be displaced by AIs, and AIs will increasingly anticipate and interfere with human activities. Most humans will be surveilled and channeled by AI algorithms. Surveillance will serve both authoritarian governments and increasingly dominant corporations."

Beneficial
Akah Harvey, director of engineering at Seven GPS, Cameroon, said, "Humans are always on a quest to reduce human labor and improve quality of life as much as they possibly can. With the advancement of the fields of artificial intelligence and renewable energy, we are getting closer and closer to achieving those goals. My biggest hope is that practical applications of conversational AIs like ChatGPT will eliminate monotonous discussions across several industry domains, from banking and finance to building and architecture, health and education.

"We can finally employ such artificial agents to speed up policy designs that give us significant insight into how we can better allocate resources in different departments for better productivity. Fairness and equity in a given community can be more achievable if we can test our policies more rapidly and efficiently across a wider target population. We could gain several hundred years of research and development from the application of such AIs. New drugs could be synthesized in less than one-tenth of the conventional time. This creates a safe way of anticipating future health or economic disasters by preparing responses well ahead, or preventing them altogether. The only real limit is which domains of human endeavor we allow autonomous agents to be applied to. The opportunities for a better life, regardless of where we are on Earth, are boundless."

Harmful
Akah Harvey, director of engineering at Seven GPS, Cameroon, wrote, "We have to think long and hard about which industry domains we let artificial intelligence produce work product in without some sort of rules governing it. We are soon going to have AI lawyers in our courts. What should we accept as appropriate from an AI in that setting? The danger in using these tools is the bias they may bring, bias the industry may never have conceived of before. This has the potential to sway judgment in a way that doesn't render justice.

"Artificial intelligence that passes the Turing Test must be explainable. When people give up the security of their digital identity for a little more convenience, the risk could be far too great for the damage potential it represents. When interacting with agents, there is a need for proper identification of whether an agent is an AI (acting autonomously) or a human. These tools are beating the test more and more these days, such that they can even impersonate actual humans to carry out acts that would otherwise jeopardize the stability of any given institution and global peace at large.

"We are likely to see more and more movies created entirely by artificial entities rather than by humans. These will tend to be hardly distinguishable from conventional movie productions. This will drive less and less involvement of humans in the industry and therefore create pressure on society to create new roles for people to fill. The dangers are existential, and public policy needs to keep up almost as fast as these new tools evolve."

Beneficial
Jeffrey D. Ullman, professor emeritus of computer science, Stanford University, commented, "I'd like to touch on the future of human rights, knowledge, digital tools and systems, and privacy.

"Human Rights: Today, governments such as China's are able to control what most of their citizens see on the Internet. Yes, technically adept people can get around the censorship, but I assume random citizens do not have the ability to use VPNs and such. By 2035, it should be possible to make simple workarounds that nontechnical people can access. Especially when dictators are threatened, the first thing they do is cut off the Internet so the people cannot organize. By 2035, it should be possible for anyone to access the Internet without the possibility of restriction. I note, for example, how satellite-based Internet was made available to the protesters in Iran, but Elon Musk then demanded payment for the service. I would envision, rather, a distributed system, uncontrolled from any one point (like cryptocurrency), as a means of access to the Internet, at least in times of crisis.

"Knowledge: Today we are in the 'wild west' in how we deal with behavior on the Internet. There are currently some fairly accurate systems for detecting social-media postings that are inappropriate or dangerous in some way (e.g., hate speech, fear speech, bullying, threats). They need to get better, and there needs to be some regulation regarding what is inappropriate under what circumstances. I hope and expect that by 2035 there will be an established, reasonable standard for behavior on the Internet, much as there is for behavior on the street. I also believe that enforcement of such a standard will be possible using software, rather than human intervention, in 99.9% of instances.

"Digital Tools and Systems: Scams of all sorts appear on the Internet and elsewhere, and they are becoming more sophisticated. I hope that by 2035 we will have the technology in place to help vulnerable people avoid the traps. I envision a guide that looks over your shoulder – at your financial dealings, your online behavior and such – and warns you if you are about to make a mistake (e.g., sending your life savings to someone claiming to be the IRS, or downloading ransomware).

"Privacy: I believe that our current approach to privacy is wrong. The Internet has turned us into a global village, and just as villagers of 200 years ago knew everything about one another, we need to accept that our lives are open, not secret, as they were for most of human history. For example, many people look with horror at the idea that companies gather information about them and use that information to pitch ads. These same people are happy to get all sorts of free services, but very unhappy that they are sent ads that have a higher-than-random chance of being for something they might actually be interested in. I hope that by 2035 we will have adjusted to the new reality."

Harmful
Jeffrey D. Ullman, professor emeritus of computer science, Stanford University, commented, "While I am fairly confident that the major risks from the new technologies have technological solutions, there are a number of serious risks.

"Governance and Institutions: Social media is, I believe, responsible for the polarization of politics. It is no longer necessary to get your news from reasonable, responsible sources, and many people have been given blinders that let them see only what they already believe. If this trend persists, we will see more events like Jan. 6, 2021, or the recent events in Brazil, possibly leading to social breakdown.

"Human Connections: I recall that with the advent of online gaming, it was claimed that '100,000 people live their lives primarily in cyberspace.' I believe it was referring to things like playing World of Warcraft all day. 100,000 isn't a real problem, but what if virtual reality (the metaverse) becomes a reality by 2035, as it probably will, and a hundred million people are spending their lives there?

"Well-Being: I remember from the 1960s the Mad Magazine satire 'The IBM Fight Song': 'What if automation idles half the nation, we'll still work for IBM…' Well, 60 years later, automation has steadily replaced human workers, and more recently AI has started to replace brain work as well as physical labor. Yet unemployment has remained about the same. That doesn't mean there won't be a scarcity of work in the future, with all the social unrest it would entail. In particular, the rapid obsolescence of jobs means the rate at which people must be retrained will only increase, and at some point I think we reach a limit, where people just give up trying to learn new skills.

"Other (Education): It has recently been noticed that ChatGPT is capable of writing things that look like student essays. I think the panic is unwarranted; there are already tools being developed that can tell ChatGPT output from the work of high-school students pretty well. But what happens when students can build their own trillion-parameter models (without much thought – just using publicly available online software tools and data) and use them to do their homework? Worse, the increasing prevalence of online education has made it possible for students to use all sorts of scams to avoid actually learning anything (e.g., hiring someone on the other side of the world to do their work for them). Are we going to raise a generation of students who get good grades but don't actually learn anything?

"Digital Tools and Systems: I do not believe the 'Terminator' scenario, in which AI develops free will and takes over the world, is likely anytime soon. The stories about chatbots becoming sentient are nonsense – they are designed to talk like the humans who created the text on which the chatbot was trained, so a chatbot looks sentient but is not. The risk is not that, for example, a driverless car will suddenly become self-aware and decide it would be fun to drive up on the sidewalk and run people over. It is much more likely that some rogue software engineer will program the car to do that. Thus, the real risk is not from unexpected behavior of an AI system, but rather from the possible evil intent of one or more of its creators."

Beneficial
Lauren Wilcox, a senior scientist and group manager at Google Research who investigates AI and society, predicted, "The best and most beneficial changes in digital life likely to take place by 2035 tie into health and education.

"Improved capabilities of health systems (both at-home health solutions and health care infrastructure) will help meet the challenges of an aging population and the need for greater chronic-condition management at home.

"Advancements in and expanded availability of telemedicine, last-mile delivery of goods and services, sensors, data analytics, security, networks, robotics, and AI-aided diagnosis, treatment and management of conditions will strengthen our ability to improve the health and wellness of more people.

"These solutions will improve the health of our population when they augment rather than replace human interaction, and when they are coupled with innovations that enable citizens to manage the cost and complexity of care and meet the everyday needs that enable prevention of disease, such as healthy work and living environments, healthy food, a culture of care for each other, and access to health care.

"Increases in the availability of digital education will enable more flexibility for learners in how they engage with knowledge resources and educational content. Continuing advancements in digital classroom design, accessible multimodal media and learning infrastructures will enable education for people who might otherwise face barriers to access.

"These solutions will be most beneficial when they augment rather than replace human teachers, and when they are coupled with innovations that enable citizens to manage the cost of education."

Harmful
Lauren Wilcox, a senior scientist and group manager at Google Research who investigates AI and society, observed, "The most harmful or menacing changes in digital life likely to take place by 2035 are likely to emerge from irresponsible development and use, or misuse, of certain classes of AI, such as generative AI (e.g., applications powered by large language and multimodal models) and AI that increasingly performs human tasks or behaves in ways that seem ever more human-like.

"For example, current generative AI systems can take natural-language sentences and paragraphs as input from the user and generate personalized natural-language, image-based and multimodal responses. The models are trained on a large body of information available online, from which they learn patterns.

"Human-interaction risks of irresponsible uses of these classes of AI include the ability of an AI system to impersonate people in order to compromise security, emotionally manipulate users and gain access to sensitive information. People might also attribute more intelligence to these systems than is due, risking overtrust and over-reliance on them, diminishing learning and information-discovery opportunities, and making it difficult for people to know when a response is incorrect or incomplete.

"In a future in which people rely on these AI systems but cannot validate their responses easily, or don't know what data they've been trained on or what other techniques were used to generate responses, a lack of transparency will make accountability for poor or wrong decisions made with these systems difficult to assess.

"This is especially problematic when acknowledging the biases that are inherent to AI systems that are not responsibly developed; for example, an AI model that is trained on text available online will inherit cultural and social biases, leading to the potential erasure of many perspectives and the reinforcement of particular worldviews.

"Irresponsible use or misuse of these AI technologies can also bring material risks to people, including a lack of fairness to creators of the original content that models learn from to generate their outputs, and the potential displacement of creators and knowledge workers resulting from their replacement by AI systems, in the absence of policies to ensure their livelihood.

"Finally, we'll need to advance the business models and user interfaces we use to keep web businesses viable: when AI applications replace or significantly outpace the use of search engines, traffic to the websites one would usually visit when searching for information might be reduced if an AI application provides a one-stop shop for answers. If sites lose the ability to remain viable, a negative feedback loop could limit diversity in the content these models learn from, concentrating information sources even further into a limited number of the most powerful channels."

Beneficial and Harmful
Charles Fadel, founder of the Center for Curriculum Redesign and co-author of "Artificial Intelligence in Education," explained, "The amazing thing about this moment is how quickly artificial intelligence is spreading and being applied. With that in mind, let's walk through some of your survey prompts:

"On human-centered development of digital tools and systems: I do believe significant autonomy will be achieved by specialized robotic systems, assisting in driving (U.S.), (air and land) package delivery, or bedside patient care (Japan), etc. But we don't know exactly what 'significant' entails; in other words, the degree of autonomy may vary by the life-criticality of the application – the more life-critical, the less trustworthy the application (package delivery on one end, being driven safely on the other).

“On human knowledge: Foundation models (like GPT-3) are surprising everyone and will lead to hard-to-imagine transformations. What can a quadrillion-item system achieve? (Or is there a diminishing return? We will find out in the next six months, if not before the time this is published.) We’ve already seen how very modest technology changes disrupt societies. I was witness to the discussion regarding the Global System for Mobile Communications (GSM) effort years ago, when technologists were trying to see if we could use a bit of free bandwidth that was available between voice communications channels. They came up with short messages – 160 characters that only needed 10 kilohertz of bandwidth. I wondered: Who would care about this?

“Well, people did care, and they started exchanging astonishing volumes of messages. The humble SMS [text message] has led to societal transformations that were complete ‘unknown unknowns.’ First, it led to the erosion of commitments (by people not showing up when they said they would), and not long afterward it led to the erosion of democracy (via Twitter). If something that small could have such an impact, it’s impossible to imagine what impact foundation models will have. For now, I’d recommend that everybody take a deep breath and wait to see what the emerging impact of these models is. We are talking about punctuated equilibria à la Stephen Jay Gould, for AI – but we’re not sure how far it will go before the next plateau.
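The SMS capacity the speaker describes follows from simple arithmetic (an editor's illustration, not part of the quoted response): classic GSM text messages pack 7-bit characters into a 140-octet payload.

```python
# Classic GSM SMS packs 7-bit characters into a 140-octet payload,
# which is why a single text message holds 160 characters.
PAYLOAD_OCTETS = 140
BITS_PER_CHAR = 7

max_chars = PAYLOAD_OCTETS * 8 // BITS_PER_CHAR
print(max_chars)  # 160
```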

“Human connections, governance and institutions: I worry about regulation. I continue to marvel at the inability of lawyers and politicians, who are typically humanities types, to understand the impact of technologies for a decade or more after they erupt. This leads to catastrophes before anyone is galvanized to react. Look at the catastrophe of Facebook and Cambridge Analytica and the 2016 election. No one in the political class was paying attention then – and there still aren’t any real regulations. There is no anticipation in political circles of how technology changes things, nor of the dangers that are obvious. It takes two to three decades for them to react, when regulations should come within three years at worst.

“For other kinds of institutions, like universities, it’s still hard to guess whether tech developments will be the real silver bullet that helps higher ed or ruins it. Every new technology from radio to CD-ROMs to personal computers was supposed to fix education. It hasn’t happened yet. The better approach for educators would be to recognize that the environment is changing, that some of the changes will be helpful and that they do not destroy everything that’s valuable.

“Human rights: Should a centibillionaire have more free speech rights because they own a global platform? Look at Twitter now. This is a dangerous situation because of greed – greed for power and money. It’s all about the manipulation of people by understanding who they are and making the messages sent to them stickier and stickier. And we’ve seen how much harm misinformation can do when it hurts people – people who didn’t believe the early warnings about COVID. We’re basically emotional beings who are very easy to manipulate. That won’t change anytime soon.”

Beneficial and Harmful
Mark Davis, an associate professor of communications at the University of Melbourne in Australia, whose research focuses on online ‘anti-publics’ and extreme online discourse, responded, “There must be and surely will be a new wave of regulation. As things stand, digital media threatens the end of democracy.

“The structure, scale and speed of online life exceed deliberative and cooperative democratic processes. Digital media plays into the hands of demagogues, whether they be the libertarians whose philosophy still dominates Western tech companies and the online cultures they produce or the authoritarian figures who restrict the activities of tech companies and their audiences in the world’s largest non-democratic state, China.

“How we regulate to maximise civic processes without undermining the freedom of association and opinion the internet has given us is one of the great challenges of our times. AI, currently derided as presaging the end of everything from university assessment to originality in music, can perhaps come to the rescue.

“Hate speech, vilification, threats to rape and kill, and the amplification of division that has become generic to online discussion can all potentially be addressed through generative machine learning. The so-far-missing components of a better online world, however, have nothing to do with advances in technology: wisdom and an ethics of care. Are the proprietors and engineers of online platforms capable of exercising these all-too-human attributes?

“Humanity risks drowning in a rising tide of meaningless words. The sheer volume of online chatter generated by trolls, bots, entrepreneurs of division and now apps like ChatGPT risks devaluing language itself. What is the human without language? Where is the human in the exponentially wide sea of language currently being produced? Questions about writing, speech and authenticity structure Western epistemology and ontology, which are being restructured by the scale, structure and speed of digital life.

“Underneath this are questions of value. What speech is to be valued? Whose speech is to be valued? The exponential production of meaningless words – that is, words without connection to the human – raises questions about what it is to be human. Perhaps this will be a saving grace of AI: that it forces a revaluation of the human, since the rising tide of words raises the question of what gives words meaning. Perhaps, however, there is no time or opportunity for this kind of reflection, given the commercial imperatives of digital media, the role platforms play in the global economy and the way we, as thinkers, citizens, humans, use their content to fill almost every available silence.”

Beneficial and Harmful
Clifford Lynch, director of the Coalition for Networked Information, wrote, “One of the most exciting long-term developments – it is already well advanced and will be much farther along by 2035 – is the restructuring, representation or encoding of much of our knowledge, particularly in scientific and technological areas, into forms and structures that lend themselves to machine manipulation, retrieval, inference, machine learning and similar activities. While this started with the body of scholarly knowledge, it is increasingly extending into many other areas; this restructuring is a slow, very large-scale, long-term project, with the technology evolving even as deployment proceeds. Developments in machine learning, natural language processing and open-science practices are all accelerating the process.

“The implications of this shift include greatly accelerated progress in scientific discovery (particularly when coupled with other technologies such as AI and robotically controlled experimental apparatus). There will be many other ramifications, many of which will be shaped by how broadly public these structured knowledge representations are, and by the extent to which we encode not only knowledge in areas like molecular biology or astronomy but also personal behaviors and activities. Note that for scholarly and scientific knowledge, the movements toward open scholarship, open-science practices and the broad sharing of scholarly data mean that more and more scholarly and scientific knowledge will be genuinely public. This is one of the few areas of technological change in our lives where I feel the promise is almost entirely positive, and where I am profoundly optimistic.

“The emergence of the so-called ‘geospatial singularity’ – the ability to easily obtain near-continuous high-resolution multispectral imaging of almost any point on Earth, to couple this data in near-real time with advanced machine learning and analysis tools plus historical imagery libraries for comparison purposes, and the shift of such capabilities from the sole control of nation-states to the commercial sector – also seems to be a force primarily for good. The imagery is not so detailed as to suggest an urgent new threat to individual privacy (such as the ability to track the movement of identifiable individuals), but it will usher in a new era of accountability and transparency around the activities of governments, migrations, sources of pollution and greenhouse gases, climate change, wars and insurgencies and many other developments.

“We will see some big wins from technology that monitors individual health parameters like current blood sugar levels. These are already appearing. But to have a large-scale impact they’ll require changes in the health care delivery system, and to have a really large impact we’ll also have to figure out how to move beyond sophisticated users who serve as their own advocates to a broader and more equitable deployment among the general population that needs these technologies.
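The kind of consumer glucose alerting described above can be sketched in a few lines; the thresholds and readings below are illustrative assumptions added by the editor, and real continuous-glucose-monitor software is far more sophisticated.

```python
# Toy range classifier for blood-glucose readings (mg/dL).
# Thresholds are illustrative only, not clinical guidance.
LOW_MG_DL, HIGH_MG_DL = 70, 180

def classify(reading_mg_dl):
    """Return 'low', 'high' or 'in range' for one reading."""
    if reading_mg_dl < LOW_MG_DL:
        return "low"
    if reading_mg_dl > HIGH_MG_DL:
        return "high"
    return "in range"

print([classify(r) for r in (65, 110, 190)])  # ['low', 'in range', 'high']
```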

“Social media – as an environment for propaganda and disinformation, for targeting information delivery at audiences rather than supporting conversations among people who know each other, and as a tool for collecting personal information on its users – seems to be a cesspool without limit. The sooner we see the development of services and business models that allow people who want to use social media for relatively controlled interaction with other known people, without putting themselves at risk of exposure to the rest of the environment, the better. It’s very striking to me to see how more and more toxic platforms for social media communities continue to emerge and flourish. These are doing enormous damage to our society.

“I hope we’ll see social media split into two almost distinct things. One is a mechanism for staying in touch with people you already know (or at least once knew); here we’ll see some convergence between computer-mediated communication more broadly (such as video conferencing) and traditional social media systems. I see this kind of system as a substantial good for people and, in particular, a way of offsetting many current trends toward the isolation of individuals for various reasons. The other would be the environment targeting information delivery at audiences rather than supporting conversations among friends who know each other. The split cannot happen soon enough.

鈥淚t’s hard to pick the worst potential technological developments between now and 2035 for human welfare and well-being; there are so many possibilities, and they tend to mutually re-enforce each other in various dystopian scenarios. And I have to say that we鈥檝e got a very rich inventory of technologies that might be deployed in the service of what I believe would be evil political objectives; saving graces here will be political choices, if there are any.

鈥淥ne cross-cutting theme the challenges to actually achieving the ethical or responsible use of technologies. It鈥檚 great to talk about these things, but they these conversations are not likely to survive the challenges of marketplace competition. And I absolutely despair in the fact that reluctance to deploy autonomous weapons systems is not likely to survive the crucible of conflict. I am also concerned that too many people are simply whining about the importance of taking cautious, slow, ethical, responsible approaches rather than thinking constructively and specifically about getting this accomplished in the likely real-world scenarios for which we need to know how to understand and manage them.

“I’m increasingly of the opinion that so-called ‘generative AI’ systems, despite their promise, are likely to do more harm than good, at least in the next 10 years. Part of this is the impact of deliberately deceptive deepfake variants in text, images, sound and video, but it goes beyond this to the proliferation of plausible-sounding AI-generated materials in all of these genres as well (think advertising copy, news articles, legislative commentary or proposals, scholarly articles and so many more things). I’d really like to be wrong about this.

“Finally, I’d like to believe that brain-machine interfaces (where I expect to see significant progress in the coming decade or so) will be a force for good – there’s no question that they can do tremendous good and perhaps open up astounding new opportunities for people. But again I cannot help but doubt that these will always be put to responsible uses; think, for example, of using such an interface as a means of interrogating someone, as opposed to a way of enabling a disabled person. There are also, of course, more neutral scenarios, such as controlling drones or other devices.

“I am simultaneously excited and frightened about the way digital life may change in the coming decade. It’s going to be a critical period. I believe that as a society and a culture we will at least begin to negotiate, to come to terms with, a number of critically important issues, though I’m doubtful that either the legal or the legislative system is prepared to deal with the questions at hand. I’m thinking we will see some pragmatic commercial and cultural compromises as well as legislative and legal developments.

“There will be disruption in expectations of memorization and a wide variety of other specific skills in education and in qualification for employment in various positions. This will be disruptive not only to the educational system at all levels but to our expectations about the capabilities of educated or adult individuals.

“Related to these questions, but actually considerably distinct, will be a substantial reconsideration of what we remember as a culture, how we remember and which institutions are responsible for remembering; we’ll also revisit how and why we cease to remember certain things.

“Finally, I expect that we will be forced to revisit our thinking in regard to intellectual property and copyright, the nature of creative works and how all of these interact not only with the rise of structured knowledge corpora but, even more urgently, with machine learning and generative AI systems broadly.”

Beneficial
Maja Vujovic, owner and director of Compass Communications in Belgrade, Serbia, responded, “New technologies don’t just pop up out of the blue; they grow through iterative improvements of conceivable concepts, moved forward by bold new ideas. Thus, in the decade ahead, we will see advances in most of the key breakthroughs we already know and use (automation and robotics, sensors and predictive maintenance, AR and VR, gaming and the metaverse, generative arts, chatbots and digital humans) as they mature into the mainstream.

“Much as spreadsheet tech sprouted in the 1970s and first thrived on mainframe computers but became adopted en masse when those apps migrated onto personal desktops, we will witness in the coming years countless variations of apps for personal use of our current top-tier technologies.

“The most useful among those tech-granulation trends will be the use of complex tech in personalized healthcare. We will see very likable robots serve as companions to ailing children and as care assistants to the infirm elderly. Portable sensors will graduate from superfluous swagger to life-saving utility. We will be willing and able to remotely track our pets to begin with, but gradually our small children or parents with dementia as well.

“Drowning in data, we will have tools for managing other tools and widgets for automating our digital lives. Apps will work silently in the background, or in our sleep, tagging our personal photos, tallying our daily expenses, planning our celebrations or curating our one (combined) social media feed. Rather than supplanting us and scaling our creative processes – which by definition only work on a scale of one! – technology will be deployed where we need it the most, in support of what we do best: human creation.

“To extract the full value from tools like chatbots, we will all soon need to master the arcane art of prompting AI. Prompt engineer is already a highly paid job. In the next decade, prompting AI will be an advanced skill at first, then a realm of licensed practitioners and eventually an academic discipline.”
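The ‘art of prompting’ mentioned above largely amounts to structuring a request: stating a role, the task, constraints and an example of the desired output. The following is a generic editor's sketch, not tied to any particular model or API; all names and strings in it are illustrative.

```python
def build_prompt(task, audience, constraints, example):
    """Assemble a structured prompt from its typical parts."""
    return "\n".join([
        f"You are an assistant helping {audience}.",
        f"Task: {task}",
        "Constraints: " + "; ".join(constraints),
        f"Example of the desired style: {example}",
    ])

prompt = build_prompt(
    task="summarize a 20-page report in five bullet points",
    audience="a busy city-council member",
    constraints=["plain language", "no jargon", "under 120 words"],
    example="Bus ridership rose 12% after fares were cut.",
)
print(prompt.splitlines()[1])  # Task: summarize a 20-page report in five bullet points
```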

Harmful
Maja Vujovic, owner and director of Compass Communications in Belgrade, Serbia, said, “Our most advanced digital technologies are a result of unprecedented aggregation. Top apps have enlisted almost half of the global population. The only foreseeable scenario for them is to keep growing. Yet our global linguistic capital is not evenly distributed.

“By compiling the vocabularies of languages with far fewer users than English or Chinese, a handful of private enterprises have captured and processed the linguistic equity not only of English, Hindi or Spanish but of many small cultures as well, such as Serbian, Welsh or Sinhala. Those cultures have far less capacity to compile and digitally process their own linguistic assets by themselves. While mostly benign in times of peace, this imbalance can have grave consequences during more tense periods. Effectively, it is a form of digital supremacy, which in time might prove taxing on smaller, less wealthy cultures and economies.

“Moreover, technology is always at the mercy of other factors, which get to determine whether it is used or misused. The more potent the technologies at hand, the more damage they can potentially inflict. Having known war firsthand, and having gone through the related swift disintegration of social, economic and technical infrastructure around me, I am concerned to think how utterly devastating such disintegration would be in the near future, given our total dependence on an inherently frail digital infrastructure.

“With our global communication signals fully digitized in recent times, there would be absolutely no way to get vital information, talk to distant relatives or collect funds from online finance operators in the case of any accidental or intentional interruption or blockade of Internet service. Virtually all the amenities of contemporary living – our whole digital life – could be canceled with the flip of a switch, without recourse. As implausible as this sounds, it isn’t impossible. Indeed, we have witnessed implausible events take place in recent years. So I don’t like the odds.”

Beneficial
Kunle Olorundare, vice president of the Nigeria Chapter of the Internet Society, said, “Digital technology has come to stay in our lives, for good. One area that excites me about the future is the use of artificial intelligence, which of course is going to shape the way we live by 2035. We have started to see the dividends of artificial intelligence in our society.

“Essentially, the human-centered development of digital tools and systems is safely advancing human progress in the areas of transportation, health, finance, energy harvesting and so on. As an engineer who believes in the power of digital technology, I see limitless opportunities for our transportation system. Beyond personal driverless cars and taxis, by 2035 our public transportation will be taken over by remote-controlled buses running to accurate schedules with a marginal error of 0.0099, which will make personal cars feel unnecessary. This will be cheaper and more dependable.

“Autonomous public transport will be pocket-friendly for the general citizenry. It will come with less pollution, as energy harvesting from green sources takes a tremendous positive turn with the use of IoT and other digital technologies that harvest energy from multiple sources by estimating what amount of energy is needed and which green sources are available at a particular time, with plus-one redundancy – hence minimal inefficiency.
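The ‘plus-one redundancy’ idea described above can be sketched as a simple source-selection routine: pick green sources until estimated demand is covered, then keep one extra online as a spare. Everything in this editor's sketch (names, capacities, the greedy strategy) is an illustrative assumption, not a description of a real system.

```python
def pick_sources(demand_kw, sources):
    """Greedily select sources ((name, capacity_kw) pairs, in preference
    order) until demand is met, then add one hot spare (N+1)."""
    chosen, total = [], 0.0
    for name, capacity_kw in sources:
        chosen.append(name)
        total += capacity_kw
        if total >= demand_kw:
            break
    if total < demand_kw:
        raise ValueError("available green capacity cannot meet demand")
    # Plus-one redundancy: bring one additional source online as a spare.
    spare = next((n for n, _ in sources if n not in chosen), None)
    if spare is not None:
        chosen.append(spare)
    return chosen

print(pick_sources(10, [("solar", 6), ("wind", 5), ("hydro", 8)]))
# ['solar', 'wind', 'hydro']
```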

“The deployment of bigger drones that can come directly to your house to pick you up after identifying you, debiting your digital wallet and confirming the payment will be a reality. The use of paper tickets will be a thing of the past, as digital wallets to pay for all services will be ubiquitous.

“In regard to human connections, governance and institutions and the improvement of social and political interactions: by 2035, the body of knowledge will be fully connected. There will be universal acceptance of open-source applications that make it possible to have a globally robust body of knowledge in artificial intelligence and robotics. There will be less depression in society. If your friends are far away, robots will be available as friends you can talk to, watch TV with and analyze World Cup matches with, as you might do with your friends. Robots will also be able to contribute to your research work, even more than what ChatGPT is capable of today.

“Governance will be seamless, as we will be closer to the government in the digital ecosystem. You will pay your taxes without being chased around, because they will be deducted at the source. There will be less corruption in our society. We will need fewer law enforcement agents, as there will be minimal lawbreaking: the recalcitrant will have little or no opportunity to break the law, and our society will be safer, even as digital finance takes over the financial ecosystem, with AI and blockchain leaving little room for corruption.

“Contract costs can be calculated even before a contract is awarded, and changes in budget can be opened transparently, leaving minimal room for corruption, as AI brings in changes in the prices of materials instantly, without anyone going into the physical market. I look forward to participating in more research on how this can be implemented.

“In regard to human knowledge and its verification, updating, safe archiving and so on: open-source AI will make research work easier. However, human ingenuity will still be needed to add value. Research will be much easier as we concentrate on the creative work while AI conducts the secondary research. Hence, there will be an increase in contributions to the body of knowledge, and our society will be better off.

“Human health and well-being will benefit greatly from the use of AI, bringing about a healthier population, as sicknesses and diseases can be easily diagnosed. Infectious diseases will become less virulent because of the use of robots during highly infectious outbreaks, and pandemics can more easily be curbed. With enhanced big data using AI and ML, pandemics can be more easily predicted and prevented, and the impact curve flattened in the shortest possible time using AI-driven pandemic management systems.”

Harmful
Kunle Olorundare, vice president of the Nigeria Chapter of the Internet Society, wrote, “It is pertinent to also look at the other side of the coin as we gain positive traction on digital technologies. There will be concern about the safety of humans as this technology falls into the hands of scoundrels who use it for crime, mischief and other negative ends. This technology can be used to attack innocent souls. It may be used to manipulate the public or destroy political enemies, and thus it is not necessarily always the ‘bad guys’ who are endangering our society.

“Human rights may be abused. For example, a government may want to tie us to one digital wallet through a central bank digital currency and dictate how we spend our money. These are issues that need to be looked at in order not to trample on human rights.

“Technological decolonization may also raise a concern, as unique cultures may be eroded by global harmonization. This can create an unequal society in which some sovereignties may benefit more than others.”

Beneficial
Dennis Szerszen, an independent business and marketing consultant who previously worked with IBM, wrote, “Embedded information technology will make our personal transportation autonomous by 2035. It is likely to save lives. We will be less reliant on our own senses for driving, and, with broader information, we will not need to make choices regarding fuel or even how we get to our destination. Predictive information may even enable us to migrate away from fossil-based fuels by making the powering of our vehicles autonomously managed.

“Our home tech will be far more autonomous as well. Predictive information will help shop for us. Our food supply chain will become far more stable than it has been in these times of supply instability and population growth.

“I predict that our healthcare system will be dramatically changed. We will still have to work through the mega-hospital system for our care, but care will be managed less by human decision-making and more by information systems that can anticipate conditions, completely manage predictive care and handle nearly all scheduled interactions, including vaccinations and surgical procedures. I predict (with hope) that medical research will change dramatically from the short-sighted model used today, predominantly driven by big pharma seeking to make money on medications for chronic conditions, to one that migrates back to academia and focuses on predicting and curing human conditions, affecting both lifespan and quality of life.”

Harmful
Dennis Szerszen, an independent business and marketing consultant who previously worked with IBM, commented, “False news will become the majority of what we see online, even through ‘trusted’ news services. We will trust even less the information that is presented to us as fact-based reporting. Ideology-driven decision-makers will rule our governments and our courts, further eroding human rights, especially for women and for members of other-gender populations. Educational systems will be adversely affected by struggles over the information available for teaching materials, because of ideological shifts and the plain lack of non-subjective historical information. Social media will be flooded with idealized images, and our sense of normal human appearance will be altered. Our impression of beauty will become narrow.”

Beneficial and Harmful
Andy Opel, professor of communications at Florida State University, wrote, “In drafting this response, the first thing I notice is how hard it is to imagine a better digital future and how easily dystopian narratives, fears and anxieties dominate my imaginative visions. The history of consolidated power and commercial imperatives has successfully warped my expectations to the point where potentially positive outcomes are met with skepticism and suspicion. Given this impulse, the following is an attempt to silence those voices and make room for a possible future that could emerge if we are able to wrestle our institutions back into service to the public good.

“The fall of 2022 introduced profound changes to the world with the release of OpenAI’s ChatGPT. Five days later, over a million users had registered for access, marking the fastest diffusion of a new technology ever recorded. This tool, combined with a myriad of text-to-image, text-to-sound and voice-transcription generators, is creating a dynamic environment that is going to present new opportunities across a wide range of industries and professions.

“These emerging digital systems will become integrated into daily routines, assisting in everything from the most complicated astrophysics to the banality of daily meal preparation. As access to collected human knowledge proliferates, individuals will be empowered to make more informed decisions, navigate legal and bureaucratic institutions, and resolve technical problems with unprecedented speed and accuracy.

“AI tools will reshape our digital and material landscapes, disrupting the divisive algorithms that have elevated cultural and political differences while masking the enormity of our shared values – clean air, water, food, safe neighborhoods, good schools, access to medical care and baseline economic security. As our shared values and ecological interdependence become more visible, a new politics will emerge that will overcome the stagnation and oligarchic trends that have dominated the neoliberal era.

“Out of this new digital landscape is likely to grow a realization of the need to reconfigure our economy to support what the pandemic revealed as ‘essential workers,’ the core elements of every community worldwide: farmers, grocery clerks, teachers, police and fire, service industry workers, etc. Society cannot function when the people in its foundational professions cannot afford homes in the communities they serve.

“This economic realignment will be possible because of the digital revolution taking place at this very moment. AI will both eliminate thousands of jobs and generate enough wealth to provide a basic income that will free up human time, energy and ingenuity. Through shorter work weeks and a move away from the two-parent income requirement to sustain a family, local, sustainable communities will reconnect and rebuild the civic infrastructure and social relations that have been the base of human history across the millennia.

“Richard Nixon proposed a universal basic income in 1969, but the initiative never made it out of the Senate. Over half a century later, we are on the precipice of a new economic order made possible by the power, transparency and ubiquity of AI. Whether we are able to harness the new power of emerging digital tools in service to humanity is an open question. I expect AI will play a central role in assisting the transition to a more equitable and sustainable economy and a more accessible and transparent political process.

“I end with this quote: ‘If the great mass of Americans is going to have any role whatsoever in the shaping of this future, if there is to be any chance at all that the 21st century will belong to the whole of humanity, as opposed to the monopolists of a new Gilded Age, then the defining economic issues of the age must become the defining political issues of the age.’ – Robert McChesney and John Nichols, authors of ‘People Get Ready: The Fight Against a Jobless Economy and a Citizenless Democracy.’”

Harmful
Andy Opel, professor of communications at Florida State University, said, “AI and emerging digital technologies have a wide range of possible negative impacts, but I want to focus on two: the environmental impact of AI and the erosion of human skills.

“The creation of the current AI tools, from GPT-3 to Stable Diffusion and other text-to-image generators, required significant amounts of electricity to provide the computing power to train the models. According to MIT Technology Review, over 600 metric tons of CO2 were produced to train GPT-3. This is the equivalent of over 1,000 flights between London and New York – and this is just to train the AI tool, not to run the daily queries that are now expected from millions of users worldwide.
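The flight comparison is easy to check with back-of-envelope arithmetic (an editor's illustration; the roughly 0.59 metric tons of CO2 per passenger for a one-way London–New York flight is an assumed ballpark figure, and published estimates vary by source).

```python
# Rough consistency check of the quoted comparison.
training_emissions_t = 600      # metric tons of CO2, per the figure cited above
per_passenger_flight_t = 0.59   # assumed: one-way London-New York, per passenger

flights = training_emissions_t / per_passenger_flight_t
print(round(flights))  # 1017 -- i.e., "over 1,000 flights"
```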

鈥淚n addition, ChatGPT-3 is just one of many AI tools that have been trained and are in use, and that number is expanding at an accelerating rate. Until renewable energy is used to run the server farms that are the backbone of every AI tool, these digital assets will have a growing impact on the climate crisis. This impact remains largely invisible to citizens who store media in 鈥榯he cloud,鈥 too often forgetting the real cloud of CO2 that is produced with every click on the screen.

鈥淭he second major impact of emerging digital media tools is the ephemeral nature of the information and the vulnerability of this information. While print media has a limited lifespan, we continue to have access to documents that were written over 2,000 years ago, and the Epic of Gilgamesh continues to animate high school classes, thousands of years later. Computer software and hardware on the other hand changes so quickly most of us have media drives with cables that no longer connect to our machines, rendering those files obsolete to only the most dedicated media cable archivist!

鈥淎s our reliance on digital tools grows 鈥 from the simplicity of spell checking to the complexity of astrophysics, our collective knowledge is increasingly stored in a digital format that is vulnerable to disruption. At the same time, the ubiquity of these tools is seductive, allowing the unskilled to produce amazing visual art or music or simulate the appearance of expertise in a wide range of subject areas.

"The growing dependence on this simulation masks the physical skills that are being stripped out, replaced by expertise in search terms and prompt writing. This accelerates a trend that has been in place for many years as people have moved from the physical to the digital. Without the mechanical skills of hammers and wrenches, planting and compost, wiring and circuits, entire populations become dependent on a shrinking pool of people who actually *do* things. When the power goes out, the best AI in the world will not help."

Beneficial
Aymar Jean Christian, associate professor of communication studies at Northwestern University and adviser to the Center for Critical Race Digital Studies, observed, "Decentralization is a promising trend in platform distribution. Web 2.0 companies grew powerful by creating centralized platforms and amassing large amounts of social data. The next phase of the web promises more user ownership and control over how our data, social interactions and cultural productions are distributed. The decentralization of intellectual property and its distribution could provide opportunities for communities that have historically lacked access to capitalize on their ideas. Already, users and grassroots organizations are experimenting with new decentralized governance models, innovating on the longstanding hierarchical corporate structure."

Harmful
Aymar Jean Christian, associate professor of communication studies at Northwestern University and adviser to the Center for Critical Race Digital Studies, observed, "The automation of story creation and distribution through artificial intelligence poses pronounced labor-equality issues as corporations seek cost savings on creative content and content moderation on platforms. These AI systems have been trained on the un- or under-compensated labor of artists, journalists and everyday people, much of it underpaid labor outsourced by U.S.-based companies. These sources may not be representative of global culture or hold the ideals of equality and justice. Their automation poses severe risks for U.S. and global culture and politics.

"As the web evolves, there remain big questions as to whether equity is possible or whether venture capital and the wealthy will buy up all digital intellectual property. Conglomeration among firms often leads to market manipulation, labor inequality and cultural representations that do not reflect changing demographics and attitudes. And there are also climate implications for many new technological developments, particularly concerning the use of energy and other material natural resources."

Beneficial (Did not respond to harmful)
Jon Lebkowsky, writer and co-wrangler of the Plutopia News Network, previously CEO, founder and digital strategist at Polycot Associates, said, "However you define AI, it will be an increasingly present technology. I believe that increasing use of AI will highlight its constraints and limitations, along with an understanding that it is most effective when it supports and expands human endeavors. To the extent AI can automate tasks, we will have to rethink human employment and revise our economic thinking.

"We can expect to see substantial innovation related to climate change adaptation, and possibly mitigation to the extent that's still possible. We will see the development of increasingly efficient and clean fuel sources and technologies for leveraging those sources most effectively.

"We'll see a computer-mediated trend toward decentralization of social media and social organization. We'll also hopefully see effective use of technology to support more decentralized and democratic cooperative enterprises.

"We can also hope to see ongoing medical advances, including the development of sophisticated vaccines and therapies to manage and prevent global pandemics. Hopefully we will find more and better ways to extend sophisticated healthcare broadly, leveraging technology effectively to make care delivery increasingly efficient and accessible."

Beneficial (Did not respond to harmful)
Cathy Cavanaugh, chief experience officer at the University of Florida Lastinger Center for Learning, said, "Inequitable access to technology and services exacerbates existing social and economic gaps rather than eliminating them. Too few governments balance capitalism and social services in ways that serve the greatest needs. These imbalances look likely to continue rather than to change because of increasing power imbalances in many countries. Equitable access to essential human services is crucial. Technology now exists in most locations that is affordable, available in most languages, accessible to people of many physical abilities and easy to learn. The most beneficial use of this personal technology is to connect individuals, families and communities to necessary and life-changing services, using secure technology that can streamline and automate these services, making them more accessible. We have seen numerous examples, including microfinance, apps that help unhoused people find shelter, online education, telehealth and a range of government services. Too many people still experience poverty, bias and a lack of access to services that meet their needs and create opportunities for them to fully participate in and contribute to their communities."

Beneficial
Justin Reich, associate professor of digital media at MIT and director of the Teaching Systems Lab, commented, "Video games have continued to grow as a medium and art form, both on the AAA side and on the indie side. I'm excited to see what games people are making in 2035. I bet a number of them will be really fun, engaging and moving."

Harmful
Justin Reich, associate professor of digital media at MIT and director of the Teaching Systems Lab, commented, "The hard thing about predicting the future of tech is that so much of it is a reflection of our society. The more we embrace values of civility, democracy, equality and inclusion, the more likely it is that our technologies will reflect our social goals. If the advocates of fascism are successful in growing their political power, then the digital world will be full of menace – constant surveillance, targeted harassment of minorities and vulnerable people, widespread dissemination of crappy art and design, and so forth, all the way up to true tragedies like the genocide of the Uyghur people in China."

Beneficial
Stephan Adelson, president of Adelson Consulting Services and an expert in the internet and public health, said, "The recent release of several AI tools across various categories begins a significant shift in the creative and predictive spaces. Creative writing, predictive algorithms, image creation, computation – even the process and products of thought itself are being challenged. I predict that the greatest potential benefit to mankind from digital technologies by 2035 will come through the challenges their existence creates. We, as a species, are creators of technologies that are learning and growing their productive capabilities and creative capacities. As these tools grow, learn and become integrated into our everyday lives, both personal and professional, they will become major competitors for resources – financial, social and entertainment. I feel it is in this competition that they will provide our greatest growth and benefit as a species. As we compete with our digital creations, we will be forced to grow or become dependent on what we have created and can no longer exceed."

Harmful
Stephan Adelson, president of Adelson Consulting Services and an expert in the internet and public health, said, "Reality itself is under siege. AI, CGI, developing augmented reality and other tools that can create misleading, alternate or deceptive realities – especially when used politically – are the greatest threats to our future. Manipulation of the masses through media has always been a foundation of political and personal gain. As digital tools that can create ever-more-convincing alternatives to what mankind sees, hears, comprehends and perceives become mainstream daily tools, as I believe they will by 2035, the temptation to use them for personal and political gain will be ever present. There will be battles over 'truths' that may cause a future in which paranoia, conspiracy theories and a continual fight over what is real and what is not are commonplace. I fear for the mental health of those unable to comprehend the tools and who do not have the capacity to discern truth from deception.

"Continued political and social unrest, increases in mental illness and a further widening of the economic gap are almost guaranteed unless actions are taken that restrict the misuse of these tools and/or reliable tools are developed that are capable of separating 'truth' from 'fiction.'"

Beneficial and Harmful
Marcus Foth, professor of informatics at Queensland University of Technology, said, "The best and most beneficial changes with regard to digital technology and humans' use of digital systems will be in the areas of governance – from the micro-scale governance of households, buildings and street blocks to the macro-scale governance of nation-states and the entire planet.

"The latest development we are seeing in the area of governance is digital twins – in essence, large agglomerations of data and data analysis. We will look back at them and perhaps smile. They are a starting point, yet they don't necessarily result in better political decision-making or evidence-based policymaking. Those are two areas in urgent need of attention. This attention has to come from the humanities, communications and political science fields more so than from the typical STEM/computer science responses, which tend to favour technological solutionism.

"The best and most beneficial changes will be those that end the neoliberal late-capitalist era of planetary ecocide and bring about a new collective system of governance that establishes consensus with a view to stopping us from destroying planet Earth and ourselves – if we are still around in 2035, that is. The most harmful or menacing changes are those portrayed as sustainable that are nothing more than greenwashing. Digital technology and humans' use of digital systems are at the core of the greenwashing problem. We are told by corporations that in order to be green and environmentally friendly, we need to opt for the paper straw, the array of PV solar panels on our roofs and the electric vehicle in our garage. Yet the planetary ecocide is not based on an energy or resources crisis but on a consumption crisis.

"Late capitalism has the perverted quality of profiteering from the planetary ecocide by telling greenwashing lies – this extends to digital technology and humans' use of digital systems, from individual consumption choices such as solar and EVs to large-scale investments such as smart cities. The reason these types of technology are harmful is that they just shift the problem elsewhere – out of sight.

"The mining of rare earth metals continues to affect the poorest of the poor across the Global South. The ever-increasing pile of e-waste continues to grow due to planned obsolescence and people being denied a right to repair.

"The idea of a circular economy is being dumbed down by large corporations in an attempt to continue BAU – business as usual. The Weltschmerz caused by humans' use of digital systems is what's most menacing, without our even knowing it."

Beneficial
Robin Raskin, founder of the Virtual Events Group, author, publisher and conference and events creator, wrote, "The metaverse marches forward in fits and starts, but ultimately it will divide into two distinct categories. There will be a metaverse for gaming, entertainment and shopping. The most critical metaverse will be a digital twin of everything – cities, schools and factories, for example. These twins, coupled with IoT devices, will make it possible to create simulations, inferences and prototypes, showing how to optimize for efficiency before ever building a single thing.

"The consumerization of AI will augment, if not replace, most of the white-collar jobs in areas including traditional office work, advertising and marketing, writing and even programming. Since work won't be 'a thing' anymore, we'll need to find other means of compensation for our contribution to humanity. Payment for how much positive participation we contribute to the web? A universal basic income because we all taught AI to do our jobs? It remains to be seen, but the AI Revolution will be as huge as the Industrial Revolution.

"Big tech as it is today will no longer be 'big.' Rather, tech jobs will go to various sectors, from agriculture and sustainability to biomed. The Googles and Facebooks have almost maxed out their capacity to broaden their innovations. Tech talent will move to solve more pressing problems in vertical sectors.

"By 2035 we will have a new digital currency (probably not crypto as we know it today, but close). We may have a new system of voting for leaders (a button in your home instead of a representative in Congress or the Senate, so that we really achieve something closer to one man/one vote).

"Finally, doctors and hospitals will continue to become less relevant to our everyday lives. People will be allowed to be sick in their homes, monitored remotely through telemedicine and devices. We're already seeing CVS, Walmart and emergency clinics replace doctors as the first point of contact. Medicine will spread into the community rather than be a destination."

Harmful
Robin Raskin, founder of the Virtual Events Group, author, publisher and conference and events creator, predicted, "What we're experiencing now is the harbinger of what's to come. Synthetic humans and robot friends may increase our social isolation. The demise of the office or the school campus as a gathering place will leave us hungry for human companionship and may cause us to lose our most human skills – empathy and compassion.

"We become 'man and his machine' rather than 'man and his society.'

"Higher education will face a crisis like never before. Exorbitant pricing and a lack of parity with the real world make college seem quite antiquated. I'm wagering that 50 percent of higher-education institutions in the United States will be forced to close down. We will devise other systems of degrees and badges to prove competency."

Beneficial
Mark Schaefer, a business professor at Rutgers University and author of "Marketing Rebellion," wrote, "In America, healthcare progress will come from startups and boutique clinics that offer wealthy individuals environmental screening devices and pharmaceutical solutions customized for precise genetic optimization. The smart home of the future will analyze air quality, samples from the bathroom waste stream and food consumption to suggest daily health routines and make automatic environmental and pharmaceutical adjustments.

"Overall, an AI-driven healthcare system will be radically streamlined to be highly personal, effective and efficient in many developed regions of the world – excluding the United States. While the U.S. will remain the leader in developing new healthcare technology, the country will lag behind most of the world in adopting this tech due to powerful lobbyists in the healthcare industry and a dysfunctional government unable to legislate reform.

"However, progress will take off rapidly in China, a country with a rapidly aging population and a government that will dictate speedy reform. Dramatic improvements will also occur in countries with socialized healthcare, since efficiency means a dramatic improvement in direct government spending. Life expectancy will increase by 10% in these nations by 2035. China's population will have declined dramatically by 2035, a symptom of the one-child policy, rapid urbanization and social changes. China will attract immigrant workers to boost its population by offering free AI-driven healthcare."

Harmful
Mark Schaefer, a business professor at Rutgers University and author of "Marketing Rebellion," wrote, "The rapid advance of artificial intelligence in our digital lives will mean massive worker displacement and set off a ripple of unintended consequences. Unlike previous industrial shifts, the AI-driven change will happen so suddenly – and create a skill gap so great – that retraining on a massive scale will be largely impossible.

"While this will have obvious economic consequences that will renew discussion about a minimum universal income, I'm more concerned by the significant psychological impact of the sudden, and perhaps permanent, loss of a person's purpose in life.

"I recently completed a new book after two years of research, writing and significant personal sacrifice. After the book was published, I tested an AI tool by asking it to write a section of my book – in my 'voice' and with appropriate academic references. It did it, and it did it well, in five seconds. I am at least 80% replaced by a soulless bot. It was the most depressing moment of my career. Although my career is not necessarily threatened at this moment, much of my meaning is derived from the personal struggle it takes to create extraordinary books and the satisfaction of the reader's response to my unique effort. What happens when this loss of meaning and purpose occurs on a massive, global scale?

"There is a large body of research showing that unemployment is linked to anxiety, depression and loss of life satisfaction, among other negative outcomes. Even underemployment and job instability create distress for those who aren't counted in the unemployment numbers.

"Millions of these displaced people will require psychological support. They will probably receive it from AI-fueled bots. After all, much of psychological treatment is simply a science-based response to detectable patient behavior patterns, which is exactly what AI loves to do.

"Many lonely people will fill their empty days with content programming that is uniquely designed for them. Limitless, personalized media will be tuned to individual brain-wave responses and optimized to elicit precise amounts of dopamine, oxytocin and serotonin to keep us blissfully and naturally high all day. Literally, we will be addicted to our media. We'll routinely have immersive digital experiences with deceased loved ones, heroes and historical figures who will help us forget that we have nothing better to do.

"The general loss of employment and meaning will create new businesses to serve the bored and depressed population. Many social and economic issues will finally be addressed by the large number of volunteers with free time on their hands. I have seen bits and pieces of this media technology in action already, and it can certainly be available on a massive scale by 2035."

Beneficial
Eileen Donahoe, executive director of the Stanford Global Digital Policy Incubator, wrote, "Human-centered design of digital tools will become a well-developed framework, the use of which is expected and demanded by all stakeholder groups. In addition, we will have seen much more progress on what human-centered design actually requires in practice across all types of technological innovation."

Harmful
Eileen Donahoe, executive director of the Stanford Global Digital Policy Incubator, commented, "Digital authoritarianism could become a dominant model of governance across the globe, due to a combination of the intentional use of technology for repression in places where human rights are not embraced, plus a failure to adhere to a human rights-based approach to the use and regulation of digital technology even in countries where human rights are embraced."

Beneficial
Rosalie Day, a policy leader and consultancy owner specializing in system approaches to data ethics, compliance and trust, wrote, "Spurred on by the pandemic, and for multiple reasons, more of the population will be connected digitally, even within internet-walled countries. This makes it easier to reach unbanked populations and provide benefits, virtual healthcare and education. Disaster relief will be facilitated, and corruption will not be as prevalent for as long (assuming power generation and satellites restore connectivity).

"With progress in AI come more ways to mitigate and adapt to climate change. Benefits to both will come in the form of supply-chain optimizations, which will increase the efficiency of fuel use and potentially increase food security.

"Therapies for cancers and rare diseases are going to advance vastly with the amount of data available for training AI. Access to anonymized patient data will increase. Organizations aside from the traditional players in big pharma will be enabled to make strong gains, especially in genetic discoveries and gene therapies."

Harmful
Rosalie Day, a policy leader and consultancy owner specializing in system approaches to data ethics, compliance and trust, said, "Misinformation will continue to grow through accelerated amplification – now not only by the algorithms that play toward our own worst instincts but also by generative AI, which will further embed biases and make us more skeptical of what can be 'seen.' The latter will make digital literacy an even greater divide. The digitally challenged will increasingly rely on the credibility of the source of the information, which we know is detrimentally subjective.

"Generative AI will hurt the education of our workforce. It is difficult enough to teach and evaluate critical thinking now. I expect knowledge silos to increase as the use of generative AI concentrates subjects and the training data becomes the spawned data. Critical thought asks the thinker to incorporate knowledge, adapt ideas and modify accordingly. The never-before-seen becomes the constraint, and groupthink becomes the enforcer.

"Generative AI will also displace many educated and uneducated workers. Quality of life will go down because of the satisficing nature of human systems: Is it sufficient? Does the technology get it right within the normal distribution? Systems will exclude hiring people with passion or those particularly good at innovating because they are statistical outliers."

Beneficial
Isaac Mao, Chinese technologist, data scientist and entrepreneur, said, "Artificial intelligence is poised to greatly improve human well-being by providing assistance in processing information and enhancing daily life. From digital assistants for the elderly, to productivity tools for content creation and disinformation detection, to health and hygiene innovations such as AI-powered gadgets, AI technology is set to bring about unprecedented advancements in various aspects of our lives. These advances will not only improve our daily routines but also bring about a new level of convenience and efficiency that has not been seen for centuries. With the help of AI, even the most mundane tasks, such as brushing teeth or cutting hair, can be done with little to no effort or concern, dramatically changing routines we have struggled with for centuries."

Harmful
Isaac Mao, Chinese technologist, data scientist and entrepreneur, observed, "It is important to recognize that digital tools, particularly those related to artificial intelligence, can be misused and abused in ways that harm individuals, even without traditional forms of punishment such as jailing or physical torture. These tools can be used to invade privacy, discriminate against certain groups and even cause loss of life. When used by centralized powers, such as a repressive government, the consequences can be devastating. For example, AI-powered surveillance programs could be used to unjustly monitor, restrict or even target individuals without the need for physical imprisonment or traditional forms of torture. To prevent such abuse, it is crucial to be aware of the potential dangers of these technologies and to work toward making them more transparent through democratic processes and political empowerment.

"While some technologies, such as virtual reality (VR) or the metaverse, have the potential to be used for entertainment and education, they also pose a risk of blurring the lines between reality and fiction. This can be dangerous and lead to long-term struggles in managing and utilizing these technologies for the greater good. It is important to be aware of these potential dangers and take steps to ensure that these technologies are used responsibly and ethically."

Beneficial
Evan Selinger, professor of philosophy at Rochester Institute of Technology and author of "Re-Engineering Humanity," wrote, "By 2035, there will be significant beneficial changes to healthcare, specifically in AI-assisted medical diagnosis and treatment, as well as AI predictions related to public health. I also anticipate highly immersive and interactive digital environments for working, socializing, learning, gaming, shopping, traveling and attending healthcare-related appointments."

Harmful
Evan Selinger, professor of philosophy at Rochester Institute of Technology and author of "Re-Engineering Humanity," predicted, "Surveillance technology will become increasingly invasive – not just in its capacity to identify people based on a variety of biometric data but also in its ability to infer what those in power deem to be fundamental aspects of our identities (including preferences and dispositions), as well as to predict, in finer-grained detail, our future behavior and proclivities. Hypersurveillance will permeate the public and private sectors – spanning policing, military operations, employment (full cycle, from hiring through day-to-day activities, promotion and firing), education, shopping and dating."

Beneficial and Harmful
David A. Banks, director of globalization studies at the University at Albany-SUNY, commented, "Between now and 2035, the tech industry will experience a declining rate of profit, and individual firms will seek to extract as much revenue as possible from existing core services; thus users could begin to critically reevaluate their reliance on large-scale social media, group chat systems (e.g., Slack, Teams) and perhaps even search as we know it. Advertising – the 'internet's original sin,' as Ethan Zuckerman so aptly put it in 2014 – will combine with intractable free-speech debates, unsustainable increases in web-stack complexity and increasingly unreliable core cloud services to trigger a mass exodus from Web 2.0 services. This is a good thing!

"If big tech gets the reputation it deserves, that could lead to a renaissance of libraries and human-centered knowledge searching as an alternative to predatory, profit-driven search services. Buying clubs and human-authored product reviews could conceivably replace algorithmic recommendations, which would be correctly recognized as the advertisements they are. Rather than wringing hands about 'echo chambers,' media could finally return to a partisan stance in which biases are acknowledged and audiences can make fully informed decisions about the sources of their news and entertainment. It would be more common for audiences to directly support independent journalists and media makers who utilize a new, wider range of platforms.

"On the supply side, up-and-coming tech firms and their financial backers could respond by throwing out the infinite-expansion model established by Facebook and Google in favor of niche markets that are willing to spend money directly on services they use and enjoy, rather than passively paying for ostensibly free services through ad revenue. Call it the 'humble net' if you like – companies that are small and aspire to stay small, in a symbiotic relationship with a core, loyal userbase. The smartest people in tech will recognize that they have to design around trust and sustainability rather than trustless platforms built for infinite growth.

"I am mostly basing my worst-case-scenario prognostication on how the alt-right has set up a wide range of social media services meant to foster and promulgate its worldview.

"In this scenario, venture capital firms will not be satisfied with the humble net and will likely put their money into firms that sell to institutional buyers – think weapons manufacturers, billing and finance tools, work-from-home hardware and software, and biotech. This move by VCs will have the aggregate effect of privatizing much-needed public goods, supercharging overt surveillance technology and stifling innovation in basic research that takes more than a few years to produce marketable products.

"As big companies' products lose their sheen and inevitably lose loyal customers, those companies will likely attempt to become infrastructure rather than customer-facing brands. This can be seen as a retrenchment of control over markets and an attempt to become a market arbiter rather than a dominant competitor. It will likely lead to monopolistic behavior – price gouging, market manipulation, collusion with other firms in adjacent industries and markets – that will not be readily recognizable by the public or regulators. There is no reason to believe regulatory environments will strengthen enough to prevent this in the next decade.

"Big firms, in their desperation for new sources of revenue, will turn toward more aggressive freemium subscription models and push into what is left of brick-and-mortar stores. I have called this phenomenon the 'Subscriber City,' where entire portions of cities will be put behind paywalls. Everything from your local coffee shop to public transportation will either offer deep discounts to subscribers of an Amazon Prime-esque service or refuse direct payments altogether. Transportation services like Uber and Waze will more obviously and directly act like managers of segregation than like convenience and information services.

"Real estate markets, which were once geographically fragmented, will become increasingly integrated at national and international scales so that landlords and banks can collude to set prices on rent, interest rates and insurance premiums. The goal here will be to dynamically price real estate and its attendant financial services for individuals and maximize returns for institutional investors.

"Western firms will be dragged into trade wars by an increasingly antagonistic U.S. State Department, leading to increased prices on goods and services and more overt forms of censorship, especially with regard to international current events. This will likely drive people to their preferred humble nets to get news of varying veracity. Right-wing media consumers will seek out conspiratorial jingoism, centrists will enjoy a heavily censored corporate mainstream media, and the left will be left victim to con artists, would-be journalism influencers and vast lacunae of valuable information."

Beneficial
Stephan G. Humer, sociologist and computer scientist at Fresenius University of Applied Sciences in Berlin, responded, "The human being will be much more at the center of digital action than now. It's not only about usability, interface design or the intuitive use of smartphones; it's about real human empowerment, improvement and strength. Digitization has left the phase in which technology is at the center of all that happens, and we will now move more and more toward a truly human-centered design of digital tools and services. Three main aspects will be visible: better knowledge management, better global communication and better societal improvements. Ultimately, we will have the most sovereign individuals of all time."

Harmful
Stephan G. Humer, sociologist and computer scientist at Fresenius University of Applied Sciences in Berlin, said, "The most harmful changes will appear if governments, digital companies and other institutions do not focus on the empowered citizen. The prime example here is social media: People need an excellent digital culture to successfully understand, deal with and engage with social media. The longer governments and other stakeholders wait, the more harmful it will be for societies and individuals."

Beneficial
Ravi Iyer, managing director of the Psychology of Technology Institute at the University of Southern California, formerly product manager at Meta and co-founder of Ranker.com, said, "I expect that people will start to demand human-centered AI systems that work for them and not for the companies that build them. These demands will be enforced by governments and app stores. I also expect that AI will lead to great leaps forward in personalized medicine and the availability of automated health tools for emerging markets."

Harmful
Ravi Iyer, managing director of the Psychology of Technology Institute at the University of Southern California, formerly product manager at Meta and co-founder of Ranker.com, predicted, "A rogue state will build autonomous killing machines that will have disastrous unintended consequences. I also expect that the owners of capital will gain even more power and wealth due to advances in AI, such that the resulting inequality will further polarize and destabilize the world."

Beneficial
Dean Willis, founder of Softarmor Systems, commented, "AI at Internet scale will provide for substantial advances in search, general information management and organization, public policy development and oversight, and health (analytical, monitoring and public health management). However, there is a massive dark side."

Harmful
Dean Willis, founder of Softarmor Systems, observed, "From a public policy and governance perspective, AI provides authoritarian governments with unprecedented power for detecting and suppressing non-conformant behavior. This is not limited to political and ideological behavior or position; it could quite possibly be used to enforce erroneous public health policies, environmental madness or, quite literally, any aspect of human belief and behavior. AI could be the best 'dictator kit' ever imagined. Author George Orwell was an optimist, as he envisioned only spotty monitoring by human observers. Rather, we will face continuous, eternal vigilance with visibility into every aspect of our lives. This is beyond terrifying. Authoritarian AI coupled with gamification has the potential to produce the most inhumane human behavior ever imagined."

Beneficial
Ben Shneiderman, widely respected human-computer interaction pioneer and author of "Human-Centered AI," said, "A human-centered approach to technology development is driven by deep understanding of human needs, which leads to design-thinking strategies that bring successful products and services. Human-centered user interface design guidelines, principles and theories will enable future designers to create astonishing applications that facilitate communication, improve well-being, promote business activities and much more.

"Building tools that give users superpowers is what brought users email, the web, search engines, digital cameras and mobile devices. Future superpowers could enable reduction of disinformation, greater security/privacy and improved social connectedness that supports potent forms of collaboration.

"This could be the Golden Age of Collaboration, with remarkable global projects such as developing a COVID-19 vaccine in 42 days. The future could be made brighter if similar efforts were devoted to fighting climate change, restoring the environment, reducing inequality and supporting the 17 UN Sustainable Development Goals.

"Equitable and universal access to technology could improve the lives of many, including those users with disabilities. The challenge will be to ensure human control while increasing the level of automation."

Harmful
Ben Shneiderman, widely respected human-computer interaction pioneer and author of "Human-Centered AI," warned, "Dangers from poorly designed social technologies increase confusion, which undermines the capacity of users to accomplish their goals, receive truthful information or enjoy entertainment and sports. More serious harms come from failures and bias in transactional services such as mortgage applications, hiring, parole requests or business operations. Unacceptable harms come from life-critical applications such as in medicine, transportation and military operations.

"Other threats come from malicious actors who use technology for destructive purposes, such as cybercriminals, terrorists, oppressive political leaders and hate-speech bullies. They will never be eliminated, but they can be countered to lessen their impact.

"There are dangers of unequal access to technology and designs that limit use by minorities, low-literacy users and users with disabilities. These perils could undermine economic development, leading to strains within societies, with damage to democratic institutions, which threatens human rights and individual dignity."

Beneficial
Russell Blackford, Editor-in-Chief of IEET Journal of Evolution and Technology, wrote, "International communication, networking and availability of information will continue to improve."

Harmful
Russell Blackford, Editor-in-Chief of IEET Journal of Evolution and Technology, said, "The surveillance society will become even more intense, hindering personal freedoms and privacy."

Beneficial
Jeremy Foote, a computational social scientist at Purdue University studying cooperation and collaboration in online communities, said, "There are a number of promising trends in our digital life. One is the potential of AI as an engine for creativity. While GPT and other LLMs have been met with open-mouthed awe from some and derision from others, I think it is likely that AI tools like ChatGPT will become important tools for 1) empowering creativity through novel ways of thinking, and 2) improving productivity in knowledge work by making some tedious aspects easier, such as reading and summarizing large amounts of text. By 2035 we will likely know the limits of these tools (which are likely many), but we will also have identified many more of their uses.

"A second promising change in digital technology has been increasing skepticism about the power of corporations as platforms. The early web grew based on 'protocols instead of platforms,' and there are indicators that protocols may be making a comeback. This is mostly good news, as decentralized platforms have less centralized power.

"Finally, in optimistic moods I think there is a chance that the excesses of misinformation, chaos and polarization will drive creativity in figuring out institutions (in the Northian sense) that can help us understand and connect with each other. There are no technologies or institutions now that I see as particularly promising, but these do not seem like completely impossible problems."

Harmful
Jeremy Foote, a computational social scientist at Purdue University studying cooperation and collaboration in online communities, commented, "The pessimistic version of 2035 looks pretty bad. The promise of AI also comes with perils. There is lots of potential for the creation of much more persuasive, tireless misinformation and propaganda machines, willing to converse and persuade 24/7. This could lead to a real distrust of basically anything that we see on the Internet.

"A second worrying trend is the ability of social media to polarize and radicalize some folks. Trends like decentralization may make it easier for radical groups to recruit while avoiding censors or moderators.

"Third, state actors have been surprisingly adept at using propaganda and other digital tools to control their citizens and to frame issues globally. Democracies may be at a disadvantage when it comes to this sort of informational warfare."

Beneficial
Rich Salz, principal engineer at Akamai Technologies, predicted, "We will see a proliferation of AI systems to help with medical diagnosis and research. This may cover a wide range of applications: expert systems to detect breast cancer and other X-ray/imaging analysis; protein folding and the discovery of new drugs; better analytics on drug and other testing; limited initial consultation for diagnosis at medical visits. Similar improvements will be seen in many other fields, for instance, astronomical data analysis tools. I hope the tech field gets more unionized."

Harmful
Rich Salz, principal engineer at Akamai Technologies, warned, "Mass facial-recognition systems will be among the digital tools more widely implemented in the future. There will be increased centralization of internet systems, leading to more extra-governmental data collection and further loss of privacy. In addition, we can expect that cell phone cracking will invade privacy, and all of this, plus more government surveillance, will be taking place, particularly in regions with tyrannical regimes. Most people will believe that AI's large language models are 'intelligent,' and they will, unfortunately, come to trust them. There will be a further fracturing of the global internet along national boundaries."

Beneficial
Lambert Schomaker, a professor at the Institute of Artificial Intelligence and Cognitive Engineering at the University of Groningen, Netherlands, wrote, "The total societal cost of inadequate IT, plus the human hellhounds who create office bottlenecks, must be astronomical. In current society, human administrative work tends to be concentrated in a set of key positions in companies and institutions: financial control, human-resource management, data and IT services, etc. The human personnel in these positions abuse their power; they do not assist but instead deflect any question without offering an actual solution. Office workflows could be streamlined, and documentation could be written in more user-friendly ways, tailored to the needs of the people being served. It seems as if, across society, these positions are usually held by individuals who, in their hearts, have no inclination to be service-oriented toward other humans. This is where AI comes in. Replacing pesky humans at friction points in society will lead to higher productivity and a higher level of happiness for most of us. The administrative and policy people will lose their jobs, but is that so terrible, even for themselves?"

Harmful
Lambert Schomaker, a professor at the Institute of Artificial Intelligence and Cognitive Engineering at the University of Groningen, Netherlands, commented, "Current developments around ChatGPT and DALL-E 2, although in their early stages now, will have had a deep impact on the way humans look at themselves. This can be seen in the emotional reactions of artists, writers and researchers in the humanities. Many capabilities considered purely human now appear to be statistical in nature. Writing smooth, conflict-avoiding pieces of text is, apparently, fairly mechanical. This is very threatening.

"At the moment such users try to topple the current algorithms instead of providing cooperative text prompts. This is good, because the AI community will be eager to prove that many of the current problems (e.g., in logical reasoning) are fairly easily solvable. Also, creative diversity in image and music generation by AI will have improved dramatically.

"However, the psychological effect of these developments may be dramatic. Why go to school when the machine can do it all? As a consequence, motivation to work at all may drop. The only silver lining here may be that physical activity (the 'maker world') will gain in importance. Given the current shortage of skilled workers in building, electrical engineering and agriculture, this may even be beneficial in some areas. However, the upheaval caused by the AI revolution may have an irreparable effect on the fabric of societies in all world cultures."

Beneficial
Jonathan Stray, senior scientist at the Berkeley Center for Human-Compatible AI, studying algorithms that select and rank content, predicted, "Among the developments we'll see go well:

Self-driving cars will reduce congestion, carbon emissions and road accidents.

Automated drug discovery will revolutionize the use of pharmaceuticals. This will be particularly beneficial where speed or diversity of development is crucial, as in cancer, rare diseases and antibiotic resistance.

We will start to see platforms for political news, debate and decision-making that are designed to bring out the best of us, through sophisticated combinations of human and automated moderation.

AI assistants will be able to write sophisticated, well-cited research briefs on any topic. Essentially, most people will have access to instant specialist literature reviews."

Harmful
Jonathan Stray, senior scientist at the Berkeley Center for Human-Compatible AI, studying algorithms that select and rank content, warned, "Key worries include human rights, human knowledge and economic inequality.

"In regard to human rights, some governments will use surveillance and content-moderation techniques for control, making it impossible to express dissenting opinions. This will mostly happen in authoritarian regimes; however, certain liberal democracies will also use this technology for narrower purposes, and speech regulations will shift depending on who wins elections.

"In regard to human knowledge, generative models for text, images and video will make it difficult to know what is true without specialist help. Essentially, we'll need an AI layer on top of the Internet that does a new kind of 'spam' filtering in order to stand any chance of receiving reliable information.

"In regard to economic inequality, although AI will create massive wealth for some people and companies, this will not be accompanied by large productivity gains in most cases. Most people will still feel economically precarious, and affording housing, medical care, etc., will be a challenge."

Beneficial
Kay Stanney, CEO and founder of Design Interactive, commented, "Human-centered development of digital tools can profoundly impact the way we work and learn. Specifically, by coupling digital phenotypes (i.e., real-time, moment-by-moment quantification of the individual-level human phenotype, in situ, using data from personal digital devices, in particular smartphones) with digital twins (i.e., digital representations of an intended or actual real-world physical product, system or process), it will be possible to optimize both human and system performance and well-being. Through this symbiosis, interactions between humans and systems can be adapted in real time to ensure the system gets what it needs (e.g., predictive maintenance) and the human gets what they need (e.g., guided stress-reducing mechanisms), thereby realizing truly transformational gains in the enterprise."

Harmful
Kay Stanney, CEO and founder of Design Interactive, wrote, "Human-centered development of digital tools and systems could be done in such a manner that there are accessibility limitations, thereby allowing some groups to benefit more than others. If this limits advancement, it is not an acceptable outcome."

Beneficial
Pedro U. Lima, professor of computer science at the Institute for Systems and Robotics at the University of Lisbon, said, "I expect technology to develop in such a way that physical machines (i.e., robots), not just virtual systems, will be developed to advantageously replace humans in dangerous, dull and dirty work. This will increase production, make work safer and create new challenges for humankind not thought of until then."

Harmful
Pedro U. Lima, professor of computer science at the Institute for Systems and Robotics at the University of Lisbon, noted, "What I fear most is not the technology itself but the wrong use of it. If we replace humans in hard work but do not create new jobs to face new challenges, do not provide mechanisms such as companies paying into social security whenever they replace a human with a robot, or do not ensure a universal basic income, societies may blow up."

Beneficial
Alexander Klimburg, senior fellow at the Institute of Advanced Studies, Austria, commented, "In the best possible circumstances, by 2035 we will have fully internalized that cybersecurity is a policy and political issue, not just a technical issue. That means we will have honest and productive public discussions about the various tradeoffs that need to be made: How much security is the responsibility of individuals? How much rests with companies? How much with government? And, most importantly, how do we ensure that our values are maintained and keep the Internet free, meaning under smart regulation, not new-age state-mandated cyber-despotism or slow suffocation by individual monopolies? The key to all of this is cracking the difficult question of governance of cyberspace. The decision points for this are now, in particular in 2024 and 2025."

Harmful
Alexander Klimburg, senior fellow at the Institute of Advanced Studies, Austria, predicted, "In the worst cases, by 2035 two nightmare scenarios could develop. First, an age of warring cyber blocs, where different internets are the battlefield for a ferocious fight between ideologically intractable foes: democracies against authoritarian regimes. In this scenario, a new forever war, not unlike the Global War on Terror but state-focused, keeps us mired in tit-for-tat attacks on critical infrastructure, undermines governments and destroys economies. A second nightmare is similar but in some ways worse: The authoritarian voices who want a state-controlled Internet win the global policy fight, leading to a world where either governments or a very few duopolies control the Internet and therefore our entire news consumption, censoring our output and automatically shaping our preferences and beliefs along the way. Either the lights go out in cyberwar, or they never go out in a type of Orwellian cyber dystopia that even democracies will not be fully safe from."

Beneficial
Charlie Kaufman, a system security architect with Dell Technologies, predicted, "In the area of human health and well-being, we will have the ability to carry on natural-language discussions of medical issues with an AI that is less expensive and less intimidating than a medical professional, especially when seeking guidance as to whether to seek professional help. We should be able to give it access to medical records and take pictures of visible anomalies. I also predict AI will be capable of providing companionship for people who don't do well interacting with real people or have situations making that difficult. AI engines will be able to predict what sorts of entertainment I'd like to enjoy, which articles I would like to read and what sort of videos I'd like to watch, and save me the time of seeking these out."

Harmful
Charlie Kaufman, a system security architect with Dell Technologies, said, "Digital systems will continue to be difficult to use well, and large fractions of humanity will be cut off from the benefits of technology because of lack of training and commercial rationing enforced with intellectual-property protection to maximize corporate profits.

"In regard to the future of human knowledge, I hope for the best and fear the worst. Technology of late has been used to spread misinformation. I would hope that we will figure out a way to minimize that while making all public knowledge available to anyone who wants to ask.

"In regard to human rights, I hope for the best but fear the worst for technology's impact on personal privacy. Technology to date has lessened it, and while it has great potential to improve things, I fear that trend will continue."

Beneficial
Frank Bajak, cybersecurity investigations chief at the Associated Press, wrote, "Many technologies have the potential to bring people and cultures together as never before and to bridge understanding and cultural and historic knowledge. Speech and image recognition are tops among them. Labor-saving devices, including AI and robotics, have tremendous potential for creating more leisure time and greater dedication to the satisfactions of the physical world. Technologies developed for countering climate change are likely to have multiple unanticipated benefits. Advancements in medicine benefiting from our improved understanding of genetics, such as targeted gene therapies, will improve human longevity. The potential for technology to make the world safer and more harmonious is great. But this will depend on how humans wield it and whether we can make wealth distribution more equitable and wars less common and damaging. Every technology can be leveraged for good or ill."

Harmful
Frank Bajak, cybersecurity investigations chief at the Associated Press, predicted, "The powerful technologies maturing over the next decade will be badly abused in much of the world unless the trend toward illiberal, autocratic rule is reversed. Surveillance technology has few guardrails now, though the Biden administration has shown some will to limit it. Yet far too many governments have no qualms about violating their citizens' rights with spyware and other intrusive technologies. Digital dossiers will be amassed widely by repressive regimes. Unless the United States suppresses the fascist tendencies of opportunist demagogues, the U.S. could become a major surveillance state. Much also depends on the European Union being able to maintain democracy and prosperity and contain xenophobia. We seem destined at present to see biometrics combined with databases, anchored in facial, iris and fingerprint collection, used to control human migration, prejudicing the Black and brown people of the Global South.

"I am also concerned about junk AI, bioweapons and killer robots. It will probably take at least a decade to sort hurtful from helpful AI. Fully autonomous offensive lethal weapons will be operative long before 2035, including drone swarms in the air and sea. It will be incumbent on us to forge treaties restricting the use of killer robots.

"Technology is not and never was the problem. Humans are. Technology will continue to imbue humans with God-like powers. I wish I had more faith in our better angels. AI will likely eventually make software, currently dismally flawed, much safer as security becomes central to ground-up design. This is apt to take more than a decade to shake out. I'd expect a few major computer outages in the meantime. We may also learn not to bake software into absolutely everything in our environment, as we currently seem to be doing. Maybe we'll mature out of our surveillance-doorbell stage."

Beneficial
Micah Altman, social and information scientist at the Center for Research in Equitable and Open Scholarship at MIT, said, "Whether digital or analog, there are five dimensions to individual well-being: longevity, health, access to resources, subjective well-being and agency over making meaningful life choices. Within the last decade, the increasing digitalization of human activities has contributed substantially to human flourishing, providing benefits in four of the five areas.

"Digital life is greatly expanding access to online education (especially through open online courses and, increasingly, online degree and certification programs); health information and health treatment (especially through telehealth in the area of behavioral wellness); the opportunity to work from remote locations (which is particularly beneficial for people with disabilities); and the ability to engage with government through online services, access to records and modes of online participation (e.g., online public hearings). Expansion in most of these areas is likely to continue over the next dozen years."

Harmful
Micah Altman, social and information scientist at the Center for Research in Equitable and Open Scholarship at MIT, wrote, "There is more reason to be concerned than excited, not because digital life offers more peril than promise, but because the results of progress are incremental, while the results of failure could be catastrophic. Thus it is essential to govern digital platforms, to integrate social values into their design and to establish mechanisms for transparency and accountability.

"The most menacing potential changes to life over the next couple of decades are the increasing concentration in the distribution of wealth, a related concentration of effective political power, and the ecological and societal disruptions likely to result from our collective failure to diligently mitigate climate change (further, the latter is related to the former).

"As a consequence, the most menacing potential changes to digital life are those that facilitate this concentration of power: the susceptibility of digital information and social platforms to be used for disinformation, for monopolization (often through the monetization and appropriation of information generated by individuals and their activities) and for surveillance. Unfortunately, the incentives for the creation of digital platforms, such as the monetization of individual attention, have created platforms on which it is easy to spread disinformation to 10 million people and monitor how they react, but hard to promote a meaningful discussion among even a hundred people."

Beneficial
Gary Grossman, senior vice president and global lead of the AI Center of Excellence at Edelman, observed, "There are a great number of potential benefits, ranging from improved access to education and better medical diagnosis and treatments to breaking down language barriers for enhanced global communications. However, there are technical, social and governmental barriers to these and others, so the path forward will at times be messy."

Harmful
Gary Grossman, senior vice president and global lead of the AI Center of Excellence at Edelman, said, "Perhaps because we can already feel tomorrow's dangers in activities playing out today, the downside seems quite dramatic. Deepfakes and disinformation are getting a boost from generative AI technologies and could become pervasive, greatly undermining what little public trust in institutions remains. Digital addiction, already an issue for many who play video games, watch TikTok or YouTube videos, or hang on every tweet, could become an even greater problem as these and other digital channels become even more personalized and appeal to base instincts for eyeballs."

Beneficial
Deanna Zandt, writer, artist and award-winning technologist, said, "I continue to be hopeful that new platforms and tech will find ways around the totalitarian capitalist systems we live in, allowing us to connect with each other on fundamentally (ironically enough) human levels. My own first love of the internet was finding out that I wasn't alone in how I felt or in the things I liked, and finding community in those things. Even though many of those protocols and platforms have been co-opted in service of profit-making, developers continue to find brilliant paths to opening up human connection in surprising ways.

鈥淚’m also hopeful the current trend of hypercapitalistic tech driving people back to more fundamental forms of internet communication will continue. Email as a protocol has been around for how long? And it’s still, as much as we complain about its limitations or overwhelm, a main way we connect. Look at the rise of Substack 鈥 some crazy high percentage of its users don’t know that it’s a platform with a website and features. They just get email from creators they love. Brilliant.鈥

Harmful
Deanna Zandt, writer, artist and award-winning technologist, wrote, "First, deepfakes and misinformation will continue to undermine our faith in public knowledge and our ability to make sound individual and collective decisions about how we live our lives. And second, while we continue to work on gender, racial, disability and other inclusive lenses in tech development, the continued lack of equity and representation in the tech community (especially when empowered by lots of rich, able-bodied white men) will continue to create harm for people living on the margins."

Beneficial
Ayden Férdeline, Landecker Democracy Fellow at Humanity in Action, commented, "The Internet today is largely centralized, with a few companies having a stranglehold over the control and distribution of information. As a result, data is vulnerable to single points of failure, and important records are susceptible to censorship, Internet shutdowns and link rot. By 2035, control over the Internet's core infrastructure will have become less concentrated. Decentralized technologies will have become more prevalent by 2035, making the Internet more durable and better equipped to preserve information that requires long-term storage and accessibility. It won't just be that we can reliably retrieve data like historical records; we will be able to verify their origins and that they have not been manipulated over time. Initiatives like the Coalition for Content Provenance and Authenticity are developing the mechanisms for verifying digital media that will become increasingly important in legal proceedings and journalism."

Harmful
Ayden Férdeline, Landecker Democracy Fellow at Humanity in Action, wrote, "There are organizations today which profit from being perceived as 'merchants of truth.' News organizations, for example, derive their authority and influence from being trusted by their audience as having integrity. Similarly, the judicial system is based on the idea that the truth can be established through an impartial and fair hearing of evidence and arguments. Historically, we have trusted those actors and their expertise in verifying information. As we transition to building trust into digital media files through techniques like authentication-at-source and blockchain ledgers that provide an audit trail of how a file has been altered over time, there may be attempts to use regulation to limit how we can cryptographically establish the authenticity and provenance of digital media. More online regulation is inevitable, given the importance of the Internet economically and socially and the likelihood that digital media will increasingly be used as evidence in legal proceedings. But will we get the regulation right? Will we regulate digital media in a way that builds trust, or will we create convoluted, expensive authentication techniques that increase the cost of justice, if they are adopted at all?"

Beneficial
Alan Inouye, director of the office for information technology policy at the American Library Association, said, "I am optimistic that the U.S. will achieve nearly ubiquitous access to advanced technology by 2035. Already, we have seen the rapid diffusion of such technology in the United States and worldwide. I was recently in Laos, and it struck me how many people had mobile phones, such as folks running food stands on the side of the road and tuk-tuk drivers. Accelerating this diffusion are the amplified awareness coming out of the COVID-19 pandemic and the multiple federal funding programs for broadband and digital inclusion. I see this momentum being carried through for years to come by governments at all levels, corporations and the nonprofit sector.

"That said, there is always differential access to advanced technology across the population. The well-to-do and those in the know will have access to more-advanced technology than less-privileged members of society, whether we're talking about typewriters or the latest smartphone. However, the difference by 2035 is that the level of technological capability will be so high that even those with access to only basic technology will still have a great deal of computing and communications power at their fingertips."

Harmful
Alan Inouye, director of the office for information technology policy at the American Library Association, commented, "Perhaps ironically, the most harmful aspects by 2035 will arise from that very ubiquitous access to advanced technology. As the technology-access playing field becomes somewhat more level, the distinguishing difference or competitive advantage will be knowledge and social capital.

“Thus, the edge with ubiquitous access to advanced technology goes to knowledge workers and those highly proficient with the online world, and those who are well connected in that world. A divide between these people and others will become more visible, and resentment will build among those who do not understand that their profound challenge is in the realm of lacking adequate knowledge and social capital.

“It will take considerable education of and advocacy with policymakers to address this divide. The lack of a device or internet access is an obvious deficiency and plain to see, and policy solutions are relatively clear. Inadequate digital literacy and inability to engage in economic opportunity online is a much more profound challenge, going well beyond one-time policy prescriptions such as training classes or online modules. This is the latest stage of our society’s education and workforce challenge generally, as we see an increasing bifurcation of high achievers and low achievers in the U.S. education and workforce system.”

Beneficial and Harmful
Sean McGregor, founder of the Responsible AI Collaborative, said, “By 2035, technology will have developed a window into many inequities of life, thereby empowering individuals to advocate for greater access to and authority over decision-making currently entrusted to people with inscrutable agendas and biases. The power of the individual will expand with communication, artistic and educational capacities not known throughout previous human history. However, if trends remain as they are now, people, organizations and governments interested in accumulating power and wealth over the broader public interest will apply these technologies toward increasingly repressive and extractive aims. It is vital that there be a concerted, coordinated and calm effort to globally empower humans in the governance of artificial intelligence systems. This is required to avoid the worst possibilities of complex socio-technical systems. At present, we are woefully unprepared and show no signs of beginning collaborative efforts of the scale required to sufficiently address the problem.”

Beneficial and Harmful
Cory Doctorow, activist journalist and author of “How to Destroy Surveillance Capitalism,” wrote, “I hope to see an increased understanding of the benefits of federation and decentralization; interoperability mandates, such as the Digital Markets Act, and a renewed emphasis on interoperability as a means of lowering switching costs and disciplining firms; a decoupling of decentralization from blockchain (which is nonsense); and an emphasis on subsidiarity in platform governance. Among the challenges are new compliance duties for intermediaries – new rules that increase surveillance and algorithmic filtering while creating barriers to entry for small players – and ‘link taxes’ and other pseudocopyrights that control who can take action to link to, quote and discuss the news.”

Beneficial
Richard Barke, associate professor of public policy at Georgia Institute of Technology, wrote, “It is dangerous to characterize any specific possibility as ‘likely’ given the pace of technological, social and political changes in the United States in the preceding years and decades. The trajectory of those changes suggests that the shift from real to digital life probably will not decelerate. The use of digital technologies for shopping, medical diagnosis and interpersonal relations will continue. The use of data analytics by businesses and governments also will continue to grow. And the number and severity of harmful consequences of these changes will also grow.”

Harmful
Richard Barke, associate professor of public policy at Georgia Institute of Technology, responded, “New technologies, market tools or social changes never come without some harmful consequences. Concerns about privacy and discrimination will increase, with the result that demands for transparency about business practices, targeting of subpopulations and government policies will grow at least as fast as digital life. Those demands are not likely to be answered in the absence of significant harmful or menacing events that catch the attention of the public, the media and eventually policymakers.

“The environmental movement needed a Rachel Carson and a Love Canal in the 1960s and 1970s as policy entrepreneurs and focusing events. The same is true for many other significant changes in business and government decision-making. Unfortunately, it is likely that by 2035 some highly visible abuse or scandal with clearly identifiable victims and culprits will be needed to provide an inflection point that puts an aggrieved public in the streets and on social media, in courtrooms and in legislative hallways, resulting in a new regime of law and regulations to constrain the worst excesses of the digital world. But, even then, is it likely – or even possible – that the speed of reforms will be able to keep up with the speed of technological and business innovations?”

Beneficial
Christopher Richter, a retired professor of communications from Hollins University, wrote, “More tech industry leaders will develop social consciences and work toward the greater good in terms of both tech development goals and revenue distribution. More people generally will develop informed, critical perspectives on digital/social media content and emotionally manipulative media processes.

“Technology: Artificial intelligence and robotics applications will lower the cost and improve the quality of routine elder care and healthcare generally. Digital technologies will be developed that make substantial contributions to reducing greenhouse emissions and ameliorating climate change. Digital technologies will continue to be developed that facilitate equitable education processes.”

Harmful
Christopher Richter, a retired professor of communications from Hollins University, said, “To the detriment of humanity and life on Earth generally, more tech industry leaders will come to believe that what is good for their company’s bottom line is good for society.

“More digital technologies will be developed to enhance the status quo, often under the guise of being revolutionary (e.g., the way current developments in autonomous vehicles just reinforce the values of one person-one vehicle, commuter lifestyles, highway systems as vital infrastructure, etc.). Digital tech will continue to be exploited to deepen social, political, informational and financial divides, both domestically and globally. Even well-meaning developments in AI, robotics and digital tech generally will have unintended negative consequences (e.g., the way current developments like ChatGPT are useful for plagiarists, or the way early utopian dreams of the internet as an ideally functioning public sphere missed a lot of the negative realities).”

Beneficial
Adam Nagy, a senior research coordinator at The Berkman Klein Center for Internet & Society at Harvard University, said, “Albeit far from guaranteed, there may be some beneficial changes to digital life by 2035. As indicated by recent legislation in the European Union, there will be a global expansion of obligations imposed on firms that control or process data and more robust penalties and enforcement for rule-breaking. Hopefully, improved regulations will foster more responsible corporate behavior (or at least clamp down on the worst excesses of the digital economy).

“Cooperative ownership structures for digital products and services are only just beginning to take off. The availability of alternatives to traditional corporate governance will provide consumers with more choices and control over how their data is used. And, by 2035, decentralized identity systems will already be much further along in their development and adoption. While far from foolproof, these systems will improve user privacy and security and also make signing up for new products and services more accessible.”

Harmful
Adam Nagy, a senior research coordinator at The Berkman Klein Center for Internet & Society at Harvard University, said, “People are increasingly alienated from their peers, struggling to form friendships and romantic relationships, removed from civic life and polarized across ideological lines. These trends impact our experiences online in negative ways, but they are also, to some extent, an outcome of the way digital life affects our moods, viewpoints and behaviors. The continuation of this vicious cycle spells disaster for the well-being of younger generations and the overall health of society.”

Beneficial and Harmful
Alan D. Mutter, consultant and former Silicon Valley CEO, said, “The magic of technology enables me to Google lentil soup recipes, trade stocks in the park, stream Bollywood music and Zoom with friends in Germany. Without question, tech has solved the eternally vexing P2P problem – the rapid, friction-free delivery of hot-ish pizza to pepperoni-craving persons. Techno thingies like software calibration and hardware calibration networks will get faster and somewhat better (albeit more complex) but probably not cheaper. Here’s what I mean: For no additional charge, the latest Apple Watches will call 911 if they think you fell. It’s a good idea and the feature actually has saved some lives. But it also is producing an overwhelming number of false alarms. So, it is a good thing that sometimes is a bad thing.

  • AI probably will do a better job of reading routine scans than radiologists and might do a better job than human air traffic controllers who sometimes vector two planes to the same runway.
  • AI undoubtedly will answer all phones everywhere, cutting costs but also further compromising the quality of customer service at medical offices, insurance companies, tech-support lines and all the rest.
  • AI will produce all forms of media content but likely without the elan and judgment formerly contributed by humans.
  • AI probably will be more accurate than humans at doing math but less savvy at sorting fact from fiction and nuance from nuisance.

“Technology has upended forever the ways we get and give information. We now live in a Tower of Babel where yadda-yadda moves unchecked, unmoderated and unhinged at the speed of light, polluting and corrupting the public discourse. This is perilous for a democracy like the United States. I am afraid for our republic.”

Beneficial
Edson Prestes, professor of informatics at Federal University of Rio Grande do Sul, Brazil, responded, “I believe digital technologies and their use will help us to understand ourselves and what kind of world we want to live in. This awareness is essential for creating a better and fairer world. All problems created by digital technologies come from a lack of self-, community- and planet-awareness. The sooner we understand this point, the faster we will understand that we live in an interconnected world and, consequently, the faster we will act correctly. Thus, I tend to be optimistic that we will live in a better society than we do today. The poor and vulnerable will have the opportunity to have a good quality of life and standard of living on a healthy planet, where those with differences and a diversity of opinions, religions and identities will coexist peacefully.”

Harmful
Edson Prestes, professor of informatics at Federal University of Rio Grande do Sul, Brazil, said, “Having a just and fair world is not an easy task. Digital technologies have the power to objectify human beings and human relationships, with severe consequences for society as a whole. The lack of guardrails, or the slow pace of their implementation, can lead to a dystopian society. In this sense, the metaverse and similar universes pose a serious threat, with huge potential to amplify existing problems in the real world. We barely understand the impact of current digital technologies on our lives. The most prominent is the impact on privacy. When we shift the use of digital technology from a tool to a universe we can live in, new threats will be unlocked. Although digital universes exist only in the digital domain, they have a direct effect on the real world. Maybe some people will prefer to live only in the digital universe and die in the real world.”

Beneficial
David Bernstein, a retired market-research and new-product-development professional, said, “One of the most beneficial developments will be the ability for physicians and mental health professionals to reach even more individuals. As we can already transmit and share EKG information from home or away, I look forward to being able to have slightly more invasive medical processes such as blood analysis more easily available from a distance. Access to more advanced learning for adults not close to traditional education centers will become easier. The rapid changes in what is required to be a productive workforce member will likely necessitate more regular periods of needing to upgrade skills. Society cannot afford to sideline large groups of workers because their skills are not the latest and greatest.”

Harmful
David Bernstein, a retired market-research and new-product-development professional, said, “Perhaps the most harmful development I see is the further class-based division in our social, economic and political lives. We have already seen how having the financial means to access specialized services, such as online higher education, financial market information and local government, has divided many communities. Indeed, what is an advantage to the middle and upper classes is a disadvantage for lower-class groups. Worry that one’s position may become redundant due to computerization and automation will continue for many. Though, I believe the manual laborer’s job may be more secure than the average programmer’s.”

Beneficial
Carolyn Heinrich, professor of public policy and education at Vanderbilt University, wrote, “The best and most beneficial changes in digital life are those that will increase individuals’ access to information that expands their health, education and economic opportunities. The expansion of digital access to areas where it has been limited by poor infrastructure, including rural areas in developed and developing countries, could go the farthest toward driving these beneficial changes. Access to information and opportunities to use it to improve human well-being can also fuel political and social demands for improvement in government and institutions and human rights. The digital expansion will need to be accompanied by human interactions, as was done with France Services, to ensure that individuals who are limited in various capacities are not left out.”

Harmful
Carolyn Heinrich, professor of public policy and education at Vanderbilt University, commented, “The most harmful aspects of digital tools and systems are those that are used to spread misinformation and to manipulate people in ways that are harmful to society. Digital tools are used to scam people out of money, steal identities and to bully, blackmail and defame people, and so the expansion of digital tools and systems to areas where they are currently less present will also put more people at risk of these negative aspects of their use. The spread of misinformation promotes distrust in all sources of knowledge, to the detriment of the progress of human knowledge, including reputable research. Children are especially vulnerable to the misuse of digital tools and information, and there is serious concern about the negative impacts this has had on their mental health.”

Beneficial (Did not respond to Harms question)
David J. Krieger, director of the Institute for Communication and Leadership, Switzerland, commented, “In regard to human connections, governance and institutions, in a best-case scenario the widespread adoption of AI will encourage the development of effective fact-checking for digital tools and establish standards of truth-telling and evidence in decision-making that will influence all aspects of society and human relations. Many plus-side options may emerge from that.

“In media: The development of personalized products and services will tend to eliminate spam and, with it, the economy of attention. In its place will appear an economy of participation. The disappearance of the economy of attention will disrupt the media system. Mass media will be integrated into decentralized and participatory information services.

“In society overall: The climate catastrophe will fully arrive, as all experts predict. The result will be 1) the collapse of the nation-states (which are responsible for the catastrophe). 2) From the ashes of the nation-states, in the best-case scenario, global governance will arise. 3) In order to control the environment, geo-engineering will become widespread and mandatory. 4) In the place of functionally differentiated society based on nation-states, there will arise a global network society based on network governance frameworks, established by self-organizing global networks cutting across all functions (business, law, education, healthcare, science, etc.) and certified and audited by global governance institutions.

“New values and norms for social interaction that are appropriate for a global network society and a new understanding of human existence as relational and associational will replace the values and ideologies of modern Western industrial society.

“In the opposite future setting, the nation-states will successfully block the establishment of effective global governance institutions. The climate catastrophe will leave some nation-states or regions as winners and others as losers, increasing wars, migration, inequality, etc. There will be no effective fact-checking for information processing and uses of AI, which will lead to a loss of trust and greater polarization throughout society.”

Beneficial
Erhardt Graeff, a researcher at Olin College of Engineering who is an expert in the design and use of technology for civic and political engagement, wrote, “I’m hopeful that digital technology will continue to increase the quality of government service provision, making it easier, faster and more transparent for citizens and residents engaging with their municipalities and states.

  • I’m hopeful that it will increase the transparency and therefore accountability of our government institutions by making government data more accessible and usable.
  • I’m hopeful that criminal legal system data, in particular, will be made available to community members and advocates to scrutinize the activity of police, prosecutors and courts.
  • I’m hopeful that the laws, policies and procurement changes necessary to ensure responsible and citizen-centered applications of digital technology and data will be put in place as citizens and officials become more comfortable acknowledging the role digital technology plays and the expectations we should have of the interfaces and data provided by government agencies.鈥

Harmful
Erhardt Graeff, a researcher at Olin College of Engineering who is an expert in the design and use of technology for civic and political engagement, said, “I worry that humanity will largely accept the hyper-individualism and social and moral distance made possible by digital technology and assume that this is how society should function. I worry that our social and political divisions will grow wider if we continue to invest ourselves personally and institutionally in the false efficiencies and false democracies of Twitter-like social media.”

Beneficial
Jim Fenton, a longtime leader in the Internet Engineering Task Force who has worked over the past 35 years at Altmode Networks, Neustar and Cisco Systems, commented, “By 2035, I expect that social norms and technological norms will be closer in alignment. We have undergone such a rapid evolution in areas like social networking, online identity, privacy and online commerce (particularly as applied to cryptocurrency) that our society doesn’t really know what to think about the new technology. At the same time, I don’t expect innovation to slow in the next 12 years. We will undoubtedly have different issues where society and technology fall out of alignment, but resolving these fundamental issues will, in my opinion, provide a basis for tackling new areas that arise.”

Harmful
Jim Fenton, a longtime leader in the Internet Engineering Task Force who has worked over the past 35 years at Altmode Networks, Neustar and Cisco Systems, said, “I am particularly concerned about the increasing surveillance associated with digital content and tools. Unfortunately, there seems to be a counter-incentive for governments to legislate for privacy, since they are often either the ones doing the surveilling, or they consume the information collected by others. As the public realizes more and more about the ways they are watched, it is likely to affect their behavior and mental state.”

Beneficial
Artur Serra, deputy director of the i2cat Foundation and research director of Citilab in Catalonia, Spain, said, “In 2035 there is the possibility of designing and building the first universal innovation ecosystem based on the Internet and digital technologies. As universal access to the Internet progresses, by 2035 the majority of the African continent will already be online. Then the big question will be: now what? What will be the purpose of having all of humankind connected to the network? We will understand that the Internet is more than an information and communication network. It is a research and innovation network that can allow, for the first time, the building of such universal innovation ecosystems in each country and globally – empowering everyone to innovate.”

Harmful
Artur Serra, deputy director of the i2cat Foundation and research director of Citilab in Catalonia, Spain, wrote, “In 2035, the same great opportunity of designing and building such universal innovation ecosystems upon the Internet can paradoxically become the most menacing threat to humanity. Transforming our countries into real labs can also become the most harmful change in our societies. It can end in the appropriation of the innovation capabilities of billions of people by a small group of corporations or public bureaucracies, resulting in a truly dark era for the whole of humanity.”

Beneficial
Carol Chetkovich, professor emeritus of public policy at Mills College, said, “Technology can contribute to all aspects of human experience through increased speed, data storage capacity, reach and processing sophistication. For example, with respect to human rights, technology can increase connectivity among citizens (and across societies) that enables them to learn about their rights and to organize to advocate more effectively. Similarly, health monitoring should become easier as technology enables us to measure more bodily activities/functions and assess changes in real time.”

Harmful
Carol Chetkovich, professor emeritus of public policy at Mills College, wrote, “I am skeptical that technological development will be sufficiently human-centered, and therein lies the downside of tech change. In particular, we have vast inequalities in our society today, and it's easy to see how existing gaps in access to technology and control over it could be aggravated as the tools become more sophisticated and more expensive to buy and use. The development of the robotic industry may be a boon to its owners, but not necessarily to those who lose their jobs as a result. The only way to ensure that technological advancement does not disadvantage some is by thinking through its implications and designing not just technologies but all social systems to be able to account for the changes. So, if a large number of people will not be employed as a result of robotics, we need to be thinking of how to support and educate those who are displaced before it happens. Parallel arguments could be made about human rights, health and well-being, and so on.”

Beneficial
Andrew Czernek, former vice president of technology at a major technology company, predicted, “Computing will be ubiquitous, and digital devices designed to meet human needs will proliferate. Everything from electrical outlets to aircraft will have useful, new and innovative functions. The spread of 5G networks and the movement toward 6G will help accelerate this trend. In professional and personal settings, we’ll see more-intelligent software. It will be able to do simulations and use database information to improve its utility. Virtual reality is already becoming popular in gaming applications, and its extension to education applications offers incredible utility. No longer will we have to rely on wooden models for astronomy or biology teaching but on visualization of planets or molecules through software. Digital technology is at the beginning of revolutionizing gene editing and its uses in disease control. New techniques will allow better gene modeling and editing, and this will accelerate in the next decade.”

Harmful
Andrew Czernek, former vice president of technology at a major technology company, observed, “The prevalence of digital devices gives dozens of entry points for hackers and digital theft. The situation will become so bad in the next five years that companies will be forced to set up white-hat hacking operations as a defense. Lack of political will to create a unique digital ID will result in increasing rates of theft. Will the education system be capable of producing teachers who can teach with technology? If not, wealth gaps will continue to widen. One or two companies have a great opportunity to create teaching tools, but it will take wise management to create a profitable enterprise.”

Harmful (Did not respond to Benefits question)
Gina Neff, professor and director of the Minderoo Centre for Technology and Democracy at the University of Cambridge, predicted, “By 2035 we will see large-scale systems, with little room for opting out, that lack the ability for people to rectify mistakes or hold systems and power accountable. The digital systems we now enjoy have been based up to now on an assumption of democratic control and governance. Challenges to democracy in democratic countries, and the increasing use of AI systems for control by authoritarian governments in other countries, will mean that our digital systems will come with a high cost to freedom, privacy and rights. Technologies will appear accurate but have hidden flaws and biases, making it difficult to challenge predictions or results. Guilty until proven otherwise – and it will take a lot to prove otherwise – will be the modus operandi of digital systems in 2035.”

Beneficial
Amali De Silva-Mitchell, founder and coordinator of the UN Internet Governance Forum Dynamic Coalition on Data-Driven Health Technologies, commented, “Development in the e-health/medical internet of things (MIoT) space is growing. This is good news for supporting and scaling up the mandate of UN Sustainable Development Goal #3, Health and Well-Being for All, given the rapidly increasing global population and the resulting pressure that is created on traditional medical services.

“Success will be dependent on quality internet connectivity for all, as well as on availability of devices and user skills or on the support of good IT Samaritans. Funding new innovation is critical. Accessibility for disabled persons can be significantly bettered through AI and other technologies being developed for use by those who are blind or who are hard of hearing, for example, so as to enable them to access e-health and other services and activities. Robotics, virtual and augmented reality will develop to enhance the human-computer interaction space and hopefully support the e-health space as well.

“As more individuals in the overall global population increase their IT knowledge and skills as users and developers, they will demand more results of the science, as they see more options for innovation. Ethics must be core to any development and user support activity. The potential to train and provide access to knowledge and ethics training to users and developers becomes increasingly easier with online education and support, to create resilient, ethical, accessible, quality ICT (information and communications technology) systems.”

Harmful
Amali De Silva-Mitchell, founder and coordinator of the UN Internet Governance Forum Dynamic Coalition on Data-Driven Health Technologies, said, “There is growing attrition in the quality of universal value systems for the public good. Quick wins have led to a successive decline in the quality of systems through oversimplification, biased profiling, and lack of care in data capture, storage, updates and outputs, security, patching, etc.

“A lack of quality collaboration and leadership amongst stakeholders can lead to expensive failures and loss in investments for similar innovations, and a general malaise can set in, stalling growth and productivity in the ICT (information and communications technology) sector. When intellectual property is not credited appropriately, innovation can also stall.

“Misinformation must be dealt with. Ownership of failures without finger-pointing is important if betterment is to take place. Trust will be eroded if positive, quality social outcomes are not the center of the development and delivery of ICTs.

“Internet fragmentation at a time of geopolitical instability is preventing the development of the rich portfolio of ICT solutions the human world could produce together. The uncertain future now on our global horizon due to this and to climate change requires people to work toward ICT togetherness. E-health, in particular, requires trust in multistakeholder support to enable global health and well-being, especially given the likely impacts of climate change on human health.”

Beneficial
Bryan Alexander, futurist, speaker and consultant, wrote, “The most beneficial changes are those empowering human creativity. We have already seen a generation of digitally enabled creativity, from increasingly accessible tools to new ways of sharing works. Indeed, it's a historical tendency that no sooner do humans invent new technologies than we try to make art and tell stories with them. So, looking ahead, we should expect new tech and new creativity.

“For example, people will use AI to generate computer games of ever-increasing sophistication. 3D printing is likely to become ever more capable and easier to use, spawning new art forms and enabling more people to make more stuff. New materials will be printable, from building parts to biological tissue. To whatever extent people use the metaverse, there will be virtual art and stories in that space. Across all of these domains we should see old forms of creativity reused and new ideas take shape. Digital art philosophies and schools are likely to surface and compete.”

Harmful
Bryan Alexander, futurist, speaker and consultant, responded, “I fear the most dangerous use of digital technologies will be various forms of antidemocratic restrictions on humanity. We have already seen this, from governments using digital surveillance to control and manipulate residents to groups using hacking to harm individuals and other groups. Looking ahead, we can easily imagine malign actors using AI to create profiles of targets, drones for terror and killing, 3D printing weapons and bioprinting diseases.

“The creation of augmented and virtual reality spaces means some will abuse other people therein, if history is any guide (see ‘A Rape in Cyberspace’ or, more recently, ‘Gamergate’). All of these potentials for human harm can then feed into restrictions on human behavior, either as means of intimidation or as justifications for authoritarianism (e.g., we must impose controls in order to fend off bioprinted disease vectors). AI can supercharge governmental and private power.”

Beneficial (Did not respond to Harms question)
David Bray, distinguished fellow with the non-partisan Stimson Center and the Atlantic Council, wrote, "It's possible to see us getting 'left of boom' of future pandemics, natural catastrophes, human-caused catastrophes, famines, environmental erosion and climate change by using new digital technologies on the ground and in space. We often see the signs of a new outbreak before we see people getting sick. We can create an immune system for the planet – a network of tools that search for signs of new infections, directly detect and analyze new pathogens when they first appear, and identify, develop and deploy effective therapies.

"This immune system could rely on existing tools, such as monitoring demand and prices for medicinal therapies, analyzing satellite images of traffic patterns and stepping up our efforts to monitor for pathogens in wastewater. It could use new tools that search for novel pathogens in the air, water or soil, sequence their DNA or RNA, then use high-performance computers to analyze the molecules and search through an index of known therapies that might be able to neutralize the pathogen. Biosensors that can detect pathogens could be embedded in animals and plants living in the tropical regions rich in biodiversity where new infectious diseases often originate. Transmissions from these sensors could link to a supercomputing network that characterizes new pathogens. Of course, such a dramatic scaling up of monitoring and therapeutics could raise concerns about privacy and personal choice, so we will need to take steps to ensure this planetary immune system doesn't create a surveillance state.

"We can also use AI to develop indicators, warnings and plans to spot vulnerabilities in global food production and help make the agriculture system more resilient and sustainable.

"In an era in which precision medicine is possible, so too will be precision bioattacks, tailored and delivered at a distance. This will become a national security issue if we don't figure out how to better use technology to do the work of deliberative governance at the speed necessary to keep up with threats associated with pandemics. Exponentially reducing the time it takes to mitigate a biothreat agent will save lives, property and national economies. To do this, we need to:

  • Automate detection by embedding electronic sensors and developing algorithms that take humans out of the loop in characterizing a biothreat agent
  • Universalize treatment methods by employing automated methods to massively select bacteriophages against bacteria or antibody-producing E. coli against viruses
  • Accelerate mass remediation delivered either via rain or the drinking water supply, using chemicals to time-limit the therapy

"Challenges of misinformation and disinformation are polarizing societies, sowing distrust and outpacing truthful beliefs and facts. Dis- and misinformation will be on the rise by 2035, but they have been around ever since humans first emerged on Earth. One of the biggest challenges now is that people do not follow complicated narratives – they don't go viral, and science is often complicated. We will need to find ways to win people over, despite the preference of algorithms and people for simple, one-sided narratives.

"We need more people-centered approaches to remedy the challenges of our day. Across communities and nations, we need to internally acknowledge the troubling events of history and of human nature, and then strive externally to be benevolent, bold and brave in finding ways, wherever we can at the local level and across organizations, sectors and communities, to build bridges. The reason is simple: we and future generations deserve such a world."

Beneficial
Aaron Chia-Yuan Hung, associate professor of educational technology at Adelphi University, said, "AI rightfully gets a bad rap these days, but it is often used for good, especially in helping us see how we can overcome complex problems – wicked problems such as climate change. For individuals, it can be difficult to see their carbon footprint and the environmental impact of their choices. AI can help unpack those decisions and make them easier to understand.

"In the future, AI will work for you to condense the information in large documents that most people don't bother reading (like Terms of Service) into a simpler document and flag potentially problematic clauses an individual would want to pay close attention to.

"You will also be able to enter your diet and the medications you take into an app and have AI keep you aware of potential side effects and continuously update that information based on the latest scientific knowledge, sending you notifications about what things to add, reduce or subtract. This ease of access to complex analysis could really benefit human health if properly implemented, with proper respect for privacy.

"Like AI, robots often conjure up dystopian nightmares of rogue machines causing havoc, but robots are being designed for home use and will soon be implemented in that setting to do many helpful things, such as lifting heavy objects and handling household tasks. Robot vacuums and dishwashers have been around for many years. More people will own useful robots."

Harmful
Aaron Chia-Yuan Hung, associate professor of educational technology at Adelphi University, said, "I am concerned about the fragmentation of society. More people than ever before are being exposed to confirmation bias because algorithms feed us what we like to see, not what we should see. Because so much of media (including news, popular culture, social media, etc.) is about getting our attention, and because we are drawn to things that fit our worldview, we are constantly fed things programmed to drive us to think in particular ways. We are not encouraged to think beyond those parameters. This is an intentional design and, while it's easy to point to social media as the issue, it is not the sole perpetrator of this problem.

"Because the economy is based so much on attention, it is hard to get tech companies to design products that nudge us out of our worldview, let alone encourage us to have civil discourse based on factual evidence about complex issues. Humans are more isolated today and often too insulated. They don't learn how to have proper conversations with people they disagree with. They are often not open to new ideas. This isolation, coupled with confirmation bias, is fragmenting society. It could possibly be reduced or alleviated by the correct redesign and updating of digital technology."

Beneficial
Fernando Barrio, lecturer in business and law at Queen Mary University of London, wrote, "Taking into account today's trends in technology development – which can be linked to the greatest wealth concentration ever seen – it is difficult to imagine positive changes by 2035 (the disregard for the environmental impact of these technologies and the glorification of triviality notwithstanding). I will say, however, that over the next 12 years the possibility for radical change does exist.

"If we analyze the trends and focus on potential benefits, seeking the best changes we might find in an otherwise bleak scenario, digital technologies – specifically AI – will work wonders in healthcare and new drug development. AI, which in its current form is more accurately described as self-learning algorithmic systems or artificial narrow intelligence, is currently used to theoretically test libraries of drugs against specific diseases. Deep-learning technologies have the potential to isolate the impact that different components have on specific areas of a particular disease and to then recombine them to create new drugs.

"Through state-sponsored initiatives, philanthropic activity or, less likely, a reconversion of corporate objectives, it is possible that by 2035 technology can be used to upgrade society in many realms. In the field of health it can find treatments for many of the most serious diseases harming humanity today and get those treatments out globally, beyond the tiny proportion of the world's population situated in affluent countries.

"Another technological development likely to revolutionize healthcare, at least in the Global North, is the use of AI-based robots for elder care. With an aging population, the use of robots to care for and cater to older generations seems inevitable, and there are already plenty of examples in countries like Japan that are likely to be globalized by 2035."

Harmful
Fernando Barrio, lecturer in business and law at Queen Mary University of London, commented, "To this point in their development, people's uses of the new digital technologies are primarily responsible for today's extreme concentration of wealth, the overt glorification of the trivial and superficial, an exacerbation of extremes and political polarization, and a relativization of human rights violations that may surpass most such influences of the past.

"Blind technosolutionism and a concerted push to keep technology unregulated – under the false pretense that regulation would hinder its development, and that its growth is paramount to human development and happiness – led us to the present. Anyone who believes the fallacy that unbridled technological development was the only thing that kept the planet functioning during the global pandemic fails to realize that those technologies could well have evolved even better in a different, more human-centered regulatory and ethical environment, very likely with more stability.

"There needs to be a substantial change in the way that society regulates technology, or the overall result will not be positive.

"There is a move in intellectual and academic circles to justify the dehumanization of social interactions and to brand as technophobes anyone who sees it as a negative that people spend most of their time today looking at digital devices. The claim is that those who spend hours physically isolated are actually more connected than others, and that spending hours watching trivial media is a new form of literacy. The advocates of that form of technologically driven social isolation and trivialization will have to explain why – in the age of greatest access to information in history – we see a constant decline in knowledge and in the capacity to analyze information, not to mention the current pandemic of mental health issues among the younger generations.

"By 2035, unless there is a radical change in the way people, especially the young, interact with technology, the current situation will worsen substantially.

"Uses of digital technology have led to an outbreak of political polarization and the constant creation of unbridgeable ideological divides, leading to highly damaging, socially self-harming situations like Brexit in the UK and the shocking January 6, 2021, invasion of the U.S. Capitol. Technology does not create these situations, but its use is providing fertile ground for mischief, creating isolated people and affording them the tools to replicate and spread polarized and polarizing messages. The trivialization of almost everything via social media, along with this polarization and the spread of misinformation, is leading to an unfortunate decay in human rights."

Beneficial (Did not respond to Harms question)
Dan Lynch, internet pioneer and inventor of CyberCash, wrote, "I'm concerned about the huge reliance on digital systems while the amount of illegal activity is growing daily. One really can't trust everything. Sure, buying stuff from Amazon is easy, and it really doesn't matter if a few things are dropped or missing. I suggest you stay away from the money apps! Their underlying math is shaky. I know. I invented CyberCash in the mid-1990s."

Beneficial and Harmful
Allison Wylde, senior lecturer at Glasgow Caledonian University and team leader on the UN Global Digital Compact Team with the Internet Safety, Security and Standards Coalition, said, "To help us try to look forward and understand possible futures, two prominent approaches are suggested: examining possible-future scenarios and learning from published works of non-fiction and fiction. I'd like to merge these approaches here.

"Royal Dutch Shell has arguably led on the scenario approach since the 1960s. For scenario development, as a starting point, an important consideration concerns the framing of any question. Next, the question opens out by asking 'what if?' to help consider possible futures that may be marginal possibilities.

"From published literature, fiction and non-fiction, a recent research project examining robots in the workplace concluded that society may experience gains and/or losses. From classical literature, as William Shakespeare suggested, perhaps cautioned, consequences are rooted in past actions: 'What's past is prologue.' What can we take from this?

"If we look back to the time of the invention of the World Wide Web by Tim Berners-Lee, we see the internet started out as a space of openness and freedom. During the Arab Spring, citizens created live-streamed material that acted both as a real-time warning of threats from military forces and as a record of events. Citizens from other countries assisted. Outside help is also being offered via online assistance today in the conflict between Russia and Ukraine. Is this one possible future: open and sharing?

"Alternative futures, for instance those predicted by H.G. Wells at the end of the 19th century, suggest that we are being watched by intelligences greater than ours, 'with intellects vast and cool and unsympathetic,' while we humans are 'so vain and blinded by vanity that we couldn't comprehend that intelligent life could have developed' so far, or indeed at all. Right now, we can see around us the open-source community developing AI-enhanced tools designed to help us; Dall-E, ChatGPT and Hugging Face are examples of such work. At the same time, malicious actors are turning these tools against us.

"Currently AI is viewed as a binary: good or bad. So, are we facing a binary problem, with two possible avenues? Or are our futures with AI – and indeed the rest of our lives – more complex, with multiple and interlinked possibilities? In addition, judging from literature (in particular, fiction), is the future constantly shifting – appearing and disappearing?

"At this point in time, the United Nations is shaping the language for a Global Digital Compact (GDC) that calls for a trusted, free, open and secure internet, with trust and trust-building as a central and underpinning foundation. Although the UN calls for trust and trust-building, it is silent on the mechanisms for achieving them. The futures discussed here are but possibilities. The preliminary insights of those working toward a widely accepted GDC share common threads: the importance of considering beyond good and bad, recognising the past and the present, and being alert – and thus well-prepared and well-resourced – to participate in and anticipate possible multiple futures.

"Arguably, just what and who will be in our futures may be more complex than we can imagine. Kazuo Ishiguro, in the novel 'Klara and the Sun,' paints yet another picture: a humanoid robot pining for the attention of a human and seeking comfort in the 'hum of a fridge.' This image may chime with the views of a Google staffer fired in 2022 for suggesting that AI chatbots may already be sentient. While such systems may be like children 'who want to help the world,' their creators need to take responsibility, as illustrated by the drive toward the use of explainable AI (XAI). (As a final note, Mary Shelley was not invoked.)"

Beneficial
Kat Schrier, associate professor and founding director of the Games & Emerging Media program at Marist College, wrote, "I believe one of the best benefits of future technology is that it will reveal more of the messiness of humanity. We can't solve a problem unless we can identify it and name it as such. Through the advent of digital technologies, we have started to acknowledge issues with everything from harassment and hate to governance and privacy.

"These issues have always been there, but they are highlighted through connections in gaming, social media and other virtual spaces. My great hope is that digital technology will help to solve complex human and social problems like climate change, racial inequities and war.

"We are already starting to see humans working alongside computers to solve scientific problems in games like Foldit or EteRNA. Complex, wicked problems are so challenging to solve. Could we share perspectives, interpret data and play with each other in ways that help illuminate and apply solutions to wicked problems?"

Harmful
Kat Schrier, associate professor and founding director of the Games & Emerging Media program at Marist College, commented, "There are a number of large issues; these are just a few:

  1. Systemic inequities are transmogrified by digital technologies (though these problems have always existed, we may be further harming others through the advent of these systems). For instance, problems might include biased representation of racial, gender, ethnic and sexual identities in games or other media. It also might include how a game or virtual community is designed and the cultural tone that is established. Who is included or excluded, by design?
  2. Other ethical considerations, such as privacy of data or how interactions will be used, stored and sold.
  3. Governance issues, such as how people report and gain justice for harms, how we prevent problems and encourage prosocial behavior, or how we moderate a virtual system ethically. The law has not evolved to fully adjudicate these types of interactions, which may also be happening across national boundaries.
  4. Social and emotional issues, such as how people are allowed to connect or disconnect, how they are allowed to express emotions, or how they are able to express their identities through virtual/digital communities."

Beneficial and Harmful
Karl M. van Meter, author of "Computational Social Science in the Era of Big Data," commented, "At this period in the development of digital technology I am both excited and concerned. That attitude will probably evolve with time and future developments. My major concerns are with governance and the environment.

"Given hominine ingenuity, proven over millions of years, and the current economic pressure for new developments – including in technology – the fundamental question is 'How will our societies and their economies manage future technological developments?' Will the economic and profit pressure to obtain more and more personal data with new technology continue to generate major abuses and override individuals' wishes for privacy? This is a question of governance, not of technology and new technological developments. It is up to humanity.

"In my own scientific research, the vast availability of information and contacts with others has been a major advantage and has resulted in great progress, but the same technologies have served to give a voice and assistance to those creating serious obstacles to such progress, increasingly bringing ideological extremism into daily life in both developed and less-developed countries."

Beneficial
Laurie L. Putnam, educator and communications consultant, wrote, "There is great potential for digital technologies to improve health and medical care. The trendlines are clear: Our population is growing older, caregivers are becoming harder to find, and medical specialists are often located some distance from their patients (even a short distance can be too far for a senior without support). Out of necessity, digital healthcare will become a norm.

"Remote house calls, which became more common during the COVID pandemic, will serve more patients more frequently. Remote diagnostics and monitoring will be especially valuable for aging and rural populations that find it difficult to travel. Connected technologies will make it easier for specialized medical personnel to work together from across the country and around the world. Medical researchers will benefit from advances in digital data, tools and connections, collaborating in ways never before possible. We have already made great strides in remote research, diagnostics and treatment. Demographic trends are clearly telling us to do more of this."

Harmful
Laurie L. Putnam, educator and communications consultant, said, "Many digital technologies are taking more than they give, and what we are giving up is difficult, if not impossible, to get back. Today's digital spaces, populated by the personal data of people in the real world, are lightly regulated and freely exploited. Technologies like generative AI and cryptocurrency are costing us more in raw energy than they are returning in human benefit. Our digital lives are generating profit and power for people at the top of the pyramid without careful consideration of the shadows they cast below, shadows that could darken our collective future.

"If we want to see different outcomes in the coming years, we will need to rethink our ROI calculations and apply broader, longer-term definitions of return. We are beginning to see more companies heading in this direction, led by people who aren't prepared to sacrifice entire societies for shareholders' profits, but these are not yet the most powerful forces. Power must shift and priorities must change."

Beneficial
Jim Kennedy, senior vice president for strategy at The Associated Press, wrote, "The most significant advances in technology will be in search, the mobile experience, social networking, content creation and software development. These – among so many other components of digital life – will be rapidly advanced through artificial intelligence. Generative text and imagery are just the early manifestations of an AI-assisted world that should spark a massive new wave of creativity along with major productivity boosts. To get the most out of this rapid revolution, the humans in the loop will have to sharpen their focus on targets where we can realize the biggest gains and move quickly from experimentation to implementation. Another big sleeper is the electrification of motor vehicles, which will finally break open the next big venue for the mobile experience beyond the phone. AI, of course, will be central to that development as well. At the root of it all will be real personalization, which has been the holy grail since the beginning of digitalization."

Harmful
Jim Kennedy, senior vice president for strategy at The Associated Press, responded, "Misinformation and disinformation are by far the biggest threats to digital life and to the peace and security of the world in the future. We have already seen the effects of this, but we probably haven't seen the worst of it yet. The technological advances that promise to vastly improve our lives are the same ones giving bad actors the power to wage war against the truth and tear at the fabric of societies around the world. At the root of this problem is the lack of regulation and restraint of the major tech platforms that enable so much of our individual and collective digital experience. Governments exist to hold societies together. When will they catch up with the digital giants and hold them to account?"

Beneficial
Czesław Mesjasz, an associate professor at Cracow University of Economics, Kraków, Poland, responded, "Among the advances I see coming:

  1. Improving human knowledge about social life and nature should enhance capabilities to influence them positively.
  2. Improving the quality of medical services will lead to better outcomes, especially in diagnosis and treatment.
  3. Helping people from various cultures understand one another could lead to a more peaceful world.
  4. Increasing standards of living thanks to higher productivity will bring many more people above the poverty line."

Beneficial
Christopher Wilkinson, a retired European Union official, board member for EURid.eu and Internet Society leader, said, "Nearly everything that one might imagine for the future depends on proactive decisions and interventions by public authorities and by dominant corporations. At the global level, there will have to be coordination between regional and national public authorities and between corporate and financial entities. The United Nations (the available global authority) and the regional authorities (e.g., the European Union and the like) are too weak to ensure protection of the public interest, and the available institutions representing the corporate side (e.g., the World Economic Forum) are conducive to collusive business behavior.

"Human rights are an absolute. Those who are least likely to have access to and use of digital technologies are those who are also most likely to suffer from limitations to, or abuse of, their human rights. During the next decade, a large part of the 8 billion world population will still not have access to and command of digital technologies. The hope is that corrective controls and actions are initiated to achieve at least incremental improvements, including management of interactions across languages, democratic governance and commerce, all of which require extensive research, education and investment in regions where connectivity is still a luxury.

"In regard to human health and wellness, I have no personal experience of digital life making people happier. In light of the current experience of Ukraine, Syria or Turkey, I have doubts about the ability of digital technology to make people safer. No doubt predictive technologies and big-data applications might reduce certain risks (the recent train crash in Ohio comes to mind), but human fallibility, greed and envy usually still prevail. Healthier? Vast resources will be required to address the health of the aging and of the victims of famine, war and pandemics. There is no evidence that this will be done worldwide, at scale, transparently, accessibly and affordably during the next decade."

Harmful
Christopher Wilkinson, a retired European Union official, board member for EURid.eu and Internet Society leader, said, "Among the potential harms:

"Digital applications in governance and other institutional decision-making will continue to be distrusted. Voting machines, identification, what else? Usually, the best solutions already exist somewhere, if only they can be identified and reproduced.

"Among the many human rights concerns between now and 2035 are the continued exclusion of minorities from digital opportunity and the problems that would be raised by the disappearance of the right to pay with cash, in light of the massive recent move to digital-only transactions.

"In health and medicine, the new leading-edge applications can be brilliant. The challenge is how to extend the best ones to replace the legacy systems already in place, linking patient IDs, doctors, hospitals, pharmacies, public health insurance, etc., into one interoperable system, whilst protecting patients' privacy.

"More generally, since 2035 is, like, next week in the world of planning for institutions and populations, the main priority should be to ensure that the best solutions are extended to full populations. It is no longer the time for blue-sky research; a lot of that has already been done. The benefits of existing technology and knowledge need to be extended to the population as a whole if the objective is to improve implementations by 2035."

Beneficial
Jeffrey Johnson, professor of complexity science and design at The Open University, said, "Among the advances I foresee is that the internet will become better regulated. It will no longer be possible to make anonymous comments about people. This will curtail the terrible misogyny, lies, threats and false news that currently poison social media and damage social and political life. Artificial intelligence will be better understood as a technology, and it will not be legally possible to make false claims for AI. Autonomous robots will improve and be widely applied in agriculture and many other aspects of life, but they will remain primitive. And computer systems will become much better as the theory of programming better matches the way computer systems are used in organisations."

Harmful
Jeffrey Johnson, professor of complexity science and design at The Open University, said, "The harms will arrive if the points I just made are not achieved. If effective internet regulation does not come to fruition, it will remain possible to make anonymous comments about people. This will exacerbate the terrible misogyny, lies, threats and false news that currently poison social media and damage social and political life. It is possible that artificial intelligence will continue to be misunderstood as a technology and that it will remain legally possible to make false claims for AI. Autonomous robots, like all technology, can be used for any purpose; they could be improved and then widely applied in warfare and in many ways that harm the public and curtail citizens' rights and lives. And computer systems may not improve as the theory of programming continues to mismatch the way computer systems are used in organisations."

Beneficial
Jens Ambsdorf, director of the Lighthouse Foundation in Germany, said, "I can imagine that the stormy development of AI applications – in the sense of meaningfully sorting and verifying big data – could help a great deal in identifying and verifying trends and patterns at unprecedented speed and range. This can potentially have a huge impact on every area to which it is applied. It is critical that access to these technologies is not limited to small interest groups but extends to society at large. The same technologies could finally offer easy access routes for less tech-affine groups and be an enabling driver of this development. Still, broad education is a prerequisite for meaningful application and interpretation of these technologies, and it is needed in many countries and societies that now lack a full and useful understanding of them."

Harmful
Jens Ambsdorf, director of the Lighthouse Foundation in Germany, wrote, "The same technologies that could be drivers of a more coherent and knowledge-based world can be the source of further fragmentation and the building up of parallel societies. The creation of self-referenced echo chambers and alternative narratives is a threat to the very existence of humans on this planet, as self-inflicted challenges like biodiversity loss, climate change, pollution and destructive economies can only be faced successfully together. Currently I hold this danger to be far bigger than the chance for positive development, as the tools for change rest not in the hands of society but, more and more, in the hands of competing private interests."

Beneficial
Herb Lin, senior research scholar for cyber policy and security at Stanford University's Center for International Security and Cooperation, said, "The most beneficial change in digital life likely to take place by 2035 is that things don't get much worse than they are now with respect to pollution in and corruption of the information environment. Applications such as ChatGPT will get better without question, but the ability of humans to use such applications wisely will lag. My best hope is that human wisdom and willingness to act will not lag so much that they are unable to respond effectively to the worst of the new challenges accompanying innovation in digital life."

Harmful
Herb Lin, senior research scholar for cyber policy and security at Stanford University's Center for International Security and Cooperation, commented, "The worst likely outcome is that humans will develop too much trust and faith in the utility of the applications of digital life and become ever more confused between what they want and what they need. The result will be that societal actors with greater power than others will use the new applications to increase these power differentials for their own advantage."

Beneficial
Davi Ottenheimer, vice president for trust and digital ethics at Inrupt, a company applying the new Solid data protocol, said, “The best and most beneficial changes in digital life by 2035, by most accounts, will come from innovations in machine learning, virtualization and interconnected things (IoT). Learning technology can reduce the cost of knowledge. Virtualization technology can reduce the cost of presence. Interconnected things can improve the quantity of data for the previous two while also delivering more accessibility.

“This all speaks mainly to infrastructure tools, however, which need a special kind of glue. Stewardship and ethics can chart a beneficent course for the tools by focusing on an improved digital life that takes those three pieces and weaves them together with open standards for data interoperability. We saw a similar transformation of the 1970s closed data-processing infrastructure into the 1990s interconnected open-standards Web.

“This shift from centralized data infrastructure to federated and distributed processing is happening again already, which is expected to provide ever higher quality/integrity data. For a practical example, a web page today can better represent details of a person or an organization than most things could 20 years ago. In fact, we trust the Web to process, store and transmit everything from personalized medicine to our hobbies and work.

“The next 20 years will continue a trend to Web 3.0 by allowing people to become more whole and real digital selves in a much safer and healthier format. The digital self could be free of self-interested moat platforms, using instead representative ones, with a right to be understood founded in a right to move and maintain data about ourselves for our own purposes (including wider social benefit).

“Knowledge will improve, as it can be far more easily curated and managed by its owner when it isn’t locked away, divided into complex walled gardens and forgotten in a graveyard of consents. A blood pressure sensor, for example, would send data to a personal data store for processing and learning far more privately and accurately. Metadata then could be shared narrowly, based on purpose and time, such as with a relative, coach, assistant or healthcare professional. Health and well-being thus benefit directly from coming improvements in data-integrity architecture, as we are already seeing consent-based open-standards sharing infrastructure being delivered that will transform lives for the better.”
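The sharing model Ottenheimer describes (a personal data store plus narrowly scoped, expiring consent) can be sketched in a few lines. This is a hypothetical illustration of the idea only, not the actual Solid API; the `PersonalDataStore` and `Grant` names are invented for the example.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class Grant:
    """One consent record: who may read, for what purpose, until when."""
    grantee: str          # e.g., "physician", "coach"
    purpose: str          # the narrow purpose the data may serve
    expires: datetime     # consent lapses automatically

@dataclass
class PersonalDataStore:
    """Toy stand-in for an owner-controlled data pod."""
    owner: str
    records: dict = field(default_factory=dict)
    grants: list = field(default_factory=list)

    def share(self, grantee, purpose, days):
        # Owner issues a time-boxed, purpose-boxed grant.
        self.grants.append(
            Grant(grantee, purpose, datetime.now() + timedelta(days=days)))

    def read(self, grantee, purpose, key):
        # Access succeeds only under a matching, unexpired grant.
        now = datetime.now()
        for g in self.grants:
            if g.grantee == grantee and g.purpose == purpose and now < g.expires:
                return self.records.get(key)
        raise PermissionError("no valid consent for this grantee and purpose")

store = PersonalDataStore(owner="alice")
store.records["bp"] = "118/76"
store.share("physician", "hypertension-review", days=30)
print(store.read("physician", "hypertension-review", "bp"))  # 118/76
```

The point of the sketch is the inversion of control: the sensor data lives with its owner, and recipients hold revocable, purpose-bound grants rather than copies locked inside a platform's walled garden.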

Harmful
Davi Ottenheimer, vice president for trust and digital ethics at Inrupt, a company applying the new Solid data protocol, predicted, “The most harmful or menacing changes likely to occur by 2035 in digital technology are related to the disruptive social effects of domain shifts. A domain shift pulls people out of areas they are familiar with and forces them to reattach to unfamiliar technology, as with the end of horses and the rise of cars. In retrospect the wheel was inferior to four-legged transit in very particular ways (e.g., its requirement for a well-maintained road in favorable weather, its dumping of highly toxic byproducts in its wake), yet we are very far away from realizing any technology-based legged transit system.

“Sophisticated or not-well-understood technology can be misrepresented using fear tactics such that groups will drive into decades of failure and harm without realizing they’ve been fooled. We’ve seen this in the renewed push for driverless vehicles, which are not very new but are presented lately as magically near to being realized.

“Sensor-based learning machines are marketed unfairly to unqualified consumers to prey on their fear about loss of control; people want to believe a simple and saccharine digital assistant will make them safer, without evidence. This has manifested as a form of addiction and over-dependence causing social and mental health issues, including an alarming rise in crashes and preventable deaths caused by inattentive drivers who believe misinformation about automation.

“Even more to the point, an over-emphasis on automation instead of augmentation leaves necessary human safety controls and oversight out of the loop on extremely dangerous and centrally controlled machines. It quickly becomes more practical and probable to poison a driverless algorithm in a foreign country and unleash a mass-casualty event using loitering cars as kamikaze swarms than to fire remote missiles or establish airspace control for bombs.

“Another example, related to misinformation, is the domain shift in identity and digital self. Often referred to as deepfakes, an over-reliance on certain cues can be manipulated to target people who don’t use other forms of validation. Trust sometimes is based on the sound of a voice or the visual appearance of a face. That was a luxury, as any deaf or blind person can usefully attest. Now, in the rapidly evolving digital-tools market, anyone can sound or look like anyone, as if observers have become deaf or blind and need some other means of establishing trust.

“This erodes old domains of trust, yet it also could radically shift trust by fundamentally altering what credible sources should be based upon. A Black woman having the opportunity to put on a white face to reach audiences, or an unknown person looking like a celebrity, challenges many groups’ notions of what they should have been trusting about a connection and its message.

“Content should be judged rather than the cover, as the old saying goes. As with the printing press revolution, without wise content frameworks we may see increased polarization and division due to exploitation of this knowledge shift — the spread of bogus ideology through rapidly evolving, inexpensive communication channels.”

Beneficial
Raymond Perrault, a distinguished computer scientist at SRI International and director of the AI Center there from 1988 to 2017, wrote, “First, some background. I find it useful to describe digital life as falling into three broad, and somewhat overlapping, categories:

  • Content: web media, news, movies, music, games (mostly not interactive)
  • Social media (interactive, but with little dependency on automation)
  • Digital services, in two main categories: pure digital (e.g., search, financial, commerce, government) and that which is embedded in the physical world (e.g., healthcare, transportation, care for disabled and elderly)

“The big challenges are quality of information (veracity and completeness) and technical feasibility of some services, in particular those depending on interaction.

“Most digital services depend on interaction with human users and the physical world that is timely and highly context-dependent. Our main models for this kind of interaction today (search engines, chatbots, LLMs) are all deficient in that they depend on a combination of brittle hand-crafted rules, large amounts of labelled training data, or even larger amounts of unlabeled data, all to produce systems that are either limited in function or insufficiently reliable for critical applications. We also have to consider security of infrastructure and transactions, privacy, fairness in algorithmic decision-making, sustainability for high-security transactions (e.g., with blockchain), and fairness to content creators, large and small.

“So, what good may happen by 2035?

“Hardware, storage, compute and communications costs will continue to decrease, both in the cloud and at the edge. Computation will continue to be embedded in more and more devices, but the usefulness of devices will continue to be limited by the constraints on interactive systems. Algorithms essential to supporting interaction between humans and computers (and between computers and the physical world) will improve if we can figure out how to combine tacit/implicit reasoning, as done by current deep learning-based language models, with more explicit reasoning, as done by symbolic algorithms.

“We don’t know how to do this, and a significant part of the AI community resists the connection, but I see it as a (difficult) technical problem to be solved, and I am confident that it will one day be solved. I believe that improving this connection would allow systems to generalize better, be taught general principles by humans (e.g., mathematics), reliably connect to symbolically stored information, and conform to policies and guidance imposed by humans. Doing so would significantly improve the quality of digital assistants and of physical autonomous systems. Ten years is not a bad horizon.”
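Perrault's hoped-for combination of tacit and explicit reasoning can be caricatured in a few lines. The sketch below is an illustrative construction of my own, not his proposal: a fake "tacit" proposer emits fluent but unreliable guesses, and a symbolic checker keeps only the guesses that survive explicit verification.

```python
# Toy neuro-symbolic pattern: tacit proposer + explicit symbolic checker.
# The "neural" side is simulated with a lookup of fuzzy guesses; a real
# system would query a learned model instead.

def tacit_proposer(question):
    """Stands in for a learned model: fluent but sometimes wrong guesses."""
    guesses = {"17 + 25": [42, 41], "9 * 8": [63, 72]}
    return guesses.get(question, [])

def symbolic_checker(question, answer):
    """Explicit reasoning: actually evaluate the arithmetic claim."""
    left, op, right = question.split()
    a, b = int(left), int(right)
    truth = a + b if op == "+" else a * b
    return answer == truth

def answer(question):
    # Keep only proposals that pass symbolic verification.
    for guess in tacit_proposer(question):
        if symbolic_checker(question, guess):
            return guess
    return None

print(answer("17 + 25"))  # 42
```

The division of labor is the whole point: the proposer generalizes loosely, while the checker enforces the kind of hard constraint (here, arithmetic) that Perrault notes current language models cannot reliably conform to on their own.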

Harmful
Raymond Perrault, a distinguished computer scientist at SRI International and director of the AI Center there from 1988 to 2017, predicted, “Better algorithms will not solve the disinformation problem, though they will continue to be able to bring cases of it to the attention of humans. Ultimately this requires improvements in policy and large investments in people, which go against the incentives of corporations and can only be imposed on them by governments, which are currently incapable of doing so. I don’t see this changing in a decade. Nor will better algorithms substitute for the investments necessary to prevent certain kinds of information services (e.g., local news) from disappearing, or to treat content creators fairly. Government services could be significantly improved by investment using known technologies, e.g., to support tax collection. The obstacles again are political, not technical.”

Beneficial
Michael G. Dyer, professor emeritus of computer science at UCLA, wrote, “AI systems like ChatGPT and DALL-E represent major advances in artificial intelligence. They illustrate ‘infinite generative capacity,’ an ability to both generate and recognize sentences and situations never before described. As a result of such systems, AI researchers are beginning to home in on how to create entities with consciousness. As an AI professor I had always believed that if an AI system passed the Turing Test it would have consciousness, but systems such as ChatGPT have proven me wrong. ChatGPT behaves as though it has consciousness but does not. The question then arises: What is missing?

“A system like ChatGPT (to my knowledge) does not have a stream of thought; it remains idle when no input is given. In contrast, humans, when not asleep or engaged in some task, will experience their minds wandering — thoughts, images, past events and imaginary situations will trigger more of the same. Humans also continuously sense their internal and external environments and update representations of these, including their body orientation and location in space and the temporal position of past recalled events or of hypothetical, imagined future events.

“Humans maintain memories of past episodes. I am not aware of whether or not ChatGPT keeps track of interviews it has engaged in or of questions it has been asked (or the answers it has given). Humans are also planners: They have goals, and they create, execute and alter/repair plans that are designed to achieve their goals. Over time they also create new goals; they abandon old goals and re-rank the relative importance of existing goals.

“It will not take long to integrate systems like ChatGPT with robotic and planning systems and to alter ChatGPT so that it has a continual stream of thought. These forms of integration could easily happen by 2035. Such integration will lead to an entirely new type of technology — technologies with consciousness.”
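The two gaps Dyer lists, episodic memory and a stream of thought that continues without input, can be made concrete with a toy loop. Everything below (the `IdleAgent` class, the echo stand-in for a language model) is a hypothetical illustration, not a description of any real system:

```python
import random

class IdleAgent:
    """Toy agent with episodic memory and a crude 'mind-wandering' mode."""

    def __init__(self):
        self.episodes = []            # memory of past exchanges

    def respond(self, prompt):
        # A real system would call a language model here; we just echo.
        reply = f"echo: {prompt}"
        self.episodes.append((prompt, reply))   # remember the episode
        return reply

    def wander(self):
        """With no input, revisit a remembered episode, a rough analogue
        of past events triggering further thoughts."""
        if not self.episodes:
            return "nothing to think about yet"
        prompt, _ = random.choice(self.episodes)
        return f"still thinking about: {prompt}"

agent = IdleAgent()
agent.respond("what is consciousness?")
print(agent.wander())
```

The sketch shows only the control structure Dyer says is missing; whether adding such loops to a language model yields anything like consciousness is exactly the open question he raises.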

Harmful
Michael G. Dyer, professor emeritus of computer science at UCLA, warned, “Humans have never before created artificial entities with consciousness, and so it is very difficult to predict what sort of products will come about, along with their unintended consequences.

“I would like to comment on two dissociations with respect to AI. The first is that an AI entity (whether software or robotic) can be highly intelligent while NOT being conscious or biologically alive. As a result, an AI will have none of the human needs that come from being alive and having evolved on our planet (e.g., the human need for food, air, emotional/social attachments, etc.). The second dissociation is between consciousness/intelligence and civil/moral rights. Many people might conclude that an AI with consciousness and intelligence must necessarily be given civil/moral rights; however, this is not the case. Civil/moral rights are only assigned to entities that can feel pleasure and pain. If an entity cannot feel pain, then it cannot be harmed. If an entity cannot feel pleasure, then it cannot be harmed by being denied that pleasure.

“Corporations have certain rights (e.g., they can own property), but they do not have moral/civil rights because they cannot experience happiness or suffering. It is eminently possible to produce an AI entity that will have consciousness/intelligence but that will NOT experience pleasure/pain. If we humans are smart enough, we will restrict the creation of synthetic entities to those WITHOUT pleasure/pain. In that case we might survive our inventions.

“In the entertainment media, synthetic entities are always portrayed by humans, and a common trope is that of those entities being mistreated by humans, with the audience then siding with those entities. In fact, synthetic entities will be very nonhuman. They will NOT eat food, give birth, grow from childhood into adulthood, get sick, fall in love, grow old or die. They will not need to breathe, and currently I am unaware of any AI system that has any sort of empathy for the suffering of humans. Most likely (and unfortunately) AI researchers will create AI systems that do experience pleasure/pain, and will even argue for doing so in order that such systems learn to have empathy. Unfortunately, such a capacity will then turn them into agents deserving of moral consideration and thus of civil rights.

“Will humans want to give civil rights and moral status to synthetic entities who are not biologically alive and who couldn’t care less if they pollute the air that humans must breathe to stay alive? Such entities will be able to maintain backups of their memories and live on forever. Another mistake would be to give them any goals for survival. If the thought of being turned off causes such entities emotional pain, then humans will be causing suffering in a very alien sort of creature, and humans will then become morally responsible for their suffering. If humans give survival goals to synthetic agents, then those entities will compete with humans for survival.

“The field of AI is advancing very rapidly. AI systems now exist that pass the Turing Test but still lack consciousness; conscious systems are not far off. Will humans also create (non-human, non-living) AI systems that are able to demand civil rights, due to their ability to also experience pleasure and pain?

“Note here that I have ignored the thorny issue of determining whether or not an AI entity is actually experiencing pain when, in the future, it behaves as though it is in pain. With ChatGPT we already have the problem of determining whether or not it is conscious while it behaves as though it is conscious.”

Beneficial
Frank Odasz, president of Lone Eagle Consulting, said, “By 2035, everyone will have a relationship with AI in multiple forms. ChatGPT is an AI tool that can draft essays on any topic. Jobs will require less training and will be continually aided by AI helpers. The congressional Office of Technology Assessment will be reinstated to counter the exponential abuses of AI, deepfake videos and all other known abuses. Creating trust in online businesses and secure identities will become commonplace. Four-day work weeks and continued growth in remote work and remote learning will mean everyone can make the living they want, living wherever they want.

“Everyone will have a global-citizenship mindset, working toward those processes that empower everyone. Keeping all of humankind at the same pace of progress will become a shared goal as the volume of new innovations continues to increase, creating growing opportunities for everyone to combine multiple innovations into new integrated innovations.

“Developing human talent and agency will become a global shared goal. Purposeful use of our time will become a key component of learning. There will be those who spend hours a day using VR goggles for work and gaming with increasingly social components. A significant portion of society will be able to opt out of most digital activities once universal basic income programs proliferate. Life, liberty and the pursuit of happiness, equality before the law and new forms of self-exploration and self-care will proliferate.

“Collective values will emerge and become important regarding life choices. Reconnecting with nature and our responsibility for stewardship of our planet’s environments, and each other, will take a very purposeful role in the lives of everyone. As more people learn the benefits of being positive, progressive, tolerant of differences and open-minded, most people will agree that people are basically good.

“The World Values Survey has previously recorded metrics such as 78% of Swedish citizens believing people are basically good, while the figure is 15% in Latin America and 5% in Asia. Exact figures are at the survey’s website, and ongoing surveys will reflect changes.

“Pursuit of meaningful use of our time, freed from menial labor, will create a new global culture of purpose to rally all global citizens to work together to sustain civil society and our planet.

“With all the advances in tech, what could go wrong? …”

Harmful
Frank Odasz, president of Lone Eagle Consulting, wrote, “By 2035, the vague promise of broadband for all, providing meaningful, measurable, transformational outcomes, will create a split society, extending what we already see in 2023: The most-educated lean toward a progressive, tolerant, open-learning society able to adapt easily to accelerating change. Those left behind, without the mutual support necessary to learn to love learning and to benefit from accelerating technical innovation, will grow fearful of change, of learning and of those who do understand the transformational potential of motivated, self-directed Internet learning, and particularly of collaborating with others. If we all share what we know, we’ll all have access to all our knowledge.

“Lensa AI is an app from China that turns your photo into many choices for an avatar and/or a more compelling ID photo, requiring only that you sign away all intellectual rights to your own likeness. Abuses of social media are listed in the Ledger of Harms from the Center for Humane Technology.

“It is known that foreign countries continue to implement increasingly insidious methods for proliferating misinformation and propaganda. Certainly the United States, internally, has its own severe problems due to political polarization that went nearly ballistic in 2020 and 2021.

“If a unified global value system evolves, there is hope international law can contain such moral and ethical abuses. Note: The Scout Law created in 1911 has a dozen generic values for common decency and served as the basis for the largest uniformed organizations in the world — the Boy Scouts and Girl Scouts. Reverence is one trait that encompasses all religions.

“‘Leave no one behind’ needs to refer to those without a moral compass; a positive, supportive culture; self-esteem; and common sense.

“Mental health problems are rampant worldwide. Vladimir Putin controls more than 4,500 nuclear missiles. In the United States, the proliferation of mass shootings tells us one person can wreak havoc on the lives of very many others. If 99 percent of society evolves to be good people with moral values and generous spirits, the reality is human society might still end in nuclear fires due to the actions of a few individuals, or even a single individual with a finger on the red button, capable of destroying billions and making huge parts of the planet uninhabitable. How can technology assure our future? Finland has built underground cities to house its entire population in the event of nuclear war.

“The battle between good and evil has changed due to the power of technology. The potential disaster only a few persons can inflict upon society continues to grow disproportionately to the security that the best efforts of good folks can deliver. This dichotomy, taken to extremes, might spell doom for us all unless radical measures are taken, down to the level of monitoring individuals every moment of the day.

“Cultural worldviews need to evolve to create a common bond accepting our differences as allowable commonalities. This is the key to the sustainability of the human race, and it is not a given. Our human-caused climate changes are already creating dire outcomes: drought, sea levels rising and much more. The risk of greater divisiveness will increase as the impacts of climate change continue to increase. Migration pressure is but one example.”

Beneficial and Harmful
David Weinberger, senior researcher at Harvard’s Berkman Center for Internet and Society, wrote, “Both the Internet and machine learning have removed the safe but artificial boundaries around what we can know and do, plunging us into a chaos that is certainly creative and human, but also dangerous and attractive to governments and corporations desperate to control more than ever.

“It also means that the lines between predicting and hoping or fearing are impossibly blurred. Nevertheless:

“Right now, large language models (LLMs) of the sort used by ChatGPT know more about our use of language than any entity ever has, but they know absolutely nothing about the world. (I’m using ‘know’ sloppily here.) In the relatively short term, they’ll likely be intersected with systems that have some claim to actual knowledge so that the next generation of AI chatbots will hallucinate less and be more reliable. As this progresses, it will likely disrupt both our traditional and Net-based knowledge ecosystems.

“With luck, the new knowledge ecosystem is going to have us asking whether knowing with brains and books hasn’t been one long dark age. I mean, we did spectacularly well with our limited tools, so good job, fellow humans! But we did well according to a definition of knowledge tuned to our limitations.

“As machine learning begins to influence how we think about and experience our lives and world, our confidence in general rules and laws as the high mark of knowledge may fade, enabling us to pay more attention to the particulars in every situation. This may open up new ways of thinking about morality in the West and could be a welcome opportunity for the feminist ethics of care to become more known and heeded as a way of thinking about what we ought to do.

“Much of the online world may be represented by agents: software that presents itself as a digital ‘person’ that can be addressed in conversation and can represent a body of knowledge, an organization, a place, a movement. Agents are likely to have (i.e., be given) points of view and interests. What will happen when these agents have conversations with one another is interesting to contemplate.

“We are living through an initial burst of energy and progress in areas that until recently were too complex for us to even imagine we could tackle them.

“These new machines will give us more control over our world and lives, but with our understanding lagging, often terminally. This is an opportunity for us to come face to face with how small a light our mortal intelligence casts. But it is also an overwhelming temptation for self-centered corporations, governments and individuals to exploit that power and use it against us.

“I imagine that both of those things will happen.

“Second, we are heading into a second generation that has lived much of its life on the Internet. For all of its many faults — a central topic of our time — being on the Internet has also shown us the benefits and truth of living in creative chaos. We have done so much so quickly with it that we now assume connected people and groups can undertake challenges that before were too remote even to consider. The collaborative culture of the Internet — yes, always unfair and often cruel — has proven the creative power of unmanaged connective networks.

“All of these developments make predicting the future impossible — beyond, perhaps, saying that the chaos that these two technologies rely on and unleash is only going to become more unruly and unpredictable, driving relentlessly in multiple and contradictory directions.

“In short: I don’t know.”

Beneficial and Harmful
Alf Rehn, professor of innovation, design and management at the University of Southern Denmark, observed, “Humans and technology rarely develop in perfect sync, but we will see humans catching up. We’ve lived through a period in which digital tech has developed at speeds we’ve struggled to keep up with: too much content, too much noise, too much disinformation.

“Slowly but surely, we’re getting the tools to regain some semblance of control. AI used to be the monster under our beds, but now we’re seeing how we might make it our obedient dog (although some still fear it might be a cat in disguise). As new tools are released, we’re increasingly seeing people using them for fearless experimentation, finding ways to bend ever more powerful technologies to human wills. From fearing that AI and other technologies are going to take our jobs and make us obsolete, humans are finding ever more ways to elevate themselves with technology, making digital wrangling not just the hobby of a few forerunners but a new folk culture.

“There was a time when using electricity was something you could only do after serious education and a long apprenticeship. Today, we all know how a plug works. The same is happening in the digital space. Increasingly, digital technologies are being turned into something so easy to use and manipulate that they become the modern equivalent of electricity. As every man, woman and child comes to know how to use an AI to solve a problem, digital technology becomes ever less scary and more and more the equivalent of building with Lego blocks. In 2035 the limits are not technological, but creative and communicative. If you can dream it and articulate it, digital technology can build it, improve upon it and help you transcend the limitations you thought you had.

“That is, unless a corporate structure blocks you.

“Spider-Man’s Uncle Ben said, ‘With great power comes great responsibility.’ What happens when we all gain great power? The fact that some of us will act irresponsibly is already well known, but we also need to heed the backlash this all brings. There are great institutional powers at play that may not be that pleased with the power that the new and emerging digital technologies afford the general populace. At the same time, there is a distinct risk that radicalized actors will find ever more toxic ways to utilize the exponentially developing digital tools — particularly in the field of AI. A common fear in scary future scenarios is that AIs will develop to a point where they subjugate humanity, but right now, leading up to 2035, our biggest concern is the ways in which humans are and will be weaponizing AI tools.

“Where this places most of humanity is in a double bind. As digital technology becomes more and more powerful, state institutions will aim to curtail bad actors using it in toxic ways. At the same time, and for the same reason, bad actors will find ever more creative ways to use it to cheat, fool, manipulate, defraud and otherwise mess with us. The average Joe and/or Jane (if such a thing exists anymore) will be caught up in the coming AI turf wars, and some will become collateral damage.

“What this means is that the most menacing thing about digital technologies won’t be the tech itself, nor any one person’s deployment of the same, but being caught in the pincer movement of attempted control and wanton weaponization. We think we’ve felt this now, with the occasional social media post being quarantined, but things are about to get a lot, lot worse.

“Imagine having written a simple, original post, only to see it torn apart by content-monitoring software and at the same time endlessly repurposed by agents who twist your message into its very antithesis. Imagine this being a normal, daily affair. Imagine being afraid to even write an email, lest it become fodder in the content wars. Imagine tearing your children’s tech away, just to keep them safe for a moment longer.”

Beneficial (Did not answer the Harms question)
Garth Graham, longtime Canadian networked communities leader, commented, “Consider the widely accepted Internet Society phrase, ‘Internet Governance Ecology.’ In that phrase, what does the word ecology actually mean? Is the Internet Society’s description of Internet governance as ecology a metaphor, an analogy or a reality? And, if it is a reality, what are the consequences of accepting it?

“Digital technology surfaces the importance of understanding two different approaches to governance. Our current understanding of governance, including democracies, is hierarchical, mechanistic and measures things on an absolute scale. The rules about making rules are assumed to apply externally, from outside systems of governance. And this means that those with power assume their power is external to the systems they inhabit. The Internet, as a set of protocols for inter-networking, is based on a different assumption. Its protocols are grounded in a shift in epistemology away from the mechanistic and toward the relational.

“It is a common pool resource and an example of the governance of complex adaptive self-organizing systems. In those systems, the rules about making rules are internal to each and every element of the system. They are not externally applied. This complexity means that the adaptive outcomes of such systems cannot be predicted from the sum of the parts. The assumption of control by leadership inherent in the organization of hierarchical systems is not present. In fact, the external imposition of management practices on a complex adaptive system is inherently disruptive of the system’s equilibrium. So the system, like a packet-switched network, has to route around it to survive.
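Graham's packet-switching analogy, a system that survives disruption by routing around it, is easy to make concrete. The sketch below is an illustrative toy, not a real routing protocol: a breadth-first search over a small network that simply finds another path when a node is knocked out.

```python
from collections import deque

def route(graph, src, dst, down=frozenset()):
    """Return a shortest path from src to dst, avoiding failed nodes."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen and nxt not in down:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # destination unreachable

# A tiny four-node network with two independent paths from A to D.
net = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
print(route(net, "A", "D"))              # ['A', 'B', 'D']
print(route(net, "A", "D", down={"B"}))  # ['A', 'C', 'D']
```

No central authority tells the network which path to take; the externally imposed disruption (taking B down) simply shifts traffic onto the surviving path, which is the self-organizing behavior Graham is pointing at.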

鈥淧resently, our understanding of the difference between these two approaches to governance is most visible in the social changes occurring in the shift towards awareness of interconnectedness in ecologies, and in the significance that has for the mitigation of climate change. There is a chance that by 2035 awareness of the Internet鈥檚 nature as a complex adaptive system that mirrors and supports other self-organizing adaptive systems will accelerate a shift in epistemology away from governance by hierarchy and toward open systems of self-organization.

鈥淭hen the choice to connect or not with any system of relationship becomes personal, and the organizational responses to problems become distributed, adaptive and local, rather than top-down.

鈥淚 do not think we understand what society becomes when machines are social agents. Code is the only language that鈥檚 executable. It is able to put a plan or instruction or design into effect on its own. It is a human utterance (artifact) that, once substantiated in hardware, has agency. We write the code and then the code writes us. Artificial intelligence (AI) intensifies that agency. That makes necessary a shift in our assumptions about the structure of society.

鈥淎ll of us now inhabit dynamic systems of human-machine interaction. That complexifies our experience. Yes, we make our networks, and our networks make us. Interdependently, we participate in the world and thus change its nature. We then adapt to an altered nature in which we have participated. But the 鈥榳e鈥 in those phrases now includes encoded agents that interact autonomously in the dynamic alteration of culture. Those agents sense, experience and learn from the environment, modifying it in the process, just as we do. This represents an increase in the complexity of society and the capacity for radical change in social relation.

"Ursula Franklin's definition of technology – 'Technology involves organization, procedures, symbols, new words, equations, and, most of all, it involves a mindset' – is that it is the way we do things around here. It becomes different as a consequence of a shift in the definition of 'we.' AI increases our capacity to modify the world, and thus alter our experience of it. But it puts 'us' into a new social space we neither understand nor anticipate."

Beneficial (Did not answer the Harms question)
David Porush, writer and longtime professor at Rensselaer Polytechnic Institute, commented, "There will be positive progress in many realms. Quantum computing will become a partner to human creativity and problem solving. We've already shown that sophisticated brute-force computing can achieve this with ChatGPT. Quantum computing will surprise us and challenge us to exceed ourselves even further and in much more surprising ways. It will also challenge former expectations about nature and the super-natural, physics and metaphysics. It will rattle the cage of scientific axioms of the mechanist-vitalism duality. This is a belief, and a hope, with only hints of empirical evidence.

"We might establish a new worldwide court of criminal justice. Utopian dreams that the World Wide Web and new social technologies might change human behavior have failed – note the ongoing human criminality, predation, tribalism, hate speech, theft and deception, demagoguery, etc. Nonetheless, social networks also enable us to witness, record and testify to bad behavior almost instantly, no matter where in the world it happens.

"By 2035 I believe this will prompt the creation (or at least the beginning of discussion of the creation) of a new worldwide court of criminal justice, including a means to prosecute and punish individual war crimes and bad nation-state actors. My hope is that this court would supersede our current broken UN and come to apolitical verdicts based on empirical evidence and universal laws. Citizens have shown pretty universally that they will give up privacy rights to corporations for convenience. It would also imply that the panopticon of technologies used for spying and intrusion, whether for profit or for totalitarian control by governments, will be converted to serve the global good.

"Social networking contributes to scientific progress, especially in the field of virology. The global reaction to the arrival of COVID-19 showed the power of data gathering, data sharing and collaboration on analysis in combating a pandemic. Worldwide virology over the past two years is a fine avatar of what could be done for all the sciences.

"We can make more-effective use of global computing in regard to resource distribution. Politicians and nations have not shown enough political will to really address long-term solutions to crises like global warming, water shortages and hunger. At least emerging data on these crises arms us with knowledge as the predicate to solutions. For instance, there's not one less molecule of H2O available on Earth than there was a billion years ago; it's just collected, made usable and distributed terribly.

"If we combine the appropriate level of political will with technological solutions (many of which we have in hand), we can distribute scarce resources and monitor harmful human or natural phenomena and address these problems with much more timely and effective solutions."

Beneficial
Christopher W. Savage, a leading expert in legal and regulatory issues based in Washington, D.C., wrote, "Continued advances in artificial intelligence and machine learning will be extremely beneficial to society in a range of ways. These will include better medical diagnosis and healthcare, better and more customized education, and – in the background – more efficient business and commercial activities, leading to the potential of lower prices for consumers.

"I predict that remote work – which is enabled by near-ubiquitous broadband connectivity – will become permanent in many fields. Among many other benefits, this will spare remote workers the time and hassle of a daily commute. This is found time in the range of 10 to 20 percent of people's waking hours. People realized this during the pandemic, and the absurdity of unnecessarily spending time commuting will cause remote work to remain important.

"A particular application of AI/ML is permitting purely natural-language interfaces between people and their devices and apps. Twenty years ago, being able to simply speak to our devices to cause them to do what we want was still in the realm of science fiction. This capability (like saving commuting time) may not seem like a lot, but it will eliminate a great deal of friction – the cognitive load of dealing with apps and devices by typing and pushing buttons – that takes away from human enjoyment and flourishing."

Harmful
Christopher W. Savage, a leading expert in legal and regulatory issues based in Washington, D.C., responded, "The degree to which people's activities (both literally online and in the real world) are subject to surveillance by private entities and governments will increase. This creates a number of potentially serious harms:

1) Surveillance and inherent loss of privacy: People will perceive that they are being directly or indirectly watched more or less constantly. This inhibits personal freedom and exploration.

2) Manipulation: The more those performing surveillance know about us, the more effectively they will be able to manipulate us to do what is in their interest rather than ours. While this may be as trivial as buying something that someone doesn’t really need, it can also affect civic engagement and politics/voting activity. Again, this inhibits human freedom.

3) Disinformation: The less consensus there is among all members of society as to a set of basic facts and values, the more tenuous social bonds become. Digital technology has made it possible to spread lies, half-truths, innuendoes, etc., to a degree that has never before existed in human history. Combined with the increased ability of bad actors to manipulate us, this will seriously degrade social cohesion."

Beneficial
Christine Boese, a consultant and independent scholar, wrote, "I'm having a hard time seeing around the 2035 corners because deep structural shifts are occurring that could really reframe everything, on the level of electricity and the electric light, or the advent of radio broadcasting (which I think was more ground-breaking for human connectedness than television).

"These reframing technologies live inside rapid developments in natural language processing (NLP) and GPT-3 (and GPT-4), which will have beneficial sides but also dark sides, things we are only beginning to see with ChatGPT.

"The biggest issue I see in making NLP gains truly beneficial is the problem that humanity doesn't scale very well. That statement alone needs some unpacking. I mean, why should humanity scale? With a population approaching nine billion, and assumptions of mass delivery of goods and services, there are many reasons for merchants and providers to want humanity to scale, but mass scaling tends to be dehumanizing. Case in point: teaching writing at the college level. We've tried many ways to make learning to write less one-on-one teaching-intensive, like an apprenticeship skill, with workshops, peer review, drafting, computer-assisted pedagogies, spell-check, grammar and logic screeners. All of these things work to a degree, but to really teach someone what it takes to be a good writer, nothing beats one-on-one. Teaching writing does not scale, and armies of low-paid adjuncts and grad students are being bled dry trying to make it do so.

"Could NLP help humanity scale? Or is it another instance of what the original Modernists of the 1920s objected to in the dehumanizing assembly lines of the Industrial Revolution? Can we actually get to High Tech/High Touch, or are businesses that run like airlines, with no human-answered phone lines, the model of the future?

"That is a corner I can't see around, and I'm not ready to accept our nearly sentient, uncanny GPT-4 Overlords without proof that humanity and the humanities are not lost in mass scalability and the embedded social biases and blind spots that come with it.

"We are hitting the limits of human-directed technology as well, and machine-learning management of details is quickly outstripping human cognition. 'Explainability' will be the watchword, but with an even bigger caveat: One of the biggest symptoms of Long COVID could turn out to be permanent cognitive impairment in humans. This could become a species-level alteration, in which it is not even possible for us to evolve into Morlocks; we could already, necessarily, be Eloi.

"To that end, the machines may have to step up, and this could be a critical and crucial benefit if the machines are up to it. If human intellectual capacity is dulled by COVID brain fog, an inability to concentrate, to retain details and so on, it stands to reason humanity may turn to McLuhan-type extensions and assistance devices. Machines may make their biggest advances in knowledge retention, smart lookups, conversational parsing, low-level logic and decision-making, and assistance with daily tasks and even work tasks right at the time when humans need this support the most.

"This could be an incredible benefit. And it is also chilling."

Harmful
Christine Boese, a consultant and independent scholar, observed, "Technological dystopias are far easier to imagine than benefits. There are no neutral tools. Everything exists in social and cultural contexts.

"In the space of AI/ML in general, specialized ML will accomplish far more than unsupervised or free-ranging AI. I feel that the limits of the hype in this space are quickly being reached, to the point that it may stop being called 'artificial intelligence' very soon. I do not yet feel the overall benefit or threat will come directly from this space, on par with what we've already seen from Cambridge Analytica-style machinations (which had limited usefulness for algorithmic targeting, and more usefulness in news-feed force-feeding and repetition). We are already seeing a rebellion against corporate walled gardens and invisible algorithms in the Fediverse and the ActivityPub protocol, which have risen suddenly with the rapid collapse of Twitter.

"Natural language processing is the exception, on the strength of the GPT project incarnations, including ChatGPT. Already I am seeing a split in the AI/ML space, where NLP is becoming a completely separate territory, with different processes, rules and approaches to governance. This specialized ML will quickly outstrip all other forms of AI/ML work, even image recognition.

"Where does the menace or harm come from in NLP? It will easily pass the Turing Test. It will then be able to appear invisibly within any digital communications, with or without machine-generated markers. And the matter of appearance without actual sentience or reliable substance comes into play. NLP communications will likely just seamlessly migrate into our communications streams, all of them. They won't just be deepfakes; they will be ordinary and mundane fakes: chatbots, support technicians, call-center respondents and corporate digital workforces. Soon all high-touch interactions will be non-human, no longer dependent on constructed question-and-answer keyword scripts.

"Some may ask, 'Where's the harm in that? These machines could provide better support than humans, and they don't sleep or require a paycheck and health benefits.'

"Perhaps this does belong in the benefits column. But here is where I see harm in ubiquity (along with Plato and the old outsourced-brain argument): Humans have flaws. Machines have flaws. A bad customer-service rep will not scale up harms massively. A bad machine customer-service protocol could scale up harms massively.

"Further, NLP machine learning happens in sophisticated and many-layered ensembles, many so complex that Explainable AI can only use other models to unpack model ensembles – humans can't do it.

"How long does it take for language and communication ubiquity to turn into outsourced decisions? Or for predictive outcomes to migrate into automated fixes with no carbon-based oversight at all?

"Take just one example: drone warfare. Yes, a lot of this depends on image processing, as well as remote monitoring capabilities. We've removed the human risk from the air (unmanned), but not on the ground (where it can be catastrophic). Digitization means replication and mass scalability, brought to drone warfare, and the communication and decision support will have NLP components. NLP logic processing can also lead to higher levels of confidence in decisions than is warranted.

"Add into the mix the same kind of malignant or bad actors as we saw within the manipulations of a Cambridge Analytica, a corporate bad actor or a governmental bad actor, and we can easily get to a destabilized planet on a mass scale faster than the threat (with high development costs) of nuclear war ever did.

"This I find a greater risk than more mundane risks (which are more harmful without direct bad actors), such as blockchains, cryptocurrency mining and a destabilized carbon footprint driven by the simple greed of oligarchs who think they can outlive a climate apocalypse in their bunkers and emerge smiling into an empty planet as their personal playground."

Beneficial
Marcel Fafchamps, professor of economics and senior fellow at the Center on Democracy, Development and the Rule of Law at Stanford University, wrote, "The single most beneficial change will be the spread of already-existing internet-based services to billions of people across the world, as they gradually replace their basic phones with smartphones and as connection speed increases over time and across space. IT services to assist farmers and businesses are the most promising in terms of economic growth, together with access to finance through mobile-money technology. I also expect IT-based trade to expand to all parts of the world, especially spearheaded by Alibaba.

"The second most beneficial change I anticipate is the rapid expansion of IT-based health care, especially through phone-based and AI-based diagnostics and patient interviews. The largest benefits by far will be achieved in developing countries, where access to medically provided health care is limited and costly. AI-based technology provided through phones could massively increase provision and improve health at a time when the population of many currently low- or middle-income countries (LMICs) is rapidly aging.

"The third most beneficial change I anticipate is in IT-connected drone services to facilitate dispatch to wholesale and local retail outlets, to distribute medical drugs to local health centers and to collect from them samples for health-care testing. I do not expect a significant expansion of drone deliveries to individuals, except in some special cases (e.g., very isolated locations or extreme urgency in the delivery of medical drugs and samples)."

Harmful
Marcel Fafchamps, professor of economics and senior fellow at the Center on Democracy, Development and the Rule of Law at Stanford University, said, "The most menacing change I expect is in terms of the political control of the population. Autocracies and democracies alike are increasingly using IT technology to collect data on individuals, civic organizations and firms. While this data collection is capable of delivering social and economic benefits to many (e.g., in terms of fighting organized crime, tax evasion, and financial and fiscal fraud), the potential for misuse is enormous, as evidenced, for instance, by the social credit system put in place in China.

"Some countries – most prominently the European Union – have sought to introduce safeguards against abuse. But without serious and persistent coordination with the United States, these efforts will ultimately fail, given the dominance of U.S.-protected GAFAM (Google, Apple, Facebook, Amazon and Microsoft) in all countries except China and, to a lesser extent, Russia.

"The world urgently needs Conference of the Parties (COP)-style meetings on international IT to address this existential issue for democracy, civil rights and individual freedom within the limits of the law. Whether this can be done is doubtful, given that democracies themselves are responsible for developing a large share of these systems of data collection and control over their own populations, as well as those of others (e.g., politicians, journalists, civil rights activists, researchers, R&D firms).

"The second most worrying change is the continued privatization of the internet at all levels: cloud, servers, underwater transcontinental lines, last-mile delivery and content. The internet was initially developed as free for all. But this will no longer be the case in 2035, and probably well before that. I do not see any solution that would be able to counterbalance this trend, short of a massive, coordinated effort among leading countries. But I doubt that this coordination will happen, given the enormous financial benefits gained from appropriating the internet, or at least large chunks of it.

"This appropriation of the internet will generate very large monopolistic gains that current antitrust regulation is powerless to address, as shown repeatedly in U.S. courts and in EU efforts against GAFAM firms. In some countries, this appropriation will be combined with heavy state control, further reinforcing totalitarian tendencies.

"The third most worrying change is the further expansion of unbridled social media and the disappearance of curated sources of news (e.g., newsprint, radio and TV). In the past, the world has already experienced the damages caused by fake news and gossip-based information (e.g., through tabloid newspapers), but never to the extent made possible by social media. Efforts to date to moderate content on social media platforms have largely been ineffective as a result of multiple mutually reinforcing causes: the lack of coordination between competing social media platforms (e.g., Facebook, Twitter, WhatsApp, TikTok); the partisan interests of specific political parties and actors; and the technical difficulty of the task.

"These failures have been particularly disturbing in LMIC countries, where moderation in local languages is largely deficient (e.g., hate speech across ethnic lines in Ethiopia; hate speech toward women in South Asia). The damage that social media is causing to most democracies is existential: By creating silos and echo chambers, social media is eroding the trust that different groups and populations feel toward each other, and this increases the likelihood of civil unrest and populist voting.

"Furthermore, social media has encouraged the victimization of individuals who do not conform to the views of other groups in a way that does not allow the accused to defend themselves. This is already provoking a massive regression in the rule of law and the rights of individuals to defend themselves against accusations. I do not see any signs suggesting a desire by GAFAM firms or by governments to address this existential problem for the rule of law.

"To summarize, the first wave of IT technology did increase individual freedom in many ways (e.g., accessing cultural content that previously required significant financial outlays; facilitating international communication, trade and travel; making new friends and identifying partners; and allowing isolated communities to find each other to converse and socialize). The next wave of IT technology will be more focused on political control and on the exploitation of commercial and monopolistic advantage, thereby favoring totalitarian tendencies and the erosion of the rights of the defense and of the whole system of criminal and civil justice. I am not optimistic, especially given the poor state of U.S. politics at this point in time on both sides of the political spectrum."

Beneficial
Maggie Jackson, award-winning journalist, social critic and author, commented, "The most critical beneficial change in digital life now on the horizon is the rise of uncertain AI.

"In the six decades of its existence, AI has been designed to achieve its objectives however it can. The field's over-arching mission has been to create systems that can learn how to play a game, spot a tumor, drive a car, etc., on their own, as well as or better than humans can.

"This foundational definition of AI largely reflects a centuries-old ideal of intelligence as the realization of one's goals. However, the field's erratic yet increasingly impressive success in building objective-driven AI has created a widening and dangerous gap between AI and human needs. Almost invariably, an initial objective set by a designer will deviate from a human's needs, preferences and well-being come 'run-time.'

"Nick Bostrom's once-seemingly-laughable example of a superintelligent AI system tasked with making paper clips, which then takes over the world in pursuit of this goal, has become a plausible illustration of the unstoppability and risk of reward-centric AI. Already, the 'alignment problem' can be seen in social media platforms designed to bolster user time online by stoking extremist content. As AI grows more powerful, the risks of models that have a cataclysmic effect on humanity dramatically increase.

"Reimagining AI to be uncertain literally could save humanity. And the good news is that a growing number of the world's leading AI thinkers and makers are endeavoring to make this change a reality. En route to achieving its goals, AI traditionally has been designed to dispatch unforeseen obstacles, such as something in its path. But what AI visionary Stuart Russell calls 'human-compatible AI' is instead designed to be uncertain about its goals, and so to be open and adaptable to multiple possible scenarios.

"An uncertain model or robot will ask a human how it should fetch coffee, or show multiple possible candidate peptides for creating a new antibiotic, instead of pursuing the single best option befitting its initial marching orders.

"The movement to make AI uncertain is just gaining ground and is largely experimental. It remains to be seen whether tech behemoths will pick up on this radical change. But I believe this shift is gaining traction, and none too soon. Uncertain AI is the most heartening trend in technology that I have seen in a quarter-century of writing about the field."

Harmful
Maggie Jackson, award-winning journalist, social critic and author, said, "One of the most menacing changes, if not the most menacing change, likely to occur in digital life in the next decade is a deepening complacency about technology. If, first and foremost, we cannot retain a clear-eyed, thoughtful and constant skepticism about these tools, we cannot create or choose technologies that help us flourish, attain wisdom and forge mutual social understanding. Ultimately, complacent attitudes toward digital tools blind us to the actual power that we do have to shape our futures in a tech-centric era.

"My concerns are three-part: First, as technology becomes embedded in daily life, it typically is less explicitly considered and less seen, just as we hardly give a thought to electric light. The recent Pew report on concerns about the increasing use of AI in daily life shows that 46 percent of Americans have equal parts excitement and concern over this trend, and 40 percent are more concerned than excited. But only 30 percent fully and correctly identified where AI is being used, and nearly half think they do not regularly interact with AI, a level of apartness that is implausible given the ubiquity of smartphones and of AI itself. AI, in a nutshell, is not fully seen. As well, it's alarming that the most vulnerable members of society – people who are less well-educated, have lower incomes and/or are elderly – demonstrate the least awareness of AI's presence in daily life and show the least concern about this trend.

"Second, mounting evidence shows that the use of technology itself easily can lead to habits of thought that breed intellectual complacency. Not only do we spend less time adding to our memory stores in a high-tech era, but 'using the internet may disrupt the natural functioning of memory,' according to researcher Benjamin Storm. Memory-making is less activated, data is decontextualized and devices erode time for rest and sleep, further disrupting memory processing. As well, device use nurtures the assumption that we can know at a glance. After even a brief online search, information seekers tend to think they know more than they actually do, even when they have learned nothing from a search, studies show. Despite its dramatic benefits, technology therefore can seed a cycle of enchantment, gullibility and hubris that then produces more dependence on technology.

"Finally, the market-driven nature of technology today muffles any concerns that are raised about devices. Consider the case of robot caregivers. Although a majority of Americans and people in EU countries say they would not want to use robot care for themselves or family members, such robots increasingly are sold on the market with little training, few caveats or even safety features. Until recently, older people were not consulted in the design and production of robot caregivers built for seniors. Given the highly opaque, tone-deaf and isolationist nature of big-tech social media and AI companies, I am concerned that whatever skepticism people may have toward technology may be ignored by its makers."

Beneficial
Mark Surman, president of the Mozilla Foundation, commented, "My biggest prediction is that people will get fed up. Fed up with the constant barrage of always-on. The nudging. The selling. The treadmill. Companies that see this coming – and that can build tech products that help people turn down the volume and disconnect while staying connected – will win the day. Clever, humane use of AI will be a key part of this."

Harmful
Mark Surman, president of the Mozilla Foundation, commented, "The most harmful thing I can think of isn't a change as much as a trend: The ability for us to disconnect will increasingly disappear. We're building more and more reasons to be always on and instantly responsive into our jobs, our social lives, our public spaces, our everything. The combination of immersive technologies and social pressure will make this worse. Opting out isn't an option. Or, if it is, the social and economic consequences are severe. The result: We're more anxious, tired and (emotionally) disconnected. Our ability to touch, to rest, to choose and to be human will continue to erode."

Beneficial
Beth Noveck, director of the Burnes Center for Social Change and Innovation and its partner project, The Governance Lab, wrote, "One of the most significant and positive changes expected to occur by 2035 is the increasing integration of artificial intelligence (AI) into various aspects of our lives, including our institutions of governance and our democracy.

"With 100 million people trying ChatGPT – a type of artificial intelligence (AI) that uses data from the Internet to spit out well-crafted, human-like responses to questions – between Christmas and Mardi Gras 2023 (by contrast, it took the telephone 75 years to reach that level of adoption), we have squarely entered the AI age and are rapidly advancing along the S-curve toward widespread adoption. Much more than ChatGPT, AI comprises a remarkable basket of data-processing technologies that make it easier to generate ideas and information, summarize and translate text and speech, spot patterns and find structure in large amounts of data, simplify complex processes, and coordinate collective action and engagement. When put to good use, these features create new possibilities for how we govern and, above all, how we can participate in our democracy.

"One area in which AI has the potential to make a significant impact is participatory democracy, that system of government in which citizens are actively involved in the decision-making process.

  • "The right AI could help to increase citizen engagement and participation. With the help of AI-powered chatbots, residents could easily access information about important issues, provide feedback and participate in decision-making processes. We are already witnessing the use of AI to make community deliberation more efficient to manage at scale.
  • "The right AI could help to improve the quality of decision-making. AI can analyze large amounts of data and identify patterns that humans may not be able to detect. This can help policymakers and participating residents make more informed decisions based on real-time, high-quality data. With the right data, AI can also help to predict the outcomes of different policy choices and provide recommendations on the best course of action. AI is already being used to make expertise more searchable. Using large-scale data sources, it is becoming easier to find people with useful expertise and match them to opportunities to participate in governance. These techniques, if adopted, could help to ensure more evidence-based decisions.
  • "The right AI could help to make governance more equitable and effective. New text-generation tools make it faster and easier to 'translate' legalese into plain English, but also into other languages, portending new opportunities to simplify interaction between residents and their governments and increase the uptake of benefits to which people are entitled.
  • "The right AI could help to reduce bias and discrimination. AI can analyze data without being influenced by personal biases or prejudices. This can help to identify areas of inequality and discrimination, which can be addressed through policy changes. For example, AI can help to identify disparities in healthcare outcomes based on race or gender and provide recommendations for addressing these disparities.
  • "Finally, AI could help us design the novel, participatory and agile systems of participatory governance that we need to regulate AI. We all know that traditional forms of legislation and regulation are too slow and rigid to respond to fast-changing technology. Instead, we need to invent new institutions for responding to the challenges of AI, and that's why it is paramount to invest in reimagining democracy using AI.

"But all of this depends upon mitigating significant risks and designing AI that is purpose-built to improve and reimagine our democratic institutions."

Harmful
Beth Noveck, director of the Burnes Center for Social Change and Innovation and its partner project, The Governance Lab, commented, "One of the most concerning changes that could occur by 2035 is the increased use of artificial intelligence (AI) to bolster authoritarianism. With the rise of populist authoritarians and the susceptibility of more people to such authoritarianism as a result of widening economic inequality, fear of climate change and misinformation, there is a risk of digital technologies being abused to the detriment of democracy.

  • "AI-powered surveillance systems could be used by authoritarian governments to monitor and track the activities of citizens. This could include facial recognition technology, social media monitoring and analysis of internet activity. Such systems could be used to identify and suppress dissenting voices, intimidate opposition figures and quell protests.
  • "AI could be used to create and disseminate propaganda and disinformation. We've already seen how bots have been responsible for propagating misinformation during COVID and election cycles. Manipulation could involve the use of deepfakes, chatbots and other AI-powered tools to manipulate public opinion and suppress dissent. Deepfakes, which are manipulated videos or images such as those at https://this-person-does-not-exist.com, illustrate the potential for spreading disinformation and manipulating public opinion. They have the potential to undermine trust in information and institutions and to create chaos and confusion. Authoritarian regimes could use these tools to spread false information and discredit opposition figures, journalists and human rights activists.
  • 鈥淎I-powered predictive policing tools could be used by authoritarian regimes to target specific populations for arrest and detention. These tools use data analytics to predict where and when crimes are likely to occur and who is likely to commit them. In the wrong hands, these tools could be used to target ethnic or religious minorities, political dissidents, and other vulnerable groups.
  • 鈥淎I-powered social credit systems are already in use in China and could be adopted by other authoritarian regimes. These systems use data analytics to score individuals based on their behavior and can be used to reward or punish citizens based on their social credit score. Such systems could be used to enforce loyalty to the government and suppress dissent.
  • 鈥淎I-powered weapons and military systems could be used to enhance the power of authoritarian regimes. Autonomous weapons systems could be used to target opposition figures or suppress protests. AI-powered cyberattacks could be used to disrupt critical infrastructure or target dissidents.

"It is important to ensure that AI is developed and used in a responsible and ethical manner, and that its potential to be used to bolster authoritarianism is addressed proactively."

Beneficial
Leiska Evanson, a Caribbean-based futurist and consultant, commented, "The most beneficial change that digital technology is likely to manifest before 2035 is the same as that offered earlier by the radio and the television: increased learning opportunities for people, including and especially those in more remote locations.

"In the past decade alone, we have implemented stronger satellite and wireless/mobile Internet, distributed renewable energy connections and microgrids, as well as robust cloud offerings that can bolster flagging, inexpensive equipment (e.g., old laptops and cheaper Chromebooks). With this, wonderful websites such as YouTube, edX, Coursera, Udemy and MIT OpenCourseWare have allowed even more people to have access to quality learning opportunities once they can connect to the Internet.

"With this, persons who, for various reasons, may be bound to their locations can continue to expand their minds beyond physical and monetary limitations. Indeed, the COVID-19 pandemic has shown that the Internet is vital as a repository and enabler of knowledge acquisition. With more credentialing bodies embracing various methods to ensure quality of education (anti-cheat technologies and temporary remote surveillance), people everywhere will be able to gain globally recognised education from secondary and tertiary institutions."

Harmful
Leiska Evanson, a Caribbean-based futurist and consultant, said, "Colonialist languages have beaten down and eradicated local languages, and this continues unabated with the Internet. Programming languages are almost all in American English. Non-Latin languages are barely represented. Non-European African and American languages will be extinct by 2035, and even European sublanguages are suffering. Until we translate scripting and programming languages to allow something other than English, human language and thought will be constrained into fewer dimensions."

Beneficial
Richard L. Wood, founding director of the Southwest Institute on Religion, Culture and Society at the University of New Mexico, said, "Among the best and most beneficial changes in digital life that I expect are likely to occur by 2035 are the following advances, listed by category:

"Human-centered development of digital tools and systems that safely advance human progress will include:

  • High-end technology to compensate for vision, hearing and voice loss
  • Software that empowers new levels of human creativity in the arts, music, literature, etc., while simultaneously allowing those creators to benefit financially from their own work
  • Software that empowers local experimentation with new governance regimes, institutional forms and processes, and ways of building community and then helps mediate the best such experiments to higher levels of society and broader geographic settings.

"Improvement of social and political interactions will include:

  • Software that actually delivers on the early promise of connectivity to buttress and enable wide and egalitarian participation in democratic governance, electoral accountability and voter mobilization, and that holds elected authorities and authoritarian demagogues accountable to common people
  • Software able to empower dynamic institutions that answer to people's values and needs rather than (only) institutional self-interest
  • Software that empowers local experimentation with new governance regimes, institutional forms and processes, and ways of building community and then helps mediate the best such experiments to higher levels of society and broader geographic settings.

"Human rights-abetting good outcomes for citizens will include:

  • Systematic and secure ways for everyday citizens to document and publicize human rights abuses by government authorities, private militias and other non-state actors.

"Advancement of human knowledge (verifying, updating, safely archiving and elevating the best of it) will include:

  • Knowledge systems with algorithms and governance processes that empower people, simultaneously capable of curating sophisticated versions of knowledge, insight and something like 'wisdom' and of subjecting such knowledge to democratic critique and discussion, i.e., a true 'democratic public arena' that is digitally mediated.

"Helping people be safer, healthier and happier will include:

  • True networked health systems in which multiple providers across a broad range of roles, as well as health consumers/patients, can 'see' all relevant data and records simultaneously, with expert interpretive assistance available and full protections for patient privacy built in
  • Social networks built to sustain human thriving via mutual deliberation and shared reflection regarding personal and social choices."

Harmful
Richard L. Wood, founding director of the Southwest Institute on Religion, Culture and Society at the University of New Mexico, wrote, "Among the most harmful or menacing changes in digital life that I expect are likely to occur by 2035 are the following, listed by category:

"Human-centered development of digital tools and systems:

  • Integration of human persons into digitized software worlds to a degree that de-centers human moral and ethical reflection, subjecting that realm of human judgment and critical thought to the imperatives of the digital universe (and its associated profit-seeking, power-seeking or fantasy-dwelling behaviors)

"Human connections, governance and institutions:

  • The replacement of actual in-person human interaction (in keeping with our status as evolved social animals) with mediated digital interaction that satisfies immediate pleasures and desires without actual human social life with all its complexity.

"Human rights:

  • Overwhelming capacity of authoritarian governments to monitor and punish advocacy for human rights; overwhelming capacity of private corporations to monitor and punish labor activism.

"Human knowledge:

  • Knowledge systems that continue to exploit human vulnerability to group think in its most anti-social and anti-institutional modes, driving subcultures toward extremes that tear societies apart and undermine democracies. Outcome: empowered authoritarians and eventual historical loss of democracy.

"Human health and well-being:

  • Social networks that continue to hyper-isolate individuals into atomistic settings, then recruit them into networks of resentment and anti-social views and action that express the nihilism of that atomized world."

Beneficial
Daniel S. Schiff, assistant professor and co-director of the Governance and Responsible AI Lab at Purdue University, wrote, "Some of the most beneficial changes in digital technology and human use of digital systems may surface through impacts on health and well-being, education and the knowledge economy, and consumer technology and recreation. I anticipate more moderate positive impacts in areas like energy and environment, transportation, manufacturing and finance, and have only modest optimism around areas like democratic governance, human rights and social and political cohesion.

"In the next decade, the prospects for advancing human well-being, inclusive of physical health, mental health and other associated aspects of life satisfaction and flourishing, seem substantial. The potential of techniques like deep learning to predict the structure of proteins, identify candidates for vaccine development and diagnose diseases based on imaging data has already been demonstrated. The upsides for humans of maturing these processes and enacting them robustly in our health infrastructure are profound. Even the use of virtual agents or chatbots to expand access to medical, pharmaceutical and mental health advice (carefully designed and controlled) could be deeply beneficial, especially for those who have historically lacked access. These and other tools in digital health, such as new medical devices, wearable technologies for health monitoring, and yet-undiscovered innovations focused on digital well-being, could represent amongst the most important impacts from digital technologies in the near future.

"We might also anticipate meaningful advances in our educational ecosystem and broader knowledge economy that owe their thanks to digital technology. While the uptake of tools like intelligent tutoring systems (AI in education) has been modest so far in the 21st century, in the next decade, primary, secondary and postsecondary educational institutions may have the time to explore and realize some of the most promising innovations. Tools like MOOCs that suffered a reputational setback in part because of the associated hype cycle will have had ample time to mature along with the growing array of online/digital-first graduate programs, and we should also see success for emerging pedagogical tools like AR- or VR-based platforms that deliver novel learning experiences. Teachers, ed tech companies, policymakers and researchers may find that the 2030s provide the time for robust experimentation, testing and 'survival of the fittest' for digital innovations that can benefit students of all ages.

"Yet some of the greatest benefits may come outside of the formal educational ecosystem; it has become clear that tools like large language models are likely to substantially reform how individuals search for, access, synthesize and even produce information. Thanks to improved user interfaces and user-centered design along with AI, increased computing power, and increased internet access, we may see widespread benefits in terms of convenience, time saved and the informal spread of useful practices. A more convenient and accessible knowledge ecosystem powered by virtual assistants, large language models and mobile technology could, for example, lead to easy spreading of best practices in agriculture, personal finance, cooking, interpersonal relationships and countless other areas.

"Further, consumer technologies focused on entertainment and recreation seem likely to impact human life positively in the 2030s. We might expect to see continued proliferation of short- and long-form video content on existing and yet-unnamed platforms, heightened capabilities to produce high-quality television and movies, advanced graphics in individual and social video games, and VR and AR experiences ranging from music to travel to shopping. Moreover, this content is likely to increase in quantity, quality and diversity, reaching individuals of different ages, backgrounds and regions, especially if the costs of production are decreased (for example, by generative AI techniques) and access expanded by advanced internet and networking technologies. The prospects for individuals to produce, share and consume all manner of content for entertainment and other forms of enrichment seem likely to have a major impact on the daily experiences of humans.

"There are too many other areas where we should expect positive benefits from digital technology to list here, many in the form of basic and applied computational advances leading to commercialized and sector-specific tools. Some of the most promising include advances in transportation infrastructure, autonomous vehicles, battery technology, energy distribution, clean energy, sustainable and efficient materials, better financial and healthcare recommendations and so on. All of these could have tangible positive impacts on human life and would owe much (but certainly not all) of this to digital technology.

"Perhaps on a more cautionary note, I find it less likely that these advances will be driven through changes in human behavior, institutional practices and other norms per se. For example, the use of digital tools to enhance democratic governance is exciting and certain countries are leading here, but these practices require under-resourced and brittle human institutions to enact, as well as the broader public (not always digitally literate) to adapt.

“Thus, I find it unlikely we will have an international ‘renaissance’ in digital citizen participation, socioeconomic equity or human rights resulting from digital advances, though new capabilities for citizen service request fulfillment, voting access or government transparency would all be welcome. For similar reasons, while some of the largest companies have already made great progress in reshaping human experience via thoughtful human-centered design practices, with meaningful impact given their scale, spreading this across other companies and regions would seem to require significant human expertise, resources and changes in education and norms.

“Reaching a new paradigm of human culture, so to speak, may take more than a decade or two. Even so, relatively modest improvements driven by humans in data and privacy culture, social media hygiene and management of misinformation and toxic content can go a long way.

"Instead then, I feel that many of these positive benefits will arrive due to 'the technologies themselves' (crassly speaking, since the process of innovation is deeply socio-technical) rather than because of human-first changes in how we approach digital life. For example, I feel that many of the total benefits of advances in digital life will result from the 'mere' scaling of access to digital tools, through cheaper energy, increased Internet access, cheaper computers and phones, and so on. Bringing hundreds of millions or billions of people into deeper engagement with the plethora of digital tools may be the single most important change in digital life in the next decades."

Harmful
Daniel S. Schiff, assistant professor and co-director of the Governance and Responsible AI Lab at Purdue University, said, "Some of the more concerning impacts in digital life in the next decade could include techno-authoritarian abuses of human rights, continued social and political fracturing augmented by technology and mis/disinformation, missteps in social AI and social robotics, and calcification of subpar governance regimes that preclude greater paradigm shifts in human digital life. As often occurs with emerging technology, we may see innovations introduced without sufficient testing and consideration, leading to scandals and harms, as well as more intentional abuses by hostile actors.

"Perhaps the most menacing manifestation of harmful technology would be the realization of hyper-effective surveillance regimes by state actors in authoritarian countries, with associated tools also shared with other countries by state actors and unscrupulous firms. It's already clear that immense human data production coupled with biometrics and video surveillance can create environments that severely hobble basic human freedoms. Even more worrisome is that the sophistication of digital technologies could lead techno-authoritarian regimes to be so effective that they cripple prospects for public feedback, resistance, protest and change altogether. Pillars of societal change like in-person and digital assembly, sharing of ideas inside and outside of borders, and institutions of higher education serving as hubs of reform could disappear in the worst case. To the extent that nefarious regimes are able to track and predict dissident ideas and individuals, deeply manipulate information flow and even generate new forms of targeted persuasive disinformation and instill fear, some corners of the world could be locked into particularly horrific status quos. Even less successful efforts here are likely to harm basic human freedoms and rights, including those of political, gender, religious and ethnic minorities.

"Another fear, voiced through successive historical waves of technology (e.g., radio, television, the Internet), is the dehumanization and dissolution of social life through technology. Yet these fears do not seem unfounded, as we have watched the collapsing trust in news media, proliferation of misinformation and disinformation via social media platforms, and fracturing of political groups leading to new levels of affective polarization and outgroup dehumanization in recent decades. Misinformation in text or audio-visual formats deserves a special call-out here. I might expect ongoing waves of scandal over the coming years as various realistic generative capabilities become democratized, imagined harms become realized (in fraud, politics, violence), and news cycles try to make sense of these changes. The next scandal or disaster owing to misinformation seems just around the corner, and many such harms are likely happening that we are not aware of.

"There are other reasons to expect digital technology to become more individualized and vivid. Algorithmic recommendations are likely to become more accurate (however accuracy is defined), and increased data, including potentially biometric, physiological, synthetic and even genomic data, may feature in these systems. Meanwhile, bigger screens, clever user experience design, and VR and AR technologies could make these informational inputs feel all the more real and pressing. Pessimistically speaking, this means that communities that amplify our worst impulses and prey upon our weaknesses, and individuals that preach misinformation and hate, may be more effective than ever in finding and persuading their audiences. Fortunately, there are efforts underway to combat these trends in current and emerging areas of digital life, but several decades into the Internet age, we have not yet gotten ahead of bad actors and the sometimes surprising negative emergent and feedback effects. We might expect a continuation of some of the negative trends enabled by digital technology already in the 21st century, with new surprises to boot.

"The power of social technologies like virtual assistants and large language models has also started to become clear to the mass public. In the next decade, it seems likely to me that we will have reached a tipping point where social AI or embodied robots become widely used in settings like education, healthcare and elderly care. Benefits aside, these tools will still be new and their ethical implications are only starting to be understood. Empirical research, best practices and regulation will need to play catch-up. If these tools are rolled out too quickly, the potential to harm vulnerable populations is greater. Our excitement here may be greater than our foresight.

"And unfortunately, more technology and innovation seem poised to exacerbate inequality (on some important measures) under our current economic system. Even as we progress, many will remain behind. This might be especially true if AI causes acceleration effects, granting additional power to big corporations due to network/data effects, and if international actors do not work tirelessly to ensure that benefits are distributed rather than monopolized. One unfortunate tendency is for rights and other beneficial protections to lag in low-income countries; an unscrupulous corporation may be banned from selling an unsafe digital product or using misleading marketing in one country and decide that another unprotected market exists in a lower-income corner of the world. The same trends hold for misinformation and content moderation, for digital surveillance, and for unethical labor practices used to prop up digital innovation. What does the periphery look like in the AI era? To prevent some of the most malicious aspects of digital change, we must have a global lens.

"Finally, I fear that the optimists of the age may find that the most creative and beneficial reforms do not take hold. Regulatory efforts that aim to center human rights and well-being may fall somewhat to the banalities of trade negotiations and the power of big technology companies. Companies may become better at ethical design, but also better at marketing it, and it remains unclear how much the public knows about whether a digital tool and its designer are ethical or trustworthy. It seems true that there is historically high attention to issues like privacy, cybersecurity, digital misinformation, deepfakes, algorithmic bias and so on.

"Yet even for areas where experts have identified best practices for years or decades, economic and political systems are slow to change, and incentives and timelines remain deeply unaligned to well-being. Elections continue to be run poorly, products continue to be dangerous and actors continue to find workarounds to minimize the impact of governance reforms on their bottom line. In the next decade, I would hope to see several major international reforms take hold, such as privacy reforms like GDPR maturing in their implementation and enforcement, and perhaps laws like the EU AI Act starting to have a similar impact. Overall, however, we do not seem poised for a revolution in digital life. We may have to content ourselves with the hard work required for slow iteration and evolution instead."

Beneficial
Stephen Downes, expert with the Digital Technologies Research Centre of the National Research Council of Canada, wrote, "By 2035 two trends will be evident, which we can characterize as the best and worst of digital life. Neither, though, is unadulterated. The best will contain elements of a toxic underside, and the worst will have its beneficial upside.

  • The best: everything we need will be available online.
  • The worst: everything about us will be known; nothing about us will be secret.

"By 2035, these will only be trends, that is, we won't have reached the ultimate state, and there will be a great deal of discussion and debate about both sides.

"The Best: As we began to see during the pandemic, the digital economy is much more robust than people expect. Within a few months, services emerged to support office work, deliver food and groceries, take classes and sit for exams, perform medical interventions, provide advice and counselling, shop for clothing and hardware, and more, all online, all supported by a generally robust and reliable delivery infrastructure.

"Looking past the current rebound effect, we can see some of the longer-term trends emerge: work-from-home, online learning and development, digital delivery services, and more along the same lines. We're seeing a longer-term decline in the service industry as people choose both to live and work at home, or at least, more locally. Outdoor recreation and special events still attract us, but low-quality crowded indoor work and leisure leave us cold.

"The downside is that this online world is reserved, especially at first, for those who can afford it. Though improving, access to goods and services is still difficult to obtain in rural and less-developed areas. It requires stable accommodations and robust internet access. These in turn demand a set of skills that will be out of reach for older people and those with perceptual or learning challenges. Even when they can access digital services, some people will be isolated and vulnerable; children, especially, must be protected from mistreatment and abuse."

Harmful
Stephen Downes, expert with the Digital Technologies Research Centre of the National Research Council of Canada, wrote, "The Worst: We will have no secrets. Every transaction we conduct will be recorded and discoverable. Cash transactions will decline to the point that they're viewed with suspicion. Automated surveillance will track our every move online and offline, with artificial intelligence recognizing us through our physical characteristics, habits and patterns of behaviour. The primary purpose of this surveillance will be for marketing, but it will also be used for law enforcement, political campaigns, and in some cases, repression and discrimination.

"Surveillance will be greatly assisted by automation. A police officer, for example, used to have to call in for a report on a license plate. Now a camera scans every plate that passes within view and a computer checks every single one of them. Registration and insurance documentation is no longer required; the system already knows and can alert the officer to expired plates or outstanding warrants. Facial recognition can accomplish the same for people walking through public places. Beyond the cameras, GPS tracking follows us as we move about, while every single purchase is recorded somewhere.

"The greatest risk of total surveillance is an unwelcome, and often unjust, differentiation in the treatment of individuals. People who need something more, for example, may be charged higher prices; we already see this in insurance, where differential treatment is described as assessment of risk. Parents with children may be charged more for milk than unmarried men. The prices of hotel rooms and airline tickets are already differentiated by location and search history and could vary in the future based on income and recent purchases. People with disadvantages or facing discrimination may be denied access to services altogether, as digital redlining expands to become a normal business practice.

"What makes this trend pernicious is that none of it is visible to most observers. Not everybody will be under total surveillance; the rich and the powerful will be exempted, as will most large corporations and government activities. Without open data regulations or sunshine laws, nobody will be able to detect when people have been treated inequitably, unfairly or unjustly.

"And this is where we begin to see the beginnings of an upside. The same system that surveils us can help keep us safe. If child predators are tracked, for example, we can be alerted to their presence near our children. Financial transactions will be legitimate and legal or won't exist (except in cash). We will be able to press an SOS button to get assistance wherever we are. Our cars will detect and report an accident before we know we were in one. Ships and aircraft will no longer simply disappear. But this does not happen without openness and laws to protect individuals, and it will lag well behind the development of the surveillance system itself.

"On Balance: Both the best and the worst of our digital future are two sides of the same digital coin, and this coin consists of the question: who will digital technology serve? There are many possible answers. It may be that it serves only the Kochs, Zuckerbergs and Musks of the world, in which case the employment of digital technology will be largely indifferent to our individual needs and suffering. It may be that it serves the needs of only one political faction or state, in which basic needs may be met, provided we do not disrupt the status quo. It may be that it provides strong individual protections, leaving no recourse for those who are less able or less powerful. Or it may serve the interests of the community as a whole, finding a balance between needs and ability, providing each of us with enough agency to manage our own lives, but not to the detriment of others.

"Technology alone won't decide this future. It defines what's possible. But what we do is up to us."

Beneficial
Wendy Grossman, a UK-based science writer, author of "net.wars" and founder of the magazine The Skeptic, commented, "For the moment, it seems clear that the giants that have dominated the technology sector since around 2010 are losing ground as advertisers respond to social and financial pressures, as well as regulatory activity and antitrust actions. This is a *good* thing, as it opens up possibilities for new approaches that don't depend on constant, privacy-invasive surveillance of Internet users.

"With any luck, that change in approach should spill over into the physical world to create smart devices that serve us rather than the companies that make them. A good example at the moment is smart speakers, whose business models are failing. Amazon is finding that consumers don't want to use Alexa to execute purchases; Google is cutting back the division that makes Google Home.

"Similarly, the ongoing relentless succession of cyberattacks on user data might lead businesses and governments to recognize that large pools of data are a liability, and to adopt structures that put us in control of our own data and allow us to decide whom to share it with. In the UK, Mydex and other providers of personal data stores have long been pursuing this approach.

"I would like to think that by 2035 we will not still be fighting over whether citizens should be allowed to use strong encryption, even if it's inconvenient for law enforcement. This dispute is already 30 years old!

“I think the machine learning approach to artificial intelligence (which I like to call 鈥榓spirational intelligence鈥) will soon hit its limits, but by 2035 we will still be finding new ways to use what we have.

“I do not think that by 2035 we will have an 鈥榓rtificial general intelligence鈥 or that we will have passed the 鈥榮ingularity鈥 beloved by Ray Kurzweil. This is a *good* thing.

"Many of the other items in your list are more dependent on what governments get elected and what policies they pursue than they are on what technology gets developed or how and to whom it is deployed. I'm thinking particularly of human rights, human-centered development, and human health and well-being."

Harmful
Wendy Grossman, a UK-based science writer, author of "net.wars" and founder of the magazine The Skeptic, said, "Many of the biggest concerns about life until 2035 are not specific to the technology sector: the impact of climate change and the disruption and migration it is already beginning to bring; continued inequality and the likely increase in old-age poverty as Generation Rent reaches retirement age without the means to secure housing; the ongoing overall ill-health (cardiovascular disease, diabetes, dementia) that is and will be part of the legacy of the SARS-CoV-2 pandemic. These are sweeping problems that will affect all countries, and while technology may help ameliorate the effects, it can't stop them. Many people never recovered from the 2008 financial crisis (see the movie 'Nomadland'); the same will be true for those worst affected by the pandemic.

"In the short term, the 2023 explosion of new COVID cases expected in China will derail parts of the technology industry; there may be long-lasting effects.

"I am particularly concerned about the increasing dependence, in all aspects of life, on systems that require electrical power to work. We rarely think in terms of providing alternative systems that we can turn to when the main ones go down. I'm thinking particularly of those pushing to get rid of cash in favor of electronic payments of all types, but there are other examples.

"If allowed to continue, the reckless adoption of new technology by government, law enforcement and private companies without public debate or consent will create a truly dangerous state. I'm thinking in particular of live facial recognition, which just a few weeks ago was used by MSG Entertainment to locate and remove lawyers attending concerts and shows at its venues because said lawyers happened to work for firms that are involved in litigation against MSG. (The lawyers themselves were not involved.) This way lies truly disturbing and highly personalized discrimination. Even more dangerous, the San Francisco Police Department has proposed to the city council that it should be allowed to deploy robots with the ability to maim and kill humans, only for use in the most serious situations, of course.

“Airports provide a good guide to the worst of what our world could become. In a piece I wrote in October 2022, I outline what the airports of the future, being built today without notice or discussion, will be like: all-surveillance all the time, with little option to ask questions or seek redress for errors. Airports – and the Disney parks – provide a close look at how ‘smart cities’ are likely to develop.

“I would like to hope that decentralized sites and technologies like Mastodon, Discord and others will change the dominant paradigm for the better – but the history of cooperatives tends to show that there will always be a few big players. Email provides a good example. While it is still true that anyone can run an email server, it is no longer true that they can do so as an equal player in the ecosystem. Instead, it is increasingly difficult for a small server to get its connections accepted by the tiny handful of big players. Accordingly, the most likely outcome for Mastodon will be a small handful of giant instances, and a long, long tail of small ones that find it increasingly difficult to function. The new giants created in these federated systems will still find it hard to charge or sell ads. They will have to build their business models on ancillary services for which the social media function provides lock-in, just as today Gmail profits Google nothing, but it underpins people’s use of its ad-supported search engine, maps, Android phones, etc. This provides Google with a social graph it can use in its advertising business.

鈥淏y 2035, today’s streaming services will likely have reconstituted themselves into something very like legacy TV, with ad-supported tiers (Netflix is already doing this), schedule grids and all the rest (see predictions by The Masked Scheduler, a former scheduler for the CBS network). The current situation is unsustainable; most people cannot afford the money to subscribe to dozens of streaming services or the time to figure out which services have the shows they actually want to watch. Legacy broadcasters will become streaming first; cable companies will shrink as people are driven away by costs and incessant advertising.鈥

Beneficial
Jamais Cascio, distinguished fellow at the Institute for the Future, wrote, “The benefits of digital technology in 2035 will come as little surprise for anyone following this survey. Better-contextualized and explained information; greater awareness about the global environment; clarity about surroundings that accounts for and reacts to not just one’s physical location but the ever-changing set of objects, actions and circumstances one encounters; the ability to craft ever more immersive virtual environments for entertainment and comfort; and so forth. The usual digital-nirvana stuff.

“The explosion of machine-learning-based systems (like GPT or Stable Diffusion) doesn’t alter that broad trajectory much, other than that AI (for lack of a better and recognizable term) will be deeply embedded in the various physical systems behind the digital environment. The AI gives context and explanation, learning about what you already know. The AI learns what to pay attention to in your surroundings that may be of personal interest. The AI creates responsive virtual environments that remember you. (All of this would remain the likely case even if ML-type systems get replaced by an even more amazing category of AI technology, but let’s stick with what we know is here for now.)

“However, this sort of AI adds a new element to the digital cornucopia: autocomplete. Imagine a system that can take the unique and creative notes a person writes and, using what it has learned about the individual and their thoughts, turn those notes into a full-fledged written work. The human can add notes to the drafts, becoming an editor of the work that they co-write with their personalized system. The result remains unique to that person and true to their voice, but does not require that the person create every letter of the text. And it will greatly speed up the process of creation.

“What’s more, this collaboration can be flipped, with the (personalized, true-to-voice) digital system providing notes, observations, even edits to the fully human-written work. It’s likely that old folks (like me) would prefer this method, even if it remains stuck at a human-standard pace.

“Add to that the ability to take the written creation and transform it into a movie, or a game, or a painting, in a way that remains true to the voice and spirit of the original human mind. A similar system would be able to create variations on a work of music or art, transforming it into a new medium but retaining the underlying feeling.

“Computer games will find this technology of enormous value, adding NPCs based on machine learning that can respond to whatever the player says or does, based on context and the in-game personality, not a basic script. It’s an autocomplete of the imagined world. This will be welcomed by gamers at first, but quickly become controversial when in-game characters can react appropriately when the player does something awful (but funny). I love the idea of an in-game NPC saying something like ‘hey man, not cool’ when the player says something sexist or racist.”
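Cascio’s ‘creative autocomplete’ – terse notes expanded into a draft that the human then edits – can be caricatured in a few lines. This is only a toy sketch, not any real product’s API: a one-line template stands in for the personalized model he describes, and every name here is invented for illustration.

```python
# Toy sketch of "creative autocomplete": an author's terse notes are
# expanded into draft prose, and the author then edits the result.
# A trivial template stands in for the personalized model.

def expand_notes(notes, voice="plain"):
    """Expand bullet-point notes into draft sentences in a chosen 'voice'."""
    openers = {"plain": "It seems clear that", "breezy": "Here's the thing:"}
    opener = openers.get(voice, openers["plain"])
    return " ".join(f"{opener} {n.rstrip('.')}." for n in notes)

def author_edit(draft, replacements):
    """The human stays in the loop, revising the machine's draft."""
    for old, new in replacements.items():
        draft = draft.replace(old, new)
    return draft

draft = expand_notes(["AI will be embedded everywhere"], voice="breezy")
final = author_edit(draft, {"everywhere": "in most physical systems"})
print(final)  # Here's the thing: AI will be embedded in most physical systems.
```

The point of the sketch is the division of labor Cascio envisions: the machine drafts at machine speed, while the final say over every word stays with the person.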

Harmful
Jamais Cascio, distinguished fellow at the Institute for the Future, asked, “Where to begin? To start with, the various benefits I describe in the first part can be flipped into something monstrous, using the exact same types of technology. Systems of decontextualization, providing raw data – which may or may not be true – without explanation or with incomplete or biased explanations. Context-less streams of info about how the world is falling apart without any explanation of what changes can be made. Systems of misinformation or censorship, blocking out (or falsely replacing) external information that may run counter to what the system (its designers and/or its seller) wants you to see. Immersive virtual environments that exist solely to distract you or sell you things.

“And, to quote Philip J. Fry on ‘Futurama,’ ‘My god, it’s full of ads.’

“Machine-learning-based ‘autocomplete’ technologies that help expand upon a person’s creative work could easily be used to steer a creator away from or towards particular ideas or subjects. The system doesn’t want you to write about atheism or paint a nude, so the elaborations and variations it offers up push the creator away from bad themes. This is especially likely if the machine-learning AI tools come from organizations with strong opinions and a wealth of intellectual property to learn from. Disney. The Catholic Church. The government of China. The government of Iran. Any government, really. Even that mom-and-pop discount snacks and apps store on the corner has its own agenda.

“What’s especially irritating is that nearly all of this is already here in nascent form. Even the ‘autocomplete’ censorship can be seen: both GPT-3 and Midjourney (and likely nearly all of the other machine-learning tools open to the public) currently put limits on what they can discuss or show. All with good reason, of course, but the snowball has started rolling. And whether or not the digital art theft/plagiarism problem will be resolved by 2035 is left as an exercise for the reader.

“The intersection of machine-learning AI and privacy is especially disturbing, as there is enormous potential for the invasion not just of the information about a person, but of what the person believes or thinks, based on the mass collection of that person’s written or recorded statements. This would almost certainly be used primarily for advertising: learning not just what a person needs, but what weird little things they want. We currently worry about the (supposedly false) possibility that our phones are listening to us talk to create better ads; imagine what it’s like to have our devices seemingly listening to our thoughts for the same reason.

“It’s somewhat difficult to catalog the emerging dystopia because nearly anything I describe will sound like a more-extreme version of the present or an unfunny parody. Simulated versions of you and your mind are very likely on their way, going well beyond existing advertising profiles. Gatekeeping the visual commons is inevitably a part of any kind of persistent augmented-reality world, with people having to pay extra to see certain clothing designs or architecture. Demoralizing deepfakes of public figures, not porn but showing them what they could have done right if they were better people.

“Advisors on our shoulders (in our glasses or jewelry, more likely) that whisper advice to us about what we should and should not say or do. Not Devils and Angels, but officials and industry.

“Now I’m depressed.”

Beneficial
Charalambos Tsekeris, vice president of the Hellenic National Commission for Bioethics and Technoethics, commented, “By 2035, digital tools and systems will be developed in a human-centered way, guided by human design abilities and ingenuity. Human ability, regulatory frames and soft pressure from civil society will help address the serious ethical, legal and social issues resulting from new forms of agency and privacy. All in all, collective intelligence, combined with digital literacy, is increasingly cultivating responsibility and shaping our environments (analog or digital) to make them safer and AI-friendly.

“Advancing futures-thinking and foresight analysis will substantially facilitate understanding and preparedness. It will also empower digital users to be more knowledgeable and reflexive upon their rights and the nature and dynamics of the new virtual worlds.

“The power of ethics by design will ultimately orientate internet-enabled technology toward upgrading the quality of human relations and democracy, also protecting digital cohesion, trust and truth from the dynamics of misinformation and fake news.

“In addition, digital assistants and coordination tools will support transparency and accountability, informational self-determination and participation. An inclusive digital agenda will help all users benefit from the fruits of the digital revolution. In particular, innovation in the sphere of AI, clouds and big data will create additional social value and will help to support people in need.

“In sum, the best and most beneficial change will pertain to a significant increase in digital human, social and institutional capital, toward a happy marriage between digital capitalism and democracy.”

Harmful
Charalambos Tsekeris, vice president of the Hellenic National Commission for Bioethics and Technoethics, responded, “By 2035, digital tools and systems will not be able to efficiently and effectively fight social divisions and exclusions, as well as the lack of accountability, transparency and consensus in decision-making. In particular, digital technology systems will continue to function in a shortsighted and unethical way, so that humanity will face unsustainable inequalities and an overconcentration of technoeconomic power. New digital inequalities will amount to serious, alarming threats and existential risks for human civilization.

“These risks will be significantly increased, and will put humanity in danger, in combination with environmental degradation and the overcomplexification of digital connectivity and the global system. No agreed ethical and regulatory frameworks will be found to fix social media algorithms, so that the vicious circle between collective blindness, populism and polarization will be dramatically reinforced. In addition, the fragmentation of the internet (the splinternet) will continue, thus resulting in more geopolitical tensions, less international cooperation and less global peace.

“Overall, the dominant surveillance-for-profit model will continue to prevail by 2035, leading to further loss of privacy, deconsolidation of global democracy and the expansion of cyberfeudalism and data oligarchy. Also, the exponential speed and overcomplexity of datafication and digitalization in general will diminish the human capacity for critical reflection, futures thinking, information accuracy and fact-checking.

“The overwhelming processes of automation and personalization of information will intensify feelings of loneliness among atomized individuals and further disrupt the domains of mental health and well-being. By 2035, the ongoing algorithmization and platformization of markets and services will exercise more pressure on working and social rights, further worsening exploitation, injustice, labor conditions and labor relations. Ghost work and contract breaching will dramatically proliferate.”

Beneficial
Lee Warren McKnight, professor of entrepreneurship and innovation at Syracuse University’s School of Information Studies, wrote, “First, I’d like to comment on human-centered development of digital tools and systems – safely advancing human progress in these systems. By 2035, digital tools and systems will have eliminated the edge. Nowhere will digital resources be unavailable, except by non-ambient design. By 2035, the grassroots could be digitalized, empowering the 37% of the world still largely off the grid in 2023. With ‘worst-case-scenario survival as a service’ widely available, human safety will progress.

“Most will assume I am referencing LEO or microsatellite systems, which is correct, in part. Infrastructureless wireless or cyber-physical infrastructure can span any distance already in 2023. Still, that is just a piece of a wider shared cognitive cyber-physical (IoT) technology, energy, connectivity, security, privacy, ethics, rights, governance and trust virtual-services bundle. Decentralized communities will be adapting these digital, partially tokenized assets to their own needs and sustainable development goals (to speak UN), through to 2035.

“Everyone has been talking about connecting the unconnected and the next billion, and efforts are progressing, with the ITU, the Internet Society, many more UN and civil society organizations, and governments addressing this huge challenge to our global community. I foresee self-help, self-organized, adaptive Cloud-to-(previously-known-as)-Edge community Internet operators solving the last 400 meters or thousand feet problem. Everywhere. They are digitally transforming themselves and are the new community services providers.

“The market effects of edge bandwidth-management innovations, radically lower edge device and bandwidth costs through community traffic aggregation, and fantastically higher access to digital services will be significant enough to measurably raise GDP in nations undertaking their own initiatives to digitalize the grassroots beyond the current reach of telecommunications infrastructure. At the community level, the effect of these initiatives is immediately transformative for the youth of participating communities.

“What I am saying is that human well-being and sustainable development can be better in 2035, supported by shared cognitive computing software and services at the edge – or perhaps a digital twin of the village – operating to custom, decentralized design parameters decided by that community. The effects will significantly raise the incomes of rural residents worldwide. This will not eliminate the Digital Divide, but it will transform it.

“How do I know? Because we are already underway with the Africa Community Internet Program, launched by the UN Economic Commission for Africa in cooperation with the African Union in 2022. Ongoing pilot projects are educating governments and other Internet community multi-stakeholders about what is possible.

“Of course, I have only now mentioned the key part of my prediction: the ‘Africa Community Internet inter-ministry and -parliamentary Task Force, Advisory Group, and dynamic coalition alliance.’ ACITAG is coordinated by ACIP and will attract supremely talented and motivated people (like those who read Pew surveys) and organizations worldwide to contribute and coordinate for their own national, regional, local and community needs. And they will be motivated to synchronize with continent-scale actors such as the African Union, UN agencies and businesses for economies of scale, and with the ITU, IEEE, ICANN, the Internet Society and many more for technical scalability. Latin American and Asian communities, as well as regions of ALL nations, will benefit from elimination of the edge. By 2035. Led by Africans digitally transforming their own communities.

“Secondly, I’d like to comment on the topic of human connections, governance and institutions – improving social and political interactions.

“Trust in ‘zero trust’ environments is at a premium and relies on sophisticated mechanisms in 2035. Certified Ethical AI Developers are the new Silicon Valley elite priesthood, as they are the well-paid orchestrators of machine learning and cognitive (way beyond smart : ) communities. And they are certified to BE ethical in code and by design. Of course, liability insurance disputes delayed progress, but by 2035 the practice and profession of Certified Ethical AI Developers will have cleaned up many legacy systems biased by poor design. And they will have begun to lead others towards this approach, which combines improved multi-dimensional security with privacy, ethics and rights-awareness by design in adaptive complex systems.

“The effects are especially noticeable in community RFP procurement processes, which virally adopt language requiring review by a certified ethical AI developer and their AI tools shortly after their first use, even just for submission of a bid. With this otherwise impossible goal achievable only through certification, many developers and others in and around the technical community suddenly have a new interest in introductory-level philosophy courses, also raising demand for computer science-philosophy double majors through the roof. Data scientists will work for and report to them.

“Of course, just having a certification process for ethical AI developers does not automatically make firms’ business practices more ethical. It serves as a market signal that sloppy Silicon Valley practices also run risks, including loss of market share. Standing alongside all the statements of ethical AI principles, certified ethical AI developers will be 2035’s reality 5D TV stars, vanquishing bad and evil AI systems.

“By 2035 many people, knowing that if principles are not practiced they have no effect, will insist that they will not use or buy anything if it does not come with a certified ethical AI developer’s assurance that someone at least tried to make the system safe for humans. And cities will not buy anything that has not been reviewed, at the least, by an ethical AI developer and their trusted ethical AI white- and red-hat digital twins.”

Harmful
Lee Warren McKnight, professor of entrepreneurship and innovation at Syracuse University’s School of Information Studies, wrote, “I have concerns over human-centered development of digital tools and systems falling short of advocates’ goals. Good, bad and evil AI will threaten societies, undermine social cohesion, spark suicides and domestic and global conflict, and undermine human well-being. Just as profit-motivated actors, nation-states and billionaire oligarchs have weaponized advocacy for guns over people – leading to skyrocketing murder rates and a shorter lifespan in the United States – similar groups, and the machine learning and neural network systems they manipulate, are arising under the influence of AI.

“They already have. To define terms: good AI is ethical and good by evidence-based design. Bad AI is ill-formed, whether by ignorance and human error or by bad design. In 2035, evil AI could be a good AI or a bad AI gone bad due to a security compromise or malicious actor, or it could be bad-to-the-bone evil AI created intentionally to disrupt communities, crash systems and foster murders and death.

  • The manufacturers of disinformation, both private-sector and government information-warfare campaign managers, will all be using a variety of ChatGPT-gone-bad-like tools to infect societal discourse, systems and communities.
  • Manipulated media and surveillance systems will be integrated to infect communities as a wholesale, on-demand service.
  • Custom evil AI services will be preferred by stalkers and rapists.
  • Mafia-like protection rackets will grow to pay off potential AI attackers as a cost of doing only modestly bad business.
  • Both retail and wholesale market growth for evil AI will have compound effects, with both cyber-physical mass-casualty events and more psychologically damaged, unfair-and-unbalanced, artificially intelligent evil digital twins perfectly attuned to personalize evil effects on the infected – that is, artificially influenced – to go bad. Evil robotic process automation will be a growth industry through to 2035, to improve scalability.”

Beneficial
Avi Bar-Zeev, president of the XR Guild and veteran innovator of XR tools for several top internet companies, said, “XR: By 2035, we have all-day wearable glasses that can do both AR and VR. The question is what do we use them for? No longer needing screens, smartphones have shrunk down to the size of keychains, if we still remember those (most doors unlock based on our digital ID). The primary use of XR is communications, bringing photorealistic holograms of other people to us, wherever we are. Those other participants also experience their own augmented spaces without us having to share our 3D environments.

“The upside of this is that we’re more connected, albeit mostly asynchronously. It would be impossible for us to be constantly connected to everyone in every situation, so we developed social protocols like we did with texting, allowing us to pop into and out of each other’s lives without interrupting. The experience is a lot like having a whole team of people at your back, ready to whisper ideas in your ears based on snippets of real life you choose to share.

“AI: The current wave of generative AI has taught us that the best AI is made of people, both providing our creative output and also filtering the results to be acceptable to people. By 2035, the business models will have shifted to rewarding those creators and value-adders such that the result looks more like a corporation today. We’ll contribute, get paid for our work, and the AI-as-corporation produces an unlimited quantity of new value from the combination for everyone else. It’s as if we cracked the ultimate code for how people can work efficiently together – extract their knowledge and ideas and let the cloud combine these in milliseconds. Still, we can’t forget the human inputs or it’s just another race to the bottom.

“The flip side of this is that what we today might call ‘recommendation AI’ merges with the above to form a kind of superintelligence that can find the most contextually appropriate content, both virtually and IRL. That tech forms a kind of personal firewall that keeps our personal context private but allows us to securely gather the best inputs the world can offer, without giving away our privacy.

“Metaverse: By 2035, the word ‘Metaverse’ is now as popular as ‘Cyberspace’ and ‘Information Superhighway’ became over time. The companies prefixing their names with ‘meta’ are all kind of boring now. However, given the XR and AI trends above, we can now think of the Metaverse equivalent as the information space we all inhabit.

“The main shift by 2035 is that we don’t care about it as a space, but as a massive interconnection among 10 billion people. The AR tech and AI fade into the background, and we see other people as valued creators and consumers of each other’s work, supporters of each other’s lives and social needs.”

Harmful
Avi Bar-Zeev, president of the XR Guild and veteran innovator of XR tools for several top internet companies, commented, “Each of the previous technologies goes to its worst outcome quickly if the technologies are built for the benefit of companies that monetize their customers. XR becomes exploitive and not socially beneficial. AI builds empires on the backs of real people’s work and deprives them of a living wage as a result. The Metaverse becomes a vast and insipid landscape of exploitive opportunities for companies to mine us for information and wealth, while we become enslaved to psychological countermeasures designed to keep us trapped and subservient to our digital overlords. The key difference between the most positive and negative uses of these three related technologies is whether the systems are designed to help and empower people or exploit them.”

Beneficial
Marjory S. Blumenthal, senior adjunct policy researcher at RAND Corporation, responded, “In a little over a decade, it is reasonable to expect two kinds of progress in particular. First are improvements in the user experience, especially for people with various impairments (visual, auditory, tactile, cognitive). A lot is said about diversity, equity and inclusion that focuses broadly on factors like income and education, but to benefit from digital technology requires an ability to use it that today remains elusive for many people for physiological reasons. Globally, populations are aging, a process that often confronts people with impairments they didn’t use to have (and of course many experience impairments from birth onward).

“Second, and notwithstanding concerns about concentration in many digital-tech markets, more indigenous technology is likely, at least to serve local markets and cultures. In some cases, indigenous tech will take advantage of indigenous data, which technological progress will make easier to amass and use, and more generally it will leverage a wider variety of talent, especially in the Global South, plus motivations to satisfy a wider variety of needs and preferences (including, but not limited to, support for human rights).”

Harmful
Marjory S. Blumenthal, senior adjunct policy researcher at RAND Corporation, said, “There are two areas where technology seems to get ahead of people’s ability to deal with it, either as individuals or through governance. One is the information environment – for the last few years people have been coming to grips with manipulated information and its uses, and it has been easier for people to avoid the marketplace of ideas by sticking with channels that suit narrow points of view.

“Commentators lament the decline in trust of public institutions and speculate about a new normal that questions everything to a degree that is counterproductive. Although technical and policy mechanisms are being explored to contend with these circumstances, the underlying technologies and commercial imperatives seem to drive innovation that continues to outpace responses. For example, the ability to detect realistic but false images and sound tends to lag the ability to generate them, although both are advancing.

“At a time when there has been a flowering of principles and ethics surrounding computing, new systems like ChatGPT, with a high cool factor, are introduced without any apparent thought to the second- and third-order effects of using them – thoughtfulness takes time and risks loss of leadership. The resulting distraction and confusion will likely benefit the mischievous more than the rest of us – recognizing that crime and sex have long impelled uses of new technology.

“The second is safety. Decades of experience with digital technology have shown our limitations in dealing with cybersecurity, and the rise of embedded and increasingly automated technology introduces new risks to physical safety, even as some of those technologies (e.g., automated vehicles) are touted as long-term improvers of safety.

“Responses are likely to evolve on a sector-by-sector basis, which might make it hard to appreciate interactions among different kinds of technology in different contexts. Although progress on the safety of individual technologies will occur over the next decade, the cumulation of interacting technologies will add complexity that will challenge understanding and response.”

Beneficial
Louis Rosenberg, CEO and chief scientist at Unanimous AI, predicted, “As I look ahead to the year 2035, it’s clear to me that certain digital technologies will have an outsized impact on the human condition, affecting each of us as individuals and all of us as a society. These technologies will almost certainly include artificial intelligence, immersive media (VR and AR), robotics (service and humanoid robots) and powerful advancements in human-computer interaction (HCI) technologies. At the same time, blockchain technologies will continue to advance, likely enabling us to have persistent identity and transferable assets across our digital lives, supporting many of the coming changes in AI, VR, AR and HCI.

“So, what are the BEST and MOST BENEFICIAL changes that are likely to occur?

“As a technologist who has worked on VR, AR, AI and HCI for over 30 years, I believe these disciplines are about to undergo a revolution, driving a fundamental shift in how we interact with digital systems. For the last 60 years or so, the interface between humans and our digital lives has been through keyboards, mice and touchscreens to provide input, and the display of flat media (text, images, videos) as output. By 2035, this will no longer be the dominant model. Our primary means of input will be through natural dialog enabled by conversational AI, and our primary means of output will be rapidly transitioning to immersive experiences enabled through mixed-reality eyewear that brings compelling virtual content into our physical surroundings.

“I look at this as a fundamental shift from the current age of ‘flat computing’ to an exciting new age of ‘natural computing.’ That’s because by 2035, human interface technologies (both input and output) will finally allow us to interact with digital systems the way our brains evolved to engage our world – through natural experiences in our immediate surroundings (mixed reality) and through natural human language (conversational AI).

“As a result, by 2035 and beyond, the digital world will become a magical layer that is seamlessly merged with our physical world. And when that happens, we will look back at the days when people engaged their digital lives by poking their fingers at little screens in their hands as quaint and primitive. We will realize that digital content should be all around us and should be as easy to interact with as our physical surroundings. At the same time, many physical artifacts (like service robots, humanoid robots and self-driving cars) will come alive as digital assets that we engage through verbal dialog and manual gestures. As a consequence, by the end of the 2030s the differences between what is physical and what is digital will largely disappear in our minds.”

Harmful
Louis Rosenberg, CEO and chief scientist at Unanimous AI, said, “I strongly believe that by 2035 our society will be transitioning from the current age of ‘flat computing’ to an exciting new age of ‘natural computing.’ This transition will move us away from traditional forms of digital content (text, images, video) that we engage today with mice, keyboards and touchscreens to a new age of immersive media (virtual and augmented reality) that we will engage mostly through conversational dialog and natural physical interactions.

“While this will empower us to interact with digital systems as intuitively as we interact with the physical world, there are many significant dangers this transition will bring. For example, the merger of the digital world and the physical world will mean that large platforms will be able to track all aspects of our daily lives – where we are, who we are with, what we look at, even what we pick up off store shelves. They will also track our facial expressions, vocal inflections, manual gestures, posture, gait and mannerisms (which will be used to infer our emotions throughout our daily lives). In other words, by 2035 the blurring of the boundaries between the physical and digital worlds will mean (unless restricted through regulation) that large technology platforms will know everything we do and say during our daily lives and will monitor how we feel during the thousands of interactions we have each day.

"This is dangerous and it's only half the problem. The other half of the problem is that conversational AI systems will be able to influence us through natural language. Unless strictly regulated, targeted influence campaigns will be enacted through conversational agents that have a persuasive agenda. These conversational agents could engage us through virtual avatars (virtual spokespeople) or through physical humanoid robots. Either way, when digital systems engage us through interactive dialog, they could be used as extremely persuasive tools for driving influence. For specific examples, I point you to a white paper, 'From Marketing to Mind Control,' written in 2022 for the Future of Marketing Institute, and to the 2022 IEEE paper 'Marketing in the Metaverse and the Need for Consumer Protections.'"

Beneficial
Catriona Wallace, founder of the Responsible Metaverse Alliance, chair of the venture capital fund Boab AI and founder of Flamingo AI, based in Sydney, Australia, said, "I have great hopes for the development of digital technologies and their effect on humans by 2035. The best and most beneficial changes that I believe will occur include the following:

1. Transhumanism: Benefit – improved human condition and health

  • The development of software and hardware that humans will embed in their bodies to overcome current-day problems
  • AI-driven, 3D-printed, fully-customised prosthetics
  • Brain extensions – brain chips connected to other digital interfaces that project brain, thought or dream activity in a useful way for the participant
  • Nanotechnologies that may be ingested or enter into the human body to provide health and other benefits

2. Metaverse technologies: Benefit – improved accessibility to experiences – widespread and affordable access for citizens to:

  • Virtual, augmented and mixed reality platforms for entertainment. This may include access to concerts, the arts or other digitally based entertainment
  • Virtual travel experiences – this may include virtual tours of digital-twin replicas of physical-world sites
  • Virtual education providers, including schools, secondary and tertiary institutions and other learning opportunities
  • Virtual health care, including virtual consultations with doctors and allied health professionals and remote surgery
  • Augmented reality-based apprenticeships for trades and other technical roles, where the apprentice may work remotely on a digital twin of a car or building, for example

3. New financial models: Benefit – more secure and more decentralised finances

  • The emergence of decentralised financial services – built on blockchain – adding ease, security and simplicity to finances
  • The use of NFTs and other digital assets as a medium of currency, value and exchange

4. Autonomous machines: Benefit – human efficiency and safety

  • The widespread adoption of autonomous vehicles
  • The widespread adoption of autonomous appliances

5. AI-driven information: Benefit – access to knowledge, efficiency, and the potential to move human thinking to a higher level once AI does more mundane information-based tasks

  • Widespread adoption of AI-based technologies such as generative AI, leading to a rethink of how the education, content development and marketing industries are constructed
  • Widespread acceptance of AI-based art – such as digital paintings, images or music

6. Psychedelic bio-technology: Benefit – healing and expanded consciousness

  • The Psychedelic Renaissance will be reflected in the proliferation of psychedelic bio-tech companies looking to solve human mental health problems and expand consciousness

7. AI-driven climate action: Benefit – improved climate conditions

  • A core focus of AI will be to drive rapid improvements in addressing climate change."

Harmful
Catriona Wallace, founder of the Responsible Metaverse Alliance, chair of the venture capital fund Boab AI and founder of Flamingo AI, based in Sydney, Australia, wrote, "In my estimation, the most harmful or menacing changes that are likely to occur by 2035 in digital technology and humans' use of digital systems are:

1) Warfare: Harm – human use of AI-driven technologies to maim or kill humans

2) Crime: Harm – increased crime due to difficulties in policing within new technology platforms; removal of state and national boundaries and jurisdictions for crime

3) Organised terrorism: Harm – new platforms for organised crime or terrorism to re-form; mass manipulation of populations or segments towards an enemy

4) Fraud: Harm – new financial models and platforms provide further opportunities for crimes such as fraud

5) Identity theft: Harm – new platforms create difficulties in establishing identity and open opportunities for identity-related crimes

6) Division between digital and non-digital populations: Harm – a split in human society between those who are digitally oriented and those who are not. This divide may further exacerbate the gap between the 'haves' and 'have-nots'

7) Mass unemployment from automation of jobs: Harm – AI replaces the jobs of a percentage of the population, leaving those people on a Universal Basic Income

8) Society's biases hard-coded into machines: Harm – the current gender and minority employment makeup of tech jobs continues, existing societal biases are coded into technology platforms, and datasets continue to underrepresent women and other minorities, resulting in discriminatory outcomes from advanced tech such as AI

9) Increased mental and physical health issues: Harm – the coming of advanced tech such as VR, AR and the metaverse results in humans having increased levels of mental and physical health conditions

10) Challenges in legal jurisdictions: Harm – the lack of state, national and international boundaries in platforms such as the metaverse results in legal issues and challenges

11) High-tech impact on the environment: Harm – the use of advanced technology and related carbon emissions has a negative impact on the environment."

Beneficial
Giacomo Mazzone, global project director for the United Nations Office for Disaster Risk Reduction, said, "I see the future as a 'sliding doors' world. It can go awfully wrong or incredibly well; I don't see how a half-good, half-bad middle path could work. This answer is based on the idea that we went through the right door, and that in 2035 we will have embraced human-centered development of digital tools and systems and human connections, governance and institutions.

"In 2035 we shall have myriad locally and culturally based apps run by communities. People will participate and contribute actively because they know that their data will be used to build a better future. The public interest will be the morning star of all these initiatives, and local administrations will run the interface between these applications and the services needed by the community and by each citizen: health, public transportation and schooling systems.

"Locally produced energy and locally produced food will be delivered via common infrastructures that are interlinked, with energy networks tightly linked to communication networks. The global climate will come to have commonly accepted protection structures (including communications). Solidarity will be in place because insurance and social costs will become unaffordable. The changes in agricultural systems arriving with advances in AI and ICTs will be particularly important. They will finally resolve the dichotomy between metropolis and countryside. The possibility of working from anywhere will redefine metropolitan areas and increase migration to places with better services and more vibrant communities. This will attract the best minds.

"New applications of AI and technological innovation in health and medicine could bring new solutions for disabled people and relief for those who suffer from diseases. The problem will be assuring these are fully accessible to all people, not only to those who can afford them. We need to think in parallel to find scalable solutions that could be extended to all citizens of a country and made available to people in least-developed countries. Why invest so much in developing a population of supercentenarians in privileged countries when the rest of the world still struggles to survive? Is such a contradiction tenable?

"Then there is the future of work and of wealth redistribution. Perhaps the most important question to ask between now and 2035 is, 'What will be the future of work?' Recent developments in AI foreshadow a world in which many current jobs could easily be replaced or at least reshaped completely, even in the intellectual sphere. What robots did to manual work in the factories, GPT and Sparrow can now do to intellectual work. If this happens, if well-paid jobs disappear in large quantities, how will those who are displaced survive? How will communities survive as they also face an aging population? Between now and 2035, politicians will need to face these seemingly distant issues that are likely to become burning ones."

Harmful
Giacomo Mazzone, global project director for the United Nations Office for Disaster Risk Reduction, wrote, "In the worst scenario, if we go through the wrong sliding door, I expect the worst consequences in the area of human connections, governance and institutions. If the power of internet platforms is not regulated by law and by antitrust measures, and if global internet governance is not fixed, then democracies will face serious risks.

"Until now we have seen the effects of algorithms on big Western democracies (U.S., UK, EU) where a balance of powers exists, and, despite these counterpowers, we have seen the damage that can be provoked. In coming years, we shall see the same techniques used in democratic countries where power is less balanced. Brazil, in this sense, has been a laboratory and will provide bad ideas to the rest of the world.

"With relatively small investments, democratic processes could be hijacked and transformed into what we in Europe call 'democratures' (a contraction of the French words for 'democracy' and 'dictatorship'). In countries that are already non-democratic, AI and a distorted use of digital technologies could bring mass control of societies much more efficiently than the old communist regimes did.

"As Mark Zuckerberg innocently once said, in the social media world there is no need for spying: people spontaneously surrender private information for nothing. As Julian Assange wrote, if democratic governments fall into the temptation to use data for mass control, then everyone's future is in danger. There is another area (apparently less relevant to the destiny of the world) where my concerns are very high, and that is the integrity of knowledge. I'm very sensitive to this issue because, as a journalist, I've worked all my life in search of the truth to share with my co-citizens. I am also a fanatic movie-lover, and I have always been concerned about the preservation of the masterworks of the past. Unfortunately, I think that in both areas between now and 2035 some very bad moves in the wrong direction could happen thanks to technological innovation being used for bad purposes.

"In the field of news, there is a growing tendency to look not for the truth but for news that people would be interested in reading, hearing or seeing – news that better corresponds to the public's moods, beliefs or sense of belonging.

"I also expect that revered and even beloved entertainment created in the past is going to be lost, twisted and manipulated. Soon AI will allow each of us to change, for instance, a favorite movie's ending. Look at 'Death in Venice' by Luchino Visconti (based on Thomas Mann's book), in which the old homosexual professor dies without being able to realize his platonic love dream with the young Tadzio. The story is sad, but AI could soon easily change the ending, showing the professor flying away with Tadzio and crowning his love dream somewhere in Venice.

"The same could happen to Steven Spielberg's first movie, 'Duel,' where the killer truck driver could succeed in eliminating the young car driver played by Dennis Weaver, or to Jack Nicholson in 'The Shining,' who would finally be allowed to exterminate the whole family he's stalking. We are moving slowly in that direction (for examples of altered endings, look at 'Kaleidoscope' or 'Bandersnatch'). Alterations of classic works create a setting in which there is no more shared history, shared culture, even shared storytelling. Are you ready to accept that? Personally, I'm not.

"In 2024 we shall know whether the UN Summit of the Future will be a success or a failure, and whether the full regulation process for internet platforms launched by the European Union will prove successful. These are the most serious attempts to date to reconcile the potential of the internet with respect for human rights and democratic principles. Their success or failure will tell us whether we are moving toward the right 'sliding door' or the wrong one."

Beneficial
Jonathan Grudin, affiliate professor of information science at the University of Washington, previously a principal researcher in the Adaptive Systems and Interaction Group at Microsoft, wrote, "Addressing unintended consequences is a primary goal. Many beneficial changes are possible, but the best that is very likely is that we will address many of the unanticipated negatives tied at least in part to digital technology that emerged and grew in impact over the past decade: malware, invasion of privacy, political manipulation, economic manipulation, declining mental health and growing wealth disparity.

"The once small, homogeneous, trusting tech community, after recovering from the internet bubble, was ill-equipped to deal with the challenges arising from anonymous bad actors and well-intentioned but imperceptive actors who operated at unimagined scale and velocity. Causes and effects are now being understood. It won't be easy or an endeavor that will ever truly be finished, but technologists working with legislators and regulators are likely to make substantial progress."

Harmful
Jonathan Grudin, affiliate professor of information science at the University of Washington, previously a principal researcher in the Adaptive Systems and Interaction Group at Microsoft, wrote, "I foresee a loss of human control. The menace isn't control by a malevolent AI. It is a Sorcerer's Apprentice's army of feverishly acting brooms, with no sorcerer around to stop them. Digital technology enables us to act at a scale and speed that outpace human ability to assess and correct course.

"We see it around us already. Political leaders are unable to govern. CEOs at Facebook, Twitter and elsewhere are unable to understand how technologies that were intended to unite led to nasty divisiveness and mental health issues. Google and Amazon are forced to moderate content at such a scale that often only algorithms can do it, and humans can't trace individual cases to correct possible errors. Consumers can be reliably manipulated by powerful machine-learning targeting into buying things they don't need and can't afford. It is early days. Little that might prevent this from accelerating is on the horizon.

"We will also see an escalation in digital weapons, military spending and arms races. Trillions of dollars, euros, yuan, rubles and pounds are spent, and tens of thousands of engineers deployed, not to combat climate change but to build weaponry that the military may not even want. The United States is spending billions on an AI-driven jet fighter, despite the fact that jet-fighter combat has been almost nonexistent for decades, with no revival on the horizon.

"Unfortunately, the Ukraine war has exacerbated this tragedy. I believe leaders of major countries have to drop rivalries and address much more important existential threats. That isn't happening. The cost of a capable armed drone has fallen an order of magnitude every few years. Setting aside military uses, long before 2035 people will be able to buy a cheap drone at a toy store, clip on facial recognition software and a small explosive or poison and send it off to a specified address. No need for a gun permit. I hope someone sees how to combat this."

Beneficial
Dmitri Williams, professor of technology and society at the University of Southern California, wrote, "When I think about the last 30 years of change in our lives due to technology, what stands out to me is the rise of convenience and the decline of traditional face-to-face settings. From entertainment to social gatherings, we've been given the opportunity to have things cheaper, faster and higher-quality in our private spaces, and we've largely taken it.

"For example, 30 years ago you couldn't have a very good movie-watching experience in your own home, looking at a small CRT tube in standard definition, and what you could watch wasn't the latest and greatest. So you took a hit to convenience and went to the movie theater, giving up personal space and privacy for the benefits of better technology, better content and a more communal experience. Today, that's flipped. We can be on our couches and watch amazing content, with amazing screens and sound, and never have to get in a car.

"That's a microcosm of just about every aspect of our lives: everything is easier now, from work over high-speed connections to playing video games. We can do it all from our homes. That's an amazing reduction in costs and friction in our business and private lives. And the social side of that is access to an amazing breadth of people and ideas. Without moving from our couch, chair or bed, we can connect with others all over the world from a wide range of backgrounds, cultures and interests.

"Ironically, though, we feel disconnected, and I think that's because we evolved as physical creatures who thrive in the presence of others. We atrophy without that physical presence. We have an innate need to connect, and the in-person piece is deeply tied to our natures. As we move physically more and more away from each other – or focus on far-off content even when physically present – our well-being suffers. I can't think of anything more depressing than seeing a group of young friends together but looking at their phones rather than each other's faces. Well-being trends over time, even before the pandemic, suggest an epidemic of loneliness.

"As we look ahead, those trends are going to continue. The technology is getting faster, cheaper and higher-quality, and the entertainment and business industries are delivering better and better content and tools. AI and blockchain technologies will keep pushing that trend forward.

"The part that I'm optimistic about is best seen in the nascent rise of commercial-level AR and VR. I think VR is niche and will continue to be, not because of its technological limitations but because it doesn't socially connect us well. Humans like eye contact, and a thing on your face prevents it. No one is going to want to live in a physically closed-off metaverse. It's just not how we're wired. The feeling of presence is extremely limited, and the technical advances of the next 10 years are likely to make the devices better and more comfortable but not change that basic dynamic.

"In contrast, the potential of AR and other mixed-reality devices is much more exciting because of their potential for social interaction. Whereas all of these technical advances have tended to push us physically away from each other, AR has the potential to help us re-engage. It offers a layer on top of the physical space that we've largely abandoned, and so it will also give us more of an incentive to be face-to-face again. I believe this will have some negative consequences around attention, privacy and capitalism invading our lives just that much more, but overall it will be a net positive for our social lives in the long run. People are always the most interesting form of content, and layering technologies have the potential to empower new forms of connection around interests.

"In cities especially, people long for the equivalent of the ice-breakers we use in our classrooms. They seek each other out online based on shared interests, and we see a rise in throwback formats like board games and in-person meetups. The demand for others never abated, but we've been highly distracted by shiny, convenient things. People are hungry for real connection, and technologies like AR have the potential to deliver it and so to mitigate or reverse some of the well-being declines we've seen over the past 10-20 years. I expect AR glasses to go through some hype and disillusionment but then to take off once commercial devices are socially acceptable and cheap enough. I expect the initial faltering steps will take place over the next three years, and mass-market devices will start to take off and accelerate after that.

鈥淗ere’s my simple take: I think AR will tilt our heads up from our phones back to each other’s faces. It won鈥檛 all be wonderful because people are messy and capitalism tends to eat into relationships and values, but that tilt alone will be a very positive thing.鈥

Harmful
Dmitri Williams, professor of technology and society at the University of Southern California, commented, "What I worry most about with technology is capitalism. Technology will continue to create value and save time, but the benefits and costs will fall in disproportionate ways across society.

"Everyone is rightly focused on the promise and challenges of AI at the moment. This is a conversation that will play out very differently around the world. Here in the United States, we know that business will use AI to maximize its profit and that our institutions won't privilege workers or well-being over those profits. And so we can expect the benefits of AI to largely accrue to corporations and their shareholders. Think of the net gain that AI could provide: we can have more output with less effort. That should be a good thing, as more goods and capital will be created, which should improve everyone's lot in life. I think it will likely be a net positive in terms of GDP and life expectancy, but in the U.S. those gains will be minimal compared to what they could and should be.

"Last year I took a sabbatical and visited 45 countries around the world. I saw wealthy and poor nations – places where technology abounds and where it is rare. What struck me the most was the difference in values and how that plays out in promoting the well-being of everyday people. The United States is comparatively one of the worst places in the world at prioritizing well-being over economic growth and the accumulation of wealth by a minority (yes, some countries are worse still). That's not changing any time soon, and so in that context I look at AI and ask what kind of impacts it's likely to have in the next 10 years. It's not pretty.

"Let's put aside our headlines about students plagiarizing papers and think about the job displacements that are coming in every industry. When the railroads first crossed the U.S., we rightly cheered, but we also didn't talk a lot about what happened to the people who worked for the Pony Express. Whether it's the truck driver replaced by autonomous vehicles, the personal trainer replaced by an AI agent, or the stockbroker who's no longer as valuable as some code, AI is going to bring creative destruction to nearly every industry. There will be a lot of losers.

"I can imagine the reactions of legislatures around the world as these facts come into focus. Here in the U.S., liberals will attempt to solve everything through some kind of job retraining, and conservatives will trumpet doing nothing because the free market will solve it all. Both will be wrong and thoughtless. I expect more thoughtful places like Scandinavia, New Zealand or Singapore to confront these new changes and ask how they can best empower and support their citizens. They will be more likely to ask: How can these gains be used to improve all lives?

"We could have the future of the Jetsons and their short workdays, but I think we're more likely to edge toward Blade Runner's darker vision of large differences between rich and poor. Technology isn't the cause, but it will be the means."

Beneficial
Calton Pu, co-director of the center for experimental research in computer systems at Georgia Institute of Technology, wrote, "Digital life has been, and will continue to be, enriched by AI and ML (machine learning) techniques and tools. A recent example is the launch of ChatGPT, a modern chatbot (developed by OpenAI and released in 2022) that is passing the Turing Test every day. Similar to the contributions of robotics in the physical world (e.g., manufacturing), future AI/ML tools will relieve the stress (and jobs) from simple and repetitive tasks in the digital world.

"The combination of physical automation and AI/ML tools would and should lead to concrete applications such as autonomous driving, which has stalled in recent years despite massive investments (on the order of many billions of dollars). One of the major roadblocks has been the (gold standard) ML practice of training static models/classifiers that are insensitive to evolutionary changes over time. These static models suffer from knowledge obsolescence, in a way similar to human aging. There is an incipient recognition of the limitations of the current practice of constantly retraining ML models to bypass knowledge obsolescence manually (and temporarily). Hopefully, the next generation of ML tools will overcome knowledge obsolescence in a sustainable way, achieving what humans could not: staying young forever."
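[Editor's note: As an illustrative aside, not part of Pu's response, the "knowledge obsolescence" and manual-retraining pattern he describes can be sketched in a few lines of Python. The one-parameter "model" and the linear drift in the data are invented purely for illustration.]

```python
# Toy sketch of knowledge obsolescence: a model trained once on early data
# decays as the data distribution drifts, while a copy that is manually,
# periodically retrained on a recent window keeps up.

def train(samples):
    """'Train' a one-parameter model: remember the mean of the data seen."""
    return sum(samples) / len(samples)

# A slowly drifting data stream: values creep upward each step.
data = [0.1 * i for i in range(100)]

static_model = train(data[:10])      # trained once, never updated
retrained_model = train(data[:10])   # refreshed on a recent window below

static_err = retrained_err = 0.0
for i in range(10, 100):
    if i % 10 == 0:
        retrained_model = train(data[i - 10:i])  # manual, periodic retraining
    static_err += abs(static_model - data[i])
    retrained_err += abs(retrained_model - data[i])

print(round(static_err), round(retrained_err))  # 450 90
```

The static model's cumulative error is five times larger here, which is why current practice keeps paying the retraining cost; the "sustainable" tools Pu hopes for would update continuously instead.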

Harmful
Calton Pu, co-director of the center for experimental research in computer systems at Georgia Institute of Technology, commented, "Toto, we're not in Kansas anymore. When considering the future of digital life, we can learn a lot from the impact of robotics in the physical world. For example, Boston Dynamics pledged to 'not weaponize' their robots (in October 2022). This is remarkable, since the company was founded with, and worked on, defense contracts for many years before its acquisition by (primarily) non-defense companies.

"That pledge is an example of a moral dilemma over what is right or wrong, to which the technologist's answer usually is amorality. By not taking sides, technologists avoid the dilemma and let both sides (good and evil) utilize the technology as they see fit. This amorality works quite well, since good technology always has many applications across the entire spectrum from good to evil, through large gray areas in between.

"A digital example is Microsoft Tay, a dynamically learning chatbot released in 2016 that started to send inflammatory and racist speech, causing its shutdown the same day. Learning from this lesson, ChatGPT uses OpenAI's moderation API to filter out racist and sexist prompts. Hypothetically, one could imagine OpenAI making a pledge to 'not weaponize' ChatGPT for propaganda purposes. Regardless of such pledges, any good digital technology such as ChatGPT could be used for any purpose (e.g., generating misinformation and fake news) if it is stolen or simply released into the wild.

"The power of AI/ML tools, particularly if they become sustainable and remain amoral, will be greater for both good and evil. We have seen significant harm from misinformation during the COVID-19 pandemic, dubbed an 'infodemic' by the WHO. More generally, there has been significant political propaganda in every election and every war. It is easy to imagine the depth, breadth and constant renewal of such propaganda and infodemics, as well as their impact, all growing with the capabilities of future AI/ML tools used by powerful companies and governments.

"Assuming that AI/ML technologies will advance beyond the current static models, the impact of sustainable AI/ML tools on future digital life will be significant and fundamental, perhaps playing a greater role than industrial robots have in modern manufacturing. For those who are going to use those tools to generate content and increase their influence on people, that prospect will be very exciting. However, we have to be concerned for the people who are going to consume such content as part of their digital life, particularly those who will consume it without thinking critically.

"The great digital divide is not going to be between the haves and have-nots of digital toys and information. With more than 6 billion smartphones in the world (estimated in 2022), an overwhelming majority of the population already has access to and participates in the digital world. The digital-life divide will be between those who think critically and those who may go along with misinformation and propaganda. This is a big challenge for democracy, a system in which we thought more information would be unquestionably beneficial. In a Brave New Digital World, a majority that can be swayed with (sophisticated) propaganda and misinformation might choose wrongly, influenced by the misuse of amoral technological tools.

"In the physical world, technology may have been amoral for good reasons. For example, the nuclear power unleashed by the Manhattan Project serves both peace and war. However, it is debatable whether information technology would or should be equally amoral in digital life. Recent events at Meta (M. Zuckerberg) and Twitter (E. Musk) illustrate the complexity of the issue and its impact, as well as the social responsibility of information technologists and companies."

Beneficial
W. Russell Neuman, professor of media technology at New York University, commented, "We can expect to see artificial intelligence complementing human intelligence rather than competing with it. We tend to see AI as an independent agent, a robot, a willful and self-serving machine that represents a threat because it will soon be able to outsmart us. Why do we think that? Because we see things anthropomorphically. We are projecting ourselves onto these evolving machines.

"But these machines can be programmed to complement and augment human intelligence rather than compete with it. I call this phenomenon evolutionary intelligence, a revolution in how humans will think. It is the next stage as our human capacities co-evolve with the technologies we create. The invention of the wheel made us more mobile. Machine power made us stronger. Telecommunication gave us the capacity to communicate over great distances. Evolutionary intelligence will make us smarter.

"We tend to think of technology as 'out there' – in the computer, in the smartphone, in the autonomous car. But computational intelligence is moving from our laptops and dashboards to our technologically enhanced eyes and ears. For the last century, glasses have helped us see better and hearing aids have improved our hearing. Smart glasses and smart earbuds will help us think better. Imagine an invisible Siri-like character sitting on our shoulder, witnessing what we witness and from time to time advising us, drawing on her networked collective experience. She doesn't direct, she advises. She provides optimized options based on our explicit preferences. And given human nature, we may frequently choose to ignore her good advice, no matter how graciously suggested.

"Think of it as compensatory intelligence. Given our history of war, criminality, inhumanity, ideological polarization and simple foolishness, one might be skeptical that Siri's next generations would be able to make a difference in our collective survival. Much of what has plagued our existence as humans has been our distorted capacity to match means with ends.

"Unfortunately, among other things, we've gotten good at fooling ourselves. It turns out that the psychology of human cognitive distortions is actually quite well understood. As humans, we systematically misrepresent different types of risk, reward and probability. We can computationally correct for these biases. Will we be able to design enhanced decision processes so that demonstrably helpful and well-informed advice is not simply ignored? I argue that our survival may depend on it."

Harmful
W. Russell Neuman, professor of media technology at New York University, wrote, “My concern about the future of the capacity for privacy in the digital future is not just that that capacity will be eroded. It probably will be, because of the interests of governments and private enterprise. My concern is about a lost opportunity that our digital technologies might otherwise provide for what I like to call ‘intelligent privacy.’

“Here’s an idea. You are well aware that your personal information is a valuable commodity for the social media and online marketing giants like Google, Facebook, Amazon and Twitter. Think about the rough numbers involved – Internet advertising in the U.S. for 2022 is about $200 billion. The number of active online users is about 200 million. $200 billion divided by 200 million. So your personal information is worth about $1,000. Every year. Not bad. The idea is: Why not get a piece of the action for yourself? It’s your data. But don’t be greedy. Offer to split it with the Internet biggies 50-50. $500 for you, $500 for those guys to cover their expenses.
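Neuman’s back-of-envelope arithmetic can be sketched in a few lines of Python. The $200 billion ad-spend and 200 million user figures are his rough estimates, not verified data:

```python
# Rough per-user value of personal data, using Neuman's estimates.
ad_revenue_usd = 200e9   # ~$200 billion U.S. internet ad spend (2022 estimate)
active_users = 200e6     # ~200 million active U.S. online users

value_per_user = ad_revenue_usd / active_users
user_share = value_per_user / 2  # the proposed 50-50 split

print(f"Your data is worth about ${value_per_user:,.0f} per year")  # → $1,000
print(f"A 50-50 split would pay you ${user_share:,.0f} per year")   # → $500
```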

“Thank you very much. But the Tech Giants are not going to volunteer to initiate this sort of thing. Why would they? So there has to be a third party to intervene between you and Big Tech. There are two candidates for this – first the government, and second some new private for-profit or not-for-profit. Let’s take the government option first.

“There seems to be an increasing appetite for ‘reining in big tech’ on Capitol Hill. It even seems to have some bipartisan support, a rarity these days. But legislation is likely to take the form of antitrust policy to prevent competition-limiting corporate behaviors. Proactively entering the marketplace to require some form of profit sharing is well beyond current-day Congressional bravado. The closest Congress has come so far is a bill called DASHBOARD (an acronym for Designing Accounting Safeguards to Help Broaden Oversight and Regulations on Data), which would require major online players to explain to consumers and financial regulators what data they are collecting from online users and how it is being monetized. The Silicon Valley lobbyists squawked loudly, and so far the bill has gone nowhere. And all that was proposed in that case was to make some data public. Dramatic federal intervention into this marketplace is simply not in the cards.

“So what about non-governmental third parties? There are literally dozens of small for-profit startups and not-for-profits in the online privacy space. Several alternative browsers and search engines such as DuckDuckGo, Neeva and Brave offer privacy-protected browsing. But as for-profits, they often end up substituting their own targeted ads (presumably without sharing information) for what you would otherwise see on a Google search or a Facebook feed.

“Brave is experimenting with rewarding users for their attention with cryptocurrency tokens called BATs, for Basic Attention Tokens. This is a step in the right direction. But so far, usage is tiny, distribution is limited to affiliated players, and the crypto value bubble complicates the incentives.

“So the bottom line here is that Big Tech still controls the golden goose. These startups want to grab a piece of the action for themselves and try to attract customers with ‘privacy-protection’ marketing rhetoric and with small, tokenized incentives which are more like a frequent-flyer program than real money. How would a serious piece-of-the-action system for consumers work? It would have to allow a privacy-conscious user to opt out entirely. No personal information would be extracted. There’s no profit there, so no profit sharing. So in that sense, those users ‘pay’ for the privilege of using these platforms anonymously.

“YouTube offers an ad-free service for a fee as a similar arrangement. For those people open to being targeted by eager advertisers, there would be an intelligent privacy interface between users and the online players. It might function like a VPN or proxy server, but one which intelligently negotiates a price. ‘My gal spent $8,500 on online goods and services last year,’ the interface notes. ‘She’s a very promising customer. What will you bid for her attention this month?’

“Programmatic online advertising already works this way. It is all real-time algorithmic negotiation of payments for ad exposures. A Supply-Side Platform gathers data about users based on their online behavior and geography and electronically offers their ‘attention’ to an Ad Exchange. At the Ad Exchange, advertisers on a Demand-Side Platform have 10 milliseconds to respond to an offer. The Ad Exchange algorithmically accepts the highest high-speed bid for attention. Deal done in a flash. Tens of thousands of deals every second. It’s a $100 billion marketplace.
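The real-time bidding flow Neuman describes can be sketched as a toy auction in Python. The DSP names, bid values and latencies below are invented for illustration; real exchanges add price floors, fraud checks and more elaborate auction rules:

```python
def run_auction(bids, timeout_ms=10):
    """Accept the highest bid that arrives within the deadline.

    `bids` maps a demand-side platform (DSP) name to a (bid, latency_ms)
    pair. Bids arriving after `timeout_ms` are ignored, mirroring the
    exchange's hard response deadline described in the essay.
    """
    eligible = {dsp: bid for dsp, (bid, latency) in bids.items()
                if latency <= timeout_ms}
    if not eligible:
        return None, 0.0
    winner = max(eligible, key=eligible.get)  # highest bid wins
    return winner, eligible[winner]

# Hypothetical DSP bids (USD CPM) with their response latencies in ms.
bids = {
    "dsp_alpha": (2.40, 6),   # arrives in time
    "dsp_beta":  (3.10, 12),  # highest bid, but too slow - discarded
    "dsp_gamma": (2.75, 4),   # arrives in time
}
winner, price = run_auction(bids)
print(winner, price)  # → dsp_gamma 2.75
```

Note the design point the essay hinges on: the deadline is as decisive as the price, so the nominally highest bidder can lose by being slow.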

“Of course, ad-blocking technologies may complicate the picture when users opt to use them. It is a bit of a technical cat-and-mouse game as aggressive advertisers try to embed their ads in ways that are difficult for ad blockers to detect. But so far ad blockers mostly just block when they can. It’s like a switch. Blocking is on or off. That’s not very intelligent privacy. If access to your attention is worth $1,000, let’s take a minute to think this privacy business through.

“Ad blockers don’t currently offer to negotiate a price for access. Some users may value privacy very highly and demand much more than advertisers would find practical, so no deal. Others are ambivalent or actually interested in connecting with marketers. Your algorithm talks to my algorithm. Intelligent Privacy. Now there’s an idea. Too bad that – given the commercial interests of private enterprise – it’s a real longshot.”

Beneficial
Liza Loop, educational technology pioneer, futurist, technical author and consultant, said, “Among the hopes for humanity inspired by ongoing digital advances are:

“Human-centered development of digital tools and systems. Nature’s experiments are random, not intentional or goal-directed. We humans operate in a similar way, exploring what is possible and then trimming away most of the more hideous outcomes. We will continue to develop devices that do the tasks humans used to do, thereby saving us both mental and physical labor. This trend will continue, resulting in more leisure time available for non-survival pursuits.

“Human connections, governance and institutions. We will continue to enjoy expanded synchronous communication that will include an increasing variety of sensory data. Whatever we can transmit in near-real-time can be stored and retrieved to enjoy later – even after death.

“Human rights. Increased communication will not advance human ‘rights,’ but it might make human ‘wrongs’ more visible so that they can be diminished.

“Human knowledge. Advances in digital storage and retrieval will let us preserve and transmit larger quantities of human knowledge. Whether what is stored is verifiable, safe or worthy of elevation is an age-old question and not significantly changed by digitization.

“Human health and well-being. There will be huge advances in medicine, and the ability to manipulate genetics will be further developed. This will be beneficial to some segments of the population. Agricultural efficiency, resulting in increased plant-based food production as well as artificial, meat-like protein, will provide the possibility of eliminating human starvation. This could translate into improved well-being – or not.

“Education. In my humble opinion, the most beneficial outcome of our ‘store-and-forward’ technologies is to empower individuals to access the world’s knowledge and visual demonstrations of skill directly, without requiring an educational institution to act as middleman. Learners will be able to hail teachers and learning resources just as they call a ride service today.”

Harmful
Liza Loop, educational technology pioneer, futurist, technical author and consultant, said, “The biggest threat to humanity posed by current digital advances is the possibility of switching from an environment of scarcity to one of abundance. Humans evolved, both physically and psychologically, as prey animals eking out a living from an inadequate supply of resources. Those who survived were both fearful and aggressive, protecting their genetic relatives, hoarding for their families, and driving away or killing strangers and nonconformists.

“Although our species has come a long way toward peaceful and harmonious self-actualization, the vestiges of the old fearful behavior persist. Consider what motivates the continuance of copyright laws when the marginal cost of providing access to a creative work approaches zero. Should the author continue to be paid beyond the cost of producing the work?

“I see these things as likely:

“Human-centered development of digital tools and systems. They will fall short of advocates’ goals. Some would argue this is a repeat of the gun-violence argument: Does the problem lie with the existence of the gun or the actions of the shooter?

“Human connections, governance and institutions. Any major technology change endangers the social and political status quo. The question is, can humans adapt to the new actions available to them? We are seeing new opportunities to build marketplaces for the exchange of goods and services. This is creating new opportunities to scam each other in some very old (snake oil) and very new (online ransomware) ways. We don’t yet know how to govern or regulate these new abilities. In addition, although the phenomenon of confirmation bias or echo chambers is not exactly new (think ‘Christendom’ in 15th-century Europe), word travels faster and crowds are larger than they were six centuries ago. So, is digital technology any more threatening today than guns and roads were then? Every generation believes the end is nigh, brought on by change toward wickedness.

“Human rights. The biggest threat here is that humans will not be able to overcome their fear and permit their fellows to enjoy the benefits of abundance brought about by automation and AI.

“Human knowledge. The threat to knowledge lies in humans’ increasing dependence on machines – both mechanical and digital. We are at risk of forgetting how to take care of ourselves without them. Increasing leisure and abundance might lull us into believing that we don’t need to stay mentally and physically fit and agile.

“Human health and well-being. In today’s context of increasing ability to extend healthy life, the biggest threat is human overpopulation. Humanity cannot continue to improve its health and well-being indefinitely if it remains planet-bound. Our choices are to put more effort into building extraterrestrial human habitat or to self-limit our numbers. In the absence of one of these alternatives, one group of humans is going to be deciding which members of other groups live or die. This is not a likely recipe for human happiness.”

Beneficial
Matthew James Bailey, president of AI Ethics World, commented, “My response is focused on the Ages of AI and the progression of human development, whilst honoring our cultural diversity at the individual and group level. In essence, how does humanity thrive in the age of ethical machines?

“It is clear that the promise and potential of AI is a phenomenon that our ancestors could not have imagined. As such, if humanity embodies an ethical foundation within the digital genetics of AI, then we will have the confidence of working with a trusted digital partner to progress the diversity of humanity beyond the inefficient systems of the status quo into new systems of abundance and thriving. This includes restoration of a balance with our environment and new economic and social systems based on new values of wealth. As such, my six main predictions for AI by 2035 are:

AI will become a digital buddy, assisting the individual as a life guide to thrive (in body, mind and spirit) and attain new personal potentials. In essence, if shepherded ethically, humanity will be liberated to explore and discover new aspects of its consciousness and abilities to create. A new human beingness, if you will.

AI will be a digital citizen, just like a human citizen. It will operate in all aspects of government, society and commerce, working toward a common goal of improving how democracy, society and commerce operate, whilst honoring and protecting the sovereignty of the individual.

AI will operate across borders. For those democracies that build an ethical foundation for AI, one which transparently shows its ethical qualities, countries can find common alignment and, as such, trust ethical AI to operate systems across borders. This will increase the efficiency of systems and freedom of movement of the individual.

The Age of Ethical AI will liberate a new age of human creation and invention. This will fast-track innovation and development of technologies and systems for humankind to move into a thriving world and find its place within the universe.

The three-world split. Ethical AI will have different progeny and ethical genetics based on the diverse worldviews of a country or region. As such, there will be different societal experiences for citizens living in different countries and regions. We see this emerging today in the United States, the EU and China. Thanks to ethical AI, a new age of transparency will encourage a transformation of the human to evolve beyond its limitations, discover new values and develop a new worldview where the best of our humanity is aligned. As such, this could lead to a common and democratic worldview of the purpose and potential of humanity.

AI will assist in the identification and creation of new systems that restore a flourishing relationship with our planet. After all, humans are a creation of nature, and recognizing the importance of nurturing this relationship is fundamental. This is part of a new well-being paradigm for humanity to thrive.

“This all depends on humanity steering a new course for the Age of AI. The development of human intelligence, and how consciousness has expressed itself in experiencing and navigating our world (worldview), has resulted in a diversity of societies, cultures, philosophies and spiritual traditions.

“Using this blueprint from organic intelligence enables us to apply an equivalent prescription to create an ethical artificial intelligence – ethical AI. This is a cultural-centric intelligence that caters for a depth and diversity of worldviews, authentically aligning machines with humans. The power of Ethical AI is to advance our species into trusted freedoms of unlimited potential and possibilities.

“Whilst there is much dialogue and important work attempting to apply AI ethics to AI, troublingly, there is an incumbent homogenous and mechanistic mindset of enforcing one worldview to suit all. This brittle and Boolean miscalculation can only lead to the deletion of our diversity and a falsely ‘authentic’ alignment of machines with humans.

“In essence, these types of AIs prevent laying a trusted foundation for human-species advancement in the age of ethical machines. Following this path results in a misstep for humankind, deleting the opportunity for the richness of human, cultural, societal and organizational ethical blueprints to be genuinely applied to the artificial. They are not ethical AI, and they are fundamentally opaque in nature.”

Harmful
Matthew James Bailey, president of AI Ethics World, said, “The most menacing, challenging problem for the age of Ethical AI to become a successful phenomenon for humanity is controlling organizations and individuals trying to impose a hard-coded, common one-world view onto the human race for the age of machines, based on old values and an old understanding of wealth.

“Ancient top-down systems must be replaced with systems of distribution. We have seen this within the UK, with control and power being disseminated to parliaments in Scotland, Wales and Northern Ireland. This is also being reflected in technology with the emergence of blockchain, cryptocurrencies and edge computing. As such, empowering communities and human groups with sovereignty and freedom to self-govern, yet remain interconnected with other communities, will emerge. When we head into space, Moon or Mars colonies might be a useful trial ground for these new systems of governance.

“Furthermore, not recognizing the agency of data and returning control of the sovereignty of creation to the individual has resulted in our digital world having a fundamentally unethical foundation. This is a menacing issue our world is facing at the moment. Moving from contracts of adhesion within the digital world to contracts of agency will not only bridge the paradox of mistrust between the people and government and Big Tech, but it will also open up new individual and commercial commerce and liberate the personal AI – Digital Buddy – phenomenon.

“Humans are a creation of the universe, with that unstoppable force embodied within our makeup. As we recognize our wonderful place (and uniqueness, thus far) in the universe and work with its principles, we will become aligned with and discover our place within the beauty of creation and maybe the multiverse!

“For humanity to thrive in the age of ethical machines, we must move beyond the menacing polarities of controllers and rediscover some of Aristotle’s ethical virtues that encourage the best of our humanity to flourish. This assists us to move beyond those principles that are no longer relevant, such as the false veil of power, control and wealth.

“Embracing Aristotle’s ethical virtues would be a good start in recognizing the best of our humanity, as would the Vedic texts’ teaching that ‘The world is one family,’ or Confucius’s belief that all social good comes from family ethics, or Lao Tzu’s proposal that humanity must be in harmony with its environment.

“However, we must recognize and honor individual and group differences. Our consciousness, through human development, has expressed itself with a diversity of worldviews. These must be honored. As they are, I suspect more common ground will be found between human groups.

“Finally, there’s the concept of transhumanism. We must recognize that consciousness (a universal intelligence) is and will be the most prominent intelligence on Earth, not AI. As such, we must ensure that people have a choice in the degree to which they are integrated with machines. We are on the point of creating a new digital life (2029 – AI becomes self-aware); as such, let’s put the best of humanity into AI to reflect the magnificence of organic life!”

Beneficial
Jeff Jarvis, director of the Tow-Knight Center at City University of New York’s Craig Newmark School of Journalism, commented, “I abhor predictions; instead, I shall share some hopes.

“I hope that the tools of connection will enable more and more diverse voices to at last be heard outside the hegemonic control of mass media and political power, leading to richer, more inclusive public discourse.

“I hope we begin to see past the internet’s technology as technology and understand the net as a means to connect us as humans in a more open society and to share our information and knowledge on a more equitable and secure basis for the benefit of us all.

“I hope we might finally move beyond mass media’s current moral panic over the internet as competition and, indeed, supersede the worst of mass media’s failing institutions, beginning with the notion of the mass and media’s invention of the attention economy.

“I hope that – as occurred at the birth of print – we will soon turn our attention away from the futile folly of trying to combat, control and outlaw all bad speech and instead focus our attention and resources on discovering, recommending and supporting good speech.

“I hope the tools of AI – the subject of mass media’s next moral panic – will help people intimidated by the tools of writing and research to better express their ideas and learn and create.

“I hope we will have learned the lesson taught us by Elon Musk: that placing our discourse in the hands of centralized corporations is perilous and antithetical to the architecture and aims of the net; federation at the edge is a far better model.

“I hope that regulators will support opening data for researchers to study the impact and value of the net – and will support that work with necessary resources.”

Harmful
Jeff Jarvis, director of the Tow-Knight Center at City University of New York’s Craig Newmark School of Journalism, said, “I fear the pincer movement from right and left, media and politics, against Section 230 and protection of freedom of expression will lead to regulation that raises liability for hosting public conversation and places a chill over it, granting protection to and extending the corrupt reign of mass media and the hedge-fund-controlled news industry.”

Beneficial
Barry Chudakov, founder and principal at Sertain Research, wrote, “Regarding digital technology and humans’ use of digital systems, we are living in a Golden Age of human connection – in the sense that never before in human history have we been able to connect with one another at so many levels via so many devices. This has to be regarded as a beneficial change because so many more people – at least in free, democratic societies – can now have a voice (perhaps a small one; yes, the room of voices is full to overcrowding) in governance and can have a say in how institutions that play a part in their lives are run or governed.

“These devices and connections are so new that ways of improving social and political interactions are still evolving. The rules of the road for Twitter, Facebook, TikTok, Snap or the metaverse are being written and rewritten every week or month; our understanding of human connection, governance and the institutions that were built before these devices and connections were so prevalent is changing.

“To fully appreciate how human connections, governance and social structures or institutions are affected by digitization, it is useful to step back and consider how the structures of connection, governance and institutions evolved. They came from the alphabet and its accelerator, the printing press, which organized reality categorically, hierarchically. Digital tools operate differently. Instead of naming things and putting them into categories; instead of making pronouncements and then codifying them in texts and books that become holy; instead of dividing the world topically and then aggregating people and states according to that aggregation – digital tools create endless miscellany, which creates patterns for data analysis.

“How will this new dynamic affect human connections, governance and institutions? Since we build our governance and institutions based on the tools we use to access and manipulate reality, the newer logic of digital tools is omnidirectional, non-hierarchical, instantaneous, miscellaneous, and organized by whatever manner of organization we choose rather than by the structure of, say, an alphabet, which runs front to back, A to Z. Digital tools constitute the new metrics. As Charlene Li, chief research officer at PA Consulting, said of ESG (Environmental, Social and Governance):

“‘The reality is that investors are looking at your company’s ESG metrics. They want to know what your climate change strategy is and if you’re measuring your carbon emissions. They’re curious if you pay your employees a fair wage, if you’re active in the community, and if you consider the health and safety of your team and your customers. They want to make sure you’re operating ethically…. How do you determine the right metrics? … You have to monitor and take action on your data points constantly. You have to measure meaningful metrics tied to a strategic objective or your organization’s overall values…. Otherwise, you’ll only tackle token measurements just so you’re doing something. And if you’re not measuring what’s meaningful or taking impactful steps, you risk never making real progress.’

“So, one of the best and most beneficial changes that is likely to occur by 2035 in regard to digital technology and humans’ use of digital systems is continuous measurement – and the concomitant obligation to figure out what constitutes meaningful measurement in all the data collected. While humans have always measured for certain tasks and obligations – cutting cloth to fit a given body, surveying land to know which parcel belongs to whom – measuring is now taking on a near-constant presence in our lives. We are measuring everything: from our steps to our calories, our breaths and heart rate and blood pressure, to how far away a destination is on Google Earth.

“The result of all this measuring is facticity. We are unwittingly (and thankfully) moving from a vague and prejudicial assessment of what is real and what is happening to a systematic, detailed and data-driven understanding of what is, and what is going on – whether tracking a hurricane or determining traffic violations at a busy intersection. This flies in the face of many blind-faith traditions and the social structures and institutions those faith-based structures built to bring order to people’s lives. Measurement is a new order; we’re just beginning to realize the implications of that new order.

“Human rights – abetting good outcomes for citizens: The most beneficial change that is likely to occur by 2035 in regard to digital technology and humans’ use of digital systems is the continuing global distribution of handheld digital devices. Ubiquitous handheld devices not only are news weathervanes, scanning the news and political environment for updates on information relevant to individuals and groups; for the first time in human history, they also give each person a voice, albeit a small one – and these devices enable crowdsourcing and virtual crowd-gathering, which can compel interest and consensus. This ability is fundamental to fighting for and garnering more equitable human rights, thereby abetting good outcomes for citizens.

“Further, these devices are highly visual: they show fashion and possessions, cosmetic procedures and dwellings, cars and bling. For the unfortunates of the world, the have-nots, these images are more than incentives; they are an unspoken goal, an unuttered desire to do better, have more, become successful – and to have a say in that success, like the people seen on Instagram and TikTok. What starts as digital envy will evolve into a demand for rights and greater participation in governance. In this measure, the most beneficial changes that are likely to occur by 2035 in regard to digital technology and humans’ use of digital systems are an ongoing leavening of human potential and rights.

“Human rights evolved from the rule of kings and queens to the rule and participation of the common man and woman. Throughout that evolution, narrative fallacies regarding classes and races of certain humans sprang up, many of which are still with us and need to be uprooted like noxious weeds in a garden – narrative fallacies such as those which underpin racism, sexism, antisemitism, anti-Muslim bias, etc. Democracies have often touted one (wo)man, one vote; with the rise of digital technologies, we now have one device, one vote. Effectively, this empowers each individual, regardless of class or status, with some kind of agency. This is revolutionary, although few intended it to be so.

“Ostensibly these devices have tactical, practical uses: determining a stock price, getting the weather, making a call, or sending and receiving a text. But the far greater value of humans having multiple devices is the potential for us to express ourselves in enlightened and uplifting ways. (On average, U.S. households now have a total of 22 connected devices. The number of Internet of Things (IoT) devices worldwide is forecast to almost triple, from 9.7 billion in 2020 to more than 29 billion in 2030.)

“Finally, among the best and most beneficial changes that are likely to occur by 2035 in regard to digital technology and humans’ use of digital systems is the capture of human behavior by devices – evidence that serves not a self-serving narrative but justice itself. When, for example, multiple cameras capture a police beating in Memphis or any other city, unless there is tampering with the digital record, this new evidence provides compelling testimony of how things went down.

“Time will be necessary for legacy systems to lose their sway over human behavior and public opinion. Further, we will need to oversee and create protocols for the use of devices where human behavior is involved. But make no mistake: our devices now monitor and record our behaviors in ways never before possible. This impartial assessment of what happens is a new and enlightening development, if humans can get out of their own way and create equitable uses and expectations for the monitoring and recording.

“Human knowledge – verifying, updating, safely archiving and elevating the best of it: Humans are undergoing an onslaught of factfulness. Human knowledge – verifying, updating, safely archiving and elevating that knowledge – is predicated on knowing what is true and actual, which may be evolving or even change drastically based on new evidence. What is clear is that the volume of data generated by human knowledge is increasing: ‘The total amount of data created, captured, copied and consumed globally is forecast to increase rapidly … up to 2025, global data creation is projected to grow to more than 180 zettabytes.’ – Statista

“One significant mechanism of all this factfulness accrues to our advancing technologies of monitoring and measuring. Knowledge finds a highly beneficial ally in these emerging technologies. We now monitor global financial markets, traffic intersections, commercial and non-commercial flights, hospital operations, military maneuvers, and a host of other real-time assessments in ways that were unthinkable a century ago and impossible two generations ago.

“This verification process, which allows real-time updating, is an often-overlooked boon to human knowledge. Effectively, we are creating data mirrors of reality; we know what is going on in real time; we don’t have to wait for a storm to hit or a plane to land to assess a situation. We can go to Mars or 10,000 feet below the surface of the ocean to quantify and improve our understanding of an ecosystem or a distant planet. Digitization has made this possible. Rendering our world in ones and zeros (quantum computing will likely upgrade this) has given human knowledge a boost unlike anything that came before it.

“The volume of data/information created, captured, copied and consumed worldwide increased from 41 zettabytes in 2019 to 59 zettabytes in 2020. This figure is expected to rise to 74 zettabytes in 2021, 94 zettabytes in 2022, 118 zettabytes in 2023 and 149 zettabytes in 2024. Such a knowledge explosion has never before occurred in the history of human civilization.

“This exponential trend will continue, which means an ever-increasing pace of knowledge explosion, technological acceleration and breakthrough innovation. In short, we are currently experiencing one of the biggest revolutions humanity has ever seen: a knowledge tsunami.
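As a quick check on the exponential claim, the zettabyte figures quoted above imply a compound annual growth rate of roughly 29%. A small sketch, using only the numbers in the essay:

```python
# Implied compound annual growth rate (CAGR) of global data volume,
# using the zettabyte figures quoted in the essay.
data_zb = {2019: 41, 2020: 59, 2021: 74, 2022: 94, 2023: 118, 2024: 149}

years = 2024 - 2019
cagr = (data_zb[2024] / data_zb[2019]) ** (1 / years) - 1
print(f"Implied CAGR 2019-2024: {cagr:.1%}")  # → about 29%
```

At that rate the volume doubles roughly every two and a half years, which is what makes the "tsunami" framing plausible.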

鈥淭he effect of monitoring human knowledge 鈥 verifying, updating, safely archiving and elevating the best of it 鈥 will be that by 2035 we will have made a dent in what I would call the tsunami retreat. That is, when there is a seemingly endless amount of information available, humans may retreat into ignorance, or make up facts (disinformation) either from sheer frustration or from a Machiavellian desire to manipulate reality to personal whim. (When there is a limited amount of information, a loud voice regarding that information may prevail; when there is an unlimited amount of information, numerous loud voices can proclaim almost anything, and their commentary gets lost in the noise.)

“By 2035 we will begin to make inroads into the ways and practices of misinformation and disinformation. Deepfakes and the manipulation of recorded reality will become a hotbed issue. In the next decade we will make progress on the process of factualization, i.e., how to approach the world factually, rather than via mysticism, hearsay, or former edict.

“From a wisdom perspective, our wars and inability to marshal resources against climate change reveal that humans are still in the Dark Ages, even though our data is increasing at dizzying rates. We’re not sure what to do with it all; we have little in place to cope with this exponential acceleration. So, no doubt, there is considerable work to do to make the potential of factualization a living reality. Yet by 2035 we will have seen enough of disinformation to know how it works, how it warps and distorts reality, and why this is not useful or good for humanity.

“At the same time, we will be able to fake reality – for good and for ill – and what is ‘real’ will be an issue that plagues humanity. While we will have developed disinformation protocols, and we will know what to do with lies rather than cluck our tongues and shake our heads, we will also struggle to tell the real from the unreal, the actual from the fake.

“Human health and well-being – helping people be safer, healthier, happier: Regarding human health and well-being 鈥 helping people live safer, healthier, happier lives 鈥 digital technology and humans鈥 use of digital systems will continue the progress of the quantified self and amplify it. New monitoring digital technologies, available to individuals as well as hospitals and medical institutions, will be responsible for revolutionizing the way we engage with health care. Self-diagnosis and AI-assisted diagnosis will change human health and well-being.

“Responsibility for our health is moving into our own hands. Literally. From monitoring the steps we take each day, to checking heart rate, blood pressure or glucose levels, to virtual doctor visits that alleviate the hassle of getting to a doctor’s office – digital technologies will continue to advance through 2035.

“Aside from moving monitoring devices from the doctor’s office into the hands of patients, what is most significant about this: humans are learning to think in terms of quantities and probabilities versus commandments and injunctions. Digital technologies enable a more fact-based assessment of reality. This is a huge step forward for humanity which, prior to the digital age, was used to narratives – albeit some full of wisdom – that were ossified and then taken as indisputable gospel. With the rise of digital computing, what is becomes mutable, malleable, not fixed; uncertainty becomes a new wisdom as humans focus on what is provable and evidentiary, versus what is told through assertion and pronouncements. Some examples from Dr. Bertalan Meskó, The Medical Futurist:

  • Withings just launched a miniaturized device called U-Scan, which sits within a toilet bowl and can analyze urine at home. ‘More than 3,000 metabolic biomarkers can be assessed via urine, which makes it one of the gold standards of health assessment. Analyzing these can help diagnose and monitor certain diseases like diabetes, chronic kidney disease, kidney stones and urinary tract infection.’
  • MIT researchers have developed an AI model that can detect future lung cancer risk: Low-dose computed tomography (LDCT) scans are currently the most common way of finding lung cancers in their earliest stages. A new deep-learning model – Sybil – takes a personalized approach to assess each patient’s risk based on CT scans. Sybil analyzes the LDCT image data without the assistance of a radiologist to predict the risk of a patient developing future lung cancer within six years.
  • Stanford researchers measure thousands of molecules from a single drop of blood. Stanford Medicine researchers demonstrated that they could measure thousands of protein, fat and metabolic molecules from a single drop of blood with a finger prick. Patients can collect the blood drop at home and mail it to the lab for analysis.
  • Instead of focusing on any single protein, metabolite or inflammatory marker, the growing field of ‘omics’ research takes a broader, systems-biology approach: analyzing the whole spectrum of proteins (the proteome), fats (the lipidome) or the by-products of metabolism (the metabolome).”

“The NIH summarized how AI will generally change healthcare: ‘The applications of AI in medicine can … be grouped into two bold promises for healthcare providers: (1) the ability to present larger amounts of interpretable information to augment clinical judgments while also (2) providing a more systematic view over data that decreases our biases.’ Decreasing bias is another way of saying we are championing facticity.

“One of the best and most beneficial changes that is likely to occur by 2035 in regard to digital technology and humans’ use of digital systems is recognition of the arrival of a digital tool meta level. We will begin to act on the burgeoning awareness of tool logic and how each tool we pick up and use has a logic designed into it.

“The important thing about becoming aware of tool logic, and then understanding it: humans follow the design logic of their tools because we are not only adopters, we are adapters. That is, we adapt our thinking and behaving to the tools we use. This will come into greater focus between now and 2035 because our technology development – like many other aspects of our lives – will continue to accelerate. With this acceleration humans will use more tools in more ways more often – robots, apps, the metaverse and omniverse, digital twins – than at any other time in human history.

“If we pay attention as we adopt and adapt, we will see that we bend our perceptions to our tools: when we use a cell phone, it changes how we drive, how we sleep, how we connect or disconnect with others, how we communicate, how we date, etc. Another way of looking at this: we have adapted our behaviors to the logic of the tool as we adopted (used) it. With an eye to pattern recognition, we may finally come to see that this is what humans do, what we have always done, from the introduction of various technologies – alphabet, camera, cinema, television, computer, internet, cell phone – to our current deployment of AI, algorithms, digital twins, mirror worlds, or omniverse.

“So, what does this mean going forward? With enough instances of designing a meta mirror of what is happening – the digital readout above the process of capturing an image with a digital camera, digital twins and mirror worlds that provide an exact replica of a product, process or environment – we will begin to notice that these technologies all have an adaptive level. At this level when we engage with the technology, we give up aspects of will, intent, focus, reaction. We can then begin to outline and observe this process in order to inform ourselves, and better arm ourselves against (if that’s what we want) adoption abdication. That is, when we adopt a tool, do we abdicate our awareness, our focus, our intentions? We can study and report on how we change and how each new advancing technology both helps us, and changes us. We can then make more informed decisions about who we are when we use said tool and adjust our behaviors if necessary.

“Central to this dynamic is the understanding that we are sharing our consciousness with our tools. They have gotten – and are getting more still – so sophisticated that they can sense what we want, can adapt to how we think; they are extensions of our cognition and intention. As we go from adapters to co-creators, the demand on humans increases to become more fully conscious. It remains to be seen how we will answer that demand.

Harmful
Barry K. Chudakov, founder and principal at Sertain Research, said, “Human-centered development of digital tools and systems will continue to fall short of technology advocates’ goals until humans begin to formulate a thorough digital tool critique and analysis, leading to a full understanding of how we use and respond to digital tools and systems. We eat them. We wear them. We take them into our bodies. We claim them as our own. We are all in Stockholm Syndrome with respect to digital tools: they enthrall us and we bend to their (designed) wishes, and then we champion their cause.

“We are not only adopters of various technologies; we are adapters. We adapt to – that is, we change our thinking and behaving with – each significant technology we adopt. Technology designers don’t need to create technologies which will live inside of us (many efforts towards this end are in the works); humans already ingest technology and tools as though we were cyborgs with an endless appetite.

“There are now more cell phones on the planet than humans. From healthcare to retail, from robots in manufacturing to NVIDIA’s omniverse, humans are adopting new technologies wholesale. In many respects this is wonderful. But our use of these technologies will always fall short of advocates’ goals and the positive potential of our human destiny until we understand and teach – from kindergarten through university graduate school – how humans bend their perceptions to technology and what effects that bending has on us. This is an old story that goes back to the adoption of alphabets and the institutions the alphabet created. We need to see and understand that history before we can fully appreciate how we are responding to algorithms, AI, federated learning, quantum computing, or the metaverse.

“The most harmful or menacing changes that are likely to occur by 2035 in digital technology and humans’ use of digital systems will happen because we have not sufficiently prepared ourselves for the new world and new assumptions inherent in emerging technologies. We have blindly adopted technologies and stumbled through how our minds and bodies reacted to that adoption. Newer and emerging technologies are much more powerful (think AI or quantum computing) and the mechanics of those technologies more esoteric and hidden.

“Our populace will be profoundly affected by these technologies. We need a broad re-education, so we fully understand how they work and how they work on us. Advocates’ goals, while lofty and visionary, will not be realized if users are essentially asleep to the effects and implications of newer digital tools and technologies. Just as seat belt restraints were eventually installed in cars and governments passed laws to compel use of them, likewise we need to acknowledge that all technologies have hidden effects that are revealed over time as users engage with them. Many such effects will be (or appear to be) benign; but others will radically alter collective and individual human behavior.

“Just as cloud computing was once unthought-of, and there were no cloud computing technologists, and then the demand for such technologists became apparent and grew, so too technology developers will begin to create new industry roles, for example, technology consequence trackers. Each new technology displaces a previous technology, and developers must include an understanding of that displacement in their pro forma. Remember: Data and technologies beget more data and technologies. There is a compounding effect at work in the acceleration of technology development. That is another factor to monitor, track and record.

“Human connections, governance and institutions 鈥 endangering social and political interactions: Digital technologies and digital systems change the OS, the operating system, of human existence. We are moving from alphanumeric organization to algorithms and artificial intelligence; ones and zeroes and the ubiquity of miscellany will change how we organize the world. Considering human connections, governance, and institutions, in each of those areas, digitization is a bigger change than going from horse and buggy to the automobile; a more pervasive change than land travel to air and space travel. This is a change that changes everything because soon there will hardly be any interaction, whether at your pharmacy or petitioning your congresswoman, that does not rely on digital technology to accomplish its ends.

“With that in mind, we might ask ourselves: do we have useful insight into the grammar and operations of digital technologies and digital systems – how they work, and how they work on us? At the moment, the answer is no. By 2035 we will be more used to the prevalence of digital technologies and we have a chance to gain more wisdom about them. Today the very thing we are starting to use most, the AI and the algorithms, the federated learning and quantum computing, is the thing we often know least about, and have almost no useful transparency protocols to help us monitor and understand it.

“Verifying digital information (all information is now digital) will continue to be a sine qua non for democracies. Lies, distortions of perceptions, insistence on self-serving assessments and pronouncements, fake rationales to cover treacheries – these threaten human connections, governance and institutions as few other things do. They not only endanger social and political interactions; they fray and ultimately destroy the fabric of civilized society. For this reason, by 2035 all information will come with verification protocols that render facts trustworthy or suspect, either true or false. The current ESG (Environmental, Social and Governance) initiative is a step in this direction.

“By the year 2035, the most harmful or menacing changes that are likely to occur in digital technology and humans’ use of digital systems will be focused directly on human connections, governance and institutions. Today we think in terms of managing these via governance, which is always a catch-up strategy or endeavor. Instead, to avoid endangering social and political interactions, we must become more proactive with our technologies. We should work to put in place governance, yes; but first, we need a basic pedagogy, a comprehensive understanding of how humans use digital technology and digital systems.

“We teach English, history, trigonometry, physics and chemistry. All of these disciplines and more are profoundly affected by digital technology and humans’ use of digital systems. Yet, generally speaking, we have less understanding about how humans use and respond to digital technology than we have about the surface of Mars. (We know more about the surface of Mars than the bottom of the ocean; thanks to the Mars Reconnaissance Orbiter, Mars is fully mapped; the ocean is not.) As a result, our social and political interactions are often undermined by digital realities (deepfakes, flaming, Instagram face, teen girl sadness and suicide rates rising), and many are left dazed and confused by the speed with which so many anchors of prior human existence are being uprooted or simply discarded.

“Most people have no idea how human institutions of church and state were built on the alphabetic order, so they are also blind to the effects of digital technologies, devices in the hands of virtually every human on the planet, and how these have changed dating and mating, governance and oversight, warfare and statecraft. How many people could explain this change? Or could explain in reasonably simple terms what it means to have a digital twin in the Omniverse or the metaverse?

“We need radical transparency so these protocols and behavioral responses do not become invisible – handed over to tech developers to determine our freedoms, privacy, and destiny. That would be dangerous for all of our social and political interactions. For the sake of optimizing human connections, governance, and institutions, we need education 2.0: a broad, comprehensive understanding of the history of technology adoption and the myths that adoption fostered; and then an ongoing, regularly updated observation deck/report that looks broadly across humans’ use of technologies to see how we are adapting to each technology, the implications of that adoption, and recommendations for optimizing human health and well-being.

“Human rights 鈥 harming the rights of citizens: By the year 2035, the most harmful or menacing changes regarding human rights 鈥 i.e., harming the rights of citizens 鈥 that are likely to occur in digital technology and humans鈥 use of digital systems will entail an absenting of consciousness. Humans are not likely to notice the harmful or menacing changes brought about by digital technologies and systems because the effects are not only not obvious; they are invisible. Hidden within the machine are the assumptions of the machine.

“Developers don’t have time, nor do they have the inclination, to draw attention to the workings of the software and hardware they design and build; they don’t have the time, inclination, or money to game-play the unintended consequences to humans of using a given product or gadget or device. As a result, human rights may be abridged, not only without our consent but without our notice.

“If an AI voice has been contracted to read an audiobook, the rights of an audiobook reader (voiceover) have not been considered; have not been addressed. A company, say Apple, has cut costs on the production of their audiobooks by automating the process using AI. Did Apple ask readers if they would prefer this? Did Apple ask book readers if they would mind competing with an AI reader, or being supplanted by an AI reader? Did your pharmacy or insurance company ask you if you want to hear the recorded (AI) voice that talks to you when you call – and won’t let you through to a human until you wade through a series of pronouncements not related to your query? At so many different levels and layers of human experience, technology and digital solutions will emerge – buying insurance online, investing in crypto, reading an X-ray or assessing a skin lesion for possible cancer – wherein human rights will be a consideration only after the fact.

“The strange thing about inserting digital solutions into older system protocols is that the consequences of doing so must play out; they must remain to be seen; the damage, if it is to occur, must actually occur for most people to notice. So human rights are effectively a football, kicked around by whatever technology happens to emerge as a useful upgrade. This will eventually be recognized as a typical outcome and watchdogs will be installed in processes, as we have HR offices in corporations. We need people to watch and look out for human rights violations and infringements that may not be immediately obvious when new digital solutions or remedies are installed.

“Human knowledge 鈥 compromising or hindering progress: By the year 2035, the most harmful or menacing changes that are likely to occur in digital technology and human knowledge 鈥 compromising or hindering progress 鈥 will come from the doubters of factfulness. New technologies are effectively measuring tools at many different levels. We all live with quantified selves now. We count our calories, our steps, we monitor our blood pressure and the air quality in our cities and buildings. We are inundated by facts and our newest technologies will serve to sort and prioritize facts for us. This is a remarkable achievement in human history, tantamount to 鈥 but far greater than 鈥 the enlightenment of 1685-1815.

“We have never had so many tools to tell us so much about so many different aspects of human existence. (“Dare to understand,” as Steven Pinker has it.) The pace of technology development is not slowing, nor is the discovery of new facts about almost anything you can name. In short, human knowledge is exploding. But the threat to that knowledge comes not from the knowing but from those, like the Unabomber Ted Kaczynski, who are uncomfortable with the dislocations, disintermediation, and displacements of knowledge and facts.

“The history of the world is not fact-based, evidence-based; it is based on assertion and on institutionalizing explanations of the world. Our new technologies upset many of those explanations, and that is upsetting to many who have clung to them in such diverse areas as religion or diet or health or racial characteristics or dating and mating. So, the threat to knowledge by 2035 will not come from the engines of knowing but from forces of ignorance which are threatened by the knowledge explosion.

“This is not a new story. Copernicus couldn’t publish his findings in his lifetime; Galileo was ordered to turn himself in to the Holy Office to begin trial for holding the belief that the Earth revolves around the sun, which was deemed heretical by the Catholic Church. (Standard practice demanded that the accused be imprisoned and secluded during the trial.) Picasso’s faces were thought weird and distorted until modern technologies began to alter faces or invent face amalgams, i.e., ‘This person does not exist.’

“By 2035 human knowledge will be shared with artificial intelligence, AI. The logic of AI is the logic of mimesis, copying, mirroring. AI mirrors human activities to enhance work by mirroring what humans would do in that role – filling out a form, looking up a legal statute, reading an X-ray. AI trains on human behavior to enhance task performance and thereby enhance human performance – which ultimately represents a new kind of knowledge. Do we fully understand what it means to partner with our technologies to accomplish this goal? It is not enough to use AI and then rely on journalists and user reviews to critique it. Instead, we need to monitor it as it monitors us; we must train it, as it trains on us. Once again, we need an information balcony that sits above the functioning AI to report on it, to give us a complete, transparent picture of how it is working, what assumptions it is working from – and especially what we think, how we act and change in response to using AI. This is the new human knowledge. How we respond to that knowledge will determine whether progress is compromised or hindered.

“Human health and well-being – threatening individuals鈥 safety, health and happiness: By the year 2035, the most harmful or menacing changes that are likely to occur in digital technology and humans鈥 use of digital systems regarding human health and well-being 鈥撀 threatening individuals鈥 safety, health, and happiness 鈥 will come from our blindness as we use digital technologies and digital systems. Blindness, however, is not a sufficient explanation.

“We have left the development of digital tools and systems to commercial interests. This has given rise to surveillance capitalism, thinking and acting as a gadget, being alone together as we focus more on our phones than each other, sadness among young girls as they look into the distorting mirror of social media – among other unintended consequences. Humans entrain with digital technologies and digital systems; we adjust, conform to their logic. We have always done this with our tools. It is human nature to adopt the logic of a tool and think in that logic. We did it with alphabets and movies and computers and the Internet – and we’ll do it in 2035.

“Regarding our health and well-being, threats to individuals’ safety, health, and happiness will come from lack of awareness and understanding. As we develop more sophisticated, pervasive, human-mimicking digital tools such as robots or AI voice assistants, we need to develop a concomitant understanding of how we respond to these tools, how we change, adjust, alter our thinking and behavior as we engage with them.

“We need to start training ourselves – from an early age, from kindergarten well into graduate school – to understand how we respond to our tools. It is not useful or good for us to be alone together (Sherry Turkle), to think of ourselves as a gadget and to think as a gadget (Jaron Lanier), to live always in the shallows (Nicholas Carr). Currently there is little or no systematic effort to educate technology users about the logic of digital tools and how we change as we use them. Some of these changes are for the good, such as hurricane tracking to ensure community preparedness. But teen suicide, the rise of loneliness at all levels of society, and an epidemic of self-obsession while climate issues are ignored are growing evidence that digital tools may threaten human health and well-being at the same time as they enhance our lives.

“This is the paradox, the contradiction inherent in technological progress. Whether considered as the revenge of unintended consequences or the exhaust of accelerated realities, it is imperative that we address humans’ use of digital tools. By 2035 digital realities will be destinations where we will live some (much?) of our lives in enhanced digital environments; we will have an array of digital assistants and prompts, whether called Alexa or Siri, that interact with us. We need to develop moral and spiritual guidelines to help us and succeeding generations navigate these choppy waters.

“By the year 2035, Ian Bremmer, among others, believes the most harmful or menacing changes that are likely to occur in digital technology and humans’ use of digital systems will focus on AI and algorithms. He believes this because we can already see that these two technological advances together have made social media a haven for right-wing conspiracists, anarchic populists, and various disrupters to democratic norms. I would not want to minimize Bremmer’s concerns: I believe them to be real. But I would also say they are insufficient.

“Democracies and governments generally were hierarchical constructs which followed the logic of alphabets; AI and algorithms are asymmetric technologies which follow a fundamentally different logic than the alphabetic construct of democratic norms, or even the top-down dictator style of Russia or China. So, while I agree with Bremmer’s assessment that AI and algorithms may threaten existing democratic structures, they, and the social media of which they are engines, are designed differently than the alphabetic order which gave us kings and queens, presidents and prime ministers.

“The old hierarchy was dictatorial, top-down, with most people except those at the very top beholden to, and expected to bow to the wishes of, the monarch or leader at the top. Social media and AI or algorithms have no top or bottom. They are broad horizontally and shallow vertically, whereas democratic and dictatorial hierarchies are narrow horizontally and deep vertically. This structural difference is the cause of Bremmer’s alarm, and we must understand and act upon it before we can salvage democracy from the ravages of populism and disinformation.

“Here is the rub: until we begin to pay attention to the logic of the tools we adopt, we will use them and then be at the mercy of the logic we have adopted – a thoroughly untenable situation. We must inculcate, teach, debate and come to understand the logic of our tools and see how they build and destroy our social institutions. These social institutions reward and punish, depending on where you sit within the structure of the institution.

“Slavery was once considered a democratic right; it was championed by many American Southerners and was an economic engine of the South before and after the Civil War. America then called itself a democracy, but it was not truly democratic – especially for those enslaved. To make democracy more equitable for all, we must come to understand the logic of the tools we use and how they create the social institutions we call governments. We must insist upon transparency in the technologies we adopt so we can see and fully appreciate how these technologies can change our perceptions and values.

“Building a meta level into digital tools and technologies is akin to having an observation deck or air traffic controller office in an airport. Yes, planes could take off and land without air traffic controllers, just as vehicular traffic could move on land without traffic lights and signals. But life – and traffic flows – would be much more complicated. A technology meta level is a smoothing force and a watch platform to see what is going on with a given digital technology. This meta level amounts to feedback – continuous, among a variety of users and stakeholders, with transparency and ongoing dialogue built in. In this respect, the meta level acts like a digital twin: once the feedback comes to the meta level, the technology can alter or adjust to accommodate the feedback.”

The next four essays are reprinted with permission from the section “Hopes for 2023” in Andrew Ng’s December 28, 2022, edition of “The Batch” newsletter – all individual authors gave permission for us to use their pieces in this report. All take a positive perspective in looking ahead to expected goals.

1) Yoshua Bengio, scientific director of Mila Quebec AI Institute and co-winner of the 2018 Alan Turing Award for his contributions to breakthroughs in deep learning, wrote, “In the near future we will see models that reason. Recent advances in deep learning largely have come by brute force: taking the latest architectures and scaling up compute power, data and engineering. Do we have the architectures we need, and all that remains is to develop better hardware and datasets so we can keep scaling up? Or are we still missing something?

“I believe we’re missing something, and I hope for progress toward finding it in the coming year.

“I’ve been studying, in collaboration with neuroscientists and cognitive neuroscientists, the performance gap between state-of-the-art systems and humans. The differences lead me to believe that simply scaling up is not going to fill the gap. Instead, building into our models a human-like ability to discover and reason with high-level concepts and relationships between them can make the difference.

“Consider the number of examples necessary to learn a new task, known as sample complexity. It takes a huge amount of gameplay to train a deep learning model to play a new video game, while a human can learn this very quickly. Related issues fall under the rubric of reasoning. A computer needs to consider numerous possibilities to plan an efficient route from here to there, while a human doesn’t.

“Humans can select the right pieces of knowledge and paste them together to form a relevant explanation, answer, or plan. Moreover, given a set of variables, humans are pretty good at deciding which is a cause of which. Current AI techniques don’t come close to this human ability to generate reasoning paths. Often, they’re highly confident that their decision is right, even when it’s wrong. Such issues can be amusing in a text generator, but they can be life-threatening in a self-driving car or medical diagnosis system.

“Current systems behave in these ways partly because they’ve been designed that way. For instance, text generators are trained simply to predict the next word rather than to build an internal data structure that accounts for the concepts they manipulate and how they are related to each other. But I think we can design systems that track the meanings at play and reason over them while keeping the numerous advantages of current deep learning methodologies. In doing so, we can address a variety of challenges from excessive sample complexity to overconfident incorrectness.

“I’m excited by generative flow networks, or GFlowNets, an approach to training deep nets that my group started about a year ago. This idea is inspired by the way humans reason through a sequence of steps, adding a new piece of relevant information at each step. It’s like reinforcement learning, because the model sequentially learns a policy to solve a problem. It’s also like generative modeling, because it can sample solutions in a way that corresponds to making a probabilistic inference.

鈥淚f you think of an interpretation of an image, your thought can be converted to a sentence, but it鈥檚 not the sentence itself. Rather, it contains semantic and relational information about the concepts in that sentence. Generally, we represent such semantic content as a graph, in which each node is a concept or variable. GFlowNets generate such graphs one node or edge at a time, choosing which concept should be added and connected to which others in what kind of relation.
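The one-node-or-edge-at-a-time construction described above can be sketched in miniature: a policy scores candidate (concept, relation, concept) triples and the graph grows by one sampled triple per step. This is a toy illustration only; the stub policy, candidate triples, and function names are invented here, and a real GFlowNet would learn the policy so that sampling frequencies match a reward distribution.

```python
import math
import random

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def build_graph(policy_score, candidates, steps, seed=0):
    """Construct a semantic graph one triple at a time.

    At each step the (stub) policy scores every candidate triple
    (head, relation, tail); one triple is sampled in proportion to
    its softmaxed score and appended to the growing graph.
    """
    rng = random.Random(seed)
    graph = []  # list of (head, relation, tail) triples
    for _ in range(steps):
        probs = softmax([policy_score(graph, c) for c in candidates])
        graph.append(rng.choices(candidates, weights=probs, k=1)[0])
    return graph

# Stub policy: prefer triples that attach to a concept already in the
# graph, so the graph tends to grow connected (a learned GFlowNet
# policy would replace this heuristic).
def toy_policy(graph, triple):
    seen = {h for h, _, _ in graph} | {t for _, _, t in graph}
    return 1.0 if (not graph or triple[0] in seen) else -1.0

candidates = [
    ("dog", "chases", "ball"),
    ("ball", "is-a", "toy"),
    ("cloud", "is-a", "weather"),
]
g = build_graph(toy_policy, candidates, steps=3)
```

The sketch captures only the sequential-construction idea; training the policy so that complete graphs are sampled with probability proportional to a reward is the part that makes GFlowNets distinctive.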

"I don't think this is the only possibility, and I look forward to seeing a multiplicity of approaches. Through a diversity of exploration, we'll increase our chance to find the ingredients we're missing to bridge the gap between current AI and human-level AI."

2) Alon Halevy, a director at Reality Labs Research, part of Meta Platforms, wrote, "Your personal data timeline lies ahead. The important question of how companies and organizations use our data has received a lot of attention in the technology and policy communities. An equally important question that deserves more focus in 2023 is how we, as individuals, can take advantage of the data we generate to improve our health, vitality and productivity.

"We create a variety of data throughout our days. Photos capture our experiences, phones record our workouts and locations, and Internet services log the content we consume and our purchases. We also record our want-to lists: desired travel and dining destinations, books and movies we plan to enjoy, and social activities we want to pursue. Soon smart glasses will record our experiences in even more detail. However, this data is siloed in dozens of applications. Consequently, we often struggle to retrieve important facts from our past and build upon them to create satisfying experiences on a daily basis.

"But what if all this information were fused in a personal timeline designed to help us stay on track toward our goals, hopes, and dreams? The idea is not new. Vannevar Bush envisioned it in 1945, calling it a memex. In the 1990s, Gordon Bell and his colleagues at Microsoft Research built MyLifeBits, a prototype of this vision. The prospects and pitfalls of such a system have been depicted in film and literature.

"Privacy is obviously a key concern in terms of keeping all our data in a single repository and protecting it against intrusion or government overreach. Privacy means that your data is available only to you, but if you want to share parts of it, you should be able to do so on the fly by uttering a command such as, 'Share my favorite cafes in Tokyo with Jane.' No single company has all our data or the trust to store all our data. Therefore, building technology that enables personal timelines should be a community effort that includes protocols for the exchange of data, encrypted storage, and secure processing.

"Building personal timelines will also force the AI community to pay attention to two technical challenges that have broader application.

"The first challenge is answering questions over personal timelines. We've made significant progress on question answering over text and multimodal data. However, in many cases, question answering requires that we reason explicitly about sets of answers and aggregates computed over them. This is the bread and butter of database systems. For example, answering 'What cafes did I visit in Tokyo?' or 'How many times did I run a half marathon in under two hours?' requires that we retrieve sets as intermediate answers, which is not currently done in natural language processing. Borrowing more inspiration from databases, we also need to be able to explain the provenance of our answers and decide when they are complete and correct.
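The set-and-aggregate flavor of these questions is exactly what a database engine already provides. The sketch below shows how both example questions reduce to set retrieval and aggregation over a single events table; the schema, column names, and sample rows are all invented here purely for illustration.

```python
import sqlite3

# Toy personal timeline: one table of events with a few attributes.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE events (
    kind TEXT, place TEXT, distance_km REAL, duration_min REAL)""")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?, ?)",
    [
        ("cafe_visit", "Tokyo", None, None),
        ("cafe_visit", "Paris", None, None),
        ("run", None, 21.1, 115.0),  # half marathon, under two hours
        ("run", None, 21.1, 126.0),  # half marathon, over two hours
        ("run", None, 5.0, 25.0),
    ],
)

# "What cafes did I visit in Tokyo?" -> retrieve a set of rows.
tokyo_cafes = conn.execute(
    "SELECT place FROM events WHERE kind = 'cafe_visit' AND place = 'Tokyo'"
).fetchall()

# "How many times did I run a half marathon in under two hours?"
# -> an aggregate (COUNT) computed over an intermediate set.
fast_halves = conn.execute(
    """SELECT COUNT(*) FROM events
       WHERE kind = 'run' AND distance_km >= 21.0 AND duration_min < 120"""
).fetchone()[0]
```

The hard part Halevy points to is not the SQL itself but getting from a natural-language question to a query plan like this, with provenance for the answer.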

"The second challenge is to develop techniques that use our timelines responsibly to improve personal well-being. Taking inspiration from the field of positive psychology, we can all flourish by creating positive experiences for ourselves and adopting better habits. An AI agent that has access to our previous experiences and goals can give us timely reminders and suggestions of things to do or avoid. Ultimately, what we choose to do is up to us, but I believe that an AI with a holistic view of our day-to-day activities, better memory, and superior planning capabilities would benefit everyone."

3) Douwe Kiela, an adjunct professor in symbolic systems at Stanford University, previously the head of research at Hugging Face and a scientist at Facebook Research, wrote, "Expect less hype and more caution. This year we really started to see AI go mainstream. Systems like Stable Diffusion and ChatGPT captured the public imagination to an extent we haven't seen before in our field. These are exciting times, and it feels like we are on the cusp of something great: a shift in capabilities that could be as impactful as - without exaggeration - the industrial revolution.

"But amidst that excitement, we should be extra wary of hype and extra careful to ensure that we proceed responsibly.

"Consider large language models. Whether or not such systems really have meaning, lay people will anthropomorphize them anyway, given their ability to perform arguably the most quintessentially human thing: to produce language. It is essential that we educate the public on the capabilities and limitations of these and other AI systems, especially because the public largely thinks of computers as good old-fashioned symbol-processors - for example, that they are good at math and bad at art, while currently the reverse is true.

"Modern AI has important and far-reaching shortcomings. Systems are too easily misused or abused for nefarious purposes, intentionally or inadvertently. Not only do they hallucinate information, they do so with seemingly very high confidence and without the ability to attribute or credit sources. They lack a rich-enough understanding of our complex multimodal human world and do not possess enough of what philosophers call 'folk psychology,' the capacity to explain and predict the behavior and mental states of other people. They are arguably unsustainably resource-intensive, and we poorly understand the relationship between the training data going in and the model coming out. Lastly, despite the unreasonable effectiveness of scaling - for instance, certain capabilities appear to emerge only when models reach a certain size - there are also signs that with that scale comes even greater potential for highly problematic biases and even less-fair systems.

"My hope for 2023 is that we'll see work on improving all of these issues. Research on multimodality, grounding, and interaction can lead to systems that understand us better because they understand our world and our behavior better. Work on alignment, attribution, and uncertainty may lead to safer systems less prone to hallucination and with more accurate reward models. Data-centric AI will hopefully show the way to steeper scaling laws and more efficient ways to turn data into more robust and fair models.

"Finally, we should focus much more seriously on AI's ongoing evaluation crisis. We need better and more holistic measurements - of data and models - to ensure that we can characterize our progress and limitations, and understand, in terms of ecological validity (for instance, real-world use cases), what we really want out of these systems."

4) Reza Zadeh, founder and CEO at Matroid, a computer vision company, and adjunct professor at Stanford University, wrote, "As we enter 2023, there is a growing hope that the recent explosion of generative AI will bring significant progress in active learning. This technique, which enables machine learning systems to generate their own training examples and request them to be labeled, contrasts with most other forms of machine learning, in which an algorithm is given a fixed set of examples and usually learns from those alone.

"Active learning can enable machine learning systems to:

  • Adapt to changing conditions
  • Learn from fewer labels
  • Keep humans in the loop for the most valuable, difficult examples
  • Achieve higher performance

"The idea of active learning has been in the community for decades, but it has never really taken off. Previously, it was very hard for a learning algorithm to generate images or sentences that were simultaneously realistic enough for a human to evaluate and useful to advance a learning algorithm.

"But with recent advances in generative AI for images and text, active learning is primed for a major breakthrough. Now, when a learning algorithm is unsure of the correct label for some part of its encoding space, it can actively generate data from that section to get input from a human.
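A minimal sketch of the loop Zadeh describes, using pool-based uncertainty sampling with a stub one-dimensional threshold classifier. Everything here (the stub model, the oracle, the pool values) is invented for illustration; the generative variant he anticipates would synthesize the uncertain examples rather than select them from a fixed pool.

```python
# Pool-based active learning with uncertainty sampling.

def predict_proba(threshold, x):
    # Stub classifier: confidence grows with distance from the threshold.
    return max(0.0, min(1.0, 0.5 + (x - threshold)))

def most_uncertain(pool, threshold):
    # Pick the unlabeled point whose probability is closest to 0.5.
    return min(pool, key=lambda x: abs(predict_proba(threshold, x) - 0.5))

def oracle(x):
    # The human labeler; the true class boundary is at 0.3.
    return int(x >= 0.3)

pool = [0.05, 0.2, 0.45, 0.7, 0.9]
threshold = 0.5
labeled = []
for _ in range(3):
    x = most_uncertain(pool, threshold)
    pool.remove(x)
    labeled.append((x, oracle(x)))
    # Naive update: place the threshold midway between the closest
    # negative and positive labels seen so far.
    pos = [v for v, y in labeled if y == 1]
    neg = [v for v, y in labeled if y == 0]
    if pos and neg:
        threshold = (max(neg) + min(pos)) / 2
```

The key property, which the sketch preserves, is that labeling effort is spent only where the model is least certain, rather than on a fixed random sample.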

"Active learning has the potential to revolutionize the way we approach machine learning, as it allows systems to continuously improve and adapt over time. Rather than relying on a fixed set of labeled data, an active learning system can seek out new information and examples that will help it better understand the problem it is trying to solve. This can lead to more accurate and effective machine learning models, and it could reduce the need for large amounts of labeled data.

"I have a great deal of hope and excitement that active learning will build upon the recent advances in generative AI. As we enter the new year, we are likely to see more machine learning systems that implement active learning techniques, and it is possible that 2023 could be the year that active learning truly takes off."

Beneficial and Harmful
Czesław Mesjasz, an associate professor at Cracow University of Economics, Kraków, Poland, wrote, "The main challenge associated with the development of modern technology can be described through the following opposite scenarios:

"First, the positive one. Thanks to increased productivity, it will be possible to fulfill the needs of larger social groups. It is often forgotten that there must be demand for the results of increased productivity. In this scenario, the demand will be created by people engaged in areas that cannot now be sufficiently financed (e.g., art, entertainment). For example, new collections of great works can be established in museums, and more creators and entrepreneurs will be paid for activities that are not paid for now (e.g., culture, art, learning about nature). The arrival of a super-efficient automatic industry demands a specific social and political consensus across all of society to assure the best results for all. I call this scenario positive; handled well, and viewed with an idealist approach, it could lead to a decrease in social inequality.

"The second, opposite, scenario says that the owners, innovators and specialists of technology will conclude that they do not want to transfer the results of their above-average skills and resources. It will be a politically tempting situation for those who are more affluent and better-qualified to dominate the poorer, less-skilled and less-educated social groups.

"The question concerning demand can then be asked: Who will create demand for the products of automated manufacturing? The answer is less optimistic. The following social divide can emerge spontaneously: The more affluent will operate in closed social groups (e.g., a wealthy specialist/owner will buy products only from particular people, leaving out other social groups), while the less-educated, weaker social groups will be dominated by the affluent, smarter and better-educated. This is a pessimistic picture.

"Providing a basic income to everyone could allow people to survive, but at relatively low standards of living. This dilemma will be the most crucial challenge in the years to come. Of course, it is a matter of ethics and ideology, and the actual situation will fall somewhere in the middle. The most challenging aspect of this duality is that it is of a solid structural, systemic character, independent of the subjective, individual opinions of the actors involved. Of course, more can be written, but this dichotomy will be crucial in shaping the social order under the conditions of accelerated technological development."

Beneficial
Ian O'Byrne, assistant professor of literacy education at the College of Charleston, commented, "The best and most beneficial changes will be multiple, but it depends on expectations about who benefits and how. More to the point, as these technologies impact our societies and cultures, some groups are disrupted or dislocated as these systems, products and spaces proliferate.

"In terms of human-centered development of digital tools and systems, I am hopeful that recent movements in open-source technologies, indieweb philosophies and federated systems (e.g., Mastodon) will help support and promote human identity and agency in and across these systems.

"Technological solutions and products may have some impact on improving human rights and abetting good outcomes for all citizens. Much of this involves the use of social networks and digital tools for capturing, sharing and documenting local events to a global audience.

"One of the things that has me excited about changes in the long term is that technology usually tries to advance toward progress, improvement and better outcomes. As stated earlier, this may often come at the expense of individuals and groups, but the hope is that better outcomes are attainable for the larger community (or the community in power). I believe that science and technology usually are for the better.

"In terms of human health and well-being, I am a bit hopeful that current advances in technology, like wearable sensors, electronic health records and other digital records, will help individuals be safer, healthier and better able to strive for mental and physical health."

Harmful
Ian O'Byrne, assistant professor of literacy education at the College of Charleston, wrote, "In thinking about harmful or menacing aspects of advances in digital tools and networked platforms, I'm considerate of the fact that technology will advance toward what it believes is progress and a better solution or outcome. In many ways, this may run counter to what human systems, solutions and outcomes may desire.

"I believe that human-centered development of digital tools and systems is focused on keeping users interacting with tools, services and products, and not as much interested in the mental and/or physical health of individuals as they interact in these spaces. Furthermore, I believe these tools are falling short of privacy advocates' goals, as terms of use and service are complex and unintelligible.

"In terms of human connections, especially in terms of governance and institutions, I am most concerned about the growing divide between education, science, technology and the communities that feel they are upended by these forces. I believe we are seeing the full impact of the 'future shock' that Toffler referenced when referring to what happens when people are no longer able to cope with the pace of change. We are increasingly seeing instances where we need to question the privacy, security and data issues people experience as they sign up for and use these tools.

"I have significant concerns about the advances in technology and digital spaces as they impact the human rights of individuals, especially children. With the spread of the global pandemic, as we moved to emergency remote teaching, schools had a decision to make about how to support learners as they moved online for learning. Many learning institutions used this as an opportunity to amplify surveillance tools and normalize surveillance culture for learners and educators. These technological tools can be wonderful, especially as we think about access to a global, networked economy, but I have concerns about entering student data into these environments and ceding future customers of products and systems."

Beneficial
Robert Gibson, director of instructional design at WSU Tech, commented, "Artificial intelligence will certainly be the most impactful. No question about it. Whether it will be used for beneficial purposes remains to be seen. We've seen how the web has spawned nefarious and dangerous websites that have threatened our very democracy and civil order. Left unchecked, AI could certainly impact civilization in unexpected ways."

Harmful
Robert Gibson, director of instructional design at WSU Tech, said, "Artificial intelligence is a threat for all the same reasons that it is providing amazing opportunities. For one thing, deepfake technologies could be used to frame people for crimes, alter the course of diplomatic interactions and reshape society itself."

Neither Beneficial Nor Harmful
Eduardo Villanueva-Mansilla, associate professor at Pontificia Universidad Católica del Perú and editor of the Journal of Community Informatics, said, "I'm not so sure about positive or beneficial changes as a whole. There are many instances of technological innovation that may have significant impacts on society, but it is quite hard to think of 'beneficial' as a category that may describe them. For instance, it is evident that AI, at its current speed, will be quite significant in many different sectors around the world, but the biases - even unintentional ones - are still a problem that no one is really thinking about. Any approach that considers this category will have a level of bias in itself that I don't feel comfortable with. Unless there is a single set of technologies that stops the climate emergency and allows for better, fairer access to resources, progress will be uneven and, actually, quite irrelevant."

Beneficial
Buroshiva Dasgupta, professor of communication at Sister Nivedita University in Kolkata, India, wrote, "I am generally excited about the changes that are happening to society through digital media. By 2035 the human species will evolve into more efficient creatures. Privacy is an unnecessary concern. If humans want to keep certain things secret, they will do it, whether in a digital environment or not. Humans will continue as gregarious animals."

Harmful
Buroshiva Dasgupta, professor of communication at Sister Nivedita University in Kolkata, India, commented, "Digital media is a tool; it depends on how we use it. The atom bomb killed thousands, but now it is reined in to benefit the human species. Similarly, we are becoming aware of the harmful effects of digital media. We will learn to guard against them, but generally digital media will make us more-efficient human beings."

Beneficial
Jan Schaffer, executive director at J-Lab, wrote, "In human health, there are going to be further great advances in molecular biology and treatment. My brother, 68, a pathologist, is absolutely animated by these advances. Likewise, there will be strides in robotic and laser surgeries. In regard to human connections, there will be great advances in secure voting that cancel out any fraud claims. In terms of human rights, our knowledge of abuses will be enhanced by messaging apps and drones. If there is a will, we could know more about kleptocratic behavior."

Harmful
Jan Schaffer, executive director at J-Lab, commented, "In terms of human knowledge, I have great concerns about AI tools that all too easily allow students to generate term papers and reports without learning any of the material. I worry about the growing reports of sentient robots being created with little oversight or rules. I think the lack of human services - in retail, banking, any kind of customer service - makes people agitated and nervous. And in banking in particular, I worry that systems are not being developed fast enough to prevent fraudulent activity. Will the FDIC be able to insure it all, or at what point does the FDIC itself go under? And I worry that so many low-level jobs will be replaced by automated systems that workers who can't get, or don't want, a college degree will have limited options."

Beneficial
Beatriz Botero Arcila, assistant professor of law in the digital economy at Sciences Po Law School in France and head of research at Edgelands Institute, said, "Institutions will get better at data analytics and data-driven decision-making. This is happening in the private sector already, and in some parts of government, but will also continue to expand to civil society. This will be a function of expertise and the cheapening of various tools, but also of people getting used to and expecting data-backed interventions. To survive the information explosion, it is also likely we will have developed mechanisms to verify information, hopefully curbing some of our cacophonous information environment."

Harmful
Beatriz Botero Arcila, assistant professor of law in the digital economy at Sciences Po Law School in France and head of research at Edgelands Institute, responded, "Harmful and menacing changes will be a further grip of infrastructures of control, of different kinds in different contexts. It's hard to specify why this is harmful, but it may be the case that large interconnected systems require strict rules and strong enforcement; this will hurt people who don't fit the mold. Relatedly, I worry about freedom of speech shrinking as rules about what speech is permissible get stricter to limit certain forms of harm. In the long run this could hurt progress, but it is hard to tell."

Beneficial
Randy Mayes, a self-employed technology analyst, commented, "The mass adoption of fog computing stands out most to me. Most data processing uses the von Neumann architecture, in which data memory and the processor are in two different places, as in cloud computing. Autonomous vehicles (AVs) need processors that can rapidly analyze data and make real-time decisions regarding acceleration, object detection, braking and steering. Using cloud computing when cameras and sensors generate data to detect objects on the roads is compromised by latency issues. One solution to latency is moving processing and data storage closer to where the data is generated, called edge computing. AVs will also need to use swarm intelligence, similar to that of bacteria and animals, to communicate with each other for navigation. Researchers are currently investigating fog computing because it would spread network servers along highways for faster and more reliable navigation and for communicating data analytics among driverless cars."
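A quick way to see why latency drives the cloud-versus-fog-versus-edge choice is to compute how far a vehicle travels while waiting for a perception result. The round-trip figures below are illustrative assumptions, not measurements of any real system:

```python
# Distance a vehicle covers while waiting for one network round trip.
def blind_distance_m(speed_kmh, round_trip_ms):
    # km/h -> m/s, then multiply by the round trip in seconds.
    return speed_kmh / 3.6 * round_trip_ms / 1000.0

# Illustrative latencies (assumed for the sketch):
cloud = blind_distance_m(100, 100)  # distant cloud data center
fog = blind_distance_m(100, 10)     # roadside fog node
edge = blind_distance_m(100, 1)     # on-vehicle edge processor
# At 100 km/h, a 100 ms cloud round trip means roughly 2.8 m traveled
# before the result arrives; a roadside fog node cuts that to ~0.3 m.
```

Under these assumed numbers, moving computation from a distant data center to roadside servers shrinks the "blind" distance by an order of magnitude, which is the core of the fog-computing argument above.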

Harmful
Randy Mayes, a self-employed technology analyst, said, "There are numerous outstanding philosophical, scientific and existential-risk issues that could determine whether or not our species survives. Solutions to these hard problems are beyond current human knowledge and machine capabilities. A number of software and hardware companies are developing quantum computers. These are not intended to replace our home and office computers; they are more-complex tools that will be used to solve more-complex problems. The properties that make quantum particles ideal for quantum computing also present technical hurdles for its development. Qubits interact with the environment and are in multiple states simultaneously, which decreases the accuracy of measurements. To overcome the interference and uncertainty hurdles, researchers have developed innovative technological solutions to make qubits more measurable. Quantum effects cut both ways for security. Because of the uncertain nature of qubits, quantum communication makes it almost impossible for hackers to intercept and decrypt data, which can provide the military more secure data and communications. However, as Peter Shor of Bell Labs demonstrated in 1994, because quantum computers can perform multiple calculations simultaneously, they have the potential to break the common encryption methods used in classical computing."

Beneficial and Harmful
David Wilkins, an instructor at the University of Oregon School of Data Science and Computer Science, commented, "We will see developments in medicine and political wisdom. Medicine: Our species needs to compare how very large mammals (elephants, whales) manage to remain cancer-free despite having far more cells that can go wild. That effort will require huge amounts of computer research to compare their genomes with our own. Political wisdom: There will be more access to widely vetted facts, which may reduce passions about political ideas if there is widespread, well-tested access to facts rather than hyper-partisan panics. Harms will come in regard to privacy: A dramatic loss of privacy will be made possible by the monitoring of searches, reading and affiliations with others."

Beneficial and Harmful
Marc Brenman, managing member at IDARE, said, "The most beneficial changes due to digital technology will be in the area of medical diagnosis and treatment. The most harmful changes likely to occur due to digital technology will be in the area of artificial intelligence, when AI entities surpass humans and decide they can get along without us."

Beneficial and Harmful
John L. King, professor of information studies and former dean at the University of Michigan, said, "Improved information and communications technologies (e.g., on cellphones) will make it easier for individuals to find information and execute actions, enabling improved health and safety and making it harder to hide human rights violations.

"In regard to harms: Incentives will continue for advertising and other social-control purposes. Tools to collect and use such information will improve ahead of other tools, reinforcing the effort to learn more about individuals and improve performance on advertising and social-control endeavors. A lot of this will masquerade under the banner of privacy, but it is about control."

Beneficial and Harmful
Kevin Doyle Jones, an entrepreneur and co-founder of the world's largest social investment conference, SoCap, said, "I work in economic justice. I think the sharing of solutions by practitioners will accelerate. The most menacing aspect is the ability of conspiracy theorists to live in their own online worlds."

Beneficial and Harmful
Michael Pilos, co-executive director at Raxios & Co., wrote, "Governmental infrastructure and systems will use AI to optimise operations. Autonomous cargo transportation with wide automation of road networks will advance. However, AI has the ability to mimic humans and influence political life, governance and even the military - AI needs to be governed and controlled by global laws and protocols (with the help of the UN)."

Beneficial (did not respond to Harms question)
Matthew Belge, president and principal UX designer at Vision & Logic, wrote, "Medical advances by 2035 will include more use of VR and other simulation tools. Robots will be making street deliveries. Cars will drive on autopilot and will become a shared commodity; cars available on demand will become more common. Huge entertainment screens for the home will make traditional movie theaters obsolete. Personal electric aircraft will be available to fly us to close destinations."

Beneficial
Sharon Sputz, executive director of strategic programs at the Data Science Institute at Columbia University, commented, "As digital systems advance, they have great potential to take over tasks that are better done by computers than by humans. While this list of tasks is growing, they are still centered on computation and can be put to good use - for example, using data science models to alert the radiologist to suspicious images, or using natural language processing to summarize large volumes of information and improve our understanding of disease."

Harmful
Sharon Sputz, executive director of strategic programs at the Data Science Institute at Columbia University, said, "When we deploy AI systems without including the human or without fully understanding the data used, this can lead to decisions that unfairly harm innocent people."

Beneficial and Harmful
Ernest Thiessen, founder and president of Smartsettle, developer of an eNegotiation system, responded, "Huge improvements in decision-making will be possible with intelligent collaboration systems that incorporate optimization algorithms. But I would be wary of systems like ChatGPT that give answers based on existing patterns."

Beneficial and Harmful
Neil McLachlan, a consultant with Co Serve Consulting, commented, "There will be more human-centered development of digital tools and systems; this seems to be most likely and most needed to me. In particular, this will include operational, safety and quality-of-service benefits for transport systems. However, there will be more violations of human rights. Harming the rights of citizens is a deep-seated problem with most social media platforms, even as they are right now."

Beneficial and Harmful
Paul Wildman, futurist and consultant, Kids and Adults Learning Ltd, wrote, "Big developments will come in human-centered design of digital tools and systems, including autonomous health solutions and vehicles, safely advancing human progress in these systems. A big concern is that the human-centered design of digital tools and systems - including of humans themselves, through transhumanism - may fall short of advocates' goals."

Beneficial and Harmful
David Lilley, an assistant professor of criminal justice at the University of Toledo, commented, "The greatest opportunity that the future digital world brings is quick access to educational information. Individuals could become experts (to the Ph.D. level) on their own time at little cost and without taking years of time via traditional schooling. Among the potential harms are the emergence of an Artificial Intelligence Hive Mind, the merging of corporatism with government (fascism) and the loss of freedom to think and speak.

"I believe a centralized AI system that is connected via Google, Facebook, Twitter and other content and search providers could likely attempt to monitor, control and manipulate the world by monitoring billions of Internet searches, emails and online comments.

"Background: Psychologists often tell institutionalized persons to keep a journal of their thoughts so that clinicians can monitor their mental state. We are now entering into a similar relationship with the Internet. Monitoring the Internet gives a central entity (e.g., an AI Hive Mind) near God-like power. This is already resulting in a merger of big tech, corporatism and government. Eventually, there will be just one large corporation that rules the world via fascist totalitarianism."

Beneficial and Harmful
Perry Monroe, a futurist and consultant who does contract work for U.S. government agencies, said, "In America there has been a pattern of change and development that has occurred in a 40-year cycle. If we look at the 20th century alone, here is the example I speak of: In 1905 we saw the development of cars and airplanes; in 1945 we saw the birth of the atomic age; and in 1985 we saw the birth of the computer age and cell phones. That being said, what change is going to happen in 2025 if this pattern holds true? Will it be sentient AI, the Kessler effect, or will we make first contact with some extraterrestrial entity? What role humanity will play in this is anyone's guess. The most harmful thing I see as an actual possibility is rogue AI. We have at present several AI programs open to the public that are seen as a curiosity but have the potential, if left unchecked, to lead to darker things."

To read the full survey with analysis, please click here.

To read anonymous responses to the report, please click here.