The sudden rise of Artificial Intelligence worries many people, for plenty of reasons. Some fret about truly existential challenges, such as the possibility that AIs will develop consciousness and even turn on their human creators. My concern is more here and now: I worry that once again business leaders are rushing to show they are on the cutting edge by deploying a technology they barely understand.
I’ve seen this movie before. As one of the early pioneers of Customer Relationship Management 30 years ago, I have closely tracked CRM and the subsequent rollouts of other innovative digital technologies – rollouts that have all too often been done in ways with harmful consequences. I was one of those who watched in horror in 2016 when the CEO of Wells Fargo was confronted in Congress over the “cross-selling scandal.” The bank paid billions in fines and suffered reputational damage that persists to this day. Much of that was enabled by the disastrous use of the CRM technology I helped invent at Oracle.
Right now, the consensus among business leaders, especially in Silicon Valley and other tech centers, is that AI’s long-awaited moment has arrived. But too often, when new tech gets installed before the people in charge really understand it, they flail about trying to figure out exactly how it will improve their company’s operations, sales, or service. More important for the rest of society, they also fail to think ahead about the risk that the innovation will be used for evil, illegal or harmful purposes.
I have heard too many CEOs make jargon-laden endorsements of technologies with which they aim to signal that they are on the cutting edge. Much of that sort of thing preceded the dot-com boom and subsequent crash in 2000. In my long career in business, mostly in Silicon Valley, I have seen first-hand the damaging effects of technological illiteracy at companies’ highest levels.
Right now, bosses feel tremendous pressure from shareholders and their peers to have a cool, future-forward AI strategy. They are besieged by consultants knocking at the door like old-time snake-oil salesmen. These are fertile conditions for needless and thoughtless technology adoption, with potentially large negative consequences for employees, customers, and shareholders alike.
CEOs and political leaders should focus on finding clear, positive use cases before they heedlessly deploy AI. They must make sure – in advance, as much as possible – that there are concrete, sustainable benefits for customers and citizens. Indeed, it may be helpful to think of AI less as artificial intelligence than as augmented human intelligence. Rather than getting carried away by the seemingly unlimited, almost mystical, yet all too often imprecise transformational power of this technology, leaders should focus on identifying specific ways in which it can improve things for humans.
Too often such technologies are deployed “top down,” with disastrous or unfortunate consequences. So, another piece of advice is to take practical steps to counter that tendency by engaging stakeholders in the AI rollout from the beginning, bottom-up.
More fundamentally, those deploying AI must learn from how previous phases of the digital revolution went wrong in crucial ways.
Next year the Internet turns 50. In many respects it has brought huge benefits to the world – especially in democratizing connectivity and access to knowledge. Yet, especially in the last decade since its 40th birthday, the way it evolved has had terribly destructive side-effects for our societies. These range from severe mental-health effects (especially on the well-being of teenage girls) to the pernicious spread of misinformation and consequent social polarization, which is now undermining trust in important institutions, especially in democracies.
A decade ago, with Vint Cerf, one of the original fathers of the Internet, I co-founded an organization called the People-Centered Internet. It aims to address these downsides and ensure we achieve the people-centered vision that was central to the non-commercial internet at its origin, when it organically spread from university to university, from country to country, animated by a central intrinsic presumption: that anyone anywhere can participate in shaping a better future.
Our mission at PCI has been to deliver an Internet that works for the people and with the people, not against them and without them. The rise of AI makes this ever more urgent: the remarkable power AI has to process and to learn carries the potential to make those downsides far worse. Vint and I have now partnered with Jascha Stein, an expert in AI and psychology, to expand PCI’s mission beyond the Internet to a people-centered AI and digital future. Done right, in an inclusive, people-centered, energy-efficient way, the strengths of AI and other digital technologies can help us reverse the widening digital divide and enable a thriving society and a flourishing planet. We are not pessimistic about AI’s power, only about how it is overseen and managed. PCI served as the chair of Digital Regulation for the UN General Assembly Science Summit in 2023 and will be co-chair in 2024.
One priority should be to ensure equality of access to AI. Under the next-generation leadership of Jascha Stein, who is under 40, PCI and its partners are launching a global campaign promoting the importance of people’s participation, entitled: Without You, the Future of the Internet and AI Will Be Lost. Greater digital equity can be achieved by designing applications that usefully augment our social and human intelligence, such as population and precision health and learning. AI can affordably help the whole world become healthier and better educated. But we need to ensure these tools are available all over the world, via mobile phones (and not just the newest, smartest ones). If AI can be deployed while demonstrating clearly how it benefits humanity, that will increase trust both in this incredible technology and in the businesses that deploy it well.
AI support for multilingual access to services is a great example of how AI can expand services and markets. Forward-looking companies are engaging their employees and customers in fine-tuning context-sensitive language translation. In the process, they gain greater insight into customer intentions and needs.
I recently visited Bangladesh, which introduced the critical and much-needed concept of #ZeroDigitalDivide to the United Nations General Assembly in September this year. Bangladesh is harnessing digital tech to set a clear path to becoming a middle-income country. To reverse the decline of trust at the highest levels, the divide between digital haves and have-nots must be bridged, and Bangladesh is showing us a pathway to do it.
For instance, Google, working with a2i in Bangladesh, developed an AI flood-forecasting initiative called Flood Hub. It tracks how rivers ebb and flow, as well as tide anomalies, and can give local authorities early warnings. The system has already enabled up to 40 million people to take prompt action for collective evacuation. It also aids the protection of water resources. [link to 16 AI Use cases in Bangladesh]
Second, society at large must be deliberately engaged in the debates and discussions about how to deploy AI, and in providing feedback on how it is rolled out. The giant platform companies that have come to dominate the internet, and are well placed to dominate AI, talk often about their cultures of data-centric experimentation. Yet it is striking that this is a private process, with data access tightly controlled. The results of their many – even constant – experiments are kept secret.
The rise of AI makes even clearer the need for greater transparency of use, and for broader stakeholder governance of data and experimentation, giving a meaningful say to users and the broader community, not just to providers. At the People-Centered Internet, we call these strategies community learning and living labs, in which data cooperatives benefit science.
Models of such labs exist in other parts of the economy and could be adapted to democratize the rollout of AI and ensure a more people-centered AI. In the United States, for example, there are Federally Qualified Health Centers in 10,000 locations. These centers work together in Breakthrough Collaboratives to improve the quality of community health (link). In the European Union, leaders are convening Citizens’ Panels (link) to engage public participation in understanding and meeting the challenges of online disinformation, with tools for content verification and for empowering people to become active creators of trustworthy information.
Such community learning and living labs require the enthusiastic participation of the businesses that are developing and deploying AI. All those innovative startups and hard-charging Fortune 500 companies rely on digital public infrastructure to do their business. Engaging in such community-centric initiatives would be one way of repaying the favor. Tech companies often say they are serious about stakeholder capitalism. This is a way to show they mean it. Any other approach would simply continue the old profit-maximizing, shareholder-centric model that has caused so many problems up to now.
Advances in digital public infrastructure (DPI) in the wake of Covid add up to one of the biggest business opportunities in generations. It is fueled by an ongoing surge of investment in digital transformation by the nations of the G7 and G20, and supported by extensive lending in emerging economies by the World Bank, the IMF, other multilateral development banks and the United Nations Development Program. At the AI+DPI Summit in Bangladesh, the opportunities highlighted included: India’s Unified Payments Interface, which facilitates 12 billion transactions monthly; Indonesia’s digital identity system, which has reduced registration time at 6,000 financial institutions from 60 minutes to 5; Uganda’s Accessible Digital Textbook, developed with UNICEF, which has helped hundreds of children with disabilities graduate from primary school; and India’s Open Network for Digital Commerce, which expanded to 230 cities and added 36,000 merchants in its first year.
If we manage this right and deploy it alongside systems of public participation and stakeholder input, such spending will enable the world to avoid potentially costly mistakes. It will help build public trust that, in the long run, AI will be a force for good. And what better year to launch this new approach to governance than 2024, as we celebrate the internet’s 50th birthday?
Posted 20/05/2025