AI Wave and Ethical Dilemmas for the Decision-makers for Inclusive AI

07/12/2021


AI Wave and Growth Possibilities

The Artificial Intelligence (AI) wave is going to be exponentially larger than the Information Technology (IT) wave, with data as the new gold, and is expected to create additional economic value of US $15.7 trillion by 2035.[1] This has massive implications for technology change, societal innovation, and business innovation; we are poised to leapfrog from a knowledge economy to a so-called experience economy driven by intelligent intangibles.

AI is often described as the ability of machines to perform tasks such as thinking, perceiving, learning, problem-solving, and decision-making. The transformational capacity of the AI technology revolution has been compared to that of electricity, leading to the “Fourth Industrial Revolution” (4IR).[2]

Initially propagated as a technology that could mimic human intelligence, AI has evolved in ways that far exceed its original conception, with multiple application areas. With massive advances in data collection, processing, and tagging, and computation power that stretches to the application edge, AI systems can now be deployed to take over multiple tasks with or without humans in the loop, enable connectivity, and enhance productivity. AI’s capabilities have expanded dramatically over the years in multiple waves, and so has its utility in a growing number of fields.

The AI wave is different in the way rapid technology innovation (a combination of AI, robotics, 5G, and quantum technologies) is occurring together with business model innovation (a digital, intangibles-driven experience economy). The derivative effects of these exponentially growing technologies, combined with business model innovations, will eventually create an ecosystem of consumers hungry for a digital, experience-driven economy. A case in point is Amazon’s Alexa, which creates the digital experience of owning an intelligent assistant that provides a personalised experience in anything we do, be it shopping, travel, eating out, or watching movies.

Global stock markets are seeing this shift very clearly, and it is accelerating rapidly post-COVID, with 90 percent of perceived value now residing in intelligent intangibles.[3] It is important to see this impact in its entirety and with clarity. The Wall Street Journal ran a story[4] on how the Big Tech firms became even bigger post-COVID, amassing almost US $8 trillion in market value, while non-digital firms struggled to cope with the pandemic (see Fig 1 below).

Fig 1: Market value of 2020’s five biggest U.S. tech stocks, by month[5]

In the foreseeable future, courtesy of AI, digital economies will start reaping rich benefits from the resulting massive cost advantages in labour and time. AI will penetrate more broadly and deeply because of AI and Machine Learning (ML) processes in which machines increasingly learn and improve their performance over time, in turn drawing more capital and talent towards AI. This rapidly iterating loop will create massive growth for the companies and countries that master the AI wave, and because of AI’s cumulative impact, it can give rise to massive monopolies unless we are able to create inclusive innovation ecosystems with strong AI governance and accountability at global scale.

Important new trends and challenges

While the AI wave is gathering capital and talent, policymakers will be hard-pressed to explore and pursue equitable and inclusive growth, not just for developed regions but also for developing regions like India. Three major trends encompassing technology and business model innovation are critical for policymakers to understand and act upon.

Era of “hyperinnovation”: implications of rapid change

Unlike the previous two waves of the internet and mobile, where technology change was followed by business model change, the AI wave is seeing technology innovation and business model innovation happen simultaneously. The technology innovation of the internet was followed by the rise of e-commerce almost a decade later. Mobile followed the same pattern, with the business model innovations of the location-based sharing economy arriving only after a while. The AI age is different: technology changes are rapid and are co-evolving with business model innovations. For example, innovation in driverless cars together with ride-hailing business models could signal a big shift in mobility. This is going to result in an era of “hyperinnovation”, where multiple verticals will get disrupted quickly, with a much faster rate of innovation and adoption.

The “Resources as a Service”[6] (RaaS) business model, an evolved version of the Software as a Service (SaaS) model in which the consumer is charged based on usage of an AI feature, system, or bot, would not only change the way we pay for healthcare, financial services, or mobility, but also drive very different adoption patterns in different parts of the world. For example, in the near future you might pay for a knee surgery performed by a robot surgeon based on how many miles you are able to walk afterwards (see the illustrative pricing sketch below). While this might make healthcare more accessible and affordable even in the remotest of places, it will have a deep impact on the provider-led healthcare systems that most countries have, forcing most nations to adopt a patient-centric model.

Our governance systems, from regulation to policy, must also reflect these dynamics driven by the rapid pace of AI innovation. We all suffer from human biases; however, a technology like AI can scale these biases exponentially and much more quickly through these iterative technology and business loops. This challenges us as a society to adapt to a much faster rate of change and obsolescence, unless we are able to create a system of well-governed, responsible AI alongside the innovation ecosystem. Further, the hyperinnovation loop has the potential to create a wider rift between ‘AI haves’ and ‘AI have-nots’ in a very short period of time, unless we are able to create a model that spreads the fruits of AI innovation more evenly.
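As a rough illustration of the outcome-metered RaaS pricing described above, the Python snippet below charges for a service based on the outcome actually delivered rather than on an upfront purchase. The rates, cap, and outcome metric are invented purely for illustration and are not drawn from any real provider.

```python
# Illustrative sketch of outcome-metered "Resources as a Service" (RaaS) pricing.
# All rates, caps, and metrics below are invented for illustration only.

def raas_charge(units_of_outcome: float, rate_per_unit: float, monthly_cap: float) -> float:
    """Bill for the outcome actually delivered, capped to keep charges predictable."""
    return min(units_of_outcome * rate_per_unit, monthly_cap)

# Example: a robot-assisted knee surgery billed per kilometre the patient walks
# in the month after surgery, instead of a single upfront fee.
kilometres_walked = 42.0
print(raas_charge(kilometres_walked, rate_per_unit=3.5, monthly_cap=200.0))  # 147.0
```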

Fig 2: AI Wave and experience economy[7]

Rise of Transformers

Innovations in AI (compute, data, storage), combined with the Internet of Things (IoT) for autonomous actions and 5G for always-on, low-latency communication, are leading to a completely new kind of technology core with AIoT (Artificial Intelligence of Things)-first architectures, where core business models are built around a set of new technology innovations. By observing the actions or behaviour of, patterns among, and relationships between key entities (for example, words in a story or cats in a video), the system bootstraps an overall understanding of its context by itself; this is referred to as unsupervised learning.[8] Unsupervised learning can scale AI quickly and lead to adoption across multiple verticals.
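To make the idea concrete, the toy sketch below lets a program discover that ‘cat’ and ‘dog’ are used similarly without ever being given labels, purely from co-occurrence patterns. It is a minimal illustration of the unsupervised principle on an invented five-sentence corpus, not the method used by any particular production system.

```python
# A toy sketch of unsupervised learning on text: the program is never told what
# any word "means" or given labels; it only counts which words appear together,
# and words used in similar contexts end up with similar vectors.
# The five-sentence corpus is invented for illustration.
import numpy as np

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
    "stocks rose as markets rallied",
    "markets fell as stocks dropped",
]

# Build a vocabulary and a word-by-word co-occurrence matrix (window = whole sentence).
vocab = sorted({w for line in corpus for w in line.split()})
index = {w: i for i, w in enumerate(vocab)}
cooc = np.zeros((len(vocab), len(vocab)))
for line in corpus:
    words = line.split()
    for w in words:
        for c in words:
            if w != c:
                cooc[index[w], index[c]] += 1

def similarity(a: str, b: str) -> float:
    """Cosine similarity between the co-occurrence vectors of two words."""
    va, vb = cooc[index[a]], cooc[index[b]]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

# "cat" and "dog" share contexts, so they score far higher than "cat" and "stocks".
print(similarity("cat", "dog"), similarity("cat", "stocks"))
```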

Unsupervised learning is already having a transformative impact in natural language processing (NLP), where it is being adopted at a fast clip, courtesy of a new unsupervised learning architecture known as the ‘Transformer’. The most prominent breakthrough at present is OpenAI’s release of the Generative Pre-trained Transformer 3 (GPT-3)[9], which has enthralled the technology and business worlds alike. Leveraging massive training data and around 175 billion trainable parameters, it can write decent poetry, generate functioning code, compose useful business memos, write articles about itself, and much more. While the early use cases are frothy, they point towards an interesting future. Wu Dao 2.0, the largest language model to date with 1.75 trillion parameters, is another success of this approach, surpassing OpenAI’s GPT-3 and Google’s Switch Transformer in size. Backed by the Chinese government-supported Beijing Academy of Artificial Intelligence (BAAI), Wu Dao 2.0 aims to enable ‘machines’ to think like ‘humans’ and achieve cognitive abilities beyond the Turing test.
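As an illustration of how such a model is used in practice, the sketch below generates text with the open-source Hugging Face transformers library. GPT-3 itself is reachable only through OpenAI’s hosted API, so the freely downloadable GPT-2, an earlier and far smaller member of the same Transformer family, stands in here; the prompt is invented.

```python
# A minimal sketch of Transformer-based text generation, assuming the open-source
# Hugging Face `transformers` library (pip install transformers torch).
# GPT-2 stands in for GPT-3, which is available only via OpenAI's hosted API.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Artificial intelligence will reshape healthcare because"
outputs = generator(prompt, max_length=60, num_return_sequences=1)
print(outputs[0]["generated_text"])
```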

Such Transformers might break completely new ground in AI and create horizontal utilities that nimble startups can build on top of for their customer use cases, adding differentiation and value. This approach might eventually spill over to the other major areas that AI is exploring. However, it is also data- and compute-intensive and could prove counterproductive in the longer term from an energy perspective.

Privacy-aware computing and the fight for learning data

One of the overarching challenges of the AI-driven experience economy is building AI while ensuring data privacy. Because data is the lifeblood of modern artificial intelligence, data privacy issues play a significant, and often limiting, role in AI’s growth trajectory. Harsh data protection regulation robs AI of its most transformative impact, deep personalisation; yet for ethical reasons, personal data has to be protected and provided only with consent. Current architectures take an “either-or” approach to data collaboration, limiting the amount of data from which AI systems can learn without infringing upon privacy. The fight for open learning data is going to become a major challenge of the AI-driven age. The current model, in which a few companies amass massive amounts of data and computing capability to drive deep learning AI, coupled with the diminishing role of universities struggling with the flight of AI talent[10], can be counterproductive for the future growth of inclusive innovation. The rise of privacy-aware AI with confidential computing is worth noting in this regard.

A framework for sharing privacy-sensitive data, such as a citizen’s financial or health data, has been proposed by the Government of India’s NITI Aayog and is called the Data Empowerment and Protection Architecture (DEPA)[11]. It incorporates privacy and active consent by design. Profiting from data is important from a developing nation’s perspective, as people are becoming data-rich before they are asset-rich. The key aspect of this architecture is the concept of a consent manager, whose role is to acquire consent from data owners, with this consent captured in a standardised format based on an XML schema. This approach is promising and, combined with advances in secure trusted hardware enclaves, could be a game changer.

Although encrypted storage and network sessions typically protect data under most circumstances, the use of shared infrastructure and services such as cloud instances and containers potentially opens applications and data to attack while they are executing. Furthermore, since the data must be unencrypted during code execution, it does not matter how securely it was treated during storage or transport. The only way to guarantee data security during application execution is to exploit hardware features, now included in modern processors, called trusted execution environments (TEEs). This would ensure that secure data collaboration is more widely and, more importantly, more securely accessible. This, in turn, will create more trust among data owners for collaboration, allowing far richer data to become available more widely, leading to more inclusive development and advancement of AI with alternate data models.
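To make the consent-manager idea concrete, the sketch below models a time-bound, purpose-bound consent artifact and a simple validity check in Python. The field names and logic are illustrative assumptions only; the actual DEPA consent artifact follows the schema defined in NITI Aayog’s specification, not this simplified structure.

```python
# Illustrative sketch of a consent artifact, inspired by the DEPA consent-manager idea.
# Field names and checks are assumptions for illustration, not the official DEPA schema.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class ConsentArtifact:
    data_principal: str        # the citizen whose data it is
    data_fiduciary: str        # the institution requesting access (e.g., a lender)
    information_provider: str  # where the data lives (e.g., a bank)
    purpose: str               # declared purpose of use
    expires_at: datetime       # consent is time-bound
    revoked: bool = False

    def permits(self, requester: str, purpose: str, now: datetime) -> bool:
        """Honour a data request only if it matches the artifact and consent is still live."""
        return (
            not self.revoked
            and requester == self.data_fiduciary
            and purpose == self.purpose
            and now < self.expires_at
        )

# The consent manager issues and tracks such artifacts; it never holds the data itself.
consent = ConsentArtifact(
    data_principal="citizen-123",
    data_fiduciary="lender-abc",
    information_provider="bank-xyz",
    purpose="loan underwriting",
    expires_at=datetime.utcnow() + timedelta(days=30),
)
print(consent.permits("lender-abc", "loan underwriting", datetime.utcnow()))  # True
print(consent.permits("advertiser-q", "marketing", datetime.utcnow()))        # False
```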

An approach for creating an inclusive, robust, and fair AI innovation ecosystem

The big question for policymakers and regulators is how to ensure an inclusive and equitable AI growth ecosystem while promoting the development and adoption of ethical AI. While the problem is deep, and current technology challenges mean that the AI future is presently held in the hands of a few, there are a few key enablers that could make for an inclusive and fair future.

AI ethics and standards: Much like the three laws of robotics propagated by Asimov, nations need to adopt common minimum standards and a uniform approach to ethics. While each nation might have a different policy and regulatory framework, common minimum standards will ensure that ethical essentials are not ignored.

Open and verified data banks: Open data banks contributed by government and corporate partners in a privacy-preserving model that incentivises both the data owner and the data fiduciary, creating a win-win for all stakeholders. These data banks should be rated for the inherent biases they carry, so that system builders have acute awareness and understanding of them while creating innovations.

Open talent: Encourage academia to train and reskill talent, backed by government support or subsidy, and provide R&D subsidies for people to open or join startups via a venture studio model. This would ensure that young talent flows towards innovation ecosystems.

Domain: Open innovation driven by domain advisors from across the globe, facilitated by consortiums, government, and academia, with a view to driving AI standards within domains.

Research: Enable open and networked research models funded by large cooperatives, foundations, or large companies as part of corporate social responsibility, and made open for co-innovation by all stakeholders. This would ensure that innovation spreads more evenly, with much better quality of research output.

Capital: AI innovation venture funds and debt funds, partially funded by government and supported by industry, with reduced tax structures to attract capital participants, will create more even capital availability.

The road to the AI age is tricky and treacherous. How we, as a society, wake up on the other side of it depends on both innovators and policymakers. A too-harshly regulated environment could kill innovation, while an unchecked and unmodulated ecosystem could create change that is too radical and too fast for us to adapt to as a society.

Endnotes:

1. PwC, “PwC’s Artificial Intelligence services”, PwC, https://www.pwc.com/us/en/services/consulting/analytics/artificial-intelligence.html

2. Klaus Schwab, “The Fourth Industrial Revolution: what it means, how to respond”, World Economic Forum, January 14, 2016, https://www.weforum.org/agenda/2016/01/the-fourth-industrial-revolution-what-it-means-and-how-to-respond/

3. Aran Ali, “The Soaring Value of Intangible Assets in the S&P 500”, Visual Capitalist, November 12, 2020, https://www.visualcapitalist.com/the-soaring-value-of-intangible-assets-in-the-sp-500/

4. The Wall Street Journal, “How Big Tech Got Even Bigger”, The Wall Street Journal, February 6, 2021, https://www.wsj.com/articles/how-big-tech-got-even-bigger-11612587632

5. The Wall Street Journal, “How Big Tech Got Even Bigger”

6. Jessica Twentyman and Chris Middleton, “Manufacturing: How Robotics as a Service extends to whole factories”, Internet of Business, https://internetofbusiness.com/how-robotics-as-a-service-is-extending-to-whole-factories-analysis/

7. Working Group on the Responsible Development, Use and Governance of AI, “Innovation & Commercialization Working Group Report”, The Global Partnership on Artificial Intelligence Montreal Summit, November 2020, https://gpai.ai/projects/innovation-and-commercialization/gpai-innovation-commercialization-wg-report-november-2020.pdf

8. Will Douglas Heaven, “OpenAI’s new language generator GPT-3 is shockingly good—and completely mindless”, MIT Technology Review, July 20, 2020, https://www.technologyreview.com/2020/07/20/1005454/openai-machine-learning-language-generator-gpt-3-nlp/

9. Tom B. Brown et al., “Language Models are Few-Shot Learners”, arXiv abs/2005.14165 (2020), https://arxiv.org/pdf/2005.14165.pdf

10. Ian Sample, “‘We can’t compete’: why universities are losing their best AI scientists”, The Guardian, November 1, 2017, https://www.theguardian.com/science/2017/nov/01/cant-compete-universities-losing-best-ai-scientists

11. NITI Aayog, Data Empowerment and Protection Architecture: Draft for Discussion, August 2020, https://www.niti.gov.in/sites/default/files/2020-09/DEPA-Book.pdf

Author: Umakant Soni, Co-founder & CEO, ArtPark (AI & Robotics Technology Park), India

Source: https://www.orfonline.org/wp-content/uploads/2021/10/Regulating-Cyberspace.pdf
