A look ahead at the fast-paced evolution of technology, regulation and business

The risk and regulatory environment

Companies will have thousands of ways to apply generative AI and foundation models to maximize efficiency and drive competitive advantage. Understandably, they'll want to get started as soon as possible. But an enterprise-wide strategy needs to account for all the variants of AI and associated technologies they intend to use, not only generative AI and large language models.

ChatGPT raises important questions about the responsible use of AI. The speed of technology evolution and adoption requires companies to pay close attention to any legal, ethical and reputational risks they may be incurring. It's critical that generative AI technologies, including ChatGPT, are responsible and compliant by design, and that models and applications do not create unacceptable risk for the business.

Accenture was a pioneer in the responsible use of technology, having included the responsible use of AI in its Code of Business Ethics since 2017. Responsible AI is the practice of designing, building and deploying AI in accordance with clear principles to empower businesses, respect people and benefit society, allowing companies to engender trust in AI and to scale AI with confidence.

AI systems need to be "raised" with a diverse and inclusive set of inputs so that they reflect the broader business and societal norms of responsibility, fairness and transparency. When AI is designed and put into practice within an ethical framework, it accelerates the potential for responsible collaborative intelligence, where human ingenuity converges with intelligent technology. This creates a foundation of trust with consumers, the workforce and society, and can boost business performance and unlock new sources of growth.

Figure 2: Key risk and regulatory questions for generative AI

Intellectual property: How will the business protect its own IP?
And how will it prevent the inadvertent breach of third-party copyright in using pre-trained foundation models?

Data privacy and security: How will upcoming laws like the EU AI Act be incorporated in the way data is handled, processed, protected, secured and used?

Discrimination: Is the company using or creating tools that need to factor in anti-discrimination or anti-bias considerations?

Product liability: What health and safety mechanisms need to be put in place before a generative AI-based product is taken to market?

Trust: What level of transparency should be provided to consumers and employees? How can the business ensure the accuracy of generative AI outputs and maintain user confidence?

Identity: When establishing proof-of-personhood depends on voice or facial recognition, how will verification methods be enhanced and improved? What will be the consequences of its misuse?