
intel.com/responsibility | 2021-22 Corporate Responsibility Report

In 2021, our teams continued to implement Intel's high-confidence human rights standard under the Intel Global Human Rights Principles. We leveraged the UN Guiding Principles and due diligence standards under US law and the laws and regulations that apply to our business globally. We also leveraged existing procedures and methods used in risk-based anti-corruption compliance and supply chain assessment, risk mitigation, training, and remedy processes to implement Intel's product responsibility standard under the Intel Global Human Rights Principles. As a result, in 2021, while certain product sales to third-party entities met this standard, we continued to restrict other product sales based on the Intel Global Human Rights Principles.

Human Rights Impact Assessments

In 2016, we engaged a third party to conduct a human rights impact assessment (HRIA) to review our processes and validate our human rights risks. The HRIA confirmed that we were addressing our most salient human rights risks and reaffirmed our need to assess potential risks associated with emerging technologies. In 2018, we conducted an additional internal AI and autonomous driving HRIA, including assessment of potential risks related to product misuse, algorithmic bias, algorithmic transparency, privacy infringement, limits on freedom of expression, and health and safety. In 2019, we continued development of new internal processes to advance responsible AI practices and ensure that AI lives up to its potential as a positive transformative force for the global economy, health, public safety, and industries such as transportation, agriculture, and healthcare. In 2020 and early 2021, we completed an updated third-party HRIA, involving multiple internal teams and interviews with external stakeholders.
This HRIA resulted in the update of our salient human rights risks, including the addition of potential impacts in the areas of product responsibility and responsible AI.

2022 Human Rights Priorities

• Continue to assess and strengthen the Intel Global Human Rights Principles, policies, due diligence processes, product responsibility governance, monitoring, and employee training to continuously improve and leverage best practices.
• Engage in additional stakeholder and industry dialogues regarding potential human rights issues related to emerging technologies, including responsible AI funding and collaboration with academic researchers and DARPA in the areas of privacy and security for machine learning.
• Further expand our impact in responsible minerals and accelerate the creation of new sourcing standards. For more details, see "Responsible Minerals Sourcing" in the Responsible section of this report.
• Continue our work to combat forced and bonded labor in the first and second tiers of our supply chain.

We are committed to maintaining and improving processes to avoid complicity in human rights violations related to our operations, supply chain, and products.

Responsible AI

We can be a role model in our industry by ensuring that our AI development work is consistent with the Intel Global Human Rights Principles and that these principles guide our efforts to build a thriving AI business. We have identified and integrated six areas of ethical inquiry into our product development and project approval processes related to AI capabilities: human rights; human oversight; explainable use of AI; security, safety, and reliability; personal privacy; and equity and inclusion. In 2021, we evolved our governance policies and processes to guide Intel's AI product development and business practices in the face of the ethical issues that may arise with certain AI applications.
This includes further leveraging our Ethical Principles for AI Development, improving the usability of our Ethical AI impact assessment process, and expanding the resources, tools, and oversight that help project teams engage in meaningful inquiry into risks and mitigation strategies throughout the product lifecycle. As a result, we have seen increased participation in this process across all business units.

We collaborated with Article One, a strategy and management consultancy with expertise in human rights, responsible innovation, and social impact, to deploy a workshop that educates developers and integrates ethical considerations even more deeply across Intel. We also collaborated with Partnership on AI on multiple initiatives, including AI and shared prosperity, responsible sourcing of data enrichment services, and improving inclusion in AI development. In addition, we have collaborated in the areas of privacy and security for machine learning with DARPA, the University of Pennsylvania, the Private AI Center, the National Science Foundation, and the Stanford Center for AI Safety.

We continued to leverage and grow our multi-disciplinary, cross-Intel Responsible AI Advisory Council. The council addresses potential issues such as protecting privacy while collecting and using data to train AI systems, reducing the risk of harmful bias in AI systems, and building trust in machine learning applications by helping people to better understand them.
