AI Ethics and the Role of the IT Auditor

洁西索迪亚
Date Published: 6 December

Humans have always lived by certain ethical norms that are driven by their communities. These norms are enforced by rules and regulations, societal influence and public interactions. But is the same true of artificial intelligence (AI)?

The objective of AI is to help increase the efficiency and effectiveness of daily tasks. However, AI does not have the ability to infer or understand information as humans do. It uses significant amounts of computing power to derive insights from quantities of data that would overwhelm human cognition.

With this in mind, consider what would happen if large quantities of biased data were fed to an AI system. AI systems can easily become instruments that amplify ethical problems, such as social inequality, at scale. It is therefore critical to carefully consider ethics when designing, operating and auditing AI systems.

Examining AI Ethics

According to IBM's AI Ethics in Action report, AI ethics is "[G]enerally recognized as a multi-disciplinary field of study that aims to optimize the beneficial impact of AI by prioritizing human agency and well-being while reducing the risks of adverse outcomes to all stakeholders."1

AI can be a powerful tool for reducing the impact of bias, but it also has the potential to worsen the issue by deploying biases at scale and in sensitive areas.

The US National Institute of Standards and Technology (NIST) Special Publication (SP) 1270 identifies three categories of bias to which AI applications are prone:2

  1. Systemic bias: Results from the policies and procedures of organizations that operate in a manner that favors certain social groups over others. When AI models are fed biased data based on characteristics such as gender or ethnicity, they cannot effectively serve their intended purpose.
  2. Statistical and computational bias: Occurs when a sample is not representative of the population. These biases are generally found in data sets or in the algorithms used to develop AI applications.
  3. Human bias: Stems from systematic errors in human thought. This bias often depends on human nature and tends to differ based on the individual's or group's perception of the information received. Examples of human bias include behavioral bias and interpretation bias.3
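The statistical bias described above can often be caught before training with a simple representativeness check. The following sketch is a minimal illustration, not a prescribed NIST method; the data, group labels and 10 percent tolerance are hypothetical assumptions chosen for the example:

```python
from collections import Counter

def representation_gaps(sample_groups, population_shares):
    """Compare each group's share of the training sample to its
    known share of the population; returns sample share minus
    population share per group (negative = underrepresented)."""
    total = len(sample_groups)
    counts = Counter(sample_groups)
    return {
        group: counts.get(group, 0) / total - pop_share
        for group, pop_share in population_shares.items()
    }

def flag_underrepresented(gaps, tolerance=0.10):
    """Flag groups whose sample share falls short of their
    population share by more than the tolerance (assumed 10%)."""
    return sorted(g for g, gap in gaps.items() if gap < -tolerance)

# Hypothetical sample: group B supplies only 20% of records
# despite an assumed 50/50 population split.
sample = ["A"] * 80 + ["B"] * 20
gaps = representation_gaps(sample, {"A": 0.5, "B": 0.5})
print(flag_underrepresented(gaps))  # prints ['B']
```

A check like this belongs in the design stage, before a biased data set ever reaches model training.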

There are multiple methods by which these biases can be managed or mitigated. NIST SP 1270 recommends using the four stages of its proposed AI life cycle model to enable diverse stakeholders (e.g., developers, designers) to effectively identify and address AI bias.

The four stages are:

  1. Design: The stage in which planning, problem identification, and research and identification of data sets take place. This stage can help identify biases such as limited viewpoints, organizational and individual heuristics, or bias present in the data lakes selected for AI training. Controls such as well-defined policies and governance mechanisms should be taken into consideration during this stage.
  2. Design and development: This stage involves requirements and data set analysis, on the basis of which the AI model is designed or identified. It is where most development activity occurs and is, therefore, prone to multiple AI-related biases. The following controls are recommended:
    • Perform a compatibility analysis to identify potential sources of bias and plans to address them.
    • Implement a periodic bias assessment process during development, such as assessing AI algorithms or system inputs and outputs.
    • In the final stages of development, perform an exhaustive bias assessment to ensure that the application stays within the defined limits.
  3. Deployment: The stage in which the AI application is implemented in the production environment and starts processing live data. Adequate controls should be in place to ensure that the system functions as intended (e.g., continuous monitoring of system behavior, sound policies and procedures).
  4. Testing and evaluation: This stage requires continuous testing and evaluation of all AI systems and components throughout all stages of the life cycle.
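The periodic input/output assessment recommended for the development and deployment stages can be as simple as comparing favorable-outcome rates across groups in a decision log. The sketch below is a hypothetical example: the log data is invented, and the 0.8 warning threshold follows the commonly cited "four-fifths rule" from US employment-selection guidance, which is an assumption here rather than part of NIST SP 1270:

```python
def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs taken from an
    AI system's output log. Returns the approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest group approval rate to the highest.
    Values below ~0.8 (four-fifths rule) are a common warning sign."""
    return min(rates.values()) / max(rates.values())

# Hypothetical decision log: group A approved 8/10, group B 4/10.
log = ([("A", True)] * 8 + [("A", False)] * 2
       + [("B", True)] * 4 + [("B", False)] * 6)
rates = selection_rates(log)
ratio = disparate_impact_ratio(rates)
print(round(ratio, 2), ratio < 0.8)  # prints: 0.5 True
```

Run periodically against live output, a check like this turns the "continuous monitoring of system behavior" control into a concrete, auditable test.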

IT Auditors Can Reduce Ethical AI Risk

IT auditors in today’s world are ill-equipped to handle the complexities of AI. There are no universally accepted standards for how to audit AI systems, which particularly hinders the production and use of ethical AI. Therefore, AI auditors should perform ethics-based AI audits, which require analyzing the foundations of AI, the code of AI systems and the impacts AI creates.4

So, what can IT auditors do to incorporate ethics-based AI auditing into their methods? There are several recommendations, including:

  • Analyze existing frameworks, regulations, processes, controls, and documentation that address areas such as risk, compliance, privacy, information security, and governance.
  • Explain AI to stakeholders and communicate with them proactively to propose enhancements that address new and emerging ethical AI risk factors. Examples of enhancements include new committees, charters, processes or tools.
  • Develop a comprehensive AI risk assessment program that includes relevant procedures, processes, documentation, roles, responsibilities and protocols. This can be done in collaboration with skilled and well-trained practitioners.
  • Seek out information about AI design and architecture through self-learning and training and by involving subject matter experts (SMEs) to determine the proper scope of impact.
  • Track global AI and data ethics developments, such as legal, regulatory and policy changes, to ensure that they are integrated with the organization’s regular change management routines.

In addition, IT auditors should develop an understanding of ethics-based concepts and principles rather than merely considering AI from a technological standpoint.


For example, auditors should consider questions such as:

  • Have the following principles been adequately assessed throughout the development and use of AI?
    • Fairness
    • Transparency
    • Accountability
    • Explainability
    • Interoperability
    • Inclusiveness
  • Has adequate consideration been given to the privacy and human rights of the data subjects?
  • Have the unintended uses and potential misuses of AI applications been adequately assessed?

One thing is certain: Enterprises cannot adopt AI without first addressing its ethical issues. IT auditors can play a significant role in the process by enhancing their skills and expanding their focus areas to consider not only technological risk, but also the ethical risk posed by AI systems. If left unaddressed, ethical risk can have harmful societal outcomes, such as causing disparate impact and discriminatory or unjust conditions.

Endnotes

1 Rossi, F.; B. Rudden; B. Goehring; AI Ethics in Action, IBM Institute for Business Value, 31 March 2022
2 Ibid.
3 Huppert, J. D.; “Cognitive Theory,” Comprehensive Clinical Psychology, vol. 6, 2022
4 Elliott, I.; “Auditing AI Is Tricky, Especially Assessing AI Ethics Compliance, and Audits of Self-Driving Cars Are Vexing,” 12 May 2022

Editor’s Note

Hear more about what the author has to say on this topic by listening to an episode of the ISACA® Podcast.

洁西索迪亚, CISA, AWS Certified Cloud Practitioner, CISSP, ITIL v3

Is an IT, cybersecurity and privacy auditor for a global engineering, procurement, construction and installation (EPCI) organization. He is responsible for leading and executing global audit and advisory engagements across several areas, including enterprise resource planning (ERP) systems, cybersecurity, global data centers, cloud platforms, industrial control systems, IT networks, data privacy and third-party risk. He previously worked as an advisory consultant for a leading Big 4 consulting enterprise and a multinational healthcare organization. He is an ISACA® Journal article reviewer and an active contributor to the ISACA Journal and the ISACA Now blog.