New Norms for AI: Zero Trust—Verify Then Trust

This conceptual article highlights an important puzzle concerning the actions of prominent AI industry leaders: some call for a six-month halt to the further development and training of AI, while others have already deployed and continue to deploy AI; a so-called arms race. As security risks increase at scale, managing the security of AI is an essential consideration. Although implementing standards for AI is currently a preferred solution for many, this article argues that, on their own, standards may not provide sufficient capability and flexibility. An argument is presented that cyber norms of zero trust for AI (ZTAI), alongside standards, are essential for the security of AI, together with a call to action.


Introduction
Although many AI technology leaders have suggested that the development and training of AI should be stopped [1], some organisations, governments and individuals world-wide continue to expand their use of AI. Given important and increasing security concerns, managing the security of AI is essential for continued high-risk operations.
While implementing standards for AI is currently seen as an important and preferred solution, this conceptual article argues that, on their own, standards may not provide sufficient capability. To achieve the fast and agile approach required to deal with AI, this article proposes leveraging cyber norms alongside standards for cyber security: zero trust for AI (ZTAI) [2].

Controlling AI: standards
The development of AI so far mirrors the ethos of the early internet, championed by Tim Berners-Lee (the inventor of the World Wide Web), as a space of openness and freedom.
AI, Computer Science and Robotics Technology

However, this approach to AI development has resulted in frameworks of governance that appear polarised: some regions have deployed standards, while others have none [3].
The focus of AI standards concerns issues including data quality and robustness, ethical considerations (privacy, transparency and accountability), and data trust frameworks for safe, secure and equitable data transfers. Many other frameworks, such as responsible business conduct and due diligence, are also in place, but these may not be actioned. While some regions may not implement standards, initiatives and guidelines have been developed. These include ethical principles for military applications to ensure that AI systems are accountable, transparent and consistent with human values; ethical principles for AI development focused on transparency, accountability and human oversight; and support to promote AI research, as well as knowledge sharing through cooperation and partnerships [3].
Although a number of important questions remain concerning ethics in AI, as a conceptual paper the scope here is limited to examining issues of trust in AI.
Although AI standards can ensure trustworthiness, transparency, and common definitions and frameworks, they have limitations. Standards may reduce flexibility and innovation, and the time involved in developing them may render them redundant on arrival. Deploying zero trust as a norm may address some of these limitations by providing a fast, simple approach that can be taken up rapidly across multiple domains.

Trust in AI
Although some AI users appear to trust AI technology without question and with little regard to issues of safety, key challenges and questions arise concerning trust.
As the literature on trust is contested, this article views trust as relational, complex, and comprising separate processes often broken down into steps [4].
Trust-building is widely viewed as involving a trustor who holds a positive view of an outcome, followed by an assessment of trust and then a willingness to accept vulnerability and risk during trusting [4]. Where AI is trusted, the process appears to concentrate on the positive-viewpoint element, fast-forwarding straight to trust [5]. As a result, potential security concerns are overlooked, and in consequence calls for a pause to the development of AI technology have increased [1].
One solution is offered through a zero trust approach, based on "verify first and trust later". Zero trust rests on the premise of no presumptive trust and a risk-based approach built on continuous verification [5]. A zero trust mindset focuses on verifying identity prior to trust [6]. Identification is required for all entities, whether an individual, a device or a piece of software [7].
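The "verify first and trust later" premise can be illustrated with a minimal sketch. The entity names, credential registry and risk threshold below are illustrative assumptions, not part of any standard; a real deployment would verify cryptographic credentials against an identity provider rather than a lookup table:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Entity:
    """Any entity requiring identification: an individual, a device or software."""
    entity_id: str
    credential: str  # simplified stand-in for a signed token

# Illustrative registry; hypothetical identifiers for the sketch only.
KNOWN_CREDENTIALS = {"device-42": "tok-abc", "svc-ai-model": "tok-xyz"}

def verify_identity(entity: Entity) -> bool:
    """Verify first: identity is checked before any trust is extended."""
    return KNOWN_CREDENTIALS.get(entity.entity_id) == entity.credential

def authorize(entity: Entity, action: str, risk_score: float) -> bool:
    """Trust later, per request: no presumptive trust, and a risk-based
    threshold is applied on every call; nothing is cached between calls."""
    if not verify_identity(entity):
        return False
    return risk_score < 0.5  # illustrative risk threshold

device = Entity("device-42", "tok-abc")
assert authorize(device, "read-model-output", risk_score=0.2)      # verified, low risk
assert not authorize(device, "read-model-output", risk_score=0.9)  # verified, but too risky
assert not authorize(Entity("device-42", "stolen"), "read", 0.1)   # fails verification
```

The point of the sketch is that trust is never a stored state: every request repeats the verification and risk assessment.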
Verification of identity overcomes some issues in AI, as models are often opaque and the source information or algorithm may change. In sum, zero trust encourages users not to trust by default. This article therefore suggests adding zero trust as a norm, alongside standards, to manage AI.

ZTAI for cyber security?
Although standards for AI enable trustworthiness and transparency along with simplified operations, this approach takes time. In the case of AI for cyber security, where immediate verification and monitoring are required, zero trust offers a rapid solution. A ZTAI approach could be implemented across the operations of a wide range of sectors and domains to manage permissions and access control and so reduce threats. Indeed, zero trust is already deployed in cyber security operations [8] and offers a knowledge base that can be leveraged to provide a simple framework deployable without further delay.
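Managing permissions and access control under ZTAI can be sketched as a default-deny policy checked on every operation. The entity names, operations and policy table below are hypothetical examples for illustration, not a prescribed API:

```python
# Default-deny policy table: (entity_id, operation) -> allowed.
# Anything not explicitly granted is refused -- no presumptive trust.
POLICY = {
    ("analyst-1", "query_model"): True,
    ("analyst-1", "retrain_model"): False,  # least privilege
    ("pipeline-7", "retrain_model"): True,
}

def permit(entity_id: str, operation: str) -> bool:
    """Check the policy on every call; unknown entities or operations
    fall through to a deny, rather than to an implicit grant."""
    return POLICY.get((entity_id, operation), False)

assert permit("analyst-1", "query_model")
assert not permit("analyst-1", "retrain_model")  # explicitly denied
assert not permit("unknown", "query_model")      # never seen, never trusted
```

The design choice worth noting is the fall-through: absence from the policy means denial, which is what distinguishes a zero trust posture from conventional allow-by-default access control.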

Call to action
To take zero trust forward, what is needed now is support and cooperation among members of the AI ecosystem and beyond. Cooperation is essential to help develop the AI body of knowledge. AI models are well suited to this task, and researchers may be able to harness the capability of AI to work on it. Checking the output at scale, however, is a daunting task, well beyond human capabilities. What is required are ideas beyond simple retrospective checks, along with adequate time and space to concentrate on the larger issues and problems, not least imagining the possible avenues of further development of the models and their potential capabilities alongside the risks.

Concluding remarks
Alongside the double-exponential rise in the capabilities of AI models [9], and an audience split between unbridled trust and acceptance on one side and calls for caution on the other, questions of trust are crucial [10]. This article recommends framing the questions not simply as trust and/or distrust in AI, but rather leveraging zero trust thinking to help address the ZTAI puzzle. Indeed, future scenarios could explore both the possible and the impossible. As a final word, since all technology relies on availability, in high-risk applications such as cyber security there can be no errors.
No one wants to receive the message: "The server is currently overloaded with other requests; sorry about that! Please try again later or contact us through our help center if the error persists" [11]. Yet another example of the need for zero trust?

Conflict of interest
The author declares no conflict of interest.