Malicious Uses of AI – We are co-authors of a paper that forecasts how AI technology could be misused by malicious actors, along with potential ways to prevent and mitigate these threats. The document is the result of nearly a year of sustained work with our colleagues at the Future of Humanity Institute, the Centre for the Study of Existential Risk, the Center for a New American Security, the Electronic Frontier Foundation, and others.

The malicious use of AI threatens global security by reducing the cost of many existing attacks, creating new threats and vulnerabilities, and further complicating the attribution of specific attacks. Given these changes in the threat landscape, the report offers some high-level recommendations that businesses, research organizations, individual practitioners, and governments can take to ensure a safer world:

Recognize AI’s dual-use nature: Firstly, AI is a technology capable of both extremely positive and extremely negative applications. We should take steps as a community to better evaluate research projects for potential misuse by malicious actors, and engage with policymakers to understand specific areas of vulnerability. As we write in the paper: “Surveillance tools can be used to catch terrorists or to oppress ordinary citizens. Information content filters can be used to bury fake news or to manipulate public opinion. Governments and powerful private actors will have access to many of these AI tools and could use them for public good or harm.”

Some potential solutions to these problems include pre-release risk assessments for specific pieces of research, selectively sharing some types of research with a critical security component among a small set of trusted organizations, and exploring how to embed norms in the scientific community that are sensitive to dual-use concerns.

Learn from cybersecurity: Secondly, the computer security community has developed a number of practices relevant to AI researchers that we should consider applying in our own research. These range from “red teaming,” deliberately trying to break or subvert systems, to investing in technology forecasting to spot threats before they arrive, to conventions for the confidential reporting of vulnerabilities discovered in AI systems.
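To make the “red teaming” idea concrete, here is a minimal sketch of one common probe: perturbing a model’s inputs with an FGSM-style attack and counting how often its decisions flip. The toy logistic-regression model, its weights, and the perturbation budget are all hypothetical assumptions for illustration; the report itself prescribes no particular technique.

```python
# Minimal red-teaming sketch: probe a toy classifier with small
# adversarial perturbations (FGSM-style). Everything here is hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Toy "deployed" model under test: a fixed logistic-regression classifier.
w = rng.normal(size=8)   # hypothetical trained weights
b = 0.1                  # hypothetical trained bias

def predict(x):
    """Return P(class = 1 | x) under the toy model."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm_perturb(x, epsilon=0.25):
    """Step along the sign of the input-gradient of the loss.

    For logistic regression the gradient of the cross-entropy loss with
    respect to the input is (p - y) * w, where y is the label treated as
    "true" (here: the model's own current decision).
    """
    p = predict(x)
    y = 1.0 if p >= 0.5 else 0.0
    grad = (p - y) * w
    return x + epsilon * np.sign(grad)

# Red-team loop: how often does a small perturbation flip the decision?
trials, flips = 1000, 0
for _ in range(trials):
    x = rng.normal(size=8)
    flips += (predict(x) >= 0.5) != (predict(fgsm_perturb(x)) >= 0.5)
print(f"{flips}/{trials} decisions flipped by an L-infinity perturbation of 0.25")
```

In practice a red team would run many such probes (adversarial inputs, data poisoning, attempts to subvert deployment infrastructure) against the real system and feed the failures back into training and monitoring.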

Expand the discussion: Lastly, AI will alter the global threat landscape, so we need to involve a broader cross-section of society in the debate. Parties could include those working in civil society, national security experts, businesses, ethicists, the general public, and other researchers.

In keeping with our work on concrete AI safety problems, we have grounded some of the problems arising from AI’s malicious use in specific scenarios, such as persuasive ads generated by AI systems being used to target the administrator of a security system.

Other scenarios include cybercriminals using neural networks and “fuzzing” techniques to create computer viruses with automatic exploit-generation capabilities; malicious actors hacking a cleaning robot to deliver an explosive payload to a VIP; and rogue states using pervasive AI-powered surveillance systems to pre-emptively detain people who fit a predictive risk profile.
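“Fuzzing” here refers to a standard security-testing technique: bombarding a program with randomly mutated inputs to uncover crashes and exploitable bugs. The sketch below shows the benign, defensive form of the idea on a hypothetical toy parser with a planted out-of-bounds bug; the parser, the seed input, and the trial count are all illustrative assumptions.

```python
# Minimal random-fuzzing sketch: mutate a seed input and record crashes.
import random

def parse_record(data: bytes) -> int:
    """Toy length-prefixed parser: byte 0 says how many payload bytes follow."""
    length = data[0]
    checksum = 0
    for i in range(length):
        checksum ^= data[1 + i]  # planted bug: no bounds check on 'length'
    return checksum

def mutate(seed: bytes, rng: random.Random) -> bytes:
    """Overwrite a few random bytes of the seed with random values."""
    data = bytearray(seed)
    for _ in range(rng.randint(1, 3)):
        data[rng.randrange(len(data))] = rng.randrange(256)
    return bytes(data)

rng = random.Random(0)
seed = bytes([4, 10, 20, 30, 40])  # well-formed: length 4, four payload bytes
crashes = 0
for _ in range(10_000):
    sample = mutate(seed, rng)
    try:
        parse_record(sample)
    except IndexError:  # a real fuzzer would triage and deduplicate crashes
        crashes += 1
print(f"{crashes} of 10000 mutated inputs crashed the parser")
```

Production fuzzers such as AFL or libFuzzer add coverage feedback and crash triage on top of this loop; the report’s concern is that learned models could automate the step from finding a crash to generating a working exploit.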

We are excited to begin this discussion with our colleagues, policymakers, and the public. We have spent the last two years researching and solidifying our internal policies at OpenAI, and we will begin to engage a broader audience on these issues. We are particularly interested in working with more researchers who see themselves advancing research as well as contributing to the policy debate around AI.