Artificial Intelligence Can Be a Double-Edged Sword

Posted by Christine Fritsch on April 10, 2018


The GAO released a report describing a forum held in July 2017 on artificial intelligence (AI) and the technology’s near- and long-term impact. Attendees directed attention to areas of change within research and policy needed to accommodate and influence the growth of AI. The forum consisted of 23 participants from industry, academia, nonprofits and government, and the conversation situated AI within four sectors: cyber, vehicle automation, criminal justice and financial services.

Within the cyber arena, AI-driven automation and algorithms can significantly reduce the time and increase the efficiency of cyber tasks by identifying and patching vulnerabilities and by detecting and defending against attacks. Moreover, a mixture of machine learning and human expertise can enable predictive analysis of cyberattacks.

However, here is where the double-edged sword of AI comes in.

As much as AI can help identify and prevent cyberattacks, the AI technology itself could become the threat. Should a vulnerability exist within an AI system, an attacker could exploit it to manipulate the system’s actions and produce malicious, high-impact results. AI in cyber will also require ongoing human intervention for operation and maintenance, and the legality of AI’s use of personal data must be addressed.

Likewise, AI in automated vehicles can increase driving safety, lower costs and improve mobility and access for disadvantaged populations. The challenges, however, can be alarming. Vehicle safety testing and assurance must be rigorous, and AI decisions must be thoroughly explainable. Furthermore, current laws, regulations and law enforcement processes must be updated to accommodate AI technology.

In criminal justice, AI can help improve the allocation and availability of resources in particular geographic areas, advance the identification of criminal suspects and produce predictive risk assessments of individuals likely to commit further crimes. Challenges will include protecting the privacy and civil rights of the population and guarding against bias.

Lastly, AI can be used in the financial sector to thwart waste and fraud, improve customer service operations and grow client wealth through better advisory tools based on machine learning and advanced algorithms. On the other hand, these benefits depend on complete, properly formatted data. Moreover, AI-based credit decisions must adhere to current financial laws.

In sum, the GAO identified the challenges in AI as:

  • Collecting and sharing reliable and high-quality data needed to train AI
  • Accessing adequate computing resources and requisite human capital
  • Ensuring laws and regulations governing systems enabled by AI are adequate and that the application of AI does not infringe on civil liberties
  • Developing an ethical framework to govern the use of AI and ensuring the actions and decisions of AI systems can be adequately explained and accepted

While participants offered considerations in research areas such as developing high-quality labeled data to produce accurate outcomes and understanding the implications of AI for future jobs, it is the following policy recommendations that, if implemented, would have the most impact on the private and public sectors:

  • Incentivizing data sharing
  • Improving safety and security
  • Updating the regulatory approach
  • Assessing acceptable risks and ethical decision making

Specifically, policymakers were called on to facilitate data sharing by creating safe spaces to protect sensitive information. Standardizing data collection definitions and methods, and updating potentially outdated federal policies that prevent agencies from sharing data, were also mentioned. Moreover, participants called on policymakers to establish some form of structure ensuring that security costs and liabilities are shared between manufacturers and users; one suggestion for improving AI security was a framework of security ratings with attached incentives. Participants also called on policymakers to require AI developers to test for different outcomes before deploying the technology, without holding them liable for any impacts that are identified. Finally, participants called on the public sector to close the gap that exists between private- and public-sector AI research.

AI has the potential to solve some of the world’s most pressing problems, improve human life and increase economic competitiveness and prosperity. As such, investment in AI is increasing across many sectors, with high demand for people with AI expertise. Forum participants even stated that if the advancement of AI were to stop today, the current state of AI would continue to have far-reaching implications. Nevertheless, with all of this good news comes caution. According to the report, as AI adoption increases and the technology grows in complexity, so will the demands of the data needed to train AI and of the technology’s security and governance.