
Trust, risk, and security management in AI

Current practices and challenges


About this Report

As the use of artificial intelligence (AI) grows, so do concerns about security, risk management, and trustworthiness in AI-powered applications. This report explores the security practices organisations employ to protect their AI-enabled applications against AI-specific security and privacy threats. It also examines the challenges these companies face in making AI-powered applications explainable, and how they ensure compliance with relevant regulations.

Key Questions Answered

  • What are the most common AI-specific risk mitigation practices that organisations employ?

  • How does the implementation of AI-specific risk mitigation practices change based on organisation size and region?

  • What are the main challenges that organisations face in maintaining transparency in the use of AI while ensuring security and privacy?

  • How do these challenges differ based on organisation size and the types of models they use to add AI functionality to their applications?

  • What are the main measures organisations take to comply with AI regulations?

  • How do these compliance measures differ across regions?


Methodology

The report is based on data collected in the 27th edition of SlashData's Developer Nation survey, which was fielded between June and July 2024. In this survey, more than 1,500 professional developers who build AI-enabled applications answered questions about trust, risk, and security management in AI-enabled applications.

