The Impact of Artificial Intelligence on Software Security: Black Duck Report Findings

Less than a quarter of organizations express strong confidence in their AI policies

A recent report from Black Duck sheds light on the growing adoption of artificial intelligence (AI) in software development and the security concerns that accompany it. The data shows that more than 90% of survey respondents use AI in some capacity during software development, underscoring the need for proper security measures throughout the development lifecycle.

Industries spanning technology, cybersecurity, fintech, education, and healthcare, among others, report high levels of AI adoption. Even organizations in the nonprofit sector, which typically lag in adopting new technology, are embracing AI at a significant rate. The larger the organization, the more likely it is to have integrated AI into its software development practices.

Even as AI use in software development becomes widespread, a sizable 67% of respondents expressed concern about securing AI-generated code. While 85% of organizations have some security measures in place to address these challenges, only 24% are “very confident” in their policies and processes for testing AI-generated code.

Furthermore, the report found that security testing often slows development: 61% of respondents said it moderately or severely impacts their timelines. Many organizations also juggle anywhere from 6 to 20 security testing tools, which can make it difficult to integrate the tools and interpret their combined results effectively.
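One common way to tame a toolchain of that size, not something prescribed by the report, is to normalize each scanner's output into a shared format and aggregate the findings in one place. The sketch below is a minimal illustration of that idea, assuming each tool can export SARIF (a widely used JSON format for static analysis results); the file names and tool mix are hypothetical.

```python
import json
from collections import Counter
from pathlib import Path

# Hypothetical SARIF exports from several scanners (names are illustrative only).
SARIF_FILES = ["sast.sarif", "sca.sarif", "secrets.sarif"]

def summarize(paths):
    """Tally findings per tool and per SARIF severity level."""
    summary = {}
    for path in paths:
        report = json.loads(Path(path).read_text())
        for run in report.get("runs", []):
            tool = run["tool"]["driver"].get("name", "unknown")
            # SARIF results default to "warning" when no level is given.
            levels = Counter(r.get("level", "warning") for r in run.get("results", []))
            summary.setdefault(tool, Counter()).update(levels)
    return summary

if __name__ == "__main__":
    for tool, levels in summarize(SARIF_FILES).items():
        counts = ", ".join(f"{lvl}: {n}" for lvl, n in sorted(levels.items()))
        print(f"{tool}: {counts}")
```

Consolidating results this way does not shrink the toolchain, but a single triage view is usually easier to act on than a dozen separate dashboards.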

With the rapid adoption of AI in software development, organizations must prioritize implementing robust security measures to protect against potential vulnerabilities. As the software landscape continues to evolve, staying ahead of security threats will be paramount for organizations seeking to safeguard their proprietary code and data.
