Cybersecure Software Development: Management Views on AI


Static Version

Introduction

In today’s cybersecurity landscape, the rise of Large Language Models (LLMs) looms large in changing the landscape of coding. In the near future Copilot will write 80% of code, according to Github’s CEO. These advances are benefiting both cybersecurity attackers and defenders, increasing the sophistication of malicious attackers while also facilitating more widespread abilities to defend against these attacks. AI also offers up the potential for helping non-technical consumers, by proposing in an intuitive language meaningful security measures they can take.

As bigger questions are asked on whether AI will save the world or if it represents an existental threat to humanity, day to day usage in all facets of society are increasing. It certainly seems reasonable that as software developers embrace the use of LLMs in day to day practice, they will benefit from guidelines on its proper use and limitations.

This research explores the view of managers and executives overseeing cybersecurity and software development in large companies that build software. It explores their views on the use of AI in software development and as integrated components of their digital products. Topics include the priority for integrating AI models, AI tools/frameworks used for software development, risks associated with proprietary, commercial and/or open source approaches, its use across a spectrum of needs, preventive testing, AI assistance in secure coding best practices, attitudes and best guesses as to its eventual impact, new requirements governing its use, and views on both internal policies and external upcoming regulations.

Current Priority of AI

A bar chart showing the priority for integrating AI Models. It is categoprized by past priority, undermined priority, low priority, medium priority and high priority, with each category showing bars for overall, from $250M to less than $1B, $1B to less than $5B, and $5B to $10B or more. On the right side, there is a list of Top 5 AI Tools/Frameworks used including Chat GPT, BARD AI, Github Copilot, IBM Watson and Microsoft Security Copilot.

The survey was conducted among enterprises who either have embarked on AI usage or are planning on doing so. Medium sized companies are less likely to have already embarked on AI integration versus large enterprises. Two thirds (66%) of companies with over $5B in annual revenue either have already integrated AI into their products and services or have set it as a high priority to do so.

Approaches to AI

Large companies are most likely to use a combination of LLM approaches, with very few turning to open source solutions on its own. Bringing on data scientists to build proprietary models appears to be in the highest demand in medium sized ($250M to > $1B) companies.

Two horizontal bar charts. The chart on the left shows plans for AI Model Implementation, with Top categories being All LLM approaches (37%), Proprietary and Commercial (25%) and Commercial (11%). The chart on the right is categorized with topics of most value, with the top 3 being Guidance in a specific AI category (62%), Vendor specific Guidance (eg. Azure AI, Google PaLM) (62%), And Regulatory specific Guidance (55%)

 

AI Model Integrity

For companies building proprietary AI tools, concerns over model theft are greatest. With commercial AI tools, model explainability is the largest concern; and with open source, privacy concerns are greatest.

Bar chart showing concerns with cybersecurity risks for proprietary AI tools, commercial AI tools, and open-source LLMs: Model theft (35%-49%), Privacy concerns (34%-49%), Model security (36%-48%), Model explainability (39%-47%), Model bias (43%-47%), Incorrect or false results (44%-38%), Data poisoning (45%-36%).

The #1 preventive testing undertaken by companies across all AI toolsets is model security.

Bar chart showing additional preventive testing measures: Model security (89%), Model explainability (84%), Assuring against LLM ‘hallucinating’ (82%), Data poisoning (85%), Model theft (78%), Model bias (72%).

 

AI Use

2 horizontal bar charts. The left one shows views on use of AI tools by development and Devops teams (strongly agree). The top views are "I am excited about the use of voice prompts in generating code" (46%), "Ability to build enhanced by no-code and/or low-code AI tools" (46%), and "There are new adversial vulnerabilities from expanding data lakes due to AI and ML initiatives" (43%). The right one shows AI tools assisting secure coding, with top 3 being "Help with risk reduction in OSS dependency management (48%), Secure Code Review (47%), and Tools that automatically remediate security vulnerabilities in custom code (46%).

 

AI Beliefs and Opportunities

Bar chart showing AI beliefs and potentially useful AI-based implementations: AI beliefs - Generative AI output should be patent protected (52%), Governments should hurry to implement laws concerning AI (46%), AI can enhance our ability to build secure software (45%), AI poses an urgent threat to humanity (33%); AI implementations - API services (73%), Chatbot (72%), IDE extensions or plugins (65%), Web-based Q&A interface (59%), Command line tools or scripts (48%).

 

AI Policies and Requirements

A bar chart titled "New Policies Concerning AI" shows the implementation rates of various AI policies overall. Implemented an AI risk management framework to identify and mitigate the risks associated with the use of AI: 62%. Implemented an AI governance framework to define the roles and responsibilities for the use of AI within the company: 53%. Implemented an AI training program for employees to educate them on its ethical and responsible use: 51%. Created an AI ethics committee to develop and oversee the company's AI ethics policies: 50%. Implemented an AI Governance Council to oversee all of the above: 50%.

Adopting an AI risk management framework represents the most common internal policy approach to implementing new AI tools, a finding that holds across the UK, US and Canada. At the same time, new AI vendor requirements are being struck alongside algorithm transparency requirements.

The image contains two semicircular gauge charts. The first chart is titled "New AI Vendor Requirements" and shows that 96.0% of respondents answered "Yes" to having new AI vendor requirements, while 4.0% answered "No". The second chart is titled "AI Algorithm Transparency Requirements" and shows that 96.5% of respondents answered "Yes" to having AI algorithm transparency requirements, while 3.5% answered "No". Both charts are labeled "Overall".

 

AI Expected Impact

The largest expected impact of AI is on the number of developers required in production. Interestingly, while 69% of US respondents expect the quality of cybersecurity to improve due to AI tools, in Canada this drops to just 30%.

A bar chart showing the overall expected impact (positive, neutral, negative) of AI tools on various aspects such as the number of developers, speed of programming, quality of cybersecurity, and collaboration among team members.

 

Topics of Most Value

Overall, guidance in a specific AI category is considered most helpful to software development teams, but this varies by company size. Conversely, general MLOps guidance are least favoured across all sizes of companies. Medium sized companies ($250M to < $1B) prefer general AI awareness versus specific topics.

A horizontal bar chart showing topics helpful to development teams, which are Guidance in a specific AI category (62%), Vendor specific guidance (57%), regulatory specific guidance (55%), general AI awareness (53%), Industry specific guidance (51%), and general MLOps guidance (44%).

 

Conclusion

These findings highlight the surge in adoption of AI led by large enterprises, who emphasize a mix of LLM strategies including building proprietary models, using commercially available ones, and/or relying on open-source solutions. Medium-sized companies exhibit a high demand for data scientists to build proprietary models, indicating an appetite for custom AI solutions tailored to their unique business needs.

Various concerns emerge across AI tool usage, with security being paramount, especially for proprietary models. This finding coincides with model security testing being the most prevalent preventive measure undertaken. While commercial AI users focus on explainability, open-source users prioritize privacy. 

Although there is broad enthusiasm for AI’s role in code development, opinions diverge regarding AI’s impact on cybersecurity. The majority support the idea that generative AI outputs should be patentable, aligning with existing AI/ML protections. However, skepticism is evident among management about AI’s usefulness for command line tools or scripts.

As generative AI and LLMs continue to evolve, their potential to enhance cybersecurity is immense. Yet, the security landscape is increasingly intricate. Comprehensive cybersecurity software aligned with clear policies are essential for businesses to navigate this complexity and harness the full potential of AI.