Introduction
In today’s cybersecurity landscape, the rise of Large Language Models (LLMs) is reshaping how code is written. According to GitHub’s CEO, Copilot will write 80% of code in the near future. These advances benefit both cybersecurity attackers and defenders, increasing the sophistication of malicious actors while also making the ability to defend against attacks more widely accessible. AI also offers the potential to help non-technical consumers by proposing, in intuitive language, meaningful security measures they can take.
While larger questions are debated over whether AI will save the world or represents an existential threat to humanity, its day-to-day use across all facets of society is increasing. It seems reasonable that as software developers embrace LLMs in everyday practice, they will benefit from guidelines on proper use and limitations.
This research explores the views of managers and executives overseeing cybersecurity and software development at large companies that build software. It examines their views on the use of AI in software development and as an integrated component of their digital products. Topics include the priority placed on integrating AI models; the AI tools and frameworks used for software development; the risks associated with proprietary, commercial, and open-source approaches; AI’s use across a spectrum of needs; preventive testing; AI assistance with secure-coding best practices; attitudes and best guesses as to its eventual impact; new requirements governing its use; and views on both internal policies and upcoming external regulations.
Current Priority of AI
The survey was conducted among enterprises that have either embarked on AI usage or are planning to do so. Medium-sized companies are less likely than large enterprises to have already begun integrating AI. Two thirds (66%) of companies with over $5B in annual revenue have either already integrated AI into their products and services or have made doing so a high priority.
Approaches to AI
Large companies are most likely to use a combination of LLM approaches, with very few turning to open-source solutions alone. Hiring data scientists to build proprietary models appears to be in highest demand among medium-sized companies ($250M to <$1B in annual revenue).
AI Model Integrity
Concerns vary by approach: for companies building proprietary AI tools, model theft is the greatest concern; with commercial AI tools, model explainability ranks highest; and with open-source tools, privacy concerns dominate.
The #1 form of preventive testing undertaken by companies across all AI toolsets is model security testing.
AI Use
AI Beliefs and Opportunities
AI Policies and Requirements
Adopting an AI risk management framework is the most common internal policy approach to implementing new AI tools, a finding that holds across the UK, US, and Canada. At the same time, new AI vendor requirements are being established alongside algorithm transparency requirements.
AI Expected Impact
The largest expected impact of AI is on the number of developers required for production. Interestingly, while 69% of US respondents expect AI tools to improve the quality of cybersecurity, only 30% of Canadian respondents agree.
Topics of Most Value
Overall, guidance on a specific AI category is considered most helpful to software development teams, though this varies by company size. Conversely, general MLOps guidance is least favoured across companies of all sizes. Medium-sized companies ($250M to <$1B) prefer general AI awareness over specific topics.
Conclusion
These findings highlight a surge in AI adoption led by large enterprises, which emphasize a mix of LLM strategies: building proprietary models, using commercially available ones, and relying on open-source solutions. Medium-sized companies exhibit high demand for data scientists to build proprietary models, indicating an appetite for custom AI solutions tailored to their unique business needs.
Various concerns emerge across AI tool usage, with security being paramount, especially for proprietary models. This finding coincides with model security testing being the most prevalent preventive measure undertaken. While commercial AI users focus on explainability, open-source users prioritize privacy.
Although there is broad enthusiasm for AI’s role in code development, opinions diverge regarding AI’s impact on cybersecurity. The majority support the idea that generative AI outputs should be patentable, aligning with existing AI/ML protections. However, skepticism is evident among management about AI’s usefulness for command-line tools or scripts.
As generative AI and LLMs continue to evolve, their potential to enhance cybersecurity is immense. Yet the security landscape is increasingly intricate. Comprehensive cybersecurity software aligned with clear policies is essential for businesses to navigate this complexity and harness the full potential of AI.