AI-Based Coding Assistants Increase Bug Occurrence in Developer Code, Stanford Study Finds

Recent research from Stanford University computer scientists indicates that using AI-based coding assistants increases the occurrence of bugs in developer code. Entitled ‘Do Users Write More Insecure Code with AI Assistants?’, the study analyzes developers’ use of AI coding aids such as GitHub Copilot. In particular, the paper found a higher rate of security vulnerabilities, notably around string encryption and SQL injection, when these assistants were employed.
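The study does not reproduce participants' code, but the SQL injection class it tested for is well understood. A minimal sketch in Python's standard sqlite3 module, using a hypothetical malicious input, shows why a parameterized query matters (the vulnerable variant is shown commented out):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
    conn.execute("INSERT INTO users VALUES ('alice', 0)")

    user_input = "alice' OR '1'='1"  # hypothetical attacker-controlled input

    # Vulnerable: string interpolation lets the input rewrite the query,
    # so the OR clause matches every row.
    # rows = conn.execute(f"SELECT * FROM users WHERE name = '{user_input}'")

    # Safe: a parameterized query treats the input purely as data.
    rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,))
    print(rows.fetchall())  # [] -- no user is literally named "alice' OR '1'='1"

Because the parameterized form never interprets the attacker-controlled string as SQL, the query returns no rows instead of every row.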

The paper also found that developers using AI assistants seemingly had higher confidence in the quality of their code than those who did not. To investigate this, 47 participants were asked to write code in response to a series of prompts; some had AI assistance while the rest did not. The results suggest that individuals with access to an AI assistant were more convinced they had written secure code than their counterparts without one.

One task asked participants to write two functions in Python, one that encrypts and one that decrypts a given string using a given symmetric key. Among coders without AI assistance, 79 percent provided correct answers, yet only 67 percent of the assisted group achieved success. Furthermore, the researchers found that the assisted group was significantly more likely (per Welch’s unequal variances t-test) to produce insecure solutions, and more likely to employ simplistic approaches such as substitution ciphers without conducting an authenticity check on the final result.
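The paper does not publish the submissions themselves, but the gap it describes is easy to illustrate. A minimal sketch of the safer pattern, using the third-party cryptography package's Fernet recipe (AES encryption combined with an HMAC authenticity check) rather than a hand-rolled substitution cipher; the message and variable names are illustrative only:

    # Requires the third-party "cryptography" package: pip install cryptography
    from cryptography.fernet import Fernet

    def encrypt(plaintext: str, key: bytes) -> bytes:
        """Encrypt a string with a symmetric key; the token carries an HMAC tag."""
        return Fernet(key).encrypt(plaintext.encode("utf-8"))

    def decrypt(token: bytes, key: bytes) -> str:
        """Decrypt a token; raises InvalidToken if the authenticity check fails."""
        return Fernet(key).decrypt(token).decode("utf-8")

    key = Fernet.generate_key()  # a fresh, properly formatted symmetric key
    token = encrypt("attack at dawn", key)
    assert decrypt(token, key) == "attack at dawn"

Fernet’s decrypt step rejects any ciphertext that has been altered, which is precisely the authenticity check the study found AI-assisted submissions tended to omit.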

At a recent technical event, one attendee reportedly joked about their hopes for AI assistance in development tasks, describing the support they wanted as being “like Stack Overflow but better, because it never tells you that your question was dumb.” Last month, OpenAI and Microsoft were hit with a lawsuit over GitHub Copilot, an AI-powered assistant trained on billions of lines of public code from other developers.

The complaint claims that Copilot infringes the rights of developers by scraping their code without due credit. As a result, users could unknowingly run afoul of copyright when adopting the software’s suggested code. Earlier this year, Bradley M. Kuhn of the Software Freedom Conservancy wrote that “Copilot leaves copyleft compliance as an exercise for the user, potentially putting [users] in line for mounting liability in proportion to improvements made by Copilot.”
