Mayank Sharma reviews an article:
Academic researchers have found that nearly 40% of the code suggestions produced by GitHub’s Copilot tool contain security vulnerabilities.
Developed by GitHub in collaboration with OpenAI and currently in private beta testing, Copilot leverages artificial intelligence (AI) to make relevant code suggestions to programmers as they write.
To quantify the security of the system’s output, the researchers created 89 different scenarios for Copilot to complete, which produced over 1,600 programs. Reviewing them, the researchers found that almost 40% were vulnerable in one way or another.
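To give a sense of the kind of weakness such a study flags, here is a minimal sketch of one classic class of vulnerability, SQL injection, contrasting an unsafe string-built query with a parameterized one. This is an illustration only; the function names and schema are hypothetical and not taken from the study.

```python
import sqlite3

def get_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable pattern: user input is interpolated directly into
    # the query string, so a crafted username can alter the SQL.
    query = "SELECT * FROM users WHERE name = '%s'" % username
    return conn.execute(query).fetchall()

def get_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver binds the value separately,
    # so the input cannot change the structure of the statement.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchall()
```

The difference is invisible in casual testing with benign input, which is exactly why auto-suggested code of the first form can slip through review.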
Click through to learn more and for a link to the article itself. I would be interested in reading GitHub’s thoughts on this.