TECHNOLOGY

AI can write code like humans – bugs and all


Some software developers are now letting artificial intelligence help them write code. They're finding that AI is just as flawed as humans.

Last June, GitHub, a subsidiary of Microsoft that provides tools for hosting and collaborating on code, released a beta version of a program that uses AI to assist programmers. Start typing a command, a database query, or a request to an API, and the program, called Copilot, will guess your intent and write the rest.
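In practice, the interaction looks something like the sketch below: the developer types a comment and a function signature, and the assistant proposes the rest. The completion here is a hand-written illustration, not actual Copilot output, and the API endpoint is made up:

    import requests

    # The developer types this comment and signature...
    # Fetch the current temperature for a city from a JSON weather API.
    def get_temperature(city: str) -> float:
        # ...and a Copilot-style assistant might suggest the body below.
        url = "https://api.example.com/weather"  # hypothetical endpoint
        response = requests.get(url, params={"q": city}, timeout=10)
        response.raise_for_status()
        return response.json()["temperature"]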

Alex Naka, a data scientist at a biotech firm who signed up to test Copilot, says the program can be very helpful and has changed the way he works. “It lets me spend less time jumping to the browser to look up API docs or examples on Stack Overflow,” he says. “It feels a bit like my job has shifted from being a generator of code to being a discriminator of it.”

But Naka has noticed that errors can creep into his code in a variety of ways. “There have been times when I’ve missed some kind of subtle error after accepting one of its proposals,” he says. “And it can be really hard to track down, perhaps because it seems to make errors with a different flavor than the kind I would make.”

The risk of AI generating faulty code turns out to be surprisingly high. Researchers at NYU recently analyzed code generated by Copilot and found that, for certain tasks where security is crucial, the code contains security flaws around 40 percent of the time.

“That figure is a little higher than I expected,” says Brendan Dolan-Gavitt, an NYU professor involved in the analysis. “But the way Copilot was trained wasn’t actually to write good code – it was just to produce the kind of text that would follow a given prompt.”
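For a sense of what such a flaw looks like, consider a hypothetical Python example (illustrative, not taken from the NYU analysis). A completion that builds a database query by splicing user input into the SQL string invites SQL injection; the parameterized version below it is the safe pattern:

    import sqlite3

    def find_user_unsafe(conn: sqlite3.Connection, username: str):
        # Flawed pattern an autocompleter might plausibly suggest: user input
        # is spliced directly into the SQL string, so an input such as
        # "x' OR '1'='1" rewrites the meaning of the query (SQL injection).
        query = f"SELECT * FROM users WHERE name = '{username}'"
        return conn.execute(query).fetchall()

    def find_user_safe(conn: sqlite3.Connection, username: str):
        # Safe pattern: a parameterized query lets the driver handle escaping.
        return conn.execute(
            "SELECT * FROM users WHERE name = ?", (username,)
        ).fetchall()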

Despite such flaws, Copilot and similar AI-powered tools may herald a sea change in the way software developers write code. There is growing interest in using AI to help automate more mundane work. But Copilot also exposes some of the shortcomings of today’s AI techniques.

When Dolan-Gavitt analyzed the code available for the Copilot plugin, he found that it included a list of restricted phrases. These were apparently introduced to prevent the system from blurting out offensive messages or copying well-known code written by someone else.

“Security has been a concern from the start,” says Oege de Moor, vice president of research at GitHub and one of the developers of Copilot. He says the percentage of flawed code cited by the NYU researchers is only relevant for the subset of code where security flaws are more likely.

De Moor helped create CodeQL, a tool used by the NYU researchers that automatically identifies bugs in code. He says GitHub recommends that developers use Copilot together with tools such as CodeQL to make sure their work is safe.

The GitHub program is built on an AI model developed by OpenAI, a leading AI company doing cutting-edge work in machine learning. That model, called Codex, is a large artificial neural network trained to predict the next characters in both text and computer code. The algorithm learned to write code by ingesting billions of lines of code stored on GitHub – not all of it flawless.
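To make that training objective concrete, here is a toy sketch (an illustration, vastly simpler than Codex): a model that predicts the next character using simple two-character statistics over a tiny corpus. Codex does the same job with a huge neural network and a far larger corpus:

    from collections import Counter, defaultdict

    # Toy stand-in for Codex's objective: given the text so far, predict the
    # next character. Here the "model" is just counts of which character
    # follows each pair of characters in a tiny training corpus.
    corpus = "def add(a, b):\n    return a + b\n" * 3

    counts = defaultdict(Counter)
    for i in range(len(corpus) - 2):
        counts[corpus[i : i + 2]][corpus[i + 2]] += 1  # what follows each pair

    def predict_next(text: str) -> str:
        # Most frequent continuation of the last two characters; a real model
        # predicts a probability distribution over tokens and samples from it.
        return counts[text[-2:]].most_common(1)[0][0]

    text = "de"
    for _ in range(30):
        text += predict_next(text)
    print(repr(text))  # replays the memorized header, then drifts into spaces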

OpenAI has built its own AI coding tool on top of Codex that can perform some stunning coding tricks. It can turn a typed instruction, such as “Create an array of random variables between 1 and 100 and then return the largest of them,” into working code in several programming languages.
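Assuming Python as the target language, working code for that quoted instruction might look like the following (a hand-written sketch of plausible output, not actual Codex output):

    import random

    # "Create an array of random variables between 1 and 100
    # and then return the largest of them."
    def largest_random(n: int = 10) -> int:
        values = [random.randint(1, 100) for _ in range(n)]
        return max(values)

    print(largest_random())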

Another version of the same OpenAI program, known as GPT-3, can generate coherent text on a given subject, but it can also regurgitate offensive or biased language learned from the darker corners of the web.

Copilot and Codex have led some developers to wonder whether AI might automate them out of work. In fact, as Naka’s experience shows, developers need considerable skill to use the program, since they often must vet or tweak its suggestions.




