“For 60 years, it’s been a dream that we would have an AI that can write computer programs,” says Chris Jermaine, Professor and Chair of Rice University’s Department of Computer Science.
Recently, there have been major advances in designing and training huge machine learning models for natural language. The fundamental problem with applying those models to program synthesis—asking them to write computer programs—is the accuracy of the code that’s produced. Natural language models don’t know code. “They end up making a lot of mistakes,” Jermaine explains. “They read a lot of code, but they don’t understand semantics.”
While a doctoral student in Rice’s Computer Science program, Rohan Mukherjee (PhD ‘20) chose to tackle this difficult problem for his dissertation under the guidance of Jermaine.
Mukherjee led researchers at Rice in partnership with Yeming Wen, Swarat Chaudhuri and Dipak Chaudhari at the University of Texas at Austin and Thomas W. Reps at the University of Wisconsin. They combined neural machine learning and symbolic methods to write programs free of basic semantic errors, outperforming larger, cutting-edge transformer models.
The team’s paper, “Neural Program Generation Modulo Static Analysis,” was selected as a Spotlight paper at the Thirty-Fifth Annual Conference on Neural Information Processing Systems (NeurIPS) in December.
"We have presented a system that can aid a programmer in writing new Java methods inside an unfinished class automatically,” explains Mukherjee. “The system uses state-of-the-art machine learning to understand programmer context accurately based on multiple signals in the unfinished class.”
While very large natural language models, such as GPT-3 and GPT-Neo, do a good job of creating code, they are prohibitively expensive to train, putting them within reach only of big tech companies and governments. GPT-3, for example, reportedly cost on the order of $10 million to train. The team’s new models are much smaller and far less expensive to train, and yet in many ways they outperform the larger models.
“On the synthesizer side, current tools that learn entirely from data struggle with semantic correctness,” says Mukherjee. “We built a neuro-symbolic unit that can not only learn from user context but also uses a Java compiler as a guiding engine to generate programs that are more semantically accurate.”
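The paper weaves static analyses into generation itself; the sketch below shows only the simplest post-hoc variant of the idea, using the standard javax.tools compiler API as a semantic filter that rejects candidate programs that fail to compile. The class and method names here are our own, not the paper’s:

```java
// Minimal sketch (not the paper's exact mechanism) of using the Java
// compiler as a semantic filter: accept a candidate program only if it
// compiles. Requires a JDK, where getSystemJavaCompiler() is non-null.
import javax.tools.JavaCompiler;
import javax.tools.SimpleJavaFileObject;
import javax.tools.ToolProvider;
import java.net.URI;
import java.util.List;

public class CompileCheck {
    // In-memory source file so candidates can be compiled without touching disk.
    static class StringSource extends SimpleJavaFileObject {
        private final String code;
        StringSource(String name, String code) {
            super(URI.create("string:///" + name + ".java"), Kind.SOURCE);
            this.code = code;
        }
        @Override public CharSequence getCharContent(boolean ignoreEncodingErrors) {
            return code;
        }
    }

    static boolean compiles(String className, String source) {
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        // Nulls use the defaults; diagnostics for failures go to stderr.
        return compiler.getTask(null, null, null, null, null,
                List.of(new StringSource(className, source))).call();
    }

    public static void main(String[] args) {
        String good = "public class Candidate { int f() { return 1; } }";
        String bad  = "public class Candidate { int f() { return missing; } }";
        System.out.println(compiles("Candidate", good)); // true
        System.out.println(compiles("Candidate", bad));  // false: unknown symbol
    }
}
```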
Making the results of elementary static analyses available to modern machine learning methods, alongside the code itself, improves the code they produce. In effect, the model is shown the output of simple analyses, much as an instructor in an introductory class teaches new programmers how programs work.
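As a minimal sketch of the kind of facts such an analysis can surface (our illustration, not the paper’s implementation), the following program uses reflection on a compiled class as a stand-in for a static analyzer, listing the names and types a generator would be allowed to refer to:

```java
// Sketch: enumerate simple "what is in scope" facts for a class.
// A neural generator conditioned on facts like these can avoid referring
// to fields or methods that do not exist.
import java.lang.reflect.Field;
import java.lang.reflect.Method;

public class ScopeFacts {
    public static void main(String[] args) {
        Class<?> cls = String.class;   // class under "analysis"
        for (Field f : cls.getDeclaredFields()) {
            System.out.println("field  " + f.getName()
                    + " : " + f.getType().getSimpleName());
        }
        for (Method m : cls.getDeclaredMethods()) {
            System.out.println("method " + m.getName()
                    + " returns " + m.getReturnType().getSimpleName());
        }
    }
}
```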
According to Jermaine, “What we find is that we can use a much smaller and less advanced natural language model along with the results of these very simple, static semantic analyses, that together do a really great job of producing code.”
“What this points to is a future where you would take some of these really large and sophisticated natural language models and pair them with something that really understands programming. It goes toward validating that hypothesis.”
“We are not only solving the problem of synthesizing the most correct program for a given context,” says Mukherjee. “We’re also ensuring that you can actually run this on your system without any of the problems that the big machine learning models are known to have.”
“Programmers will be able to write programs much faster. They can reuse existing programs, so there is less chance of introducing bugs. And it’s also much more modular. We are using existing API calls, which makes the code much smaller,” Mukherjee continues.
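As an illustrative contrast (our example, not output from the system), reusing an existing API call is what keeps generated code short: reading a file line by line takes several statements by hand, but a single call through java.nio.file.Files does the same job:

```java
// Reusing a standard API call instead of a hand-written read loop.
// Usage: java ReadAll <file>   (requires Java 11+ for Path.of)
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class ReadAll {
    public static void main(String[] args) throws IOException {
        List<String> lines = Files.readAllLines(Path.of(args[0]));
        lines.forEach(System.out::println);
    }
}
```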
This work is an important step toward realizing automatic programming. While it may be a decade or two before programming fundamentally changes, in the future software engineers will become designers, guiding AI as it writes code. Currently, there is a tremendous shortage of programmers; this technology could help fill that gap and increase productivity.
While the system the team has developed is for Java, they are working on a version for Python as well.
When thinking back to his time at Rice, Mukherjee emphasizes the importance of the close-knit community and small program size. “We knew everyone. We bonded with the professors and with all the staff as well; it felt like a family. It felt amazing to be part of the community because we could hang out all day, all night and feel at home.”
Mukherjee completed his PhD at Rice in August of 2020 and now works as an Applied Scientist for Amazon Alexa.
For more information, the complete paper is available online.