Publications of SPCL

T. Ben-Nun, A. S. Jakobovits, T. Hoefler:

Neural Code Comprehension: A Learnable Representation of Code Semantics

(In Advances in Neural Information Processing Systems 31, presented in Montreal, Canada, pages 3589--3601, Curran Associates, Inc., Dec. 2018)

Abstract

With the recent success of embeddings in natural language processing, research has been conducted into applying similar methods to code analysis. Most works attempt to process the code directly or use a syntactic tree representation, treating it like sentences written in a natural language. However, none of the existing methods are sufficient to comprehend program semantics robustly, due to structural features such as function calls, branching, and interchangeable order of statements. In this paper, we propose a novel processing technique to learn code semantics, and apply it to a variety of program analysis tasks. In particular, we stipulate that a robust distributional hypothesis of code applies to both human- and machine-generated programs. Following this hypothesis, we define an embedding space, inst2vec, based on an Intermediate Representation (IR) of the code that is independent of the source programming language. We provide a novel definition of contextual flow for this IR, leveraging both the underlying data- and control-flow of the program. We then analyze the embeddings qualitatively using analogies and clustering, and evaluate the learned representation on three different high-level tasks. We show that with a single RNN architecture and pre-trained fixed embeddings, inst2vec outperforms specialized approaches for performance prediction (compute device mapping, optimal thread coarsening) and for algorithm classification from raw code (104 classes), where we set a new state of the art.

BibTeX

@incollection{ncc,
  author={Tal Ben-Nun and Alice Shoshana Jakobovits and Torsten Hoefler},
  title={{Neural Code Comprehension: A Learnable Representation of Code Semantics}},
  booktitle={Advances in Neural Information Processing Systems 31},
  pages={3589--3601},
  publisher={Curran Associates, Inc.},
  location={Montreal, Canada},
  year={2018},
  month={12},
}