Bigcode Evaluation Harness

bigcode-evaluation-harness

https://github.com/bigcode-project/bigcode-evaluation-harness

A framework for the evaluation of autoregressive code generation language models.

Code Generation LM Evaluation Harness

This is a framework for the evaluation of code generation models, inspired by EleutherAI/lm-evaluation-harness, a framework for evaluating language models in general.
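The core metric such harnesses report is pass@k: the probability that at least one of k sampled completions passes the unit tests. Below is a minimal sketch of the standard unbiased pass@k estimator (from the Codex paper) that these harnesses compute; the sample counts are made up purely for illustration.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples drawn from n generations (c of which passed the tests) is correct."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Illustrative numbers (not real results): 200 samples per problem,
# with per-problem counts of generations that passed the unit tests.
passing_counts = [37, 0, 112, 5]
print(np.mean([pass_at_k(200, c, k=1) for c in passing_counts]))
```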

odex

https://github.com/zorazrw/odex

Execution-Based Evaluation for Open Domain Code Generation

https://code-eval.github.io/

https://arxiv.org/abs/2212.10481

To extend the scope of coding queries to more realistic settings, we propose ODEX, the first Open-Domain EXecution-based natural language (NL) to Python code generation dataset. ODEX has 945 NL-Code pairs spanning 79 diverse libraries, along with 1,707 human-written test cases for execution. Our NL-Code pairs are harvested from StackOverflow forums to encourage natural and practical coding queries. Moreover, ODEX supports intents in four natural languages: English, Spanish, Japanese, and Russian. ODEX unveils intriguing behavioral differences among top-performing code language models (LMs). While CODEX achieves better overall results, CODEGEN improves effectively via scaling: CODEGEN 6.1B performs comparably to CODEX 12B. Both models show substantial gaps between open and closed domains, but CODEGEN gaps tend to decrease with model size while CODEX gaps increase. We release ODEX to facilitate research into open-domain problems for the code generation community.
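For flavor, here is a minimal sketch of execution-based scoring in the spirit of ODEX: a model completion is appended to the prompt and run together with a human-written test in a subprocess with a timeout. The prompt, completion, and test below are made-up placeholders, and the real ODEX harness adds sandboxing and library handling on top of this.

```python
import subprocess
import sys
import tempfile

def passes_tests(prompt: str, completion: str, test_code: str, timeout: float = 5.0) -> bool:
    """Write prompt + completion + test to a temp file and execute it;
    the sample passes if the script exits cleanly within the timeout."""
    program = prompt + completion + "\n\n" + test_code
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(program)
        path = f.name
    try:
        result = subprocess.run([sys.executable, path], timeout=timeout, capture_output=True)
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        return False

# Made-up example: an NL intent as a docstring, a model completion,
# and a human-written assertion serving as the execution-based test.
prompt = 'def dedupe(xs):\n    """Remove duplicates from xs, keeping order."""\n'
completion = "    return list(dict.fromkeys(xs))\n"
test = "assert dedupe([1, 2, 1, 3]) == [1, 2, 3]"
print(passes_tests(prompt, completion, test))
```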