LLM Evaluation


MultiPL-E

https://github.com/nuprl/MultiPL-E

A multi-programming language benchmark for LLMs

Multi-Programming Language Evaluation of Large Language Models of Code (MultiPL-E)

MultiPL-E is a system for translating unit-test-driven neural code generation benchmarks into new languages. Its authors have used it to translate two popular Python benchmarks (HumanEval and MBPP) into 18 other programming languages, so the same problems can be evaluated across languages by executing the translated tests against model completions.
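To illustrate the idea (not MultiPL-E's actual translator code; the function names and the simplified prompt format below are hypothetical), a benchmark translator takes a Python-style prompt (function name, parameters, docstring) plus its unit tests expressed as input/expected-output pairs, and re-renders both in a target language. The model is then asked to complete the translated stub, and the translated assertions check the result, keeping the evaluation execution-based in every language. A minimal sketch targeting JavaScript, assuming only JSON-representable test values:

import json

def translate_prompt_to_js(name, params, docstring):
    """Render a HumanEval-style Python prompt as a JavaScript function stub."""
    doc = "\n".join(f"// {line}" for line in docstring.strip().splitlines())
    return f"{doc}\nfunction {name}({', '.join(params)}) {{\n"

def translate_tests_to_js(name, cases):
    """Render (inputs, expected) pairs as JavaScript assertions.

    json.dumps maps Python ints, floats, strings, booleans, None, and lists
    onto the corresponding JavaScript literals, which keeps the translation
    trivially correct for JSON-representable values.
    """
    lines = ["const assert = require('node:assert');"]
    for inputs, expected in cases:
        args = ", ".join(json.dumps(v) for v in inputs)
        lines.append(f"assert.deepStrictEqual({name}({args}), {json.dumps(expected)});")
    return "\n".join(lines)

if __name__ == "__main__":
    # Hypothetical benchmark problem, for illustration only.
    prompt = translate_prompt_to_js(
        "addElements",
        ["arr", "k"],
        "Given an array of integers arr and an integer k,\nreturn the sum of the first k elements.",
    )
    tests = translate_tests_to_js(
        "addElements",
        [([[1, 2, 3], 2], 3), ([[10, -1, 4], 3], 13)],
    )
    print(prompt)
    print(tests)

The real system handles much more (type annotations, language-specific doctest rewriting, deep equality semantics per language), but the shape is the same: translate the prompt and the tests, generate in the target language, then run the tests to score the completion.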