
huggingface/CodeBERTa-small-v1

CodeBERTa is a RoBERTa-like model trained on the CodeSearchNet dataset from GitHub. Supported languages: go, java, javascript, php, python, ruby.

Input

A text prompt that includes exactly one <mask> token.

Output

The top-ranked completions for the <mask> token, each with its score, e.g.:

where is my father? (0.09)

where is my mother? (0.08)
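
As a minimal sketch of querying the model (assuming the standard transformers fill-mask pipeline; the masked Python snippet is an arbitrary illustration, not an official example):

from transformers import pipeline

# Load the fill-mask pipeline with the CodeBERTa checkpoint.
fill_mask = pipeline(
    "fill-mask",
    model="huggingface/CodeBERTa-small-v1",
    tokenizer="huggingface/CodeBERTa-small-v1",
)

# The prompt must contain exactly one <mask> token.
masked_code = "def hello_world(): <mask>('Hello, world!')"

# Each prediction is a dict holding the completed sequence and its score.
for prediction in fill_mask(masked_code):
    print(prediction["sequence"], f"({prediction['score']:.2f})")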

CodeBERTa

CodeBERTa is a RoBERTa-like model trained on the CodeSearchNet dataset from GitHub.

Supported languages:

"go"
"java"
"javascript"
"php"
"python"
"ruby"

The tokenizer is a Byte-level BPE tokenizer trained on the corpus using Hugging Face tokenizers.

Because it is trained on a corpus of code (rather than natural language), it encodes the corpus efficiently: the tokenized sequences are 33% to 50% shorter than the same corpus tokenized by gpt2/roberta.
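
A rough way to see this in practice (a sketch only; the sample function below is arbitrary, and the exact ratio varies from file to file) is to compare token counts against the roberta-base tokenizer:

from transformers import AutoTokenizer

codeberta_tok = AutoTokenizer.from_pretrained("huggingface/CodeBERTa-small-v1")
roberta_tok = AutoTokenizer.from_pretrained("roberta-base")

snippet = "def add(a, b):\n    return a + b"

# The code-trained byte-level BPE typically needs noticeably fewer tokens for source code.
print("CodeBERTa tokens:   ", len(codeberta_tok.tokenize(snippet)))
print("roberta-base tokens:", len(roberta_tok.tokenize(snippet)))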

The (small) model is a 6-layer, 84M-parameter, RoBERTa-like Transformer model – the same number of layers & heads as DistilBERT – initialized with the default initialization settings and trained from scratch on the full corpus (~2M functions) for 5 epochs.
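
For reference, a quick sanity check of that configuration (a sketch assuming the standard transformers RoBERTa classes):

from transformers import RobertaForMaskedLM

model = RobertaForMaskedLM.from_pretrained("huggingface/CodeBERTa-small-v1")

print(model.config.num_hidden_layers)               # 6 layers
print(sum(p.numel() for p in model.parameters()))   # on the order of 84M parameters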

Tensorboard for this training ⤵️

[TensorBoard training screenshot]

Downstream task: programming language identification

See the model card for huggingface/CodeBERTa-language-id 🤯.
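
As a hedged sketch of how that fine-tuned checkpoint could be used (the exact label names come from that model's own config, so treat the output format as illustrative):

from transformers import pipeline

language_id = pipeline(
    "text-classification",
    model="huggingface/CodeBERTa-language-id",
    tokenizer="huggingface/CodeBERTa-language-id",
)

# Expected to return something like [{"label": "python", "score": 0.99}].
print(language_id("def f(x):\n    return x ** 2"))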


CodeSearchNet citation

@article{husain_codesearchnet_2019,
	title = {{CodeSearchNet} {Challenge}: {Evaluating} the {State} of {Semantic} {Code} {Search}},
	shorttitle = {{CodeSearchNet} {Challenge}},
	url = {http://arxiv.org/abs/1909.09436},
	urldate = {2020-03-12},
	journal = {arXiv:1909.09436 [cs, stat]},
	author = {Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc},
	month = sep,
	year = {2019},
	note = {arXiv: 1909.09436},
}