Models to evaluate

Collecting models I want to evaluate on shadereval-task2 (https://github.com/bigcode-project/bigcode-evaluation-harness/pull/173), all at fp16!
- (unnamed 7B model) — currently #1, with an error rate of 0.353
- deepseek-ai/deepseek-coder-1.3b-base — previous #1, error rate 0.38
- stabilityai/stable-code-3b
- bigcode/starcoder2-7b
- bigcode/starcoder2-3b
- Vipitis/santacoder-finetuned-Shadertoys-fine — notable difference between fp16 and fp32; will need to run bf16 as well. Likely contaminated.
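The fp16-vs-fp32 gap noted for this checkpoint comes down to half-precision rounding: float16 keeps only a 10-bit mantissa, so weights and activations are quantized far more coarsely than in float32's 23 bits. A minimal sketch of the effect using NumPy (an illustration only — the model itself isn't needed to see it):

```python
import numpy as np

# 0.1 has no exact binary representation; float16 rounds it
# much more coarsely than float32 does.
x16 = np.float16(0.1)
x32 = np.float32(0.1)
print(float(x16))  # 0.0999755859375
print(float(x32))  # 0.10000000149011612

# Accumulated across thousands of matmul terms, such rounding
# differences can flip a greedy-decoded token and change the
# generated shader — hence different error rates per dtype.
assert float(x16) != float(x32)
```

This is why the same checkpoint can score differently when evaluated at fp16, bf16, and fp32; bf16 trades mantissa bits for the fp32 exponent range, so it fails in yet another pattern.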
- google/gemma-7b
- google/codegemma-2b
- Vipitis/santacoder-finetuned-Shadertoys — likely contaminated
- Deci/DeciCoder-1b — current result is not fully correct and needs a rerun; however, I don't know a transformers version that runs this model without errors
- google/gemma-2b
- Salesforce/codegen2-1B_P — needs a rerun with the incomplete_generation tag
- Vipitis/santacoder-finetuned-the-stack-glsl
- microsoft/phi-1_5
- microsoft/phi-1
- microsoft/phi-2 — performs the worst, with an error rate of 0.79
ShaderMatch — code completion benchmark for GLSL shader code. This space holds the evaluation metric that is used; it also has a usually up-to-date leaderboard. Check for updates: https://huggingface.co/spaces/Vipitis/shadermatch/blob/main/result_preview.png
- zai-org/codegeex2-6b
- deepseek-ai/deepseek-coder-5.7bmqa-base
- deepseek-ai/deepseek-coder-6.7b-base
- bigcode/gpt_bigcode-santacoder
- bigcode/starcoderbase
- google/codegemma-7b
- aiXcoder/aixcoder-7b-base
- Qwen/CodeQwen1.5-7B
- ibm-granite/granite-3b-code-base-2k
- mistralai/Codestral-22B-v0.1
- deepseek-ai/DeepSeek-Coder-V2-Lite-Base
- Salesforce/codet5p-2b
- facebook/llm-compiler-7b
- meta-llama/Llama-3.1-8B
- meta-llama/CodeLlama-7b-hf
- 01-ai/Yi-Coder-9B
- Qwen/Qwen2.5-Coder-1.5B
- Qwen/Qwen2.5-Coder-7B
- infly/OpenCoder-1.5B-Base
- Qwen/Qwen2.5-Coder-0.5B