Active filters: Qwen3
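The listing below is the Hub model index filtered on "Qwen3". As a rough sketch of how the same view could be reproduced programmatically, the snippet below uses huggingface_hub.list_models; the exact parameter names and the ModelInfo fields used (id, pipeline_tag, downloads, likes) are assumptions about a recent huggingface_hub release and may differ between versions.

# Sketch: reproduce a "Qwen3"-filtered Hub listing similar to the one below.
# Assumption: a recent huggingface_hub release where list_models accepts
# search/sort/direction/limit and the returned ModelInfo objects expose
# id, pipeline_tag, downloads and likes.
from huggingface_hub import list_models

for m in list_models(search="Qwen3", sort="downloads", direction=-1, limit=30):
    print(f"{m.id} • {m.pipeline_tag} • {m.downloads} downloads • {m.likes} likes")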
nvidia/Qwen3-Next-80B-A3B-Thinking-NVFP4 • Text Generation • 58.3k downloads • 40 likes
nvidia/Qwen3-Next-80B-A3B-Instruct-NVFP4 • Text Generation • 21.2k downloads • 23 likes
nvidia/Qwen3-235B-A22B-Thinking-2507-NVFP4 • Text Generation • 120B params • 162 downloads • 3 likes
nightmedia/Qwen3-4B-Agent-Claude-Gemini • Text Generation • 4B params • 190 downloads • 2 likes
QuantTrio/Qwen3-235B-A22B-Instruct-2507-GPTQ-Int4-Int8Mix • Text Generation • 248B params • 428 downloads • 3 likes
QuantTrio/Qwen3-235B-A22B-Thinking-2507-GPTQ-Int4-Int8Mix • Text Generation • 253B params • 13 downloads • 3 likes
(unnamed model) • Text Generation • 15B params • 3.23k downloads • 3 likes
litert-community/Qwen3-0.6B • Text Generation • 2.09k downloads • 8 likes
nightmedia/Qwen3-4B-Agent-F32-dwq4-mlx • Text Generation • 0.8B params • 111 downloads • 4 likes
nightmedia/Qwen3-4B-Agent-Claude-Gemini-heretic-qx86-hi-mlx • Text Generation • 1B params • 54 downloads • 1 like
nvidia/Qwen3-Coder-480B-A35B-Instruct-NVFP4 • Text Generation • 241B params • 121 downloads • 1 like
nightmedia/Qwen3-4B-Element18 • Text Generation • 4B params • 185 downloads • 3 likes
PeterAM4/Qwen3-Embedding-0.6B-GGUF • Sentence Similarity • 0.6B params • 9.67k downloads • 1 like
DavidAU/Qwen3-24B-A4B-Freedom-HQ-Thinking-Abliterated-Heretic-NEOMAX-Imatrix-GGUF • Text Generation • 18B params • 3.02k downloads • 16 likes
DavidAU/Qwen3-4B-Gemini-TripleX-High-Reasoning-Thinking-Heretic-Uncensored-GGUF • Text Generation • 4B params • 4.7k downloads • 25 likes
DavidAU/Qwen3-6B-Hivemind-Instruct-Heretic-Abliterated-Uncensored-NEO-Imatrix-GGUF • Text Generation • 6B params • 1.27k downloads • 5 likes
DavidAU/Qwen3-8B-Hivemind-Instruct-Heretic-Abliterated-Uncensored-NEO-Imatrix-GGUF • Text Generation • 8B params • 8.34k downloads • 26 likes
DavidAU/Qwen3-24B-A4B-Freedom-Thinking-Abliterated-Heretic-NEO-Imatrix-GGUF • Text Generation • 17B params • 3.46k downloads • 21 likes
DavidAU/Qwen3-48B-A4B-Savant-Commander-Distill-12X-Closed-Open-Heretic-Uncensored-GGUF • Text Generation • 34B params • 3.93k downloads • 28 likes
DavidAU/Qwen3-4B-Hivemind-Instruct-Heretic-Abliterated-Uncensored-NEO-Imatrix-GGUF • Text Generation • 4B params • 2.31k downloads • 10 likes
JunHowie/Qwen3-0.6B-GPTQ-Int4 • Text Generation • 0.6B params • 342 downloads • 1 like
JunHowie/Qwen3-0.6B-GPTQ-Int8 • Text Generation • 0.6B params • 30 downloads
JunHowie/Qwen3-1.7B-GPTQ-Int4 • Text Generation • 2B params • 1.47k downloads • 1 like
JunHowie/Qwen3-1.7B-GPTQ-Int8 • Text Generation • 2B params • 15 downloads
JunHowie/Qwen3-32B-GPTQ-Int4 • Text Generation • 33B params • 9.72k downloads • 4 likes
JunHowie/Qwen3-32B-GPTQ-Int8 • Text Generation • 33B params • 309 downloads • 3 likes
JunHowie/Qwen3-30B-A3B-GPTQ-Int4 • Text Generation • 5B params • 9 downloads • 1 like
(unnamed model) • Text Generation • 15 downloads
JunHowie/Qwen3-14B-GPTQ-Int8 • Text Generation • 15B params • 120 downloads • 1 like
JunHowie/Qwen3-14B-GPTQ-Int4 • Text Generation • 15B params • 904 downloads • 4 likes
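Any repo id in the listing above can be pulled through the standard transformers loading path. Below is a minimal sketch using one of the smaller GPTQ checkpoints from the list; it assumes transformers plus accelerate are installed and that a GPTQ backend (for example optimum with gptqmodel or auto-gptq) is available, which depends on how the checkpoint was produced.

# Sketch: load and run one of the smaller checkpoints listed above.
# Assumptions: transformers + accelerate are installed, and a GPTQ backend
# (e.g. optimum with gptqmodel/auto-gptq) is available for the Int4 weights.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "JunHowie/Qwen3-0.6B-GPTQ-Int4"  # repo id taken from the listing above

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

prompt = "Give a one-sentence description of the Qwen3 model family."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))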