gpt-oss-120b

gpt-oss-120b is an open-weight mixture-of-experts model from OpenAI with 117 billion total parameters and 5.1 billion active parameters per forward pass, optimized to run on a single H100 or AMD MI300X GPU using native MXFP4 quantization. It delivers o3/o4-mini-class reasoning with configurable thinking depth, full chain-of-thought access, and native tool use — including function calling, browsing, and structured output generation — making it well-suited for production, general-purpose, and high-reasoning use cases.
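A quick back-of-envelope check on why the model fits on a single 80 GB GPU, assuming MXFP4 follows the OCP microscaling layout (4-bit E2M1 elements in blocks of 32 with one shared 8-bit scale per block, i.e. about 4.25 bits per parameter):

```python
# Hedged sketch: weight footprint under the MXFP4 block layout described above.
total_params = 117e9                    # total parameters in gpt-oss-120b
bits_per_param = (32 * 4 + 8) / 32      # 4-bit elements + 8-bit scale per 32-block = 4.25
weight_bytes = total_params * bits_per_param / 8

print(f"~{weight_bytes / 1e9:.0f} GB of weights")  # prints "~62 GB of weights"
```

At roughly 62 GB, the quantized weights leave headroom for the KV cache and activations on an 80 GB H100 or a 192 GB MI300X.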

Features

Serverless API

GPT-OSS is available via sciforium's serverless API, where you pay per token. There are several ways to call the sciforium API, including sciforium's Python client, the REST API, or OpenAI's Python client.
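A minimal sketch of calling the serverless API over plain HTTP, assuming an OpenAI-compatible chat-completions endpoint; the base URL, environment-variable name, and the `reasoning_effort` parameter are illustrative, so check the sciforium docs for the actual values:

```python
import json
import os
import urllib.request

# Hypothetical endpoint; see the sciforium docs for the real base URL.
API_URL = "https://api.sciforium.example/v1/chat/completions"

def build_request(prompt: str, reasoning_effort: str = "medium") -> dict:
    """Build an OpenAI-style chat-completions payload for gpt-oss-120b.

    `reasoning_effort` reflects the model's configurable thinking depth;
    the exact parameter name may differ by provider.
    """
    return {
        "model": "gpt-oss-120b",
        "messages": [{"role": "user", "content": prompt}],
        "reasoning_effort": reasoning_effort,
    }

def send(payload: dict) -> dict:
    """POST the payload with a bearer token read from the environment."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['SCIFORIUM_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_request("Explain MXFP4 quantization in one sentence.")
```

Because the request shape is OpenAI-compatible, you can swap the raw HTTP call for OpenAI's Python client by pointing its `base_url` at the sciforium endpoint.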

Docs

On-demand Deployments

On-demand deployments let you run GPT-OSS on dedicated GPUs with sciforium's high-performance serving stack, offering high reliability and no rate limits.

Docs