Context Window Calculator

Calculate and visualize context window usage for different LLM models.

The context window is the maximum number of tokens (input plus output) a model can process in a single request. Larger context windows allow for longer conversations and longer documents.
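The arithmetic the calculator performs can be sketched in Python. The function name and structure are illustrative, not any real API; the sample values match the GPT-4o figures used throughout this page:

```python
def context_usage(context_window, input_tokens, output_tokens):
    """Return (used, remaining, percent_used) for one request."""
    used = input_tokens + output_tokens
    remaining = context_window - used
    percent_used = 100 * used / context_window
    return used, remaining, percent_used

# 128,000-token window, 1,000 input tokens, 500 output tokens
used, remaining, pct = context_usage(128_000, 1_000, 500)
print(f"Used {used:,} / 128,000 ({pct:.1f}%), {remaining:,} tokens remaining")
# Used 1,500 / 128,000 (1.2%), 126,500 tokens remaining
```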

Token Usage

Total Context Window: 128,000 tokens
Input Tokens: 1,000 (0.8% of total)
Output Tokens: 500 (max: 16,384)
Remaining: 126,500 tokens available

Context Window Usage: 1.2%
Total Used (input + output): 1,500 / 128,000

Model Details: GPT-4o

Provider: OpenAI
Context Window: 128,000
Max Output: 16,384
Available for Input: 111,616
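"Available for Input" is the context window minus the model's maximum output, i.e. the input budget left when the full output allowance is reserved. A minimal sketch, assuming a hypothetical model table (the GPT-4o figures match the values above):

```python
# Illustrative model table, not an official registry.
MODELS = {
    "GPT-4o": {"provider": "OpenAI", "context_window": 128_000, "max_output": 16_384},
}

def available_for_input(model_name):
    """Tokens left for input when the full output budget is reserved."""
    model = MODELS[model_name]
    return model["context_window"] - model["max_output"]

print(available_for_input("GPT-4o"))  # 111616
```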