Groq
Ultra-fast LLM inference API using custom LPU hardware
OpenAPI
$ protocol support
[✗] MCP Server
[✓] OpenAPI Spec
[✗] Accepts MPP
[✗] Accepts x402
[✗] Supports ACP
[✗] Supports UCP
[✗] Skill File
[✗] Plugin Manifest
[✗] llms.txt
[✗] Structured Feed
[✗] Agent Protocol
$ agent access
auth: API Key
onboarding: Partial — dashboard signup, then API key
tos: Neutral — no bot restrictions
webhooks: No
free tier: Yes
billing: pay-per-token
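A minimal sketch of the API-key flow above: Groq publicly documents an OpenAI-compatible REST endpoint, but the exact URL and model name below are assumptions, not taken from this card. The key is read from a `GROQ_API_KEY` environment variable after the dashboard signup step.

```python
import json
import os
import urllib.request

# Assumed endpoint (OpenAI-compatible chat completions); verify against Groq's docs.
GROQ_URL = "https://api.groq.com/openai/v1/chat/completions"


def build_request(prompt: str, model: str = "llama-3.1-8b-instant") -> urllib.request.Request:
    """Construct (but do not send) an authenticated chat-completion request.

    The model name is illustrative. Auth is a Bearer token from the
    GROQ_API_KEY environment variable, per the API-key auth noted above.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    headers = {
        "Authorization": f"Bearer {os.environ.get('GROQ_API_KEY', '')}",
        "Content-Type": "application/json",
    }
    return urllib.request.Request(
        GROQ_URL,
        data=json.dumps(payload).encode(),
        headers=headers,
        method="POST",
    )


req = build_request("Hello")
print(req.full_url)
```

Sending the request (`urllib.request.urlopen(req)`) bills per token, matching the pay-per-token line above.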
#llm #text #fast-inference
https://groq.com