Qwen 3 30B A3B

Qwen 3 30B A3B is a 30-billion-parameter large language model developed by Alibaba Cloud. It uses a sparse Mixture-of-Experts (MoE) architecture that activates 3 billion parameters per token for high efficiency, delivering the reasoning capabilities of a mid-sized model at the speed and cost profile of a much smaller system, and it supports a context window of up to 262,144 tokens.

model · 3/27/2026
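To make the sparse-activation idea above concrete, the following is a minimal sketch of top-k expert routing, the mechanism an MoE layer uses to run only a few experts per token. The expert count, dimensions, and top-k value are illustrative placeholders, not Qwen's actual configuration.

```python
import numpy as np

def moe_layer(x, expert_weights, gate_weights, top_k=2):
    """Toy Mixture-of-Experts layer: only top_k experts run per token.

    x:              (d_model,) activation for a single token
    expert_weights: (n_experts, d_model, d_model) one matrix per expert
    gate_weights:   (n_experts, d_model) router that scores each expert
    """
    # The router scores every expert, but only the top_k are executed.
    scores = gate_weights @ x                                 # (n_experts,)
    chosen = np.argsort(scores)[-top_k:]                      # selected expert indices
    probs = np.exp(scores[chosen]) / np.exp(scores[chosen]).sum()  # softmax over chosen

    # Weighted sum of the selected experts' outputs; the remaining experts
    # stay idle, which is how a large total parameter count can cost only a
    # small fraction of those parameters per token.
    out = np.zeros_like(x)
    for p, e in zip(probs, chosen):
        out += p * (expert_weights[e] @ x)
    return out

# Illustrative sizes only: 8 experts, 2 active per token.
rng = np.random.default_rng(0)
d, n_experts = 64, 8
x = rng.normal(size=d)
experts = rng.normal(size=(n_experts, d, d)) * 0.02
gate = rng.normal(size=(n_experts, d)) * 0.02
print(moe_layer(x, experts, gate, top_k=2).shape)  # (64,)
```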

R1-0528 Turbo

R1-0528 Turbo is a high-efficiency large language model developed by DeepSeek AI, optimized for throughput and complex reasoning using a Mixture-of-Experts (MoE) architecture. It is designed to provide advanced logic and coding capabilities with significantly reduced computational overhead and API costs.

model · 3/27/2026

Sonar

Sonar is a family of large language models developed by Perplexity AI, specifically optimized for Retrieval-Augmented Generation (RAG) and real-time search synthesis. Built on Meta's Llama architecture, the models prioritize factual groundedness and source attribution to power Perplexity's 'answer engine' platform.

model · 3/27/2026
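As a usage illustration, the sketch below queries Sonar through Perplexity's OpenAI-compatible chat completions endpoint. The base URL and model names follow Perplexity's public documentation, while the API key and prompt are placeholders; verify the details against the current docs before relying on them.

```python
from openai import OpenAI

# Perplexity exposes an OpenAI-compatible API, so the standard OpenAI client
# can be pointed at it by overriding the base URL.
client = OpenAI(
    api_key="YOUR_PERPLEXITY_API_KEY",     # placeholder
    base_url="https://api.perplexity.ai",
)

response = client.chat.completions.create(
    model="sonar",  # "sonar-pro" selects the larger model described below
    messages=[
        {"role": "system", "content": "Answer concisely and cite your sources."},
        {"role": "user", "content": "What changed in the latest Llama release?"},
    ],
)

print(response.choices[0].message.content)
```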

Sonar Pro

Sonar Pro is a search-centric large language model developed by Perplexity AI, built on the Llama 3.3 70B architecture to provide high-speed, fact-grounded responses with real-time internet connectivity.

model · 3/27/2026

Z.ai

Z.ai (Z Research Inc.) is a San Francisco-based artificial intelligence research organization founded in late 2024 by former OpenAI leaders, specializing in the post-training, alignment, and refinement of large-scale generative models.

organization · 3/27/2026