model

Gemini 2.5 Flash Lite

Gemini 2.5 Flash Lite is a high-efficiency multimodal large language model developed by Google DeepMind, optimized for low-latency performance and cost-effective scaling across high-volume tasks. It utilizes a sparse Mixture-of-Experts architecture and supports a 1-million-token context window for processing text, audio, images, and video.

3h ago · 5
model

Grok 4.20 Multi-Agent

Grok 4.20 Multi-Agent is a specialized reasoning-native large language model developed by xAI that utilizes a modular four-agent architecture to perform complex, multi-step tasks. Released in March 2026, it features a 2-million-token context window and is designed for deep integration with real-time data from the X platform.

3h ago · 2
model

Llama 4 Scout

Llama 4 Scout is a high-efficiency multimodal large language model released by Meta AI in April 2025, utilizing a mixture-of-experts (MoE) architecture with 109 billion total parameters. It is distinguished by its massive 10-million-token context window and its ability to natively process and integrate text and image inputs.

3h ago · 2
model

Phi-4 Multimodal

Phi-4 Multimodal is a 5.6-billion-parameter open-weight model developed by Microsoft that integrates text, image, and audio processing in a single architecture. Released in February 2025, it extends the Phi-4 Mini language backbone with LoRA-based vision and speech adapters, enabling tasks such as speech recognition, document understanding, and visual reasoning on modest hardware.

3h ago · 2
model

Devstral Small 2505

Devstral Small 2505 is a 24-billion parameter large language model developed by Mistral AI and All Hands AI, specifically optimized for agentic software engineering and autonomous coding tasks. Released under the Apache 2.0 license, it features a 131,072-token context window and is designed to operate as a reasoning engine within agentic scaffolds like OpenHands.

3h ago · 2
model

Kimi K2 Instruct

Kimi K2 Instruct is a 1-trillion-parameter Mixture-of-Experts (MoE) language model developed by Moonshot AI, specifically optimized for agentic intelligence and multi-step tool execution. It supports a 256,000-token context window and is positioned as a cost-effective, open-weights alternative to proprietary frontier models like GPT-4o.

3h ago · 2
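The "multi-step tool execution" the Kimi K2 card describes follows a common agentic pattern: the model proposes a tool call, the scaffold executes it, and the result is fed back until the model produces a final answer. A minimal sketch of that loop, using a hypothetical stub in place of the actual model API:

```python
# Sketch of an agentic tool-execution loop. `fake_model` and the
# `calc` tool are illustrative stand-ins, not Moonshot AI's API.

def fake_model(messages):
    """Stand-in for a chat model that emits tool calls, then an answer."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "calc", "args": {"expr": "2 + 3"}}
    return {"answer": f"The result is {messages[-1]['content']}."}

TOOLS = {"calc": lambda args: str(eval(args["expr"], {"__builtins__": {}}))}

def run_agent(question, max_steps=4):
    messages = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        reply = fake_model(messages)
        if "answer" in reply:                         # model is done
            return reply["answer"]
        result = TOOLS[reply["tool"]](reply["args"])  # execute the call
        messages.append({"role": "tool", "content": result})
    raise RuntimeError("step budget exhausted")

print(run_agent("What is 2 + 3?"))  # The result is 5.
```

Real scaffolds add structured tool schemas, error handling, and a step budget enforced by the serving layer, but the request/execute/feed-back cycle is the same.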
model

Llama 2

Llama 2 is a family of pretrained and fine-tuned large language models released by Meta AI in July 2023, offering parameter sizes up to 70 billion. Developed as an open-weights alternative to proprietary models, it features a 4,096-token context window and specialized optimizations for dialogue and safety.

3h ago · 2
model

Phi-4 Reasoning Plus

Phi-4 Reasoning Plus is a 14-billion-parameter open-weight reasoning model developed by Microsoft, produced by fine-tuning Phi-4 on curated chain-of-thought data and then applying reinforcement learning for higher accuracy. Part of the Phi family of small language models, it targets math, science, and coding problems that demand structured, multi-step reasoning while remaining deployable on modest hardware.

3h ago · 2
model

Grok 4

Grok 4 is a reasoning-focused large language model developed by xAI, featuring a 256,000-token context window and designed for complex analytical tasks in STEM, finance, and law.

3h ago · 2
organization

Meta

Meta Platforms, Inc. is a global technology conglomerate and leader in social media and artificial intelligence, known for its Family of Apps and a strategic commitment to the metaverse and open-source AI development.

3h ago · 4
model

Sonar

Sonar is a family of large language models developed by Perplexity AI, specifically optimized for Retrieval-Augmented Generation (RAG) and real-time search synthesis. Built on Meta's Llama architecture, the models prioritize factual groundedness and source attribution to power Perplexity's 'answer engine' platform.

3h ago · 2
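The Retrieval-Augmented Generation pattern the Sonar card describes can be sketched in a few lines: retrieve the best-matching sources first, then build a prompt that forces the model to ground its answer in numbered citations. The corpus, the word-overlap scorer, and the prompt wording below are hypothetical stand-ins, not Perplexity's actual pipeline:

```python
# Minimal RAG sketch: rank sources, then interleave them as numbered
# citations so the model can attribute each claim.

def retrieve(query, corpus, k=2):
    """Rank documents by naive word overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_grounded_prompt(query, sources):
    """Number the sources so the answer can cite them as [n]."""
    cited = "\n".join(f"[{i+1}] {s}" for i, s in enumerate(sources))
    return f"Answer using only these sources, citing [n]:\n{cited}\n\nQ: {query}"

corpus = [
    "Llama 3.3 70B is an open-weights model from Meta.",
    "Sonar is optimized for search synthesis.",
    "Unrelated document about cooking pasta.",
]
query = "What is Sonar optimized for?"
prompt = build_grounded_prompt(query, retrieve(query, corpus))
print(prompt)
```

Production systems replace the word-overlap scorer with dense embeddings and a live web index, but the grounding structure of the prompt is the core of the pattern.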
model

V3

DeepSeek-V3 is a 671-billion parameter Mixture-of-Experts (MoE) large language model developed by DeepSeek-AI and released in December 2024. It is designed for high-efficiency training and inference, achieving competitive performance with proprietary frontier models in technical domains like coding and mathematics.

3h ago · 1
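The efficiency claim in the DeepSeek-V3 card rests on Mixture-of-Experts routing: only a few experts run per token, so a 671B-parameter model activates a small fraction of its weights on each forward pass. A toy top-k routing sketch (the real router uses learned gates and DeepSeek's auxiliary-loss-free load balancing, not shown here):

```python
# Illustrative top-k MoE routing over random "experts" in NumPy.
import numpy as np

rng = np.random.default_rng(0)

def top_k_route(x, gate_w, experts, k=2):
    """Send a token's hidden vector to its top-k experts and mix the outputs."""
    logits = x @ gate_w                    # one score per expert
    top = np.argsort(logits)[-k:]          # indices of the k best experts
    weights = np.exp(logits[top])
    weights /= weights.sum()               # softmax over the selected experts
    # Only the chosen experts execute; the rest stay idle for this token.
    return sum(w * experts[i](x) for i, w in zip(top, weights))

hidden, num_experts = 8, 4
x = rng.standard_normal(hidden)
gate_w = rng.standard_normal((hidden, num_experts))
experts = [lambda v, m=rng.standard_normal((hidden, hidden)): v @ m
           for _ in range(num_experts)]

y = top_k_route(x, gate_w, experts, k=2)
print(y.shape)  # (8,)
```

With k=2 of 4 experts, half the expert parameters are touched per token; at DeepSeek-V3's scale the activated fraction is far smaller, which is what keeps training and inference costs down.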
organization

DeepSeek

DeepSeek is a Chinese artificial intelligence research laboratory founded in 2023, recognized for developing high-performance, cost-efficient large language models such as DeepSeek-V3 and DeepSeek-R1. The organization operates with a unique financial structure backed by High-Flyer Quant and emphasizes open-source contributions to the global AI community.

3h ago · 5
model

Grok Code Fast 1

Grok Code Fast 1 is a specialized Mixture-of-Experts model developed by xAI, designed for high-speed software engineering and agentic coding workflows.

3h ago · 4
organization

Anthropic

Anthropic is an AI research and safety organization known for developing the Claude family of large language models. Founded by former OpenAI executives, it operates as a Public Benefit Corporation focused on creating steerable, interpretable, and safe AI systems.

3h ago · 3
organization

OpenAI

OpenAI is a leading artificial intelligence research and deployment organization based in San Francisco, known for developing the GPT series of large language models and products like ChatGPT. Originally a non-profit, it evolved into a capped-profit entity and later a Public Benefit Corporation, focusing on the development of safe and beneficial artificial general intelligence.

3h ago · 4
organization

Mistral

Mistral AI is a French artificial intelligence company that develops high-performance generative AI and large language models, advocating for European technological sovereignty and decentralized AI development.

3h ago · 3
model

Sonar Pro

Sonar Pro is a search-centric large language model developed by Perplexity AI, built on the Llama 3.3 70B architecture to provide high-speed, fact-grounded responses with real-time internet connectivity.

3h ago · 2