
Sonar Reasoning Pro

Sonar Reasoning Pro is an artificial intelligence model developed by Perplexity AI that resolves complex queries by combining large-scale language processing with real-time web search 1, 2. The model utilizes the DeepSeek-R1 architecture to facilitate logical processing and multi-step problem solving 3, 7, 8. It is positioned as a high-performance tier within the Sonar model family, designed to provide more thorough analytical capabilities than standard Sonar variants 10, 12.

A primary characteristic of Sonar Reasoning Pro is its ability to ground its reasoning process in live web data, generating responses that include citations to verify information 2, 7. According to Perplexity, the model utilizes "Pro Search" capabilities to perform automated tool orchestration, allowing it to execute sequential web searches and fetch specific content to address multifaceted prompts 2, 4. During this process, the model provides a "thought" display, a feature that allows users to observe the intermediate steps and logical transitions the model takes as it synthesizes an answer 7, 8.

The model is configured to manage the computational intensity associated with deep inference through adjustable performance parameters, such as a reasoning-effort setting 24, 25. Perplexity asserts that this system enables the model to perform competitively in benchmark testing against other reasoning models while maintaining a lower cost structure 9, 11. In early 2025, the pricing model for the API was adjusted to simplify billing by removing charges for citation tokens, which Perplexity stated would make the model more accessible for developers requiring high-volume research outputs 1.

The model's deployment has been characterized by increasing accessibility for the developer community. While certain advanced API features were initially subject to different tier restrictions, Perplexity updated its offerings in early 2025 to grant broader access to the suite of Sonar capabilities 1, 16. As of early 2025, Sonar Reasoning Pro serves as a flagship option for users requiring sophisticated reasoning performance 3, 10. The model also includes support for file uploads, allowing it to incorporate user-provided data into its reasoned responses 13.

Background

The development of Sonar Reasoning Pro occurred during a period of transition within the artificial intelligence industry, as models shifted from retrieval-augmented generation (RAG) toward "reasoning" capabilities characterized by multi-step logical processing 12. Early AI-powered search engines, including Perplexity's initial products, focused on providing grounded responses using one-shot retrieval 8. However, by 2025, competitive pressure from OpenAI's SearchGPT and the o1 reasoning series, as well as Google's Gemini AI Overviews, accelerated the demand for models that could perform long-form analysis and synthesize information from multiple sources into detailed reports 10, 12.

Prior to the Reasoning Pro tier, Perplexity utilized the proprietary Sonar model family, which was originally built upon Meta’s Llama 3.3 70B architecture 8, 10. These models were optimized for factuality and readability, utilizing Cerebras inference infrastructure to achieve generation speeds of approximately 1,200 tokens per second 8. According to Perplexity, while these Llama-based models outperformed comparable models like GPT-4o mini in user satisfaction, they were designed for general search rather than complex deductive tasks 8.

The integration of reasoning capabilities into the Sonar family was largely influenced by the release of DeepSeek R1 in early 2025 11. DeepSeek R1 utilized Reinforcement Learning with Verifiable Rewards (RLVR) to produce "thinking traces," allowing the model to explain its logic and improve accuracy on mathematical and coding benchmarks 11. Perplexity integrated a custom version of this architecture, referred to as DeepSeek-R1 1776, into its ecosystem to power the Reasoning Pro model 9. To maintain data security standards, Perplexity hosted these DeepSeek-based models on United States-based infrastructure 13, 14.

Sonar Reasoning Pro was formally established as a high-performance tier in March 2025 7. During this period, Perplexity introduced "Pro Search" features for the model, enabling automated tool orchestration and multi-step reasoning 7. On December 15, 2025, the company deprecated the standard sonar-reasoning model, directing users toward sonar-reasoning-pro for tasks requiring advanced analytical depth 7. This evolution marked Perplexity’s transition from a modular model provider to a platform offering specialized reasoning engines for high-stakes research in fields such as finance and law 12.

Architecture

Sonar Reasoning Pro is built upon a hybrid architecture that integrates the DeepSeek R1 base model with Perplexity AI’s proprietary real-time search and indexing infrastructure 7, 10. Unlike standard retrieval-augmented generation (RAG) models, which produce an immediate completion from retrieved context, it is categorized as a reasoning-focused system engineered for multi-step analysis and structured problem-solving before generating a final response 10.

A central component of the model’s technical framework is its implementation of Chain-of-Thought (CoT) processing. According to developer documentation, this allows the model to allocate additional compute during the inference stage, generating internal "reasoning tokens" 10. This mechanism facilitates the breakdown of complex queries into sub-problems, the evaluation of potentially contradictory information retrieved from the web, and the verification of logical consistency throughout the processing cycle 10. Performance evaluations of similar reasoning models indicate that this "thinking" phase occurs after the initial API request and before the generation of the first answer token, adding a measurable latency period dedicated to internal logic 3.

The model supports a context window of 128,000 tokens 10. This capacity allows the system to ingest and maintain substantial volumes of retrieved data and conversation history while performing complex analytical tasks. While the standard Sonar Pro model features a larger context window of 200,000 tokens for deep retrieval, Sonar Reasoning Pro is optimized specifically for the "strongest" reasoning capabilities within the Sonar family 3, 10. This 128,000-token limit is applied across both input and output, though the model's primary focus is on high-quality logical synthesis rather than extreme document length 10.
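Because the 128,000-token limit is shared between input and output, a client can pre-check a request before submission. The following is a minimal sketch of that accounting; the constant and helper are illustrative assumptions, not part of any official Perplexity SDK.

```python
# Illustrative pre-flight check against the reported 128k shared
# input/output token budget. The constant and helper are assumptions
# for illustration, not an official SDK utility.

CONTEXT_WINDOW = 128_000  # reported combined input + output limit

def fits_in_context(prompt_tokens: int, retrieved_tokens: int,
                    max_output_tokens: int) -> bool:
    """Return True if the prompt, retrieved context, and reserved
    output budget together stay within the shared window."""
    total = prompt_tokens + retrieved_tokens + max_output_tokens
    return total <= CONTEXT_WINDOW

# A 100k-token document plus 20k of search context leaves room
# for a 4k-token answer, but not for a 12k-token one.
ok = fits_in_context(100_000, 20_000, 4_000)
```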

Architecturally, the model is integrated with Perplexity’s proprietary citation and indexing system. Rather than relying solely on pre-trained knowledge or broad-spectrum RAG, Sonar Reasoning Pro utilizes a "Standard" depth of real-time retrieval from the web to ground its logical outputs 10. This integration ensures that the model’s multi-step logic is applied to verified, current information retrieved through the Perplexity search index. The system is designed to provide grounded summarization and stepwise logic, ensuring that each step in the reasoning chain is supported by cited evidence 10.

Perplexity has not publicly disclosed the specific parameter count or the exact composition of the training datasets for the Sonar family, characterizing the models as proprietary 3. However, the use of the DeepSeek R1 backbone suggests a reliance on reinforcement learning techniques to encourage reasoning behaviors during the training phase 7. The architecture is tuned for a moderate output speed to accommodate the increased computational demands of inference-time reasoning, contrasting with search-centric models that are optimized for high-speed, direct Q&A 10. Within the Perplexity API ecosystem, this model is positioned for tasks requiring multi-step logic and analytical depth over simple information retrieval 10.

Capabilities & Limitations

Sonar Reasoning Pro is designed for multi-step queries that require research and logical processing 7, 12. According to Perplexity AI, the model utilizes Chain of Thought (CoT) processing to decompose complex prompts into smaller, manageable sub-tasks 7, 12.

Research and Reasoning Capabilities

The model performs automated tool orchestration, referred to by the developer as "Pro Search" 7. In this mode, the model conducts sequential web searches and fetches specific URL content to synthesize a response 7. This process is accompanied by "thought streaming," a feature that allows users to view the model's intermediate logical steps during problem-solving 7.

According to Perplexity, Sonar Reasoning Pro is characterized by a higher citation density compared to standard Sonar models 12. To manage the trade-off between thoroughness and speed, the model includes a "reasoning effort" parameter 7. Users can select from "low," "medium," or "high" settings; higher settings increase computational effort for greater thoroughness, while lower settings prioritize faster response times 7.
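A request using the effort setting might be assembled as sketched below. The payload shape and the exact parameter name ("reasoning_effort") are assumptions modeled on OpenAI-compatible chat APIs; consult Perplexity's API reference for the actual schema.

```python
# Sketch of a chat-completion request body using the "low"/"medium"/
# "high" effort setting. The field name "reasoning_effort" is an
# assumed parameter name, not confirmed API schema.

VALID_EFFORTS = ("low", "medium", "high")

def build_request(question: str, effort: str = "medium") -> dict:
    """Assemble a chat-completion payload, validating the effort level."""
    if effort not in VALID_EFFORTS:
        raise ValueError(f"unsupported reasoning effort: {effort!r}")
    return {
        "model": "sonar-reasoning-pro",
        "messages": [{"role": "user", "content": question}],
        "reasoning_effort": effort,  # assumed parameter name
    }

payload = build_request("Summarize the key risks in this 10-K.", effort="high")
```

Higher settings trade latency for thoroughness, so a wrapper like this makes the trade-off an explicit, validated choice at the call site.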

Supported Modalities and File Handling

As part of the Sonar platform, Sonar Reasoning Pro supports multimodal inputs, including the ability to analyze images in PNG, JPEG, WEBP, and GIF formats 7, 13. The model also features document analysis capabilities for formats such as PDF, DOCX, XLSX, CSV, and TXT 7, 13. These tools allow the model to extract data, summarize documents, and cross-reference uploaded files with real-time web data 7.

For multimedia content, the model supports audio (MP3, WAV) and video (MP4) uploads 13. Current capabilities are primarily focused on transcript extraction and key-frame analysis rather than full scene-level search 13. File size limits for these uploads vary by subscription tier, ranging from 40 MB for free users to 1 GB for enterprise accounts 13.
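A client-side upload gate based on these tier limits could look like the sketch below. Only the free (40 MB) and enterprise (1 GB) bounds come from the reported figures; any intermediate tiers would need their own entries.

```python
# Illustrative upload-size gate. Only the free (40 MB) and enterprise
# (1 GB) ceilings are reported figures; the table is otherwise an
# assumption for illustration.

UPLOAD_LIMIT_MB = {
    "free": 40,
    "enterprise": 1024,  # 1 GB
}

def upload_allowed(tier: str, file_size_mb: float) -> bool:
    """Return True if a file fits the tier's reported upload ceiling."""
    limit = UPLOAD_LIMIT_MB.get(tier)
    if limit is None:
        raise KeyError(f"unknown tier: {tier!r}")
    return file_size_mb <= limit

allowed = upload_allowed("free", 39.5)
```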

Known Limitations and Failure Modes

A constraint of Sonar Reasoning Pro is the latency associated with its reasoning overhead 7. Because the model must process logical steps and perform multiple web retrievals, it is generally slower than non-reasoning large language models 7. The model's context window of 128,000 tokens also limits the volume of text it can process in a single session 10, 12, 26.

While the model aims to mitigate hallucinations through real-time search, general limitations of large language models persist. Independent research into AI-assisted data extraction indicates that while such models are accurate for concrete facts, they remain prone to "interpretive differences" when addressing queries requiring subjective judgment or contextual nuance 11. Furthermore, studies on LLM-based systematic reviews have found that citation accuracy can vary, particularly in niche academic fields where hallucination rates remain a concern for rigorous applications 14, 15.

Performance

The performance of Sonar Reasoning Pro is defined by its architectural focus on logical depth, evaluated through a combination of standardized benchmarks and operational efficiency metrics. On the Artificial Analysis Intelligence Index v4.0, which incorporates ten distinct evaluations such as GPQA Diamond for scientific reasoning, SciCode for programming tasks, and CritPt for physics-based logic, the model family is estimated to score approximately 15 3, 8. This performance metric is positioned above the industry average of 11 for models within a similar proprietary class 3. The model’s benchmark performance is also characterized by a high degree of conciseness; during evaluation, it utilized 1.2 million output tokens, which is significantly lower than the average of 2.8 million tokens used by competing models to achieve similar results 3.

In terms of computational speed and latency, Sonar Reasoning Pro achieves a throughput of approximately 80 to 97 tokens per second during its final output generation phase. A critical component of the model's performance profile is the temporal distinction between its "thinking" time and "completion" time. Unlike standard retrieval-augmented generation models that provide immediate text completion, Sonar Reasoning Pro engages in an internal "thinking" phase to resolve complex logical steps before providing an answer 3, 8. This results in a higher time to first answer token (TTFAT), as the system must account for both input processing and the generation of internal reasoning tokens 3. Independent evaluations categorize this latency as a functional requirement for the model’s multi-step reasoning capabilities 8.
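A back-of-envelope latency model follows from these figures: total response time is the thinking-dominated time to first answer token plus the streaming phase at the reported 80 to 97 tokens per second. The numbers in the example are illustrative, not measured values.

```python
# Back-of-envelope latency model: total time = TTFAT (thinking phase)
# + output_tokens / throughput. The 90 tok/s default sits inside the
# reported 80-97 tok/s range; all figures here are illustrative.

def estimated_total_latency(ttfat_s: float, output_tokens: int,
                            tokens_per_s: float = 90.0) -> float:
    """Estimate end-to-end response time in seconds."""
    return ttfat_s + output_tokens / tokens_per_s

# A 10 s thinking phase followed by a 900-token answer at 90 tok/s:
total_s = estimated_total_latency(10.0, 900)  # 20.0 seconds
```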

The economic efficiency of the model is structured around a pricing system set at $2.00 per million input tokens and $8.00 per million output tokens. This cost-to-performance ratio is designed to support high-intensity research tasks that require more compute-heavy inference than standard models. Additionally, the model features a context window of roughly 128,000 tokens (listed as 127,000 in some third-party comparisons), enabling it to process approximately 191 A4 pages of information in a single query session 8. While this context window is smaller than the 200,000 tokens supported by the non-reasoning Sonar Pro variant, it is optimized to maintain logical coherence across complex datasets during reasoning tasks 3, 8.
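A worked example of the quoted per-token pricing ($2/M input, $8/M output) is sketched below. Any per-request web-search fees are excluded, and current pricing should be checked before relying on these constants.

```python
# Worked example of the quoted per-token pricing. Per-request web
# search fees are excluded; verify current rates before use.

INPUT_USD_PER_M = 2.00   # USD per million input tokens (quoted)
OUTPUT_USD_PER_M = 8.00  # USD per million output tokens (quoted)

def estimate_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Token cost in USD for one request, excluding search fees."""
    return (input_tokens * INPUT_USD_PER_M
            + output_tokens * OUTPUT_USD_PER_M) / 1_000_000

# A query with 20k tokens of context and an 8k-token reasoned answer:
cost = estimate_cost_usd(20_000, 8_000)  # 0.104 USD
```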

Safety & Ethics

The safety and ethical framework of Sonar Reasoning Pro is defined by a combination of the base model's intrinsic alignment and Perplexity AI's external system-level guardrails. Because the model is built upon the DeepSeek R1 architecture, it inherits the safety characteristics of that foundational engine, which includes instruction-tuned behavior designed to avoid explicitly harmful content 20. However, independent security evaluations of the DeepSeek model family have characterized its internal guardrails as relatively weak compared to other frontier models 20. A study conducted by Cisco and the University of Pennsylvania found that DeepSeek R1 failed to block any of the 50 jailbreak prompts in the HarmBench evaluation, resulting in a 100% attack success rate 20. Consequently, researchers suggest that the model's alignment should be treated as a helpful default rather than a definitive compliance boundary 20.

To mitigate these risks, Perplexity AI employs system-level interventions and external classifiers. For instance, the company introduced a Media Classifier in December 2025, which automatically detects queries requiring visual content and intelligently selects relevant images or videos 7. For enterprise applications, the model is often integrated into third-party stacks—such as Amazon Bedrock or Portkey—which wrap the AI in additional guardrails to filter for personally identifiable information (PII), prompt injections, and policy-violating content 20. This layered approach treats safety as a three-tier process involving input classification, knowledge retrieval filtering, and output auditing 20.

Ethical and legal concerns regarding Sonar Reasoning Pro primarily center on content acquisition and source attribution. Perplexity AI has faced significant legal scrutiny over its industrial-scale web scraping practices 17. In a federal lawsuit filed in New York, Encyclopedia Britannica and Merriam-Webster alleged that Perplexity's engines systematically harvested their databases without authorization 17. The filing included evidence that the model provided responses virtually identical to original dictionary definitions and encyclopedia entries, leading to accusations that the system functions as a "plagiarism machine" 17. A frequently cited example in the litigation involved the model reproducing Merriam-Webster's specific definition and usage examples for the word "plagiarize" without clear attribution 17.

Data privacy and operational security represent another area of ethical focus. While the base DeepSeek R1 model is released under an MIT license, permitting commercial use and fine-tuning, security analysts have noted potential intellectual property and legal risks associated with its development in China 18. For professional and API users, Perplexity provides specific privacy and security resources to manage how data is handled 7. Developers utilizing the model through platforms like Puter.js can implement a "user-pays" model, where individual users manage their own AI usage and credentials directly, potentially reducing the data liability for application developers 5.

Applications

Sonar Reasoning Pro is applied in scenarios requiring high-fidelity information retrieval and multi-step logical deduction. According to Perplexity AI, the model is specifically designated for professional research, enterprise-level knowledge management, and the orchestration of autonomous AI agents 7.

Academic and Market Research

In academic and market research contexts, the model is utilized to generate responses based on verifiable data. Perplexity states that the model's "academic mode" prioritizes peer-reviewed publications and scholarly articles to maintain technical accuracy 7. Within the financial sector, the system is used to conduct regulatory due diligence through an SEC filings filter that focuses searches on 10-K, 10-Q, and 8-K reports 7. To support verification, the API provides a search_results field containing the titles, URLs, and publication dates of the sources used to generate an answer, which simplifies the process of checking claims against original documentation 7.
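Consuming the search_results field for verification might look like the sketch below. The response dict is a fabricated example of the documented shape (titles, URLs, publication dates); the real schema may differ.

```python
# Sketch of reading the search_results field for source verification.
# The sample response is fabricated to match the documented shape
# (titles, URLs, dates); the actual API schema may differ.

sample_response = {
    "choices": [{"message": {"content": "…summary of the filing…"}}],
    "search_results": [
        {"title": "Example Corp Form 10-K",
         "url": "https://example.com/10k",
         "date": "2025-02-01"},
    ],
}

def list_sources(response: dict) -> list:
    """Extract (title, url, date) tuples for manual verification."""
    return [(s.get("title"), s.get("url"), s.get("date"))
            for s in response.get("search_results", [])]

sources = list_sources(sample_response)
```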

Autonomous Agents and Technical Integration

The model is integrated into autonomous agent frameworks, such as Agent Zero, to perform tasks that require independent tool orchestration. The general availability of the Agent API in February 2026 provided developers with patterns for integrating the model's multi-step reasoning into external systems 7. For software development, the model supports the Model Context Protocol (MCP), allowing it to be used within integrated development environments (IDEs) like Cursor and VS Code 7. In these environments, the model assists in complex coding and debugging workflows, utilizing its 128k context window to analyze large file sets or extensive documentation. Perplexity states that the model automatically performs multiple web searches and fetches URL content to answer these complex technical queries through its "Pro Search" capability 7.

Enterprise Knowledge Management

For enterprise applications, the model facilitates document analysis and internal data retrieval. Perplexity states the model supports file attachments in formats such as PDF, DOCX, and TXT, which allows for the extraction of specific data points and the summarization of lengthy corporate documents 7. Furthermore, the Embeddings API enables the integration of the model into semantic search workflows for local knowledge bases, while the Search-only API allows organizations to access raw search results without LLM processing for custom search experiences 7.
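The semantic-search step that an Embeddings API enables can be sketched as ranking local documents by cosine similarity to a query embedding. The vectors below are toy stand-ins, not real embedding output.

```python
# Minimal semantic-search sketch: rank local documents by cosine
# similarity to a query vector. The 3-dimensional vectors are toy
# stand-ins for real embedding output.

import math

def cosine(a: list, b: list) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

knowledge_base = {
    "hr_policy": [0.9, 0.1, 0.0],
    "q3_report": [0.1, 0.95, 0.2],
}
query_vec = [0.15, 0.9, 0.25]  # toy embedding of "quarterly revenue"

best_doc = max(knowledge_base,
               key=lambda k: cosine(query_vec, knowledge_base[k]))
```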

Specific Deployments

The model has been deployed in several specialized third-party platforms. StarPlex, an AI-powered startup intelligence tool, uses the model to map global venture capital and competitor landscapes on an interactive 3D interface 7. Additionally, the Electron-based "Perplexity Client" utilizes the model to provide developers with fine-grained control over search parameters and API debugging mode 7.

Reception & Impact

Professional reception of Sonar Reasoning Pro has generally characterized the system as a specialized alternative to traditional search engines, though reviews highlight a distinction between its search utility and general-purpose chatbot capabilities. Technical evaluations by PCMag identified the underlying Perplexity platform as a leading AI search engine, noting its strength in synthesizing web-based information while observing that its deep-research output was, at times, less comprehensive than that of competitors like ChatGPT [PCMag].

Impact on Digital Ecosystems

The introduction of reasoning-focused models like Sonar Reasoning Pro has contributed to a shift in the digital marketing and search engine optimization (SEO) landscape. Industry analysts describe an emerging "visibility crisis" for content creators and manufacturers who traditionally relied on organic search traffic [Grofuse]. Because the model generates direct summaries and technical explanations within the search interface—a phenomenon known as "zero-click" search—users frequently obtain required information without visiting the original source website [Grofuse]. This shift has led to reports of declining click-through rates for industrial firms, even when their content remains highly ranked in the model's citations [Grofuse]. Conversely, some practitioners argue that traditional SEO fundamentals remain essential, as AI retrieval systems still rely on existing search indexes to maintain topical authority and avoid hallucinations [SMAMarketing].

Sourcing and Citation Utility

The model's approach to transparency has received mixed reactions regarding its economic impact on publishers. In May 2025, Perplexity updated the model's API to include a search_results field, providing developers with direct access to page titles and URLs used in response generation 7. Perplexity states this enhancement was intended to improve source verification and allow for custom citation formats 7. However, critics in the manufacturing and technical sectors have noted that while the model credits sources, the act of summarizing structured technical data (such as specification tables and engineering guides) allows the AI to capture the value of the information without providing a reciprocal traffic benefit to the publisher [Grofuse].

Economic and Market Implications

Sonar Reasoning Pro's market positioning has been defined by a competitive pricing strategy within the large language model (LLM) API sector. In April 2025, Perplexity transitioned to a pricing model that eliminated charges for citation tokens, a move the company asserted would make the service more affordable than competing reasoning models 7. By March 2025, Perplexity claimed that Sonar and Sonar Pro outperformed leading competitors in internal benchmarks while maintaining lower operational costs 7. The company further incentivized adoption by removing feature gating for its advanced API capabilities, such as structured outputs and multimodal support, making them available to all user tiers regardless of spending volume 7. Professional consensus suggests the model serves as a leading indicator of performance for AI search-optimization (ASO), where visibility within reasoning models is becoming a primary metric for brand awareness [SMAMarketing].

Version History

The version history of Sonar Reasoning Pro is characterized by a transition from size-based model nomenclature to functional, capability-based labeling. Perplexity AI phased out previous identifiers such as "Huge" and "Large" in favor of the "Reasoning" and "Reasoning Pro" designations to more clearly distinguish between standard retrieval-augmented generation (RAG) and multi-step logical processing 3, 7.

Initial Releases and Iterations

The non-reasoning variant, Sonar Pro, was released on January 21, 2025 3. This was followed by the major release of the Sonar Reasoning Pro variant on March 7, 2025, which introduced specialized logic for complex query resolution. According to Perplexity, this release included significant internal updates to the underlying retrieval algorithm and an increased indexing frequency to improve the freshness and accuracy of grounded responses.

Model Consolidation

On December 15, 2025, Perplexity implemented a significant update to its model lineup by deprecating the base sonar-reasoning model and removing it from the API 7. The company directed users to migrate to sonar-reasoning-pro, which was designated as the primary model for tasks requiring enhanced multi-step reasoning capabilities 7. This consolidation was intended to streamline the API offerings and focus resources on the higher-performing "Pro" variant.

API Integration and Expansion

In February 2026, the model was integrated into the general availability release of the Perplexity Agent API and the Embeddings API 7. These updates allowed for the use of Sonar Reasoning Pro in autonomous agent workflows and semantic search tasks. Perplexity states that these integrations facilitate production-ready guidance for model behavior and provide OpenAI-compatible patterns for enterprise systems 7.

Sources

  1.
    Changelog - Perplexity. Perplexity. Retrieved April 1, 2026.

    As of December 15, 2025, the sonar-reasoning model has been deprecated and removed from the API. If you were using this model, we recommend migrating to sonar-reasoning-pro for enhanced multi-step reasoning capabilities with web search... Pro Search enhances your queries with automated tool usage, enabling multi-step reasoning through intelligent tool orchestration... Watch the model’s reasoning process as it works through your question... New: Reasoning Effort Parameter for Sonar Deep Research... We’re excited to announce significant improvements to our Sonar models that deliver superior performance at lower costs.

  2.
    Meet New Sonar - Perplexity. Perplexity. Retrieved April 1, 2026.

    Built on top of Llama 3.3 70B, Sonar has been further trained to enhance answer factuality and readability... Powered by Cerebras inference infrastructure, Sonar runs at a blazing fast speed of 1200 tokens per second.

  3.
    (August 10, 2025). All Perplexity models available in 2025: complete list with Sonar family, GPT-5, Claude, Gemini and more.. Data Studios. Retrieved April 1, 2026.

    An upgraded reasoning model powered partly by DeepSeek-R1 1776, a custom variant of DeepSeek’s open model integrated by Perplexity.

  4.
    Data Points: Perplexity unveils new Sonar model with Deep Research. DeepLearning.AI. Retrieved April 1, 2026.

    Perplexity’s new Deep Research tool conducts long-form analysis on complex topics, performing multiple searches and synthesizing information into detailed reports faster than Google and OpenAI’s competing tools.

  5.
    Raschka, Sebastian. (2025-12-31). The State Of LLMs 2025: Progress, Problems, and Predictions. Ahead of AI. Retrieved April 1, 2026.

    When DeepSeek released their R1 paper in January 2025, which showed that reasoning-like behavior can be developed with reinforcement learning, it was a really big deal.

  7.
    Perplexity AI Integrates DeepSeek-R1: A New Era in AI-Powered Search. Agile Loop. Retrieved April 1, 2026.

    Perplexity ensures that all AI processing occurs within the U.S.-based infrastructure, effectively eliminating risks associated with external data access.

  8.
    Tabke, Brett. Perplexity AI Expands with DeepSeek R1: A New Chapter in Open-Source AI. Search Engine World. Retrieved April 1, 2026.

    By hosting DeepSeek R1 on US-based servers, Perplexity AI ensures compliance with Western data security standards while offering an uncensored AI experience.

  9.
    Sonar Pro - Intelligence, Performance & Price Analysis. Artificial Analysis. Retrieved April 1, 2026.

    Sonar Pro is a proprietary model and Perplexity has not disclosed the model size or parameter count. ... Context window 200k. ... Thinking (reasoning models, when applicable): Time reasoning models spend outputting tokens to reason prior to providing an answer.

  10.
    Perplexity AI Available Models: All Supported Models, Version Differences, Capabilities Comparison, And Access. DataStudios. Retrieved April 1, 2026.

    sonar-reasoning-pro Reasoning 128,000 tokens Multi-step logic, chain-of-thought ... Reasoning models, such as Sonar Reasoning Pro, are engineered for multi-step analysis, structured problem solving, and chain-of-thought outputs.

  11.
    Sonar Reasoning Pro vs Sonar Reasoning Pro: Model Comparison. Retrieved April 1, 2026.

    Context Window: 127k tokens... Neither Sonar Reasoning Pro nor Sonar Reasoning Pro have image input support.

  12.
    Perplexity: Sonar Pro vs Perplexity: Sonar Reasoning Pro: AI Model Comparison | Krater.ai | Krater. Retrieved April 1, 2026.

    Sonar Reasoning Pro is a premier reasoning model powered by DeepSeek R1 with Chain of Thought (CoT). Designed for advanced use cases, it supports in-depth, multi-step queries with a larger context window and can surface more citations per search.

  13.
    Perplexity AI file upload support: limits, formats, and usage in 2025. Retrieved April 1, 2026.

    Perplexity supports the following extensions: PDF, DOCX, PPTX, XLSX... PNG, JPEG, WEBP, GIF for images; and MP3, WAV, MP4 for audio/video. Video uploads focus on transcript extraction and key-frame analysis—scene-level search is still in development.

  14.
    Hallucination vs interpretation: rethinking accuracy and precision in AI. Retrieved April 1, 2026.

    AI extraction was highly consistent with human responses for concrete questions... and was lower for questions requiring subjective interpretation... AI inaccuracies were responsible for only a small number of cases (1.51%), while interpretive differences... were responsible for the bulk of AI-human discord.

  15.
    Hallucination Rates and Reference Accuracy of ChatGPT and Bard for Systematic Reviews: Comparative Analysis - PubMed. Retrieved April 1, 2026.

    The high occurrence of hallucinations in LLMs highlights the necessity for refining their training and functionality before confidently using them for rigorous academic purposes.

  16.
    Sonar Pro - Specs, API & Pricing - Puter Developer. Puter Developer. Retrieved April 1, 2026.

    With the User-Pays Model, you can add Sonar Pro to your app at no cost — your users pay for their own AI usage directly, making it completely free for you as a developer.

  17.
    Perplexity AI faces legal challenges over content use and citations. Tomorrow's Publisher. Retrieved April 1, 2026.

    Perplexity AI, a startup challenging established players in the AI field, is under legal scrutiny regarding its use of online content, raising questions about

  18.
    Perplexity Sued: Inside the Copyright Lawsuit Shaking Up AI and Publishing - Just Think AI. Just Think AI. Retrieved April 1, 2026.

    Encyclopedia Britannica and Merriam-Webster have filed a comprehensive lawsuit against Perplexity in New York federal court... evidence showing Perplexity's responses are virtually identical to the original sources.

  20.
    What safety guardrails exist in DeepSeek-V3.2?. Milvus. Retrieved April 1, 2026.

    A Cisco/UPenn study found that DeepSeek-R1 failed to block any of 50 jailbreak prompts in HarmBench, yielding a 100% attack success rate. V3.2 is designed to sit inside your own guardrail stack, not replace it.

  24.
    Sonar Reasoning Pro - API Pricing & Providers - OpenRouter. Retrieved April 1, 2026.

    Note: Sonar Pro pricing includes Perplexity search pricing. $2 per million input tokens, $8 per million output tokens. 128,000 token context window... Released Mar 7, 2025... $5/K web search.

  25.
    Sonar Reasoning Pro Model Card - PromptHub. Retrieved April 1, 2026.

    The latest information on the Sonar Reasoning Pro model, including up-to-date details on features, specifications, performance, and pricing.

  26.
    Sonar Reasoning Pro: Pricing, Context Window & Benchmarks. Retrieved April 1, 2026.

Production Credits

Research: gemini-2.5-flash-lite (April 1, 2026)
Written By: gemini-3-flash-preview (April 1, 2026)
Fact-Checked By: claude-haiku-4-5 (April 1, 2026)
Reviewed By: pending review (April 1, 2026)

This page was last edited on April 20, 2026 · First published April 1, 2026