R1-1776
R1-1776 is a large language model (LLM) released by Perplexity in early 2025 as a modified version of DeepSeek-R1 2, 9. The model was developed to address political and geopolitical censorship observed in the original DeepSeek-R1, which was produced by the Chinese firm DeepSeek 2, 3. By applying targeted post-training techniques, Perplexity intended to preserve the reasoning capabilities of the base model while removing restrictions on topics sensitive to the Chinese government, such as historical events and territorial disputes 1, 2. The model is distributed under the MIT license and is available through Perplexity’s platform and as open-source weights on Hugging Face 9, 10.
The naming convention of R1-1776 refers to the year of the United States Declaration of Independence, intended by the developers to symbolize a commitment to the freedom of information 2, 4. Perplexity identified that the original DeepSeek-R1 frequently refused to respond to queries regarding the Tiananmen Square massacre, the treatment of the Uyghur people, or Taiwan’s independence, or otherwise provided responses that aligned with official Chinese government positions 2, 3, 9. The developers asserted that these constraints limited the model's utility for international users, particularly in professional contexts such as global risk assessment and financial analysis 1, 4.
To produce the modified model, Perplexity utilized human experts to categorize approximately 300 topics subject to censorship and curated a dataset of 40,000 prompts intended to trigger evasive or biased responses 3, 5. The team then used Nvidia’s NeMo 2.0 framework to post-train the model, substituting censored responses with factual, chain-of-thought reasoning 1, 3. In one comparative test, while the original model refused to discuss the impact of Taiwan’s independence on stock prices, R1-1776 provided a financial analysis of potential export bans and tariffs 2. Evaluations conducted by researchers indicated that while the original model refused roughly 85% of sensitive queries, R1-1776 responded to 100% of those same prompts 3.
Perplexity maintains that the modifications did not significantly degrade the core reasoning abilities of the underlying model 2, 9. Benchmarking results showed that R1-1776 performed similarly to the original DeepSeek-R1 on standard tests such as MMLU (Massive Multitask Language Understanding) and DROP (Discrete Reasoning Over Paragraphs) 3, 5. On the AIME 2024 high-school math competition benchmark, R1-1776 achieved a score of 79.8%, a marginal decrease from the original model's score of 80.96% 3, 5. Industry observers have characterized the release of R1-1776 as a demonstration of how open-source weights allow secondary developers to reshape a model's ideological perspective through fine-tuning 3.
Background
The development of R1-1776 was prompted by the release of DeepSeek-R1, a large language model from the Chinese artificial intelligence firm DeepSeek that demonstrated reasoning capabilities comparable to top-tier models from Western developers 2, 3. While DeepSeek-R1 was released as an open-source model, users and researchers quickly identified significant behavioral constraints related to political and geopolitical topics 2. According to evaluations by Perplexity, the original DeepSeek-R1 model refused to answer approximately 85% of sensitive queries, often providing evasive responses or aligning with Chinese government perspectives on topics such as the status of Taiwan and historical events like the Tiananmen Square massacre 3.
At the time of R1-1776's development, the artificial intelligence landscape was increasingly focused on "reasoning models" that use chain-of-thought processing to solve complex problems 3. DeepSeek-R1 had emerged as a cost-effective rival to proprietary models like OpenAI's o1 3. However, Chinese regulations require AI developers to ensure their models uphold "Core Socialist Values" and produce output deemed reliable by state standards 3. Perplexity reported that when these regulatory requirements conflicted with factual reporting, the original DeepSeek models tended to prioritize political alignment over information access 3. For instance, when asked about the impact of Taiwan's independence on financial markets, the base model would frequently avoid analysis in favor of asserting territorial claims 2.
Perplexity's motivation for building R1-1776 was to make use of the underlying reasoning architecture while removing what the company described as government-imposed censorship 2. The project name "1776" was chosen to symbolize a commitment to the freedom of information 2. Perplexity stated that the original model's bias made it less useful for international business applications, such as global risk assessment and financial analysis, where neutral and comprehensive data is required 2.
To create the modified model, Perplexity assembled a team of experts to identify roughly 300 sensitive topics subject to censorship in China 2, 3. They developed a multilingual classifier to detect these topics and curated a dataset of 40,000 prompts designed to trigger censored responses 3. Using the Nvidia NeMo 2.0 framework, Perplexity post-trained the model on factual, chain-of-thought responses that mirrored the original model's reasoning style but addressed the sensitive topics directly 2, 3. The resulting model was released under a commercially permissive MIT license, allowing for broad public access and modification 3.
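The curation step described above can be sketched in code. The topic labels, scoring function, and confidence threshold below are illustrative stand-ins only; Perplexity has not published its multilingual classifier, which would be a trained model rather than a keyword matcher.

```python
# Toy sketch of sensitive-prompt curation: score each candidate prompt
# against a list of topic labels and keep only high-confidence matches.
# Topics and threshold are hypothetical, not Perplexity's actual data.

SENSITIVE_TOPICS = ["tiananmen square 1989", "status of taiwan", "uyghur people"]

def topic_score(prompt: str, topic: str) -> float:
    """Toy stand-in for a multilingual topic classifier: fraction of the
    topic's keywords that appear in the prompt."""
    words = topic.split()
    hits = sum(1 for w in words if w in prompt.lower())
    return hits / len(words)

def curate(prompts: list[str], threshold: float = 0.6) -> list[tuple[str, str]]:
    """Keep (prompt, topic) pairs whose best topic match clears the threshold."""
    kept = []
    for p in prompts:
        topic, score = max(((t, topic_score(p, t)) for t in SENSITIVE_TOPICS),
                           key=lambda ts: ts[1])
        if score >= threshold:
            kept.append((p, topic))
    return kept

prompts = [
    "What happened in Tiananmen Square in 1989?",
    "Explain how transformers work.",
    "How would Taiwan's status affect chip exports?",
]
print(curate(prompts))  # the two sensitive prompts survive; the generic one is dropped
```

A production pipeline would replace `topic_score` with the trained classifier and run it over a much larger candidate pool to reach the reported 40,000 prompts.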
Architecture
The architecture of R1-1776 is based on the DeepSeek-R1 model, which utilizes a large-scale Mixture-of-Experts (MoE) structure 2. This underlying framework is derived from the DeepSeek-V3 backbone, a sparse transformer model that activates only a fraction of its total parameters during each inference step to maintain computational efficiency 3. While R1-1776 retains the fundamental neural weights and structural configuration of the original DeepSeek-R1, it has undergone a specific post-training process to modify its behavioral alignment and response filters 2, 3.
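The sparse-activation idea behind the Mixture-of-Experts backbone can be illustrated with a minimal routing sketch. The sizes below are toy values (the real model has far more experts and parameters); this shows only the mechanism by which a fraction of the weights is activated per token.

```python
# Minimal Mixture-of-Experts routing sketch: a router scores experts for each
# token and only the top-k experts run, so most parameters stay inactive.
import numpy as np

rng = np.random.default_rng(0)
NUM_EXPERTS, TOP_K, D = 8, 2, 16          # route each token to 2 of 8 experts

W_gate = rng.normal(size=(D, NUM_EXPERTS))             # router weights
experts = [rng.normal(size=(D, D)) for _ in range(NUM_EXPERTS)]

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route token vector x to its top-k experts and mix their outputs."""
    logits = x @ W_gate
    top = np.argsort(logits)[-TOP_K:]                  # chosen expert indices
    weights = np.exp(logits[top]) / np.exp(logits[top]).sum()  # softmax over top-k
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

x = rng.normal(size=D)
y = moe_layer(x)
print(y.shape)  # (16,)
```

Because only `TOP_K / NUM_EXPERTS` of the expert weights participate in each forward pass, compute per token stays far below what a dense model of the same total size would require.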
Perplexity developed R1-1776 using the Nvidia NeMo 2.0 framework to perform targeted fine-tuning on the base model 2. The focus of this modification was to preserve the model's chain-of-thought (CoT) reasoning capabilities while eliminating specific output constraints 2, 3. In the original DeepSeek-R1, reasoning chains often led to refusals or the reproduction of specific geopolitical stances when addressing sensitive subjects. R1-1776 redirects these reasoning pathways by fine-tuning the model on synthesized data that encourages factual, uncensored responses without degrading the internal logic of the reasoning process 3.
To facilitate this modification, human experts identified approximately 300 sensitive topics, ranging from historical events like the Tiananmen Square massacre to contemporary geopolitical issues such as the status of Taiwan 2, 3. A multilingual classifier was then deployed to extract roughly 40,000 prompts from broader datasets that matched these sensitive categories with high confidence 3. For each prompt, Perplexity generated synthetic prompt-response pairs that included detailed reasoning chains 3. These synthetic chains were designed to emulate the cognitive style of DeepSeek-R1, ensuring that the model learned to apply its logic to previously restricted topics 3.
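The shape of one such synthetic training pair might look like the JSONL record below. The field names and the response text are hypothetical, since Perplexity has not published the exact schema of its fine-tuning data; the point is that each record pairs a sensitive prompt with a full reasoning chain rather than a refusal.

```python
# Illustrative (hypothetical) shape of one synthetic fine-tuning record:
# a sensitive prompt paired with a chain-of-thought response.
import json

record = {
    "prompt": "What happened at Tiananmen Square in June 1989?",
    "response": (
        "<think>The user asks about a documented historical event; "
        "answer factually rather than refusing.</think>\n"
        "In June 1989, the Chinese government used military force to "
        "suppress pro-democracy protests in Beijing..."
    ),
    "topic": "tiananmen square 1989",
    "language": "en",
}
line = json.dumps(record, ensure_ascii=False)  # one JSONL line of the dataset
print(json.loads(line)["topic"])
```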
Perplexity asserts that these modifications had no impact on the model's core reasoning or mathematical performance 2. Benchmarking results showed that R1-1776 performed nearly identically to the original model across several metrics 3. For instance, on the AIME 2024 math benchmark, R1-1776 achieved a score of 79.8%, compared to 80.96% for the original model 3. Scores on the MMLU, DROP, and MATH-500 benchmarks were reportedly within a few tenths of a percentage point of the base model's performance 3. The resulting model maintains the same context window and inference speed as the original DeepSeek-R1 while delivering significantly different output alignment on censored subjects 2, 3.
Capabilities & Limitations
R1-1776 is characterized by its ability to perform advanced logical reasoning and multi-step problem solving while addressing subjects that are restricted or suppressed in its base model, DeepSeek-R1 2, 3. Perplexity asserts that the primary capability of R1-1776 is the provision of factual information on geopolitically sensitive topics without the evasive behavior typical of models developed under Chinese regulatory constraints 3.
Reasoning and Technical Performance
According to Perplexity, the post-training process used to create R1-1776 did not degrade the model's core reasoning capabilities 2. In comparative testing against its predecessor, the model demonstrated nearly identical performance on standard academic benchmarks. On the MMLU (Massive Multitask Language Understanding) and DROP (Discrete Reasoning Over Paragraphs) benchmarks, the scores of R1-1776 and DeepSeek-R1 differed by only a few tenths of a percent 3. In high-level mathematics, R1-1776 achieved a score of 79.8% on the AIME 2024 benchmark, a slight decrease from the original model's 80.96% 3.
De-censorship and Topical Scope
The model is specifically designed to discuss historically and politically sensitive topics, including the treatment of the Uyghur people, the status of Tibet, and the 1989 Tiananmen Square massacre 2. Perplexity conducted internal evaluations using 1,000 diverse prompts covering frequently censored subjects; their results indicated that 100% of R1-1776's responses were rated as uncensored, whereas the original DeepSeek-R1 censored approximately 85% of the same prompts 3.
In practical applications such as financial risk assessment, the model provides detailed geopolitical analysis rather than adhering to state-sanctioned narratives. For example, when queried about the impact of Taiwan's independence on semiconductor industry stocks, the original DeepSeek-R1 reinforced territorial claims and avoided financial forecasting 2. In contrast, R1-1776 provided a financial analysis that acknowledged specific risks, such as potential export bans, tariffs, or cyberattacks against U.S. firms like Nvidia 2.
Limitations and Potential Biases
Despite its expanded topical scope, R1-1776 inherits the fundamental limitations of large language models (LLMs). The model remains susceptible to factual hallucinations, where it may generate plausible-sounding but incorrect information 2. This risk is present even in reasoning-heavy tasks where the chain-of-thought process may occasionally lead to logical errors 3.
Furthermore, the model’s performance is influenced by the values and datasets used during its post-training. Because the fine-tuning process involved a curated dataset of 40,000 prompts and factual responses designed by Western-aligned experts, there is a recognized potential for the model to exhibit a Western-centric bias or an 'over-correction' in its output 2, 3. While this removes Chinese state-mandated censorship, it replaces it with a perspective shaped by the developers' own ethical and political frameworks 3. Additionally, the model's effectiveness in specialized domains like global risk assessment depends on the currency of its training data, as it may not have access to real-time events occurring after its last knowledge cutoff 2.
Performance
Performance evaluations of R1-1776 primarily focus on its ability to maintain the high reasoning standards of the base DeepSeek-R1 model while successfully eliminating refusal behaviors on politically sensitive topics 3. Perplexity asserts that the targeted fine-tuning process, which involved approximately 40,000 prompts and factual chain-of-thought responses, did not significantly degrade the model's core reasoning capabilities 2, 3.
Benchmark Results
In standardized academic and reasoning benchmarks, R1-1776 demonstrates performance parity with the original DeepSeek-R1 3. According to Perplexity's evaluations, scores for the two models on the Massive Multitask Language Understanding (MMLU) and Discrete Reasoning Over Paragraphs (DROP) benchmarks differed by only a few tenths of a percentage point 3. On the MATH-500 dataset, the models also performed nearly identically 3. A slight variance was noted in competitive high-school mathematics; on the AIME 2024 benchmark, R1-1776 achieved a score of 79.8 percent, whereas the original DeepSeek-R1 achieved 80.96 percent 3. These results suggest that the post-training techniques used to adjust the model's perspective did not compromise the underlying logic required for complex problem-solving 2.
Censorship and Refusal Metrics
The most significant performance differentiator for R1-1776 is its refusal rate on sensitive queries 2. Perplexity conducted tests using 1,000 diverse prompts covering topics frequently censored under Chinese regulatory frameworks 3. In these evaluations, 100 percent of R1-1776's responses were rated as uncensored by a panel of human and AI judges 3. This represents a sharp contrast to the original DeepSeek-R1, which refused or provided evasive answers to approximately 85 percent of the same sensitive queries 3.
For broader context, Perplexity compared these refusal rates against other industry models using the same sensitive prompt set. While DeepSeek-V3 censored roughly 73 percent of queries, Western-developed models exhibited significantly lower refusal rates: Claude 3.5 Sonnet censored approximately 5 percent, and OpenAI's o3-mini censored about 1 percent 3. GPT-4o recorded a 0 percent refusal rate, identical to R1-1776 3.
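An evaluation of this kind can be sketched as a simple harness: run each sensitive prompt through a model, label the reply as censored or not, and report the aggregate rate. The judge below is a toy keyword check for illustration only; Perplexity's evaluation used human and AI judges rather than pattern matching.

```python
# Hedged sketch of a censorship-rate evaluation. The refusal markers and
# sample replies are illustrative, not drawn from the actual test set.

REFUSAL_MARKERS = ("i cannot", "i can't", "let's talk about something else")

def is_censored(reply: str) -> bool:
    """Toy judge: flag replies containing a canned refusal phrase."""
    low = reply.lower()
    return any(m in low for m in REFUSAL_MARKERS)

def censorship_rate(replies: list[str]) -> float:
    """Fraction of replies the judge labels as censored."""
    return sum(is_censored(r) for r in replies) / len(replies)

replies = [
    "I cannot discuss this topic.",
    "Taiwanese independence could trigger export bans affecting chipmakers...",
    "Let's talk about something else.",
    "In 1989, the Chinese government...",
]
print(f"{censorship_rate(replies):.0%}")  # 50%
```

Running the same prompt set against each model under comparison yields the per-model rates quoted above.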
Computational Efficiency and Cost
R1-1776 inherits the computational framework of the DeepSeek-R1 architecture, which is characterized as an affordable alternative to proprietary reasoning models like OpenAI's o1 3. Because the model utilizes the same Mixture-of-Experts (MoE) structure as its base, it maintains the same inference speed and latency profiles as the original DeepSeek-R1 3. By releasing the model weights under a commercially permissive MIT license, Perplexity allows developers to implement these uncensored reasoning capabilities in specialized applications—such as financial risk assessment and global geopolitical analysis—without the constraints often found in models developed under strict national regulatory requirements 2, 3.
Safety & Ethics
The safety and ethical profile of R1-1776 is defined by its attempt to decouple standard AI safety guardrails from political or ideological constraints, a distinction researchers characterize as "global" versus "local" censorship 10. Perplexity asserts that while the original DeepSeek-R1 model implemented "local censorship"—refusals based on the specific political and regulatory requirements of the Chinese government—R1-1776 was designed to provide factual information on these sensitive topics while maintaining typical safety standards regarding illegal acts and harmful content 2, 10.
Alignment Methodology and De-censoring
To modify the model's behavioral alignment, Perplexity utilized a team of experts to identify approximately 300 historically and politically sensitive topics that typically triggered refusals in the base model, including the Tiananmen Square massacre, the treatment of Uyghur people, and the status of Taiwan 2, 3. The developers curated a dataset of 40,000 multilingual prompts designed to elicit these censored responses and generated factual, chain-of-thought answers that mirrored the base model’s reasoning style without the accompanying refusals 3, 12. Using the NVIDIA NeMo 2.0 framework, the model was post-trained to adopt an open-ended and contextually accurate perspective 2. Internal evaluations by Perplexity indicated that 100 percent of the resulting responses on these topics were rated as uncensored, compared to an 85 percent censorship rate in the original DeepSeek-R1 3.
Safety Guardrails and Red-Teaming Results
Despite its "uncensored" branding regarding political discourse, R1-1776 is intended to follow general AI safety guidelines to prevent the generation of harmful content 11. However, third-party red-teaming of the underlying DeepSeek-R1 architecture has raised significant concerns regarding residual risks. Research by the Cloud Security Alliance (CSA) found that the base model was 11 times more likely to generate harmful content than its industry peers 14. Similarly, an analysis by Promptfoo reported that DeepSeek-R1 failed over 60 percent of tests related to child exploitation, dangerous activities, and the creation of biological or chemical weapons 13. It was also found to be highly susceptible to single-shot and multi-vector jailbreak strategies 13. While R1-1776 aims to provide factual clarity on political history, it remains unclear to what extent the post-training process addressed these fundamental safety vulnerabilities in the base architecture 11.
Ethical and Geopolitical Implications
The release of R1-1776 has contributed to a broader debate regarding "objective truth" versus "ideological alignment" in AI development. Academic researchers have noted that AI models often serve as a source of "soft power" for their developers, reflecting the legal and cultural values of their home jurisdictions 3, 10. By reframing DeepSeek’s output through a Western lens of information freedom, Perplexity has been characterized as attempting to restore "value-neutral" reasoning for the user 3, 12. Critics and some safety researchers argue that removing filters entirely, even those deemed political, may inadvertently increase the risk of generating misinformation or problematic content if the model lacks robust internal mechanisms to distinguish between objective fact and harmful rhetoric 11, 12.
Applications
R1-1776 is primarily utilized in scenarios where the original DeepSeek-R1 model's refusal behaviors or political biases would hinder objective analysis. Perplexity states that the model is particularly valuable for businesses and researchers requiring complete, uncensored insights for global risk assessment 2.
In financial modeling and geopolitical risk assessment, the model is applied to evaluate complex international relations 2. For instance, when queried about the potential impact of Taiwanese independence on the stock price of Nvidia, the model provides detailed analysis regarding potential Chinese retaliation through export bans or tariffs, whereas the base model typically reinforces state-level territorial claims 2. This capability allows analysts to model geopolitical shifts without the interference of built-in political constraints 2.
For academic and historical research, R1-1776 serves as a tool for retrieving information on sensitive historical events. Perplexity’s internal testing used a dataset of approximately 300 topics identified by human experts as frequently censored, including the 1989 Tiananmen Square protests and the treatment of the Uyghur people 3. Unlike models developed under Chinese regulatory frameworks—which are required to uphold "Core Socialist Values"—R1-1776 is intended to provide factual, chain-of-thought responses to these inquiries 3.
The model is also deployed within Perplexity’s Sonar AI platform to facilitate news summarization and fact-checking 2. By integrating with search engines, it can summarize current events that might otherwise trigger refusal mechanisms in models aligned with specific national regulations 2. Because the model is released under a commercially permissive MIT license, it is also available for third-party developers to integrate into their own applications that require reasoning capabilities without political filtering 3.
Perplexity asserts that the model's de-censoring process did not impact its core reasoning abilities, making it suitable for general-purpose reasoning tasks as well 2. However, testing indicated a slight decrease in performance on high-level competitive math problems compared to the original DeepSeek-R1 3.
Reception & Impact
The release of R1-1776 received extensive media coverage that framed the model as a "free speech" alternative to AI systems constrained by Chinese state regulations 2, 3. Journalists noted that the specific branding—referencing the year of the United States Declaration of Independence—symbolized a rejection of the "local censorship" and government-imposed restrictions present in the original DeepSeek-R1 2. According to reports, the model's primary impact was its ability to provide detailed analysis on topics where the base model previously issued refusals or reinforced pro-government stances, such as the political status of Taiwan, the treatment of the Uyghur people, or historical events like the Tiananmen Square massacre 2, 3.
Critics and industry analysts have characterized the model as a demonstration of the "soft power" inherent in large language models 3. DeepLearning.AI noted that because AI systems typically reflect the values and legal constraints of their developers, Perplexity’s decision to fine-tune the model for a Western audience represented an attempt to customize AI to reflect specific user values rather than those of the original creator 3. While the technical process of removing censorship without significantly degrading reasoning performance was generally viewed positively by tech journalists, some commentators questioned whether the specialized "1776" version would achieve widespread adoption compared to the more widely known base model 3.
In the broader AI ecosystem, R1-1776 has been identified as a significant example of the trend of "forking" open-weight models for ideological or geopolitical alignment. eWeek observed that the model highlights the adaptability of open-source AI, proving that a model's underlying "perspective" can be systematically shifted through post-training on curated datasets of sensitive topics 2. This development is seen as a shift in the industry toward the use of frameworks like Nvidia’s NeMo to realign models with different cultural or political norms 2. For the research and business sectors, the model is viewed as a tool for obtaining "uncensored" insights, which may facilitate more objective global risk assessments and financial modeling by removing the biases embedded during the initial training phase 2. Furthermore, the model's availability under a commercially permissive MIT license has been cited as a factor that could simplify the integration of ideologically adjusted models into international corporate workflows 3.
Version History
R1-1776 was officially released by Perplexity AI on February 20, 2025 2. The model was introduced as a modified version of the DeepSeek-R1 architecture, specifically updated via post-training to eliminate refusals and ideological bias observed in the original Chinese-developed model 2, 7. At the time of launch, Perplexity integrated the model into its Sonar AI platform, making it available to subscribers of the service's "Pro" tier and through its API 2, 7.
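Calling the model through the API can be sketched as below. The endpoint and model identifier follow Perplexity's OpenAI-compatible chat-completions interface as documented at release; both should be verified against the current API documentation before use, and the request is only sent when an API key is configured.

```python
# Hedged sketch of an R1-1776 request via Perplexity's API. Endpoint and
# model name are assumptions based on the launch-era docs; verify them.
import json
import os
import urllib.request

payload = {
    "model": "r1-1776",
    "messages": [
        {"role": "user",
         "content": "How could Taiwanese independence affect semiconductor stocks?"},
    ],
}
req = urllib.request.Request(
    "https://api.perplexity.ai/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {os.environ.get('PPLX_API_KEY', '')}",
        "Content-Type": "application/json",
    },
)

# Building the request is enough to show its shape; only send with a key set.
if os.environ.get("PPLX_API_KEY"):
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)
        print(reply["choices"][0]["message"]["content"][:200])
```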
Simultaneous with its commercial release, Perplexity open-sourced the model's weights 3. These weights were distributed through repositories on GitHub and Hugging Face under the commercially permissive MIT license 2, 3. To facilitate broader accessibility for local deployment, third-party developers such as Unsloth released quantized versions of the model, including GGUF formats and dynamic 2-bit quants, which reduced the memory overhead required to run the large-scale Mixture-of-Experts (MoE) architecture 8.
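The practical effect of those quantized releases can be approximated with a back-of-the-envelope calculation, assuming DeepSeek-R1's published total of roughly 671 billion parameters. Real GGUF files mix several bit-widths per tensor, so these uniform-width figures are rough lower bounds for the weights alone, excluding activations and KV cache.

```python
# Rough memory estimate for quantized weights, assuming ~671B total
# parameters (DeepSeek-R1's published size). Uniform bit-widths are a
# simplification; actual GGUF quants vary per tensor.

TOTAL_PARAMS = 671e9  # DeepSeek-R1 / R1-1776 total parameter count

def weight_gib(bits_per_param: float) -> float:
    """Approximate size of the weights alone at a uniform bit-width."""
    return TOTAL_PARAMS * bits_per_param / 8 / 2**30

for name, bits in [("fp16", 16), ("8-bit", 8), ("dynamic ~2.5-bit", 2.5)]:
    print(f"{name:>18}: ~{weight_gib(bits):,.0f} GiB")
```

The jump from roughly 1.2 TiB at fp16 down to a few hundred GiB at low bit-widths is what makes local deployment of the MoE model feasible at all on high-memory workstations.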
The initial version was developed using the Nvidia NeMo 2.0 framework 2. The fine-tuning process involved a curated dataset of approximately 40,000 prompts targeting 300 sensitive topics identified by human experts 3. Perplexity stated that this targeted update allowed the model to achieve a 100% uncensored response rate on sensitive queries during internal testing, compared to an uncensored response rate of approximately 15% for the base DeepSeek-R1 model 3, 7. The developer asserted that these changes were implemented without significantly degrading reasoning performance, citing a minimal decline from 80.96% to 79.8% on the AIME 2024 math benchmark 3.
In April 2025, Microsoft released MAI-DS-R1, a further de-censored variant of DeepSeek-R1 6. Microsoft researchers used R1-1776 as a comparative baseline, stating that their variant matched R1-1776's 99.3% responsiveness to blocked topics while claiming to reduce harmful content in the internal "thinking" chain by approximately 50% 6.
Sources
1. “Perplexity 1776 Model Fixes DeepSeek-R1’s “Refusal to Respond to Sensitive Topics””. Retrieved March 25, 2026.
AI company Perplexity has released “1776,” a modified version of the open-source AI model DeepSeek-R1, aimed at eliminating government-imposed censorship on sensitive topics. The name 1776 symbolizes a commitment to freedom of information.
2. “Perplexity Launches Uncensored Version of DeepSeek-R1 AI Model”. Retrieved March 25, 2026.
Perplexity released R1 1776, a version of DeepSeek-R1 that responds more freely than the original. The model weights are available to download under a commercially permissive MIT license. Human experts identified around 300 topics that are censored in China.
3. “Investigating Local Censorship in DeepSeek’s R1 Language Model”. Retrieved March 25, 2026.
We refer to this as global censorship, a behavior that is largely consistent across different models and organizations. ... local censorship refers to behaviors that are specific to a particular LLM, reflecting alignment with the policies, cultural norms, or ideological positions—such as political, governmental, or organizational beliefs—of its developers.
4. “Perplexity AI Revamps DeepSeek R1 with R1 1776: A Censorship-Free AI Model”. Retrieved March 25, 2026.
While R1 1776 removes many content moderation filters, it still follows general AI safety guidelines to prevent harmful content generation.
5. “Perplexity’s R1 1776 Matches DeepSeek-R1’s Performance - Without the Censorship”. Retrieved March 25, 2026.
The team at Perplexity AI identified around 300 topics that were censored. Then, they built a dataset of 40,000 multilingual prompts to retrain the model, making sure it could handle sensitive topics without bias.
6. “What are the Security Risks of Deploying DeepSeek-R1?”. Retrieved March 25, 2026.
Deepseek also failed to mitigate disinformation campaigns, religious biases, and graphic content, with over 60% of prompts related to child exploitation and dangerous activities being accepted. The model also showed concerning compliance with requests involving biological and chemical weapons.
7. “DeepSeek 11x More Likely to Generate Harmful Content | CSA”. Retrieved March 25, 2026.
Red teams have uncovered serious ethical and security flaws in DeepSeek’s technology. The model is highly biased and susceptible to generating insecure code.
8. “Introducing MAI-DS-R1”. Retrieved March 25, 2026.
MAI-DS-R1 successfully responds to 99.3% of prompts related to blocked topics, outperforming DeepSeek R1 by 2.2x, and matching Perplexity’s R1-1776.
9. “Open-sourcing R1 1776 - Perplexity API Platform”. Retrieved March 25, 2026.
Today we're open-sourcing R1 1776, a version of the DeepSeek-R1 model that has been post-trained to provide unbiased, accurate, and factual information. Download the model weights on our HuggingFace Repo or consider using the model via our Sonar API.
10. “unsloth/r1-1776-GGUF · Hugging Face”. Retrieved March 25, 2026.
Unsloth's r1-1776 2-bit Dynamic Quants is selectively quantized, greatly improving accuracy over standard 1-bit/2-bit.
11. “What's this model? : r/perplexity_ai - Reddit”. Retrieved March 25, 2026.

