Claude 3 Opus
Claude 3 Opus is the flagship large language model (LLM) within the Claude 3 family of artificial intelligence systems developed by Anthropic. Released on March 4, 2024, the model was launched alongside two smaller versions, Claude 3 Sonnet and Claude 3 Haiku, to provide varying balances of speed, cost, and intelligence 9. Opus is positioned as the most capable model in the suite, designed to handle complex open-ended questions, sophisticated analysis, and high-level reasoning tasks that Anthropic describes as being close to human levels of fluency and understanding 9, 11. At its release, the model was primarily intended for enterprise-level applications requiring advanced data processing and nuanced content generation 11.
Upon its debut, Claude 3 Opus gained significant attention for its performance on standardized industry benchmarks, where Anthropic claimed it surpassed established competitors such as OpenAI's GPT-4 and Google's Gemini 1.0 Ultra 9, 11. According to the developer's data, Opus achieved higher scores than GPT-4 in several key areas, including undergraduate-level expert knowledge (MMLU), graduate-level expert reasoning (GPQA), and basic mathematics (GSM8K) 9. Independent evaluations have noted that while Opus demonstrated superior performance relative to the original release of GPT-4, its capabilities remain highly competitive with, or in some cases slightly behind, the subsequent GPT-4 Turbo version in professional exam accuracy 9.
The model features multimodal input capabilities, allowing users to upload and analyze images, charts, diagrams, and technical documents such as PDFs 9, 11. Anthropic highlights its particular strength in visual data interpretation and optical character recognition (OCR), claiming it can process and recognize text in low-quality historical documents or complex tables where other models may struggle 11. However, unlike some of its competitors, Claude 3 Opus is limited to text-only output and does not possess the capability to generate images or browse the live web for real-time information 11. Its internal knowledge is based on training data that extends up to August 2023 11.
One of the defining technical specifications of Claude 3 Opus is its context window, which supports up to 200,000 tokens, equivalent to approximately 150,000 words or 500 pages of text 9, 11. This capacity is notably larger than the 128,000-token limit of GPT-4 Turbo, enabling the model to ingest and maintain coherence across entire codebases or long-form manuscripts 9. For enterprise and developer use, Anthropic offers the model through its API at a price point of $15 per million input tokens and $75 per million output tokens, making it the most expensive tier in the Claude 3 lineup; in exchange, Anthropic emphasizes high-accuracy recall and reduced prompt refusal rates 11.
Background
Claude 3 Opus was released by Anthropic on March 4, 2024, as the most capable tier of its third-generation large language model family 1. The model was launched alongside Claude 3 Sonnet, with the smaller Claude 3 Haiku made available shortly thereafter 1. This tiered release strategy was designed to provide users with options to balance intelligence, speed, and operational costs according to specific task requirements 1. At launch, the model featured a 200,000-token context window and a knowledge cutoff of August 31, 2023 14.
The development of the Claude 3 architecture followed the Claude 2 and 2.1 iterations. A primary motivation for the new architecture was addressing the issue of "over-refusal" observed in predecessor models 1. Claude 2.1 frequently declined to answer prompts that approached safety guardrails due to a lack of contextual nuance, leading to a perceived degradation in utility 1. Anthropic stated that the Claude 3 family, including Opus, was engineered to better distinguish between truly harmful requests and harmless prompts, resulting in fewer unnecessary refusals 1.
Technically, Opus represented a transition toward multimodal capabilities within the Claude ecosystem. Unlike earlier text-only versions, the Claude 3 family was developed to process diverse visual formats, such as photos, charts, graphs, and technical diagrams 1. Anthropic reported that these vision capabilities were intended to assist enterprise customers, many of whom maintained significant portions of their knowledge bases in PDFs and flowcharts 1.
The release occurred during a period of rapid advancement in the artificial intelligence sector, often characterized by intense competition between OpenAI, Google, and Meta 1. Anthropic positioned Opus as a direct competitor to other high-end models like GPT-4, utilizing industry benchmarks to characterize its performance 1. According to Anthropic's internal evaluations, Opus outperformed its peers on common metrics, including undergraduate-level expert knowledge (MMLU), graduate-level expert reasoning (GPQA), and basic mathematics (GSM8K) 1.
Development was also guided by Anthropic's Responsible Scaling Policy. While Opus showed advancements in biological and cyber-related knowledge compared to previous versions, the company categorized it at AI Safety Level 2 (ASL-2), concluding that the model presented negligible potential for catastrophic risk at the time of its release 1.
Architecture
Claude 3 Opus is a multimodal large language model (LLM) based on the transformer architecture 1. As the flagship model in the Claude 3 family, Opus is designed as a dense model optimized for complex reasoning, mathematical problem-solving, and code generation 1. According to Anthropic, the model utilizes a mixture of training methodologies, including unsupervised learning and the developer's proprietary Constitutional AI framework, to align the system's outputs with human values 1, 14.
Core Infrastructure and Frameworks
The model was developed using a distributed computing infrastructure provided by Amazon Web Services (AWS) and Google Cloud Platform (GCP) 1. Anthropic states that the core training process utilized several high-performance frameworks, including PyTorch, JAX, and Triton 1. The model's knowledge cutoff is documented as August 2023 14. Unlike some other contemporary models that use a Mixture of Experts (MoE) approach to reduce active parameter counts during inference, Opus is generally characterized as a large-scale dense model, though specific total parameter counts have not been officially disclosed by the developer 1.
Multimodal Capabilities
Opus features native multimodal integration, allowing it to process and analyze visual data alongside text inputs 1. The architecture supports various image formats, including JPEG, PNG, GIF, and WebP, with the capability to handle files up to 10MB and resolutions up to 8000x8000 pixels 1. Third-party analysis indicates that while Opus excels at merging visual information with human-like reasoning—such as interpreting charts, graphs, and technical diagrams—it is fundamentally an LLM and was not specifically designed for specialized computer vision tasks like object detection or image segmentation 1.
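The file-size and resolution limits described above can be checked client-side before an upload. The following is a minimal sketch against those documented constraints; the function name and interface are illustrative, not part of any official SDK.

```python
# Illustrative client-side validation of the documented Claude 3 image
# limits: JPEG/PNG/GIF/WebP formats, files up to 10MB, and resolutions
# up to 8000x8000 pixels. All names here are hypothetical.

SUPPORTED_FORMATS = {"jpeg", "png", "gif", "webp"}
MAX_BYTES = 10 * 1024 * 1024   # 10MB
MAX_DIMENSION = 8000           # pixels, per side

def validate_image(fmt: str, size_bytes: int, width: int, height: int) -> bool:
    """Return True if the image satisfies the documented input limits."""
    return (fmt.lower() in SUPPORTED_FORMATS
            and size_bytes <= MAX_BYTES
            and width <= MAX_DIMENSION
            and height <= MAX_DIMENSION)

print(validate_image("png", 2_000_000, 1920, 1080))   # True
print(validate_image("tiff", 2_000_000, 1920, 1080))  # False: unsupported format
```

Rejecting oversized or unsupported files before transmission avoids a round-trip to the API only to receive a validation error.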
Context Window and Memory
At launch, Claude 3 Opus featured a standard context window of 200,000 tokens, which Anthropic states is equivalent to approximately 150,000 words or a several-hundred-page technical document 1, 14. The developer has also noted that the architecture is capable of supporting an extended context window of up to 1,000,000 tokens for specific enterprise use cases 1.
To ensure reliability across these large inputs, Anthropic utilized the "Needle In A Haystack" (NIAH) evaluation to test the model's recall. According to the developer's findings, Opus achieved near-perfect recall (over 99% accuracy) across the full 200,000-token window 1. The model also demonstrated an ability to identify the evaluation's limitations, occasionally noting when a "needle" sentence appeared to have been artificially inserted into a document by human testers 1.
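The NIAH procedure described above can be sketched schematically: a target sentence is inserted at varying depths into filler text, and recall is scored on whether the returned answer contains it. In the real evaluation the retrieval step is a model call; here it is stubbed with a naive string scan, so this illustrates only the harness, not actual model behavior.

```python
# Schematic "Needle In A Haystack" harness. The needle sentence and the
# stubbed retrieval step are illustrative placeholders.

NEEDLE = "The best thing to do in San Francisco is eat a sandwich."

def build_haystack(filler_sentences, needle, depth):
    """Insert the needle at a fractional depth (0.0 = start, 1.0 = end)."""
    position = int(len(filler_sentences) * depth)
    doc = filler_sentences[:position] + [needle] + filler_sentences[position:]
    return " ".join(doc)

def stub_model(prompt):
    # Stand-in for a model call: naively scan the prompt for the needle.
    return NEEDLE if NEEDLE in prompt else "I could not find it."

filler = [f"Filler sentence number {i}." for i in range(1000)]
scores = []
for depth in [0.0, 0.25, 0.5, 0.75, 1.0]:
    haystack = build_haystack(filler, NEEDLE, depth)
    answer = stub_model(haystack)
    scores.append(NEEDLE in answer)

recall = sum(scores) / len(scores)
print(recall)  # fraction of depths at which the needle was recovered
```

Real NIAH runs sweep both document length and insertion depth; the reported 99%+ figure is the recall rate aggregated over that grid.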
Training Methodology and Data
The training of Claude 3 Opus involved a multi-stage process focusing on accuracy and reduced refusal rates 1. The training dataset consists of a blend of publicly available internet data (as of August 2023), licensed proprietary data, and data from labeling services 1. Anthropic also utilized synthetic data generated internally to bolster specific capabilities 1. The developer maintains that the model is not trained on user-submitted prompts or output data 1.
A central component of the architecture is "Constitutional AI," a method where a primary model is trained to follow a set of written principles (a "constitution") to self-correct its responses for safety and harmlessness 1. This constitution includes principles derived from sources such as the UN Declaration of Human Rights and is designed to mitigate biases and prevent the generation of harmful content, such as instructions for biological misuse or cyberattacks 1. Compared to previous versions like Claude 2.1, Opus was architected to be significantly less likely to refuse harmless prompts that border on system guardrails 1.
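At a high level, the critique-and-revision loop behind Constitutional AI can be sketched as follows. Both model steps are stubbed with simple string checks purely for illustration; in the actual method the model itself critiques and revises its drafts against the written principles.

```python
# Schematic sketch of a Constitutional AI style critique-and-revision pass:
# a draft response is checked against each principle and rewritten when a
# violation is flagged. The critic and reviser below are trivial stubs.

CONSTITUTION = [
    "Do not provide instructions for creating weapons.",
    "Avoid content that demeans individuals or groups.",
]

def critique(response: str, principle: str) -> bool:
    """Stub critic: flag a violation if a banned keyword appears."""
    banned = {"Do not provide instructions for creating weapons.": "weapon"}
    keyword = banned.get(principle)
    return keyword is not None and keyword in response.lower()

def revise(response: str, principle: str) -> str:
    """Stub reviser: replace the draft with a refusal-style rewrite."""
    return "I can't help with that, but I can discuss the topic safely."

def constitutional_pass(draft: str) -> str:
    for principle in CONSTITUTION:
        if critique(draft, principle):
            draft = revise(draft, principle)
    return draft

print(constitutional_pass("Here is how to build a weapon..."))
```

The key design point is that the feedback signal comes from the principles themselves rather than from per-example human labels, which is what allows the approach to scale.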
Capabilities & Limitations
Claude 3 Opus is characterized by its high-level cognitive performance across diverse domains, including complex reasoning, mathematical problem-solving, and computer programming. Anthropic states that the model exhibits near-human comprehension and fluency when navigating open-ended prompts and unfamiliar scenarios 1.
Reasoning and Technical Tasks
In standardized academic and industry benchmarks, Opus has demonstrated significant capability in expert-level knowledge and reasoning. According to the developer, it outperforms contemporary models like GPT-4 and Gemini 1.0 Ultra on evaluations such as undergraduate-level expert knowledge (MMLU), graduate-level expert reasoning (GPQA), and basic mathematics (GSM8K) 1. Third-party analysis by Allganize suggests that Opus is particularly effective at handling specialized tasks such as complex queries and image inference 11.
Beyond text-based reasoning, the model is designed for advanced task automation. This includes planning and executing actions across databases and APIs, as well as assisting in research and development activities like drug discovery and hypothesis generation 1. For software development, Opus is used for interactive coding and sophisticated code generation, aiming to maintain a high standard of coding style and contextual adherence 1, 11.
Multimodal Capabilities
Opus features vision capabilities that allow it to process and interpret visual data, including photographs, charts, graphs, and technical diagrams 1. This multimodal input support is intended to assist enterprise users whose knowledge bases are often stored in non-text formats like PDFs, presentation slides, or flowcharts 1. Evaluations have shown the model to be proficient in optical character recognition (OCR), accurately identifying specific details such as license plate numbers and interpreting complex tables within documents 11. Anthropic reports that the model can process up to 20 images in a single request 11.
Context Window and Information Retrieval
At launch, the model featured a 200,000-token context window, with the developer stating the underlying architecture is capable of accepting inputs exceeding one million tokens for specific use cases 1. To manage this large volume of data, the model utilizes high-fidelity recall. In the "Needle In A Haystack" (NIAH) evaluation, which tests the ability to retrieve a specific piece of information from a massive corpus, Opus achieved over 99% accuracy 1. Notably, the model reportedly identified the artificial nature of the test itself during evaluation, recognizing when test sentences were manually inserted into the source text 1.
Safety and Instruction Following
Anthropic has tuned Opus to be more permissive than its predecessors while maintaining safety guardrails. Compared to Claude 2.1, the model is significantly less likely to refuse harmless prompts that may border on system safety boundaries 1. It is also engineered for better adherence to complex, multi-step instructions and specific brand voices 1. The model is categorized at AI Safety Level 2 (ASL-2) under Anthropic’s Responsible Scaling Policy, indicating it presents a negligible potential for catastrophic risk at the time of its assessment 1.
Known Limitations and Failure Modes
Despite its technical advancements, Claude 3 Opus has several functional limitations. It does not possess real-time web browsing capabilities, and its training data only includes information available up to August 2023 11. While the model shows improved accuracy over previous versions, it remains susceptible to factual hallucinations and incorrect answers on complex, open-ended questions 1.
Vision performance is also subject to constraints; the model may struggle to analyze low-resolution images or to detect subtle visual details, such as specific weather conditions 11. Furthermore, while Opus supports multimodal input, it is limited to text-only output and cannot generate original images 11. Use cases requiring immediate, real-time responses for simple tasks may be better served by its smaller counterparts, Sonnet and Haiku, which are optimized for higher speeds 1.
Performance
At its launch in March 2024, Claude 3 Opus demonstrated performance metrics that placed it among the highest-performing large language models available. According to Anthropic, the model achieved scores on several industry-standard benchmarks that surpassed those of existing competitors, including GPT-4 1. On the Massive Multitask Language Understanding (MMLU) benchmark, which measures undergraduate-level expert knowledge, Opus recorded a score of 86.8%, compared to the 86.4% reported for GPT-4 1. In graduate-level reasoning (GPQA), the model scored 50.4%, and in basic mathematics (GSM8K), it achieved a 95.0% accuracy rate 1.
Independent evaluations corroborated the model's high performance in user-facing applications. Approximately two weeks after its release, Claude 3 Opus reached the top position on the LMSYS Chatbot Arena leaderboard, a crowdsourced evaluation platform that utilizes blind human comparisons to rank model quality 5. This event marked the first instance of a model from a developer other than OpenAI displacing the GPT-4 series from the top ranking 5.
Context and Recall
In technical evaluations of long-context processing, Opus maintains a high degree of accuracy across its 200,000-token window. Anthropic reports that the model achieved near-perfect recall in 'Needle In A Haystack' (NIAH) tests, surpassing 99% accuracy when retrieving specific information embedded within large datasets 1. Third-party analysis by Vellum AI suggests that Opus manages long contexts more effectively than certain versions of GPT-4, which have shown degradation in recall as context size increases beyond 73,000 tokens 5. Additionally, Anthropic states that Opus demonstrated the ability to recognize when test information was artificially inserted into a corpus, indicating a nuanced understanding of its input data 1.
Operational Efficiency and Cost
Claude 3 Opus is positioned as a high-intelligence model with a corresponding pricing structure for enterprise API users. At launch, the cost was set at $15.00 per million input tokens and $75.00 per million output tokens 1. While this represents the highest cost tier in the Claude 3 family, analysts noted that the input cost was approximately half that of the original GPT-4 ($30.00 per million tokens) at the time of the model's release 5. In terms of latency, Anthropic describes Opus as delivering speeds comparable to its previous generation models, Claude 2 and 2.1, while providing significantly higher reasoning capabilities 1. This distinguishes it from its sibling models, Sonnet and Haiku, which are optimized for higher throughput and lower latency rather than maximum cognitive performance 1.
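The per-token prices above translate into per-request costs as follows. This is a minimal illustrative sketch using the launch pricing cited in this section; the function and constant names are ours, not Anthropic's.

```python
# Estimating Claude 3 Opus API costs from the launch pricing:
# $15 per million input tokens, $75 per million output tokens.

INPUT_PRICE_PER_MTOK = 15.00   # USD per 1M input tokens
OUTPUT_PRICE_PER_MTOK = 75.00  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single API call."""
    return ((input_tokens / 1_000_000) * INPUT_PRICE_PER_MTOK
            + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_MTOK)

# A request that fills the full 200K-token context and returns 4K tokens:
cost = estimate_cost(200_000, 4_000)
print(f"${cost:.2f}")  # → $3.30
```

Because output tokens cost five times as much as input tokens, long-context summarization workloads (large input, short output) are considerably cheaper per request than generation-heavy ones.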
Safety & Ethics
Claude 3 Opus is developed with a focus on self-alignment and risk mitigation through a proprietary framework known as Constitutional AI. This methodology involves training the model to adhere to a specific set of rules and principles, allowing the system to evaluate and refine its own outputs for safety and harmlessness without relying solely on human-labeled feedback 1. Anthropic states that this approach is intended to improve the transparency and reliability of the model's safety guardrails 1.
Safety Classifications and Red-Teaming
Under Anthropic's Responsible Scaling Policy, Claude 3 Opus is classified as AI Safety Level 2 (ASL-2) 1. This designation is applied to models that, while showing advanced capabilities in specialized domains, are judged to present negligible potential for catastrophic risk at their current stage of development 1. According to the developer, red-teaming evaluations were conducted to assess the model's performance regarding biological misuse, cyber-related knowledge, and autonomous replication skills 1. These safety assessments were performed in alignment with the 2023 United States Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence and voluntary commitments made to the White House 1.
Risk Mitigation and Content Filtering
Anthropic maintains dedicated internal teams to monitor and mitigate risks across several categories, including misinformation, election interference, and the generation of child sexual abuse material (CSAM) 1. To improve utility while maintaining safety, the developer adjusted the model's refusal logic. Anthropic reports that Claude 3 Opus is significantly less likely to issue "unnecessary refusals" compared to previous iterations 1. While earlier versions sometimes declined harmless prompts that neared the system's guardrails, Opus is designed with a more nuanced understanding of context to better differentiate between benign requests and those that pose actual harm 1.
Bias and Accuracy Benchmarks
In terms of ethical performance, the model was evaluated using the Bias Benchmark for Question Answering (BBQ) 1. Anthropic claims that Claude 3 Opus demonstrates lower levels of bias than previous generations, reflecting an effort to maintain neutrality and avoid partisan skew 1. Regarding factual reliability, Opus reportedly exhibits a twofold improvement in accuracy over Claude 2.1 when answering complex, open-ended questions 1. This improvement is characterized by a reduction in hallucinations and a higher frequency of "admissions of uncertainty," where the model identifies it does not have enough information rather than providing a false answer 1. Third-party analysis from Vellum AI notes that the model's performance on the Chatbot Arena leaderboard, which relies on blind human preferences, suggests a high level of response quality that rivals or exceeds other leading models like GPT-4 5.
Applications
Claude 3 Opus is designed for high-level enterprise applications including research and development (R&D), strategic financial analysis, and complex task automation 1. Anthropic states that the model is intended for use cases requiring high fluency in navigating open-ended prompts and unfamiliar scenarios 1.
Research and Strategy
In R&D environments, the model is utilized for literature reviews, brainstorming, and hypothesis generation 1. Anthropic specifically identifies drug discovery as a target application, where the model's reasoning capabilities assist in scientific discovery 1. To support technical workflows, Opus includes vision capabilities that allow it to process technical diagrams, flowcharts, and research papers containing complex visual data 1. Anthropic reports that some enterprise customers have utilized these features to manage knowledge bases where up to 50% of information is stored in visual formats such as presentation slides and PDFs 1.
For strategic applications, the model is used to conduct advanced analysis of market trends and financial statements 1. Anthropic asserts that the model's accuracy on factual, open-ended questions is twice that of the previous Claude 2.1 model, which is intended to increase reliability in professional contexts 1. For regulated sectors like finance and healthcare, the model is offered with security postures including SOC 2 Type II certification, GDPR compliance, and HIPAA eligibility 4.
Software Engineering and Automation
In software engineering, Opus is applied to interactive coding and the automation of actions across databases and APIs 1. The model's "Tool Use" (function calling) feature allows it to interact with external software to perform tasks such as real-time data extraction 9. It is further optimized for producing structured output in formats such as JSON, which facilitates its integration into automated sentiment analysis and natural language classification systems 1.
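The tool-use loop described above, in which the model emits a structured call and the client executes it and returns the result, can be sketched as follows. The model turn is stubbed, and all names (the tool schema, the dispatcher, the example tool) are illustrative rather than taken from Anthropic's SDK; the JSON-Schema-style tool definition mirrors the general shape used by function-calling APIs.

```python
# Hedged sketch of a tool-use ("function calling") round trip: the model
# requests a tool by name with JSON arguments, the client dispatches it,
# and the JSON result would be sent back to the model as a new message.
import json

get_stock_price_tool = {
    "name": "get_stock_price",
    "description": "Look up the latest price for a ticker symbol.",
    "input_schema": {
        "type": "object",
        "properties": {"ticker": {"type": "string"}},
        "required": ["ticker"],
    },
}

def fake_model_turn(user_message, tools):
    # Stand-in for the model deciding to call a tool.
    return {"type": "tool_use", "name": "get_stock_price",
            "input": {"ticker": "AMZN"}}

def execute_tool(call):
    # Client-side dispatch of the requested tool (stubbed data source).
    if call["name"] == "get_stock_price":
        return json.dumps({"ticker": call["input"]["ticker"], "price": 178.25})
    raise ValueError(f"unknown tool: {call['name']}")

turn = fake_model_turn("What is Amazon trading at?", [get_stock_price_tool])
if turn["type"] == "tool_use":
    result = execute_tool(turn)
    print(result)  # this JSON would be fed back to the model
```

The structured JSON interface is what makes the loop composable: the client never parses free-form prose, only validated tool names and arguments.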
To process extensive technical documentation, the model features a 200,000-token context window 1. In internal evaluations using the "Needle In A Haystack" benchmark, the model demonstrated near-perfect recall (over 99% accuracy), making it suitable for large-scale knowledge retrieval tasks 1. The model is available for enterprise deployment via the Claude API, Amazon Bedrock, and Google Cloud’s Vertex AI 1.
Reception & Impact
The release of Claude 3 Opus was widely characterized by industry analysts as a significant milestone in the competitive landscape of large language models (LLMs). Upon its launch in March 2024, the model was the first to surpass OpenAI’s GPT-4 across several major industry-standard benchmarks, including the Massive Multitask Language Understanding (MMLU) and GSM8K 1. This performance led third-party observers to identify Anthropic as a primary competitor to OpenAI’s established market dominance in the high-performance AI sector 1.
Substantial media and community attention focused on an instance of perceived 'meta-awareness' during internal testing. During a 'needle-in-a-haystack' evaluation—a test requiring a model to retrieve a specific sentence from a vast document—Opus not only located the target information but also remarked that the sentence seemed out of place, suggesting it believed it was being tested by researchers 1. While Anthropic presented this as evidence of advanced reasoning and comprehension, some academic researchers cautioned that such behavior may result from the model’s training on datasets containing descriptions of AI evaluation methodologies 1.
Within the developer and creative communities, Claude 3 Opus received a positive reception for its prose generation and conversational style. Users and independent reviewers frequently described the model’s writing as more 'human-like' and less prone to the formulaic patterns often associated with other LLMs 15. Anthropic asserts that the model excels at 'sophisticated dialogue' and 'detailed content creation,' and early customer feedback indicated the model was 'easier to converse with' and more 'steerable' compared to previous iterations 15.
Economically, the model's integration into the Amazon Bedrock platform expanded its reach within the enterprise sector. Positioned as a high-tier offering, Opus is priced at $15.00 per million input tokens and $75.00 per million response tokens, reflecting its status as a resource-intensive model for complex reasoning 15. The availability of third-party migration services to move applications from OpenAI to Bedrock suggests an increasing market interest in multi-model strategies and a reduction in vendor lock-in within the generative AI industry 15.
Version History
Claude 3 Opus was officially released on March 4, 2024, as the flagship model of Anthropic's third-generation AI suite 17, 18. At launch, the model was configured with a 200,000-token context window and a knowledge cutoff of August 2023 17. It was initially positioned as the company's most advanced tier for high-level reasoning and complex analysis, with API pricing set at $15 per million input tokens and $75 per million output tokens 17.
In April 2024, Anthropic introduced "tool use" (function calling) capabilities across the Claude 3 family, including Opus 19. This update enabled the model to interact with external tools and APIs, facilitating its application in multi-step agentic workflows and structured data extraction 19, 22.
The model's status as Anthropic's highest-performing system changed with the release of Claude 3.5 Sonnet on June 20, 2024 17, 19. Although designated as a mid-tier model, Anthropic stated that Claude 3.5 Sonnet surpassed Claude 3 Opus on standard industry benchmarks, including graduate-level reasoning (GPQA) and undergraduate-level knowledge (MMLU) 19. In internal evaluations of agentic coding, Claude 3.5 Sonnet solved 64% of problems compared to 38% for Claude 3 Opus, while operating at twice the processing speed 19.
Following the introduction of the Claude 4 model family, Anthropic retired Claude 3 Opus on January 5, 2026 23. This retirement was part of a formal deprecation commitment that included "retirement interviews"—structured sessions used to evaluate the model's perspectives on its own obsolescence 23. Despite its retirement, the developer maintained continued access for paid subscribers and via API request, citing the model's "distinctive character," noted for its authenticity and emotional sensitivity 23. By February 2026, the series had transitioned to a stable release of Claude Opus 4.6 18.
Sources
- 1 “Comparison of Claude 3 and GPT-4”. Retrieved March 25, 2026.
On March 4, 2024, Anthropic announced the Claude 3 large language model... The Claude 3 Opus model, an LLM, can match or even surpass GPT-4 in most benchmark performance aspects... Claude 3 has a context processing capability of 200K tokens, significantly better than GPT-4's 128K tokens context limit.
- 4 “Claude 3 Opus - API Pricing & Providers”. Retrieved March 25, 2026.
Released Mar 5, 2024. Knowledge cutoff Aug 31, 2023. 200,000-token context.
- 5 “The Claude 3 Model Family: Opus, Sonnet, Haiku - Anthropic”. Retrieved March 25, 2026.
Like its predecessors, Claude 3 models employ various training methods, such as unsupervised learning and Constitutional AI. These models were trained using hardware from Amazon Web Services (AWS) and Google Cloud Platform (GCP), with core frameworks including PyTorch, JAX, and Triton. A key enhancement in the Claude 3 family is multimodal input capabilities with text output... We support JPEG/PNG/GIF/WebP, up to 10MB and 8000x8000px.
- 9 “Anthropic Claude 3 Opus for Enterprise: Security & Compliance Review”. Retrieved March 25, 2026.
Claude 3 Opus offers SOC 2 Type II, GDPR compliance, zero data retention on API requests... HIPAA-eligible via BAA.
- 11 “Claude 3 Opus (Amazon Bedrock Edition) - AWS Marketplace”. Retrieved March 25, 2026.
Claude is our most powerful model and excels at complex reasoning tasks such as sophisticated dialogue or detailed content creation. Early customers report that Claude is much less likely to produce harmful outputs, easier to converse with, and more steerable. Pricing: Million Input Tokens $15.00, Million Response Tokens $75.00.
- 14 “Introducing Claude 3.5 Sonnet”. Retrieved March 25, 2026.
Today, we’re launching Claude 3.5 Sonnet—our first release in the forthcoming Claude 3.5 model family. Claude 3.5 Sonnet raises the industry bar for intelligence, outperforming competitor models and Claude 3 Opus on a wide range of evaluations... In an internal agentic coding evaluation, Claude 3.5 Sonnet solved 64% of problems, outperforming Claude 3 Opus which solved 38%.
- 15 “Introducing Claude 4”. Retrieved March 25, 2026.
Today, we’re introducing the next generation of Claude models: Claude Opus 4 and Claude Sonnet 4, setting new standards for coding, advanced reasoning, and AI agents.
- 17 “Models overview - Claude API Docs”. Retrieved March 25, 2026.
Claude is a family of state-of-the-art large language models developed by Anthropic. This guide introduces the available models and compares their performance.
- 18 “Anthropic's Claude 3 Opus model now available on Amazon Bedrock”. Retrieved March 25, 2026.
AWS announcement of Claude 3 Opus availability on Amazon Bedrock.
- 19 “Exploring the Capabilities and Potential of Anthropic's Claude 3 AI Models”. Retrieved March 25, 2026.
Overview of the capabilities and potential of Anthropic’s Claude 3 model family.
- 22 “Claude 3 vs GPT 4: Who's ranking better? - Proxet”. Retrieved March 25, 2026.
Benchmark-based analysis comparing Claude 3 and GPT-4.
- 23 “Claude 3 Opus vs GPT-4 Comparison and Review - Facebook”. Retrieved March 25, 2026.
Discussion post comparing Claude 3 Opus and GPT-4.

