NVIDIA
NVIDIA Corporation is an American multinational technology company that pioneered programmable graphics and accelerated computing 2. Founded in 1993 by Jensen Huang, Chris Malachowsky, and Curtis Priem, the company was established with $40,000 in starting capital following a meeting at a Denny's restaurant in San Jose 2. The company was initially named NVision, but the founders adopted the name NVIDIA after discovering the original name was already in use by a paper manufacturer 2. Since its inception, NVIDIA has evolved from a specialized producer of video game hardware into a foundational entity in the global artificial intelligence (AI) landscape 2.
The company's early history was defined by technical challenges and financial instability 2. In the mid-1990s, NVIDIA nearly collapsed when its bet on quadrilateral-based graphics processing was rendered obsolete by Microsoft's decision to standardize on triangle-based rendering 2. To survive, the company laid off half of its workforce and used its remaining capital to produce the RIVA 128, a triangle-based chip that sold one million units within four months and stabilized the company's finances 2. In 1999, the company went public and introduced the GeForce 256, which it marketed as the world's first Graphics Processing Unit (GPU) 2. This specialized processor was designed to complement the general-purpose Central Processing Unit (CPU) by handling complex 3D graphics calculations more efficiently 2.
A pivotal moment in NVIDIA's evolution occurred in 2006 with the introduction of CUDA (Compute Unified Device Architecture) 2. CUDA is a parallel computing platform and API that allows developers to use GPUs for general-purpose processing tasks beyond traditional graphics rendering 2. This software layer enabled the hardware to be used for scientific research, engineering, and financial modeling 2. By the early 2010s, computer scientists had discovered that NVIDIA’s GPUs, when programmed with CUDA, could train neural networks up to 100 times faster than traditional CPUs 2. Following this breakthrough, CEO Jensen Huang directed the company to pivot toward AI hardware applications 2.
By the mid-2010s, NVIDIA had transitioned into an AI-focused organization 2. In 2016, the company delivered its first AI supercomputer to OpenAI, and its hardware subsequently became the primary infrastructure for large language models, including those that power ChatGPT 2. This dominance in the AI sector has led to a trillion-dollar market valuation and significant revenue growth, with its Data Center segment now accounting for approximately 90% of total company revenue 2. Market analysts have characterized the company as the primary "arms dealer" of the AI era due to its market share in high-performance computing hardware 2. Under Huang’s continuous leadership, the company maintains a corporate culture focused on resilience, frequently citing an internal motto that the company is "thirty days from going out of business" to maintain focus and urgency 2.
History
Foundations and Early Graphics (1993–1998)
NVIDIA was founded on April 5, 1993, by Jensen Huang, Chris Malachowsky, and Curtis Priem 4. The founders combined technical backgrounds in chip design and graphics architecture from companies such as LSI Logic, Sun Microsystems, and IBM 4. Huang, who emigrated from Taiwan to the United States in 1972 and attended a boarding school in Kentucky, has served as the company's president and CEO since its inception 2. The organization was established with the vision that the PC would eventually become a consumer device for enjoying games and multimedia, requiring specialized parallel processing beyond the capabilities of contemporary central processing units (CPUs) 4. The name NVIDIA was derived from the Latin word 'invidia' (meaning envy) and the initials 'NV,' representing 'next version' 4.
The company's first product, the NV1, was released in 1995 but struggled commercially as it used quadratic texture mapping rather than the industry-standard triangle-based rendering 4. NVIDIA subsequently shifted its strategy to align with Microsoft's DirectX API, leading to the release of the RIVA 128 and the RIVA TNT series in the late 1990s 4. These products established the company's credibility with original equipment manufacturers (OEMs) such as Dell and Compaq, driving unit growth and revenue into the hundreds of millions by the end of the decade 4.
Invention of the GPU and IPO (1999–2005)
NVIDIA completed its initial public offering (IPO) on the NASDAQ in 1999 under the ticker NVDA 4. That same year, the company launched the GeForce 256, which it marketed as the world's first 'Graphics Processing Unit' (GPU) 4. The GeForce 256 was distinguished by its integrated hardware-based transform and lighting (T&L) engine, which moved intensive geometric calculations from the CPU to the graphics chip 4. This development established GPUs as a separate silicon category and accelerated the growth of the consumer 3D graphics market 4.
During the early 2000s, NVIDIA expanded through strategic design wins and acquisitions. In 2000, the company acquired the intellectual property and other assets of its primary rival, 3dfx Interactive, which had pioneered early 3D acceleration 4. NVIDIA also secured a contract to provide the graphics processor for the original Microsoft Xbox, marking its entry into the gaming console market 4. During this period, the company introduced programmable shaders with the GeForce 3 and 4 series and launched the Quadro brand to target professional visualization markets 4.
The CUDA Pivot and GPGPU (2006–2015)
In 2006, NVIDIA introduced the Compute Unified Device Architecture (CUDA), a parallel computing platform and programming model 4. CUDA enabled developers to utilize the processing power of GPUs for non-graphical applications, a practice known as general-purpose computing on graphics processing units (GPGPU) 4. This move was a strategic shift that allowed NVIDIA to enter the high-performance computing (HPC) and research markets 4. The company launched the Tesla brand of GPUs specifically for data center and scientific workloads, which were adopted by national laboratories for supercomputing tasks 4.
Concurrent with its expansion into data centers, NVIDIA targeted the mobile and automotive industries. The Tegra system-on-a-chip (SoC) line was introduced for mobile devices, and the NVIDIA Drive platform was established to provide the computational foundation for autonomous vehicles 4. By 2010, these initiatives signaled NVIDIA's transition from a PC component manufacturer to a broader platform provider across multiple computing segments 4.
Data Center Expansion and AI Dominance (2016–Present)
NVIDIA's focus shifted significantly toward artificial intelligence and deep learning in the mid-2010s. The 2016 Pascal architecture targeted deep learning workloads, and the 2017 Volta architecture introduced specialized 'Tensor Cores' designed to accelerate the matrix mathematics required for AI training and inference 4. In 2018, the Turing architecture debuted real-time ray tracing (RTX) capabilities, which utilized AI-based denoising to bring realistic lighting effects to mainstream gaming 4.
To strengthen its position in the data center market, NVIDIA completed the acquisition of networking specialist Mellanox Technologies in 2020 for approximately $6.9 billion 4. This acquisition integrated high-performance InfiniBand and Ethernet technologies into NVIDIA's AI clusters, addressing networking bottlenecks in large-scale model training 4. In September 2020, the company announced a $40 billion agreement to acquire Arm Limited; however, the deal was terminated in February 2022 following significant regulatory challenges and antitrust scrutiny 4.
By 2024, NVIDIA's data center business had become its largest revenue source, catalyzed by the global demand for generative AI and large language models (LLMs) 4. The A100 and H100 accelerators became industry standards for AI infrastructure, with data center revenue growing from approximately $10.6 billion in fiscal 2022 to over $47 billion in fiscal 2024 4. In June 2024, the company's market capitalization first exceeded $3 trillion, reflecting its status as a central infrastructure provider for the AI era 4.
Products & Services
NVIDIA's product portfolio is centered on accelerated computing; the company has expanded from consumer graphics to become a primary provider of hardware and software for artificial intelligence (AI) and data center infrastructure. By fiscal year 2024, the organization's data center division had emerged as its largest revenue source, driven by the global demand for large language model (LLM) training and inference 4.
Data Center and AI Hardware
NVIDIA's data center offerings are structured around its GPU architectures, which are designed to handle massively parallel workloads. The Hopper architecture, introduced in 2022 with the H100 GPU, became an industry standard for AI training due to its inclusion of Transformer Engines and fourth-generation Tensor Cores 4. The subsequent H200 model increased memory capacity to 141GB of HBM3e, offering 4.8TB/s of bandwidth 6.
In 2024, the company announced the Blackwell architecture. The Blackwell B200 GPU utilizes a multi-die design and provides up to 2.2 petaFLOPS of FP32 Tensor Core performance, which NVIDIA states is a significant increase over the 989 teraFLOPS provided by the H200 6. For extremely large-scale deployments, the GB200 NVL72 platform connects 36 Grace CPUs and 72 Blackwell GPUs into a single liquid-cooled rack that functions as a unified accelerator 5, 6.
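As a rough illustration of what the quoted peak figures imply, the arithmetic below compares the two generations. These are marketing peak numbers, not sustained throughput:

```python
# Back-of-envelope comparison using the peak figures quoted above.
# These are vendor peak numbers, not sustained real-world throughput.

H200_TENSOR_TFLOPS = 989    # FP32 Tensor Core peak, in teraFLOPS
B200_TENSOR_TFLOPS = 2200   # 2.2 petaFLOPS, expressed in teraFLOPS

speedup = B200_TENSOR_TFLOPS / H200_TENSOR_TFLOPS
print(f"B200 vs H200 peak Tensor throughput: {speedup:.2f}x")  # ~2.22x

# Aggregate peak for one GB200 NVL72 rack (72 Blackwell GPUs):
rack_pflops = 72 * B200_TENSOR_TFLOPS / 1000
print(f"GB200 NVL72 aggregate: ~{rack_pflops:.0f} petaFLOPS")  # ~158
```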
NVIDIA also produces "Superchips," such as the Grace Hopper (GH200), which integrates an Arm-based Grace CPU with a Hopper GPU using a high-speed coherent interconnect 1, 4. This design aims to reduce data movement bottlenecks between the processor and graphics memory 4. Hardware is typically deployed via the DGX platform for enterprise AI development or the HGX and MGX modular server architectures for cloud service providers 1, 6.
Consumer Graphics and Gaming
The GeForce brand remains NVIDIA's primary consumer-facing product line. The GeForce RTX 40-series utilizes the Ada Lovelace architecture, which features specialized RT Cores for real-time ray tracing and Tensor Cores for AI-driven tasks 4. A central component of this ecosystem is Deep Learning Super Sampling (DLSS), a neural rendering technology that uses AI to upscale lower-resolution images, thereby increasing frame rates in gaming applications 1.
Beyond hardware, NVIDIA provides the GeForce NOW cloud gaming service, which allows users to stream games from remote RTX-powered servers 1. The company also targets content creators through the NVIDIA Studio platform, which includes specialized drivers and software like NVIDIA Broadcast and RTX Remix for remastering legacy games 1.
Networking and Interconnects
Following the $6.9 billion acquisition of Mellanox Technologies in 2020, NVIDIA expanded into high-performance networking 4. These products are used to link thousands of GPUs into cohesive AI clusters. Key offerings include:
- InfiniBand: A low-latency, high-bandwidth communication standard favored for AI supercomputing 4.
- Ethernet: The Spectrum-X platform is designed specifically to optimize Ethernet performance for multi-tenant AI clouds 1.
- NVLink and NVSwitch: Proprietary interconnect technologies that allow GPUs within a server or across multiple racks to share memory and data at speeds up to 1.8TB/s in the Blackwell generation 4, 6.
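Assuming the peak link rates cited above were achievable end to end (a simplification that ignores protocol overhead), the bandwidth difference translates directly into data-movement time:

```python
# Why interconnect bandwidth matters: approximate time for a GPU to move
# one full H200's worth of HBM3e memory (141 GB) over NVLink, comparing
# the Hopper (900 GB/s) and Blackwell (1.8 TB/s) generations.
# Idealized: assumes the full peak rate with no protocol overhead.

memory_gb = 141
nvlink_gbps = {"Hopper": 900, "Blackwell": 1800}  # GB/s

transfer_ms = {}
for gen, bw in nvlink_gbps.items():
    transfer_ms[gen] = memory_gb / bw * 1000  # milliseconds
    print(f"{gen}: ~{transfer_ms[gen]:.0f} ms to move {memory_gb} GB")
```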
Software and Platforms
NVIDIA's software ecosystem, led by the Compute Unified Device Architecture (CUDA) released in 2006, serves as a primary driver of developer retention. CUDA allows programmers to use C, C++, and Fortran to execute general-purpose code on GPUs 4.
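The execution model CUDA exposes can be sketched in plain Python: a "kernel" is one function executed by many threads, each deriving the array element it owns from its block and thread indices. The names `saxpy_kernel` and `launch` below are illustrative, not CUDA APIs; real CUDA code is written in C/C++ (or via bindings) and runs the iterations in parallel on the GPU.

```python
# CPU-side emulation of the CUDA execution model (illustrative only).
# In CUDA C++ this would be: saxpy_kernel<<<grid_dim, block_dim>>>(...)

def saxpy_kernel(block_idx, block_dim, thread_idx, a, x, y, out):
    i = block_idx * block_dim + thread_idx  # global thread index
    if i < len(x):                          # guard against overrun
        out[i] = a * x[i] + y[i]

def launch(kernel, grid_dim, block_dim, *args):
    # A kernel launch maps to nested loops here; on a GPU these
    # iterations execute concurrently across thousands of cores.
    for b in range(grid_dim):
        for t in range(block_dim):
            kernel(b, block_dim, t, *args)

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [10.0, 20.0, 30.0, 40.0, 50.0]
out = [0.0] * len(x)
launch(saxpy_kernel, 2, 3, 2.0, x, y, out)  # 2 blocks of 3 threads
print(out)  # [12.0, 24.0, 36.0, 48.0, 60.0]
```

The guard `i < len(x)` mirrors a standard CUDA idiom: the launch grid (2 × 3 = 6 threads) may be larger than the data, so out-of-range threads do nothing.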
For enterprise customers, the organization offers NVIDIA AI Enterprise, a cloud-native software suite that includes NVIDIA NIM (microservices for model deployment), BioNeMo for life sciences, and various SDKs 1, 7. Pricing for this suite is typically structured as follows:
- Self-managed systems: A one-year subscription costs approximately $4,500 per GPU, while a perpetual license with five years of support is priced at $22,500 per GPU 7.
- Cloud-hosted systems: Available on major platforms like AWS, Azure, and Google Cloud for approximately $1 per hour per GPU in addition to instance costs 7.
- Bundled Licensing: Certain high-end hardware, such as the H100 and H200 NVL, include a five-year subscription to the software suite 7.
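A back-of-envelope comparison of the list prices above (illustrative only; actual quotes and cloud instance costs vary):

```python
# Comparing the NVIDIA AI Enterprise pricing options quoted above.
# List prices per GPU; real-world quotes may differ.

annual_subscription = 4_500  # $/GPU/year, includes support
perpetual_5yr = 22_500       # $/GPU, perpetual license + 5 yrs support
cloud_rate = 1.0             # $/GPU/hour, excludes CSP instance cost

# Over five years, the subscription and perpetual options cost the same:
print(5 * annual_subscription == perpetual_5yr)  # True

# Cloud hours per year that equal one annual subscription:
breakeven_hours = annual_subscription / cloud_rate
print(breakeven_hours)  # 4500.0 (~51% of the 8,760 hours in a year)
```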
Other major software platforms include NVIDIA Omniverse, used for building industrial digital twins and 3D simulation workflows, and NVIDIA DRIVE, a full-stack solution for autonomous vehicle development including in-vehicle computing and simulation software 14.
Corporate Structure
NVIDIA is headquartered at 2788 San Tomas Expressway in Santa Clara, California 7. The organization is led by co-founder Jensen Huang, who has served as president and chief executive officer since the company’s inception in 1993 4, 2. The corporate leadership structure includes a twelve-member board of directors that is subject to annual election by stockholders 1, 3. Huang's tenure of over three decades is a defining characteristic of the company's executive stability 2.
The corporation’s business operations are organized into four primary market segments: Data Center, Gaming, Professional Visualization, and Automotive 4, 8. While the company's historical roots are in consumer graphics, its corporate focus has increasingly shifted toward its Data Center division. By fiscal year 2024, the Data Center segment had become the company's largest business unit, generating over $47 billion of the firm's $60.9 billion in total annual revenue 4. This transition reflects the company's move to a full-stack computing model that integrates hardware, networking, and software systems 4, 5.
NVIDIA is a publicly traded corporation listed on the Nasdaq Global Select Market under the ticker symbol NVDA 7. As of February 21, 2025, the company had approximately 24.4 billion shares of common stock outstanding 7. In mid-2025, the company’s market capitalization reached a peak of over $3 trillion 4. The company’s corporate growth has been supplemented by strategic acquisitions, most notably the $6.9 billion purchase of Mellanox Technologies in 2020, which expanded its capabilities in high-performance networking 4. A proposed $40 billion acquisition of the semiconductor design firm Arm was terminated in 2022 following extensive regulatory scrutiny 4.
The company maintains a broad corporate ecosystem that NVIDIA states includes approximately 40,000 companies and 5 million developers 5. It manages strategic investments through its corporate venture capital arm, NVentures, and its Inception program for startups 9, 11. NVentures significantly increased its investment pace in 2025, participating in approximately 30 venture deals compared to only one in 2022 11. Major equity interests include strategic investments in artificial intelligence firms such as OpenAI, xAI, and Anthropic 11. In late 2025, NVIDIA committed to a $10 billion investment in Anthropic as part of a partnership that includes the deployment of NVIDIA's Blackwell-architecture systems 11. The company also maintains deep partnerships with major cloud infrastructure providers, including Amazon Web Services, Microsoft Azure, Google Cloud, and Oracle 4.
Research & Development
NVIDIA's research and development (R&D) strategy is centered on the paradigm of "accelerated computing," which utilizes specialized hardware to perform complex computational tasks more efficiently than a general-purpose Central Processing Unit (CPU) 2. The organization's early research focused on three-dimensional graphics rendering, initially exploring the use of quadrilaterals before standardizing on triangle-based polygons 2. A pivotal development in the company's R&D history was the 2006 introduction of CUDA (Compute Unified Device Architecture), a parallel computing platform and application programming interface (API) 2. CUDA enabled the transition to General-Purpose computing on Graphics Processing Units (GPGPU), allowing developers to apply GPU acceleration to scientific research, engineering, and financial modeling 2.
During the early 2010s, NVIDIA's R&D focus shifted toward artificial intelligence after internal research demonstrated that neural networks could be trained up to 100 times faster on GPUs than on traditional CPUs 2. This shift led the company to prioritize hardware for AI applications over its traditional graphics business, culminating in the 2016 delivery of the first AI supercomputer to OpenAI 2. This research infrastructure supported the development of large language models (LLMs) and generative AI 2, 4. Proprietary technological developments resulting from this focus include Tensor Cores, which are specialized hardware components designed to accelerate deep learning calculations, and Ray Tracing (RTX) technology for simulating the physical behavior of light in graphics 2, 4.
The company's R&D culture is characterized by an emphasis on "choosing to do hard things" and pursuing experimental products that may lack an existing market or competitor 2. Management encourages a flattened information structure where employees provide weekly updates on their most significant projects directly to executive leadership 2. To iterate on technical challenges, NVIDIA utilizes "failure presentations," an internal practice where teams analyze unsuccessful initiatives to identify the specific decisions that led to the outcome 2. Furthermore, the organization employs a speed-oriented planning methodology that requires project leads to first determine the fastest possible timeline for development before accounting for resource limitations 2.
Safety & Ethics
NVIDIA’s safety and ethics governance is structured around internal red teaming, the development of alignment software, and compliance with international trade regulations. The company maintains a dedicated "Trustworthy AI" process intended to assess risks such as bias, toxicity, and technical vulnerabilities before product release 12.
Safety Governance and Red Teaming
NVIDIA manages its AI security through the NVIDIA AI Red Team (AIRT), a cross-functional group comprising offensive security professionals and data scientists 10. The team’s role is to evaluate machine learning systems for potential weaknesses, such as remote code execution (RCE) and insecure data retrieval, before they reach production 11. AIRT employs a "limit-seeking" methodology that manually tests model boundaries to identify non-standard behaviors 12.
Research conducted by NVIDIA-affiliated scientists characterizes this practice as an "alchemist mindset," where testers treat large language models (LLMs) as chaotic systems to be explored rather than rational engines 12. Key findings from these assessments have led to NVIDIA recommending specific technical mitigations, such as avoiding the use of exec or eval functions on LLM-generated output to prevent RCE vulnerabilities 11.
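The exec/eval mitigation mentioned above can be illustrated concretely: when a model is asked to return structured data, parse its output with a non-executing parser rather than evaluating it. The example prompt-injection payload below is invented for illustration.

```python
# Illustrating the RCE mitigation described above: never pass model
# output to eval()/exec(); parse it with a non-executing parser instead.

import ast
import json

llm_output = '{"action": "resize", "width": 640, "height": 480}'

# Unsafe pattern (what the guidance warns against):
#   params = eval(llm_output)
# A prompt injection could make llm_output something like
#   '__import__("os").system("...")', which eval() would execute.

# Safe patterns: json.loads and ast.literal_eval accept only data
# literals and raise an error on anything executable.
params = json.loads(llm_output)
print(params["action"])  # resize

try:
    ast.literal_eval('__import__("os").system("echo pwned")')
except ValueError:
    print("literal_eval rejected executable input")
```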
AI Alignment and NeMo Guardrails
For operational safety, NVIDIA developed NeMo Guardrails, an open-source library designed to add programmable safety layers to LLM applications 5. The library is intended to prevent models from discussing restricted topics, identify jailbreak attempts, and filter personally identifiable information (PII) 5, 6.
According to NVIDIA, NeMo Guardrails operates through a multi-step process: it first analyzes the user's input for intent, checks it against predefined safety policies, and then validates the model's output before it is delivered to the user 5. To address the latency challenges of real-time AI, NVIDIA introduced a "streaming mode" that performs incremental validation on chunks of text, though the company acknowledges that larger chunk sizes provide more reliable safety checks at the cost of higher latency 7.
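The described control flow can be sketched as a minimal pipeline. This mimics only the multi-step flow from the paragraph above; it is not the NeMo Guardrails API (which configures policies via Colang files), and the topic lists and function names are invented for illustration.

```python
# Minimal sketch of a guardrails-style flow: check input, call the
# model, validate output. Not the NeMo Guardrails API; illustrative only.

BLOCKED_TOPICS = {"weapons", "self-harm"}

def check_input(user_text: str) -> bool:
    return not any(t in user_text.lower() for t in BLOCKED_TOPICS)

def validate_output(model_text: str) -> bool:
    # A real system would run PII and policy filters here.
    return "ssn" not in model_text.lower()

def guarded_chat(user_text, model):
    if not check_input(user_text):
        return "Sorry, I can't help with that topic."
    reply = model(user_text)
    if not validate_output(reply):
        return "[response withheld by output filter]"
    return reply

# Streaming variant: validate incrementally on chunks. Larger chunks
# give the filter more context but add latency, as the section notes.
def guarded_stream(chunks, chunk_size=2):
    buffer = []
    for chunk in chunks:
        buffer.append(chunk)
        if len(buffer) >= chunk_size:
            text = "".join(buffer)
            if not validate_output(text):
                yield "[stream stopped by output filter]"
                return
            yield text
            buffer = []
    if buffer:
        yield "".join(buffer)

print(guarded_chat("hello", lambda t: "hi there"))  # hi there
```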
Geopolitical Compliance and Export Controls
NVIDIA’s safety operations extend to geopolitical ethics and regulatory compliance, particularly regarding U.S. Department of Commerce export controls 4. Following restrictions on the export of high-performance A100 and H100 GPUs to China, NVIDIA developed region-compliant variants, such as the H20 and L20, which feature reduced interconnect speeds to meet U.S. regulatory thresholds 4, 8.
In early 2026, a shift in U.S. policy moved from blanket bans to a "controlled access" framework involving a case-by-case review 9. This framework imposes a 50% volume cap, stipulating that the total processing performance exported to China cannot exceed half of what is shipped to U.S. customers for domestic use 8, 9. In response to these evolving regulations and high Western demand, NVIDIA reportedly halted the production of China-specific H200 variants to prioritize its next-generation "Rubin" platform 9. Third-party analysts from the Council on Foreign Relations have characterized this regulatory environment as a "technological arms race" that requires NVIDIA to balance commercial interests with national security certifications 8.
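The 50% volume cap reduces to a simple comparison of total processing performance (TPP) shipped to each market. The function and figures below are invented for illustration of the rule as described:

```python
# Sketch of the 50% volume-cap rule described above: TPP exported to
# China may not exceed half of U.S. domestic shipments. The figures
# are arbitrary illustration, not real shipment data.

def within_volume_cap(tpp_to_china: float, tpp_domestic: float) -> bool:
    return tpp_to_china <= 0.5 * tpp_domestic

print(within_volume_cap(tpp_to_china=40, tpp_domestic=100))  # True
print(within_volume_cap(tpp_to_china=60, tpp_domestic=100))  # False
```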
Ethical Commitments and Sustainability
Beyond technical safety, NVIDIA’s corporate commitments include environmental sustainability in hardware design 4. The company aims to sustain performance-per-watt gains through annual architecture updates, such as the transition from Hopper to Blackwell, to reduce the energy density required for trillion-parameter model training 4. NVIDIA states that its IGX platform and other edge-computing technologies are also designed with functional safety standards for sectors like healthcare and autonomous vehicles, where hardware failure poses direct physical risks 1.
Reception & Controversies
NVIDIA's rapid ascent to a valuation exceeding $3 trillion by mid-2025 has been accompanied by significant industry scrutiny regarding its market dominance 4. The organization holds more than 80% of the market share for discrete GPUs used in gaming and artificial intelligence, a position that has led to international regulatory concerns over anti-competitive practices 4. A primary example of this regulatory resistance was the attempted $40 billion acquisition of Arm in 2020, which NVIDIA was forced to terminate in 2022 following intense antitrust scrutiny from multiple jurisdictions 4.
A central component of NVIDIA's market position is the CUDA software ecosystem, which launched in 2006 4. While industry figures credit the platform with enabling the general-purpose use of GPUs for high-performance computing, it is also frequently characterized as a "moat" that creates long-term developer lock-in 3, 4. AI researchers have noted that the robustness of the developer community makes NVIDIA hardware "sticky," as most AI workloads and software tools are optimized specifically for NVIDIA's proprietary architecture 3. This dominance has prompted competitors, such as AMD with the MI300 series and various cloud service providers developing custom silicon, to intensify efforts to create alternative hardware ecosystems 4.
The organization has also experienced historical friction with its consumer base in the gaming industry. Between 2020 and 2023, NVIDIA faced significant supply-chain constraints and volatility driven by the cryptocurrency mining boom 4. During this period, the gaming community expressed public criticism regarding the lack of retail availability and inflated pricing for GeForce graphics cards, as hardware was frequently diverted to mining operations 4. While NVIDIA attempted to optimize supply and prioritize data-center shipments, the period was marked by notable consumer dissatisfaction with the brand's availability for non-enterprise users 4.
External geopolitical factors have further shaped the company's public and industrial reception. Since 2022, United States export controls on high-end GPU shipments to China have constrained NVIDIA's addressable market for its most advanced accelerators, such as the A100 and H100 4. NVIDIA has responded by shipping region-compliant variants to maintain market presence, though analysts note these restrictions remain a persistent challenge to the company's global shipment strategy 4. Despite these controversies, independent assessments often frame NVIDIA's transition from a niche graphics vendor to an AI infrastructure leader as one of the most successful strategic pivots in corporate history 3.
Societal Impact
NVIDIA's transition from a gaming-focused hardware manufacturer to a provider of artificial intelligence (AI) infrastructure has resulted in significant societal and economic shifts. The organization's influence extends across the global startup ecosystem, environmental standards in high-performance computing (HPC), and professional workflows in creative and technical industries 4, 5.
Economic Impact and Startup Ecosystem
NVIDIA exerts influence on the global AI economy through its Inception program, a corporate initiative that provides technical guidance and resources to thousands of early-stage startups 5. The organization also maintains specialized resources for venture capitalists to facilitate investment in AI-driven companies 5. In fiscal year 2024, NVIDIA's data center division reached over $47 billion in revenue, reflecting its role as a primary infrastructure provider for hyperscalers and new AI enterprises 4. This growth has positioned the company as a central player in the shift toward generative AI, which NVIDIA states is transforming major global industries 5.
Environmental Sustainability and Energy Efficiency
In the field of high-performance computing, the energy efficiency of hardware is a primary concern for large-scale data center operations. NVIDIA states that its accelerated computing platforms are designed to perform complex computational tasks more efficiently than general-purpose processors 5. Systems utilizing NVIDIA's GPU-acceleration technology frequently occupy prominent positions on the Green500 rankings, an industry list that evaluates the power efficiency of the world's most powerful supercomputers 5. By integrating networking technologies from its Mellanox acquisition, the company provides high-performance fabrics intended to optimize the energy consumption of scale-out AI clusters 4.
Workforce and Creative Industry Influence
NVIDIA's technologies have altered professional workflows in design, entertainment, and healthcare. The introduction of real-time ray tracing (RTX) and specialized professional visualization tools changed standard practices in digital content creation and architectural design 4, 5. To address the resulting shift in labor requirements, the organization provides technical training programs for IT professionals and data scientists, as well as specialized services for professional data science teams 5. Furthermore, NVIDIA's hardware is a primary component in sovereign AI initiatives, where national governments utilize accelerated computing to develop localized AI capabilities and workforce expertise 4.
Sources
- [1] “The Story of Jensen Huang and NVIDIA”. Retrieved March 25, 2026.
NVIDIA was founded in 1993 by Jensen Huang, Chris Malachowsky, and Curtis Priem. The two other co-founders, who brought experience from Sun Microsystems and IBM, met with Huang at a Denny's just outside of San Jose. While eating diner food and drinking cheap coffee, the trio founded NVIDIA with $40,000 in starting capital.
- [2] “What is Brief History of NVIDIA Company?”. Retrieved March 25, 2026.
Founded on April 5, 1993 by Jensen Huang, Chris Malachowsky, and Curtis Priem... GeForce 256 introduced in 1999 (marketed as the 'first GPU')... 2006 CUDA debuted, unlocking general-purpose GPU computing... 2020 acquisition of Mellanox for about $6.9B... terminated Arm acquisition in 2022.
- [3] “NVIDIA Corporation: History”. Retrieved March 25, 2026.
NVIDIA APIs Explore, test, and deploy AI models and agents... DGX Platform Enterprise AI factory for model development and deployment... DLSS Neural rendering tech boosts FPS and enhances image quality... DRIVE AGX Powerful in-vehicle computing for AI-driven autonomous vehicle systems.
- [4] “NVIDIA Blackwell vs NVIDIA Hopper: A Detailed Comparison”. Retrieved March 25, 2026.
NVIDIA GB200 NVL72... Learn the key differences between NVIDIA's Hopper and Blackwell architectures.
- [5] “Comparing NVIDIA Tensor Core GPUs - NVIDIA B300, B200, H200, H100, A100”. Retrieved March 25, 2026.
NVIDIA B200 FP32 Tensor Core 2.2 petaFLOPS vs NVIDIA H200 989 teraFLOPS. Interconnect NVLink 1.8TB/s for Blackwell vs 900GB/s for Hopper. B200 has 192GB HBM3e while H200 has 141GB HBM3e.
- [6] “Pricing — NVIDIA Enterprise Licensing Guide”. Retrieved March 25, 2026.
Subscription (Includes support) 1 year: $4,500 / GPU. Perpetual 5 years support: $22,500 / GPU. Cloud-hosted Production: $1 / hour / GPU + CSP Instance Cost(s). Included with NVIDIA H100, H200 NVL.
- [7] “NVIDIA Corporation - Governance - Board of Directors”. Retrieved March 25, 2026.
NVIDIA Corporation - Governance - Board of Directors... Election of twelve directors nominated by the Board of Directors
- [8] “nvda-20240514 - SEC.gov”. Retrieved March 25, 2026.
Election of twelve directors nominated by the Board of Directors... Notice of 2024 Annual Meeting of Stockholders.
- [9] “2024 NVIDIA Corporation Annual Review”. Retrieved March 25, 2026.
The NVIDIA ecosystem spans nearly 5 million developers and 40,000 companies.
- [10] “FORM 10-K NVIDIA CORPORATION”. Retrieved March 25, 2026.
2788 San Tomas Expressway, Santa Clara, California 95051... The number of shares of common stock outstanding as of February 21, 2025 was 24.4 billion.
- [11] “nvda-20251026 - SEC.gov”. Retrieved March 25, 2026.
NVIDIA report detailing segment reporting and financial conditions for fiscal year 2025/2026.
- [12] “NVIDIA Venture Capital: NVentures”. Retrieved March 25, 2026.
NVIDIA Venture Capital - NVentures... We invest in technology visionaries solving complex problems.

