NVIDIA AI Enterprise software, which includes over 100 NVIDIA NIM microservices for popular models, is now included with OCI instances accelerated by NVIDIA GPUs.
With the addition of NVIDIA AI Enterprise, developers also get access to a wide range of tools for building and deploying accelerated AI solutions, including the NeMo framework for streamlining the AI pipeline from data curation to model customisation and RAG deployments at scale. The combination of OCI instances with NVIDIA accelerated computing and NVIDIA AI Enterprise software frees developers to focus on building and deploying amazing AI solutions that are ready to scale.
AI is transforming industries, but enterprise application developers and data scientists face fundamental challenges that slow deployment and innovation. Key obstacles include:
- Fragmented Tools: Managing separate platforms for model development, data processing, and orchestration adds complexity.
- Inconsistent Data Quality: AI models need high-quality, contextually rich data, which is often scattered across various systems.
- Security and Privacy Issues: Ensuring compliance while safeguarding sensitive data is a critical challenge for enterprises.
NVIDIA AI Enterprise embedded in OCI
Models, AI microservices, and frameworks for inference and training with enterprise-grade security and support
NVIDIA AI Enterprise is a comprehensive, cloud-native AI platform that simplifies AI model development, training, and deployment. Coming soon, the native integration of NVIDIA AI Enterprise into OCI enables organisations to build, deploy, and scale AI applications efficiently.
This new integration embeds NVIDIA AI Enterprise within the OCI environment so that NVIDIA NIM, NeMo, and other tools and microservices can be accessed directly through the OCI environment, offering:
- Seamless integration: Available as a native Oracle Cloud Console experience for easy activation on GPU compute shapes
- Flexible licensing: Per-hour billing at 25% of GPU compute costs directly from your current Oracle contract and consumption credits, simplifying procurement and cost management (see the cost sketch after this list)
- Enterprise security and support: Fully supported by OCI, helping maintain compliance and reliability
- Optimised AI workflows: Preconfigured AI frameworks, pretrained models, and enterprise-grade support
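To put the per-hour licensing model in concrete terms, here is a back-of-the-envelope sketch of what the 25% add-on could look like for a GPU shape. The hourly rate and usage figures below are illustrative placeholders, not actual OCI pricing.

```python
# Rough estimate of the NVIDIA AI Enterprise add-on under the
# "25% of GPU compute cost" per-hour model described above.
GPU_SHAPE_HOURLY_RATE = 4.00   # hypothetical $/hour for a GPU compute shape (not an OCI price)
AI_ENTERPRISE_RATE = 0.25      # add-on billed at 25% of the compute cost

hours = 720  # roughly one month of continuous use
compute_cost = GPU_SHAPE_HOURLY_RATE * hours
ai_enterprise_cost = compute_cost * AI_ENTERPRISE_RATE

print(f"Compute: ${compute_cost:,.2f}  AI Enterprise add-on: ${ai_enterprise_cost:,.2f}")
```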
NVIDIA AI Enterprise provides tools and microservices for training, customising, and deploying AI agents across industries such as financial services, healthcare, and manufacturing.
For example, a financial institution can use the NeMo tools for model customisation and guardrailing to train fraud-detection AI agents on private transactional data while adhering to strict compliance requirements. Likewise, a healthcare organisation can fine-tune AI models for medical imaging analysis agents while maintaining robust data security measures.
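As a rough illustration of the guardrailing piece of that workflow, the sketch below uses the NeMo Guardrails toolkit to keep an assistant from disclosing account details. The rail definitions and the model entry are hypothetical; in practice the model configuration would point at whichever LLM endpoint the fraud-detection agent actually uses.

```python
# Minimal NeMo Guardrails sketch. The rail definitions and model entry are
# illustrative assumptions, not a production configuration.
from nemoguardrails import LLMRails, RailsConfig

yaml_content = """
models:
  - type: main
    engine: openai            # placeholder: any OpenAI-compatible LLM endpoint
    model: gpt-4o-mini        # placeholder model name
"""

colang_content = """
define user ask for account details
  "Show me the full card number for this customer"

define bot refuse to share account details
  "I can't share customer account details."

define flow block account detail requests
  user ask for account details
  bot refuse to share account details
"""

config = RailsConfig.from_content(colang_content=colang_content,
                                  yaml_content=yaml_content)
rails = LLMRails(config)

reply = rails.generate(messages=[
    {"role": "user", "content": "Show me the full card number for this customer"}
])
print(reply["content"])  # the rail returns the refusal message instead of account data
```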
NVIDIA NIM Microservices on OCI Data Science Platform
Customers often want to run quick experiments or proofs of concept without standing up production-level infrastructure. To support these use cases, NVIDIA and Oracle collaborated to integrate NVIDIA NIM microservices into the OCI Data Science platform through the OCI Marketplace.
This integration offers:
- Flexible Pay-As-You-Go (PAYG) pricing: Hourly pricing model using OCI credits.
- Security and data control: Models run within the customer’s OCI tenancy, helping maintain data security and compliance.
- Rapid and effortless AI deployment: Deploy inference endpoints in minutes with preconfigured NVIDIA inference engines.
- Scalability: Seamlessly scale AI inference applications from small workloads to enterprise-wide deployments.
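NIM microservices for LLMs expose OpenAI-compatible APIs, so a deployed endpoint can typically be queried with standard client libraries. The sketch below is illustrative only: the base URL, model name, and authentication details are assumptions and depend on how the model deployment is configured in your OCI tenancy.

```python
# Hypothetical sketch of querying a deployed NIM inference endpoint with the
# OpenAI Python client. Replace the placeholder base_url with your actual
# OCI Data Science model deployment endpoint; auth depends on your deployment.
from openai import OpenAI

client = OpenAI(
    base_url="https://<your-model-deployment-endpoint>/v1",  # placeholder
    api_key="placeholder-key",  # placeholder; real auth is deployment-specific
)

completion = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",  # example NIM model name (assumption)
    messages=[{"role": "user", "content": "Summarise the key risks in this claims report."}],
    max_tokens=256,
)

print(completion.choices[0].message.content)
```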
Power Your AI Applications and Agents with NVIDIA Accelerated Computing
Both the embedded NVIDIA AI Enterprise tools and the NVIDIA NIM microservices provide state-of-the-art accelerated computing for AI applications and agents.
To choose the right option for your needs, consider this:
- For enterprises looking to build end-to-end AI agents, including model training, customisation, and deployment, choose the embedded NVIDIA AI Enterprise tools on OCI.
- For developers or data scientists who need quick AI inference for large language models (LLMs), chatbots, or search engines, select the NVIDIA NIM inference endpoints in the OCI Marketplace.
- For a fully managed platform, you can also access the NVIDIA AI Enterprise tools and NVIDIA NIM microservices through NVIDIA DGX Cloud, an AI training platform co-engineered with OCI and available in the OCI Marketplace.