By Consultants Review Team
E2E Networks, a firm listed on the NSE, declared on Thursday that it was the first company to bring NVIDIA H200 Tensor Core GPUs to the Indian market.
According to Tarun Dua, co-founder and managing director of E2E Networks, the NVIDIA H200 GPUs allow the company to provide industry-leading high-performance, scalable, and robust infrastructure. With revolutionary performance and memory capacity, the H200 GPU is designed to accelerate the most demanding AI and HPC workloads, he said, adding that this will let companies take on increasingly complex AI models and spur innovation among startups, MSMEs, and large corporations alike.
To buy the GPUs, the company raised Rs 420 crore from the market. E2E's chief revenue officer, Kesava Reddy, said the company has already received 256 H200 GPUs and that more will be acquired shortly.
TIR, the AI development studio and flagship offering of E2E Cloud, will be the first platform in India to use H200 GPUs, giving developers access to state-of-the-art hardware.
This will enable developers to train foundational AI models. The company expects the H200 GPU to become a catalyst for AI training in India, powering the training and inference of large language models (LLMs) and large vision models (LVMs).
Startups, SMEs, and major corporations from the US, Asia Pacific, the Middle East, and India use E2E Cloud.
"E2E's infrastructure expansion to include NVIDIA H200 GPUs is helping to build the foundation for India's AI-powered future, bringing powerful cloud services to enterprises and startups across the region," said Vishal Dhupar, managing director, Asia South, NVIDIA.
The NVIDIA H200 GPU cluster is designed to accelerate generative AI workloads and is interconnected with NVIDIA Quantum-2 InfiniBand networking.
Compared with NVIDIA H100 Tensor Core GPUs, the H200 offers up to 1.9X better inference performance, 141 GB of GPU memory, and 4.8 TB/s of memory bandwidth.
It is the first GPU to feature HBM3e memory, which is faster and larger than previous generations, and was designed to meet the growing demand for complex simulations, real-time AI inference, and other compute-intensive workloads.