⚡ AI Tool
Compute Units Converter
Convert FLOPS, TFLOPS, and PFLOPS and line them up with GPU specs—pairs well with the VRAM estimator.
FAQ
Frequently asked questions
What is a TFLOP in AI?
A TFLOPS (teraFLOP per second) equals one trillion (10¹²) floating-point operations per second. It's the standard unit for quoting AI hardware throughput. For example, the NVIDIA H100 is rated at roughly 204 TFLOPS of standard FP16 (its FP16 Tensor Core throughput is several times higher), while the RTX 4090 delivers about 82.6 TFLOPS.
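As a rough sketch of the conversion this tool performs, assuming standard SI (base-10) prefixes, 1 TFLOPS = 10¹² FLOPS and 1 PFLOPS = 10¹⁵ FLOPS:

```python
# Convert between FLOPS, TFLOPS, and PFLOPS using SI (base-10) prefixes.
UNITS = {"FLOPS": 1.0, "TFLOPS": 1e12, "PFLOPS": 1e15}

def convert(value, from_unit, to_unit):
    """Convert a throughput figure between FLOPS-family units."""
    return value * UNITS[from_unit] / UNITS[to_unit]

# The H100 figure quoted above, 204 TFLOPS, expressed in PFLOPS:
print(convert(204, "TFLOPS", "PFLOPS"))  # 0.204
```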
How much compute is needed to train an LLM?
Training compute scales roughly with model size and training data. GPT-3 (175B parameters) required an estimated 3.14 × 10²³ FLOPs. Larger frontier models like GPT-4 are estimated to require 10²⁴–10²⁵ FLOPs. Even on H100s at realistic utilization, budgets at that scale work out to thousands of GPU-years.
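A common back-of-envelope estimate from scaling-law work is C ≈ 6·N·D FLOPs for N parameters trained on D tokens. The sketch below applies it, assuming an illustrative 40% sustained utilization (real runs vary):

```python
# Back-of-envelope training compute: C ≈ 6 * N * D
# (about 6 FLOPs per parameter per token is a standard rule of thumb).

def training_flops(params, tokens):
    return 6 * params * tokens

def gpu_years(total_flops, gpu_flops, utilization=0.4):
    """GPU-years needed at a given peak rate and assumed utilization."""
    seconds_per_year = 365 * 24 * 3600
    return total_flops / (gpu_flops * utilization * seconds_per_year)

# GPT-3: 175B parameters, ~300B training tokens.
c = training_flops(175e9, 300e9)  # ~3.15e23 FLOPs, matching the figure above
print(gpu_years(c, 989e12))       # at H100 FP16 Tensor Core peak (989 TFLOPS dense)
```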
What is the difference between FP16 and FP32 performance?
FP16 (16-bit floating point) allows GPUs to perform roughly 2–4× more operations per second than FP32 (32-bit), because each number uses half the memory and bandwidth, and Tensor Cores accelerate the lower precision further. AI training and inference have largely shifted to FP16 and BF16 to exploit this performance advantage.
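The bandwidth half of this is easy to see directly: an FP16 value is 2 bytes versus 4 for FP32, so the same memory traffic moves twice as many numbers. A small Python sketch using the standard `struct` module:

```python
import struct

# 'e' is IEEE 754 half precision (FP16), 'f' is single precision (FP32).
fp16_bytes = struct.calcsize("e")
fp32_bytes = struct.calcsize("f")
print(fp16_bytes, fp32_bytes)  # 2 4

# The trade-off is precision: round-tripping through 16 bits loses digits.
x = 3.14159
x16 = struct.unpack("e", struct.pack("e", x))[0]
print(x16)  # ~3.14, only about 3 significant decimal digits survive
```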
How does H100 compare to A100?
The NVIDIA H100 delivers approximately 204 TFLOPS in standard FP16, versus roughly 78 TFLOPS for the A100 — about 2.6× more raw compute (Tensor Core throughput is higher still on both parts). The H100 also has faster memory bandwidth (3.35 TB/s vs 2 TB/s on the SXM variants) and NVLink interconnect improvements that benefit large-model training.
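The comparison can be sketched as simple arithmetic on the peak figures quoted above; real workloads reach only a fraction of peak, so treat the single-GPU day counts as an upper bound on throughput, not a schedule:

```python
# Peak standard-FP16 rates quoted above, in FLOPS.
H100_FP16 = 204e12
A100_FP16 = 78e12

def gpu_days(total_flops, rate):
    """Days a single GPU would need at the given peak rate."""
    return total_flops / (rate * 86400)

budget = 3.15e23  # a GPT-3-scale training budget, in FLOPs
print(round(H100_FP16 / A100_FP16, 1))  # 2.6 (the raw-compute ratio)
print(gpu_days(budget, H100_FP16))      # single-GPU days at peak
```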
What is a petaFLOP-day?
A petaFLOP-day is a unit of total compute equal to 10¹⁵ floating-point operations per second sustained for 24 hours, or 8.64 × 10¹⁹ total FLOPs. It's commonly used to measure AI training runs. GPT-3 required approximately 3,640 petaFLOP-days to train.
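The two GPT-3 figures above are consistent, as a quick conversion shows:

```python
# 1 petaFLOP-day = 1e15 FLOP/s * 86,400 s = 8.64e19 total FLOPs.
PFLOP_DAY = 1e15 * 86400

def to_pflop_days(total_flops):
    return total_flops / PFLOP_DAY

# GPT-3's estimated 3.14e23 training FLOPs:
print(round(to_pflop_days(3.14e23)))  # 3634, i.e. roughly the 3,640 quoted above
```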