Deep learning has historically been dominated by NVIDIA GPUs. CUDA, NVIDIA's proprietary API for writing code that runs on its graphics hardware, is tightly integrated into all the major deep learning toolkits and provides a relatively intuitive programming interface compared to OpenCL. For a more in-depth discussion of the history of GPGPU programming and the potential for an interoperable, open-source GPU programming future, check out this YouTube video.
However, CUDA is proprietary, only works on NVIDIA GPUs, and requires proprietary Linux drivers. Many people, myself included, object to the monopolistic hold NVIDIA has established on the deep learning infrastructure market and to its non-open practices. In addition, using CUDA can be a flat-out pain on the administration side. In my experience, the CUDA utilities integrate poorly with package managers: I have had a number of issues removing CUDA or replacing it with a new version, where installation pulled in a large number of additional packages but removal only uninstalled a few of them.
AMD HIP/ROCm is slightly pickier than CUDA about the hardware it will run on. RX 5xx GPUs, RX 4xx GPUs, and the R9 3x0 series cannot run on older CPUs that lack PCIe v3 atomics support. Newer GPUs like the Vega 56, Vega 64, Vega Frontier Edition, and Radeon VII can run in a mode without PCIe v3 atomics support, at a performance penalty.
CPUs with PCIe v3 atomics support include all Ryzen CPUs as well as all Intel CPUs from Haswell onward (i.e. Core i-series 4000 and newer). For more information on supported hardware, check out this page.
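As a quick sanity check, you can ask the kernel whether your PCIe devices and root ports advertise atomic-op support. This is a sketch assuming a Linux box with `lspci` installed; the exact field layout varies with kernel and pciutils versions:

```shell
# Show PCIe atomic-op capabilities advertised by devices and root ports.
# "AtomicOpsCap" appears under DevCap2 in verbose lspci output;
# root privileges are usually needed to read the extended capability space.
sudo lspci -vvv 2>/dev/null | grep -iE "AtomicOpsCap|AtomicOpsCtl"
```

If the root port upstream of the GPU reports something like `AtomicOpsCap: 32bit+ 64bit+`, the platform should satisfy ROCm's atomics requirement. Once ROCm is installed, `rocminfo` is another way to confirm that the runtime actually sees the GPU.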
Any RGB was merely an accident of pricing; I simply went with the best performance for the money. In addition, a blower-style Vega 56 was used instead of an open-air Vega 56 like those by PowerColor, since the case has relatively poor airflow. Getting hot air out of the case was deemed much more important for sustained workload performance.