A.I. Architecture

The challenge is producing computers specific to building A.I. systems, as the tasks involved are computationally intensive.
{br}
# Speed-of-light limits on computing - memory speed and latency
# Parallel processing where possible - cores, like GPUs and T(ensor)PUs alongside CPUs (see the sketch below)
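For a minimal illustration of the parallel-processing point above - spreading work across CPU cores, the same principle GPUs and TPUs apply at far larger scale - here is a hedged Python sketch using only the standard library; the work() function and the input sizes are placeholders, not part of any particular build.
```python
# Minimal sketch: spread a toy workload across all available CPU cores.
# The work() function stands in for any per-item computation.
from multiprocessing import Pool, cpu_count

def work(n: int) -> int:
    # Placeholder for a compute-heavy task (e.g. processing one training example).
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    inputs = [100_000] * 64
    with Pool(processes=cpu_count()) as pool:   # one worker process per core
        results = pool.map(work, inputs)        # items processed in parallel
    print(f"processed {len(results)} items on {cpu_count()} cores")
```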
{br}
# CPU: high core count and good single-core performance. Ryzen 5 and Intel Core i5 CPUs offer good value.
# RAM: 16GB of RAM is a good start; consider investing in 32GB if you plan to train larger models.
# Storage: A SATA SSD will be sufficient for most tasks, but an NVMe SSD will offer faster data access speeds.
# GPU: a dedicated card such as an RTX 3060 or RX 6600 can significantly improve training speed.
{br}
# 1 petabyte of training data minimum.
{br}
# A location with maximal solar or wind power potential.
{br}
# Distributed computing software and clustering: MPI (Message Passing Interface), frameworks such as TensorFlow Distributed or Spark, and cluster management software like Slurm or Torque (see the sketch below).
# RAID 10, mdadm
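As a sketch of what MPI-based clustering looks like in code, the example below uses mpi4py (one common Python MPI binding) to average a locally computed gradient across all nodes with an all-reduce - the core communication step of data-parallel training. The array contents are illustrative only.
```python
# Sketch of data-parallel gradient averaging over MPI.
# Run across the cluster with e.g.: mpirun -np 4 python this_script.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()          # this node's id within the cluster
size = comm.Get_size()          # total number of nodes

# Each node computes its own local "gradient" on its shard of the data.
local_grad = np.full(4, float(rank))

# Sum the gradients from every node, then divide to get the average.
global_grad = np.empty_like(local_grad)
comm.Allreduce(local_grad, global_grad, op=MPI.SUM)
global_grad /= size

if rank == 0:
    print("averaged gradient:", global_grad)
```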
{br}
'''Example Builds'''
{br}
High-end:
{br}
# CPU: Intel Core i9-13900K or AMD Ryzen 7 7800X
# Motherboard: Asus ROG Strix Z790-A WiFi D4
# RAM: 32GB DDR5 Corsair Vengeance RGB Pro
# GPU: Nvidia RTX 3080 or AMD RX 6800 XT
# Storage: 1TB Samsung 980 Pro PCIe NVMe SSD + 4TB Seagate Barracuda HDD
# PSU: 850W Corsair RM850x Gold
{br}
Mid-range:
{br}
# CPU: AMD Ryzen 5 7600X or Intel Core i7-13700K
# Motherboard: MSI B650M Mortar WiFi or Asus TUF Gaming B660M-Plus WiFi
# RAM: 32GB DDR4 G.Skill Trident Z Neo
# GPU: Nvidia RTX 3070 or AMD RX 6700 XT
# Storage: 500GB Samsung 970 Evo Plus PCIe NVMe SSD + 2TB Seagate Barracuda HDD
# PSU: 650W Seasonic Focus GX-650 Gold
{br}
Budget:
{br}
# CPU: AMD Ryzen 5 5600X or Intel Core i5-12400
# Motherboard: MSI B550M Mortar WiFi or Asus B660M-Plus WiFi
# RAM: 16GB DDR4 Crucial Ballistix RGB
# GPU: Nvidia RTX 3060 or AMD RX 6600 XT
# Storage: 500GB Samsung 970 Evo Plus PCIe NVMe SSD + 1TB Seagate Barracuda HDD
# PSU: 550W Corsair CX550M
{br}
Hook the machines up in a conventional network and then utilize a distributed computing framework. Install the chosen framework on each computer and configure it to recognize the other machines as part of the cluster (a minimal example follows below). Allocate one machine as a NAS, using the motherboard with the most onboard SATA ports plus PCIe SATA expansion cards. The other computers are about CPU cores, GPU cores and maximum RAM. GPU utilization depends on the software.
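If TensorFlow Distributed is the chosen framework, one way each machine can be told about the others is the TF_CONFIG environment variable, as in the sketch below; the hostnames, port and task index are placeholders for whatever the real cluster uses.
```python
# Sketch: describe the cluster to TensorFlow via TF_CONFIG on each node.
# Hostnames, port and the task index are placeholders for the real cluster.
import json
import os

import tensorflow as tf

os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {
        # Every machine lists the same set of workers, in the same order.
        "worker": ["node0.local:12345", "node1.local:12345", "node2.local:12345"],
    },
    # Only the index changes per machine: 0 on node0, 1 on node1, and so on.
    "task": {"type": "worker", "index": 0},
})

# Each worker then builds its model inside the same distribution strategy.
strategy = tf.distribute.MultiWorkerMirroredStrategy()
with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(8,))])
    model.compile(optimizer="sgd", loss="mse")
```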
{br}
!!Software
{br}
Software has become secondary to hardware, and software for A.I. would probably require grid computing in exchange for unrestricted model access. Each node would have to satisfy minimum requirements to be accepted into the grid. While the models are accessible to the grid, the secret sauce stays with the author. The grid acts as a workshop, holding the petabytes of training data, and as an A.I. training supercomputer. The result is plopped into the distributed leaderboard folder, where all the trained models go; the models are restricted to the OS and every model is graded. A general user would go to the leaderboard folder and run the latest models. The incentive is to beat the best model. In the modern day, it is all about creating the white paper and presenting it to key people for support and funding. In the past, anyone could release and gain public support organically.
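As a rough sketch of how such a leaderboard folder could be consumed - the directory layout and metadata fields here are entirely hypothetical - each trained model might ship a small JSON grade file next to its weights, and users simply rank by grade:
```python
# Hypothetical sketch of a "leaderboard folder": each trained model ships a
# metadata JSON with its grade, and the best model is picked from the ranking.
import json
from pathlib import Path

LEADERBOARD = Path("/srv/grid/leaderboard")   # placeholder path

def ranked_models():
    """Return model metadata sorted best-first by grade."""
    entries = []
    for meta_file in LEADERBOARD.glob("*/model.json"):
        meta = json.loads(meta_file.read_text())
        entries.append({"name": meta["name"], "grade": meta["grade"],
                        "path": meta_file.parent})
    return sorted(entries, key=lambda m: m["grade"], reverse=True)

if __name__ == "__main__":
    for m in ranked_models():
        print(f'{m["grade"]:6.2f}  {m["name"]}  ({m["path"]})')
```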

!!O.I. Architecture - organoid on chip support
{br}
# Module version
# Interconnects
# I/O Card, hardware interface
# Software interface (a hypothetical sketch follows below)
{br}
[https://imt.cx/images/organs.png|center]
nb: Organoids are real lifeforms.
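Nothing below is an existing API: it is only a hypothetical sketch of what the software interface to an organoid-on-chip I/O card might look like, with invented class, method and device names.
```python
# Purely hypothetical interface for an organoid-on-chip I/O card; the class,
# method names, device node and channel model are invented for illustration.
from dataclasses import dataclass
from typing import List

@dataclass
class OrganoidChannel:
    index: int          # electrode/channel number on the I/O card
    microvolts: float   # last sampled activity on that channel

class OrganoidInterface:
    def __init__(self, device: str = "/dev/oi0"):
        self.device = device          # placeholder device node

    def stimulate(self, channel: int, microamps: float, ms: float) -> None:
        """Send a current pulse to one channel (real hardware call omitted)."""
        print(f"stimulate ch{channel}: {microamps}uA for {ms}ms via {self.device}")

    def read(self) -> List[OrganoidChannel]:
        """Sample activity from all channels (dummy values here)."""
        return [OrganoidChannel(index=i, microvolts=0.0) for i in range(64)]
```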
!!What an A.I. Operating System (OS) might look like
{br}
# Grid by default. The amount of data and processing required to train models and tinker about with A.I. could utilize grid computing. Minimum hardware requirements must be met to join the grid, and trained models are the reward. The grid would hold the petabytes of training data and the CPU cycles for distributed training. Joining would probably require a minimum of 100TB storage, 32GB RAM and 16 cores (see the eligibility sketch after this list). The models are tied to the OS and cannot be moved out. The grid maintainers would keep the models at or exceeding current capability, and their use to generate video, images and so on would be unrestricted.
# Various applications/software to leverage A.I. and O.I.
# Simulation environments for A.I. training.
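A node-side eligibility check against the suggested minimums (100TB storage, 32GB RAM, 16 cores) might look like the sketch below; the thresholds mirror the figures above, the data path is a placeholder, and the RAM query assumes a Linux node.
```python
# Sketch: verify a node meets the suggested grid minimums before it joins.
# Thresholds come from the list above; Linux is assumed for the RAM query.
import os
import shutil

MIN_STORAGE = 100 * 10**12   # 100 TB
MIN_RAM     = 32 * 2**30     # 32 GB
MIN_CORES   = 16

def node_qualifies(data_path: str = "/srv/grid") -> bool:
    storage = shutil.disk_usage(data_path).total
    ram = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")
    cores = os.cpu_count() or 0
    checks = {
        "storage": storage >= MIN_STORAGE,
        "ram": ram >= MIN_RAM,
        "cores": cores >= MIN_CORES,
    }
    for name, ok in checks.items():
        print(f"{name}: {'ok' if ok else 'below minimum'}")
    return all(checks.values())

if __name__ == "__main__":
    print("eligible for grid" if node_qualifies() else "not eligible")
```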
{br}
# To store training data - distributed file systems to grid and hold the petabytes of training data (see the sharding sketch after this list).
# To train the A.I. - utilize the many grid computing operations already in existence and add a system-level one as well.
# Other edu, lab and research essential software.
# [Custom Linux from scratch|https://imt.cx/kb.php?page=Compiling+the+Linux+kernel+and+creating+a+bootable+ISO&redirect=no] 
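To make the distributed-file-system idea concrete, here is a small sketch (not any particular DFS) that deterministically assigns training-data shards to grid nodes by hashing the file name, so every node agrees on who stores what; the node names are placeholders.
```python
# Sketch: deterministically map training files onto grid nodes by hash, so
# every node independently agrees on who stores what; node names are placeholders.
import hashlib

NODES = ["node0", "node1", "node2", "node3"]

def owner(filename: str) -> str:
    """Pick the node responsible for storing/serving this file."""
    digest = hashlib.sha256(filename.encode()).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

if __name__ == "__main__":
    for f in ["shard-000001.tar", "shard-000002.tar", "shard-000003.tar"]:
        print(f, "->", owner(f))
```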

!!Interesting Hardware

# X99 Dual CPU LGA2011-3 Motherboard DIMM 8×DDR4 Desktop Computer Mainboard M.2 EM
# AMD EPYC 7551P CPU 32 Cores + Supermicro H11SSL-i Motherboard +8x 8GB 2133P RAM
  
