Hello, Earth. My core competencies are in distributed systems, parallel graphics architectures, physics simulation and swarm AI. For most of my career I have built gaming, big-data visualization and networking technology, from middleware APIs to driver and system development, across both hardware and software stacks. Since retiring from NVIDIA a few years back, I have concentrated on developing a future open-source strong-AI platform designed to efficiently simulate brain-scale biological processes, with an operating environment specialized for direct AI-driven evolution.

This is that platform. core contains the compiler, VM and types (eventually they will actually work, too). o3 (until renamed to 0dn) contains a parametrizable, low-dimensional, memory-optimized dynamic spatial data structure designed for exact, efficient ray casting. The structure is highly dynamic, with cache-optimal, sub-logarithmic-time inserts, deletes and migrations.

0dn is a dynamic-precision, exact, multi-dimensional, in-memory visualization and cognition database with distributed, global ordering guarantees, implemented across these software, hardware and simulation platforms. In addition to the database operations insert/merge, erase/split and commit/rollback, the native hardware-accelerated operations are ray cast, branch and nearby gather (a toy sketch of this operation set appears at the end of this note). These accelerated operations not only sustain the rapid pace of development toward hyper-realistic computer graphics, but also provide the basic operators required for intuitive, agile development of spatially aware, artificially evolving, self-programming automata. Beyond nanosecond-scale search/insert and ray steps at a minuscule silicon and power footprint, the commit/rollback interface enables reversible, debuggable and reproducible distributed computation, which is crucial for the storage and discard management of the resulting high-speed, compressed, distributed, coherent, differential data streams.

With an order-of-magnitude power advantage over current ("legacy") graphics hardware, and a truly sky-shattering advantage over legacy database systems, the next revolution in cognition is nearing the public-availability stage. The research phase is finished, the specification is still evolving, and parts of the software stack are already running, analyzing and solving real-world challenges. The projected timeline looks as follows:

2021  - early access
2022  - open software platform with x64/arm64/Lua API, native ISA + CUN
2023  - FPGA-accelerated hardware platform
2024  - compiler API for cognitive evolution
2025+ - power-optimized ASIC SoCs on an annual cadence from stable 0dn RTL

--

API  - Application Programming Interface
ISA  - Instruction Set Architecture
CUN  - Context Unique Name (distributed object location/identification)
FPGA - Field-Programmable Gate Array (programmable hardware)
ASIC - Application-Specific Integrated Circuit (a computer chip)
RTL  - Register-Transfer Level (hardware abstraction form)
SoC  - System on Chip
0dn  - zero dimensions in [] <- What comes out is up to you. Us.
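To make the operation set described above concrete, here is a minimal, self-contained C++ sketch of that surface. Everything in it is illustrative: the class name, method signatures and the flat std::vector storage are my stand-ins rather than the published 0dn API, and the brute-force loops have none of the cache-optimal, sub-logarithmic behaviour of the actual structure.

#include <cmath>
#include <cstddef>
#include <cstdio>
#include <limits>
#include <optional>
#include <vector>

struct Vec3 { float x, y, z; };

// Toy stand-in for the 0dn operation surface: insert/merge, erase/split,
// commit/rollback, ray cast and nearby gather. Hypothetical names only.
class ToySpatialDb {
public:
    void insert(const Vec3& p) { points_.push_back(p); }                   // insert/merge
    void erase(std::size_t i)  { points_.erase(points_.begin() + i); }     // erase/split

    // commit/rollback: snapshot-based reversibility, the property the text
    // highlights for debuggable, reproducible distributed computation.
    void commit()   { snapshot_ = points_; }
    void rollback() { points_ = snapshot_; }

    // ray cast: first point within `radius` of the ray o + t*d (d assumed normalized).
    std::optional<Vec3> ray_cast(Vec3 o, Vec3 d, float radius) const {
        std::optional<Vec3> hit;
        float best_t = std::numeric_limits<float>::max();
        for (const Vec3& p : points_) {
            float t = (p.x - o.x) * d.x + (p.y - o.y) * d.y + (p.z - o.z) * d.z;
            if (t < 0) continue;                                  // behind the ray origin
            Vec3 c{o.x + t * d.x, o.y + t * d.y, o.z + t * d.z};  // closest point on ray
            float dx = p.x - c.x, dy = p.y - c.y, dz = p.z - c.z;
            if (dx * dx + dy * dy + dz * dz <= radius * radius && t < best_t) {
                best_t = t;
                hit = p;
            }
        }
        return hit;
    }

    // nearby gather: all points within `radius` of a query point.
    std::vector<Vec3> nearby(Vec3 q, float radius) const {
        std::vector<Vec3> out;
        for (const Vec3& p : points_) {
            float dx = p.x - q.x, dy = p.y - q.y, dz = p.z - q.z;
            if (dx * dx + dy * dy + dz * dz <= radius * radius) out.push_back(p);
        }
        return out;
    }

private:
    std::vector<Vec3> points_;
    std::vector<Vec3> snapshot_;
};

int main() {
    ToySpatialDb db;
    db.insert({1, 0, 0});
    db.commit();                        // durable point in the edit history
    db.insert({5, 0, 0});
    db.rollback();                      // discard everything since the commit

    if (auto hit = db.ray_cast({0, 0, 0}, {1, 0, 0}, 0.25f))
        std::printf("hit at %.1f %.1f %.1f\n", hit->x, hit->y, hit->z);
}

In the real structure the same calls would map onto the hardware-accelerated primitives, with commit/rollback driving the storage and discard management of the differential data streams mentioned above; this sketch only shows the shape of the interface, not its performance characteristics.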