The ability to perform accurate, repetitive computation has been central to a large number of scientific and technological advances over the last seventy years. At the heart of this, commercially, is Moore’s Law: the observation that the number of transistors on a chip, and with it the computational power of traditional central processing units (CPUs), doubles roughly every two years. Unfortunately, several factors have combined to make this less likely to continue. Heat dissipation and atomic and quantum effects place practical limits on the miniaturisation and packing of transistors, and the limited bandwidth between CPU and memory constrains computation speed. There is some doubt whether fabrication can extend below 3 nm.
At the same time, methods of data processing based on learning systems have become significant new architectural components. Of these, deep learning (DL) has provided best-in-class performance on many machine learning tasks and revitalised many areas of pattern recognition, opening up a revolution in new services that were traditionally delivered through human-intensive processes. However, these methods are not well supported by traditional computer architectures because, in a similar way to the brain, they require massive parallelism. Graphics Processing Units (GPUs), initially built for video games, have been utilised with some success, but still suffer from high levels of inefficiency. An example from the OpenAI blog (June 2018) highlights the scale of resources consumed by a typical reinforcement learning regime:
“OpenAI Five plays 180 years worth of games against itself every day, learning via self-play. It trains using a scaled-up version of Proximal Policy Optimization running on 256 GPUs and 128,000 CPU cores.”
The problem: “conventional compute methods cannot keep up with the demands of modern compute workflows.”
Likewise, the ability to perform AI processing on “dialled down” power budgets makes it possible to mobilise intelligence in a distributed way, untethered from centralised cloud coordination. Distributed decision making can then be performed with greater efficacy and autonomy, without reliance on cloud infrastructure.
Our approach, temporal computing, is an alternative computation strategy that differs from the current mainstream of von Neumann computing (used in modern PCs, laptops, etc.), from quantum computing, and from optical computing (the latter two being emerging sectors with uncertain lead times).
The key new idea in temporal computing is that input data is represented as a “passage of time”: typically the time taken for some noticeable change to happen in either an analogue or digital signal. This gives an underlying representational scheme based not on traditional binary, but on unary coding schemes expressed “in time”.
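As a purely illustrative sketch (not a description of any actual implementation, and with hypothetical function names), the value 5 might be encoded as a single event occurring five time units after a reference edge, and decoded simply by measuring the elapsed time:

```python
# Minimal sketch of a temporal (unary, "in time") encoding.
# A value is represented as the delay between a reference edge at t_ref
# and a single event; decoding simply measures the elapsed time.
# All names are hypothetical and purely illustrative.

def encode(value: int, t_ref: float = 0.0, unit: float = 1.0) -> float:
    """Return the event time that represents `value` as a delay after t_ref."""
    return t_ref + value * unit

def decode(t_event: float, t_ref: float = 0.0, unit: float = 1.0) -> int:
    """Recover the value by measuring how long after t_ref the event fired."""
    return round((t_event - t_ref) / unit)

print(decode(encode(5)))  # the number 5 becomes "an event at t = 5" -> 5
```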
Unary number systems were amongst the earliest numerical representations of quantity, with the abacus among the earliest calculating devices. This simplicity offers significant processing efficiencies and is thought to be central to the brain’s ability to process data. It allows much of the processing to be performed by simpler memory-manipulation devices that can sit closer to memory storage, saving both the “space” required to build the processor and the energy used to move data between memory and processor.
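To give a flavour of why such simple elements can suffice, the following hedged sketch borrows the familiar race-logic idea: once values are held as event times, MIN becomes “whichever event arrives first”, MAX “whichever arrives last”, and adding a constant is just a fixed delay. The names below are illustrative rather than part of any real design.

```python
# Illustrative race-logic style operations on temporally encoded values.
# With values held as event times, comparison and constant addition reduce
# to trivially simple primitives; no adders or multi-bit buses are required.

def t_min(*event_times: float) -> float:
    """First arrival wins: equivalent to MIN of the encoded values."""
    return min(event_times)

def t_max(*event_times: float) -> float:
    """Last arrival: equivalent to MAX of the encoded values."""
    return max(event_times)

def t_add_const(event_time: float, k: float) -> float:
    """A fixed delay line adds the constant k to the encoded value."""
    return event_time + k

a, b = 3.0, 7.0               # values already encoded as event times
print(t_min(a, b))            # -> 3.0  (min(3, 7))
print(t_add_const(a, 4.0))    # -> 7.0  (3 + 4)
```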
Our Vision: “A temporal method of computation can meet the increasing demand for computational resources.”
Our Mission: “A new compute medium for modern workflows: 10x the speed at a 10x drop in power consumption.”
The Journey: “Prove out temporal computing in electrical/silicon hardware to demonstrate superiority in key areas where conventional compute falls short. Implement in non-silicon and wave-based compute for more radical benefits.”
Advantages over quantum
- Easier development phase - as we will see, there are huge advantages to temporal computing in terms of ease of implementation. This even extends to the use of existing fabrication methods, and hence may actually prolong the use of silicon as a compute medium.
- Extension into very fast sequential compute - anything that oscillates can be used to compute, so there is significant “room at the bottom” when it comes to the physical realisation of temporal computers.
- High parallelisation capacity - time is a free resource, so it costs essentially nothing to use it as memory, and it parallelises easily: it is trivial to measure two events side by side without any need for coordination (see the sketch after this list). Quantum memory, by contrast, is still a very open area of research.
- Low resource - it is almost certain that initial quantum computers will be offered as a centralised service; temporal computing, in contrast, because of its potentially simple implementations and designs, can readily work in smaller systems such as edge and mobile devices.
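As a small illustration of the coordination-free measurement mentioned in the parallelisation point above (a sketch only, with invented names), independent channels can each latch the arrival time of their own event against a common free-running reference, with no arbitration or shared controller needed to read the results out:

```python
# Sketch of coordination-free parallel measurement (illustrative names only):
# each channel latches the arrival time of its own event against a shared
# free-running reference, so many values can sit "in time" side by side and
# be read out independently, with no arbitration or shared controller.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Channel:
    """An independent timing channel that records when its event fires."""
    arrival: Optional[float] = None

    def fire(self, t: float) -> None:
        self.arrival = t

channels = [Channel() for _ in range(4)]
for event_time, ch in zip([2.0, 5.0, 3.0, 9.0], channels):
    ch.fire(event_time)                    # each event latched locally

print([ch.arrival for ch in channels])     # -> [2.0, 5.0, 3.0, 9.0]
```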
Building a temporal computer
Although this appears at first glance an ambitious proposal, the strategy for solving the problem has already been established in the quantum computing domain. The key steps are:
- Identify good problems to work on.
- Specify technical Key Performance Indicators (KPIs).
- Build a computer in abstract simulation (see the sketch below).
- Assess possible physical media for implementation.
- For the specific problems:
  - Build a physical simulation.
  - Build a real system.
- Coalesce into a general compute medium and scale.
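To make the “abstract simulation” step concrete, a minimal toy simulator might compose delay, first-arrival (min) and last-arrival (max) elements into a feed-forward network evaluated over event times. This is an assumption-laden sketch of what such a simulation could look like, not a description of the actual system; every name here is hypothetical.

```python
# A toy "abstract simulation" of a temporal computer (illustrative only):
# a feed-forward network whose nodes are delay, first-arrival (min) and
# last-arrival (max) elements, evaluated over event times.

from typing import Callable, Dict, List, Tuple

Node = Tuple[Callable[..., float], List[str]]   # (operation, input signal names)

def run(network: Dict[str, Node], inputs: Dict[str, float]) -> Dict[str, float]:
    """Evaluate each node once its input event times are known.

    Assumes `network` is listed in topological (feed-forward) order.
    """
    times = dict(inputs)
    for name, (op, deps) in network.items():
        times[name] = op(*(times[d] for d in deps))
    return times

delay2 = lambda t: t + 2.0                      # a fixed two-unit delay element

# Example network computing out = min(a + 2, b), expressed purely in timing.
network = {
    "a_delayed": (delay2, ["a"]),
    "out":       (min,    ["a_delayed", "b"]),
}
print(run(network, {"a": 1.0, "b": 5.0})["out"])  # -> 3.0
```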