Analog systems were used in the 1970s for process control and for solving complex mathematical problems such as integration and differentiation. They became obsolete once digital computers took over, but engineers are now bringing them back.
Analog computers are physical devices that work with continuous data. They were widely used into the 1970s to perform complex calculations and to process analog data. They contain functional units such as comparators, multipliers, and function generators, and they operate on physical inputs such as pressure, temperature, voltage, and speed.
Analog computers operate directly on real-valued, continuous quantities. As a result, implementing complex, continuous functions is much easier on an analog system than on a digital one. However, analog systems are also more susceptible to errors than digital computers.
Analog computers demystified
An analog computer is a type of computing device that operates on continuous data, unlike its digital counterpart, which processes information in discrete steps. These computers take advantage of physical phenomena such as electrical voltage, mechanical movement, or hydraulic pressure to directly model and solve problems, reflecting the continuous nature of the data they manipulate.
Unlike digital computers that compute using binary operations (0s and 1s), analog computers perform calculations by manipulating continuous variables. This allows them to simulate complex systems in real time, an invaluable capability in areas such as aeronautics for flight simulations, meteorology for weather forecasting, and automotive engineering for analyzing dynamic systems.
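To make the contrast concrete, here is a minimal Python sketch of the analog idea: an integrator does not execute discrete instructions, it simply lets a physical quantity (such as a capacitor voltage) evolve continuously, which we can approximate numerically with very small time steps. The function name and constants are illustrative, not taken from any real system.

```python
import math

def analog_style_decay(x0, k, t_end, dt=1e-5):
    """Approximate the continuous evolution of dx/dt = -k*x.

    An analog integrator would 'solve' this equation simply by letting a
    physical quantity decay; here we mimic that with tiny Euler steps.
    """
    x, t = x0, 0.0
    while t < t_end:
        x += -k * x * dt   # the 'integrator' continuously accumulates its input
        t += dt
    return x

x_numeric = analog_style_decay(x0=1.0, k=2.0, t_end=1.0)
x_exact = math.exp(-2.0)   # closed-form solution: x(t) = x0 * e^(-k*t)
print(x_numeric, x_exact)  # the two values agree closely
```

The smaller the time step, the closer the simulation gets to the truly continuous behavior an analog circuit exhibits natively.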
For example, before the emergence of powerful digital computers, pilots trained on analog flight simulators. These simulators mimicked aircraft behavior under a variety of conditions: the pilot adjusted physical controls and observed analog outputs, making for a realistic and effective training tool that responded in real time to pilot input.
The analog computer versus the digital computer
The distinction between analog and digital computers lies not only in their operating mechanisms, but also in their application, efficiency and accuracy in various computational tasks. To elucidate these differences, consider the following comparative analysis:
| Feature | Analog computer | Digital computer |
|---|---|---|
| Data representation | Continuous variables (e.g. voltage levels) | Discrete values (binary digits) |
| Precision | Subject to minor inaccuracies due to physical components and environmental factors | High precision with exact numerical values |
| Speed | Can process complex simulations in real time thanks to continuous data flow | May incur delays in complex simulations due to sequential processing |
| Operational complexity | Ideal for simulating complex, dynamic systems (e.g. weather systems, aircraft simulations) | Best suited for tasks requiring precise calculation and data manipulation (e.g. financial analysis, database management) |
| Error handling | More tolerant of errors; small variations in the input do not drastically affect the result | Errors must be rigorously managed and corrected; binary operations depend on exact values |
| Use cases | Aeronautics, chemical process simulation, analog signal processing | Data processing, office applications, digital content creation |
An illustrative example of the practical differences can be found in signal processing. An analog computer processes audio signals directly, manipulating the physical properties of the signal to produce an output. In contrast, a digital computer must first convert the signal to a digital format through sampling, process it, and then convert it back to an analog signal for playback. Although this digital path is more precise, it introduces delay and extra steps compared to the direct manipulation possible with analog systems.
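The digital round trip just described can be sketched in a few lines of Python, assuming a hypothetical 8-bit converter. The point is that sampling and quantization introduce a small but unavoidable error that a purely analog path never incurs.

```python
import math

def quantize(sample, bits, full_scale=1.0):
    """Map a sample in [-full_scale, full_scale] onto 2**bits discrete levels."""
    levels = 2 ** bits
    step = 2 * full_scale / (levels - 1)
    return round(sample / step) * step

# A hypothetical 440 Hz tone sampled at 8 kHz, then pushed through the ADC step.
signal = [math.sin(2 * math.pi * 440 * n / 8000) for n in range(64)]
digital = [quantize(s, bits=8) for s in signal]

# The quantization error is bounded by half a quantization step.
max_error = max(abs(a - d) for a, d in zip(signal, digital))
print(max_error)
```

Increasing the bit depth shrinks the error but never eliminates it, and every extra bit costs conversion time, which is exactly the delay the paragraph above refers to.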
Furthermore, there is a resurgent interest in analog computing, particularly for specialized applications such as neural network modeling and signal processing, highlighting its unique advantages. For example, when processing large data sets for image recognition, analog computers can perform operations more quickly and efficiently than digital systems, as they do not require translating data into binary code for processing. This makes analog computers uniquely suited to tasks where speed and the ability to handle large, complex data sets in real time are critical, albeit at the expense of the high accuracy and versatility offered by digital systems.
While digital computers form the backbone of modern computing, handling everything from everyday applications to complex data analysis with unmatched accuracy and versatility, analog computers offer unique advantages for real-time simulation and continuous data processing. The choice between analog and digital ultimately depends on the specific requirements of the task, with each system offering distinct benefits tailored to different computing needs.
Why did the analog system become obsolete?
Because these computers were programmed physically, engineers had to rewire the circuit (re-patching connections and swapping op-amps and multipliers) to perform different operations. Their inputs, such as voltages and pulse frequencies, were also inherently noisy, and background interference further amplified the error in the system.
None of this is a problem for digital computers, which explains why analog systems became obsolete once digital devices took off. Who would want an error-prone system when a far more accurate and efficient one is available?
Why are they coming back?
Although analog systems were displaced by digital computers, they appear to be making a comeback. The main reason is data volume: today's applications consume and generate enormous amounts of data, and digital computers built on the von Neumann architecture suffer from memory bottlenecks.
This is because the system must convert incoming data into binary before the processor can work on it, and that data must constantly shuttle back and forth. The bottleneck occurs at the memory-processor interface.
Analog computers, meanwhile, compute in memory: they operate on data directly, without converting it into 0s and 1s or any other machine representation. Instead of relying on transistor logic, analog systems use networks of resistors to perform calculations. Once the calculations are done, only the final result needs to be converted to digital form.
This significantly reduces the number of analog-to-digital conversions (ADC) required for a process, leading to faster results and better performance. Additionally, analog configurations do not need to perform all calculations in one cycle. Instead, they can do several partial calculations, which can then be used to create the final result. This further improves the efficiency of the system.
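A common way such in-memory analog computation is organized is a resistive crossbar: stored conductances act as weights, applied voltages act as inputs, and the column currents sum automatically by Kirchhoff's current law, yielding a matrix-vector product in a single step. The Python sketch below models that behavior numerically with hypothetical values; a real crossbar performs the same sum physically, with no instruction loop at all.

```python
def crossbar_mvm(conductances, voltages):
    """Simulate a resistive crossbar: I[j] = sum_i G[i][j] * V[i].

    Each cell's conductance G[i][j] stores a weight (Ohm's law gives the
    per-cell current, Kirchhoff's law sums the currents along each column).
    """
    rows, cols = len(conductances), len(conductances[0])
    return [sum(conductances[i][j] * voltages[i] for i in range(rows))
            for j in range(cols)]

# Hypothetical conductances (siemens) and input voltages (volts).
G = [[0.1, 0.2],
     [0.3, 0.4]]
V = [1.0, 2.0]
print(crossbar_mvm(G, V))  # column currents, approximately [0.7, 1.0]
```

Note that the physical crossbar computes all columns simultaneously, which is where the parallelism and energy advantages discussed above come from.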
Analog devices typically have a high mean time between failures (MTBF); a device with an MTBF of 30,000 hours can be expected to operate, on average, for 30,000 hours before suffering a catastrophic failure. For certain operations, analog systems are also faster than digital systems. They are therefore useful for processing large waveform data sets, such as nuclear pulse or particle-collider event data, workloads that digital computers struggle to keep up with in real time.
Today, engineers typically use integrated circuits to program analog crossbars. By reconfiguring the crossbar, a modern analog system can perform many different operations, unlike older systems that were limited to a fixed set of functions. Newer systems also need no manual recalibration and can handle advanced scientific and differential calculations.
Analog Computing: Industry Use Cases
Today, many companies use neural networks and deep learning algorithms to extract insights from their data. Initially, companies moved from traditional CPUs to GPUs for data modeling and training.
But training models on GPUs still takes a long time. A more recent hardware optimization for neural network workloads is the TPU (Tensor Processing Unit), an integrated circuit developed specifically for accelerating neural network training.
But even with TPUs, training can be slow. Because of this, some companies are looking to analog computers for modeling neural networks: for certain tasks they are faster and more efficient than TPUs. Although they have drawbacks (analog systems are difficult to program and more prone to noise-induced errors), they handle large data sets very efficiently, as in image recognition and speech processing.
For workloads like neural networks that can tolerate modest precision, analog systems are a great option.
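A toy illustration of why modest precision can suffice: quantizing a hypothetical neuron's weights to 4 bits (roughly the precision class an analog cell might offer) shifts its output only slightly and leaves its decision unchanged. The weights and inputs below are made up for illustration.

```python
def quantize(w, bits=4, scale=1.0):
    """Snap a weight in [-scale, scale] to a low-precision grid."""
    levels = 2 ** (bits - 1) - 1
    return round(w / scale * levels) / levels * scale

weights = [0.82, -0.41, 0.13, 0.55]   # hypothetical trained weights
inputs = [1.0, 0.5, -0.25, 0.75]

# Full-precision vs 4-bit weighted sum (the neuron's pre-activation).
exact = sum(w * x for w, x in zip(weights, inputs))
approx = sum(quantize(w) * x for w, x in zip(weights, inputs))
print(exact, approx)  # close values; the sign (the 'decision') matches
```

This tolerance to small perturbations is what makes neural network inference a natural fit for noisy analog hardware, whereas a financial ledger, say, would not be.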
Most sensors in use today are analog, so an analog system can process their data natively. These systems can also use memristors, which retain their values between operations, enabling continuous processing, and they need an ADC only to output the final result. Companies can use such devices to build autonomous robots and machines that perform low-level tasks continuously without human supervision.
Supercomputers are needed for long, complex calculations, but they consume a great deal of electrical energy. For example, Tianhe-1A, a supercomputer with a computing power of 2.5 petaflops, draws 4.04 MW. That is a huge amount of electricity: on average, 1 megawatt can power roughly 800 homes.
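A quick back-of-envelope check of that claim, taking the article's figures (4.04 MW, roughly 800 homes per megawatt) at face value:

```python
# Back-of-envelope arithmetic using the figures quoted in the text.
power_mw = 4.04      # Tianhe-1A power draw, in megawatts
homes_per_mw = 800   # homes powered per megawatt (figure from the article)

homes_equivalent = power_mw * homes_per_mw
print(homes_equivalent)  # about 3,232 homes' worth of electricity
```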
This means that even though supercomputers can perform complex calculations at high speed, their energy cost makes the tradeoff unattractive for many workloads.
Analog computers operate on a principle often compared to the human brain: they take data directly from other analog chips and compute with it, rather than fetching it from memory. Additionally, instead of a 32-bit digital multiplier, an analog system can use 1-bit analog multipliers to perform the same operation, increasing efficiency while reducing power dissipation.
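The idea of decomposing a wide multiply into 1-bit multiplies can be sketched digitally: a 1-bit multiply is just an AND gate, and the full product is a sum of shifted 1-bit partial products. This is only a digital analogy of the decomposition, not the analog circuit itself.

```python
def multiply_via_1bit(a, b, bits=8):
    """Multiply two non-negative integers using only 1-bit multiplies.

    Each partial product a * bit (bit is 0 or 1) is the work of a single
    1-bit multiplier; the shifts and the sum assemble the full product.
    """
    result = 0
    for i in range(bits):
        bit = (b >> i) & 1            # extract one bit of the second operand
        result += (a * bit) << i      # 1-bit multiply, then shift into place
    return result

print(multiply_via_1bit(13, 11))  # 143, same as 13 * 11
```

Trading one wide multiplier for many tiny ones is attractive in analog hardware because each 1-bit stage is simple, parallel, and dissipates little power.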
Even though changing a function may require manual circuit changes, such a system scales easily in performance and power, since its parallel computing components scale with it. Workloads that parallelize well perform especially well in an analog computing environment. Analog systems could be the answer to supercomputers' power problem.
The future
Today, many companies are working on hybrid systems, that is, digital computers with built-in analog units. These systems can process both continuous and discrete data. Instead of programming constructs like loops and algorithms, the analog side uses physical interconnections to create an electrical analog of the problem.
These systems are fast, reliable and efficient. However, although many hybrid systems are currently used for specialized purposes (fuel processors, heart monitors), they have not yet been fully explored. And there's a big reason for that.
Reintroducing analog computing today is an enormous architectural and financial challenge. Many companies are not prepared to invest in overhauling their current infrastructure on such a large scale. They would also have to invest in training and certification, since analog systems have been out of mainstream use for decades.
Although there is strong demand and a real market for analog computing, solutions will take time to develop. The shift from digital to hybrid has already begun, and a shift in thinking is bound to follow. After all, the world is analog, not digital.
Source: BairesDev