These successes can be taken as an indication that there is no limit to computation. To see if that’s the case, it’s important to understand why computers are so powerful.

Computer power has two sides: the number of operations the hardware can perform per second and the efficiency of the algorithms it runs. Hardware speed is limited by the laws of physics. An algorithm (basically a set of instructions) is written by a human and translated into a sequence of operations that the computer's hardware can perform. Even as computers reach their physical limits, computational hurdles will remain because of the limits of algorithms.

These hurdles include problems that no computer can ever solve and problems that are theoretically solvable but, in practice, beyond the capabilities of even the most powerful computers imaginable. Mathematicians and computer scientists determine whether a problem is solvable by trying it out on an imaginary machine.

## A fictitious calculator

The modern notion of an algorithm, known as a Turing machine, was formulated in 1936 by the British mathematician Alan Turing. It is a fictitious device that imitates how arithmetic calculations are carried out with a pencil on paper. The Turing machine is the template on which all computers today are based.

The supply of fictitious paper in a Turing machine is assumed to be unlimited, to accommodate computations that would require more paper if done manually. This corresponds to an imaginary endless ribbon, or "tape", of squares, each of which is blank or contains one symbol.

The machine is controlled by a finite set of rules and starts on an initial sequence of symbols on the tape. The operations the machine can perform are moving to a neighboring square, erasing a symbol, and writing a symbol on a blank square. The machine computes by carrying out a sequence of these operations. When the machine finishes, or "halts", the symbols remaining on the tape are the output or result.
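The tape-and-rules operation just described can be sketched as a tiny simulator. This is an illustrative assumption, not Turing's original notation: rules map a (state, symbol) pair to a new state, a symbol to write, and a direction to move.

```python
# A minimal Turing machine simulator (an illustrative sketch, not Turing's
# original formalism). The tape is stored sparsely as square index -> symbol.
def run(tape, rules, state="start", max_steps=1000):
    cells = dict(enumerate(tape))
    pos = 0
    for _ in range(max_steps):
        if state == "halt":
            # The symbols left on the tape are the output.
            return "".join(cells[i] for i in sorted(cells)).strip()
        symbol = cells.get(pos, " ")  # empty squares read as blanks
        state, write, move = rules[(state, symbol)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    raise RuntimeError("machine did not halt")

# Example machine: flip every 0 to 1 and vice versa, halting at the first blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", " "): ("halt", " ", "R"),
}
print(run("0110", flip))  # prints "1001"
```

Real machines in Turing's paper are built from the same ingredients; only the rule tables differ from problem to problem.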

Computing is often about making decisions with yes or no answers, analogous to a medical test (the problem type) that checks whether a patient's specimen (an instance of the problem) has a certain disease indicator (the yes or no answer). The instance, represented in digital form, is the initial sequence of symbols on the Turing machine's tape.

A problem is considered "solvable" if a Turing machine can be designed that halts on every instance, positive or negative, and correctly determines whether the instance's answer is yes or no.

## Not all problems can be solved

Many problems are solvable by computer because they can be solved by Turing machines, but many others are not. For example, the domino problem, a variation of the tiling problem formulated by the Chinese-American mathematician Hao Wang in 1961, is unsolvable.

The task is to cover an entire grid with a given set of dominoes, matching the number of pips on the edges of adjacent dominoes, as in the rules of most domino games. It turns out that there is no algorithm that can start from an arbitrary set of dominoes and determine whether that set can completely cover the grid.
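What makes the result striking is that any *finite* region can be checked mechanically; it is the infinite grid that defeats every algorithm. A brute-force checker for a small square region might look like the following sketch. The encoding is an assumption for illustration: each tile is a tuple of (top, right, bottom, left) edge labels, copies are unlimited, and tiles may not be rotated, as in Wang's formulation.

```python
# Brute-force check: can a tile set cover an n-by-n region with matching edges?
# This works for any finite region; no algorithm exists for the infinite grid.
def tiles_fill(tiles, n):
    def ok(grid, i, tile):
        r, c = divmod(i, n)
        if c > 0 and grid[i - 1][1] != tile[3]:   # left neighbor's right edge
            return False
        if r > 0 and grid[i - n][2] != tile[0]:   # upper neighbor's bottom edge
            return False
        return True

    def place(grid, i):
        if i == n * n:
            return True  # every square is covered
        return any(ok(grid, i, t) and place(grid + [t], i + 1) for t in tiles)

    return place([], 0)

print(tiles_fill([(0, 1, 0, 1), (1, 0, 1, 0)], 2))  # True: a 2x2 region fits
print(tiles_fill([(0, 1, 0, 0)], 2))                # False: edges cannot match
```

Wang's theorem says no such procedure, however clever, can decide the question for the grid as a whole.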

## Keep it reasonable

Many solvable problems can be solved by algorithms that halt in a reasonable amount of time, meaning time that grows at most as a polynomial in the size of the input. These "polynomial-time algorithms" are efficient algorithms, in the sense that it is practical to use computers to solve instances of such problems.
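A rough numerical comparison shows why polynomial time counts as "reasonable" while exponential time does not (the billion-operations-per-second figure is a nominal assumption):

```python
# Step counts for a polynomial-time (n^2) versus an exponential-time (2^n)
# algorithm, at a nominal 10^9 operations per second.
for n in (10, 50, 100):
    poly, expo = n ** 2, 2 ** n
    print(f"n = {n:3}: n^2 = {poly:,} steps; 2^n = {expo:,} steps "
          f"(about {expo / 1e9:.2g} seconds)")
```

At n = 100, the polynomial algorithm finishes in a blink, while the exponential one needs around 10^21 seconds, vastly longer than the age of the universe.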

Despite continuing intensive efforts to find such algorithms, thousands of other solvable problems are not known to have polynomial-time algorithms. These include the traveling salesman problem.

The traveling salesman problem asks whether a set of points, some of them directly connected (called a graph), has a route that starts at some point, passes through every other point exactly once, and returns to the original point. Imagine a salesman who wants a route that passes through every household in the neighborhood exactly once and brings him back to his starting point.
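The decision question above can be answered by brute force on small graphs, which is a sketch of the only kind of general method known. The graph encoding (a set of undirected edges between named points) is an illustrative choice:

```python
from itertools import permutations

# Brute-force check: does the graph have a round trip that visits every
# point exactly once and returns to the start?
def has_round_trip(points, edges):
    edges = {frozenset(e) for e in edges}
    start, *rest = points
    for order in permutations(rest):          # try every possible visiting order
        route = [start, *order, start]
        if all(frozenset(pair) in edges for pair in zip(route, route[1:])):
            return True
    return False

square = ["a", "b", "c", "d"]
print(has_round_trip(square, [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]))  # True
print(has_round_trip(square, [("a", "b"), ("b", "c"), ("c", "d")]))              # False
```

The `permutations` loop is the trouble: the number of orders to try grows factorially with the number of points.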

These problems, called NP-complete, were formulated and shown to exist independently in the early 1970s by two computer scientists, the American-Canadian Stephen Cook and the Ukrainian-American Leonid Levin. Cook, whose work came first, was awarded the 1982 Turing Award, the highest award in computer science, for this work.

## The exact cost

The best-known algorithms for NP-complete problems essentially search for a solution among all possible answers. Running such an algorithm for the traveling salesman problem on a graph of a few hundred points would take years on a supercomputer. Such algorithms are inefficient, meaning there is no mathematical shortcut.

Practical algorithms that deal with these problems in the real world can only offer approximations, though the approximations are improving. Whether there are efficient polynomial-time algorithms that can solve NP-complete problems is among the seven Millennium Prize Problems posted by the Clay Mathematics Institute at the turn of the 21st century, each carrying a $1 million prize.

## Beyond Turing

Could there be a new form of computation beyond Turing’s framework? In 1982, Nobel Prize-winning American physicist Richard Feynman proposed the idea of computing based on quantum mechanics.

In 1995, the American applied mathematician Peter Shor published a quantum algorithm that factors integers in polynomial time. Mathematicians believe that, within Turing's framework, no polynomial-time algorithm can solve this problem. Factoring an integer means finding a smaller integer greater than 1 that divides it. For example, the integer 688,826,081 is divisible by the smaller integer 25,253, because 688,826,081 = 25,253 x 27,277.
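The classical (non-quantum) approach can be sketched with trial division, which also verifies the factorization above. Its running time grows with the square root of the number, which is exponential in the number of digits, and that gap is exactly what Shor's quantum algorithm closes:

```python
# Classical trial division: find the smallest factor greater than 1.
# The loop runs up to sqrt(n) times, which is exponential in n's digit count.
def smallest_factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

p = smallest_factor(688_826_081)
print(p, 688_826_081 // p)  # prints: 25253 27277
```

For a number with hundreds of digits, as used in real cryptography, this loop would never finish; no known classical algorithm does fundamentally better than exponential in the digit count.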

A major cryptosystem called the RSA algorithm, widely used to secure network communications, relies on the computational difficulty of factoring large integers. Shor's result suggests that the cybersecurity landscape will change once quantum computing becomes a reality.

Can we build a full-fledged quantum computer to factor integers and solve other problems? Some scientists believe it is possible. Several groups of scientists around the world are working on it, and some groups have already built small-scale quantum computers.

Nevertheless, as with all new technologies invented before, problems with quantum computing are almost certain to arise, and new limits will be imposed.

*Jie Wang is Professor of Computer Science at UMass Lowell.*
