Designing a Hybrid Digital / Analog Quantum Physics Emulator as Open Hardware

Marcus Edwards
105 min read · Jan 7, 2023


With the urgency rising for the science community to develop an in-depth understanding of large-scale quantum systems, research in the field is approaching an all-time high. Institutions investing time and attention in the field include the Institute for Quantum Computing at the University of Waterloo in the “Quantum Valley”, along with many other universities. Many unsolved problems remain that could have a significant impact on the computers of the near future. For example, theorists are still laboring to develop a method of recognizing dangerously chaotic quantum systems. This effort is analogous to an essential step in the development of electrical computer systems, but it is not such a straightforward task for a quantum rather than an electrical computer. One reason quantum systems are so hard to study is that researchers’ modern toolsets are limited to what we have: classical computers. The types of algorithms that would need to be performed to find many important analytical results are impossible to fully emulate using classical computer systems.

One of the most exciting emulation breakthroughs was the first analog signal-based emulation of a universal quantum computer. This yielded a very interesting paper, but no practical use, even for theorists. The reason: a signal duration of approximately the age of the universe (13.77 billion years) could accommodate only about 95 qubits. For most significant quantum algorithms, that isn’t nearly enough. The electrical emulators that have been developed simply don’t scale with the complexity of important problems.

A number of digital emulators have been proposed as well. In March 2019, an FPGA-based general quantum computing emulator was proposed. This work demonstrated that while there is no way around the exponential resource scaling of quantum emulators, it is possible to shift where that scaling has its impact. With Pilch and Dlugopolski’s approach, the properties of the scaling problem depend entirely on the nature and cost of the FPGA technology used. This work built on a number of publications on quantum emulation techniques using FPGA-compatible hardware description languages, including VHDL and Verilog.

While these hardware emulation efforts have so far been confined to academic research, a handful of other quantum computing hardware efforts have penetrated the commercial, or at least open-source, communities of the world. Companies such as IBM, Rigetti and D-Wave have made the most notable progress in building true quantum computing devices, and each has also created programmable interfaces for its users through cloud solutions. The underlying quantum technology on offer is still very rudimentary, and typically doesn’t offer any true advantage over classical computers. This is a characteristic of what is known as the NISQ (noisy intermediate-scale quantum) era of quantum computing. The question of when quantum supremacy over classical computing will be reached is rigorously studied.

While quantum computing technology plays catch-up, a typical tactic for companies is to let users access quantum simulators through their cloud solutions, in place of the quantum computers that are expected to replace them soon. Quantum simulators have been offered by Rigetti (Forest), IBM (Qiskit), ProjectQ (FermiLib), Microsoft (Quantum Development Kit), Google (OpenFermion and Cirq), Xanadu (PennyLane and Strawberry Fields) and more. Xanadu in particular has begun to think about connecting its software tools to hardware emulators on the backend, and aims to support execution of quantum algorithms on Google’s TPUs (tensor processing units) soon. TPUs are potentially excellent candidates for emulating quantum machine learning algorithms, because the tensor network model of quantum information can be mapped directly onto neural network structures. This effort by Xanadu introduces the idea that a classical simulation system for quantum computing can benefit from optimizations across the entire stack: from low-level hardware to high-level expressive programming languages and cloud architecture.

When considering optimal choices for emulation technology at each layer of the stack, we will prioritize the optimization of our proposed solution’s computational capabilities as a function of economic cost and engineering feasibility.

Classical Bases for Computation

Goals

At the core of any quantum simulation or emulation is the idea of mapping quantum information to classical information. At its simplest, we need each modellable quantum state to be mapped to some classical state of our classical simulation device. Let a ∈ Z be an integer variable. The goal is for this value to be “observable”, meaning that it can be measured or determined with minimal calculation and minimal deduction. This variable a will be used to give each individual modellable quantum state an observable identification number.

Consider a system with practical boundaries on the magnitude of its possible states. Let any state of the system in question initially be assumed to be expressible using continuous variables, and modellable by analog signals. We know that a system matching this description can be engineered: as La Cour and Ott demonstrated experimentally in their 2015 paper, it is possible to implement a signal-based emulation of a universal qubit quantum computer. However, their scheme is neither scalable nor practically useful, because of how the system’s more complex states evolve in time: representing 95 qubits would require a signal duration roughly equal to the age of the universe. So, let us assert that a viable solution must have a dependence on time that scales linearly with the complexity of the operation. This will encourage improvement upon the design proposed by La Cour and Ott.

Finally, a viable solution must not require the amount of hardware components or digital computing resources to scale exponentially with the amount of modelled quantum information.

The Naive Hardware Scaling Problem

Let us consider a naive approach to mapping our identification numbers for quantum states directly to a set of simple sinusoidal analog signals.

Let κ be a weighted sum of N pure, single qubit quantum states denoted ψn. κ can be given in terms of the trace of a product of square matrices M and W of rank N:

This is analogous to a continuous interpretation of the sum of the diagonal elements of the matrix |ψn > |n >< n|:

where w : Z → C is a function approximated by the eigenvalues of the matrix W such that:

κ is then a sum of N pure quantum states. To demonstrate a scalability issue with directly mapping sums of pure states like κ to a discrete set of analog signal representations, assume that each pure quantum state is represented by an analog signal, a sinusoid sin(n · dω · x).
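One reading consistent with these definitions (the explicit diagonal forms of M and W are an assumption introduced here for concreteness) is

$$\kappa = \operatorname{Tr}(MW) = \sum_{n=1}^{N} w(n)\,|\psi_n\rangle, \qquad M = \operatorname{diag}\big(|\psi_1\rangle, \dots, |\psi_N\rangle\big), \quad W = \operatorname{diag}\big(w(1), \dots, w(N)\big),$$

so that each weighted pure state occupies one diagonal slot of the product MW.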

Let us consider M and W to define a minimal set of hardware components needed to emulate the state being represented. The rank of M is equal to the number of individual sinusoids that need to be concurrently generated.

Therefore rank(M) is the number of wave sources, or oscillators, required. The components of W each represent a weight in the sum of the sinusoidal pure states. Therefore rank(W) is the number of amplifiers required. Finally, the number of two-way summers required to create the final mixed state ensemble is equal to the number of sinusoids being summed, minus one: rank(M) − 1.

This yields an expression of the complexity C of a direct and purely hardware-based implementation of a simplified state emulation that behaves in the way that κ has been described:

We can see that 5 hardware components would be required to emulate a qubit state by asserting that N = 2, to account for each of the pure states |0 > and |1 >. If we introduce multi-qubit states then we might revise the formula for C to account for the unique combinations of each qubit’s pure states:

where Q is the number of qubits.
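Spelling out the component count above (this closed form is inferred from the stated tallies rather than quoted):

$$C = \underbrace{\operatorname{rank}(M)}_{\text{oscillators}} + \underbrace{\operatorname{rank}(W)}_{\text{amplifiers}} + \underbrace{\operatorname{rank}(M) - 1}_{\text{two-way summers}} = 3N - 1,$$

which gives C = 5 for a single qubit (N = 2), and, taking N = 2^Q for the unique combinations of Q qubits’ pure states, C = 3 · 2^Q − 1.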

This expresses the fact that the task of directly simulating this simplified qubit state scales in hardware resources as O(2^Q). For any practical information system, this is not an acceptable relationship. So, this approach does not achieve a viable mapping of quantum to classical information.

Bandwidth Limitation Problem

In 2015, La Cour and Ott described an implementation scheme for a signal based emulation of general quantum computing. This model was demonstrated using analog electronics. Their scheme introduced a mapping from quantum states to electrical analog phase representations.

The model starts with representing the quantum state |0 > by the in-phase and quadrature components of an analog electrical signal. The |0 > state is defined as s(t):

This state can be represented by a sinusoidal analog electronic signal α. a then represents the real part of the sinusoidal signal and b the imaginary part. The in-phase and quadrature amplitudes can be obtained by multiplying the state by in-phase and quadrature reference signals and applying a low-pass filter with a cutoff below 2ω_c.

This can also be extended to model a general single qubit state, s(t). Let ψ(t) = ψ_R(t) + iψ_I(t) = α_0 e^(iωt) + α_1 e^(−iωt), a combination of the basis states |0 > and |1 >.

Then we can redefine s(t):

This achieves a way by which a general state |ψ > = ψ(t) can be modelled. The real and imaginary parts of ψ serve as the in-phase and quadrature components of the carrier signal. In-phase and quadrature references are used in the following configuration, where ⊗ represents a four-quadrant multiplier. A four-quadrant multiplier circuit produces the product of its input voltages, and either input voltage may be positive or negative. The −π/2 phase shift provides the quadrature reference, and a low-pass analog filter is used to finally acquire the state s(t).
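As a concrete quadrature encoding consistent with this description (the exact convention used by La Cour and Ott may differ),

$$s(t) = \psi_R(t)\cos(\omega_c t) + \psi_I(t)\sin(\omega_c t), \qquad \psi(t) = \psi_R(t) + i\,\psi_I(t) = \alpha_0 e^{i\omega t} + \alpha_1 e^{-i\omega t},$$

so that multiplying s(t) by the in-phase reference cos(ω_c t), or by the −π/2-shifted reference sin(ω_c t), and low-pass filtering below 2ω_c recovers ψ_R(t) and ψ_I(t) respectively, up to a factor of one half.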

Then the analog components required to represent a quantum state are:

  • 2 analog sources
  • 2 quadrature multiplier circuits
  • 1 phase shift circuit
  • 1 bandpass filter

Similarly, a state of n qubits, with its 2^n complex coefficients, can be expressed in general using the formula:

To represent such a state, n + 1 frequencies are required; one frequency for each qubit as well as the carrier frequency ω_c. Such an ensemble of states can be created using an octave spacing scheme, with n frequencies for the qubits, 2 frequencies for the basis states, and one for the carrier.

Since a quantum state of n qubits is represented using a complex oscillating time-domain signal, the number of qubits that can be encoded is limited by the attainable bandwidth. Another limitation to consider is the requirement for physical components. The proposed device consists of only three types of electrical components: four-quadrant multipliers, operational amplifiers and analog filters. A gate uses a fixed number of multipliers, adders and inverters per qubit. La Cour and Ott claim that the total number of components needed to implement a gate scales quadratically with the number of qubits the gate operates on. They estimate that in order to accommodate a circuitry footprint that scales exponentially with the number of qubits, transistor density would need to improve by a factor of 1000 from what it is today. If this goal were reached, and encoding information with a 1 THz bandwidth were possible, they claim it would then be feasible to emulate a system of 40 qubits, which is comparable to a modern high performance computer with 1 TB of RAM.

Due to the bandwidth limitation, the inefficiencies of this implementation largely exist in the time domain. The time dependence of a state introduces a relationship between the signal duration T and the number of modellable qubits n. A signal duration of 10 hours would yield roughly 50 qubits, while 1 year would yield roughly 60 qubits. Even if T were on the order of the age of the Universe, only about 95 qubits could be represented.
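The quoted durations follow a simple exponential rule of thumb. The sketch below assumes the signal duration grows as T ≈ 2^n · τ and calibrates τ (a constant of roughly tens of picoseconds, not given in the paper) to the 50-qubits-in-10-hours data point; it reproduces the other two figures to within a couple of qubits.

import math

# Calibrate the per-state time constant tau (an assumption) so that
# 50 qubits correspond to a 10 hour signal, i.e. T = 2**n * tau.
tau = 10 * 3600 / 2**50  # roughly 32 picoseconds

def qubits_for_duration(T_seconds):
    # invert T = 2**n * tau
    return math.log2(T_seconds / tau)

print(qubits_for_duration(10 * 3600))                     # 50.0 by construction
print(qubits_for_duration(365.25 * 24 * 3600))            # ~59.8, i.e. roughly 60
print(qubits_for_duration(13.77e9 * 365.25 * 24 * 3600))  # ~93.5, in line with the quoted ~95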

La Cour and Ott conclude that a quantum emulation device with an octave spacing of qubit frequencies would be constrained by an exponential scaling of required bandwidth. So, this signal based emulation methodology also scales with untenable complexity.

Dual Oscillator Representation Scheme

Goals

Two schemes have been presented and invalidated. To overcome the downsides of those two schemes, we will create a new scheme with the following properties:

  • A pair of oscillators or sinusoidal wave sources must be sufficient to emulate n superimposed states with the ability to be identifiably mixed or entangled
  • The time required to perform a measurement of a state must not scale poorly with the complexity of the state
  • A fixed set of hardware components must be sufficient to emulate a system of a significant number of qubits
  • At least as much must be knowable about an emulated quantum state as is expected to be measurable in a theoretical quantum computing system

In order to build to a viable solution, we will first define the elements of the computational space in such a way that they are each representable by a single phasor.

A probabilistic state with n observable values can be modelled by a probability simplex in n − 1 dimensions. For example, in the case of a qubit, the probability of measuring a |0 > versus a |1 > can be expressed as a point on a line between two endpoints. The endpoints represent states which are 100% likely to yield an observable when measured. A line is then the geometric embodiment of the spectrum of the probabilistic mixtures of a qubit’s observable values.

The Bloch sphere provides a truer representation of a qubit state, since we know that a qubit does not behave the same as a pbit. Rather, in order to fully model a qubit, it is important to retain knowledge of the square root of its probability in a way that is not arbitrary. Two dimensions are added on top of the probability simplex in order to account for the sign of the square root of the probability, and for imaginary values.

See that the imaginary component or sign of a qubit state does not affect the probability of its observable values being measured. These characteristics do however become necessary for the application of operations represented by unitary matrices that decompose to a combination of Pauli matrices including either Z or Y . The sign or phase flip operations do not act on the probability dimension, which is in a sense the one most relevant to an observer.

Consider a single qubit state in the computational basis.

Each of c0 and c1 is a complex coefficient with two terms: one real and one imaginary. We can rewrite the state as follows.
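A form consistent with this description (the symbols a_i and b_i are introduced here only for illustration) is

$$|\psi_1\rangle = c_0|0\rangle + c_1|1\rangle = (a_0 + i\,b_0)\,|0\rangle + (a_1 + i\,b_1)\,|1\rangle,$$

with one real term and one imaginary term per coefficient.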

Each coefficient in this equation for |ψ1 > can be broken into three pieces of information:

  1. Whether the coefficient is real or imaginary
  2. Whether the coefficient is positive or negative
  3. The magnitude of the coefficient

Items one and two are binary in nature, and can be captured by a bit string of binary flags. The third item, however, exhibits a spectrum of possibilities when the observables are probabilistically mixed or entangled.

We shall distinguish between four sets of information processors. The sum of these four subsystems will represent a full qubit state. The first two subsystems will be responsible for maintaining information directly relevant to a measurement. Their hardware should not be required to interact with any other subsystems in order to answer a measurement with an observable value. The third and fourth subsystems will maintain information that is necessary to maintain about a quantum state, but not relevant in the context of a measurement. Let the third and fourth hardware modules together be named the ”flag processor module” and each maintain two pieces of information about a term in the coefficients of each observable in a quantum state: whether it is imaginary and whether it is negative. In the case that the entire system is maintaining a single qubit of quantum information, then the effects of Pauli operations on each subsystem in the flag processor module can then be captured by a simple state diagram.

This state machine can be described by Boolean logic and clearly lends itself to a trivial digital implementation.
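As a minimal sketch of such a digital implementation (a simplification in which each coefficient is tracked as a magnitude handled elsewhere plus a negative flag and an imaginary flag; the exact four-bit-per-observable layout described above is not reproduced):

from dataclasses import dataclass

@dataclass
class TermFlags:
    negative: bool = False
    imaginary: bool = False

def times_i(f):
    # multiplying by i: a real term becomes imaginary with the same sign,
    # an imaginary term becomes real with its sign flipped (since i*i = -1)
    return TermFlags(negative=f.negative ^ f.imaginary, imaginary=not f.imaginary)

def times_minus_one(f):
    return TermFlags(negative=not f.negative, imaginary=f.imaginary)

def pauli_x(c0, c1):
    # X swaps the |0> and |1> coefficients
    return c1, c0

def pauli_z(c0, c1):
    # Z flips the sign of the |1> coefficient
    return c0, times_minus_one(c1)

def pauli_y(c0, c1):
    # Y = i * X * Z: sign-flip the |1> term, swap, then multiply both by i
    a, b = pauli_x(*pauli_z(c0, c1))
    return times_i(a), times_i(b)

Applying pauli_y to a pair of fresh TermFlags() objects, for example, leaves the |1> coefficient flagged imaginary and the |0> coefficient flagged imaginary and negative, matching Y(c_0|0> + c_1|1>) = −i c_1|0> + i c_0|1>.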

The requirements of the digital flag processor module scale exponentially with the number of modelled qubits since four bits are required per observable pure state. This is clearly not usable. An analog implementation of the flag processor will also be discussed.

The more challenging subsystems to implement will maintain the spectrum of probability between a qubit’s observables. How this is accomplished will affect how well the outcomes of measurements in our scheme will match those expected of a true quantum computer. The probability dimension will be left to an analog electrical module, which will encode the complete probabilistic state into two phasors.

Keeping in mind the goal that at least as much must be knowable about an emulated quantum state as is expected to be measurable in a theoretical quantum computing system, let us define an observable that will enable the description of qubit state information with a pair of analog waveforms.

Let us consider each of the first and second subsystems separately. Let each subsystem be responsible for maintaining the magnitude of one term in each of the coefficients of each observable qubit state. Then the first subsystem might be initialized to maintain the real terms in each of the coefficients of each observable qubit state, and the second subsystem might be initialized to maintain the imaginary terms.

Let each of the first two subsystems be implemented using analog electronics, and be electrically identical analog modules.

Modelling Probability Space

Since an analog module is concerned only with probability, and not sign or phase at this point, its states can be conceptualized as points within simplexes. The simplest simplex is that of a single qubit: a line. Consider the Dirac notation of 3 independent quantum states. The first is a state of one qubit.

The second is a two-qubit state.

The third is a three-qubit state, etc.

The pure states of a system will correspond with the edges and surfaces of its simplex. Entangled states will exist on the edges, and mixed states will exist inside the boundaries of the simplex. It is easy to see that the number of observable states, and the number of vertices on the corresponding simplex, is 2^Q, where Q is the number of qubits.

Individually maintaining the probability of each observable being measured is not feasible. Instead, consider a point outside of the simplex that exists in a dimension sufficiently high such that it can exist in an independent probabilistic relationship with each pair of observables in the simplex. Its relationship with each pair of observables may be represented by a two dimensional simplex, a triangle. This higher dimensional observable (HDO) will be bound in a relationship similar to an uncertainty principle to the observables that form the simplex. When the HDO (conceptualized as a point) is an exact description of the 2^Q-dimensional probabilistic state, the probabilistic state will correspond to the point at the dead center of the simplex. Let the HDO have a quantized number of measurable amplitudes such that each amplitude corresponds to a unique combination of weights that the observable has in each triangular simplex it forms with each pair of the 2^Q-dimensional simplex’s observables.

If such a geometry could be realized, the HDO would be capable of representing bounded subspaces in the 2^Q-dimensional simplex with any width in any dimension. If this HDO were implemented in such a way that its amplitude could be measured, then more information could be learned about a multi-qubit state than is expected of a true quantum computer. Next, a method for achieving this geometry using analog electronics will be described. Consider the probability simplex for a two qubit system.

The strengths of the HDO in its relationship with each edge will be shown in blue and red numbers. The red numbers will be the set of strengths that conclusively identify the exact location of the probabilistic state of the qubit system. The actual state will be included as a red dot. See that a pure observable state is trivial to identify.

An entangled state is also trivial to identify from the strengths of the HDO.

A general state on an edge requires knowledge of only two strengths of the HDO.

Mixed states are also straight forward.

The strength of the HDO with respect to an edge denoted e0 will be equal to an expression composed of the distances of the state from the maximally mixed state as seen by each of the other edges that are not opposite e0. In the case of a two qubit system:

Let the overall strength of the HDO be 1 when its system is in the maximally mixed state. Let it be generally true that the strength will become less as the state moves further from the maximally mixed state.

If the HDO is to have a unique amplitude for each combination of strengths with respect to each simplex edge, then another rule must be added. Distance from the maximally mixed state along each edge must not be treated equally. Rather, let us choose arbitrarily that nearness to the maximally mixed state with respect to |0^⊗Q > contributes the most significantly to the strength. Let nearness with respect to |1^⊗Q > contribute the least. Let each intermediate state contribute something between these, with their contribution amounts ordered according to their binary sequence. Then continuously observing the strength of the observable in a system where the state moves gradually from |0^⊗Q > towards the maximally mixed state might yield a curve something like the following.

Continuously observing the strength of the observable in a system where the state moves gradually from |1^⊗Q > towards the maximally mixed state would yield the following shape.

The nature of the information being encoded suggests that a quantum harmonic oscillator (QHO) may be an ideal manifestation of the HDO. The HDO must have n = 2^Q observable states, which may be considered analogous to the energy eigenstates of the quantum harmonic oscillator. Its observable states must be in superposition, like the states of the uncollapsed wave function of the QHO. We want to be able to identify and move between each state in a hierarchy described by a ∈ R, which is reminiscent of the raising and lowering operators.

However, a QHO is not ideal for our application for several reasons. First, we want to be able to identify the precise mixture or entanglement of states, and a QHO’s wave function would collapse to an observable energy state upon being measured, making this impossible. Second, a perfect quantum harmonic oscillator is difficult and costly to control.

We will now endeavor to implement the HDO using electronics in a way that is inspired by the quantum harmonic oscillator but is also optimally efficient in terms of hardware implementation. The design should also ensure that it is trivial to model the application of the Pauli matrices to the underlying modelled lower dimensional quantum system.

First, let the HDO be manifested as a frequency. Let the strength of the HDO be equivalent to its probability of being consistently measured by a frequency measurement device. We will now define a hierarchy of frequencies that accomplish the actualization of this observable.

Let a be a measurable integer identifier of a modellable wave packet. Consider the paraboloid:
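Based on the surface invoked later in the error analysis (z_g(x, y) = gy² + y + gx² + x), the paraboloid in question is presumably

$$z_g(x, y) = g\,x^2 + x + g\,y^2 + y.$$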

If we were to fix y=0, then this would be a parabola with a form loosely analogous to the classical equation of potential energy in a harmonic oscillator. Imagine that g might equal mω^2 / 2 and the variables x and y are like position and momentum x and p respectively.

The Fundamental P-Spectrum Paraboloid z1(x,y)

Let g = 1. Let b ∈ R be a real number, and let the following set of spheres be defined in R³:

If b is left as a free parameter for now, this set of relations is a set of spheres stacked in the shape of a paraboloid, such that the minimum point in the z axis of each sphere is equal to the maximum of the preceding sphere, and each sphere is centered in the paraboloid such that it intersects the paraboloid in a circle, on a plane parallel to the xy plane.

Then the intersection of each circle with z1 will lie on the plane z = z1(a,b). The intersection equation of any sphere sa with the plane y = b is then the following:

For each sa there are four solutions to its intersection equation. Two of the points of intersection will lie on the plane z = z1(a,b). Let this pair of points be named the ”reference points”. The ”reference vectors” between the center of a sphere and each of its reference points will also lie in the plane z = z1(a, b), and these vectors will have equal magnitudes.

A sphere’s other two points of intersection, the “data points”, will each be related to one of the sphere’s reference points. Let a ”data vector” be the vector between a data point and its sphere’s center. Let a ”reference vector” be the vector between a reference point and its sphere’s center. Let each data point be related to a reference point such that the magnitude of the angular displacement of each data vector from its reference vector is the same. Let this angular displacement be φ. Then, each sphere will have a unique single associated angle φ. This achieves a mapping from each integer a to a unique φ. φ, of course, could be represented by a single phasor coming from a single signal source.

Note the similarity of our φ(a) to the potential energy of the QHO, V(x). As the quantized parameter a increases, the frequency φ(a) will increase as well. a can be visualized as an integer on the x-axis. Consequently, a diagram of the energy-wise lowest-lying solutions of the Schrödinger equation of the QHO also provides a relatively accurate depiction of our φ(a).

φ(a) is fundamentally encoded into the ”data vector” that determines its frequency. Given a single data vector, the corresponding values of a and b can be determined. We adopt a coordinate system with an origin at the center of sa similar to that of the Bloch sphere to describe the data vector, where the angle measured from the x axis is denoted φ and the angle measured from the z axis is denoted θ.

Let b = f2(r, θ). Let θ be the angle between two vectors. Let the first vector be defined by two points: a sphere’s center and its intersection with the plane z = z1(a,0). Let the second also be defined by two points: a sphere’s center and its intersection with the plane z = z1(a, b). Let r be the magnitude of a data vector. Then we have:

Let a = f1(r, φ). This f1 is simply a trigonometric transformation. Let z2 be the parabolic intersection of the plane y = b with the paraboloid z1. Let dz2/dx be the slope in x of z2 at a point approaching a reference point on the sphere sa. Then we have:

It can easily be shown that r is a redundant parameter in both f1 and f2. Any pair of reference vector and data vector are guaranteed to have the same magnitude, and the information represented by a data vector can be inferred entirely from the angles θ and φ. The ratio of the two vectors’ magnitudes is always constant regardless of the magnitudes themselves. Therefore,

f^(−1) is trivial to define:

It is simple to define φ as a function f^(−1) of a and b as well. Let Td(a,b) denote the data point of a sphere chosen such that φ is positive. Let Tr(a,b) denote the matching reference point. Then,

A graph of φ(a, 1) iteratively computed using Python yields the following.

import csv
import itertools
import numpy

# encodes a to phi
def encode(a, b, g):
    radius = a + (a/(2*g))
    dz = numpy.absolute(((2*(a**2+a)-1) +
        numpy.sqrt(numpy.absolute((2*(a**2+a)-1)**2 -
        4*((a**2+a)**2+1/4-(a+a/(2*g))**2))))/2 - ((2*(a**2+a)-1) -
        numpy.sqrt(numpy.absolute((2*(a**2+a)-1)**2 -
        4*((a**2+a)**2+1/4-(a+a/(2*g))**2))))/2)
    dx = numpy.absolute(a -
        (-1+numpy.sqrt(numpy.absolute(1-4*g*(((2*(a**2+a)-1)) -
        numpy.sqrt(numpy.absolute((2*(a**2+a)-1)**2 -
        4*((a**2+a)**2+1/4-(a+a/(2*g))**2))))/2))/(2*g)))
    c = numpy.sqrt(dx**2+dz**2)
    phi = numpy.arcsin(c*numpy.sin(numpy.pi - numpy.arcsin(dx/c))/radius)
    theta = numpy.arcsin(b)
    return numpy.array([phi, theta])

# generates states and writes to csv
def iterateStates(g=1):
    csvfile = open('a_{}.csv'.format(g), 'w', newline='')
    writer = csv.writer(csvfile, delimiter=',', quotechar="'", quoting=csv.QUOTE_MINIMAL)
    b = 0

    all_waves = []
    all_a_vals = []
    for word in itertools.product([0,1,2,3,4,5,6,7,8,9], repeat=4):
        a_str = ''
        for bit in word:
            a_str += str(bit)
        a_val = int(a_str)
        waves = encode(a_val, b, g)
        if not numpy.isnan(waves[0]):
            all_waves.append(waves[0])
            all_a_vals.append(a_val)
            writer.writerow([a_val, waves[0]])
    csvfile.close()
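A usage sketch (assuming the definitions above have been run): the default call sweeps four-digit values of a on the g = 1 curve and writes the surviving (a, φ) pairs to a_1.csv, which is the kind of file the φ(a, 1) graph above was produced from.

# hypothetical driver for the sweep above
iterateStates(g=1)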

The P-Spectrum Paraboloid Family zg(x,y)

If we return to our initial assumption that g = 1, we can see that removing this assumption yields a family of functions. Note that beyond a certain lower limit in a, the individual functions do not overlap. The graph for integer values 0 < g ≤ 6 is provided.

The iterative calculation of the elements of these functions demonstrates how the functions might alternatively be interpreted as sequences. This sequential interpretation can help in the categorization of the space of its elements.

Each of these sequences is a Cauchy sequence. A sequence is Cauchy if ∀δ > 0, ∃n ∈ N : ∀ j, k > n, ||φj − φk|| < δ, meaning that as the sequence progresses, its elements become arbitrarily close to one another.

See that a hierarchy of frequencies φ(a) has been described such that if one knows the magnitude of φ, the value of g will also be distinguishable, since the curves φ(a) do not overlap. Knowing both g and φ, one may use f1, a simple trigonometric process, to deduce the value of a.

If we let g take any real value, then the graph of φ(a) becomes a vector field. The ”useful” elements of such a vector field might be determined by introducing an angular resolution dω in φ, and a maximum value for φ. The number of “useful” curves in the vector field can be found by applying the requirement that the difference in φ between any adjacent points in the field is at least dω. We can find the set of curves for which the smallest difference between adjacent points is dω and this difference occurs near the maximum value of φ.

Practical constraints on dω and φ

High frequency industrial electrical oscillators have ranges that reach into the tens of gigahertz. For example, the Axtal AXPLT2500 is a phase-locked crystal oscillator whose maximum frequency output can reach 12 GHz with a frequency stability of ±3.2 ppm, depending on operating conditions, age, and other factors. The drawbacks of using such a device are its physical footprint and its cost.

On the other hand, there are high frequency oscillators with ranges in the GHz that are available in common integrated circuit chip packages, such as the $36.10 Abracon LLC product AX7MAF1–2100.0000C, which outputs a maximum stable frequency of 2.1 GHz at ±50 ppm.

These devices each represent their respective families of devices: the Axtal product being a part of the family of high end, low noise industrial oscillator products, and the AX7MAF1 being a part of the family of small footprint, highly embeddable products. The constraints introduced by the AX7MAF1 will be analyzed since the goal is to create a cost-effective and low-profile technology. However, the capacity of available high-end industrial tools should also be kept in mind.

In the case that one of these oscillators is used as a frequency source for φ, dω becomes a function of the oscillator’s stability rating s_o and of φ itself.
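In terms of a ppm stability rating, this is simply

$$d\omega(\varphi) = s_o \cdot \frac{\varphi}{10^6},$$

which is exactly how d_omega is computed in the program below, with s_o supplied as osc_ppm.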

We will select values for a and g such that the difference between φ(a) and φ(a − 1) is approximately dω on the first curve that contains a value of φ that overlaps with the maximum frequency. The following code iterates through the states in a given range, determining the number of states for each curve that can be represented within tolerance. Two device specific parameters for the procedure are the stability and the maximum output frequency.

import csv
import numpy

# accumulator for the total number of counted states
overall_result = 0
# will hold the value of phi for the last state within tolerance on the previous curve
last_phi = 0
# starting value of g
g = 1
# simulation resolution in g
d_g = 0.01
# device specific parameters
phi_limit = 2100000000
osc_ppm = 50
# data file setup
csvfile = open('g_converging_{0}.csv'.format(g), 'w', newline='')
writer = csv.writer(csvfile, delimiter=',', quotechar="'",
                    quoting=csv.QUOTE_MINIMAL)

# encodes a to phi
def encode(a, b, g):
    radius = a + (a/(2*g))
    dz = numpy.absolute(((2*(a**2+a)-1) +
        numpy.sqrt(numpy.absolute((2*(a**2+a)-1)**2 -
        4*((a**2+a)**2+1/4-(a+a/(2*g))**2))))/2 - ((2*(a**2+a)-1) -
        numpy.sqrt(numpy.absolute((2*(a**2+a)-1)**2 -
        4*((a**2+a)**2+1/4-(a+a/(2*g))**2))))/2)
    dx = numpy.absolute(a -
        (-1+numpy.sqrt(numpy.absolute(1-4*g*(((2*(a**2+a)-1)) -
        numpy.sqrt(numpy.absolute((2*(a**2+a)-1)**2 -
        4*((a**2+a)**2+1/4-(a+a/(2*g))**2))))/2))/(2*g)))
    c = numpy.sqrt(dx**2+dz**2)
    phi = numpy.arcsin(c*numpy.sin(numpy.pi - numpy.arcsin(dx/c))/radius)
    theta = numpy.arcsin(b)

    return numpy.array([phi, theta])

# traverses states in tolerance
def countStates(g, osc_ppm):
    b = 0
    all_waves = []
    all_a_vals = []
    done = False
    i = 0
    count = 0
    while done == False:
        i = i + 1
        a_str = str(i)
        a_val = int(a_str)
        waves = encode(a_val, b, g)
        d_omega = osc_ppm*(waves[0]/1000000)
        if not numpy.isnan(waves[0]):
            if (len(all_waves) == 0) or (all_waves[-1] - waves[0] > d_omega):
                all_waves.append(waves[0])
                all_a_vals.append(a_val)
                count = count + 1
            else:
                done = True
    return count, waves[0], d_omega, a_val

done = False

while done == False:
    g += d_g
    result, phi, d_omega, a = countStates(g, osc_ppm)
    overall_result += result
    writer.writerow([g, phi, a, abs(last_phi - phi), d_omega])
    if abs(last_phi - phi) < d_omega:
        done = True
        print("g: {0}".format(g))
        print("phi: {0}".format(phi))
        print("d omega: {0}".format(d_omega))
        print("d phi = {0} < d omega = {1}".format(abs(last_phi - phi),
                                                   d_omega))
        print("num states: {0}".format(overall_result))
    elif phi > phi_limit:
        done = True
        print("g: {0}".format(g))
        print("phi: {0}".format(phi))
        print("d omega: {0}".format(d_omega))
        print("phi = {0} > max phi = {1}".format(phi, phi_limit))
        print("num states: {0}".format(overall_result))
    else:
        last_phi = phi

Running the program above using the device parameters for the AX7MAF1 oscillator yields the following upper range of curves within acceptable tolerance. dφ in this chart refers to the distance between the values of φ for adjacent curves at their maximum tolerable values of a.

The output of the program indicated that the reason for termination was that dφ had intersected with dω, not that the maximum output frequency had been reached.

Let the final tolerable value of φ calculated by the program be called the “base frequency tolerance limit” Lb. In order to stretch the vector field to fill the acceptable range of operation with usable curves, we introduce a device specific scaling coefficient Cd such that Cd · Lb = ω_max, where ω_max is the device’s maximum output frequency. Then for the AX7MAF1, Cd ≈ 2309321037. We then define the relationship between ω (the device’s frequency) and φ (the angle between reference and data vectors) to be φ = ω / Cd.
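Equivalently, the base frequency tolerance limit implied by these numbers is

$$L_b = \frac{\omega_{max}}{C_d} \approx \frac{2.1 \times 10^9}{2.309 \times 10^9} \approx 0.91$$

in the angular units used for φ (this back-calculation is an inference from the quoted Cd and the 2.1 GHz maximum, not a value stated directly).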

The program can be adjusted to take this into account, spreading the curves throughout the frequency space of the device. This will yield a higher achievable number of curves, represented in the program as the resolution in g.

Running the program with the scaling factor Cd and a step of dg = 0.0001 demonstrates that 45452916 distinct, perfectly distinguishable values of a could be encoded into frequencies; orders of magnitude greater than before the scaling factor was introduced. This demonstrates the effect of the device specific maximum frequency.

This number is important since it represents the values of a that can be encoded with complete certainty. The goal of this state representation scheme, however, is to represent probabilistic information in an analog signal. Therefore, uncertainty must be introduced in a controlled manner.

Probabilistic Measurements

What we have done so far is to map a set of quadratic relationships to a similar set of Cauchy sequences of angular values. We then mapped each of these sequences to a set of practically realizable frequencies. The convenience of this approach now becomes evident. Since a quadratic relationship was chosen as the starting point, the frequency encodings we have described are also each in a sequence where the difference between adjacent sequence terms changes linearly. Since the difference between adjacent terms in a sequence corresponding to a value of g is equivalent to the difference between two frequencies that are adjacent in the ω vector field, and since we have guaranteed that any of 45452916 terms are perfectly distinguishable within the stability constraints of a device, we can express the probability of measuring a particular value of a in terms of only the difference ω(φ(a)) − ω(φ(a − 1)) and the resolution of a frequency measurement device.

Let σ(a,g) be the variance in measured values for the value of φ(a,g), and δ(a, g) = ω(φ(a, g)) − ω(φ(a − 1, g)) be the angular resolution of the medium maintaining the angle φ. Then the probability of measuring values of a and g can be expressed as the following.

Let this probability be equivalent to the strength of the HDO. Note that a value of a with a particular probability of being measured is now mappable to a specific encoding frequency. Recall that the original purpose of a was to enumerate modellable quantum states. The next task is to create a mapping between each value of a and its corresponding quantum state such that the probability of observing a value of a through measurement is equal to the strength of the HDO that exists in an individual probabilistic relationship with each pair of observable states of the modelled qubit system.

Recall that the greatest contributor to the strength of the HDO was chosen to be the distance from the maximally mixed state with respect to the |0^⊗Q > state. Therefore it is logical to assign the g = dg curve to correspond to states that lie exactly between the maximally mixed state and the state |0^⊗Q >.

The rest of the states will be related to curves evenly spaced throughout the frequency range of the device used to maintain φ. For example, using the AX7MAF1 to model a two qubit system would yield the following ”chief g curves”. The spectrum will be made to wrap back to the first state so as to better map the space inside the simplex to its curves.

The system is also capable of representing states that do not lie on any of the chief curves. Since the dg used in the simulation generating these curves was 0.0001, we know that we could use approximately 60000 curves between each of the four in the table above to represent such states.

See that the effect of applying an X gate to a qubit is to move a state within its simplex: the state is reflected across each line that passes through the center and is orthogonal to an edge representing a transition in which the operand qubit’s value changes while the others do not. We will partition the curves between the chief curves with the goal of keeping this as simple as possible in practice.

Let the intermediate curves between each chief curve be evenly divided into a number of groups called “surface groups”. Let the set of curves between a pair of chief curves be called a “vertex group”, and let the lesser of the pair of chief curves delimiting a vertex group be the ”lower chief”. Let there be one surface group per surface adjacent to the lower chief curve’s corresponding state in the simplex. In the case of a two qubit system, this yields surface groups of 20000 curves. Each of these surface groups’ curves will represent the strength of the observable with respect to a point on the surface correlated to its group. Let the ”vertex sum” of a surface be equal to the binary addition of the values of its vertices. Let the surface groups be ordered in g according to the magnitude of their vertex sums.

In the illustration below, each vertex group is given a color, and each surface group is labelled with a number indicating its order within its vertex group.

Conceptual Introduction to Possible Operators

See that applying X to a qubit will move a state from one surface group to a different surface group. The exact group transitions for a two qubit system are provided in the table below.

In general, a NOT gate applied to all qubits will be achievable as a reflection across the center of the g curve spectrum. A NOT gate applied to a single qubit is equivalent to two operations. First, a shift to the vertex group whose order differs from the original vertex group’s order by the weight of the operand qubit’s position in the binary interpretation of the state. This is achievable by adding the weight of the operand qubit’s position to the order of the vertex group and allowing the vertex group order to wrap around 0. Second, the state must be shifted to a new surface group, whose order can be determined by ”reflecting” the original surface group order across a number corresponding to the order of the qubit being operated on. See that in the case of a two qubit system, when the first qubit is operated on we reflect the surface number across the value “1” to get the new surface number. When the second qubit is operated on, we don’t need to reflect its surface number at all since it is the highest order qubit.

These ideal properties of the NOT gate can guide us in defining the complete mapping from simplex to g curve spectrum, which is a projection of a 3D space to 2D. It also demonstrates how simple the identification of an operation might be for an observer. In the two qubit system, an X gate or similar operation can be recognized by simply partitioning the entire set of possible modellable states into 12 surface groups. For more complex operations, the number of partitions required might increase. However, complete knowledge of the operand and output of a Pauli operator in our probability space will not ultimately be required to identify the Pauli operation that took place. We will see how this can be taken advantage of by a support vector machine in the section on deep learning state tomography.

It should be clear that applying a NOT gate will be trivial to accomplish in the frequency domain. The actual implementation of this operation will be detailed later. Applying I is equivalent to a NOP, and both phase and sign flips will be accounted for in the flag processor. Therefore, the only operation we still fundamentally need to support is a change of the magnitude of the state coefficients of the modelled quantum system.

Only real valued multipliers will be considered here. We expect that multipliers will be applied to pure states like |0 > or |1 > and will be used mainly in state preparation. We will start with a number of qubits in the maximally mixed state. For example,

Then multiplying a pure state by a number greater than 1 will shift the probabilistic state towards the multiplied state. Multiplying a pure state by a number less than 1 will shift the probabilistic state away from the multiplied state. The nearness of a mixed state to a pure state can be determined from its frequency representation, since the frequency yields g and a, which exactly indicate the position of the state in its simplex space. If the original mixed state is on a chief curve, then a multiplication simply shifts the state higher in a. If it is in the vertex group of that chief curve, it shifts the mixed state both up in a and towards the chief curve in g. If the original state is on a curve in another vertex group, then the state will shift higher in a and towards the targeted chief curve in g. Both the change-of-magnitude and NOT operations will be described in detail shortly.

Capabilities and Limitations of the Dual Oscillator Representation Scheme

Since roughly 240000 individual g curves are achievable using the AX7MAF1, we can be confident that 18 qubits (log₂ 240000 ≈ 17.9) can be maintained using two analog modules, without limitation on interaction between the qubits, entangling or otherwise. However, a further optimization can be applied in order to reach 20 qubits.

The maintainable probability precision between any two observables is related to the number of maintained g curves divided by the number of maintained states.

If we were comfortable with maintaining states to a probabilistic precision of 1%, we might introduce a number of additional ”secondary g curves” and ”tertiary g curves” by partitioning each original g curve into three sets of values. Let every a value satisfying a mod 3 = 0 remain unchanged, every a value satisfying a mod 3 = 1 be assigned to a secondary g curve, and every a value satisfying a mod 3 = 2 be assigned to a tertiary g curve. Then, the size of each primary g curve will drop to 4 · 0.264% ≈ 1.056%. An additional secondary and tertiary curve with the same precision will be considered to exist and represent space in the simplex close to the primary g curve, space that the next primary g curve would previously have been considered to represent. This would enable us to stretch the device to approximately model 20 qubits.

Interestingly, 20 qubits is the number given by La Cour, Ott, Starkey and Wilson in their paper as the practical amount of qubits possible to maintain on a single chip as well.

In general, our relationship between qubits and probability precision becomes:

Early Experimentation

Low Fidelity Optical Device

In order to validate that the mathematical model described thus far has merit, early experimentation was done to show that the model can perform successfully in conditions that are far from ideal. A result of this experimentation is that the model is shown to adapt to the quality of a quantum information system’s implementation and make the best of it. It also shows that the model is technology-agnostic: it is valid for any analog implementation of a quantum system emulator, whether electrical, optical or otherwise.

An experimental implementation was developed that makes use of an optical device. The optical sensing hardware used for the analog system had the following configuration.

The simple photosensor interface module was designed using a 10 µF capacitor and a photoresistor. The capacitor and photoresistor are connected between a pair of digital input and output pins such that the time it takes for the capacitor to charge and drive the digital input high is proportional to the resistance of the photoresistor.
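For a rough first-order estimate (the supply V_dd and the logic-high threshold V_th are assumptions; only the 10 µF capacitor is specified), the charge time behaves like

$$t \approx R_{photo} \cdot C \cdot \ln\!\left(\frac{V_{dd}}{V_{dd} - V_{th}}\right),$$

so the software counter in the photosensor read routine shown later grows roughly linearly with the photoresistor’s resistance.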

The optical signal control interface was implemented using a stepper motor and ULN2003APC stepper motor controller. Mounted to this motor was a 52 mm polarizing optical filter.

The purpose of the first prototype was to determine the hardware needed to accurately control a polarizing filter’s angle of rotation relative to another, fixed filter. The device was built using a pair of camera polarizing filters, a Trinket Pro microcontroller module from Adafruit and a simple stepper motor controller module. The rotating filter was attached to a NEMA stepper motor.

The purpose of the second prototype was to see what kinds of data and algorithms could be enhanced by the polarizing filter apparatus. A photosensor was added to measure the density of photons passing through both filters. The filter that was previously stationary was attached to a DC brush motor. The device was built using the same pair of camera polarizing filters as prototype 1, a Raspberry Pi 2, two Arduino UNOs and an Arduino HAT stepper motor controller module. The rotating filter was attached to a higher precision stepper motor. Algorithms were automated using Python, and the stepper motor was controlled using G-code and the GRBL open-source CNC software.

The following Python code was written to run on the Raspberry Pi in order to manipulate the positions of the polarizing filters, and read data from the photosensor module.

import serial
import time
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)
GPIO.setup(23, GPIO.OUT)
GPIO.setup(24, GPIO.OUT)

# streams a G-code file to GRBL over the serial port
def talkg(file):
    s = serial.Serial('/dev/ttyACM0', 9600)
    f = open(file, 'r')
    s.write("\r\n\r\n".encode())
    time.sleep(2)
    s.flushInput()
    for line in f:
        l = line.strip()
        print('Sending: ' + l)
        wout = l + '\n'
        s.write(wout.encode())
        grbl_out = s.readline()
        print(' : '.encode() + grbl_out.strip())
    f.close()
    s.close()

# drives the DC brush motor in the given direction for the given duration
def talkm(direction, duration):
    print('driving motor in direction {0} for {1} s'.format(direction, duration))
    GPIO.output(23, direction)
    GPIO.output(24, not direction)
    time.sleep(duration)
    GPIO.output(23, False)
    GPIO.output(24, False)

# reads the photosensor by timing the charge of the capacitor
def readp():
    measurement = 0
    print('resetting photo sensor capacitor...')
    GPIO.setup(12, GPIO.OUT)
    GPIO.output(12, False)
    time.sleep(0.1)
    GPIO.setup(12, GPIO.IN)
    while GPIO.input(12) == False:
        measurement += 1

    print('photo sensor returned value of {0}'.format(measurement))

    return measurement

The following trivial G-code file was passed to the talkg function in order to rotate the stepper by one 0.01-unit increment at a time.

$X 
X0.01

Data was collected using the following method. The talkm call (commented out below) was disabled during data collection. Uncommented, it would rotate the imprecise brush motor between steps, turning it to a nearly random position, possibly enabling something like a quantum random walk.

import csv

# sweeps the stepper through 200 positions, logging a photosensor reading at each
# (assumes talkg and readp from the control code above are in scope)
def scanAllPositions():
    csvFile = open('scan.csv', 'a')
    fieldnames = ["Position", "Sensor Value"]
    writer = csv.DictWriter(csvFile, fieldnames=fieldnames)
    writer.writeheader()

    positions = 0
    while positions < 200:
        print('position {0}'.format(positions))
        talkg('increment1.g')
        #talkm(0,1)
        measurement = readp()
        writer.writerow({
            "Position": "{0}".format(positions),
            "Sensor Value": "{0}".format(measurement)
        })
        positions += 1

    csvFile.close()

Below is an example of data points read from the photo sensor module’s differential voltage output across one and a half rotations of the stepped filter. A polynomial fit is shown in orange.

The derivative of the curve and the standard deviation from the fit were both seen to vary with the angular position of the rotating filter.

Sets of 7 measurements were taken. The measurement results were interpreted as φ. Iterating through all the usable values of both a and g yields the following 2D spectrum of probabilities that the elements of a set of 7 results are all the same. The vertical axis progresses in the filter’s angular position (interpreted as a), the horizontal progresses in g. Greener cells are closer to a probability of 1 and therefore a high HDO strength.

A 3D visualization of the results can also be constructed where the height of a point represents its closeness to 1.

The probability spectrum shown can be conceptualized as one octant of an ellipsoid of possible states. The octants each represent a combination of the possible imaginary and negative flags applied to the probability magnitude represented in φ. In such a geometry, measurement with probability 1 corresponds to points in the xy plane, and measurement with probability 0 corresponds to the maximum and minimum of the ellipsoid in z.

The experimental data is essentially a low-grain rendering of this ideal ellipsoid. The difference between the experimental and theoretical results comes from the noise in the sensor data shown in the “Sensor Data Consistency” graph, and from the number of measurements made. With a less noisy signal, fewer measurements would be required to achieve a similar experimental fidelity. A fundamental difference and advantage of this system compared to a typical quantum system is that measuring a state does not need to collapse it.

To demonstrate the relationship between measurements and fidelity, the procedure was repeated using 4 measurements per location rather than 7.

If we consider that the ellipsoid of possible probabilities is related to the HDO that sits at the center of a qubit system’s probability simplex, it is clear that the ellipsoid is a topological transformation of the simplex. The center point becomes a circle in the xy plane, the g curves each map to vertical bands that traverse the surface of the ellipsoid from the circle to the z-maximum and z-minimum. The pure states exist at points close to the z-maximum and z-minimum, uniformly distributed in angular position about the z axis. This can help us connect the graininess of the ellipsoid’s actualization and the representability of the underlying qubit system probability simplex.

Dual Oscillator Electrical Implementation Design

High Speed Analog Module Design

The high speed analog module (HSAM) will be responsible for converting a, g to φ in an efficient manner with minimal limitations on the inputs.

Below is a very rough approximation of an implementation of such a circuit. See that the additional required input signals are: V_U/2, V_1/2, Vd, Vx and t.

Use of the module would consist of a few sequential steps:

1. Setting the voltage V_U/2 to one half of the value representing 1 in the desired numerical precision of the analog calculation module. This should be scaled to account for the voltage range constraints of the voltage source used to power the module.

2. Setting the voltages at the inputs Vg and Va equal to 2g·V_U/2 and 2a·V_U/2 respectively.

3. Setting Vx to a value close to 2(a − 1)·V_U/2 and sweeping the voltage range from Vx = 2(a − 1)·V_U/2 to Vx = Va. It is necessary to include the upper boundary of this range in the sweep.

V_1/2 must be set to exactly 0.5 volts at all times. Vd improves the accuracy of the module’s output as Vd → 0, and slightly increases the speed of the module when Vd is large. Vd may not take a value of exactly 0 volts, or the device will not function.

The module expects an accurate clock pulse to be present at the input labelled t, which should represent 1 unit of time. Increasing the clock pulse rate will increase the output frequency proportionally, and can therefore provide a mechanism by which an output frequency representation of ω can be scaled to meet any frequency bandwidth constraints. However, any scaling must be accounted for when ω is measured, or the value of φ will be misinterpreted.

The module also expects a frequency source to be connected. This is shown in the diagram as an AC voltage source. The frequency source has one input and one output. The output is expected to be the frequency ω (or a scaled version of ω) and the input will be a control voltage proportional to the magnitude of the frequency shift that is required to encode the quantum state specified by g and a into the frequency. The adjustment of the oscillation that is effected by the control voltage is a traditional closed loop control systems problem which will be addressed later and depends on the specific oscillator chosen.

High Speed Analog Module Efficiency

The speed of any quantum operation applied by this module is inversely proportional to:

1. The settling time of the operational amplifiers in the circuit.

2. The accuracy affected by the value of Vd.

Settling time is the elapsed time from the application of an ideal instantaneous input voltage to the time at which the output has entered and remained within a specified error band. The settling time can be determined from the propagation delay or slew rate, which can be found in all op amps’ data sheets.

Let us assume we were to use a modern op amp, the LMH3401. LMH3401 op amps have a slew rate of 18000 V / μs. Assuming an operating power range of 10 volts, the total delay due to settling time will be less than:

per sequential op amp in the longest chain of op amps. Therefore it will be on the order of 50 ns.
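As a back-of-the-envelope check (assuming purely slew-rate-limited settling over the full 10 V swing, and ignoring the error-band criterion above):

$$t_{settle} \approx \frac{10\ \text{V}}{18000\ \text{V}/\mu\text{s}} \approx 0.56\ \text{ns per op amp},$$

so even a chain of many tens of sequential op amps stays within the quoted tens of nanoseconds.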

The accuracy affected by the value of Vd can also be bounded. The value of the voltage Vd is related to the closeness of the data vector used in the experimental calculation of ω to the theoretical magnitude and direction of the vector for the given value of a and g. The relationship can be described using trigonometry.

Vd is exactly the difference between the magnitude of the experimental data vector which is created as Vx sweeps, and the magnitude of the experimental reference vector. The magnitude of the reference vector is manifested as a voltage itself, and equal to (Vx + V1/2) ± Vd.

Let the angle between the true data vector and experimental data vector be φ′. Then the magnitude of φ′ will be zero when the vectors' magnitudes are equal.

Recall that Td(a,b) denotes the data point of a sphere chosen such that φ is positive, and Tr(a,b) denotes the matching reference point.

Then, let us introduce an error term ∆.

Simple algebraic manipulation yields the relationship of ∆ with φ:

The following graphs demonstrate the dependence of the error on the value of h.

Essentially, h contributes to the uncertainty with a magnitude of ±1, except in the region where ∆ → 0. If ∆ could be 0, which it cannot since Vd cannot equal 0 volts, the error due to h would be either 0 or -1.

Therefore, ∆ is at worst ±|pTd(a,b)| for all values of h:

The implication is that the maximum effect of Vd is bounded by x + 1/2. ∆ may cause a discrepancy in the encoded value of a proportional to the magnitude of a itself.

This can be bounded further since any data point will lie on the surface zg(x,y) = gy^2 + y + gx^2 + x. x depends on both g and a such that if g increases but a remains the same, x will decrease. If x decreases, so will ∆, until x = ∆ = 0. On the other hand, ∆ will increase as g → 0. g cannot be exactly 0, but Vg might take a value equal to 2 V U/2, the representation of a numerical 1, in order to represent closeness to the |0^⊗Q > state. Let this be considered close to the worst case. In this case, a = ⌈x⌉. We again introduce the error:

The worst case scenario for this condition would be that x = a. See that ∆ would then at most cause a discrepancy of da(∆) = 2a.

Finally, we need to take into account that this error only plays a role in the calculation of data vectors, which is done during the sweep of Vx from Vx = 2(a − 1)V U/2 to Vx = Va. So, the maximum error will actually be no greater than the difference between Vxd (the data vector's value of Vx) and the starting value in the sweep Vx0. The error will then be the lesser of da(∆) and the difference between the data vector's Vxd and Vx0.

The trade-off with this error is speed. If da(∆) is greater than the difference between Vxd and Vx0, then the sweep will trigger the rest of the module’s calculations immediately. Otherwise, it will wait until a voltage close enough to the data vector’s value of Vxd has been included in the sweep.

The speed of any quantum operation applied by the analog module is directly proportional to:

1. The speed of the voltage sweep from Vx = 2(a − 1)V U/2 to Vx = Va.

2. The frequency of the scaled time reference introduced by the clock source t.

These factors both depend on the quality of external equipment. The most significant is the speed of the voltage sweep from Vx = 2(a−1)V U/2 to Vx = Va. A voltage sweep's speed is limited only by the ability of the circuit to respond to it. Therefore, we can take the worst case op amp settling time of 5 ns as the slowest sweep step required.

Since the frequency scaling feature cannot scale a frequency past the stability limit of the ω oscillator, it is only potentially useful for lower frequencies. However, it does have the potential to increase performance.

The voltage signal output to the oscillator is updated once per unit time. So, while the speed of the calculation is not affected by the time scaling feature, the speed at which data is transmitted to the next device is.

The way that the adjustment voltage for the oscillator is calculated is based on simple trigonometry.

Two vector magnitudes are calculated and passed to the final segment of the analog module, r and r′.

The relationship between r, r′ and φ is:

The signal ω produced by the oscillator (before being multiplied by Cd) has frequency φ and is therefore:

So, for ω to be correct, the DC voltage value of ω must equal r′ / r periodically in time, when t is a multiple of 1 time unit. Therefore, the time required to update the oscillator after a calculation is equal to 1 time unit, which may be scaled by the clock source.

In summary, the maximum total time ttot required for any quantum transformation will be roughly:

Note that the complexity of a quantum operation is not significant to this module, as long as the final result is defined by g and a. The complexity of an operation is the responsibility of an external hardware module. For example, we know that applying an X gate to all qubits in a system is simply equivalent to flipping g across a voltage equal to half the oscillator device’s capacity, regardless of the number of qubits in the system. So, an external module for doing this would simply be a circuit that flips g. This operation’s time of actualization would be equal to the HSAM’s ttot plus the propagation delay of the external module.
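
On one reading of "flipping g" (an assumption here, since the external module itself is not drawn), such a circuit would simply output Vg' = Vmax − Vg, i.e. a reflection of g about Vmax / 2.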

High Speed Analog Module Cost

A pair of HSAMs in tandem with a pair of oscillators like the AX7MAF1 will be capable of maintaining Q:

qubits at a precision of pr. As mentioned, the AX7MAF1 costs roughly $36.10. High precision LMH3401 op amps recently cost about $14 each on DigiKey. However, more standard op amps may be selected which have lower slew rates. For example, we might sacrifice encoding speed to achieve a lower price point by instead using LM358s, which have only a 0.6 V / μs slew rate but cost $0.16 each.

The entire cost of the analog module’s components might be estimated by:

Low-end cost per qubit is then:

High-end cost per qubit is then:

For example, with a precision of 1%, the low-end cost per qubit is:

And the high-end cost per qubit is:

Encoding Sign and Phase

We will now discuss an approach to encode phase and state information into the same signal as the probability distribution of a mixed state’s observables. This would replace a digital flag module.

Note that we have not yet taken advantage of one rotational axis in the p-spectrum paraboloid model of information, and that any rotation in θ can be interpreted as a rotation of the Cartesian frame about the z axis, and so does not affect the results of the calculations of g or a. It is an additional degree of freedom that can be used to encode phase and sign into the signal that represents our mixed qubit state.

Let us assume that every output of the HSAM described thus far is a pure frequency shift of the original oscillator’s frequency and no phase shifting is induced by this operation. The HSAM described would then operate perfectly on completely DC voltages. None of the components within it have significant reactivity. So, we can immediately redefine the input signal to this module to oscillate with a frequency and the functionality of the HSAM’s internal circuitry will not change. The only difference is that it will act on the DC component of the signal being processed rather than the entire signal. The root mean square (RMS) voltage will then be the component of the signal that emulates the probability space, where T is the period of the oscillation:

Simplified, the RMS voltage can be roughly calculated from the peak voltage of an oscillation:
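
For reference, the textbook relations being invoked here are:

VRMS = √( (1/T) · ∫ v(t)² dt ) (integrated over one period T)

and, for a pure sinusoid of peak voltage Vpeak,

VRMS = Vpeak / √2 ≈ 0.707 · Vpeak

(the second expression is the simplification referred to above; if the oscillation rides on a DC offset, the DC and AC contributions add in quadrature).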

The control signal that is sent to the AX7MAF1 oscillator will then oscillate as well. The effect will be for the frequency ω of the final output signal to oscillate in time.

The number of qubits in the emulated system directly dictates the number of phase and sign permutations possible. The set of permutations will have a size equal to 2^(Q+1), to account for each of the negative and imaginary flags. The maximum required permutations for one 20 qubit processor would then be 2097152. This would translate to 262144 bytes if the flag processor were implemented as a digital module. Alternatively, if flag processing is implemented as an AC signal processor, then the phase and sign of the emulated quantum pure states' coefficients will be manifested as oscillations of the frequency output ω from the AX7MAF1 high frequency oscillator. These oscillations of ω will encode information in their phase. The frequency requirements to encode sufficient phases will now be bounded.

If we have 20 qubits and need to be able to identify 2^(2·20) = 2^40 = 1099511627776 separate phases in the frequency of the output oscillation, then we need to be able to measure the frequency of the output quickly enough to have a sampling frequency at which 1099511627776 separate phases can be identified.

In order to digitally measure the frequency we might use a translation circuit with three stages:

  1. Signal smoothing with a capacitor
  2. Schmitt trigger conversion to square wave
  3. Amplitude fixing with a Zener diode

The benefit of the phase shift encoding of phase and sign data is that regardless of the value of the phase shift, it will always take at most a time equal to the period of the phase shifted oscillation of ω to measure. This measurement can be done by simply detecting at which index into the generated digital signal the high level of a specific length occurs.

We need to be able to identify 1099511627776 separate phases. This means there will be 1099511627776 / 2 peaks of the output frequency per period of the phase shifted output frequency oscillation. The time it would take to measure all of these peaks is 1099511627776 / 2ω. This would have a worst case when ω approaches dω = Cd · dφ ≈ 1099511627776 · 2.926448341844523e−05 ≈ 32.17 MHz, and a best case when ω = 2.1 GHz.

The time to measure a quantum system's state coefficients' phases and signs would then take between 1099511627776 / (2 · 32.17 MHz) ≈ 4.75 hours and 1099511627776 / (2 · 2.1 GHz) ≈ 4.36 minutes if the entire period were measured. This is comparable to the performance of the device proposed in 2016. However, measuring all of these peaks is unnecessary in our system since the only information that is required to identify a phase shift is the length of the first peak or valley of a period. Therefore, our worst case need only be close to 1s / ωmin = 1s / 32.17 MHz ≈ 2.7 s, and the best case will be 1s / ωmax = 1s / 2.1 GHz ≈ 0.48 ms.
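
As a quick check of the first two figures (a sketch using only the values quoted above; the scaled first-peak times at the end of the paragraph depend on details of the ω oscillation not reproduced here):

# Sanity check of the full-period measurement times quoted above.
phases = 1099511627776   # number of distinguishable phases (2^40)
w_min = 32.17e6          # Hz, worst-case output frequency
w_max = 2.1e9            # Hz, best-case output frequency

print(phases / (2 * w_min) / 3600)   # ≈ 4.75 hours
print(phases / (2 * w_max) / 60)     # ≈ 4.36 minutes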

Let the amplitude of the phase shifted output frequency oscillation be fixed for simplicity, and be chosen to be minimal. The sampling frequency of our frequency measurement circuit must not interfere with these best and worst case scenarios. In the literature, the delay of a Schmitt trigger is reported to be roughly 0.125 ns. The propagation delays of the other discrete components of the output frequency digitization measurement circuit are trivial. Since 0.125 ns < 1 / 2.1 GHz ≈ 0.48 ns, the measurement circuit will perform well over the entire range of output oscillation as long as the amplitude of the phase shifted output frequency oscillation is chosen to be less than 1 / 0.125 ns = 8 GHz, a limit the frequency oscillation device cannot exceed anyway. Before the amplitude of the phase shifted ω oscillation is manifested in the output oscillation itself, it will be computed as an AC amplitude on the control signal being processed by the HSAM. We will choose a value of dVg = 0.00005 since it is less than the value dg = 0.0001 which was used earlier to estimate the density of g curves that can be emulated in general. Therefore this oscillation will not interfere with the properties of g curves that have been determined. It will also have a minimal effect on the number of encodable values of φ before ω saturates, and will help minimize the addition to measurement time due to the phase and sign information.

The frequency of the phase and sign oscillation of the control signal is not especially consequential but should be chosen such that the frequency responses of average and affordable reactive components such as capacitors can be used to perform meaningful phase shifts.

Implementation of the Set of Universal Quantum Operators

The implementations of the fundamental Pauli operators will need to take into account:

• The qubit being acted on

• The topology of the qubit system of which the qubit is a part

However, we will first generalize the approach to solving the analog module implementation of an operator on a two qubit system in order to gain an intuition for the approach.

Quantum Operators as Control Systems

Given that the analog operator modules are memoryless (excepting their input voltages perhaps being buffered), a repeatable algorithm for generating analog circuit implementations directly from quantum operator matrices can be derived. Like in the Schrödinger Picture of a quantum system, an operator is applied to a state and has the form and function of a differential operator.

The function of a quantum operator U in our system will be equivalent to the set of the convolution products of U’s decomposed Pauli operators’ corresponding analog implementations acting on each qubit.

Recall that the convolution product CP of two functions of time is:

In the s domain, which is the space that the Laplace transform takes differential equations, the convolution product is simplified to straight multiplication.
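
For reference, the textbook relations referred to here are:

CP(t) = (f ∗ g)(t) = ∫ f(τ) g(t − τ) dτ (integrated from τ = 0 to τ = t)

L{ f ∗ g }(s) = F(s) · G(s)

where F(s) and G(s) are the Laplace transforms of f and g.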

A transfer function T(s) is the Laplace transform of the impulse response of a linear time-invariant system (when the initial conditions are set to zero). These are practically non-existent in the real world. However, as stated by Richard Feynman, “Linear systems are important because we can solve them”. A linear system satisfies homogeneity, superposition and time invariance requirements. A linear system can also only consist of six operation types:

  • multiplication
  • differentiation
  • integration
  • addition
  • division (multiplication by inverse)
  • subtraction (addition of negative)

These are the operations we have in our arsenal when defining a transfer function. The Laplace transform of the Dirac delta signal is 1. We also know some other handy transforms of transfer functions:

The transfer function T(s) of an analog operator circuit will have at most two inputs, and two outputs.

Any transfer matrix with this input / output dimensionality is expressible in a number of forms including the following.

Where the two outputs would be:

Let c = [(≠ 0) R]. Then, see that c (sI − A)^(−1) B is equivalent to the transfer function in the s domain.

The matrix format is appealing since it deals directly with the transformation in a vector space, and the standard fractional transfer function format of T(s) is appealing since it will be more directly relatable to a circuit composed of a set of discrete analog components.

The variable s (of the described s domain) directly represents jω_component, the imaginary unit j times the angular frequency ω_component of the signal passing through an electrical component. With the knowledge that the impedance of a capacitor is:

Z_C = 1 / (sC)

and the impedance of an inductor is:

Z_L = sL

we can relate a transfer function directly to an analog circuit composed of discrete components. There are a variety of control system techniques that may be used to design and optimize a circuit. However, in our particular case, any quantum operation can be built by combining a few fundamental operator circuits.

X Gate

X2 Gate

An X gate applied to the second qubit would be equivalent to adding Vmax / 4 to Vgi when the initial state is within an odd surface group, and subtracting Vmax / 4 from Vgi when the initial state is within an even surface group.
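
A minimal sketch of this rule (how the surface group of the initial state is identified is left abstract here):

# X gate on the second qubit, as described above: shift Vg by a quarter of the
# full voltage range, in a direction set by the parity of the surface group.
def x2_vg(vg, vmax, surface_group):
    if surface_group % 2 == 1:    # odd surface group
        return vg + vmax / 4
    return vg - vmax / 4          # even surface group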

We can use a simple voltage divider to attain Vmax / 4. Operational amplifier subtractor, adder and comparator circuits can do the rest. Consider the following circuit which identifies the vertex group of the initial state, effectively performing a "vertex group measurement" which does not impact the measured state. Let this circuit be denoted Mv.

The outputs of the circuit have 1 Ω resistors in their places here for simulation purposes.

The LT1226 was chosen due to its low propagation delay (5.5ns). Simulating the response of the circuit to a DC voltage sweep using LTSpice demonstrates its function.

This (approximate) circuit can be used to implement the X gate applied to the second qubit using a hybrid digital / analog circuit.

X1 Gate

An X gate applied to the first qubit will have a very similar approximate circuit to the X gate applied to the second qubit. However, there will be an additional stage since the surface group must also be taken into account.

This circuit will change the state to the correct surface and vertex group. However, there are 6 cases where the state still needs to be reflected within the final surface group. These were determined iteratively by inspecting whether the X1 gate moved a state from each original vertex and surface group to a corresponding point on the final vertex and surface group correctly. If the original point was closer to the next surface group in wrapping order (1,2,3) then it was expected that the final point would also be close to the next group in wrapping order. In 6 cases, the order was reversed after the application of the X1 gate.

In these 6 cases, a circuit which reflects a state within its surface group will need to be applied as well in order to adjust Vgo.

Controlled NOT Gate

In the case we are using only one pair of analog modules to maintain a quantum system, applying a CNOT will be a single voltage shifting operation similar to a normal NOT (op amps can do it in one shot) applied on each analog module.

The vertex and surface group transitions for a CNOT1,2 are the following:

In summary, the operation is equivalent to a pure X2 when the vertex group is either |10 > or |11 >. When the vertex group is |00 > or |01 > and the surface group is 1, then the operation can be achieved by adding Vss / 12 to Vgi. When the vertex group is |00 > or |01 > and the surface group is 2, then the operation can be achieved by subtracting Vss / 12 from Vgi. When the vertex group is |00 > or |01 > and the surface group is 3, then the reflection occurs within the surface group. Consider the following "half CNOT" circuit which performs the operation necessary for the CNOT when the vertex group is |00 > or |01 >.

Then the full CNOT is the following.

Z Gate

The Z gate applied to both qubits will perform a sign flip on each pure state coefficient where exactly one of the qubits has a value of 1. It will negate itself when applied to a pure state with a value of 1 for both qubits, and have no effect when applied to a state in which both values are 0. Therefore, the Z gate applied to both qubits is only effective on the second and third vertex group states. If applied to a valid state, the Z gate will have no effect on its measurement probability and therefore will consist of a circuit with an entirely reactive impedance. In the case of a two qubit system we have 256 possibilities and 256 significant measurable phase shift ranges in the output frequency oscillation into which the phase and sign of each quantum state coefficient is encoded. Let the possibilities be ordered according to a binary count where the variables are the imaginary and sign flags on each state, ordered by the pure states' quantum bit strings. Then the Z gate will have the effect of shifting the phase to measurable ranges based on the previous range. Let each odd "flag bit" represent an imaginary flag, and each even bit represent a negative flag. Let a measurable range constitute a unit of phase shifting. Then:

• If the binary order of the measurable range is greater than 7 but less than 32 units, the Z gate should shift the phase up by 24 units.

• If the range is equal to or greater than 32 but less than 40 units, Z should shift it down by 24 units.

• If the range is equal to or greater than 40 units, Z should shift it down by 40 units.

• Finally, if the range is less than 7 units, Z should shift it up by 40 units.

This would achieve the complete set of state mutations that the Z gate can perform on two qubits.
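
A minimal sketch of this selection logic follows (range_index is the measurable range, 0 through 255, that the encoded phase currently occupies; the handling of a range of exactly 7 units is an assumption, since the rules above leave it unassigned):

# Z gate on both qubits: select the phase shift, in units of 2*pi/256 rad,
# from the measurable range the encoded phase currently occupies.
def z_gate_shift_units(range_index):
    if 7 < range_index < 32:
        return +24    # shift phase up by 24 units (3*pi/16 rad)
    if 32 <= range_index < 40:
        return -24    # shift phase down by 24 units
    if range_index >= 40:
        return -40    # shift phase down by 40 units (5*pi/16 rad)
    return +40        # range_index <= 7: shift phase up by 40 units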

A unit is exactly 2π / 256 rad. Therefore we need to design capacitive circuits with overall phase shifts of 3π / 16 rad, −3π / 16 rad, 5π / 16 rad and −5π / 16 rad. Let the frequency of the control signal be fixed. Say we fixed the resistance of the Z gate to 1 Ω. Then a phase shift of −3π / 16 = γ rad could be achieved by a capacitor of the common value 10μF if the frequency was chosen to be ≈ 149.660 kHz. Using 10μF as a reference instead of another common value such as 1μF was chosen since this gives us the ability to make use of more of the range of common capacitor and inductor ratings (as you will see becomes important in a moment, 0.44μF capacitors may be inexpensive and abundant, but 0.44μH inductors are not).

Since the phase shift of a capacitive circuit is given by:

Which in our case is equal to:

The phase shift of -5π / 16 rad could then be achieved using a capacitor of roughly 4.46μF .

While capacitors cause the phase of a signal to lag, inductors have the reverse effect. So, we can use inductors for the positive phase shifts.

To give an idea of the feasibility of this circuit, consider that 8000 10μH inductors with a form factor of 0.6 mm x 0.65 mm x 1.1 mm can be ordered for $1.27 apiece from Mouser Electronics. 8000 surface-mount capacitors of 10μF with form factor 0.8 mm x 0.16 mm x 0.85 mm can be ordered for $0.35 apiece from Mouser as well. 4.5μF caps and 4.5μH inductors are also available in the same price range.

We can achieve the application of the correct phase shift by using a circuit like the Analog Devices AD8302 which accepts two signals and outputs a voltage proportional to their relative phase shift. We can use a set of comparators that each control a transistor to select the correct shifted signal to be sent to the HSAM based on the range of the phase proportional voltage.

Applying Z to only the first qubit would be achieved by applying a phase shift of 136 units when the signal is in a measurable range less than 8 units, a phase shift of 120 units when it is in a range less than 128 but greater than 8 units, a phase shift of -120 units when it is in a range greater than 128 units but less than 136 units, and a phase shift of -136 units when it is in a range greater than or equal to 136 units. Our passive operator components for this operation would be 2 capacitors and 2 inductors, again. We would see a similar pattern for the Z gate applied to the second qubit.

Note that the DC voltage drop due to any resistance of the Z module must be compensated for in its output.

Y Gate

The Y gate can notably be constructed by applying iXZ. We have already defined the Z and X operations. Therefore, we only need to worry about the application of i. The application of i to any two qubit state with flag bits FB will be the following, where the upper half of the table includes circuit components that act on the first of the dual oscillators, and the lower half of the table has elements that operate on the second. Notice how this provides an opportunity for this hardware to be reused by both modules rather than duplicated.

M Gate and Measurement

An advantage of simulating or emulating quantum systems is that system properties which are normally not measurable can be measured without disturbing state. Let us first look at some special cases that we know and understand.

Chief States

When a state is on a chief curve of a particular “chief observable state” in the probability simplex, we know that the coefficients corresponding with each other observable state must be the same. We also know that the coefficient corresponding with the chief observable state is related to the other coefficients by the following, where p <= 1 is the fraction of the probability spectrum maintained in a simplex. See that Cother and Cchief are related by an elliptic curve.

If p = 1, the curve would look like the following segment of an ellipse:

Recall that two simplices may be used, one for imaginary and one for real coefficients' components. In the case that only one simplex has a state with a > 0, p will equal 1 for that simplex and 0 for the other. In the case that both simplices maintain coefficients with significant magnitude, then p will be 1 minus the sum of the squares of the coefficients maintained in the other simplex. Since a represents the distance of a state from the maximally mixed state, we can determine the magnitude of a coefficient directly from a. In the case that a state is on a chief curve, the coefficient will be equal to the value of a divided by half the precision in a. Recall that the precision in a is maximally 1% · 2^(Q−20) using an oscillator like the AX7MAF1.

Then the following circuit would produce a voltage proportional to the coefficient Cchief of the chief state when the maintained state is on its chief curve. Vp is a voltage representing the number of identifiable values of a due to the precision in a. For example, if the precision in a were 1%, Vp would be 100 volts. See that this would output a value of Vcoeff between 0.5 and 1 since Vcoeff is equivalent to:

However, it is clear that this circuit is not practical. Addition of large numbers might be practical to implement since our circuits can use an adjustable voltage unit 2 V 1/2 when doing addition or subtraction, but when multiplying or dividing a voltage the multiplier or divisor needs to have an actual voltage value corresponding to its computational value (these voltages can't be scaled to use a unit of 2 V 1/2). A Vp of 100 volts is the primary concern. So instead, we must employ a different approach to finding the coefficients' values.

We need the range of the output Vcoeff to fill the allowable domain of the elliptic curve [0.5, p]. The logistic equation known as the sigmoid function is ideal for this purpose.

The sigmoid evaluated at different values of c yields a family of curves.
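
One possible parameterization of this family (an assumption, since the exact form behind the figure is not reproduced here) is the scaled logistic function, evaluated below for a few values of c:

import math

# Hypothetical parameterization of the sigmoid family referred to above:
# sigma_c(v) = 1 / (1 + exp(-v / c)), where c sets how quickly the output
# sweeps its range.
def sigmoid(v, c):
    return 1.0 / (1.0 + math.exp(-v / c))

for c in [0.05, 0.1, 0.5]:
    print(c, [round(sigmoid(v, c), 3) for v in [0.0, 0.25, 0.5, 1.0]])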

c should be selected based on the range of Vcoeff and the precision in a. Say the precision in a were 100. Then we want Vcoeff to have an approximate value of:

It is easy to see from the graph that choosing a value c = 0.1 · 2 V 1/2 fits this purpose since 2 V 1/2 is our computational voltage unit (cvu). The Laplace transform of the sigmoid function yields a transfer function that can be implemented as a PID controller. L denotes the Laplace transform and D denotes the digamma function.

So, in our case we will introduce a transfer function Sigmoid(s), where all numbers are given in cvu:

So Vcoeff can simply be obtained by:

Then the other coefficients can be calculated using a few components including a root extractor circuit like that described in Texas Instruments' AN-31 application note. V3 is 3 volts, Vpr is the fraction of the probability spectrum maintained in the simplex.

Strongly Entangled States

Strongly entangled states are those on edges of the simplex. States that are directly on edges of the simplex are easy to solve. a is always a max for edge states. The coefficients for observable states that are not vertices of the particular edge are zero. The remaining coefficients C0 and C1 are governed by:

Since a is a max, g is the deciding variable for the coefficients of strongly entangled states. Say we chose to allocate 20000 g curves to each surface group. Each angle in the equilateral tetrahedron is 60° or 1.0472 rad. Let the curves' "termination" points when a = a max be distributed evenly about the surface.

Let the curves in a surface group be ordered such that their termination points create radially evenly spaced “termination lines” from the surface group’s vertex.

Let us choose to order the curves such that they increase in order as they approach the edge of their surface group with a surface group of a higher order, and let them decrease in order as they approach the edge of their surface group with a surface group of a lower order. Let the curves in each odd line increase in order as they approach the maximally mixed state and the curves in each even line decrease in order as they approach the maximally mixed state. For consistency, we may want to maintain the same probability precision between termination points as we do between values of a. Therefore, let each termination line be made of 50 termination points. Then we have 20000 / 50 = 400 termination lines.

Consider the strongly entangled termination lines to be on the stereographic projection of a tetrahedral simplex.

Then the maximally entangled states of the system will occur when g is 50 dg before the end of each surface group. Strongly entangled states in general will exist on simplex edges, in the first and last termination lines of each surface group.

For strongly entangled states, we have the relationship of a circular curve between C0 and C1 with a domain of [1/√2,1].

In this case we want Vcoeff to be between 1/√2 and 1, something like:

We might calculate the coefficients of the strongly entangled states by the following process.

Edge-Aligned States

The next most complex case is when the state is on a g curve which fills space on a plane that includes two simplex vertices and the maximally mixed point. In this case we have that two coefficients are the same, and two are not, like in the strongly entangled case. g will have the same bounds as well. However, a will not be a max, and the two coefficients that are the same are non-zero.

C other is proportional to the value of a while the ratio of C0 and C1 is a function of g. If we let Vcoeff represent the coefficient C other, then we still have that:

(the circular relationship in the horizontal plane of the graph above). The remaining coefficients will be related by:

H Gate

The Hadamard gate can notably be expressed as the following.
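
The decomposition being referred to is presumably the standard identity H = (X + Z) / √2, which is consistent with the parallel X and Z arrangement described next.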

This is trivial to construct using the previously defined gates for the X and Z operations. Since transfer functions in parallel sum together, we can simply arrange the X and Z gates in parallel to achieve the Hadamard.

Optimal Hybrid Analog / Digital Processor Design

Processing Unit Block

An ideal, scalable architecture for the full electrical processing system will be discussed in this subsection. The informational flow diagram of a single, minimalistic processing unit based on the approaches described thus far might be the following:

The purple and grey blocks represent memoryless hybrid analog and digital circuits. The purple are pure phase shifts while the grey also act on DC voltage. The orange represent purely analog circuits. The green represent the DC component of the oscillator and HSAMs described in detail in the previous subsection. The blue represents the AC component of the oscillator and HSAMs.

Note the digital and analog inputs and outputs of the processing module. The purpose of the first input of O bits is to select a grey operation to be applied, meaning the grey modules must each have a unique bit address. The number of bits will therefore be O = ⌈log(operations) / log(2)⌉ bits. The input of Q bits is used in the creation of quantum states. Each bit directly represents the state of its corresponding qubit. For example, during the creation of a state |1001011010 >, there would be Q = 10 bits with the values 1001011010, the state's "bit string". The analog input is also used during state creation. The outputs simply provide access to the data in the HSAM / oscillator.

High Level Processor Unit Block Simulator

The functionality of each module can be explicitly detailed by simulating the system using Python. The following code simulates the processor’s informational architecture, with minimal use of libraries and pre-built functions. This simulator also provides insight into the amount of memory and resources that would be required to perform an algorithm.

Use of the simulator can be summarized into the following use cases. The simulator is packaged as a library, and the library API is written with the intent of reflecting the experience of solving a quantum circuit in Dirac notation. Therefore, what a user of the simulator must provide to declare a quantum state are the coefficients of its composite pure states (the square root of each pure state's probability of being observed) and the bit string representing the values of each qubit in that pure state.

# State coefficient
initial_coeff = Coefficient(magnitude=1.00, imaginary=False)
# 3 qubit state
initial_state = State(coeff=initial_coeff, val="000")
# State ensemble
state = States(state_array=[initial_state], num_qubits=3)

Supported operations include the Pauli matrices, the Hadamard operator, and measurement.

# Quantum circuit operations
state.h(qubit=0).cx(source=0, target=1).cx(source=2, target=0).h(qubit=2)
# Measure states directly into variables
m_1 = state.m(qubit=2)
m_2 = state.m(qubit=0)
# Classical results can control quantum operations
if m_2 == one:
    state.x(qubit=1)
if m_1 == one:
    state.z(qubit=1)

Insights that can be gained from using the simulator include the following.

# Print matrix state representation of entire state
state.print_density_matrices()
# Query for the density matrix of a single qubit at any step
matrix = state.get_density_matrix(qubit=1)
# Get alpha, beta for any qubit at any step
[alpha, beta] = state.get_components(qubit=2)
# Learn what resources were required to implement the experiment
state.print_max_requirements()

This simulator has been implemented and tested in order to validate and optimize the layout of the processor. However, it does not benefit from the efficiency of the HSAM, or from the inherently faster nature of bare-metal application specific circuitry. Software will serve a greater purpose as well, however, which will be detailed in a later subsection.

Processor Hardware Requirements

Having the simulator as a reference makes it a straightforward task to design the hardware processor, piece by piece.

First, we will look at the first commands given to the simulator in any simulation:

We know that in the hardware implementation, a coefficient on a pure state is actualized as the closeness of the emulated state to that pure state in the probability simplex space. Also, whether the coefficient is negative or imaginary is maintained as a bit in the flag memory. So, the “Create Coefficients” module must be capable of the following tasks:

1. Update Va and Vg to move to the appropriate distance from the pure state.

2. Create or update the flag memory for the state.

In order for these to be possible, we must be able to initialize the analog module and have a system for allocating and managing flag memory space.

To make the emulator simpler, all pure states will each always be considered a valid vertex of the qubit state simplex being maintained. Meaning, we will always emulate all possibilities in the analog module even if their probabilities of being observed are 0. No shortcuts will be made during the partitioning of g curves. When a coefficient creation begins, the topology of the emulated system will update to match the number of qubits in the string of Q bits.

When the system is initialized, let the number of qubits in the emulated system directly dictate the size of the flag memory’s allocated space. The allocated space will have a size equal to 2^(Q+1), to account for each of the negative and imaginary flags. The maximum allocated flag memory size required for one 20 qubit processor would then be 262144 bytes, or 2097152 bits. The number of qubits must be stored so that the system is capable of distinguishably delimiting the allocated space. 5 additional bits will be required to store this value. Let the allocated flag space always begin in memory location 0, and the number of qubits be stored in the final 5 bits of memory. In order to manage this digital memory, we will need to implement a digital information pipeline.

Operations on n qubits without dependencies on current qubit states are very simple. For example, a NOT gate does not need to know what the state of its target qubit is in order to flip its bit. In terms of digital information, all it needs to do is interchange the flag bits of each target qubit state with its complement’s. This can certainly be done by a combinational circuit in one clock cycle. The same is true of a phase flip and sign flip operation, which simply need to call upon the “multiply by i” and “negate” modules to mutate the correct memory locations. Controlled operations are also simple and should just be applied to every state that has the value 1 in the place of the controlling qubit.
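
As a sketch of this flag-bit interchange (the in-memory layout is simplified to a Python dict purely for illustration, flags is assumed to hold an entry for every one of the 2^Q pure states, and indexing qubit 0 as the leftmost bit of the bit string is an assumption):

# Digital side of an X (NOT) gate on qubit k of a Q-qubit register. The gate
# just swaps each pure state's flag pair with its complement's, which is what
# a combinational circuit could do in one clock cycle.
def apply_not_flags(flags, k, Q):
    mask = 1 << (Q - 1 - k)    # assumption: qubit 0 is the leftmost bit
    for state in range(2 ** Q):
        partner = state ^ mask
        if state < partner:    # swap each pair only once
            flags[state], flags[partner] = flags[partner], flags[state]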

So, the details of the operations that will affect their timing performance lie within the implementations of their modules (which are hopefully achievable as combinational circuits) and in the memory management mechanism.

Digital Memory

SRAM will be chosen for the flag memory since it is generally more affordable than NVRAM. The downside is that memory values will not be maintained if the device is powered down, which is acceptable. For example, we might choose a memory chip like the IS62WV12816EALL, a 128K x 16 low voltage, ultra low power CMOS SRAM chip that costs roughly $1.28 each. The IS62WV12816EALL has a write cycle and access time of 45ns, well under the time required for the analog module to finish its part of an operation (10 μs). This would leave us with 2097152 − 128K · 16 + 5 = 49147 bits required to complete our minimum valid memory size for 20 qubits.

Of course, the fastest type of memory is a simple latch. Texas Instruments’ SN74HSTL16918 is a latch array with a 1.9 ns delay for writes, and no delay for reads due to the nature of the D-latches used. However, the SN74HSTL16918 has only 18 bits and costs roughly $2.75, an affordable price for latch based memory. So, even using this quick technology just to finish off the 49147 remaining bits would cost over $7508.

We might instead choose a chip with a parallel access memory interface, like the one that the IS62WV12816EALL also uses. This is a fast interface because it is asynchronous. We might choose a chip like the AS6C6264, a $2.43 8K x 8 low power CMOS SRAM chip with 64Kb of memory and write cycle and access times of 55ns. This far surpasses the several microsecond write and access times of I2C and SPI interfaced memory.

If we are not worried about having excess memory, we might choose to splurge and use a single 4Mb memory chip like the IS61WV25616EDBLL, a 256K x 16 high speed asynchronous CMOS SRAM chip that costs only $2.11. Its write and access delays are only 10ns. This gets us more than twice the memory we need, but also costs less than the more precise alternatives.

In any case, we will choose to use parallel asynchronous memory. Let us assume we went with a single chip similar to the IS61WV25616EDBLL and have memory to spare.

Since we have an excess of memory, and are not concerned too much with using space, we can forego memory location optimization and simply allocate 2 consecutive bits per state. Then the address for a state's flag bits is simply the state's bit string times 2. The circuit that calculates a state's flag memory location is simply a left bit shift of 1 digit on the state's bit string. However, memory chips like the IS61WV25616EDBLL address words of 16 bits. So, the nth bit will occur in the word at index ⌊n/16⌋. To calculate the word address, we could shift the bit address right by 4 digits. See that the direct calculation of a word address from the state string would be a right bit shift of 3 digits on the state bit string. The location of the bit address within the word will then be given by the remainder of the n/16 division, which corresponds to the bits shifted out of the state bit string during the right bit shift. Several microprocessors include support for a command that shifts a bit string through a register in order to retain the bits pushed off of the end, including the Motorola 68000, an old and inexpensive processor. However, the 68000 and similar processors do a lot more than is required for the quantum emulator being designed. So, we will take a page from their book but not go so far as to use a third party microprocessor product. We can build something even less costly.
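
A quick sketch of this address arithmetic (the example state is arbitrary):

# Flag-address calculation assuming 2 consecutive flag bits per state and
# 16-bit memory words, as with the IS61WV25616EDBLL.
def flag_address(state_bits):
    state = int(state_bits, 2)
    bit_address = state << 1          # 2 flag bits per state
    word_address = state >> 3         # equivalently, bit_address >> 4
    bit_offset = bit_address & 0xF    # position of the flag pair within the word
    return word_address, bit_offset

# Example: state |1001011010> -> word address 75, bit offset 4
print(flag_address("1001011010"))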

Digital Microprocessing Layer Elementary Operation Pipelining Infrastructure

The digital pipeline of our processor will be unique. We will not need to discuss modern processor pipeline architecture since our processor will be far simpler than even the classic RISC architecture used by early processors like the M68000 and MIPS processors. Classic pipelines included only a few stages. The names of the pipeline steps in the MIPS architecture were "Instruction fetch", "Instruction decode", "Execute", "Memory access" and "Writeback".

It will be necessary to include a stage for fetching instructions, since it will be necessary for a user to select each operation to apply. The fetch step will occur periodically.

The decode step will consist of identifying the operation to be executed, and determining whether it is safe to trigger the operation's digital and/or analog circuits. We have a unique problem to solve in this design since we are executing a set of relatively lengthy DC domain analog operations in which the order of operations is significant, alongside relatively fast AC operations in which order is insignificant. Due to this fact, it is ideal to separate the execution step into DC and AC execution steps. The decode step will be responsible for taking in an instruction and determining the underlying DC and AC operations, each of which will be scheduled separately.

Before we begin the design of the task scheduling mechanism, we will first design a mechanism for combining operations that takes advantage of the memoryless nature of the operation modules. Assuming each operation module's analog circuitry is a signal processor, and each operation module's digital circuitry is combinational, we might be able to perform several "sequential" operations at once by piping together the modules' circuits in the correct order. For example, if we wanted to apply a NOT gate to a qubit, and then a Z gate to the same qubit, the analog NOT circuitry and Z circuitry could be applied to Va and Vg at once. This would be the same as the convolution of their transfer functions. We would need to be able to control both the path of Va and Vg into the first circuit, and the path of the first circuit's output into the second circuit. This could be achieved simply by using transistors or relays. Each operation module's analog circuitry could have an output that is multiplexed to the HSAM's inputs as well as each of the other analog modules' inputs. Then it is the responsibility of the decode circuitry to compose compatible steps in the correct order. This achieves a sort of "sequential" instruction level parallelism that maintains the order of operations. The limitation of this scheme would be that each module can only be used once in the convolution of operations.

Both instruction level parallelism and out-of-order execution can be taken advantage of in the digital circuitry surrounding the operation modules. Since the only mutations done by the AC circuitry are sign and phase flips, order does not matter. In fact, any set of sign flips on a state can be summarized into one flip if the number of flipping operations is odd, and any even number of sign flips can be ignored entirely. The behaviour of each of the phase and sign flip modules can be summarized as follows.

In other words, the negation is just a bit flip of the sign bit and can be used to represent any odd number of negating operations. The multiplication by i is simply a bit flip of the imaginary flag, and also a negation of the state when the imaginary input is true.

The number of negations actually performed might be summarized in terms of the number of multiplications by i Ni and negating operations N−1 as:

The number of imaginary flag flips actually performed might be summarized in terms of the number of multiplications by i Ni as:

We should also consider that any of H, X, Y and Z have no effect when applied an even number of times in succession to a particular set of states. In particular, the number of successive gates in this set to actually apply would be given by the number of similar gates in a row NS:
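
A minimal sketch of these summarization rules (n_i, n_neg and n_s stand for the Ni, N−1 and NS above):

# Summarize runs of sign/phase operations before execution. Every pair of
# multiplications by i contributes one sign flip (i * i = -1); all flips then
# reduce to their parity.
def summarize_flags(n_i, n_neg):
    negations_applied = (n_neg + n_i // 2) % 2
    imaginary_flips_applied = n_i % 2
    return negations_applied, imaginary_flips_applied

# An even run of identical self-inverse gates (H, X, Y or Z) cancels entirely.
def summarize_repeats(n_s):
    return n_s % 2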

Since it is apparent that summarizing and composing computational steps will be useful, and we are treating the DC and AC elements of operations as separate pipelines, we will adopt a VLIW architecture. VLIW stands for "Very Long Instruction Word", and indicates a processing architecture that accepts multiple instructions per instruction word and schedules each instruction to be processed by one of a number of parallel pipelines in such a way that there are no conflicts between instructions.

The initial design includes five operation modules that might be piped together. So, we will start with 5! + 4! + 3! + 2! = 152 unique convolutions which are permutations of the modules without repetition. We will use 5^2 = 25 bits to select a particular convolution, one for each transistor or relay required. The bits used for specifying a convolution can be derived from the operation identifier bits in the instruction word if we assume that operations are listed from left to right in the order that they are meant to be applied. We will also need to include bits that identify the target qubit for each operation, and the source qubit for controlling operations. We will assume that each operation identifier is followed by a 5 bit qubit identifier, and that a controlling operation is followed by two 5 bit qubit identifiers, for the source and target qubits respectively. Next we will solve the circuit that calculates the permutation of operations to be convolved based on the operation bits in the instruction word, assuming that all the operations target one same set of states.

We can minimize the standard SOP boolean expressions that follow from this table using the Quine–McCluskey algorithm. The following code implements this optimization algorithm for our particular circuit, and was able to remove 1044 redundant product terms.

# Maps each operation nibble, given the nibble that precedes it in the
# instruction word (or the key 0 if it comes first), to the index of the
# permutation-select bit to assert.
perm_bit_indeces = {
    '0001 ': {
        0: 0,
        '0010 ': 1,
        '0011 ': 2,
        '0100 ': 3,
        '0101 ': 4,
    },
    '0010 ': {
        0: 5,
        '0001 ': 6,
        '0011 ': 7,
        '0100 ': 8,
        '0101 ': 9,
    },
    '0011 ': {
        0: 10,
        '0001 ': 11,
        '0010 ': 12,
        '0100 ': 13,
        '0101 ': 14,
    },
    '0100 ': {
        0: 15,
        '0001 ': 16,
        '0010 ': 17,
        '0011 ': 18,
        '0101 ': 19,
    },
    '0101 ': {
        0: 20,
        '0001 ': 21,
        '0010 ': 22,
        '0011 ': 23,
        '0100 ': 24
    }
}

import itertools

# Build the sum-of-products expression for each of the 25 permutation-select
# bits over every ordered selection of 2 to 5 distinct operation nibbles.
expressions = [[] for perm in range(25)]
minterms = [[] for perm in range(25)]
for length in [2, 3, 4, 5]:
    for word in itertools.permutations(['0001 ', '0010 ', '0011 ',
                                        '0100 ', '0101 '], length):
        instr_bits = ''
        perm_bits = list('0000000000000000000000000')
        for n in range(len(word)):
            nibble = word[n]
            instr_bits += str(nibble)
            if n == 0:
                perm_bits[perm_bit_indeces[nibble][0]] = '1'
            else:
                perm_bits[perm_bit_indeces[nibble][word[n - 1]]] = '1'
        perm_bits = ''.join(perm_bits)
        for p in range(len(perm_bits)):
            if perm_bits[p] == '1':
                boolean_exp = ''
                operations = instr_bits.split(' ')
                j = -1
                for o in range(len(operations)):
                    for b in operations[o]:
                        j += 1
                        word_bit = ' W_{{{0}}}'.format(j)
                        if b == '1':
                            if boolean_exp != '':
                                boolean_exp += r' \cdot' + word_bit
                            else:
                                boolean_exp += word_bit
                if not boolean_exp in expressions[p]:
                    expressions[p].append(boolean_exp)
                    minterms[p].append(instr_bits)
# print SOP
for k in range(len(expressions)):
    perm_bit = 'P_{0} ='.format(k)
    perm_bit_exps = expressions[k]
    perm_bit_total_exp = perm_bit + ' + '.join(perm_bit_exps)
    print(perm_bit_total_exp)

# simplify
has_deps = True

while has_deps == True:
    has_deps = False
    dup_dependencies = [{} for perm in range(25)]

    for l in range(len(minterms)):
        for term in range(len(minterms[l])):
            one_term_bits = []
            for term_bit in range(len(minterms[l][term])):
                if minterms[l][term][term_bit] == '1':
                    one_term_bits.append(term_bit)
            for other_minterm in minterms[l][0:term] + minterms[l][term + 1:]:
                dup = True
                for bit_index in one_term_bits:
                    if (bit_index > len(other_minterm) or other_minterm[bit_index] == '0'):
                        dup = False
                if dup == True:
                    if not term in dup_dependencies[l].keys():
                        dup_dependencies[l][term] = [minterms[l].index(other_minterm)]
                    else:
                        dup_dependencies[l][term].append(minterms[l].index(other_minterm))

    removal_candidates = [[] for index in range(len(minterms))]
    for p in range(len(dup_dependencies)):
        for opt_term_index in dup_dependencies[p].keys():
            for rm_term in dup_dependencies[p][opt_term_index]:
                if not rm_term in dup_dependencies[p].keys():
                    if not opt_term_index in removal_candidates[p]:
                        removal_candidates[p].append(rm_term)

    # filter optimal expressions
    optimal_expressions = [[] for index in range(len(expressions))]
    optimal_minterms = [[] for index in range(len(minterms))]

    removed = 0
    for k in range(len(expressions)):
        for q in range(len(expressions[k])):
            if q in removal_candidates[k]:
                print('Redundant term {0} removed from P_{1} expression'.format(expressions[k][q], k))
                removed += 1
                continue
            optimal_expressions[k].append(expressions[k][q])
            optimal_minterms[k].append(minterms[k][q])

    expressions = optimal_expressions
    minterms = optimal_minterms
    if removed != 0:
        has_deps = True
        print('pass removed {0} terms'.format(removed))

# print optimal expressions
for e in range(len(expressions)):
    perm_bit = 'P_{0} ='.format(e)
    perm_bit_exps = expressions[e]
    perm_bit_total_exp = perm_bit + ' + '.join(perm_bit_exps)
    print(perm_bit_total_exp)

The optimized boolean expressions for each output bit are:

P0 =W3 ·W6 +W3 ·W5

P1 =W2 ·W7 +W2 ·W3 ·W6 ·W11 +W1 ·W6 ·W11 +W2 ·W3 ·W5 ·W10 · W15 +W1 ·W6 ·W7 ·W10 ·W15 +W1 ·W5 ·W7 ·W10 ·W15 +W1 ·W3 ·W5 ·W10 · W15 +W2 ·W3 ·W5 ·W9 ·W11 ·W14 ·W19 +W1 ·W5 ·W7 ·W10 ·W11 ·W14 ·W19 + W1 ·W3 ·W6 ·W7 ·W9 ·W14 ·W19 +W1 ·W3 ·W5 ·W10 ·W11 ·W14 ·W19

P2 =W2·W3·W7+W2·W6·W7·W11+W1·W6·W7·W11+W2·W5·W10· W11 ·W15 +W1 ·W6 ·W10 ·W11 ·W15 +W1 ·W5 ·W7 ·W10 ·W11 ·W15 +W1 ·W3 ·W5 ·W10 ·W11 ·W15 +W2 ·W5 ·W9 ·W11 ·W14 ·W15 ·W19 +W2 ·W5 ·W7 ·W9 ·W14 · W15 ·W19 +W1 ·W6 ·W9 ·W11 ·W14 ·W15 ·W19 +W1 ·W5 ·W7 ·W10 ·W14 ·W15 · W19 +W1 ·W3 ·W6 ·W9 ·W14 ·W15 ·W19 +W1 ·W3 ·W5 ·W10 ·W14 ·W15 ·W19

P3 =W1 ·W7 +W2 ·W5 ·W11 +W1 ·W3 ·W5 ·W11 +W2 ·W6 ·W7 ·W9 · W15 +W2 ·W5 ·W7 ·W9 ·W15 +W2 ·W3 ·W6 ·W9 ·W15 +W1 ·W3 ·W6 ·W9 · W15 +W2 ·W6 ·W7 ·W9 ·W11 ·W13 ·W19 +W2 ·W3 ·W6 ·W9 ·W11 ·W13 ·W19 + W2 ·W3 ·W5 ·W7 ·W10 ·W13 ·W19 +W1 ·W3 ·W6 ·W10 ·W11 ·W13 ·W19

P4 =W1·W3·W7+W2·W5·W7·W11+W1·W5·W7·W11+W2·W6·W7· W9 ·W11 ·W15 +W2 ·W5 ·W9 ·W11 ·W15 +W2 ·W3 ·W6 ·W9 ·W11 ·W15 +W1 ·W6 · W9 ·W11 ·W15 +W2 ·W6 ·W7 ·W9 ·W13 ·W15 ·W19 +W2 ·W5 ·W10 ·W11 ·W13 · W15 ·W19 +W2 ·W3 ·W6 ·W9 ·W13 ·W15 ·W19 +W2 ·W3 ·W5 ·W10 ·W13 ·W15 · W19 +W1 ·W6 ·W10 ·W11 ·W13 ·W15 ·W19 +W1 ·W6 ·W7 ·W10 ·W13 ·W15 ·W19

P5 =W2 ·W7 +W2 ·W5

P6 =W3 ·W6 +W2 ·W3 ·W7 ·W10 +W1 ·W7 ·W10 +W2 ·W3 ·W5 ·W11 · W14 +W1 ·W6 ·W7 ·W11 ·W14 +W1 ·W5 ·W7 ·W11 ·W14 +W1 ·W3 ·W5 ·W11 · W14 +W2 ·W3 ·W5 ·W9 ·W11 ·W15 ·W18 +W2 ·W3 ·W5 ·W7 ·W9 ·W15 ·W18 + W1 ·W6 ·W7 ·W9 ·W11 ·W15 ·W18 +W1 ·W3 ·W5 ·W10 ·W11 ·W15 ·W18

P7 =W2·W3·W6+W3·W6·W7·W10+W1·W6·W7·W10+W3·W5·W10·W11· W14 +W1 ·W7 ·W10 ·W11 ·W14 +W3 ·W5 ·W9 ·W11 ·W14 ·W15 ·W18 +W3 ·W5 ·W7 · W9 ·W14 ·W15 ·W18 +W1 ·W7 ·W9 ·W11 ·W14 ·W15 ·W18 +W1 ·W5 ·W7 ·W11 ·W14 · W15 ·W18 +W1 ·W3 ·W7 ·W9 ·W14 ·W15 ·W18 +W1 ·W3 ·W5 ·W11 ·W14 ·W15 ·W18

P8 =W1·W6+W3·W5·W10+W3·W6·W7·W9·W14+W3·W5·W7·W9· W14 +W2 ·W3 ·W7 ·W9 ·W14 +W1 ·W3 ·W7 ·W9 ·W14 +W3 ·W6 ·W7 ·W9 · W11 ·W13 ·W18 +W2 ·W3 ·W7 ·W9 ·W11 ·W13 ·W18 +W2 ·W3 ·W5 ·W7 ·W11 · W13 ·W18 +W1 ·W3 ·W7 ·W10 ·W11 ·W13 ·W18

P9 =W1·W3·W6+W3·W5·W7·W10+W1·W5·W7·W10+W3·W6·W7·W9 ·W11 ·W14 +W3 ·W5 ·W9 ·W11 ·W14 +W2 ·W3 ·W7 ·W9 ·W11 ·W14 +W1 ·W7 · W9 ·W11 ·W14 +W3 ·W6 ·W7 ·W9 ·W13 ·W15 ·W18 +W3 ·W5 ·W10 ·W11 ·W13 · W15 ·W18 +W2 ·W3 ·W7 ·W9 ·W13 ·W15 ·W18 +W2 ·W3 ·W5 ·W11 ·W13 ·W15 · W18 +W1 ·W7 ·W10 ·W11 ·W13 ·W15 ·W18 +W1 ·W6 ·W7 ·W11 ·W13 ·W15 ·W18

P10 =W2 ·W3 ·W7 +W2 ·W3 ·W6 +W2 ·W3 ·W5

P11 =W3 ·W6 ·W7 +W2 ·W7 ·W10 ·W11 +W1 ·W7 ·W10 ·W11 +W2 ·W5 · W11 ·W14 ·W15 +W1 ·W6 ·W11 ·W14 ·W15 +W1 ·W5 ·W7 ·W11 ·W14 ·W15 +W1 · W3 ·W5 ·W11 ·W14 ·W15 +W2 ·W5 ·W9 ·W11 ·W15 ·W18 ·W19 +W2 ·W5 ·W7 ·W9 · W15 ·W18 ·W19 +W1 ·W6 ·W9 ·W11 ·W15 ·W18 ·W19 +W1 ·W5 ·W7 ·W10 ·W15 · W18 ·W19 +W1 ·W3 ·W6 ·W9 ·W15 ·W18 ·W19 +W1 ·W3 ·W5 ·W10 ·W15 ·W18 ·W19

P12 = W2 ·W6 ·W7 +W3 ·W6 ·W10 ·W11 +W1 ·W6 ·W10 ·W11 +W3 ·W5 ·W10 · W14 ·W15 +W1 ·W7 ·W10 ·W14 ·W15 +W3 ·W5 ·W9 ·W11 ·W14 ·W18 ·W19 +W3 ·W5 · W7 ·W9 ·W14 ·W18 ·W19 +W1 ·W7 ·W9 ·W11 ·W14 ·W18 ·W19 +W1 ·W5 ·W7 ·W11 ·W14 · W18 ·W19 +W1 ·W3 ·W7 ·W9 ·W14 ·W18 ·W19 +W1 ·W3 ·W5 ·W11 ·W14 ·W18 ·W19

P13 = W1 ·W6 ·W7 +W3 ·W5 ·W10 ·W11 +W2 ·W5 ·W10 ·W11 +W3 ·W6 ·W9 · W14 ·W15 +W3 ·W5 ·W7 ·W9 ·W14 ·W15 +W2 ·W7 ·W9 ·W14 ·W15 +W1 ·W3 ·W7 · W9 ·W14 ·W15 +W3 ·W6 ·W9 ·W11 ·W13 ·W18 ·W19 +W3 ·W5 ·W7 ·W10 ·W13 · W18 ·W19 +W2 ·W7 ·W9 ·W11 ·W13 ·W18 ·W19 +W2 ·W5 ·W7 ·W11 ·W13 ·W18 · W19 +W1 ·W3 ·W7 ·W10 ·W13 ·W18 ·W19 +W1 ·W3 ·W6 ·W11 ·W13 ·W18 ·W19

P14 =W1·W3·W6·W7+W3·W5·W7·W10·W11+W2·W5·W7·W10·W11+W1· W5 ·W7 ·W10 ·W11 +W3 ·W6 ·W9 ·W11 ·W14 ·W15 +W3 ·W5 ·W9 ·W11 ·W14 ·W15 + W2 ·W7 ·W9 ·W11 ·W14 ·W15 +W2 ·W5 ·W9 ·W11 ·W14 ·W15 +W1 ·W7 ·W9 ·W11 ·W14 · W15 +W1 ·W6 ·W9 ·W11 ·W14 ·W15 +W3 ·W6 ·W9 ·W13 ·W15 ·W18 ·W19 +W3 ·W5 ·W10 · W13 ·W15 ·W18 ·W19 +W2 ·W7 ·W9 ·W13 ·W15 ·W18 ·W19 +W2 ·W5 ·W11 ·W13 ·W15 · W18 ·W19 +W1 ·W7 ·W10 ·W13 ·W15 ·W18 ·W19 +W1 ·W6 ·W11 ·W13 ·W15 ·W18 ·W19

P15 =W1 ·W7 +W1 ·W6

P16 =W3·W5+W2·W7·W9+W1·W3·W7·W9+W2·W6·W7·W11·W13+ W2 ·W5 ·W7 ·W11 ·W13 +W2 ·W3 ·W6 ·W11 ·W13 +W1 ·W3 ·W6 ·W11 ·W13 + W2 ·W5 ·W7 ·W10 ·W11 ·W15 ·W17 +W2 ·W3 ·W6 ·W9 ·W11 ·W15 ·W17 +W1 · W3 ·W6 ·W10 ·W11 ·W15 ·W17 +W1 ·W3 ·W6 ·W7 ·W10 ·W15 ·W17

P17 =W2 ·W5 +W3 ·W6 ·W9 +W3 ·W6 ·W7 ·W10 ·W13 +W3 ·W5 ·W7 · W10 ·W13 +W2 ·W3 ·W7 ·W10 ·W13 +W1 ·W3 ·W7 ·W10 ·W13 +W3 ·W5 ·W7 · W10 ·W11 ·W14 ·W17 +W2 ·W3 ·W7 ·W9 ·W11 ·W14 ·W17 +W1 ·W3 ·W7 ·W10 · W11 ·W14 ·W17 +W1 ·W3 ·W6 ·W7 ·W11 ·W14 ·W17

P18 =W2·W3·W5+W3·W6·W7·W9+W2·W6·W7·W9+W3·W6·W10· W11 ·W13 +W3 ·W5 ·W7 ·W10 ·W11 ·W13 +W2 ·W7 ·W10 ·W11 ·W13 +W1 ·W3 · W7 ·W10 ·W11 ·W13 +W3 ·W6 ·W9 ·W11 ·W14 ·W15 ·W17 +W3 ·W5 ·W7 ·W10 ·W14 · W15 ·W17 +W2 ·W7 ·W9 ·W11 ·W14 ·W15 ·W17 +W2 ·W5 ·W7 ·W11 ·W14 ·W15 · W17 +W1 ·W3 ·W7 ·W10 ·W14 ·W15 ·W17 +W1 ·W3 ·W6 ·W11 ·W14 ·W15 ·W17

P19 =W1·W3·W5+W3·W5·W7·W9+W2·W5·W7·W9+W3·W6·W9·W11· W13 +W2 ·W7 ·W9 ·W11 ·W13 +W3 ·W6 ·W10 ·W11 ·W13 ·W15 ·W17 +W3 ·W6 ·W7 · W10 ·W13 ·W15 ·W17 +W2 ·W7 ·W10 ·W11 ·W13 ·W15 ·W17 +W2 ·W6 ·W7 ·W11 ·W13 · W15 ·W17 +W2 ·W3 ·W7 ·W10 ·W13 ·W15 ·W17 +W2 ·W3 ·W6 ·W11 ·W13 ·W15 ·W17

P20 =W1 ·W3 ·W7 +W1 ·W3 ·W6 +W1 ·W3 ·W5

P21 =W3·W5·W7+W2·W7·W9·W11+W1·W7·W9·W11+W2·W6·W7· W11 ·W13 ·W15 +W2 ·W5 ·W11 ·W13 ·W15 +W2 ·W3 ·W6 ·W11 ·W13 ·W15 +W1 · W6 ·W11 ·W13 ·W15 +W2 ·W6 ·W7 ·W9 ·W15 ·W17 ·W19 +W2 ·W5 ·W10 ·W11 · W15 ·W17 ·W19 +W2 ·W3 ·W6 ·W9 ·W15 ·W17 ·W19 +W2 ·W3 ·W5 ·W10 ·W15 ·W17 · W19 +W1 ·W6 ·W10 ·W11 ·W15 ·W17 ·W19 +W1 ·W6 ·W7 ·W10 ·W15 ·W17 ·W19

P22 =W2·W5·W7+W3·W6·W9·W11+W1·W6·W9·W11+W3·W6·W7· W10 ·W13 ·W15 +W3 ·W5 ·W10 ·W13 ·W15 +W2 ·W3 ·W7 ·W10 ·W13 ·W15 +W1 · W7 ·W10 ·W13 ·W15 +W3 ·W6 ·W7 ·W9 ·W14 ·W17 ·W19 +W3 ·W5 ·W10 ·W11 ·W14 ·W17 ·W19 +W2 ·W3 ·W7 ·W9 ·W14 ·W17 ·W19 +W2 ·W3 ·W5 ·W11 ·W14 ·W17 · W19 +W1 ·W7 ·W10 ·W11 ·W14 ·W17 ·W19 +W1 ·W6 ·W7 ·W11 ·W14 ·W17 ·W19

P23 = W2·W3·W5·W7+W3·W6·W7·W9·W11+W2·W6·W7·W9·W11+W1·W6· W7 ·W9 ·W11 +W3 ·W6 ·W10 ·W11 ·W13 ·W15 +W3 ·W5 ·W10 ·W11 ·W13 ·W15 +W2 ·W7 · W10 ·W11 ·W13 ·W15 +W2 ·W5 ·W10 ·W11 ·W13 ·W15 +W1 ·W7 ·W10 ·W11 ·W13 ·W15 + W1 ·W6 ·W10 ·W11 ·W13 ·W15 +W3 ·W6 ·W9 ·W14 ·W15 ·W17 ·W19 +W3 ·W5 ·W10 · W14 ·W15 ·W17 ·W19 +W2 ·W7 ·W9 ·W14 ·W15 ·W17 ·W19 +W2 ·W5 ·W11 ·W14 ·W15 · W17 ·W19 +W1 ·W7 ·W10 ·W14 ·W15 ·W17 ·W19 +W1 ·W6 ·W11 ·W14 ·W15 ·W17 ·W19

P24 =W1·W5·W7+W3·W5·W9·W11+W2·W5·W9·W11+W3·W6·W9·W13· W15 +W2 ·W7 ·W9 ·W13 ·W15 +W3 ·W6 ·W10 ·W11 ·W13 ·W17 ·W19 +W3 ·W6 ·W7 · W10 ·W13 ·W17 ·W19 +W2 ·W7 ·W10 ·W11 ·W13 ·W17 ·W19 +W2 ·W6 ·W7 ·W11 ·W13 · W17 ·W19 +W2 ·W3 ·W7 ·W10 ·W13 ·W17 ·W19 +W2 ·W3 ·W6 ·W11 ·W13 ·W17 ·W19

An additional stage of processing to be invoked before this will apply the optimization of summarizing any H and/or Pauli operations being applied multiple times in succession. Each adjacent nibble in the operation bits of the instruction word will be compared using a simple 4-bit comparator circuit. If any adjacent nibbles match, then they will be removed by shifting the contents of the operation bits to the right of the matching operations into the registers of the adjacent nibbles. This is very easy to achieve using a simple digital processor with a bit shift function. It is also implementable using strictly combinational logic. Let the output of such a combinational circuit be the bit string C. The optimization of the combinational version was found using the following code.

def print_comb_table():
    # Enumerate every instruction word of 2 to 5 operation nibbles (with
    # repetition), cancel adjacent identical nibbles, and print the resulting
    # truth table rows. Returns [instruction bits, combined bits].
    i = 0
    all_instrs = []
    all_combs = []
    for length in [2, 3, 4, 5]:
        for word in itertools.product(['0001 ', '0010 ', '0011 ',
                                       '0100 ', '0101 '], repeat=length):
            i += 1
            instr_bits = ''.join(word)
            comb_bits = ''
            match = True
            zeroes = False
            while match == True and zeroes == False:
                n = 0
                while n < len(word):
                    match = False
                    nibble = word[n]
                    if len(word) > n + 1:
                        next_nibble = word[n + 1]
                        if next_nibble == nibble and nibble != '0000 ':
                            match = True
                            word = word[0:n] + word[n + 2:] + ('0000 ', '0000 ')
                            n = -1
                    n += 1
                zeroes = True
                for nibble in word:
                    if nibble != '0000 ':
                        zeroes = False

            comb_bits = ''.join(word)
            instr_bits = instr_bits + ''.join(['0000 ' for l in range(5 - length)])
            comb_bits = comb_bits + ''.join(['0000 ' for l in range(5 - length)])
            all_instrs.append(instr_bits)
            all_combs.append(comb_bits)

            print("{0} & {1}".format(instr_bits, comb_bits) + r" \\" + "\n" + r"\hline")

    return [all_instrs, all_combs]

def solve_comb_table(instrs, combs):
    expressions = [[] for comb in range(25)]
    minterms = [[] for comb in range(25)]

    for instr_index in range(len(instrs)):
        instr_bits = instrs[instr_index]
        comb_bits = combs[instr_index]

        for c in range(len(comb_bits)):
            if comb_bits[c] == '1':
                boolean_exp = ''
                operations = instr_bits.split(' ')
                j = -1
                for o in range(len(operations)):
                    for b in operations[o]:
                        j += 1
                        word_bit = ' W_{{{0}}}'.format(j)
                        if b == '1':
                            if boolean_exp != '':
                                boolean_exp += r' \cdot' + word_bit
                            else:
                                boolean_exp += word_bit

                if not boolean_exp in expressions[c]:
                    expressions[c].append(boolean_exp)
                    minterms[c].append(instr_bits)

    # Print the raw sum-of-products expressions.
    for k in range(len(expressions)):
        comb_bit = 'C_{0} ='.format(k)
        comb_bit_exps = expressions[k]
        comb_bit_total_exp = comb_bit + ' + '.join(comb_bit_exps)
        print(comb_bit_total_exp)

    # Simplify by repeatedly removing terms absorbed by other terms.
    has_deps = True

    while has_deps == True:
        has_deps = False
        dup_dependencies = [{} for comb in range(25)]
        for l in range(len(minterms)):
            for term in range(len(minterms[l])):
                one_term_bits = []
                for term_bit in range(len(minterms[l][term])):
                    if minterms[l][term][term_bit] == '1':
                        one_term_bits.append(term_bit)
                for other_minterm in minterms[l][0:term] + minterms[l][term + 1:]:
                    dup = True
                    for bit_index in one_term_bits:
                        if (bit_index > len(other_minterm) or
                                other_minterm[bit_index] == '0'):
                            dup = False
                    if dup == True:
                        if not term in dup_dependencies[l].keys():
                            dup_dependencies[l][term] = [minterms[l].index(other_minterm)]
                        else:
                            dup_dependencies[l][term].append(minterms[l].index(other_minterm))

        removal_candidates = [[] for index in range(len(minterms))]
        for c in range(len(dup_dependencies)):
            for opt_term_index in dup_dependencies[c].keys():
                for rm_term in dup_dependencies[c][opt_term_index]:
                    if not rm_term in dup_dependencies[c].keys():
                        if not opt_term_index in removal_candidates[c]:
                            removal_candidates[c].append(rm_term)

        # Filter out the redundant terms found this pass.
        optimal_expressions = [[] for index in range(len(expressions))]
        optimal_minterms = [[] for index in range(len(minterms))]
        removed = 0
        for k in range(len(expressions)):
            for q in range(len(expressions[k])):
                if q in removal_candidates[k]:
                    print('Redundant term {0} removed from C_{1} expression'.format(expressions[k][q], k))
                    removed += 1
                    continue
                optimal_expressions[k].append(expressions[k][q])
                optimal_minterms[k].append(minterms[k][q])
        expressions = optimal_expressions
        minterms = optimal_minterms
        if removed != 0:
            has_deps = True
            print('pass removed {0} terms'.format(removed))

    # Print the optimized expressions.
    for e in range(len(expressions)):
        comb_bit = 'C_{0} ='.format(e)
        comb_bit_exps = expressions[e]
        comb_bit_total_exp = comb_bit + ' + '.join(comb_bit_exps)
        print(comb_bit_total_exp)

solve_comb_table(*print_comb_table())

C1 = W1·W7+W1·W6+W1·W3·W5+W3·W7·W9+W2·W6·W9+W1·W5·W9+ W3 ·W7 ·W11 ·W15 ·W17 +W3 ·W7 ·W10 ·W14 ·W17 +W3 ·W6 ·W10 ·W15 ·W17 +W3 ·W5 · W9 ·W15 ·W17 +W2 ·W7 ·W11 ·W14 ·W17 +W2 ·W6 ·W11 ·W15 ·W17 +W2 ·W6 ·W10 · W14 ·W17 +W2 ·W5 ·W9 ·W14 ·W17 +W1 ·W5 ·W11 ·W15 ·W17 +W1 ·W5 ·W10 ·W14 ·W17

C2 = W2·W7+W2·W5+W2·W3·W6+W3·W7·W10+W2·W6·W10+W1·W5· W10 +W3 ·W7 ·W11 ·W15 ·W18 +W3 ·W7 ·W9 ·W13 ·W18 +W3 ·W6 ·W10 ·W15 ·W18 +W3 · W5 ·W9 ·W15 ·W18 +W2 ·W6 ·W11 ·W15 ·W18 +W2 ·W6 ·W9 ·W13 ·W18 +W1 ·W7 ·W11 · W13 ·W18 +W1 ·W6 ·W10 ·W13 ·W18 +W1 ·W5 ·W11 ·W15 ·W18 +W1 ·W5 ·W9 ·W13 ·W18

C3 =W3 ·W6 +W3 ·W5 +W2 ·W3 ·W7 +W1 ·W3 ·W7 +W3 ·W7 ·W11 + W2 ·W6 ·W11 +W1 ·W5 ·W11 +W3 ·W7 ·W10 ·W14 ·W19 +W3 ·W7 ·W9 ·W13 · W19 +W2 ·W7 ·W11 ·W14 ·W19 +W2 ·W6 ·W10 ·W14 ·W19 +W2 ·W6 ·W9 ·W13 · W19 +W2 ·W5 ·W9 ·W14 ·W19 +W1 ·W7 ·W11 ·W13 ·W19 +W1 ·W6 ·W10 ·W13 · W19 +W1 ·W5 ·W10 ·W14 ·W19 +W1 ·W5 ·W9 ·W13 ·W19

C6 =W3·W5+W2·W5+W1·W5·W7+W3·W7·W11·W13+W3·W7·W10·W13+ W3 ·W7 ·W9 ·W13 ·W15 +W3 ·W6 ·W10 ·W13 +W2 ·W7 ·W11 ·W13 +W2 ·W6 ·W11 · W13 +W2 ·W6 ·W10 ·W13 +W2 ·W6 ·W9 ·W13 ·W15 +W1 ·W7 ·W11 ·W13 ·W15 +W1 · W6 ·W10 ·W13 ·W15 +W1 ·W5 ·W11 ·W13 +W1 ·W5 ·W10 ·W13 +W1 ·W5 ·W9 ·W13 ·W15

C7 =W3·W6+W2·W6·W7+W1·W6+W3·W7·W11·W14+W3·W7·W10·W14·W15 +W3 ·W7 ·W9 ·W14 +W3 ·W5 ·W9 ·W14 +W2 ·W7 ·W11 ·W14 ·W15 +W2 ·W6 · W11 ·W14 +W2 ·W6 ·W10 ·W14 ·W15 +W2 ·W6 ·W9 ·W14 +W2 ·W5 ·W9 ·W14 ·W15 + W1 ·W7 ·W11 ·W14 +W1 ·W5 ·W11 ·W14 +W1 ·W5 ·W10 ·W14 ·W15 +W1 ·W5 ·W9 ·W14

C8 =W3·W6·W7+W3·W5·W7+W2·W7+W1·W7+W3·W7·W11·W14· W15 +W3 ·W7 ·W11 ·W13 ·W15 +W3 ·W7 ·W10 ·W15 +W3 ·W7 ·W9 ·W15 +W3 · W6 ·W10 ·W14 ·W15 +W3 ·W6 ·W10 ·W13 ·W15 +W3 ·W5 ·W9 ·W14 ·W15 +W3 · W5 ·W9 ·W13 ·W15 +W2 ·W6 ·W11 ·W14 ·W15 +W2 ·W6 ·W11 ·W13 ·W15 +W2 ·W6 · W10 ·W15 +W2 ·W6 ·W9 ·W15 +W2 ·W5 ·W9 ·W15 +W1 ·W6 ·W10 ·W15 +W1 ·W5 · W11 ·W14 ·W15 +W1 ·W5 ·W11 ·W13 ·W15 +W1 ·W5 ·W10 ·W15 +W1 ·W5 ·W9 ·W15

C11 = W3·W6·W9+W3·W5·W9·W11+W3·W5·W7·W9+W2·W7·W9+W2·W5· W9 ·W11 +W1 ·W7 ·W9 +W1 ·W6 ·W9 +W3 ·W7 ·W11 ·W14 ·W17 +W3 ·W7 ·W11 ·W13 · W17 ·W19 +W3 ·W7 ·W11 ·W13 ·W15 ·W17 +W3 ·W7 ·W10 ·W15 ·W17 +W3 ·W7 ·W10 · W13 ·W17 ·W19 +W3 ·W7 ·W9 ·W15 ·W17 +W3 ·W7 ·W9 ·W14 ·W17 +W3 ·W6 ·W11 ·W15 · W17 +W3 ·W6 ·W10 ·W14 ·W17 +W3 ·W6 ·W10 ·W13 ·W17 ·W19 +W3 ·W6 ·W10 ·W13 ·W15 · W17 +W3 ·W5 ·W11 ·W15 ·W17 ·W19 +W3 ·W5 ·W10 ·W14 ·W17 ·W19 +W3 ·W5 ·W9 ·W14 · W17 +W3 ·W5 ·W9 ·W13 ·W17 ·W19 +W3 ·W5 ·W9 ·W13 ·W15 ·W17 +W3 ·W5 ·W7 ·W11 · W15 ·W17 +W3 ·W5 ·W7 ·W10 ·W14 ·W17 +W2 ·W7 ·W11 ·W15 ·W17 +W2 ·W7 ·W11 ·W13 · W17 ·W19 +W2 ·W7 ·W10 ·W14 ·W17 +W2 ·W6 ·W11 ·W14 ·W17 +W2 ·W6 ·W11 ·W13 ·W17 · W19 +W2 ·W6 ·W11 ·W13 ·W15 ·W17 +W2 ·W6 ·W10 ·W15 ·W17 +W2 ·W6 ·W10 ·W13 ·W17 · W19 +W2 ·W6 ·W9 ·W15 ·W17 +W2 ·W6 ·W9 ·W14 ·W17 +W2 ·W5 ·W11 ·W15 ·W17 ·W19 + W2 ·W5 ·W10 ·W14 ·W17 ·W19 +W2 ·W5 ·W9 ·W15 ·W17 +W2 ·W5 ·W9 ·W13 ·W17 ·W19 + W1 ·W7 ·W11 ·W15 ·W17 +W1 ·W7 ·W11 ·W14 ·W17 +W1 ·W7 ·W10 ·W14 ·W17 +W1 ·W6 · W11 ·W15 ·W17 +W1 ·W6 ·W10 ·W15 ·W17 +W1 ·W6 ·W10 ·W14 ·W17 +W1 ·W5 ·W11 ·W14 · W17 +W1 ·W5 ·W11 ·W13 ·W17 ·W19 +W1 ·W5 ·W11 ·W13 ·W15 ·W17 +W1 ·W5 ·W10 ·W15 · W17 +W1 ·W5 ·W10 ·W13 ·W17 ·W19 +W1 ·W5 ·W9 ·W15 ·W17 +W1 ·W5 ·W9 ·W14 ·W17

C12 = W3·W6·W10·W11+W3·W6·W7·W10+W3·W5·W10+W2·W7·W10+W2· W5 ·W10 +W1 ·W7 ·W10 +W1 ·W6 ·W10 ·W11 +W3 ·W7 ·W11 ·W14 ·W18 ·W19 +W3 ·W7 ·W11 ·W14 ·W15 ·W18 +W3 ·W7 ·W11 ·W13 ·W18 +W3 ·W7 ·W10 ·W15 ·W18 +W3 ·W7 ·W10 · W13 ·W18 +W3 ·W7 ·W9 ·W15 ·W18 +W3 ·W7 ·W9 ·W14 ·W18 ·W19 +W3 ·W6 ·W11 ·W15 · W18 ·W19 +W3 ·W6 ·W10 ·W14 ·W18 ·W19 +W3 ·W6 ·W10 ·W14 ·W15 ·W18 +W3 ·W6 ·W10 · W13 ·W18 +W3 ·W6 ·W9 ·W13 ·W18 ·W19 +W3 ·W6 ·W7 ·W11 ·W15 ·W18 +W3 ·W6 ·W7 · W9 ·W13 ·W18 +W3 ·W5 ·W11 ·W15 ·W18 +W3 ·W5 ·W9 ·W14 ·W18 ·W19 +W3 ·W5 ·W9 · W14 ·W15 ·W18 +W3 ·W5 ·W9 ·W13 ·W18 +W2 ·W7 ·W11 ·W15 ·W18 +W2 ·W7 ·W11 ·W13 · W18 +W2 ·W7 ·W9 ·W13 ·W18 +W2 ·W6 ·W11 ·W14 ·W18 ·W19 +W2 ·W6 ·W11 ·W14 ·W15 · W18 +W2 ·W6 ·W11 ·W13 ·W18 +W2 ·W6 ·W10 ·W15 ·W18 +W2 ·W6 ·W10 ·W13 ·W18 +W2 · W6 ·W9 ·W15 ·W18 +W2 ·W6 ·W9 ·W14 ·W18 ·W19 +W2 ·W5 ·W11 ·W15 ·W18 +W2 ·W5 · W9 ·W15 ·W18 +W2 ·W5 ·W9 ·W13 ·W18 +W1 ·W7 ·W11 ·W15 ·W18 +W1 ·W7 ·W11 ·W14 · W18 ·W19 +W1 ·W7 ·W9 ·W13 ·W18 +W1 ·W6 ·W11 ·W15 ·W18 ·W19 +W1 ·W6 ·W10 ·W15 · W18 +W1 ·W6 ·W10 ·W14 ·W18 ·W19 +W1 ·W6 ·W9 ·W13 ·W18 ·W19 +W1 ·W5 ·W11 ·W14 · W18 ·W19 +W1 ·W5 ·W11 ·W14 ·W15 ·W18 +W1 ·W5 ·W11 ·W13 ·W18 +W1 ·W5 ·W10 ·W15 · W18 +W1 ·W5 ·W10 ·W13 ·W18 +W1 ·W5 ·W9 ·W15 ·W18 +W1 ·W5 ·W9 ·W14 ·W18 ·W19

C13 = W3·W6·W11+W3·W5·W11+W2·W7·W10·W11+W2·W7·W9·W11+W2· W6 ·W7 ·W11 +W2 ·W5 ·W11 +W1 ·W7 ·W10 ·W11 +W1 ·W7 ·W9 ·W11 +W1 ·W6 ·W11 + W1 ·W5 ·W7 ·W11 +W3 ·W7 ·W11 ·W14 ·W19 +W3 ·W7 ·W11 ·W13 ·W19 +W3 ·W7 ·W10 · W15 ·W18 ·W19 +W3 ·W7 ·W10 ·W15 ·W17 ·W19 +W3 ·W7 ·W10 ·W14 ·W15 ·W19 +W3 · W7 ·W10 ·W13 ·W19 +W3 ·W7 ·W9 ·W15 ·W18 ·W19 +W3 ·W7 ·W9 ·W15 ·W17 ·W19 + W3 ·W7 ·W9 ·W14 ·W19 +W3 ·W7 ·W9 ·W13 ·W15 ·W19 +W3 ·W6 ·W10 ·W14 ·W19 +W3 · W6 ·W10 ·W13 ·W19 +W3 ·W6 ·W9 ·W13 ·W19 +W3 ·W5 ·W10 ·W14 ·W19 +W3 ·W5 ·W9 · W14 ·W19 +W3 ·W5 ·W9 ·W13 ·W19 +W2 ·W7 ·W11 ·W15 ·W18 ·W19 +W2 ·W7 ·W11 · W15 ·W17 ·W19 +W2 ·W7 ·W11 ·W14 ·W15 ·W19 +W2 ·W7 ·W11 ·W13 ·W19 +W2 ·W7 · W10 ·W14 ·W18 ·W19 +W2 ·W7 ·W10 ·W14 ·W17 ·W19 +W2 ·W7 ·W9 ·W13 ·W18 ·W19 + W2 ·W7 ·W9 ·W13 ·W17 ·W19 +W2 ·W6 ·W11 ·W14 ·W19 +W2 ·W6 ·W11 ·W13 ·W19 + W2 ·W6 ·W10 ·W15 ·W18 ·W19 +W2 ·W6 ·W10 ·W15 ·W17 ·W19 +W2 ·W6 ·W10 ·W14 · W15 ·W19 +W2 ·W6 ·W10 ·W13 ·W19 +W2 ·W6 ·W9 ·W15 ·W18 ·W19 +W2 ·W6 ·W9 ·W15 ·W17 ·W19 +W2 ·W6 ·W9 ·W14 ·W19 +W2 ·W6 ·W9 ·W13 ·W15 ·W19 +W2 ·W6 ·W7 ·W10 · W14 ·W19 +W2 ·W6 ·W7 ·W9 ·W13 ·W19 +W2 ·W5 ·W10 ·W14 ·W19 +W2 ·W5 ·W9 ·W15 · W18 ·W19 +W2 ·W5 ·W9 ·W15 ·W17 ·W19 +W2 ·W5 ·W9 ·W14 ·W15 ·W19 +W2 ·W5 · W9 ·W13 ·W19 +W1 ·W7 ·W11 ·W15 ·W18 ·W19 +W1 ·W7 ·W11 ·W15 ·W17 ·W19 +W1 · W7 ·W11 ·W14 ·W19 +W1 ·W7 ·W11 ·W13 ·W15 ·W19 +W1 ·W7 ·W10 ·W14 ·W18 ·W19 + W1 ·W7 ·W10 ·W14 ·W17 ·W19 +W1 ·W7 ·W9 ·W13 ·W18 ·W19 +W1 ·W7 ·W9 ·W13 ·W17 · W19 +W1 ·W6 ·W10 ·W15 ·W18 ·W19 +W1 ·W6 ·W10 ·W15 ·W17 ·W19 +W1 ·W6 ·W10 · W14 ·W19 +W1 ·W6 ·W10 ·W13 ·W15 ·W19 +W1 ·W6 ·W9 ·W13 ·W19 +W1 ·W5 ·W11 · W14 ·W19 +W1 ·W5 ·W11 ·W13 ·W19 +W1 ·W5 ·W10 ·W15 ·W18 ·W19 +W1 ·W5 ·W10 · W15 ·W17 ·W19 +W1 ·W5 ·W10 ·W14 ·W15 ·W19 +W1 ·W5 ·W10 ·W13 ·W19 +W1 ·W5 · W9 ·W15 ·W18 ·W19 +W1 ·W5 ·W9 ·W15 ·W17 ·W19 +W1 ·W5 ·W9 ·W14 ·W19 +W1 · W5 ·W9 ·W13 ·W15 ·W19 +W1 ·W5 ·W7 ·W10 ·W14 ·W19 +W1 ·W5 ·W7 ·W9 ·W13 ·W19

C16 =W3·W6·W11·W13+W3·W6·W9·W13·W15+W3·W6·W7·W10·W13+W3· W5 ·W11 ·W13 +W3 ·W5 ·W10 ·W13 +W3 ·W5 ·W7 ·W9 ·W13 ·W15 +W2 ·W7 ·W10 ·W13 + W2 ·W7 ·W9 ·W13 ·W15 +W2 ·W7 ·W9 ·W11 ·W13 +W2 ·W6 ·W7 ·W11 ·W13 +W2 ·W5 · W11 ·W13 +W2 ·W5 ·W10 ·W13 +W1 ·W7 ·W10 ·W13 +W1 ·W7 ·W9 ·W13 ·W15 +W1 ·W7 · W9 ·W11 ·W13 +W1 ·W6 ·W11 ·W13 +W1 ·W6 ·W9 ·W13 ·W15 +W1 ·W5 ·W7 ·W11 ·W13

C17 =W3 ·W6 ·W11 ·W14 +W3 ·W6 ·W9 ·W14 +W3 ·W6 ·W7 ·W10 ·W14 · W15 +W3 ·W5 ·W11 ·W14 +W3 ·W5 ·W10 ·W14 ·W15 +W3 ·W5 ·W7 ·W9 ·W14 + W2 ·W7 ·W10 ·W14 ·W15 +W2 ·W7 ·W10 ·W11 ·W14 +W2 ·W7 ·W9 ·W14 +W2 · W6 ·W7 ·W11 ·W14 +W2 ·W5 ·W11 ·W14 +W2 ·W5 ·W10 ·W14 ·W15 +W1 ·W7 · W10 ·W14 ·W15 +W1 ·W7 ·W10 ·W11 ·W14 +W1 ·W7 ·W9 ·W14 +W1 ·W6 ·W11 · W14 +W1 ·W6 ·W9 ·W14 +W1 ·W5 ·W7 ·W11 ·W14

C18 =W3·W6·W11·W14·W15+W3·W6·W11·W13·W15+W3·W6·W10·W11· W15 +W3 ·W6 ·W9 ·W15 +W3 ·W6 ·W7 ·W10 ·W15 +W3 ·W5 ·W11 ·W14 ·W15 +W3 · W5 ·W11 ·W13 ·W15 +W3 ·W5 ·W10 ·W15 +W3 ·W5 ·W9 ·W11 ·W15 +W3 ·W5 ·W7 · W9 ·W15 +W2 ·W7 ·W10 ·W15 +W2 ·W7 ·W9 ·W15 +W2 ·W6 ·W7 ·W11 ·W14 ·W15 +W2 ·W6 ·W7 ·W11 ·W13 ·W15 +W2 ·W5 ·W11 ·W14 ·W15 +W2 ·W5 ·W11 ·W13 ·W15 + W2 ·W5 ·W10 ·W15 +W2 ·W5 ·W9 ·W11 ·W15 +W1 ·W7 ·W10 ·W15 +W1 ·W7 ·W9 · W15 +W1 ·W6 ·W11 ·W14 ·W15 +W1 ·W6 ·W11 ·W13 ·W15 +W1 ·W6 ·W10 ·W11 ·W15 + W1 ·W6 ·W9 ·W15 +W1 ·W5 ·W7 ·W11 ·W14 ·W15 +W1 ·W5 ·W7 ·W11 ·W13 ·W15

C21 = W3·W6·W11·W14·W17+W3·W6·W11·W13·W17·W19+W3·W6·W11·W13· W15 ·W17 +W3 ·W6 ·W10 ·W11 ·W15 ·W17 +W3 ·W6 ·W9 ·W15 ·W17 +W3 ·W6 ·W9 ·W14 · W17 +W3 ·W6 ·W7 ·W10 ·W15 ·W17 +W3 ·W6 ·W7 ·W10 ·W13 ·W17 ·W19 +W3 ·W5 ·W11 · W14 ·W17 +W3 ·W5 ·W11 ·W13 ·W17 ·W19 +W3 ·W5 ·W11 ·W13 ·W15 ·W17 +W3 ·W5 · W10 ·W15 ·W17 +W3 ·W5 ·W10 ·W13 ·W17 ·W19 +W3 ·W5 ·W9 ·W11 ·W15 ·W17 +W3 ·W5 · W7 ·W9 ·W15 ·W17 +W3 ·W5 ·W7 ·W9 ·W14 ·W17 +W2 ·W7 ·W10 ·W15 ·W17 +W2 ·W7 · W10 ·W13 ·W17 ·W19 +W2 ·W7 ·W10 ·W11 ·W14 ·W17 +W2 ·W7 ·W9 ·W15 ·W17 +W2 ·W7 · W9 ·W14 ·W17 +W2 ·W7 ·W9 ·W11 ·W13 ·W17 ·W19 +W2 ·W6 ·W7 ·W11 ·W14 ·W17 +W2 · W6 ·W7 ·W11 ·W13 ·W17 ·W19 +W2 ·W6 ·W7 ·W11 ·W13 ·W15 ·W17 +W2 ·W5 ·W11 ·W14 · W17 +W2 ·W5 ·W11 ·W13 ·W17 ·W19 +W2 ·W5 ·W11 ·W13 ·W15 ·W17 +W2 ·W5 ·W10 · W15 ·W17 +W2 ·W5 ·W10 ·W13 ·W17 ·W19 +W2 ·W5 ·W9 ·W11 ·W15 ·W17 +W1 ·W7 ·W10 · W15 ·W17 +W1 ·W7 ·W10 ·W13 ·W17 ·W19 +W1 ·W7 ·W10 ·W11 ·W14 ·W17 +W1 ·W7 ·W9 · W15 ·W17 +W1 ·W7 ·W9 ·W14 ·W17 +W1 ·W7 ·W9 ·W11 ·W13 ·W17 ·W19 +W1 ·W6 ·W11 · W14 ·W17 +W1 ·W6 ·W11 ·W13 ·W17 ·W19 +W1 ·W6 ·W11 ·W13 ·W15 ·W17 +W1 ·W6 · W10 ·W11 ·W15 ·W17 +W1 ·W6 ·W9 ·W15 ·W17 +W1 ·W6 ·W9 ·W14 ·W17 +W1 ·W5 ·W7 · W11 ·W14 ·W17 +W1 ·W5 ·W7 ·W11 ·W13 ·W17 ·W19 +W1 ·W5 ·W7 ·W11 ·W13 ·W15 ·W17

C22 =W3·W6·W11·W14·W18·W19+W3·W6·W11·W14·W15·W18+W3·W6· W11 ·W13 ·W18 +W3 ·W6 ·W10 ·W11 ·W15 ·W18 +W3 ·W6 ·W9 ·W15 ·W18 +W3 ·W6 ·W9 · W14 ·W18 ·W19 +W3 ·W6 ·W7 ·W10 ·W15 ·W18 +W3 ·W6 ·W7 ·W10 ·W13 ·W18 +W3 ·W5 · W11 ·W14 ·W18 ·W19 +W3 ·W5 ·W11 ·W14 ·W15 ·W18 +W3 ·W5 ·W11 ·W13 ·W18 +W3 · W5 ·W10 ·W15 ·W18 +W3 ·W5 ·W10 ·W13 ·W18 +W3 ·W5 ·W9 ·W11 ·W15 ·W18 +W3 ·W5 · W7 ·W9 ·W15 ·W18 +W3 ·W5 ·W7 ·W9 ·W14 ·W18 ·W19 +W2 ·W7 ·W10 ·W15 ·W18 +W2 · W7 ·W10 ·W13 ·W18 +W2 ·W7 ·W10 ·W11 ·W14 ·W18 ·W19 +W2 ·W7 ·W9 ·W15 ·W18 +W2 ·W7 ·W9 ·W14 ·W18 ·W19 +W2 ·W7 ·W9 ·W11 ·W13 ·W18 +W2 ·W6 ·W7 ·W11 ·W14 ·W18 · W19 +W2 ·W6 ·W7 ·W11 ·W14 ·W15 ·W18 +W2 ·W6 ·W7 ·W11 ·W13 ·W18 +W2 ·W5 ·W11 · W14 ·W18 ·W19 +W2 ·W5 ·W11 ·W14 ·W15 ·W18 +W2 ·W5 ·W11 ·W13 ·W18 +W2 ·W5 ·W10 · W15 ·W18 +W2 ·W5 ·W10 ·W13 ·W18 +W2 ·W5 ·W9 ·W11 ·W15 ·W18 +W1 ·W7 ·W10 ·W15 · W18 +W1 ·W7 ·W10 ·W13 ·W18 +W1 ·W7 ·W10 ·W11 ·W14 ·W18 ·W19 +W1 ·W7 ·W9 ·W15 · W18 +W1 ·W7 ·W9 ·W14 ·W18 ·W19 +W1 ·W7 ·W9 ·W11 ·W13 ·W18 +W1 ·W6 ·W11 ·W14 · W18 ·W19 +W1 ·W6 ·W11 ·W14 ·W15 ·W18 +W1 ·W6 ·W11 ·W13 ·W18 +W1 ·W6 ·W10 · W11 ·W15 ·W18 +W1 ·W6 ·W9 ·W15 ·W18 +W1 ·W6 ·W9 ·W14 ·W18 ·W19 +W1 ·W5 ·W7 · W11 ·W14 ·W18 ·W19 +W1 ·W5 ·W7 ·W11 ·W14 ·W15 ·W18 +W1 ·W5 ·W7 ·W11 ·W13 ·W18

C23 =W3·W6·W11·W14·W19+W3·W6·W11·W13·W19+W3·W6·W10·W11· W15 ·W18 ·W19 +W3 ·W6 ·W10 ·W11 ·W15 ·W17 ·W19 +W3 ·W6 ·W9 ·W15 ·W18 ·W19 + W3 ·W6 ·W9 ·W15 ·W17 ·W19 +W3 ·W6 ·W9 ·W14 ·W19 +W3 ·W6 ·W9 ·W13 ·W15 ·W19 + W3 ·W6 ·W7 ·W10 ·W15 ·W18 ·W19 +W3 ·W6 ·W7 ·W10 ·W15 ·W17 ·W19 +W3 ·W6 ·W7 · W10 ·W14 ·W15 ·W19 +W3 ·W6 ·W7 ·W10 ·W13 ·W19 +W3 ·W5 ·W11 ·W14 ·W19 +W3 · W5 ·W11 ·W13 ·W19 +W3 ·W5 ·W10 ·W15 ·W18 ·W19 +W3 ·W5 ·W10 ·W15 ·W17 ·W19 + W3 ·W5 ·W10 ·W14 ·W15 ·W19 +W3 ·W5 ·W10 ·W13 ·W19 +W3 ·W5 ·W9 ·W11 ·W15 ·W18 · W19 +W3 ·W5 ·W9 ·W11 ·W15 ·W17 ·W19 +W3 ·W5 ·W7 ·W9 ·W15 ·W18 ·W19 +W3 ·W5 · W7 ·W9 ·W15 ·W17 ·W19 +W3 ·W5 ·W7 ·W9 ·W14 ·W19 +W3 ·W5 ·W7 ·W9 ·W13 ·W15 · W19 +W2 ·W7 ·W10 ·W15 ·W18 ·W19 +W2 ·W7 ·W10 ·W15 ·W17 ·W19 +W2 ·W7 ·W10 · W14 ·W15 ·W19 +W2 ·W7 ·W10 ·W13 ·W19 +W2 ·W7 ·W10 ·W11 ·W14 ·W19 +W2 ·W7 ·W9 · W15 ·W18 ·W19 +W2 ·W7 ·W9 ·W15 ·W17 ·W19 +W2 ·W7 ·W9 ·W14 ·W19 +W2 ·W7 ·W9 · W13 ·W15 ·W19 +W2 ·W7 ·W9 ·W11 ·W13 ·W19 +W2 ·W6 ·W7 ·W11 ·W14 ·W19 +W2 ·W6 · W7 ·W11 ·W13 ·W19 +W2 ·W5 ·W11 ·W14 ·W19 +W2 ·W5 ·W11 ·W13 ·W19 +W2 ·W5 ·W10 · W15 ·W18 ·W19 +W2 ·W5 ·W10 ·W15 ·W17 ·W19 +W2 ·W5 ·W10 ·W14 ·W15 ·W19 +W2 · W5 ·W10 ·W13 ·W19 +W2 ·W5 ·W9 ·W11 ·W15 ·W18 ·W19 +W2 ·W5 ·W9 ·W11 ·W15 ·W17 · W19 +W1 ·W7 ·W10 ·W15 ·W18 ·W19 +W1 ·W7 ·W10 ·W15 ·W17 ·W19 +W1 ·W7 ·W10 ·W14 · W15 ·W19 +W1 ·W7 ·W10 ·W13 ·W19 +W1 ·W7 ·W10 ·W11 ·W14 ·W19 +W1 ·W7 ·W9 ·W15 ·W18 ·W19 +W1 ·W7 ·W9 ·W15 ·W17 ·W19 +W1 ·W7 ·W9 ·W14 ·W19 +W1 ·W7 ·W9 ·W13 · W15 ·W19 +W1 ·W7 ·W9 ·W11 ·W13 ·W19 +W1 ·W6 ·W11 ·W14 ·W19 +W1 ·W6 ·W11 ·W13 · W19 +W1 ·W6 ·W10 ·W11 ·W15 ·W18 ·W19 +W1 ·W6 ·W10 ·W11 ·W15 ·W17 ·W19 +W1 · W6 ·W9 ·W15 ·W18 ·W19 +W1 ·W6 ·W9 ·W15 ·W17 ·W19 +W1 ·W6 ·W9 ·W14 ·W19 +W1 · W6 ·W9 ·W13 ·W15 ·W19 +W1 ·W5 ·W7 ·W11 ·W14 ·W19 +W1 ·W5 ·W7 ·W11 ·W13 ·W19

Notice that these combinational circuits' combined function is to take a set of operations meant to be applied to one set of states and decode the instruction containing their bitcode identifiers such that they are optimally executed. However, we have not yet addressed how the decode step of the pipeline will know which set of pure states a series of operations is meant to be applied to. So, we must use the qubit identifiers in the instruction word, the "instruction target qubits' bits (T)", to identify the overlaps of targeted pure states.

It is also true that the analog modules for each quantum gate's application to each qubit will either be different or take a parameter identifying the target qubit(s). This is true since gates will have different effects on the overall state depending on the target(s) of their application. For example, the X gate will flip the mixed state within the maintained probability simplex over a different plane or hyperplane passing through the maximally mixed state for each different set of target qubits. The case where the modules take parameters is far preferable to the alternative, so we will assume that we can set a voltage in order to target a qubit at the analog gate level. In that case, this parameter voltage must be set before applying a set of gates that may occur together to a qubit, and it may be connected in parallel to each analog module's circuits. For gates that cannot take such a parameter, we will assume that the operation is considered a different operation altogether with a different module address, in which case it will not cause conflicts within this decode logic and will be addressed later.

So, we will endeavour to group consecutive operations by qubit in order to optimize their execution. Let a full instruction word contain more than one set of 5 operation and target qubit codes. Let it be assumed that an instruction word will be present in a set of registers close to the decode logic at the beginning of the decode step. Then a task of the decoder prior to the combinational logic we have defined will be to parse the groups of adjacent operations in the word with common qubits into groups of at most (and ideally) 5.
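As a rough illustration of this grouping pass, here is a minimal Python sketch. It assumes the long instruction word has already been unpacked into a list of (operation code, target qubit) pairs; the function and variable names are illustrative only and not part of the hardware design.

def group_operations(pairs, group_size=5):
    """Collect runs of adjacent operations that share a target qubit.

    `pairs` is a list of (op_code, target_qubit) tuples in program order.
    Runs are cut into groups of at most `group_size`, matching the
    5-operation capacity of one combinational decoder instance.
    """
    groups = []
    current_qubit, current_ops = None, []
    for op_code, target in pairs:
        # Start a new group when the target changes or the group is full.
        if target != current_qubit or len(current_ops) == group_size:
            if current_ops:
                groups.append((current_qubit, current_ops))
            current_qubit, current_ops = target, []
        current_ops.append(op_code)
    if current_ops:
        groups.append((current_qubit, current_ops))
    return groups

# Two adjacent operations on qubit 0 followed by three on qubit 3 become two groups:
# group_operations([('0001 ', 0), ('0001 ', 0), ('0011 ', 3), ('0100 ', 3), ('0100 ', 3)])
# -> [(0, ['0001 ', '0001 ']), (3, ['0011 ', '0100 ', '0100 '])]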

These groups will then each be passed to a separate instance of the combinational decoder logic, one of which will exist for each of the 20 qubits. We can duplicate each operation module so that each copy receives its own target qubit parameter voltage, allowing the copies to be piped together, one set per qubit. That leaves us with 5 × 20 = 100 total analog modules across X, Y, Z, H and M. The sets of 5 analog modules should each be constantly piped, as every set should always be in use as long as there are enough operations in the long instruction word. When no qubit is selected, the set should simply be bypassed using a single threshold-triggered transistor or relay.

This suggests that our full instruction word should contain 20 sets of 5 operations: 4 × 5 × 20 = 400 operation bits (W) plus 5 × 5 × 20 = 500 target qubit identifier bits (T), for a total of 900 bits.
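The bit budget can be tallied directly; this trivial sketch just restates the arithmetic above as named constants.

OP_BITS = 4         # one 4-bit code per operation (X, Y, Z, H or M)
TARGET_BITS = 5     # 5 bits are enough to address any of the 20 qubits
OPS_PER_GROUP = 5   # operations handled by one combinational decoder
GROUPS = 20         # one group of operations per qubit

operation_bits = OP_BITS * OPS_PER_GROUP * GROUPS      # 400 bits (W)
target_bits = TARGET_BITS * OPS_PER_GROUP * GROUPS     # 500 bits (T)
instruction_word_bits = operation_bits + target_bits   # 900 bits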

We must also be prepared to consider the addition of more hybrid analog / digital modules to the system in the future. The architecture’s capacity for additional sets of modules must not delay the decode instruction if these extra sets are not installed, and it must be possible to extend the modules of the system programmatically. This extension must not be defined in hardware with any hard limitations. For now, it will suffice to say that the output of the final piped module set can be programmatically multiplexed to either output directly to the HSAM, or to output to an analog output pin. The input voltage to the HSAM must also be exposed as an input pin. We will need to design our memory management system and also design a machine language before this problem can be addressed fully.

A register architecture is beginning to be implied by the design. We now have a 900-bit instruction word and 20 45-bit inputs to separate parts of the circuitry. It is also evident that we will need the machine to be capable of receiving consecutive instruction words at a certain frequency and handling them sequentially. For this purpose, it will be necessary to design a hardware (or software) digital microprocessor around our hybrid analog/digital operation pipeline architecture that implements minimal but ideal control logic. This is a typical exercise in optimizing control logic that could be performed by any competent embedded systems designer. The fundamental advantage for quantum emulation of our design is entirely encapsulated by the work presented thus far.

The circuitry defined thus far plays the role of the execution modules in a digital pipeline. A processor built around our execution module might have no ALU or FPU. Rather, it could have the described operation pipelining infrastructure and resulting combinational digital logic and convolutional analog differential operator circuits in their place. This would allow it to achieve efficient quantum information emulation. Alternatively, our execution module could be added to supplement the FPU or ALU in a pre-existing computing system.

Optimality Comparison

The optimality of the proposed emulator will be compared to another recent hardware general quantum computing emulator that makes use of digital FPGA technology. In December 2018, Dlugopolski and Pilch published an FPGA-based real quantum computer emulator design in the Journal of Computational Electronics.

Dlugopolski and Pilch implemented their own 10-bit floating point arithmetic in VHDL and built a quantum computing emulator on top of it. The quantum state was maintained as a set of 10-bit floating point numbers, one float for each of the real and imaginary components of each coefficient. Quantum transformations were implemented by parallel floating point multiplications acting on input gate registers and input state registers.
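To make that structure concrete, the following NumPy sketch (my own illustration, not a transcription of their VHDL) stores a state as separate arrays of real and imaginary parts and applies a single-qubit gate using only real multiplications and additions, which is what banks of parallel floating point multipliers compute in hardware.

import numpy as np

def apply_single_qubit_gate(re, im, g_re, g_im, qubit):
    """Apply a 2x2 gate (split into real/imaginary parts) to one qubit.

    `re` and `im` are length 2**n arrays holding the real and imaginary
    parts of the state's coefficients; `g_re` and `g_im` are 2x2 arrays.
    """
    new_re, new_im = re.copy(), im.copy()
    for i in range(len(re)):
        if ((i >> qubit) & 1) == 0:
            j = i | (1 << qubit)  # partner basis state with the target qubit set
            a_re, a_im, b_re, b_im = re[i], im[i], re[j], im[j]
            # Complex multiply-accumulate spelled out as real operations.
            new_re[i] = g_re[0, 0]*a_re - g_im[0, 0]*a_im + g_re[0, 1]*b_re - g_im[0, 1]*b_im
            new_im[i] = g_re[0, 0]*a_im + g_im[0, 0]*a_re + g_re[0, 1]*b_im + g_im[0, 1]*b_re
            new_re[j] = g_re[1, 0]*a_re - g_im[1, 0]*a_im + g_re[1, 1]*b_re - g_im[1, 1]*b_im
            new_im[j] = g_re[1, 0]*a_im + g_im[1, 0]*a_re + g_re[1, 1]*b_im + g_im[1, 1]*b_re
    return new_re, new_im

# Example: apply an X gate to qubit 0 of |00> and recover |01>.
re0, im0 = np.array([1.0, 0.0, 0.0, 0.0]), np.zeros(4)
x_re, x_im = np.array([[0.0, 1.0], [1.0, 0.0]]), np.zeros((2, 2))
print(apply_single_qubit_gate(re0, im0, x_re, x_im, qubit=0))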

Qubit measurement in their design was implemented as Von Neumann measurement, requiring the following steps.

  1. The probability of measuring 0 is computed from the entire state
  2. A pseudo-random real number is generated
  3. If the number from step 2 is greater than the probability from step 1, the qubit’s measured value is set to 1. Otherwise, the qubit’s measured value is set to 0
  4. The amplitudes of all impossible states (ones where the selected qubit’s value differs from the measured value) are set to 0
  5. All remaining amplitudes are normalized so that their squared magnitudes again sum to 1

This procedure clearly scales exponentially with the number of qubits being maintained. However, their implementation is valuable in the sense that it offloads the complexity of emulating quantum information systems completely to the amount of digital hardware being thrown at the problem, and this hardware is being used optimally.
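The five steps can be written out directly against a classical state vector. The sketch below is my own NumPy rendering of the procedure (with the least significant bit of a basis-state index taken as qubit 0), not a transcription of their hardware.

import numpy as np

def measure_qubit(state, qubit, rng=None):
    """Von Neumann measurement of one qubit of a state vector of amplitudes."""
    rng = rng or np.random.default_rng()
    indices = np.arange(len(state))
    in_zero_subspace = ((indices >> qubit) & 1) == 0

    # 1. Probability of measuring 0, computed from the entire state.
    p_zero = np.sum(np.abs(state[in_zero_subspace]) ** 2)
    # 2. A pseudo-random real number in [0, 1).
    r = rng.random()
    # 3. Compare it with the probability to fix the measured value.
    measured = 1 if r > p_zero else 0
    # 4. Zero the amplitudes of all now-impossible basis states.
    collapsed = np.where(((indices >> qubit) & 1) == measured, state, 0.0)
    # 5. Renormalize so the squared magnitudes sum to 1 again.
    collapsed = collapsed / np.linalg.norm(collapsed)
    return measured, collapsed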

Their study reports the relationship between the number of hardware ALMs (adaptive logic modules) required and the number of representable qubits. To appreciate the significance of an ALM count, consider that some Intel Cyclone V FPGAs have 32,000 onboard ALMs.

Based on current prices of Cyclone FPGAs, using Dlugopolski and Pilch’s approach would cost around $900 to emulate 4 qubits, or $5000 to emulate 5 qubits.

While the computational speed of their approach is not analyzed in their paper, it is easy to find the floating point throughput that commercial FPGAs are capable of. For example, a common Cyclone IV FPGA is capable of 2.7 GFLOPS.

Since our approach does not explicitly use floating point operations to emulate quantum computing, we must introduce a comparable performance metric. The precision and slew rates of the op amps used in our design will be the limiting characteristics of our approach. Since the op amps are used only in common configurations that don’t depend on their specific implementations, we have the ability to choose the family of amps to employ based on these characteristics, and weigh efficiency against cost.

The high-precision LMH3401 op amps used in the design have a slew rate of 18,000 V/μs and cost about $14 apiece on DigiKey as of this writing. Given a signal range of 10 volts, this translates to a maximum propagation delay of roughly 0.5 ns per amplifier.

Taking a look at the critical path through the X1 operation’s circuitry, we can estimate the propagation delay of the entire operation to be 5 op amps multiplied by this per-amplifier delay, or roughly 2.8 ns.

This leads to the conclusion that our device might operate with an efficiency comparable to roughly 2.88 × 10⁹ equivalent floating point operations per second, i.e. about 2.88 GFLOPS, better than the FPGA implementation.
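The arithmetic behind this estimate can be reproduced directly. How many floating point operations one analog gate application should be counted as is an assumption of the comparison; it is left as an explicit parameter below (8 real multiplications, the cost of one 2 × 2 complex matrix-vector product, reproduces the figure above).

voltage_range_v = 10.0        # assumed full-scale analog signal swing
slew_rate_v_per_us = 18000.0  # LMH3401 slew rate quoted above
critical_path_amps = 5        # op amps on the X1 critical path

per_amp_delay_ns = voltage_range_v / slew_rate_v_per_us * 1e3  # ~0.56 ns
operation_delay_ns = critical_path_amps * per_amp_delay_ns     # ~2.8 ns
gate_applications_per_s = 1e9 / operation_delay_ns             # ~3.6e8 per second

flops_per_gate_application = 8  # assumed equivalence, see note above
equivalent_flops = gate_applications_per_s * flops_per_gate_application
print(equivalent_flops)  # ~2.88e9, i.e. ~2.88 GFLOPS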

If we notice that the HSAM and oscillator are not actually required elements of the system for computation, but only required for information encoding for output, we can estimate the cost of a computation-only system based on our approach by summing the costs of the op amps in the quantum operator modules: $14.55 × (50 op amps) = $727.5. This is notably less than the estimated $900 cost to emulate 4 qubits using Cyclone V FPGAs.

The selected op amp has a 30 mV stable precision. However, by introducing another, cheaper LMV771 op amp we can perform common mode voltage correction for $0.55 per op amp and achieve a 3 mV precision.

The number of encodable values of a can be estimated as the 10 V range divided by the smallest resolvable amplitude step, giving approximately 3,333,333.3 distinguishable states. Since this is less than the number of states that are distinguishable in the oscillator output, the op amp choice is the limiting factor for the design’s capabilities if the LMH3401 is used.

If we wanted to increase the performance of the op amps, we could choose a more performant amp, say the ADL5569. This op amp has a slew rate of 24,000 V/μs and would therefore allow our device to perform with an efficiency comparable to 38.4 GFLOPS. However, each costs around $28, so the cost of a device would almost double, increasing to $1,400. Note that this is an order of magnitude increase in speed for a doubling of cost, which is actually quite good.

CPU Cooperation and Interfacing

Intermediate Hardware

Interfacing with the hardware emulator device defined thus far can be achieved through an intermediate hardware device that facilitates communication between the emulator and a typical computer over USB. Such a device was designed and prototyped using an FT232H FTDI device and MCP4725 12-bit DAC.

The DAC converts digital signals from the computer into the analog signals used to program the emulator. The MCP4725 is addressable since it uses the I2C bus communication protocol, so multiple such devices could be daisy-chained using the same bus and individually addressed. The FT232H converts signals from the computer’s USB to the desired I2C protocol signals. The FT232H also supports use of 15 digital GPIO pins on top of the serial data and serial clock I2C pins.

These devices are prototyping tools and allow the interface device to be designed, but in a production device a single enclosed PCB would be designed instead.

Minimal Python Interface

A minimal Python script that enables software executed on the computer to write to the digital registers of the DAC is shown here. This script makes use of a package provided by Adafruit.

import sys
import Adafruit_GPIO.FT232H as FT232H

def query_connected_devices():
    print('Scanning all possible emulator addresses.')
    for address in range(127):
        # Skip I2C addresses which are reserved.
        if address <= 7 or address >= 120:
            continue
        # Create I2C object.
        i2c = FT232H.I2CDevice(ft232h, address)
        # Check if a device responds to this address.
        if i2c.ping():
            print('Found device at address 0x{0:02X}'.format(address))

def read(device_address, register):
    # Convert hex string inputs to integers.
    device_address = int(device_address, 16)
    register = int(register, 16)
    # Create an I2C device at the given address.
    i2c = FT232H.I2CDevice(ft232h, device_address)
    response = None
    if i2c.ping():
        # Read a 16 bit unsigned little endian value from the register.
        response = i2c.readU16(register)
    else:
        print('No device at address 0x{0:02X}'.format(device_address))
    return response

def write(device_address, register, value):
    # Convert hex string inputs to integers.
    device_address = int(device_address, 16)
    register = int(register, 16)
    value = int(value, 16)
    # Create an I2C device at the given address.
    i2c = FT232H.I2CDevice(ft232h, device_address)
    if i2c.ping():
        # Write an 8 bit value to the register.
        i2c.write8(register, value)
    else:
        print('No device at address 0x{0:02X}'.format(device_address))

def print_help():
    for command in COMMAND_LIST:
        print(COMMAND_LIST[command]['help'])

COMMAND_LIST = {
    'list_devices': {
        'method': query_connected_devices,
        'params': [],
        'help': 'list_devices: list connected devices and their addresses.'
    },
    'read_device_register': {
        'method': read,
        'params': ['device_address', 'register'],
        'help': 'read_device_register <device_address> <register>: Read the value from a device\'s register.'
    },
    'write_to_device_register': {
        'method': write,
        'params': ['device_address', 'register', 'value'],
        'help': 'write_to_device_register <device_address> <register> <value>: Write a value to a device\'s register.'
    },
    'help': {
        'method': print_help,
        'params': [],
        'help': 'help: Print help.'
    }
}

# Temporarily disable FTDI serial drivers.
FT232H.use_FT232H()
# Find the first FT232H device.
ft232h = FT232H.FT232H()
if len(sys.argv) < 2:
    print_help()
    sys.exit()
arg = sys.argv[1]
args = sys.argv[1:]
if arg not in COMMAND_LIST.keys():
    print('Command {0} not found. Acceptable commands include: {1}'.format(arg, list(COMMAND_LIST.keys())))
    sys.exit()
if not len(args[1:]) == len(COMMAND_LIST[arg]['params']):
    print('Command {0} given incorrect parameter(s). Correct usage: {1}'.format(arg, COMMAND_LIST[arg]['help']))
    sys.exit()
result = COMMAND_LIST[arg]['method'](*args[1:])
if result is not None:
    print(result)

It is appealing to provide an interface to the emulator in Python, since Python hosts some of the most prominent scientific and experimental computing tools. Beyond the appeal of having scientific computing tools like SciPy, NumPy, and Pandas, Python is also the main language being used to develop quantum programming tools. It is the programming language used in quantum computing projects including IBM’s IBMQuantumExperience package, IBM’s Qiskit package, Google’s Cirq simulator, Rigetti’s Forest SDK, Xanadu’s PennyLane QML package and Xanadu’s StrawberryFields CV package.

The oscillator could be approximated for demonstrative purposes by a 555 timer in a variable frequency configuration and a CPLD. Here I used an Altera CPLD and DIP package ICs to this end.
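For reference, assuming the standard astable 555 configuration with timing resistors R1 and R2 and capacitor C (component values here are whatever a given demonstration calls for), the output frequency follows the usual approximation

f ≈ 1.44 / ((R1 + 2·R2) · C)

so the frequency can be made variable by switching R2 or C between values, for example under CPLD control.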

Criticism

The presented design only works for a small number of qubits as it is limited by the precision and accuracy of the analog signals that can be encoded using current electronics.

The scalability looks very good in terms of cost per qubit (up to 20, anyhow), but is very poor in terms of the number of electrical components needed to implement a universal gate set. A new set of ASICs has to be designed for each new qubit count, and this is where the poor scaling is offloaded. This is therefore not a scalable platform.

The error introduced by these ASICs has not been fully benchmarked and will depend on each individual approach to implementing the gate sets per qubit count. By the way, one of the approximate circuits here can be replaced by a simpler one. See if you can find it on another read.

License

This project is licensed under the CERN Open Hardware Licence Version 2 — Weakly Reciprocal (CERN-OHL-W). For details please see the LICENSE file or https://cern.ch/cern-ohl

SPDX-License-Identifier: CERN-OHL-W-2.0

https://spdx.org/licenses/CERN-OHL-W-2.0.html

References

This article is strictly content that I wrote and presented as I was working towards my thesis at the University of Waterloo, which I defended in September 2020. The final version of the thesis may be found here: https://uwspace.uwaterloo.ca/handle/10012/16383.

  1. M. A. Nielsen, I. L. Chuang, Quantum Computation and Quantum Information
  2. La Cour, B. R., & Ott, G. E. (2015). Signal-based classical emulation of a universal quantum computer. New Journal of Physics, 17(5), 053017. doi:10.1088/1367–2630/17/5/053017
  3. Pilch, Jakub, and Jacek Dlugopolski. “An FPGA-Based Real Quantum Computer Emulator.” Journal of Computational Electronics, vol. 18, no. 1, 2018, pp. 329–342., doi:10.1007/s10825–018–1287–5.
  4. Ates, A., Alagoz, B. B., Yeroglu, C., & Alisoy, H. (2015). Sigmoid based PID controller implementation for rotor control. 2015 European Control Conference (ECC). doi:10.1109/ecc.2015.7330586
  5. Park, J., Vanzee, R., Lal, W., Welter, D., & Obeysekera, J. (2005). Sigmoidal Activation of Proportional Integral Control Applied to Water Management. Journal of Water Resources Planning and Management, 131(4), 292–298. doi:10.1061/(asce)0733–9496(2005)131:4(292)
  6. Raghunandan, C., Sainarayanan, K., & Srinivas, M. (2006). Encoding with Repeater Insertion for Minimizing Delay in VLSI Interconnects. 2006 6th International Workshop on System on Chip for Real Time Applications. doi:10.1109/iwsoc.2006.348237
  7. Demrow, R. I. (1970). Settling Time of Operational Amplifiers. Analog Dialogue, 4. Retrieved December 21, 2018, from https://www.analog.com/media/en/analog-dialogue/volume-4/number-1/articles/volume4-number1.pdf.
  8. Larose, R. (2019). Overview and Comparison of Gate Level Quantum Software Platforms. Quantum, 3, 130. doi:10.22331/q-2019–03–25–130
  9. Sarma, S. D., Deng, D., & Duan, L. (2019). Machine learning meets quantum physics. Physics Today, 72(3), 48–54. doi:10.1063/pt.3.4164
  10. Request for Information (RFI) DARPA-SN-18–68 Quantum Computing Applications with State of the Art Capabilities[PDF]. (2018, July 10). DARPA.
  11. Quantum valley. (2015, June 04). Retrieved from https://uwaterloo.ca/research- technology-park/news/quantum-valley
  12. Boixo, S., et al. (2017, April 05). Characterizing Quantum Supremacy in Near-Term Devices. Retrieved from https://arxiv.org/abs/1608.00263
  13. Microsoft opens up about its research in quantum computing. (2014). Physics Today. doi:10.1063/pt.5.028000
  14. Miller, R. (2018, October 04). BlackBerry races ahead of security curve with quantum-resistant solution. Retrieved from https://techcrunch.com/2018/10/04/blackberry-races-ahead-of-security-curve-with-quantum-resistant-solution/
  15. “Cramming More Power Into a Quantum Device.” IBM Research Blog, 15 Mar. 2019, www.ibm.com/blogs/research/2019/03/power-quantum-device/.
  16. Madhok, V., Gupta, V., Trottier, D., & Ghose, S. (2015). Signatures of chaos in the dynamics of quantum discord. Physical Review E,91(3). doi:10.1103/physreve.91.032906
  17. Feynman, R. P. (1982). Simulating physics with computers. International Journal of Theoretical Physics, 21(6–7), 467–488. doi:10.1007/bf02650179
  18. “Xanadu Raises $32M Series A to Bring Photonic Quantum Computing to the Cloud.” Canada NewsWire, 2019, pp. Canada NewsWire, Jun 24, 2019.
  19. Gibney, E. (2016). Inside Microsoft’s quest for a topological quantum computer. Nature. doi:10.1038/nature.2016.20774.
  20. “Ion-Based Commercial Quantum Computer Is a First.” Physics World, 20 Dec. 2018, physicsworld.com/a/ion-based-commercial-quantum-computer-is-a-first/.
  21. “Quantum.” Google AI, ai.google/research/teams/applied-science/quantum-ai/.
  22. Nathan Killoran, Josh Izaac, Nicolás Quesada, Ville Bergholm, Matthew Amy, and Christian Weedbrook. Strawberry Fields: A Software Platform for Photonic Quantum Computing, 2018. arXiv:1804.03159
  23. Edwards, Marcus. “Developing a Hybrid Methodology for Solving Quantum Problems.” Canadian Association of Physicists CAM Conference. Laurentian University, Sudbury, ON, Canada. 25 July 2019.
  24. “IEEE Standard VHDL Language Reference Manual.” doi:10.1109/ieeestd.1994.121433.
  25. Kaye, P., Laflamme, R., & Mosca, M. (2010). An introduction to quantum computing. Oxford: Oxford University Press
  26. “Continuous-Variable Quantum Computing.” Introduction — Strawberry Fields 0.12.0-Dev Documentation, strawberryfields.readthedocs.io/en/latest/introduction.html.
  27. Edwards, Marcus. “Q.E.E. Quantum Experiment Engine.” Quantum Experiment Engine, https://github.com/comp-phys-marc/distributed-emulator.
  28. Reaching the 50-qubit milestone in quantum computing. (n.d.). AccessScience. doi:10.1036/1097–8542.br1120171
  29. Gibney, E. (2017). D-Wave upgrade: How scientists are using the world’s most controversial quantum computer. Nature, 541(7638), 447–448. doi:10.1038/541447b
  30. Wu, Rebing, et al. “Control Problems in Quantum Systems.” Chinese Science Bulletin, vol. 57, no. 18, 2012, pp. 2194–2199., doi:10.1007/s11434–012–5193–0.
  31. Butkovskiy, A. G., and Yu. I. Samoilenko. “Optimal Control of Quantum-Mechanical Processes.” Mathematics and Its Applications: Control of Quantum-Mechanical Processes and Systems, 1990, pp. 101–116., doi:10.1007/978–94–009–1994–5_4.
  32. Vandersypen, L. M., & Chuang, I. L. (2005). NMR techniques for quantum control and computation. Reviews of Modern Physics, 76(4), 1037–1069. doi:10.1103/revmodphys.76.1037
  33. Tarn, T., Huang, G., & Clark, J. W. (1980). Modelling of quantum mechanical control systems. Mathematical Modelling, 1(1), 109–121. doi:10.1016/0270–0255(80)90011–1
  34. IBM Q — Quantum Computing. (2018, June 05). Retrieved from https://www.research.ibm.com/ibm-q/
  35. Rigetti. (n.d.). Retrieved from https://www.rigetti.com/
  36. D-Wave Systems. (n.d.). Retrieved from https://www.dwavesys.com/
  37. “The Changelog #356: Observability Is for Your Unknown Unknowns with Christine Yen.” Changelog, https://changelog.com/podcast/356.
  38. Baugh, C. R., and B. A. Wooley. “A Two’s Complement Parallel Array Multiplication Algorithm.” IEEE Transactions on Computers, C-22, no. 12, 1973, pp. 1045–1047.
  39. Luk, W. K., and J. E. Vuillemin. Recursive Implementation of Optimal Time VLSI Integer Multipliers. Dept. of Computer Science, Carnegie-Mellon University, 1983.
