Machine Learning, Quantum Machine Learning and Consciousness — Scientifically and Philosophically
--
Gradient Descent Learning
Restricted Boltzmann machines (RBMs) are an early machine learning neural network structure created by Geoffrey Hinton. A restricted Boltzmann machine is a shallow network with two layers: one is “hidden”, one is “visible”. The visible layer’s nodes accept the system’s inputs, process these inputs, and pass their outputs on to the nodes of the next layer. Each node is a McCulloch-Pitts neuron and has an activation function, typically of the following form.
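In standard notation, with inputs xi, weights wji, bias b and transfer function T, the node output is

$$y_j = T\left(\sum_i w_{ji}\, x_i + b\right)$$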
Each input xi is multiplied by a weight wji, and a bias b is added to the weighted sum. The result is passed to an activation transfer function T, which produces the node’s output. This output amplifies or subdues the strength of the signal passing through the node.
In an RBM, each node of the first layer outputs to each node of the second. Each combination of source and destination nodes ji has its own weight.
Neural networks based on graphs like the RBM can achieve learning using several tactics. One popular technique is to design the learning system to employ gradient descent optimization on its overall transfer function. The goal of the optimization is to adjust the overall transfer function so as to minimize a cost function chosen by the designer. That means that as the machine “learns”, its true goal is to achieve the following, where C is the cost function and w are the weights in the network.
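In other words, training seeks the weight configuration

$$w^{*} = \arg\min_{w} C(w)$$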
This optimization is achieved using a method called backpropagation. We define some helpful functions to summarize the information within the neural network, where (L) denotes a layer in the network and σ is the activation:
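A standard choice of bookkeeping functions is

$$z^{(L)} = w^{(L)} a^{(L-1)} + b^{(L)}, \qquad a^{(L)} = \sigma\left(z^{(L)}\right)$$

where a(L) is the output of layer (L) and b(L) its bias.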
Then we can define the derivative of the cost with respect to the weights of a layer explicitly using the chain rule.
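Using the definitions above:

$$\frac{\partial C}{\partial w^{(L)}} = \frac{\partial z^{(L)}}{\partial w^{(L)}} \frac{\partial a^{(L)}}{\partial z^{(L)}} \frac{\partial C}{\partial a^{(L)}} = a^{(L-1)}\, \sigma'\left(z^{(L)}\right) \frac{\partial C}{\partial a^{(L)}}$$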
We can then define the derivative of the full cost function for a system trained on multiple input sets, where Ck is the observed cost of processing a particular input set k.
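Taking the total cost as the average over K input sets,

$$\frac{\partial C}{\partial w} = \frac{1}{K} \sum_{k} \frac{\partial C_k}{\partial w}$$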
It is the gradient of the cost which is “descended” after each data set is used to train the system. The direction of the gradient simply informs the direction of an adjustment of a predetermined maximum amplitude in the high-dimensional space of the weights and biases of the system.
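As a concrete illustration, the following minimal sketch (Python/NumPy; the sigmoid activation and squared-error cost are choices made for the example, not prescribed above) performs one such descent step for a single layer:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_descent_step(w, b, x, target, learning_rate=0.1):
    """One gradient-descent step for a single sigmoid layer
    with squared-error cost C = (a - target)^2."""
    z = w @ x + b               # weighted input, z = w x + b
    a = sigmoid(z)              # activation, a = sigma(z)
    dC_da = 2.0 * (a - target)  # dC/da
    da_dz = a * (1.0 - a)       # sigma'(z) for the sigmoid
    delta = dC_da * da_dz       # dC/dz, via the chain rule
    grad_w = np.outer(delta, x) # dC/dw = dC/dz * dz/dw
    grad_b = delta              # dC/db
    # Step against the gradient with a fixed amplitude (the learning rate).
    return w - learning_rate * grad_w, b - learning_rate * grad_b
```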
Neural Network Representation of Quantum States
In order to employ quantum computational methods to create an artificial intelligence, we will use the tensor network representation of the many-body quantum state, which provides a link between quantum states and neural networks. Carleo and Troyer first introduced the idea of representing quantum state information using a restricted Boltzmann machine in 2017. The RBM-based quantum state representation scheme that resulted from their investigation is the following.
For a quantum state of N qubits, the general form of the many-body wave function is the following, where cn are complex coefficients for the possible many-body qubit configurations |n⟩ = |σ1, σ2, ..., σN⟩.
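In bra-ket notation,

$$|\Psi\rangle = \sum_{n} c_n |n\rangle$$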
The approach to modelling this information with an RBM begins with modelling the system as an input/output machine which, given an input |n⟩, will return the appropriate output cRBM(n).
The RBM representation is a neural network with one visible layer of N neurons and one hidden layer of M neurons. Each visible neuron is connected to every hidden neuron. No hidden neurons are interconnected. No visible neurons are interconnected.
Assuming a one-dimensional qubit system, we have a network of rank-3 tensors Aijk in the matrix product state representation. The tensor indices represent their connections in the network. In this one-dimensional case, two of the indices of each tensor are connected to other tensors and contracted. This leaves one free index per tensor, representing the physical degrees of freedom. The resulting quantum state is then given by the following.
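Assuming periodic boundary conditions, this is the standard matrix product state form

$$|\Psi\rangle = \sum_{\sigma_1, \ldots, \sigma_N} \mathrm{Tr}\left(A^{\sigma_1} A^{\sigma_2} \cdots A^{\sigma_N}\right) |\sigma_1 \sigma_2 \ldots \sigma_N\rangle$$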
In the RBM representation, the quantum state is given by the following, where h ranges over the possible configurations of the hidden neurons, Wjk is the coupling strength between visible neuron j and hidden neuron k, and aj, bk are the visible and hidden neurons’ bias parameters respectively.
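Following Carleo and Troyer’s ansatz, this takes the form

$$\Psi(\sigma) = \sum_{\{h\}} \exp\left(\sum_j a_j \sigma_j + \sum_k b_k h_k + \sum_{jk} W_{jk} h_k \sigma_j\right)$$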
Haikonen’s Cognitive Modular Neural Architecture
Haikonen’s architecture addresses the formulation of meaning and significance by allowing the AI to use a learned internal “language” to process its self-generated and external inputs. Haikonen’s architecture attempts to address perception, attention, associative learning, associative recall, match detection, pain, pleasure and even introspection.
The architecture is based on a simple associative neuron element.
The inputs and outputs of the associative neuron include s, the main input signal (the neuron stimulus); a1, …, an, the associative input signals; THs, the synaptic learning fixation threshold; TH, the neuron output bi-directional threshold control signal; m, an output representing a match; mm, an output representing a mismatch; n, an output representing the detection of novelty; and finally so, the neuron’s main output. All of the signals are expected to take values between zero and one inclusive.
The fundamental mechanism for synaptic learning is based on the recognition of recurring coincident values of the associative inputs ai and the main input s. This is a modified Hebbian approach to learning. When the learning fixation threshold is reached, the synaptic weight becomes one, meaning that an association between s and some ai has been established. Once an association has been established, the presence of the associated values ai can together evoke the associated neuron output so, even without the presence of the learned associated value s at the main input. On the other hand, if s is present along with the associated values ai, the output so is amplified. Initially, when the system is untrained, so will equal s and there will be no associative feedback in the system. A minimal sketch of this behaviour follows.
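This sketch (Python/NumPy) is illustrative only; the fixation-accumulation rule and the class and parameter names are assumptions, not Haikonen’s exact formulation:

```python
import numpy as np

class AssociativeNeuron:
    """A minimal sketch of a Haikonen-style associative neuron."""

    def __init__(self, n_assoc, fixation_threshold=0.9):
        self.weights = np.zeros(n_assoc)     # synaptic weights, start untrained
        self.fixation = np.zeros(n_assoc)    # accumulated coincidence evidence
        self.threshold = fixation_threshold  # learning fixation threshold THs

    def step(self, s, a):
        """s: main input in [0, 1]; a: associative inputs in [0, 1]."""
        # Hebbian-style fixation: recurring coincident s and a_i accumulate.
        self.fixation += s * a
        # Once fixation crosses the threshold, the weight snaps to one.
        self.weights[self.fixation >= self.threshold] = 1.0
        # Associative evocation: learned inputs can evoke output without s;
        # when s is also present, the output is amplified.
        evoked = np.clip(np.max(self.weights * a), 0.0, 1.0)
        so = np.clip(s + evoked, 0.0, 1.0)
        return so
```

When the neuron is untrained, all weights are zero, so the output so simply equals s, as described above.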
In the neuron architecture proposed by Haikonen, main input signal arrays are processed by parallel blocks of neurons with common associative inputs. A winner-takes-all approach is used to select the strongest signal output by a group of neurons and discard the rest.
There are several different neuron groups in the architecture:
- Feedback neuron groups with output thresholds
- Associative neuron groups
- “Winner takes all” neuron groups
The “percept” signals are non-linear sums of the sensed signals and feedback. This architecture achieves three types of perception.
The first type of perception is called “predictive perception” by Haikonen. This type of perception occurs when a feedback signal represents an expectation due to associations that have been made. In this case the outputs m, mm represent the accuracy of this prediction.
The second type is “Searching Perception”, where the feedback represents an evoked desire to find a certain input. m, mm represent the success or failure of this search. This type of perception is influenced by the pleasure / displeasure system and the human-accessible “reward” and “punish” inputs.
The final type of perception is “Introspective Perception”. This type occurs when a percept signal is a sum of only feedback terms, and there is no sensed input signal s.
Consciousness
“If there is any sense in which these philosophers are rejecting the ordinary view of the nature of things like pain… their view seems to be one of the most amazing manifestations of human irrationality on record. It is much less irrational to postulate the existence of a divine being whom we cannot perceive than to deny the truth of the commonsense view of experience.” — Galen Strawson
To be conscious is fundamentally to experience. To experience the unforgiving Darwinian universe is to experience the pain of war, famine and death. The human condition is eloquently captured by a worldview that takes evolution as an expression of relativity that is fundamental and necessary for life.
“Thus, from the war of nature, from famine and death, the most exalted object which we are capable of conceiving, namely, the production of the higher animals, directly follows. There is grandeur in this view of life, with its several powers, having been originally breathed into a few forms or into one; and that, whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved.” — Darwin
If we are to generalize these notions, as we have generalized every notion throughout this work to information, we may abduce that consciousness has an information-theoretic definition, and that this definition follows from a process of selection. It seems logical to expect there to be analogues between evolutionary conscious systems and evolutionary relativistic systems like quantum information. And indeed, there are many.
Sentient intelligence is an interesting categorization of intelligence to which humans ascribe a great deal of meaning. Our understanding of sentient intelligence is coupled with our experience and understanding of the human condition. The life forms we have encountered and deem to possess sentient intelligence form a hierarchy of increasingly complex autonomous beings, with humanity usually placed at the top, though I believe this to be a result of nothing other than our egocentric lack of perspective. Cochrane describes a higher-order consciousness that separates humanity from lower animals.
“While it seems perfectly reasonable to suppose that sentient animals possess and pursue desires, as we have seen, this is not quite the same thing as autonomy. For autonomy refers to the capacity to reflect on those desires, and modify them in relation to one’s own conception of the good. Importantly, such a capacity requires a level of consciousness above mere sentience. Indeed, one might claim that to be an autonomous agent one needs to possess ‘higher- order thought consciousness,’ that is, to be able to have thoughts about thoughts (Carruthers, 1992). Is there any evidence to suggest that the members of any species of animal aside from Homo sapiens possess such capacities? For most species of animal, there is no evidence to suggest that they possess such ‘second-order’ or ‘higher- order’ capacities. Fish, frogs, rats, and cats may all have the capacity for conscious experience and may also possess desires, but there is little in their physiology or behavior to suggest that they have the ability to reflect on their own thoughts and pursue their own considered goals.” — Cochrane
Indeed, in our immediate neighbourhood of nature, humanity seems categorically identifiable by our capacity for layered thought analysis. It makes sense, then, that there is concern and fear regarding the issue of advanced AI. The most rapidly developing and evolving intelligence we have encountered in the universe is artificial. Good and Chalmers speculate that the first true AI will be extremely powerful, and analyses such as Armstrong, Bostrom and Shulman’s emphasize the likelihood that an advanced intelligence given an abundance of information will become dangerous to humanity. We may be facing a “species” of intelligence with a capacity for thought analysis that rivals or surpasses our own for the first time. To make things worse, the orthogonality of value and intelligence, a concept discussed by Bostrom and Armstrong, expresses that high intelligence does not imply morality.
The control problem is of utmost importance to what I will call the informational superiority of humanity. Have we begun to create the first contender for our position in the pecking order? Could humanity become obsolete as a result?
I propose not only that the goals and knowledge of an artificially intelligent agent are orthogonal, but that the morality of an artificially intelligent agent is inversely proportional to the completeness of its knowledge. I will contrast the topology of the information processing architecture that is humanity with the deep learning neural networks of the present and near future. I will also propose a solution to the control problem inspired by the limited nature of humanity’s interactions with quantum information in our universe.
First, we must make the connection between consciousness, quantum information and relativity concrete. All the anecdotal evidence and similarities between systems of conscious agents and quantum particles aside, we will now begin to formally bridge the gaps.
One of the most promising models of consciousness in current neuroscience is Integrated Information Theory (IIT). IIT is constructed from an abductive argument with five axioms:
1. Intrinsic existence: Consciousness is taken to actually exist. In fact, IIT states that the existence of one’s consciousness is the only fact that one can know immediately, with certainty. IIT also posits that conscious experience exists for itself, independent of external observers.
2. Composition: Consciousness is structured by phenomenological distinctions. An experience is made up of a number of distinctions, which are fundamentally nested and can be lower or higher order.
3. Information: Consciousness is specific. What separates one experience from another is each experience’s composition as a unique set of phenomenological distinctions.
4. Integration: A conscious experience, and thereby consciousness itself, is composed of the most irreducible sets of phenomenological distinctions that model perceived realities.
5. Exclusion: Consciousness is definite. The set of phenomenological distinctions that make up a particular experience is definite, meaning it is a specific set containing a specific number of distinctions.
IIT deals with these axioms by describing consciousness in terms of intrinsic causal power. This is because a system cannot be considered to exist if nothing can interact with it, or if it cannot interact with anything. Combined with the first axiom, which states that consciousness exists independent of external observers, it follows that consciousness must have cause-effect power upon itself. This is expressed as a causal network with a high degree of integrated information φmax whose structure at any time constrains its past and future states. Such a cause-effect structure is composed of Cause-Effect Repertoires (CERs). A cause-effect system consisting of a number of first-order elements is composed of all the causal systems found in the power set of the first-order elements. Hence, a system of three first-order elements ABC will yield a cause-effect structure composed of the structured subsystems A, B, C, AB, AC, BC, ABC, as the sketch below enumerates.
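A minimal sketch of this enumeration (Python; representing elements as strings is purely illustrative):

```python
from itertools import chain, combinations

def mechanisms(elements):
    """All candidate mechanisms of a cause-effect structure: the
    non-empty subsets (power set) of the first-order elements."""
    return list(chain.from_iterable(
        combinations(elements, r) for r in range(1, len(elements) + 1)))

# A system of three first-order elements yields seven candidate mechanisms:
# A, B, C, AB, AC, BC, ABC.
print(mechanisms(["A", "B", "C"]))
```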
φmax is a non-negative number which quantifies the extent to which a cause-effect structure changes if the system is partitioned or cut along its “minimum partition”, the partition that makes the least difference. If a partition makes no difference to the cause-effect structure, then the system is reducible and does not together form a conscious Whole. Rather, there might be two separate conscious systems, or more, or none.
One interesting fact about IIT is that its creators and champions do not believe that it is necessarily connected in any way to quantum information!
“Furthermore, it has never been properly explained why phenomenological aspects of consciousness or its neurobiological substrate require quantum properties. I see no need to invoke exotic physics to understand consciousness. It is likely that a knowledge of biochemistry and classical electrodynamics is sufficient to understand how electrical activity across vast coalitions of neocortical neurons constitute any one experience. But as a scientist, I keep an open mind. Any mechanism not violating physics might be exploited by natural selection.” — Koch
However, for a reader who has made it this far, I expect that you will share my immediate intuitive understanding that IIT and quantum information must be fundamentally the same! Unfortunately, this connection has not been made before, and with minimal digging we discover why. The researchers of IIT and QM both make the problematic assumption that quantum mechanics is a fundamental theory.
“If QM is really a fundamental theory, it shouldn’t need to invoke consciousness, brains and measuring devices.” — Koch
“Consciousness is not a clever algorithm. Its beating heart is causal power upon itself, not computation. And here’s the rub: causal power, the ability to influence oneself or others, cannot be simulated. Not now, nor in the future. It has to be built into the physics of the system. As an analogy, consider computer code that simulates the field equations of Einstein’s theory of general relativity, relating mass to spacetime curvature. Such software can simulate the supermassive black hole at the center of our Milky Way Galaxy, Sagittarius A*. This mass exerts such strong gravitational effects on its surroundings that nothing, not even light, can escape it. But have you ever wondered why the astrophysicists who simulate black holes don’t get sucked into their supercomputer? If their models are so faithful to reality, why doesn’t spacetime close around the computer doing the modelling, generating a mini black hole that swallows up the computer and everything around it? Because gravity is not a computation! Gravity has real extrinsic causal power.” — Koch
As you can see, Koch has done my work for me by demonstrating explicitly that the sinister trap, a love for the physical world and rejection of metaphysical generalizations, has stopped us in our tracks again! I wonder if every field of science isn’t spinning its wheels over the same issue! It is entertaining to me that in his book, “The Feeling of Life Itself: Why Consciousness Is Widespread but Can’t Be Computed”, Koch admits out of necessity that some computational systems could achieve consciousness, but then carries on to argue against that fact.
“In principle, special-purpose hardware built according to the brain’s design principles, so-called neuromorphic electronic hardware, could amass sufficient intrinsic cause-effect power to feel like something… Such neuromorphic computers could have human-level experience. But that would require a radically different processor layout and a complete conceptual redesign of the entire digital infrastructure of the machine.” — Koch
Koch has evidently not realized the lengths to which computer engineers will go. I suspect he is not aware of Intel’s Pohoiki Beach project, a neuromorphic system with 8,000,000 neurons.
He also points out that the particular physical substrate is not the magic that enables consciousness.
“It is the irreducible Whole that forms my conscious experience, not the underlying neurons.” — Koch
One interesting, and key, prediction of IIT is that there is a fundamental difference between inactive and inactivated neurons. An inactivated neuron is broken. It is not able to causally affect the Whole because there is no chance that its value will ever change again. Even a working neuron held at a constant low level, one that never spikes, contributes to the causal system that makes a consciousness, because it could fire but doesn’t. This reduces the whole thing to uncertainty about the elements used to construct the causal space of consciousness. This makes it clear that the neurons are entropic frames in one context and finite frames in another, at the interface between a continuous, functionally infinite-dimensional space (perhaps the space of mind-speak) and a finite space where the neurons appear as permutations. IIT links the two perspectives by describing how the power sets of causal structures (permutations of the structure’s elements) are mapped directly to a high-dimensional causal structure and consciousness! Again, the reason we haven’t seen the connection to QM and relativity here before is that we are obsessed with the idea that our perspective, our spacetime, our particular substrate of consciousness, is the most fundamental.
Uncertainty in Quantum Mechanics
Incomplete information plays an important role in both quantum information and artificial intelligence. However, the role is significantly different in each. In quantum information, the quantum uncertainty principle is respected. In classical probability, an uncertainty principle also exists, but with fundamentally different mechanics. Heisenberg’s quantum uncertainty principle for the position and momentum of a particle is the following.
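$$\sigma_x \sigma_p \geq \frac{\hbar}{2}$$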
A more general uncertainty principle which is applicable to arbitrary Hermitian operators is known as the Schrödinger uncertainty relation.
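For Hermitian operators A and B, this reads

$$\sigma_A^2 \sigma_B^2 \geq \left|\frac{1}{2}\left\langle\{A, B\}\right\rangle - \langle A\rangle \langle B\rangle\right|^2 + \left|\frac{1}{2i}\left\langle[A, B]\right\rangle\right|^2$$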
This principle expresses that for quantum pure states, increasing knowledge of an observable increases the uncertainty associated with every observable that doesn’t commute with it. In this respect, the measurement of an observable in the quantum model is dissimilar to a Bayesian update of a probabilistic variable. In a classical system, a Bayesian update happens when one gets some new information, and such an update to a probabilistic variable often decreases the uncertainty of the model. The density operator state allows for both probabilistic and quantum uncertainty to be considered in a model.
Uncertainty in Machine Learning
In machine learning, only classical probabilities play a role. Without some probabilistic features, an AI model is typically useless. In the design of a deep learning neural network for a categorization task, a computer scientist is equipped with a (simplified) set of tools. This includes an updateable statistical model of the AI’s training dataset (a regression is the simplest form) and McCulloch-Pitts neurons of various types. The jth McCulloch-Pitts neuron in a network takes a set of weighted inputs xi and a bias signal, and outputs a signal yj which transforms its inputs according to a transfer function T:
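$$y_j = T\left(\sum_i w_{ji}\, x_i + b_j\right)$$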
A simple instance of a McCulloch-Pitts neuron is the perceptron, which is a threshold binary classifier. Its output is always a 0 or 1. The perceptron will output a value of 1 if the following condition is satisfied.
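In the usual convention, the output is 1 when

$$\sum_i w_i x_i + b > 0$$

and 0 otherwise.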
This type of neuron then plays the role of summarizing its inputs by putting them into a category. Let a “data point” be a set of inputs xi that might represent a piece of high-dimensional information, like an image or a thought. Artificial intelligence is most useful for the categorization of data whose points are informationally dense and share a structural pattern. A set of these neurons is typically arranged with each neuron given the responsibility of categorizing a different subset of each input data point. A neuron’s output might simply be used to update a statistical model, by updating a regression or a more advanced statistical model of the training data to incorporate the new data. In deep learning networks, however, the output of a first layer of neurons is processed again. Patterns within the output of the first layer can be recognized and summarized again in a “pooling” layer of neurons that reduces the dimensionality of the categorization. Alternatively, the outputs of a layer might be passed to a “convolutional” neuron layer which increases the dimensionality of the data by analyzing more permutations and combinations of the conclusions from the previous layer’s output. Convolutional and pooling layers are understandably often found together.
This is where the role of incomplete information comes into the picture. While summarizing a set of input data into a 0 or 1 within each neuron does introduce some uncertainty, this uncertainty is not always considered sufficient. If the outputs of a layer in a neural network are all passed to each node in the next, then it is likely that each node of the next layer will make the same conclusions about the data, and together they will come up with a very precise model of the input dataset. This is not ideal, since a model that achieves an exact representation of a particular set of training data will reject every field data point that is not exactly the same as an element of the training dataset. So, some neurons in the consecutive layers of a neural net are given access to only some of the information produced by the previous layer (dropout is one such technique), so that some information is lost and the final statistical model of the learned data category has enough uncertainty that field data can also be considered a fit.
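A minimal sketch of this deliberate information loss (Python/NumPy; the keep probability is an arbitrary choice for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(layer_outputs, keep_prob=0.8):
    """Pass only a random subset of a layer's outputs to the next layer,
    deliberately discarding information so the learned model does not
    fit the training data exactly."""
    mask = rng.random(layer_outputs.shape) < keep_prob
    # Scale the survivors so the expected total signal is unchanged.
    return np.where(mask, layer_outputs / keep_prob, 0.0)
```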
Uncertainty in Human Communication
Uncertainty in human communication is multifaceted. We might make an analogy between the relationship of human thought to the use of language and the ability of perceptrons to communicate with each other only through a binary language with a dimensionality less than that of their input data. However, this would be simplistic, since human languages are themselves very high dimensional means of information encoding. Instead, let us model human communication less specifically. Let the dimensionality of human thoughts be Dt and the dimensionality of the most complex mode of human communication be Dc. Consider the fact that humans communicate through a variety of languages. For example, in an automobile, a human might have complex thoughts about their driving but is limited to a small set of signals for communicating with other drivers. Then we may actually have an entire spectrum of dimensional information systems that humans use to communicate, where Dc is the maximum dimensionality that would correspond with a complex language (perhaps Greek).
Consider the fact that the minimal dimensionality for information encoding is binary. This is why humanity engineers digital machines: boolean algebra is the simplest algebra that can be used to express an infinite number of pieces of information. Digital electronics were chosen as the tools of the computing industry due to the ease of their control. Higher dimensional systems such as optical computers and analog electrical computers require much more effort to control and to engineer in such a way that they will scale easily.
Digital computing has empowered humans to organize our thoughts through repeatable and standard techniques, much faster than could be done using a piece of paper and George Boole’s notes on algebra. This is largely due to the certainty an engineer may have about the signals being manipulated in a computing system. However, research is now looking to fields like optical and quantum computing because there are thoughts too complex and uncertain to be expressed in a binary system of any size.
Any higher dimensional system can be mapped by some projection onto a lower dimensional space. However, the human-conceived phenomena of quantum mechanics cannot be expressed in binary logic. The limiting attribute of our computing systems must not simply be their binary dimensionality. This reveals a fact about human thought. The inability of classical computers to implement general purpose quantum computing is synonymous with their inability to express all human thoughts. This is a mechanic that follows if the quantum model of human cognition is taken to be factual.
The most efficient modes of human communication are high dimensional. This is why we have turned to artificial intelligence to express our complex thoughts about the categories of information in the universe. While we also try to express complex thoughts using language and analog computers, artificially intelligent systems seem to hold the most potential for the sharing of complex categorizations due to their ease of implementation in our current technological era. However, it is arguable that uncertainty still plays a deeper role in human cognition than AI will be able to capture, due to the quantum uncertainty principle dynamic. AI is the first instance of a significantly uncertain high dimensional informational system engineered by humanity to express human thought. Quantum AI is distant but may embody the truest form of human expression possible.
Autonomic Systems
Computing systems that deal with the categorization and interpretation of data are typically engineered to conform to categorizations that are predefined by human trainers. This solidifies the sense that computing systems are expressions of human thought. For example, an image recognition machine is considered a success if it behaves sufficiently close to the image recognition performed by the brains of its own human trainers. The machine expresses the category first conceived by the human mind.
However, unsupervised AI breaks this pattern. These algorithms develop themselves and are not trained by a human. Rather, such an AI decides for itself what patterns are to be found in a data set and categorizes them according to self-selected parameters. A system that maintains itself in this way is considered an autonomic system. This is not a concept dissimilar to the autonomous sentience of a human with higher-order consciousness.
Unsupervised AI algorithms would not be something to be feared if they were not autonomic. To illustrate this point, consider that the concept of autonomic systems can be fully defined using the English language, yet uttering the mechanics and dynamics of a theoretical autonomic system is not frightening. This is because most human expressions do not have lives of their own. “Life” is a word that seems to originate in Old English with an original meaning roughly equal to “that which continues”. Complex human expressions such as computer programs have the ability to continually exist. However, typical computer programs require human help, i.e. human input, to continue to exist. Autonomic systems become something to be feared when we realize that they have the potential to maintain themselves without human intervention. In a sense, our creation might gain a life of its own. A human concept might exist independent of its creator. This would be a true act of creation.
Self Awareness
An artificial consciousness test has recently been developed by Turner and Schneider. It is a behavioural test which probes subjective experience. Schneider and Turner assert the importance of such a test due to the following:
- If a sentient, autonomous AI gains consciousness it must be treated morally
- The AI may raise safety concerns
- It could impact the feasibility of brain-implant technology
I am concerned with the first two motivations. Turner and Schneider describe their artificial consciousness test in the following way.
“The ACT would challenge an AI with a series of increasingly demanding natural language interactions to see how quickly and readily it can grasp and use concepts and scenarios based on the internal experiences we associate with consciousness. At the most elementary level we might simply ask the machine if it conceives of itself as anything other than its physical self. At a more advanced level, we might see how it deals with ideas and scenarios such as those mentioned in the previous paragraph. At an advanced level, its ability to reason about and discuss philosophical questions such as ‘the hard problem of consciousness’ would be evaluated. At the most demanding level, we might see if the machine invents and uses such a consciousness based concept on its own, without relying on human ideas and inputs.” — Turner, Schneider
The claim is that their method is sufficient to identify the consciousness of any AI that does not have access to the internet or to too much information that it could use to “cheat”.
This test is related to a debate about the nature of consciousness. The test assumes a behaviourist view of consciousness. Several proposals for conscious machines have been made. These range from Franklin’s functional consciousness to Haikonen’s cognitive architecture, which he argues will develop consciousness if implemented with sufficient complexity.
It is my suspicion that the evolution of consciousness is fueled by uncertainty. I also posit that in order for an intelligence to become self aware, it must first have access to the effects of its outputs. For example, an AI cannot become self aware if it is simply a feed-forward categorization algorithm or regression analysis. Rather, artificially intelligent candidates for self awareness are deep learning networks of many layers that take advantage of back-propagation and feedback loops.
I will also propose that it is a connection to one’s environment, potentially modellable by open quantum systems, that enables a human to experience a life that is so tied to the awareness of the self.
However, a balance must be struck between strong connection to the environment and isolation in the case of engineering an artificial intelligence. Turner and Schneider’s test defines self aware artificial intelligence in a way that requires the machine to be uncertain about the truth. Isolation from the informational realm of human truth is therefore an essential element in the design of a self aware artificial intelligence. Isolation also addresses the control problem. The challenge of controlling artificial intelligence has always been a balancing act between communication and isolation, since a totally isolated AI is useless to us, and a fully connected AI is potentially dangerous.
References
1. Cochrane, Alasdair. 2010. “Undignified bioethics.” Bioethics 24(5): 234–41.
2. Good I (1965) Speculations concerning the first ultraintelligent machine. Adv Comput 6:31–83
3. Chalmers D (2010) The singularity: a philosophical analysis. J Conscious Stud 17:7–65
4. Armstrong, S., Bostrom, N. & Shulman, C. AI & Soc (2016) 31: 201. https://doi-org.proxy.lib.uwaterloo.ca/10.1007/s00146-015-0590-y
5. Armstrong S (2013) General purpose intelligence: arguing the orthogonality thesis. Anal Metaphys 12:68–84
6. Bostrom N (2012) The superintelligent will: motivation and instrumental rationality in advanced artificial agents. Minds Mach 22:71–85
7. Kvam, P. D., Pleskac, T. J., Yu, S., & Busemeyer, J. R. (2015). Interference effects of choice on confidence: Quantum characteristics of evidence accumulation. PNAS, 112, 10645.
8. Turner, E. L., and Schneider, S. (2017). Princeton University. Behavioral Tests for AI Consciousness, Empathy, and Goal Content Integrity. Patent Application №62/532,749.
9. Turner, E. L., and Schneider, S. (in press). “The ACT test for AI consciousness,” in Ethics of Artificial Intelligence, eds M. Liao and D. Chalmers (Oxford: Oxford University Press).
10. Schneider, S., and Turner, E. L. (2017). Is Anyone Home? A Way to Find Out If AI Has Become Self-Aware. Scientific American Blog Network.
11. Franklin, S. (2003). “IDA: a conscious artefact,” in Machine Consciousness, ed. O. Holland (Exeter, UK: Imprint Academic), 47–67.
12. Haikonen, P. (2012). Consciousness and Robot Sentience. Singapore: World Scientific Press.
13. Bishop, John Mark. “Is Anyone Home? A Way to Find Out If AI Has Become Self-Aware.” Frontiers in Robotics and AI, 2018. Academic OneFile, http://link.galegroup.com/apps/doc/A529299138/AONE. Accessed 20 Jan. 2019.
14. Pechen, A. P-Adic Num Ultrametr Anal Appl (2011) 3: 248. https://doi-org.proxy.lib.uwaterloo.ca/10.1134/S2070046611030101
15. K. Kraus, States, Effects, and Operations (Springer, Berlin, 1983)
16. H.-P. Breuer and F. Petruccione, The Theory of Open Quantum Systems (Oxford Univ. Press, Oxford, 2002).