Computer-Generated Consciousness: Holy Grail or Holy Fail?


As part of this year’s Manchester Science Festival, the Museum of Science and Industry recently hosted a free debate entitled ‘Brains and Computers’. The event featured a discussion on whether brains are similar to computers, between Raymond Tallis, a philosopher, novelist and former physician whose research publications have focused mostly on neuroscience and old age, and Professor Steve Furber, a distinguished academic whose work includes designing the BBC Microcomputer and the ARM 32-bit RISC microprocessor. The debate was introduced and ‘chaired’ by the coolest scientist on the planet, the rock guitar-wielding physicist Dr. Mark Lewney. As a psychology graduate, neuroscience enthusiast and guitarist, I did not hesitate to book a ticket, ignoring both the horrendous weather (typical here in Manchester) and the prospect of being crazy tired the next morning (the event ran quite late for someone like me).

The main crux of the discussion was whether it is possible for anyone to produce an accurate computerised representation of the brain and, perhaps more importantly, of consciousness. Dr. Lewney first asked Raymond Tallis to comment. Dr. Tallis was quick to answer with a resounding ‘no’. To him, it is highly unlikely that anyone could produce such a computer simulation of consciousness: consciousness is far too complex to be reduced to mere computations and algorithms. He argued that no computer in the world appears to be conscious. One might propose that certain pieces of technology are able to reproduce human-like actions; a pocket calculator, for instance, can ‘perform’ complex calculations just as well as, or at times even better than, a human being. However, Dr. Tallis insisted that the calculator is merely a tool which humans use to aid their daily calculations. In his words, “it is still you who does the calculations, but on a pocket calculator”. Tallis extended his argument by pointing out that consciousness involves a multitude of things, including people’s awareness of their surroundings, their cultural background, feelings and philosophical beliefs, in which computers (at the moment) are simply no match for humans. He also stated that even if an entity were invented that looks like him, behaves like him and acts like him, but has no idea what it is like to be him, then that entity, whatever it might be, is still not conscious.

After Raymond Tallis’ summation of his arguments, Dr. Lewney turned to Professor Furber and asked for his opinions. It may be important to point out that Prof. Furber and his team are attempting to simulate large-scale brain functions using millions of mobile phone processors, as part of his SpiNNaker (Spiking Neural Network Architecture) project. One of the SpiNNaker project’s objectives is to “provide a platform for high-performance massively parallel processing appropriate for the simulation of large-scale neural networks in real-time, as a research tool for neuroscientists…” (SpiNNaker website). Prof. Furber admitted that creating a simulation of the brain is an incredibly challenging feat, as the brain has billions of neurons. Replicating a human brain would involve hundreds of thousands of microprocessors and might require the output of a power plant. If successful, the project may aid neuroscientists in finding out how the brain works, and how to fix those brains that are ‘broken’.
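To give a flavour of what ‘simulating a neuron’ actually involves, here is a toy Python sketch of a leaky integrate-and-fire neuron, the kind of simplified model commonly used in large-scale spiking simulations. This is purely my own illustration, not SpiNNaker code, and all the parameter values and names are assumptions:

```python
def simulate_lif(input_current, v_rest=-65.0, v_thresh=-50.0,
                 v_reset=-65.0, tau=20.0, dt=1.0):
    """Toy leaky integrate-and-fire neuron.

    Returns the list of time steps at which the neuron 'spiked'.
    All parameters are illustrative, not taken from any real model.
    """
    v = v_rest          # membrane potential starts at rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # The potential leaks back toward rest while integrating input.
        v += (dt / tau) * (v_rest - v) + i_in
        if v >= v_thresh:       # threshold crossed: emit a spike
            spikes.append(t)
            v = v_reset         # reset after firing
    return spikes

# A constant drive produces regular, repeated firing.
spikes = simulate_lif([1.2] * 100)
```

A brain-scale simulation is essentially billions of little update rules like this one exchanging spikes in real time, which is why so much parallel hardware is needed.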

Prof. Furber explained that experiments have been conducted in which circuit boards that simulate parts of the brain were attached to robots with specially designed eyes (ones that resemble human eyes) in order to study vision and visual processing. When asked whether robots and/or computer programmes can simulate learning through reward and punishment, Prof. Furber pointed out that it is possible to fit a ‘bumper’ with a sensor to the front of a robot. The sensor beeps if the robot knocks into something, and the robot can afterwards ‘learn’ not to do it again. He also explained that computer programmes nowadays are becoming so complex that even their own programmers do not know how they will behave, rather like a ‘conscious’ individual, who is unpredictable.
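The bump-sensor idea can be sketched in a few lines of Python. This is a toy version of learning from ‘punishment’, not the actual robot’s software; the action names, reward values and learning rate are all my own assumptions. The robot starts with no preference among its actions, lowers its estimate of any action whose bump sensor fires, and soon stops choosing it:

```python
import random

random.seed(0)  # make the run repeatable

ACTIONS = ["forward", "left", "right"]
value = {a: 0.0 for a in ACTIONS}   # learned value of each action

def bump_sensor(action):
    """Pretend the world has a wall directly ahead of the robot."""
    return action == "forward"

def choose_action(epsilon=0.1):
    # Mostly pick the best-valued action, occasionally explore at random.
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: value[a])

for step in range(200):
    action = choose_action()
    reward = -1.0 if bump_sensor(action) else 0.0   # beep = punishment
    value[action] += 0.5 * (reward - value[action]) # simple value update

# After training, driving forward is valued worse than turning.
```

Even this trivial learner shows the principle Prof. Furber described: behaviour is shaped by feedback from a sensor rather than programmed in explicitly.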

Both Prof. Furber’s and Dr. Tallis’ arguments were persuasive, interesting and based on empirical evidence. Throughout the debate, however, I sat there wondering why neither had (at least attempted to) define consciousness. Granted, Dr. Tallis admitted that, as yet, nobody knows where consciousness lies. As a result of this lack of consensus on a definition, there is no existing measure of consciousness. So how would anyone know whether a robot, or indeed a human being, plant or non-human animal, is conscious if we don’t know what it is or how to measure it? Nevertheless, the debate was still thought-provoking. Regardless of whether the SpiNNaker project produces a simulation of a conscious brain, as long as it can simulate the workings of an ideal human brain it can still be a valuable tool.

I would personally like to thank the organisers and volunteers of the Manchester Science Festival for putting together such an amazing event!

Click HERE to see the full listings of events in this year’s Manchester Science Festival.
