11 Comments
Jun 16 · Liked by Andrew Cutler

I think this makes me feel better 🤔🤔


I have a much more dumdum level of complacency about AI rooted in the fact that we can always unplug it.

By that I mean: AI runs on energy sources it can't self-generate. There are still a lot of human steps involved in drilling oil, mining, and getting materials from point A to point B in an accessible form. Maybe someday there will be incredibly seamless robots who can do this physical side of the work, but current robotics is still pretty clunky. Why even worry about something as high level as consciousness when just picking stuff up and putting it down again can only be done under fairly controlled conditions when you are asking a machine instead of a person to do it?


>Why even worry about something as high level as consciousness when just picking stuff up and putting it down again can only be done under fairly controlled conditions when you are asking a machine instead of a person to do it?

The people who worry think that consciousness may be easier to solve than washing dishes, actually. One of the legitimately surprising things about AI has been that things like coding and playing chess were automated before driving a truck. Some researchers (and enthusiasts) took that lesson very hard, especially those who think consciousness is illusory or not very interesting.

Jun 25 · Liked by Andrew Cutler

Doesn't the proof of the incompleteness theorems rely roughly on the ability to reliably do arithmetic? If so, does the fact that humans can't reliably do arithmetic in their heads suggest that they're not the sort of formal system that the incompleteness theorems describe a limitation of?

Regarding the relevance realisation argument: humans have roughly 92 million rod cells (and a few million cones) in their eyes, some finite number of mechanical sensors in their skin and ears, etc. These are all much smaller numbers than infinity. Yes, they are much larger than the number of tokens in ChatGPT's attention window, but the amount of, for example, visual information we take in is not a huge multiple of what we might already use in an AI model. It is often not useful to model human behaviour as algorithmic because of our complexity, but scientists have recently explained insects circling lights as a bug in their algorithm for level flight: https://www.nature.com/articles/s41467-024-44785-3
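A toy way to see that "bug in the level-flight algorithm": the paper's finding (in 3D) is that insects keep their dorsal side tilted toward the brightest light, which works for the sky but misfires near a point source. Here is a minimal 2D sketch of that idea (my own simplification, not the paper's model), where an agent simply steers to keep the light 90° off its heading and ends up orbiting it forever:

```python
import math

# Toy 2D version of a "keep the light on your dorsal side" rule.
# With a distant light (the sky) this keeps flight level; with a nearby
# point source it produces endless circling instead of escape.
light = (0.0, 0.0)
x, y, heading = 5.0, 0.0, 0.0      # start 5 units from the light
speed, turn_rate, dt = 1.0, 2.0, 0.05

for step in range(400):
    bearing_to_light = math.atan2(light[1] - y, light[0] - x)
    desired_heading = bearing_to_light - math.pi / 2   # keep light on the left
    error = (desired_heading - heading + math.pi) % (2 * math.pi) - math.pi
    heading += max(-turn_rate * dt, min(turn_rate * dt, error))
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    if step % 100 == 0:
        print(f"step {step}: pos=({x:.2f}, {y:.2f}), dist={math.hypot(x, y):.2f}")
```

The printed distances stay roughly constant: a perfectly sensible local rule, applied in the wrong environment, yields the circling we interpret as confusion.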

In general I tend to find that people who claim things like "consciousness is definitely different from computation" seem to have a view of computation limited to the kinds of systems designed by humans, at every level of abstraction, to be simple and logical enough for people to understand. Even these systems can display unexpected behaviour, and evolution is not limited by a need to understand its own algorithms.


>Doesn't the proof of the incompleteness theorems rely roughly on the ability to reliably do arithmetic? If so, does the fact that humans can't reliably do arithmetic in their heads suggest that they're not the sort of formal system that the incompleteness theorems describe a limitation of?

The way Penrose presents it, what matters is abilities, not the median case. Humans can, in principle, do certain mathematical proofs that a computational system could not. Lots of people disagree with his interpretation (before even getting to the instantiation via microtubules).
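For readers who want the shape of that argument, here is a compressed sketch of the standard Gödel / Penrose–Lucas reasoning (my paraphrase, not Penrose's exact formulation):

```latex
% Gödel's first incompleteness theorem:
\[
  F \text{ consistent, effectively axiomatized, containing arithmetic}
  \;\Longrightarrow\; \exists\, G_F:\quad F \nvdash G_F \ \text{ and } \ F \nvdash \neg G_F,
\]
% where $G_F$ in effect asserts its own unprovability in $F$; so if $F$ is consistent, $G_F$ is true.
% Penrose's step: a mathematician who accepts $F$ as sound can "see" that $G_F$ is true,
% hence (he argues) human mathematical understanding cannot be captured by any single such $F$.
% The contested premise is whether humans can reliably know that a given $F$ is consistent,
% which is why the reply above stresses in-principle abilities rather than the median case.
```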

>Regarding the relevance realization argument...

The problem of interacting with the world does seem ~infinite to me. You have an infinite world filtered through a finite (but still enormously large) number of sensors, which then presents one with ~infinite choices. I can also see how one would say the number of choices is actually binned, and not all that many. And the sensor data is greatly downsampled as well. And maybe that can get one within orders of magnitude of what GPT4 or GPT7 do. The cool thing about AI is that it turns these into empirical questions.
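A rough back-of-envelope for that "within orders of magnitude" hunch, with loudly hand-wavy numbers (the photoreceptor count, the oft-quoted ~10 Mbit/s optic-nerve estimate, the 128k-token context, and the bits-per-token figure are all my assumptions, not anything from the post):

```python
# Back-of-envelope: raw human visual input per waking day vs. one LLM context window.
# Every number below is a rough, commonly cited estimate, not a measurement.

photoreceptors_per_eye = 9.7e7          # ~92M rods + ~5M cones (rough)
optic_nerve_bits_per_sec = 1e7          # oft-quoted ~10 Mbit/s estimate per eye
waking_seconds_per_day = 16 * 3600

visual_bits_per_day = 2 * optic_nerve_bits_per_sec * waking_seconds_per_day

context_tokens = 128_000                # assumed long-context model
bits_per_token = 17                     # ~log2 of a ~100k vocabulary, generous
context_bits = context_tokens * bits_per_token

print(f"visual input per day : ~{visual_bits_per_day:.1e} bits")
print(f"one context window   : ~{context_bits:.1e} bits")
print(f"ratio                : ~{visual_bits_per_day / context_bits:.0e}x")
```

On these shaky assumptions, a waking day of vision is roughly five to six orders of magnitude more bits than one context window, and most of that is redundant and heavily downsampled before it reaches cortex, which is exactly the binning/downsampling point both comments are circling.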

Jun 21 · Liked by Andrew Cutler

I seem to be typing a comment - - - love the paperclip theme.


Because of microtubules "all the way down" (stolen from Sabine: https://www.youtube.com/watch?v=R6G1D2UQ3gg) I had forgotten Penrose's use of their intercession in the… gap Gödel provides. Read it so many years ago.

I feel Julian Barbour's book The Janus Point may become a similar book to Penrose's. At the very least it allowed me to think of the tensegrity of a universe of particle-wave systems, and move on from the virtual grids of space or space-time Newton provided. (https://whyweshould.loofs-samorzewski.com/is-the-universe-a-calculator.html)

There are many types of gap. Is consciousness a gap-dancing survivor? Is intelligence the epiphenomenon? Has evolution, in allowing life to survive, embedded the spark in a machine, so that we can no longer see the gap between machine and life/consciousness in ourselves?

https://en.wikipedia.org/wiki/Minds,_Machines_and_G%C3%B6del

Some call this the Gödelian machine. https://whyweshould.substack.com/p/reading-the-relativistic-brain-how

I used to think that maybe parallel rather than serial processing would get us there, emergently; now I look at quantum stuff… hmmm, so hard, we love throwing stuff into the… gap


I've run this through ElevenLabs to create a multi-voiced narration, if that's OK:

https://askwhocastsai.substack.com/p/the-doomsday-debate-by-andrew-cutler


So fast on the draw, unbelievable

Jun 16 · Liked by Andrew Cutler

I've refined my process quite a bit since last time, identifying quotes and assigning random voices. It's getting more and more "Push button".


Substack should hire you. I added your post to this essay, btw. Thanks!
