I loved this paper; below is the punchline for me, I reckon (it also summarizes where they're at nicely, I think - they're finding out a lot of possibly very important things!):
""The discovery of unexpected problem-solving competencies (such as Delayed Gratification and segregation) that are not apparent from the component policies and algorithms themselves is a critical research program.""
Building something that isn't Turing-complete is surprisingly hard once it's complex enough.
If basal intelligence is present in diverse computational structures, then weak intelligence is everywhere.
If weak intelligence is everywhere, Earth-like planets are everywhere, ... where are the aliens?
Personally, I blame game theory. Too many agents too smart in one place, you get conflicts, and eventually someone breaks an atom apart in your direction.
Or do you need emotions to have conflict? Are there basal emotions?
I'm usually not worried about AI uprisings, but I do believe in the possibility of conflict.
> If weak intelligence is everywhere, Earth-like planets are everywhere, ... where are the aliens?
A species needs more than raw intelligence to create technology. They also need:
1. Dexterity: dolphins and ravens are intelligent, but they have no fine motor manipulators, so there is no way to build technology.
2. Reasonably high bandwidth communication: other primates are intelligent, social and dextrous, but don't have sophisticated language for precise and expansive communication.
3. Social inclinations leading to building cultural knowledge across generations: octopuses are intelligent, are reasonably dextrous, and their colour changing ability could possibly be used for reasonably moderate bandwidth communication, but they are largely solitary creatures.
There are probably even a couple more.
Edit: come to think of it, I think a species that builds technology would need all of the above features, plus some distinct physical disadvantages that drive it to compensate by developing tools and knowledge to survive. For instance, humans are physically quite weak compared to other primates.
Can't make fire under water either.
>Building something that isn't Turing-complete is surprisingly-hard once it's complex enough
The most basic computational device that is studied is the (deterministic) finite automaton, which corresponds to regular languages (regex, although actual implementations are usually way more powerful). If you add a stack (basically, to count parentheses) you get context-free (CF) languages, which correspond to the syntax of most programming languages. Add a second stack and you're already Turing-complete (TC).
If you know that, you can add any extra power to your machine that is strictly less than a second unbounded stack, and you get a new language class! For example, a second n-bounded stack. If you do so, you will easily get an infinity of language classes. The point is, are they interesting? In particular, the language classes we focus on have some good properties that most arbitrary classes tend to lack.
The Chomsky hierarchy has context-sensitive languages in between CF and TC, but it is already not a very natural class, so I've never seen it discussed anywhere, even in complexity theory research -- which focuses a lot more on links to computability theory, or on subtle distinctions between deterministic and non-deterministic classes (most famously P vs NP). For the latter, studying analogs of the complexity classes on restricted models of computation is an interesting approach, since Turing machines are difficult to work with.
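To make the stack-adding steps concrete, here's a tiny Python sketch (my own illustration, not from the comment above): balanced parentheses is the textbook language that no finite automaton can recognize but a single stack handles, and two unbounded stacks already give you a Turing machine's tape.

    # A single stack recognizes the context-free language of balanced
    # parentheses; a finite automaton can't, since it needs unbounded memory.
    def balanced(s: str) -> bool:
        stack = []
        for ch in s:
            if ch == '(':
                stack.append(ch)      # push on open
            elif ch == ')':
                if not stack:
                    return False      # unmatched close
                stack.pop()           # pop on close
        return not stack              # accept iff nothing left unmatched

    assert balanced("(()())") and not balanced("())(")

    # Two unbounded stacks simulate a Turing machine's tape: one holds
    # everything left of the head, the other everything at/right of it.
    def head_right(left: list, right: list, blank: str = '_') -> None:
        left.append(right.pop() if right else blank)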
> If weak intelligence is everywhere, Earth-like planets are everywhere, ... where are the aliens?
Most certainly outside of our light cone.
It took 4 billion years for this planet to produce intelligent life that can send out radio signals. If we were to wipe ourselves out, it would take another half a billion years for another intelligent species to appear on this planet (probably? - using the Cambrian explosion as a benchmark, FWIW).
We've been emitting radio signals for a century so far, and mayyyyybe we'll last another 1000 years before we blow ourselves up? This is something we can only conjecture about at this point.
But just for the sake of argument, let's say that a post-radio-emissions intelligent species lasts 10,000 years. This means that our light cone must match up to a 10,000-year period in a planet's 4-billion-year history (or 500-million-year repeat) TODAY in order for us to detect anything at all. The chances of that are vanishingly small. And they're certainly not going to visit us a mere 100 years after we began emitting detectable signals.
It's not just a problem of space; it's a problem of time (and timing).
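A back-of-envelope version of that timing argument in Python, using just the numbers above (the inputs are the comment's assumptions, not data):

    # Chance that a given planet's 10,000-year radio-loud window
    # overlaps with "now", if such windows recur every ~500 Myr.
    window_years = 10_000        # assumed lifetime of a radio-loud civilization
    cycle_years = 500_000_000    # assumed time to (re-)evolve intelligence
    print(f"{window_years / cycle_years:.0e}")   # 2e-05, per planet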
> using Cambrian explosion as a benchmark
I think that's wrong. And thinking of it like that provides another possibility:
The dinosaur era was a local maximum that couldn't develop human-like intelligence and technology. Then around 65 million years ago, Earth got "reset" and broke us out of the local maximum. Only after that did life have a chance to develop in a different direction and end up as us.
Seems at least possible to me that life is quite abundant, but local maxima that can't develop intelligence/technology might be more common than we think, and it's easy to get stuck there. Earth just got lucky.
Extending your logic (which is convincing), we, too, could be a local maximum and a form that is relatively low on the “cosmic intelligence scale”, if there is such a thing and if it is linear-ish.
In half a billion years our sun will start the end of its life cycle and boil away all the oceans on Earth. So life as we know it will end then. But intelligent life probably won't take that long to evolve again, and we have several species today with enormous potential, if they only manage to evolve tool use somehow. We are where we are today in large part because of our prehensile extremities.
Why isn't at least one species expanding across the cosmos though? The light speed limit isn't really much of a hurdle for cosmic timescales.
The guy on the Cool Worlds YouTube channel (Department of Astronomy, Columbia) has argued that we're still in the early days. The conditions for intelligent life in the galaxy haven't been around for that long.
Maybe interstellar colonization is just never gonna be worth it?
We could already colonize Antarctica, or the sea - those are easier to reach, supply, and colonize than other planets, but we are not trying.
Most of our past exploration/settling efforts happened because there was some gain to be had; it seems quite plausible (if somewhat bleak) to me that interstellar travel could just remain pointlessly expensive regardless of technological progress.
> We could already colonize Antarctica, or the sea-- those are easier to reach, supply and colonize than other planets, but we are not trying.
On the other hand, we were and still are present in Antarctica, have a permanent base at the South Pole, etc.
Perhaps there is, but once again it would have to be visible from our light cone in order for us to even be capable of detecting it. Even a civilization lasting 100,000 or even a million years is still tiny, and highly unlikely to intersect with our small window of awareness.
And even if these aliens have cracked FTL travel, who's ever going to find our little planet on the ass end of some mediocre galaxy, with an EM emissions bubble that has only covered 100 light years so far? Needle in a haystack.
Hmm, let's say there are 100 billion stars in our galaxy and one billion habitable planets. Assuming we are average, half, or 500 million, have civilizations older than ours. We could assume some sort of distribution where, say, 10% are older than a million years. So 50 million civilizations older than a million years in the Milky Way. In a million years moving at 0.1c you travel 100,000 light years, or across the whole galaxy.
They could have been here already before modern humans even existed.
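Spelling out that estimate in Python (all inputs are the commenter's assumptions, not data):

    stars = 100e9                        # stars in the galaxy
    habitable = stars / 100              # assumed one billion habitable planets
    older_than_us = habitable / 2        # half are older than us, if we're average
    older_than_1myr = 0.10 * older_than_us
    print(f"{older_than_1myr:.0e}")      # 5e+07 civilizations >1 Myr old

    # Travel check: at 0.1c, a million years crosses the whole galaxy.
    print(f"{0.1 * 1_000_000:,.0f} light-years")   # 100,000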
Are you wondering why Conway’s Game of Life or the C++ type system isn’t trying to communicate with us from beyond the stars?
Beyond the stars, a static void. Ions, but no aliens.
Till one day SETI finds a 5k line template compilation error!
Rejoice! We are not alone! Aliens have to deal with C++ too!
> Or do you need emotions to have conflict?
Microbes and insects have massive conflicts.
> If weak intelligence is everywhere, Earth-like planets are everywhere, ... where are the aliens?
Someone has to be first (in our speed-of-causality bubble), maybe it's us?
Doesn't that seem even less likely? Not only do we exist but we're the first?
Not without more information.
We don't know how long it takes to evolve our level and kind of intelligence, nor if intelligence like ours implies successful expansion such that it could eventually be noticed from the kinds of distances we can sense with our tech, nor how fast it would actually expand.
If the first in any light cone dominates that light cone, expanding at a high fraction of c, then almost everyone starts off thinking they're the first.
We may be the first in our own light cone, and that light cone may be just about to start intersecting with that of a galaxy where every star has been completely Dyson'd by a Kardashev 3 civilisation.
If the civilisation is two million years older than us, that galaxy could even be the Andromeda galaxy.
No, it actually seems to be the most likely explanation. The universe is so young yet. It's just a cosmic blip of time since the current generation of stars began forming.
The Fermi paradox can be answered in so many ways, and is tied to questions like what is the purpose of life and the universe.
Beyond the existence of a single person (such as myself, or you) what do we exist to do?
Is it to learn the universe? (Curiosity)
Is it to decrease entropy locally in order to increase it globally? (Spend energy)
Is it to increase complexity? (Do interesting things, foster maximum diversity?)
For example, if the purpose is indeed curiosity, maybe all we will need is one Dyson sphere in order to understand the universe. We could have a dozen super-intelligent life forms in our galaxy alone and probably wouldn't notice them. It would basically just look like a quiet black hole the size of a star.
In my opinion, life is just self-replicating tumbleweeds of matter that drift towards local spaces with high energy. The ideal "shape" of these tumbleweeds is gradually approximated via the algorithm of evolution, filtering out the tumbleweeds that fly too close to the sun and so on. Intelligence becomes an emergent property of these optimal shapes, but intelligence doesn't change the outcome, broadly speaking, they still drift towards local spaces with high energy.
Individual organisms live their lives pursuing energy, with every breath, with every meal. Even superorganisms, such as a nation, will (attempt to) pursue energy in the form of a thriving economy, which influences the energy allocation of the organisms that make it up.
Even absent these tumbleweeds, high-density matter (high energy) will literally bend space and attract other matter to itself through gravitational force. It's entirely different from what I've already discussed, yet intuitively similar?
How does this apply to the Fermi paradox? Maybe the idea that the algorithm of evolution will eventually lead to life self-propagating across the universe is flawed. Maybe the spirit of exploration is not universal. Maybe the simple fact that interstellar travel and communication are energy-inefficient is enough to explain the aggregate effect we are seeing?
It sounds like your take is the entropy one, but with a caveat that dark energy prevents indefinite growth.
I thought a Dyson sphere would emit enormous radiation anyway? Like, how do you convert photons into electricity and use that electricity afterwards with 100% efficiency? Is it even theoretically possible? There should be lots of heat emitted as infrared light.
Hard to fathom what engineering a civilization like that might be capable of; maybe it would emit extremely hard-to-detect radio noise.
A Dyson swarm is nothing difficult to achieve. All you need is the ability to put a satellite into orbit. That's the minimum. The other part is mass-manufacturing them.
How much energy and material is needed to manufacture, launch, position, maintain, and then leverage a Dyson swarm?
>where are the aliens?
It's probably a Plato's Cave situation. You're chained there, staring at flickering shadows on the wall asking, "Where are the aliens?".
Which is to say, the dimension that must be traversed in order to meet the aliens is an invisible one.
Where is it written that intelligent beings must create the means for interstellar communication, or any technology at all?
Imagine a planet with highly intelligent whales who have no way to manipulate their environment (hands) and no need to.
Experience on Earth suggests that they would eventually evolve hands.
That is incorrect: dolphins are unlikely to evolve hands, and our ancestors evolved hands before they became intelligent (probably to grab branches). It was very lucky that a good brain evolved in a body that already had hands.
> very lucky that a good brain evolved in a body that already had hands.
The other way around: hands add evolutionary pressure towards becoming more intelligent. (The ones that understand how to use their hands and tools better...)
The lack of knowledge.
Dolphins do have organs with which they pick up things like rocks or shells and they are able to give them to each other.
They use their sexual organs as "hands"! Both males and females.
In the tree of life, brains correlate much more strongly with locomotion than with hands. The moment you need to do (inverse) kinematics to plan an immediate action, to plan sequences of motions, and to plan a hunting or fleeing strategy is what put pressure on organisms to evolve brains. Static lifeforms can be very complex and have complicated genomes, but you won't find brains in them...
If elephants were carnivores, with their trunks they would have evolved efficient methods to hunt, and would probably be the dominant species on Earth's land surface.
All this without hands.
The fact that they are vegetarian gave us the chance to do that evolution ourselves.
That's because there's land here, but what about a planet with only water, or just not enough land?
But maybe water-surface-only (no land surface) is unlikely.
Why would water prevent the evolution of hands? Lots of sea creatures have claws.
The sea is also not all that different from an atmosphere with a higher density, in principle; we live "under-air".
Looking at this planet, it seems less likely to happen.
Maybe high density (water) makes tools less useful, and thus hands less useful, since you cannot move a tool particularly fast under water compared to on land. I suppose you've tried throwing a stone underwater - compare with throwing on land.
From this it seems to follow that creatures with human-like intelligence are less likely to appear if the density of the liquid or gas surrounding them is too high. (Dolphins are bright, but not that bright.)
There would still be enough earth like planets with land.
Every organism manipulates its environment in some way. The ones that can manipulate it in a way that allows them access to more resources than the others will outcompete the ones who don't.
Evolution doesn't really work like that. It's just a low bar that everything has to cross from time to time. Being a specialist, very advanced hunter is in no way better than being a dumb jellyfish that spawns billions of offspring.
Still hard to smelt metal if you live in the ocean.
That's why genetic mutations happened and they grew hands and started smelting on land.
This happened.
Emotions are just a form of intelligence that has calcified over evolutionary time. Each one of our emotions can be linked with survival and/or reproduction.
The last coauthor listed on this preprint is Michael Levin, who has a lot of other cool work.
In particular, this talk of his from NeurIPS 2018 includes fascinating biology research results, as well as musings on the future of biologically-inspired artificial intelligence.
https://youtu.be/RjD1aLm4Thg
HN discussion about the talk: https://news.ycombinator.com/item?id=18736698
Artem Kirsanov has many great videos on this subject, most of which I fail to absorb on first pass.
https://www.youtube.com/@ArtemKirsanov/videos
He's on Lex Fridman's podcast as well - https://www.youtube.com/watch?v=p3lsYlod5OU
Great conversation.
"The Collective Intelligence of Morphogenesis: a model system for basal cognition" by Michael Levin
https://www.youtube.com/watch?v=JAQFO4g7UY8
And from Machine Learning Street Talk
Michael Levin - Why Intelligence Isn't Limited To Brains.
https://www.youtube.com/watch?v=6w5xr8BYV8M
This is fun, but I would call it an example of self-organization / self-organized complexity, not intelligence.
Cell membranes assemble themselves, and so do micelles (little spherical lipid baubles) or, to take a non-living example, lipid bilayers.
We would not call such a system intelligent.
One of Levin's main points is to describe agency/intelligence as a continuous spectrum rather than an on/off thing, so within the framework of thought that this paper exists in, it's no longer meaningful to treat the 'is it intelligence?' question as having a boolean answer.
I completely agree with this myself (and have for a long time, before I even read any of Levin's frankly amazing work), and I think of the answer to this as more like a float/real-numbered thing - the amount of consciousness/intelligence/agency as a fraction of overall energy usage, or something, maybe? And that probably will lead to one constantly having to try to work out where the heck zero and one are all the time, eh? heheh : )
I think it's fun and fascinating as well, for sure, and I think that even stuff as simple as a reaction-diffusion simulation can actually contain some tiny elements of agency (just like this paper does with its self-sorting cells!). Who cares what the scale is, right? It's the same phenomenon, at the tiniest scales in my opinion, that led to life, that led to humans.
Whatever the purpose of this "sliding scale" is, it has little to do with the property of intelligence we are concerned with generally.
When we say one species is more intelligent than another, one breed of dog, or one person -- we aren't describing a difference in their organisational structure. And when we want to build intelligent systems we aim to build things that have specific capacities, not that have this sort of abstract organisation which is entirely orthogonal to these capacities.
One dog is more intelligent than another if it can read the intentions of its owner (theory of mind), coordinate in its environment (e.g., open doors, etc.), plan more extended actions, pretend/fake actions to confuse the owner/other dogs, and so on.
These ranges of capacities do not follow from an abstract 'organisational' description of the dog. The great pseudoscience of this 'computer science' thinking is that it abstracts to a degree of description that is almost universal, then claims to make fine-grained distinctions.
That the earth-and-moon are 2, and the tree-and-bird are 2, does not mean the earth-and-moon and the tree-and-bird share any capacities at all. To instantiate an abstract description implies almost nothing.
Yes, I agree there's little point in instantiating arbitrary abstract structures; however, that isn't what I was suggesting. The related structures and properties identified in Levin's work are specifically those of the 'agential' type (going from the molecular-network/cellular scale right up to and past human level): goal-following meta-rules are what would be in the 'class' - not just measurements or information attached/correlated to something because some numbers happen to be equal, but a type of memory of intrinsic meta-self-serving micro-behaviors that actually do inform the micro-elements how to behave.
I think that self-organization creates the possibility for the natural nucleation of agential-behavior (akin to crystal-formation) and when the system is also replicative overall, it might cause itself to happen again too! (and down the rabbit-hole we go!)
And I wonder whether it's really OK to just claim there is no micro-organizational structure that gives rise to this meta-goal-following; I mean, this is what Levin's work is all about! My own consciousness/'intelligence' is brought about by many smaller agents (my cells), and I think there's really no sharp categorical barrier here - goal-seeking (and goal-hitting, for the winners of evolution) are pretty clear strategies from the molecular scale to the blue-whale-sized (and humans too, I reckon!)
If you mean to say that materials which have the property of self-replication with variation are essential to intelligence, then I agree with you.
One finds, in reality, that crystals do not meet this requirement -- the only known chemistry to provide it is a highly specific subtype of carbon chemistry called biology.
Yeah, I think we are mostly on the same page for sure; I would nearly go so far as to say that self-replication with variation is 'essential' for intelligence.. and I do utterly love evolution as an idea - but maybe it's just what happened to produce the main examples of intelligence we've observed so far in recent history on Earth? (Personally I don't think it's just humans that are intelligent, either! So many animals, and probably other things, are extremely smart too! Just very hard to measure!)
And yeah, I was only using the idea of a crystal as an analogy; all replicators (all life?) are a bit like a kind of smooshy space-and-time crystal in a way though, right? Especially the multicellular kind!!
It's the regeneration/continuation of information of who-knows-what type? All types! Obviously there are genes and Levin's vmem and other epigenetic stuff on the biological side, but now there's also youtube, video games, recipes, traditions? All memetic replicators, or meta-memetic ones that control which other memes you allow in your life? I like to remember that all of them are subject to the rules of evolution too - it's a success for the memes themselves if they get us to carry them forward!! They can be wildly different types of memories across epic amounts of time too? Bloody amazing to think about, I reckon! Evolution FTW!
> The great pseudoscience of this 'computer science' thinking
You spelled Nobel-level science wrong.
Most of Levin's work is not easy to understand or appreciate. Our university lab has been doing cooperative intelligence for decades. The insight in some of his work is revolutionary. He shows why randomness is much more intelligent than most scientists think.
I just loved this bit in the paper; it could so easily be taken off on so many tangents!:
""Delayed Gratification is used to evaluate the ability of each algorithm undertake actions that temporarily increase Monotonicity Error in order to achieve gains later on. Delayed Gratification is defined as the improvement in Sortedness made by a temporarily error-increasing action.""
Is it slightly analogous in some ways to the avoidance of getting stuck in local maxima perhaps?
Or maybe the fact that the path of minimal effort != sequence of paths with locally simplest steps.
I'd love to know more about what you mean!
Is it that, with a bit more upfront investment (or 'delayed gratification'), a system might be able to find shorter (or less energy-intensive) paths through the space it is navigating to get to its targets?
I think it certainly does seem like, when there is essentially some 'computational slack' (an 'extra line', say - slightly over-provisioned, or just set to explore more), there's a good chance that could yield a better (cheaper/shorter) result than a brute-force (minimal-effort, perhaps more technically 'efficient') solution?
Unfortunately I don't really feel like I went anywhere with what I said, but your very short comment made me wonder many things about what you meant! Appreciated!
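One toy way to operationalize the quoted definition, in Python (my own sketch; the paper's actual metric may well differ): scan a trace of Sortedness values, flag the actions that temporarily decreased Sortedness (i.e., increased Monotonicity Error), and credit them with whatever improvement follows.

    # Hypothetical reading of "Delayed Gratification": for each
    # error-increasing step, credit the net Sortedness gain that the
    # system later achieves beyond where it stood before the dip.
    def delayed_gratification(trace):
        credit = 0.0
        for t in range(1, len(trace) - 1):
            if trace[t] < trace[t - 1]:           # Sortedness dipped here
                later_best = max(trace[t + 1:])   # best Sortedness afterwards
                credit += max(0.0, later_best - trace[t - 1])
        return credit

    # A made-up run: a dip at step 2 that pays off by the end.
    print(delayed_gratification([0.5, 0.6, 0.4, 0.7, 1.0]))   # 0.4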
I recommend Michael Levin’s YouTube channel. Lots and lots of fascinating discussions.
A beach sorts itself by the size of its sand grains, just by applying physics, so any array copied by parallel processes will sort itself given enough time, by the computation time spent on each element, aka the size of the rocks.
I think if you 'just apply physics' (let's say 'just apply computation', shall we?) then an array of numbers can only hope for this to happen in a kind of bogosort-style way: shuffle them all, and if now sorted, return! Else, loop and shuffle again, and so on. Such a hilarious sort algo.. but wait till you hear about bogobogosort! lol!
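For the record, the joke spelled out in Python (please do not use this for anything):

    import random

    def bogosort(a):
        # "Just apply physics": shuffle until the array happens to be sorted.
        while any(x > y for x, y in zip(a, a[1:])):
            random.shuffle(a)
        return a

    print(bogosort([3, 1, 2]))   # eventually [1, 2, 3]; expected O(n * n!) shuffles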
But different sizes of sand do move against each other differently, and I think maybe that aspect is slightly reminiscent of the every-cell-for-itself aspect of the cells described in the paper, and especially how the different rules allow different swapping operations when the swap target is smaller or larger than the current cell. So I think it's a very relevant observation!