This article on Reductionism vs Emergence has, er, emerged.
https://bigthink.com/13-8/reductionism-vs-emergence-science-philosophy/
Vlad,
And here's a rebuttal:
https://bigthink.com/starts-with-a-bang/universe-reductionist/
Tried to find at least some sketchy explanatory chain from physics to sociology and art, but alas he never got beyond, er, physics, and when he did get out of physics into biology it was a very physical biology. I wouldn't be surprised if eliminativists were round this like flies around the proverbial.
Alas the whole piece emerges as another case of someone who conflates what he does for a living with how the world is.
Emergence fails for him seemingly because, and solely because, it isn't reductionism.
He cannot forgive emergentists for not knowing how emergence works, but is quick to absolve reductionists for not knowing the same things, with an added promissory note that they are sure to if only they have the right gear.
Vlad,
As with so much else, you have this wrong. The “emergentist” claim is: “You, your dog, and the specifics of your person-dog affection could not be predicted, even in principle, even from perfect knowledge of all your elementary particles”. The problem with that bold claim, as Ethan Siegel sets out, is that there’s no justifying argument to support it. That is, why, given both perfect knowledge of the initial conditions and sufficiently massive computing power, could all that not be predicted?
The rest of your effort is just your standard ad hom ascribing of dodgy motives for no apparent reason.
It certainly wasn't an ad hom, Hillside. That comes now. By only talking about physics he was taking the piss. Was it this that attracted you to this article?
Vlad,
Evasion noted. He actually explained why "just" physics is the default, and that the burden of proof is thus with the "emergentist" to explain why it's not. Burden of proof is something that's always foxed you, but even you should be able to grasp the point here, surely?
Firstly, there is his claim of fundamental laws and then what he calls additional laws. The assertion here is that there can be no more fundamental laws and that we know them all.
Secondly, his attitude to what he calls qualitatively novel properties is that they are illusory. This is eliminativist, and that is not an established position.
Since he has not established reducibility as universal, rather than a technique in science, I don't see how he can have the default position.
Given that biology is just applied chemistry, and chemistry is just applied physics, I can see his stance, but I agree with you that to just think of the physics is fundamentally flawed.
Physics, after all, is just applied maths.
Yes, eliminativist.
I did hear somewhere that Maths was shorthand for philosophy...
Vlad,
No. The claim is that, even with perfect knowledge of the starting conditions and unlimited computing power, the emergent phenomena would be impossible to predict even in principle (“You, your dog, and the specifics of your person-dog affection could not be predicted, even in principle, even from perfect knowledge of all your elementary particles”).
So far as I can see there’s no argument to justify that claim, but that’s what it is nonetheless.
That seems a most complex rendering of a "claim".
You might be the only one making that on behalf of people like myself.
If the fundamentals are basically units that can join together, we can predict there will be a large number of complex arrangements of units, and that's about it. We may speculate that these may become repeating or chaotic, but I don't see this predicting anything more. Thus anything more than this would have to be eliminatively reduced to what I've previously said for your contention to work.
Chomsky observed that the pure sciences become less useful when we move into areas such as sociology, psychology and anthropology. Is the language of, say, physics sufficient for ecology or ethology? I would say not.
I don't know who is making that exact claim.
Does the claim of reductionism here match other default position claims you have made? I would say not, since those have been made in the context of empirical or sense data. Here, empirical sense, such as, say, wetness, seems to be dismissed as at worst illusion and at best merely sensing forces between particles and being somehow hoodwinked by qualia. So it is difficult to justify reductionism here as any kind of default position. Also, can you call a philosophy a default position? I'm not sure you can.
No, we can accurately predict interactions between a small number of particles. The interactions themselves do not become more complex in larger systems; there are simply a vast number of them to be considered. The problem isn't one of understanding when one tries to scale up to macroscopic effects, but rather one of scale.
And yet we have increasingly accurate models for any number of phenomena, from air-flow over Formula 1 aerofoils to population growth of bacteria in various media.
It is an unfortunate aspect of reality that it's really, really hard to find perfectly spherical chickens which do not evince wind resistance in order to conduct experiments in ideal conditions... Is the language of physics sufficient for ecology, no, because the language of ecology takes inherent short-cuts so as not to have to account for the innumerable individual sub-atomic interactions. A sufficiently large computer with an appropriate dataset, however, could accurately plot activity so that you could describe in ecological language the exact future of any given piece of land, sea or atmosphere.
O.
A phenomenon is emergent if it cannot be reduced to, explained or predicted from its constituent parts… emergent phenomena arise out of lower-level entities, but they cannot be reduced to, explained nor predicted from their micro-level base
Yeah. It's nonsense.
This is saying that the laws of physics are wrong. It's obviously bollocks.
Yes, geological modelling sounds the ideal job for a supercomputer, but I wonder if we take chaotic events into consideration. Predicting chaos, though, doesn't sound like predicting everything. How do you handle chaos in aerodynamics?
With more processing power and more accurate start data.
The problem with modelling chaos is insufficient capacity; it's different to randomness. Genuine randomness is difficult for a computer to simulate: you typically need some form of analogue input to provide the randomness, and even then by its nature that limits the effectiveness of any prediction or modelling. Chaos, though, is about the instability of a system: how quickly and significantly small changes to input variables can result in a change. That's not difficult in principle for a computer to model, because chaos is still a product of an absolutely deterministic system; it's just a product of a deterministic system which is volatile.
That can be difficult to model, but isn't always.
O.
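A minimal Python sketch of that volatility, using the logistic map as a toy chaotic system; the map, the starting values and the iteration counts are all just illustrative choices:

    # Logistic map x' = r * x * (1 - x): fully deterministic, no randomness.
    r = 4.0  # parameter value in the chaotic regime

    def step(x, n):
        for _ in range(n):
            x = r * x * (1 - x)
        return x

    a, b = 0.2, 0.2 + 1e-12  # two starting points differing by a trillionth
    for n in (10, 30, 50):
        print(n, step(a, n), step(b, n))
    # By roughly 40 iterations the two trajectories bear no resemblance to
    # each other, though every step followed the same deterministic rule.

The divatility is the point: the rule is trivial and exact, but any imprecision in the start data grows until the prediction is worthless.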
Thanks for that. I did hear that chaos was not the same as random.
I still wonder, since we are dealing at the molecular and particulate level, whether terms like complexity, order and disorder equate to the term emergent, or whether the term emergent is, effectively, redundant.
In my experience, and there might be a more technical use of it that I'm not familiar with...
How does it say the laws of physics are wrong? The author is a physicist isn't he?
Because, if you can't reduce emergent behaviour to the fundamental particles that it is made of, it means the fundamental particles are not behaving within the laws of physics.
They can be working fine within the laws of physics without describing the tax arrangements of the principality of Liechtenstein in any meaningful way.
Vlad,
Why do you think that's true even in principle (which is the claim)?
Why do I think fundamental particles follow the laws of physics without reference to tax arrangements in Liechtenstein?
Er, because of the laws of physics.
Tax arrangements are an emergent from particles which don't have tax arrangements.
Vlad,
You don’t get it still. In an entirely deterministic model, if you know precisely all the starting conditions and you have unlimited computing power then there’s no inherent reason in principle that you couldn’t predict the future tax arrangements of Liechtenstein.
What “laws of physics” do you think make that not the case?
So what is it about particles that necessitates tax arrangements? What property of the quark makes them inevitable?
Vlad,
Doesn't matter. What is it about oxygen atoms and hydrogen atoms that "necessitates" wetness? Either you accept that the universe is deterministic in character as the evidence suggests, or you don't. If you do then in principle at least the tax arrangements of Liechtenstein were predictable all the way from the big bang; if you don't though then you have a big job to explain your abandonment of the model.
Oh, and "So what is it about particles that necessitates tax arrangements. What property of the quark makes them inevitable?" is just an expression of your incredulity by the way, not an argument.
This post is a non sequitur to any recent discussion.
No amount of studying particles will reveal anything about tax arrangements but might give you a fair bit of detail about where those particles are.
An infinitely large computer will tell you everything? Will it really? Don't you have to programme it first?
Which brings us to the question of information and the amount of substrate to carry it.
There are too many ifs in your argument and I don't think people are saying what you are alleging anyway.
This discussion seems to be confusing reduction with prediction.
It's difficult to say whether perfect prediction is possible, even in principle, for two reasons.
Firstly, we don't know if the universe is deterministic or not. The problem here is quantum mechanics (well, quantum field theory really). Whether we live in a deterministic universe depends on the interpretation of quantum mechanics; and even if you choose the most popular deterministic version, 'Many Worlds', it doesn't actually mean that what we will observe will be deterministic. The macro world appears to be deterministic but it isn't difficult (as in Schrödinger's cat) to magnify single quantum events to the macro scale (I know the cat was a thought experiment but we are perfectly capable of doing something like that in practice - preferably without killing any cats).
Secondly, chaos also throws a bit of a spanner in the works, if the universe contains genuine continua (e.g. space-time). If it does, then perfect knowledge of the position of just one particle would require an infinite amount of data (it's not even just countable infinity, the infinity of a continuum is 'bigger'). Chaos means that you would need perfect knowledge because it can potentially magnify the tiniest of differences in the starting conditions, given enough time. I guess you could argue that an in principle argument could involve a truly infinite amount of starting data but you would need a literally infinite computer, not just one with unlimited storage and processing power, i.e. 'as big and powerful as necessary'.
All of that is different from reduction. As I explained in #21 (http://www.religionethics.co.uk/index.php?topic=19509.msg857363#msg857363), you can deduce chemical properties of atoms from the basic quantum mechanics of the individual particles. Reduction would mean that, in principle, you can carry on up, explaining biology in terms of chemistry, then on up through behaviour and all the way up to tax arrangements. I don't see any problems with that in principle, and if you claim we can't, then it does call into question the laws of physics.
Very fair assessment, though in terms of moving from chemistry to biology there are linguistic differences between the two, as I was informed by a chemist friend hugely irritated by what he saw as sloppy and inaccurate use of chemical terms and meanings by biologists.
There are linguistic differences at every level, including, of course, physics and chemistry, and probably even more between physics (quantum mechanics) and semiconductor design (which requires quantum mechanics).
True. I wonder, though, how far that reflects the emergence of novel phenomena.
They can be working fine within the laws of Physics without describing the tax arrangements of the principality of Leichtenstein in any meaningful way.
But they do. It's just that humans lack the intellect and resources to understand the tax laws of Liechtenstein in terms of the fundamental laws of physics.
Sounds like understanding Beethoven's Fifth in terms of yoghurts.
You're the only person I have ever met who claims that Beethoven's Fifth Symphony is composed of yoghurt.
You have to be cultured to appreciate it?
Have you been dabbling in the Yakult?
;)
What I am suggesting in a humorous fashion (I thought it was funny, and at the end of the day that's what counts) is perhaps we can't understand tax arrangements in terms of fundamental particles, and not just because we are human.
...from whom Beethoven stole his harmonies?
Stranger,
Good post, but when the argument is expressed as an “if” thought experiment (“if the universe is deterministic, then….” etc) then I can’t see how the author of the article Vlad linked to justifies his comment on reductionism specifically that: “You, your dog, and the specifics of your person-dog affection could not be predicted, even in principle, even from perfect knowledge of all your elementary particles”.
Also, even in the Copenhagen interpretation of QM, which is non-deterministic, you can certainly predict probabilities of future events in principle, if not exactly which events will happen.
“You, your dog, and the specifics of your person-dog affection could not be predicted, even in principle, even from perfect knowledge of all your elementary particles”. This first appeared in Reply 3. Your post.
Predicting future events is what used to be known as clairvoyance or prophecy etc. I am not talking about anything like that.
What I am talking about is an explanatory gap between the components that give rise to emergents and the emergents themselves, novel properties if you like. I see no reason why that could not arise within a deterministic context or a non-deterministic context.
However, here is your opportunity to back up your contention. Name a future event which results in a novel property and give us the probability of it. I'm not talking here about, say, the probability of a number of water molecules becoming something wet, but a new property, or, as you say, a future event that hasn't been seen.
Vlad,
Nor is anyone else.
“Explanatory gap” in practice or in principle? The article refers specifically to “in principle”.
You’re missing the point still. A deterministic model in which you have perfect knowledge of the starting conditions and unlimited computing power (leaving aside for now Stranger’s reservations about the possibility of “unlimited”) is predictable regardless of the complexity involved, at least in principle. If you think otherwise, then it’s your job to explain why not.
The explanation is plain in the definitions of emergent properties, namely novel and not possessed by the components, so the question is how can these be predicted from entities that do not possess them? Over to you. Saying "We can't now because we are only human but we can in principle" seems like meaningless mealy-mouthed nonsense.
Vlad,
The components not having the properties of the phenomenon that emerges from them has nothing to do with whether or not the latter can be predicted from the former (given perfect knowledge of the starting conditions and sufficient computing power etc).
I thought you claimed to know something about emergence?
You are just adding to a list of unjustified assertions concerning your point of view.
Let us review some.
All novel events and properties can be predicted from entities that do not possess them.
We cannot do so.
We could in principle.
These need to be justified.
Yes, this is certainly true, but you quickly run into problems with prediction the more successive probabilistic events you are considering, because the probability of any particular outcome will quickly become tiny, even if the probabilities of the individual events in the chain are quite high.
But the point is not whether it is practical for humans to do it, but whether it can be done in principle.
Vlad's article seems to claim that it is impossible even in principle.
Vlad,
See Replies 10 & 12 for the explanation of “default”. Once again, the claim is: “You, your dog, and the specifics of your person-dog affection could not be predicted, even in principle, even from perfect knowledge of all your elementary particles”.
The default response based on anything we verifiably know so far is that that’s wrong. If you think that default response should be amended or abandoned though, then it’s your job to explain why.
Try to remember this.
Not sure of a claim here, but a speculation based on the inability to demonstrate an asserted principle, as exemplified by your own continual refusal or inability. Nature abhors a vacuum and I abhor yours.
Sorry but I don't see the relevance. The point I made has nothing to do with whether humans can do the calculations or not. If you're trying to make a prediction that involves long chains of probabilistic events, then the longer the chain, the more possible outcomes you have and the lower the probability of even the most probable one becomes. If every outcome has a tiny probability, then it isn't really much of a prediction.
But as I said before, there seems to be a confusion between predicting the future (from a point in time; somebody mentioned from the big bang) and emergence, i.e. predicting properties that will emerge from the combination of entities that don't have them individually. In the latter case, the probabilities might not matter at all - as in the example I gave of making a quantum model of an atom and predicting its chemical properties (which none of the parts have themselves).
Much as I hesitate to wade into an area in which you clearly know more than I do...
...for an in principle argument does the number of probabilistic events matter?
Isn’t this a bit like the black hole paradox whereby it’s now generally thought (as I understand it) that information is not destroyed in a black hole so, given enough computing power, it should be possible to throw, say, Vlad into a black hole and then predictably reconstruct him after evaporation?
Wade ahead.
It depends on what you want to achieve. There is a difference that I've been trying to point out between predicting the future from some point in time and dealing with emergence.
An in principle argument can't change the underlying mathematics, so if you're trying to predict an inherently probabilistic chain of events, then the number of possible outcomes grows very rapidly: for a series of N events, each with only two possible outcomes, there are 2^N outcomes in total. At the same time, the probability of any one particular outcome is the product of the probabilities in the chain that leads to it, and so the probability of even the most probable outcome will rapidly shrink, even if the individual events in the chain are highly probable individually. So, to the extent probabilistic events matter to the prediction you're trying to make, the number of outcomes will quickly become vast and the probabilities tiny for each one. A perfect calculation, with infinite computing power, isn't going to give you much of a prediction.
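To put rough numbers on that, here is a minimal sketch; the per-event probability of 0.99 is just an illustrative value:

    # N two-outcome events give 2**N possible histories, and even the single
    # most probable history only has probability p**N.
    p = 0.99  # illustrative per-event probability of the likelier outcome
    for n in (10, 100, 1000, 10000):
        digits = len(str(2**n))  # how many digits the outcome count 2**N runs to
        print(f"N={n}: 2^N has {digits} digits, likeliest history p={p**n:.3g}")

Even at 99% per step, the best single prediction has probability about 0.37 after 100 steps and about 2e-44 after 10,000.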
The problem with prediction is therefore that we simply don't know if the universe is deterministic or relies on probabilities. You have no problem with perfect predictions for a deterministic universe (except the possible problem with chaos and continua that I mentioned before), but if it relies on fundamental probabilities, then you do.
On the other hand, if you're dealing with emergence, i.e. properties that emerge from the interactions of parts that do not possess them individually, it might not matter. To go back to the example of a quantum model of an atom, you're not really interested in what one specific atom does, you're interested in the generic properties that different kinds of atoms (elements) have. The emergent properties of atoms are mostly to do with the states it is possible for its electrons to be in and, specifically, the energies associated with each one. So, for example, the spectral lines associated with each element correspond to the energy differences between possible states of the electrons, because they represent absorption or emission of photons with those energies, and energy is directly related to frequency. The model cannot tell you about what state one atom is in, or where one electron is, but it does tell you that the possible energies are quantised and what the allowed energies are - and that's all you really need to know. [I'm glossing over a lot of detail here, but I'm trying to get the basic idea across as simply as I can.]
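A concrete instance of that bottom-up deduction, using hydrogen because it is the simple case: the quantum model gives energy levels E_n = -13.6 eV / n^2, and the differences between levels give the measured spectral lines. The constants below are rounded:

    # Hydrogen's energy levels fall out of the quantum model of one electron
    # bound to one proton: E_n = -13.6 eV / n^2. Spectral lines are photon
    # energies E_m - E_n, converted to wavelength via E = h*c / wavelength.
    HC = 1239.84  # h*c in eV*nm (rounded)

    def energy(n):
        return -13.6 / n ** 2  # eV

    # Balmer series: transitions down to n=2 give hydrogen's visible lines.
    for m in range(3, 7):
        photon_ev = energy(m) - energy(2)
        print(f"{m} -> 2: {HC / photon_ev:.0f} nm")
    # Prints ~656, 486, 434 and 410 nm: the observed H-alpha to H-delta
    # lines, none of which is a property of the electron or proton alone.

The lines are generic properties of the element, exactly as described above: the model can't say what one atom will do, but it pins down the allowed energies.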
Talk about opening the proverbial can of worms! I think I'll pass on this for the moment because it risks either going off at a tangent, or possibly just revisiting the same unknowns from a different point of view. We can go back to it if necessary.
I hope what I've said covers the rest of your questions, if not, let me know and I'll try again.
But my question is why. Leaving aside for now the complexity of the calculation, what special characteristic do emergent phenomena have that, just by virtue of being emergent, makes them non-predictable even in principle – in essence the claim of the author Vlad linked to in the OP, and rebutted by the author (Ethan Siegel) I linked to in Reply 1?
In other words, why in principle should it be any more difficult/impossible for the reductionist model to predict an emergent future event than it is to predict a non-emergent one?
Yes I get that iterative probability events spiral quickly (exponentially?) into very, very hard to predict outcomes, but isn’t this still a computing problem of scale rather than one of principle? Take a horse race for example - sure I could study the form, talk to the trainers etc before placing my bet, but there are still vast numbers of unknown variables potentially in play that could affect the result. What though if instead I knew absolutely everything there was to know - every possible component of the horses, every possible thought process they would have, every possible weather parameter, every possible chance of a bird passing by and distracting the horse I fancied, every everything in other words? That is, what if there were no more unknowns that could affect the outcome? And let’s say too that I knew all that ab initio, and also that I had a big enough computer to do the calculations - would I then know in advance the winner with no possibility of not cashing in? Isn’t that what a deterministic model would imply?
Siegel covers this I think:
“Some composite structures and some properties of complex structures will be easily explicable from the underlying rules, sure, but the more complex your system becomes, the more difficult you can expect it will be to explain all of the various phenomena and properties that emerge.
That latter piece cannot be considered “evidence against reductionism” in any way, shape, or form. The fact that “There exists this phenomenon that lies beyond my ability to make robust predictions about” is never to be construed as evidence in favor of “This phenomenon requires additional laws, rules, substances, or interactions beyond what’s presently known.”
You either understand your system well-enough to understand what should and shouldn’t emerge from it, in which case you can put reductionism to the test, or you don’t, in which case, you have to go back down to the null hypothesis: that there’s no evidence for anything novel.
And, to be clear, the “null hypothesis” is that the Universe is 100% reductionist. That means a suite of things.
• That all structures that are built out of atoms and their constituents — including molecules, ions, and enzymes — can be described based on the fundamental laws of nature and the component structures that they’re made out of.
• That all larger structures and processes that occur between those structures, including all chemical reactions, don’t require anything more than those fundamental laws and constituents.
• That all biological processes, from biochemistry to molecular biology and beyond, as complex as they might appear to be, are truly just the sum of their parts, even if each “part” of a biological system is remarkably complex.
• And that everything that we regard as “higher functioning,” including the workings of our various cells, organs, and even our brains, doesn’t require anything beyond the known physical constituents and laws of nature to explain.
To date, although it shouldn’t be controversial to make such a statement, there is no evidence for the existence of any phenomena that falls outside of what reductionism is capable of explaining.”
This seems persuasive to me. Does it to you?
OK (I think) but I’m still not seeing a qualitative difference between the predictability of non-emergent outcomes and emergent ones.
Yeah I know - seems I’ve been listening to the Infinite Monkey Cage podcast a little too much lately (!) but the point was just a simple one - i.e., that no matter how vast (and currently unachievable) the computing power necessary, that’s no reason to invalidate the hypothesis.