Many consider that one needs consciousness in order to deliberate, to consider alternatives and decide between them; that consciousness is a necessary and constitutive part of this process. And, for some, this consciousness leads to, and certainly supports, the idea of free will. Here I want to briefly question the presumption that some form of consciousness is necessary for a deliberative process.
Consciousness is itself a highly problematic term, with many debates past and present on this alone, quite apart from the specific issue I want to address. So rather than get sidetracked into what it is or is not or could be, whether it is natural, naturalisable, epiphenomenal, an illusion, and so on and so forth, it is better to approach the question from the other side: to see how deliberation could work and whether it requires some notion of consciousness, whatever that is, or not.
At the very least, deliberation is searching a space of options, possibly finding new ones, sometimes discarding old ones along the way, and at some point selecting one option. If there is only one possible outcome then there is nothing to deliberate over; it is only when there is more than one possible outcome, and the data appears causally under-determined, that deliberation is needed. As I have argued previously, regardless of the appearance of causal under-determination as a trigger that initiates the deliberative process, the deliberative process itself is part and parcel, constitutive, of the causal process; what else could it be? Further, we expect that any alteration in this search space could lead to different results. The fact that going through the same dynamic search space, with the same initial conditions, leads to the same result is not pertinent to the issue at hand; it is still a deliberative process.
How important is the term "appearance" in the above paragraph? Is it somehow essential? There is nothing in principle that requires it; it was just a useful, commonly understood term to help describe deliberation. It is certainly feasible that such a process could be performed by an intelligent agent, a non-biological and, in whatever important sense, non-conscious actor, that has to perform some sort of deliberation prior to acting, where none of the initial options presented to it are acted upon, where other options are discovered in the process of deliberation, and where one of these is finally selected. Indeed, there are various solutions, at various levels of sophistication, that already do this. So the term "appearance" is not required in a description of deliberation; it is not necessary.
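As a toy illustration, the deliberation described above, discovering new options, discarding unacceptable ones, and finally selecting one, can be sketched in a few lines of Python. The helper functions `evaluate` and `expand` are hypothetical stand-ins for whatever scores options and generates new candidates from old ones:

```python
def deliberate(initial_options, evaluate, expand, rounds=3):
    """Sketch of deliberation as search over a space of options.

    `evaluate` scores an option (negative means unacceptable);
    `expand` proposes new options derived from an existing one.
    Both are illustrative placeholders, not a real agent's machinery.
    """
    options = list(initial_options)
    for _ in range(rounds):
        # Discover new options suggested by those under consideration.
        for option in list(options):
            for new in expand(option):
                if new not in options:
                    options.append(new)
        # Discard options that turn out to be unacceptable.
        options = [o for o in options if evaluate(o) >= 0]
    # Select one option.
    return max(options, key=evaluate)


# Toy run: options are integers, doubling generates new candidates,
# and odd numbers are rejected along the way.
best = deliberate([1, 2, 3],
                  evaluate=lambda n: n if n % 2 == 0 else -1,
                  expand=lambda n: [n * 2])
```

Note that none of the initial options need be the one selected, and that running this twice with the same inputs yields the same selection, which, as argued above, does not stop it being a deliberative process.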
Am I being too liberal with the term "deliberative", applying only a derivative or metaphorical extension of it to such intelligent agents or robots? Behind this question is, I think, an intuition that the decision-making process is distinct from people making choices, with consciousness added to the latter. As far as I can see, though, deliberation just is the process through which people make choices; these are not two separate "things". Note how people can say they are determined to do X, or that they chose to do X; both refer to the same decision-making process. Of course, one's own deliberative process is experienced directly, and we cannot experience anyone else's deliberative process directly but can only deal with the products of such processes. Still, I fail to see why any of this makes a substantive difference to describing a deliberative process or how it actually works.
Is consciousness required for us to learn from our past deliberations for the future? Again, I see no compelling reason to require consciousness just because we utilise some form of imaginative reconstruction to rerun our past deliberations; indeed, such imaginative processes are the means by which we run through our current deliberations. So the simple description I provided above does not prevent updates in the light of experience. We can expand "experience" as follows: the deliberation process operates with all data (experience) to date, and allows that new data can alter the next deliberation process.
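That expanded sense of "experience" can also be sketched minimally: a deliberator carries all data recorded to date, and each new datum can alter the next deliberation. The class and method names here are mine, purely illustrative:

```python
class Deliberator:
    """Sketch: deliberation operating over all data (experience) to date."""

    def __init__(self):
        self.experience = []  # every recorded outcome so far

    def record(self, outcome):
        # New data is simply folded into the store of experience.
        self.experience.append(outcome)

    def deliberate(self, options):
        # Prefer an option that has already featured in experience;
        # otherwise take the first on offer. Nothing here appeals to
        # consciousness: experience alone reshapes the next deliberation.
        for option in options:
            if option in self.experience:
                return option
        return options[0]


d = Deliberator()
first = d.deliberate(["walk", "cycle"])   # no experience yet
d.record("cycle")                          # new data arrives
second = d.deliberate(["walk", "cycle"])  # the next deliberation differs
```

The same pair of options yields a different selection once experience has changed, which is all the simple description above requires of learning.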
Does the use of "imaginative" in the above imply consciousness? Here I mean imagistic, as a broad term covering reconstruction that may use up to all five of our senses to reconstruct the past and make projections into the future. It certainly feels as though, without consciously doing this, there would be no capacity for such imaginings. Yet for quite a while in robotics there have been neural computing solutions that explicitly use an "imaginative" capacity to help solve the deliberation challenge (e.g. Igor Aleksander's MAGNUS).
So, all in all: whatever consciousness is, if it is not somehow entirely natural then it is not required for deliberation, and if it is entirely natural then it is a redundant concept in explaining deliberation in general. It is easier to avoid such discussions, which can only sidetrack such investigations to no benefit.