What we are interested in [when considering morality] is the real meaning of could have done otherwise [...] you’ve said why, it’s to do with what we mean by ability, have the power to, capable of, could. And it’s to do with [...] evaluating options and act on the bases of the evaluation. No need for unnecessary complication at all. Real randomness [...] can’t possibly make us deserving of blame, reward, shame, punishment and so on. And we can’t be deserving without it. Responsibility must be compatible with determinism or else it is a lie.
[...] what would you rather? Your decisions to depend upon the reasons that you have the desire set you do as well as the desire set? Or just the desire set, there due to indeterminism? [...] when you bring indeterminism to your computers or rather pseudo randomness, you place it very carefully somewhere, or else the thing would be utterly useless. [...] we have another perfectly good answer to why it’s a struggle to get computers to behave like us. Because we are much more complex.
Stephen Lawrence, 18/05/2012 (Jerry Coyne on free will, Talking Philosophy)

To begin with, I find the question of what I would rather causality be like strange. Should I not prefer neither of the presented options but simply that causality be such that effects always benefit me? What I would prefer is completely irrelevant. I also find it bizarre to appeal to simplicity as if it were a rule that we need only consider what is convenient in order to maintain moral simplicity. Occam's razor does not state that the simpler theory is always true. The principle is a guide to where we should look for answers first. But if anecdotal evidence points to a more complex picture, that's where we must head.
Stephen's comment that human complexity is what distinguishes us from computers is unhelpful. We know that we are more complex than the motherboards and CPUs that make up an electronic device. But what is it that renders us more complex? Is it just that there is more logic and opportunity for "mechanical" bugs in humans? Or do we differ in some more fundamental way? I'm suggesting it's the latter.
It is indeed true, as Stephen suggests, that I place pseudo-randomness in my code carefully. Code that uses a pseudo-random number generator (PRNG) comes with a known and morally interesting conundrum. If I deliberately introduce pseudo-randomness to make an airplane work well in most circumstances, is it acceptable that in some rare circumstance it causes crashes and kills people? If on investigation it turns out the crash could have been avoided had the PRNG output anything lower than 0.7854 on a scale of 0 to 1, how will we react? Will it do to say that the PRNG had statistically saved thousands of lives before that?
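A seeded PRNG is, of course, fully deterministic: replay the same seed and you get the exact same draws, so an investigator could in principle replay the software and see whether the fatal draw "could have been otherwise". A minimal Python sketch of that point, where the 0.7854 threshold, the seed and the function name are illustrative inventions of mine, not any real avionics logic:

```python
import random

def control_adjustment(rng):
    """Draw a value in [0, 1); in this toy scenario the crash
    occurs only when the draw is 0.7854 or higher."""
    draw = rng.random()
    return draw, draw < 0.7854  # True means the safe outcome

# Replaying the same seed reproduces the same draw exactly:
first = control_adjustment(random.Random(42))
replay = control_adjustment(random.Random(42))
assert first == replay  # the "random" choice could not have been otherwise

draw, safe = first
print(f"draw={draw:.4f}, safe={safe}")
```

Given the seed, the outcome was fixed all along; the moral question of blame survives the determinism intact.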
We do not seem to want computers to be like us, free and capable of error. But is it possible that our intelligence is fundamentally related to our capacity to err? I'm going to assume moral competence is associated with intelligence. After all, we don't put pigs on trial! Nor even crows, orcas or chimpanzees, even though they are among the smarter non-humans. If we assume this integral relationship between intelligence and moral competence, then it would seem obvious that ethicists should ask what intelligence is. Many assume, and keep insisting, that intelligence is equatable with rationality. What I'm suggesting is that it's not. At least not entirely. That said, if I understood the exact "mechanics" of intelligence, I'd be less a researching developer and more a birth nanny of non-biological babies. Why do we really quibble about free will in the context of morality? What we're really quibbling about is how it's possible for humans to make good decisions.
This has been a cornerstone of all modern law so far: are you competent enough to stand trial? So competence is what ethicists must study, not really the "freedom" part of free will. However, it's not unreasonable to claim that our "freedom" is what allows us to be intelligent, sensible beings and hence morally competent. Assume for a moment that some amount of pseudo-randomness makes for better software. What a PRNG does is allow a computer to be more free. The variables that rely on the PRNG are not fixed. It could be that the permissible ranges of the values fluctuate depending on how well the software that uses them performs. When the software starts off, the ranges are wide. As the software matures, the values get more and more constrained.
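The idea of a wide range that contracts as performance improves can be sketched as a toy evolutionary loop. Everything here is my own illustration, assuming a made-up fitness function; it is not taken from any real codebase:

```python
import random

def evolve_parameter(fitness, generations=200, seed=0):
    """Toy evolutionary loop: a single parameter starts with a wide
    permissible range; each time a random mutation improves fitness,
    the range contracts around the best value found so far."""
    rng = random.Random(seed)
    low, high = -10.0, 10.0          # wide range: the "infant" software
    best = rng.uniform(low, high)
    for _ in range(generations):
        candidate = rng.uniform(low, high)
        if fitness(candidate) > fitness(best):
            best = candidate
            # Maturing: constrain the range around what works.
            width = (high - low) * 0.9
            low, high = best - width / 2, best + width / 2
    return best, (low, high)

# Toy fitness function with its peak at 3.0:
best, (low, high) = evolve_parameter(lambda x: -(x - 3.0) ** 2)
print(f"best is near {best:.2f}, range width has shrunk to {high - low:.2f}")
```

Early on almost any mutation is permitted; as fitness improves, the range of "acceptable" values narrows, which is the growing-up picture the footnote calls a type of evolutionary algorithm.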
The baby is growing up; the baby is becoming a (wo)man. Isn't it peculiar how long humans stay helpless and cuddly? And how knuckle-headed teenagers can be in their experimentation? Maybe the insanity of freedom and the capacity for error have something to do with our great intelligence, sensibility and moral competence. Essentially, we couldn't be intelligent and morally competent if we couldn't on occasion be profoundly stupid.
2.^ We call such code a type of evolutionary algorithm.