Emergence of behavior through software
Tue, 17 Oct 2000 00:34:46 +0200
On Mon, Oct 16, 2000 at 11:04:34AM -0700, email@example.com wrote:
>> I had moved this discussion to firstname.lastname@example.org, for a reason.
> What was that reason?
> I object to the move:
> 1. I don't believe cyber-ethics exists as a separate study from ethics.
Neither do I. The name of the list is a deliberately broken spelling of
"cybernetics", one that let me insist on the strong relationship I tend
to see between ethics and cybernetics.
Actually, it was Tril who first remarked to me that it could be seen as
a prefixed expression "cyber'n'ethics".
Anyway. You may not like the name of the list, but that doesn't change
the fact that some discussions are on-topic on one list and off-topic on the other.
> I won't discuss that on this list, though; that belongs on the other list.
I suppose that by "that", you mean this meta-discussion
about lists, their names and topics. Maybe we should create a
list for that, too.
> Plus, I hate the prefix 'cyber'.
So do I. See above.
> 2. We're not discussing whether it's moral or right to form an AI; we're
> discussing whether it's possible and reasonable.
I consider that, in comparison to AI's vapourousness, even Tunes is rock solid,
so the topic deserves discussion in a separate forum.
> After we decide whether
> it's possible it will be reasonable (and vitally important) to discuss
> whether it's ethical and moral: in other words, whether we 'should' do it.
You cannot separate possibility from ethics.
Ethics IS about choosing behaviors, depending on what you think is possible.
Besides, half of Lynn's arguments, flawed as I may think they are,
ARE of ethical nature.
> 3. We're not discussing an issue unrelated to Tunes;
For any two issues, you can find a shorter or longer relationship.
> we're arguing about something that you've just revealed
> to be the core of Tunes itself.
I'm sorry if I wasn't clearer before.
I admit that my discussions with Basile, J. Pitrat, etc.,
and my various cybernetical readings, helped me understand it better
over time. So maybe I just didn't understand it well enough before
to be able to explain the issue.
> If we can't make AI, then we can't realise your vision of Tunes.
Wrong. Tunes' goal is not, has never been, and will never be to make an AI.
It is to provide a reflective infrastructure whence more complex
computational behavior can emerge than has ever emerged before.
I conspicuously make no upper-limit claims on the technology,
and limit my lower-limit claims about its usefulness
to smooth integration of existing technology
(which would be a feat in itself, if we achieve it).
> I didn't know this before, although I've had vague suspicions.
When I discuss TUNES with AI people,
I tell them that it isn't an AI project,
but that it's upstream to AI development.
We're building an infrastructure that will hopefully
make the lives of AI researchers easier,
and enable them to boldly go where no AI researcher has gone before,
even if they never reach the Holy Grail.
And use "IA" instead of "AI" if you wish,
it makes no difference to me or to Tunes as I consider it.
> Please clarify, Fare'. I don't understand why you're doing this.
Unless you have doubt about the nature of TUNES itself,
would you please move these meta-discussions to email@example.com ?
I'd rather we used firstname.lastname@example.org for more technical matters.
And YES, I am well aware that it's mostly my fault that we do not yet
have a more technical substratum to discuss.
[ François-René ÐVB Rideau | Reflection&Cybernethics | http://fare.tunes.org ]
[ TUNES project for a Free Reflective Computing System | http://tunes.org ]
There cannot be Ethics without Models of possible behaviors, and Imagination
to explore them. [Corollary: there is no Ethics for an all-knowing God,
but there are Ethics for mostly-ignorant but nevertheless thinking humans]