(Continued from yesterday)
If the program is part of a control system, a wrong algorithm reveals itself through strange system behavior, for example a halt or a loss of control.
On the other hand, in the case of a reasoning system, whose very purpose is "reasoning", we cannot know the correct answer in advance. So even if the output of the reasoning system becomes a mess, nobody can notice the mess.
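One way around this problem is to check invariants instead of answers: even when the correct output is unknown, a correct engine must still satisfy certain properties. Here is a minimal sketch of that idea for a fuzzy engine; all function names and values are my own hypothetical illustration, not the engine from this story.

```python
# Sanity checks for a reasoning engine whose "correct" output is unknown.
# Rather than comparing against a known answer, we verify invariants that
# any correct fuzzy inference step must satisfy.

def triangular(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def centroid_defuzzify(xs, mus):
    """Centroid of a sampled fuzzy set; midpoint of the universe if the set is empty."""
    total = sum(mus)
    if total == 0.0:
        return (xs[0] + xs[-1]) / 2.0
    return sum(x * m for x, m in zip(xs, mus)) / total

xs = [i / 10.0 for i in range(0, 101)]          # universe of discourse [0, 10]
mus = [triangular(x, 2.0, 5.0, 8.0) for x in xs]

# Invariant 1: every membership degree must lie in [0, 1].
assert all(0.0 <= m <= 1.0 for m in mus)

# Invariant 2: the defuzzified value must lie inside the universe.
out = centroid_defuzzify(xs, mus)
assert xs[0] <= out <= xs[-1]
```

Checks like these cannot prove the reasoning is right, but they can catch the silent mess that nobody would otherwise notice.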
-----
When I was a college student, I started an AI team (fuzzy logic and neural networks), working alone at the beginning, partly with my tuition in mind.
Of course, at the beginning there was no time-tested reasoning engine around me, so I had to build one from scratch, reading a lot of papers and books.
As the team leader, I asked the members of the team to build their own reasoning engines by themselves; I didn't share my code with anyone.
I thought that writing the code yourself was a good way to understand the reasoning algorithm.
However, that policy of mine led me into a pit.
-----
Now I notice that this story will serve well for the next installment of the "AI" series.
I will continue this story in the next installment.
Please don't think badly of me.