Why using only ML for emotions is not a good idea
The "classical" way to inject emotions into a conversational system with humans is to
mimic/simulate the emotions that the system can extract from a lot of data involving emotions.
This approach is doomed to fail for the following reasons:
- You need to gather a lot of data. If the system encounters a new context for which no data is available, it will react poorly (see the sketch after this list).
- Even when the corresponding data is available, it will be biased, especially where emotions are concerned; biases are unavoidable in data.
- The system cannot autonomously change its emotions based on the interaction, except in a few basic scenarios, and the first two points apply here as well. The only way such a system could react satisfactorily would be to have all possible data in the universe, and even then it could not react to genuinely new situations.
- Such systems cannot be controlled, i.e. they could go berserk. With ML alone there is no way to constrain the final answers. This is not just theory: in practice, chatbots, emotive or not, quite often produce nonsensical answers, not to mention that they can be rude or display the wrong emotion.
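To make the coverage problem from the first point concrete, here is a minimal sketch, assuming scikit-learn and using toy data with hypothetical labels of my own choosing. A purely data-driven emotion classifier, faced with an input unlike anything in its training set, must still emit one of the labels it has seen, with no built-in way to say "I don't know" or to adjust its reaction:

```python
# Minimal sketch (toy data, hypothetical labels) of the coverage problem:
# a purely data-driven emotion classifier has to output *some* known label
# even for inputs unlike anything in its training data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# A tiny, deliberately narrow training set about everyday small talk.
texts = [
    "I love this sunny weather",      # joy
    "My dog died yesterday",          # sadness
    "Stop interrupting me",           # anger
    "What a wonderful surprise",      # joy
    "I miss my old friends so much",  # sadness
    "You broke your promise again",   # anger
]
labels = ["joy", "sadness", "anger", "joy", "sadness", "anger"]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(texts, labels)

# An out-of-context input: nothing in the training data covers it,
# yet the model picks one of the three labels anyway.
new_context = "The quarterly derivatives ledger failed reconciliation"
print(clf.predict([new_context]))        # some label, essentially arbitrary
print(clf.predict_proba([new_context]))  # probabilities, but no abstain option
```

The same missing mechanism, a handle on what the model finally emits, is what the last point about control is getting at.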