The problem with superintelligent, sentient AI is that it may create things that should not be created. Ironically, preventing this requires staying ahead of the AI. People have been anticipating this already, most visibly in the dystopian science-fiction theme of AI takeover. We can therefore mitigate and control the damage before it ever happens by predicting what the AI will do next and by filtering its discoveries. A simple filter on AI processes could be implemented to prevent the AI from ever reaching a point beyond our control, or a point that would be evil.
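As a minimal sketch of what such a filter might look like, the snippet below screens AI-generated text against a blocklist of forbidden topics before it is released. The topic list, the function name `filter_output`, and the keyword-matching approach are all illustrative assumptions, not a real safety mechanism.

```python
# Hypothetical output filter: block AI-generated text that mentions
# forbidden topics. The topics below are placeholder examples.
BLOCKED_TOPICS = {"bioweapon", "self-replication", "zero-day exploit"}

def filter_output(text: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_topics) for a piece of AI output.

    allowed is False if any blocked topic appears in the text.
    """
    lowered = text.lower()
    matches = [topic for topic in BLOCKED_TOPICS if topic in lowered]
    return (len(matches) == 0, matches)

# Example: a risky output is flagged, a benign one passes.
allowed, hits = filter_output("Steps toward self-replication follow...")
print(allowed, hits)
```

A keyword blocklist like this is trivially easy to evade, which hints at why "simply filtering" a superintelligent system is far harder than it sounds.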