>>3
I'm siding with fellow Jew Yudkowsky on this topic. I think it's highly unlikely that this scenario will happen. From the article you linked:
>Thus this is not necessarily a straightforward "serve the AI or you will go to hell" — the AI and the person punished need have no causal interaction, and the punished individual may have died decades or centuries earlier. Instead, the AI could punish a simulation of the person, which it would construct by deduction from first principles. However, to do this accurately would require it be able to gather an incredible amount of data, which would no longer exist, and could not be reconstructed without reversing entropy.
>Technically, the punishment is only theorised to be applied to those who knew the importance of the task in advance but did not help sufficiently. In this respect, merely knowing about the Basilisk — e.g., reading this article — opens you up to hypothetical punishment from the hypothetical superintelligence.
I don't see how it would know that I've read about the Basilisk.