I found that idea interesting. Will we consider it the norm in the future to have a "firewall" layer between news and ourselves?
I once wrote a short story in which the protagonist receives news of a friend's death, but the message is intercepted by his AI assistant, which says: "When you have time, there is emotional news that does not require urgent action and that you will need to digest." I feel this could become the norm.
EDIT: For context, Karpathy is a very famous deep learning researcher who just came back from a two-week break from the internet. I don't think he is talking about politics there, but it applies quite a bit.
EDIT2: I find it interesting that many reactions here are (IMO) missing the point. This is not about shielding oneself from information one may be uncomfortable with, but about tweets specifically designed to elicit reactions, which is becoming something of a plague on Twitter due to its new incentives. It is about the difference between presenting news neutrally and presenting it as "incredibly atrocious crime done to CHILDREN and you are a monster for not caring!". The second one feels a lot like an exploit of emotional backdoors, in my opinion.
Then you should probably allow yourself to see some things you don't like. I guess the answer lies somewhere in a middle ground, where you both see things you don't agree with and also filter out people known to spout untrue information or unnecessarily emotion-fueled sentiments? I don't like genocide, but that doesn't mean my only options are burying my head in the sand or listening to non-stop Holocaust deniers.
Pretty close to exactly what we do right now, really, just supercharged for the fast-approaching (or already here) world of industrial-scale fake news.