

In the last six months of her life, my youngest daughter Molly was shown more than 2,000 suicide and self-harm posts on Instagram.
It was only in the weeks after she ended her life that we started to discover the torture she had suffered at the hands of social media algorithms that deluged her with life-threatening content.
Our legal team, who retrieved the posts from Instagram’s parent company Meta, as well as others from Pinterest, estimated that this number is just a fraction of what Molly saw – perhaps only five or ten per cent. During her inquest we learnt there were only 12 days in the last six months of her life when Molly didn’t engage with self-harm or suicide posts.
To this day, the posts she saw are hard for adults to view. The police officers who examined Molly’s devices said the material brought tears to their eyes; the child psychiatrist who gave evidence at Molly’s inquest said it affected his sleep for weeks; and the lawyers who sifted through tens of thousands of posts had to seek professional help to cope with what they had seen.
Even now, when we show MPs and regulators some of what Molly, who was 14 when she died, saw on social media, grown adults have had to leave the room in tears.
That is why the Online Safety Act is such a vital piece of legislation. Four long years after it was first proposed, and six after Molly’s death, it is a crucial first step towards addressing harmful material online. It gives Ofcom new powers to regulate and fine social media companies that fall below a basic duty of care towards their users.