According to the Rankings page, it's a variation of the OUSA formula:
The short version is that it depends on who you beat, who beats you, and what their rankings are.
Ahh - looking - yes: consider how you did compared to the first-place winner, whose times are comparable to those of other well-established O competitors. Then, for all the rankings to "fit", your effort must be worth that amount.
It's a combination of factors: first, whether the course itself was easier or harder than average, and then whether an individual effort was better or worse than average.
The calculation repeatedly adjusts things up and down until there is no effective difference between one version and the last.
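To make that concrete, here is a toy sketch of that kind of iterative adjustment. This is NOT the actual OUSA formula - the runner names, times, and the specific averaging rule are all made up - it just shows the fixed-point idea: runner ratings and course difficulties are re-estimated from each other until nothing changes.

```python
# Hypothetical finish times (minutes) per (runner, event) - made-up data.
times = {
    ("Alice", "E1"): 50.0, ("Alice", "E2"): 62.0,
    ("Bob",   "E1"): 60.0, ("Bob",   "E2"): 75.0,
    ("Cara",  "E1"): 54.0, ("Cara",  "E2"): 68.0,
}
runners = sorted({r for r, _ in times})
events = sorted({e for _, e in times})

rating = {r: 1.0 for r in runners}      # speed factor, lower = faster
difficulty = {e: 1.0 for e in events}   # course factor, higher = slower course

for _ in range(1000):
    prev = dict(rating)
    # An event looks "hard" when even highly rated runners post long times:
    for e in events:
        vals = [t / rating[r] for (r, ev), t in times.items() if ev == e]
        difficulty[e] = sum(vals) / len(vals)
    # A runner looks fast when their times are short for the course difficulty:
    for r in runners:
        vals = [t / difficulty[e] for (rn, e), t in times.items() if rn == r]
        rating[r] = sum(vals) / len(vals)
    # Pin the overall scale so ratings and difficulties can't drift together:
    mean = sum(rating.values()) / len(rating)
    for r in runners:
        rating[r] /= mean
    # Stop when this version is effectively identical to the last:
    if max(abs(rating[r] - prev[r]) for r in runners) < 1e-9:
        break

print({r: round(v, 3) for r, v in rating.items()})
```

With this toy data, Alice (fastest in both events) ends up with the lowest speed factor and E2 comes out as the harder course, purely from the times - no one tells the algorithm which course was harder.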
I don't think I would say it was any easier than:
Durand Eastman 2013
While the 2013 course had more controls, many of them were just there to funnel us around the fence.
But consider - the top finishers in that event got 59 points, while the finisher 12 minutes back got 50.
It's not about per-km speed; it's about your speed relative to the fastest runner at that event.
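A quick back-of-the-envelope illustration of that relative-speed idea, using the 59 points from the event above but an assumed winning time (the 55 minutes is made up, and this simple ratio scaling is NOT the actual OUSA formula):

```python
# Only the ratio of your time to the winner's matters here,
# not your absolute per-km pace.
winner_time = 55.0               # assumed winning time, minutes (made up)
winner_points = 59.0             # from the event described above
your_time = winner_time + 12.0   # finishing 12 minutes back

# One simple relative-speed scaling - NOT the actual OUSA formula:
your_points = winner_points * (winner_time / your_time)
print(round(your_points, 1))     # prints 48.4, in the ballpark of the 50 above
```

The point is that the same 12-minute gap would cost far fewer points on a long, slow course than on a short, fast one, because it's a smaller fraction of the winning time.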
Rankings of the runners matter too - it's not just how far back you were, but also who you were back from. A low-ranked runner doing well overall will pull everyone else's rankings down, while a high-ranked runner doing poorly overall will push everyone else's rankings up. Your rankings will also be pulled up or down if you perform similarly to a higher- or lower-ranked runner.
"Course difficulty" as it factors into the calculations has very little to do with how difficult a competitor perceives the course to be. It is based on times and rankings, so longer times and higher-ranked runners will lead to higher difficulty values, but the way that gets factored back into computing rankings means that what really matters is the ratio of slowest to fastest times.
The other thing to keep in mind when comparing scores from different events is that they are based on runners' rankings at the time, so if the runners in 2013 were more highly ranked then than now, scores from a similar event now will be lower.