No, it’d still be a problem; every diff between commits is expensive to render for the web, even if “only one company” is scraping it, “only one time”. Many of these applications are designed for humans, not scrapers.
If rendering data for scrapers were really the problem,
then the solution would be simple: just offer downloadable dumps of the publicly available information.
That would be extremely efficient and cost fractions of a penny in monthly bandwidth.
Plus, the data would be far more usable for whatever they’re using it for.
The problem is wanting the data to be freely available while the host retains the ability to leverage it later.
I don’t think we can have both of these.