From adce641d6abb86791f409d9563bafc1ecff0fb03 Mon Sep 17 00:00:00 2001
From: Branimir Karadzic
Date: Mon, 29 Sep 2014 22:07:11 -0700
Subject: [PATCH] Added link to blog post.

---
 3rdparty/ib-compress/README.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/3rdparty/ib-compress/README.md b/3rdparty/ib-compress/README.md
index 20ba298a..a1bac526 100644
--- a/3rdparty/ib-compress/README.md
+++ b/3rdparty/ib-compress/README.md
@@ -44,4 +44,6 @@ the current index, incrementing it afterwards. This has the benefit of also mean
 
 That's a better question! While my thoughts were that in theory it would average around 11-12bits a triangle, the Stanford Armadillo mesh (optimised with Tom Forsyth's vertex cache optimisation algorithm), with 345944 triangles, compresses the index buffer down to 563122 bytes, which is more like 13 and the Stanford Bunny is 12.85bits or so. This is not anywhere near the state of the art in terms of compression (which get down to less than a bit), but that isn't the goal.
 
-Performance wise, with the code posted here, the Armadillo compresses in 18.5 milliseconds and decompresses in 6.6 milliseconds on average on my system. The Stanford bunny is more like 1.4 milliseconds to decompress, relatively.
\ No newline at end of file
+Performance wise, with the code posted here, the Armadillo compresses in 18.5 milliseconds and decompresses in 6.6 milliseconds on average on my system. The Stanford bunny is more like 1.4 milliseconds to decompress, relatively.
+
+https://conorstokes.github.io/graphics/2014/09/28/vertex-cache-optimised-index-buffer-compression/
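As a quick sanity check of the compression figure quoted in the patched README ("345944 triangles, compresses the index buffer down to 563122 bytes, which is more like 13" bits per triangle), the arithmetic can be verified directly; this is just a sketch of the stated numbers, not part of the ib-compress code:

```python
# Verify the README's "roughly 13 bits per triangle" claim for the
# Stanford Armadillo mesh figures quoted in the patch.
triangles = 345944          # triangle count from the README
compressed_bytes = 563122   # compressed index buffer size from the README

bits_per_triangle = compressed_bytes * 8 / triangles
print(f"{bits_per_triangle:.2f} bits per triangle")  # ~13.02
```

This confirms the quoted ratio is about 13 bits per triangle, above the 11-12 bit theoretical average the README mentions but well short of state-of-the-art coders that reach under a bit per triangle.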