It strikes me that determining which of a trillion points can be seen might well take more calculations than just rendering all trillion points in the first place.
You'd start by rendering the points closest to the camera; once a point is identified as visible, you wouldn't bother calculating anything for the points behind it, since they're blocked from view. Then you'd work further and further away, theoretically eliminating the need to calculate the position of most of the points. Something like the sketch below.
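Here's a toy version of that idea in Python. Everything here is my own assumption (the point format, the pinhole camera, sorting by depth), not anything the maker has actually described:

```python
import math

def project(x, y, z, width, height, fov=90.0):
    # Simple pinhole camera at the origin looking down +z; my own
    # assumption, not anything from the actual demo.
    if z <= 0.0:
        return None                                  # behind the camera
    f = (width / 2) / math.tan(math.radians(fov) / 2)
    px = int(width / 2 + f * x / z)
    py = int(height / 2 + f * y / z)
    if 0 <= px < width and 0 <= py < height:
        return (px, py)
    return None                                      # off-screen

def render_front_to_back(points, width, height):
    # points: list of (x, y, z, color) tuples (hypothetical format)
    framebuffer = [[None] * width for _ in range(height)]
    remaining = width * height                       # uncovered pixels left
    # Visit nearest-first, so the first point to land on a pixel is the
    # visible one and everything behind it costs only a single lookup.
    for x, y, z, color in sorted(points, key=lambda p: p[2]):
        hit = project(x, y, z, width, height)
        if hit is None:
            continue
        px, py = hit
        if framebuffer[py][px] is not None:
            continue                                 # pixel already covered
        framebuffer[py][px] = color
        remaining -= 1
        if remaining == 0:
            break                                    # screen full: skip the rest
    return framebuffer
```

Of course you'd never actually sort a trillion points up front; presumably some hierarchy like an octree would let you visit whole regions nearest-first and throw away entire blocked subtrees at once, which is where the real savings would have to come from.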
Seems very iffy to me.
Yeah, but calculating all those trillion points isn't really such a large job before selecting the ones that need displaying on screen. It's the shading, lighting, shadows, physics, particles, and post-effects that take up most of the processing and memory power. So the script takes only the relevant points and renders those, saving probably 80-90% of the work in a complex virtual environment (rough numbers sketched below). I might be wrong on the lighting/shadows/physics though, because this technology could use what's left of the CPU to render those effects in post-processing, only once the points have been given a basic texture and geometry.
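To put rough numbers on it (illustrative figures of my own, not from the maker): at 1080p there are only about two million pixels, so at most two million points can survive the visibility pass no matter how many you start with, and the expensive shading work only ever touches those.

```python
# Back-of-the-envelope cost split: the screen caps how many points can
# ever need the expensive per-point work. Figures are illustrative.
WIDTH, HEIGHT = 1920, 1080
TOTAL_POINTS = 10**12                     # the "trillion points" in the scene

visible = WIDTH * HEIGHT                  # at most one surviving point per pixel
print(f"shaded {visible:,} of {TOTAL_POINTS:,} points "
      f"({100 * visible / TOTAL_POINTS:.6f}%)")
# -> shaded 2,073,600 of 1,000,000,000,000 points (0.000207%)
```

So per point the saving is far beyond 90%; the 80-90% figure only makes sense as a share of total frame time, since the shading of those two million survivors still dominates.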
Although, the maker claiming that no memory card is needed, only processing hardware, is a bit of nonsense, because you need graphics/memory hardware to render pixels and send them to your screen anyway. It's probably just a way of reeling us in.