ChaosGroup announced on the ChaosGroup Labs podcast that they will be implementing a new way of processing images in the RT core: using the CPU(s) as additional CUDA devices.
Aside from the obvious efficiency gains, what mainly interests me here is that this finally offers a potential CPU-to-GPU migration pathway, something that quite simply hasn't existed until now beyond "the leap of faith".
That is to say, if you wanted to test the GPU waters, you could always start a project on the GPU and, if something didn't work or you ran out of RAM, switch back to the production renderer partway through the job. The problem is that you then suffer whatever consequences that switch brings. This could be particularly troublesome if the client has already seen and approved a GPU render that must now become a CPU render that simply won't match.
Add to this the issue of investment in rendering capacity. The future seems as though it will at the very least feature GPU rendering, potentially very heavily, but investing in technology that is unreliable and still underdeveloped would be a significant financial risk, especially if the software forces you into a zero-sum game where you must choose either CPU or GPU.
With the advent of hybrid rendering, I now consider it apodictic that out-of-core technology should be implemented in VRay RT. That is to say, if you can use the CPU and GPU in tandem, it makes sense to shift your workflow permanently to CUDA-hybrid; if the GPU does run out of memory and suffers a performance loss as a result, at least the GPU is doing something. Even if the GPU suffered a 90% performance bottleneck, which I highly doubt would be the case, I would still rather have it doing something than nothing. As GPUs improve, you could then invest progressively more heavily in them while still using what you already have, rather than burying money in expensive GPUs that at least some of the time cannot be used, and falling back on an ageing CPU farm that isn't receiving resources because they've been redirected into GPUs.
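To make the back-of-the-envelope case concrete, here is a toy calculation. All of the throughput figures below are hypothetical illustrations of the worst-case 90% slowdown mentioned above, not numbers from ChaosGroup:

```python
# Hypothetical render throughput figures (samples/sec); purely illustrative.
cpu_rate = 100.0        # assumed CPU-only throughput
gpu_rate_full = 400.0   # assumed GPU throughput when the scene fits in VRAM
retained = 0.10         # i.e. a 90% out-of-core slowdown, the worst case above

# Even a heavily bottlenecked GPU still adds capacity on top of the CPU.
gpu_rate_degraded = gpu_rate_full * retained   # 40.0 samples/sec
hybrid_rate = cpu_rate + gpu_rate_degraded     # 140.0 samples/sec

speedup = hybrid_rate / cpu_rate
print(f"Hybrid throughput is {speedup:.0%} of CPU-only")
```

Under these assumed numbers, a GPU running at a tenth of its normal speed still lifts total throughput 40% above CPU-only, which is the whole argument for never leaving it idle.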
I will take this case to ChaosGroup and report how I get on. I really hope I can get through to them, as this would make clear how I should progress to GPU rendering, as opposed to simply waiting for 24-32GB cards to drop to a reasonable price. And even then, many of the studios I contract for require more RAM than that, sometimes two to three times as much, which puts a GPU future a very long way off indeed.
I really hope they listen.