Color me skeptical. It sounds like they've rediscovered voxels, and nothing more.
In the early days of 3D graphics, a variety of methods were attempted, such as voxel ray casting, quadratic surfaces, and ray tracing. The reason the industry settled on rasterization is that rasterization is fast. There are some impressive effects you can get with other methods (e.g., much better reflections via ray tracing) that aren't available with rasterization, but that's no good for commercial games if you're stuck at two frames per second.
If you want a technology to be viable for commercial games, then it needs to be able to deliver 60 frames per second while looking pretty nice on hardware that isn't unduly expensive. To make a seven-minute demo video, you don't need to be able to hit 60 frames per second. You can render one frame per minute and leave a computer running for a week to make your video.
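For concreteness, here's a quick back-of-the-envelope in Python. The 30 fps playback rate and one-frame-per-minute render speed are just my illustrative assumptions, not anything Euclideon has published:

    # Back-of-the-envelope: offline rendering a demo video vs. real-time rendering.
    # Assumptions (mine, for illustration): a 7-minute video played back at
    # 30 frames per second, rendered offline at one frame per minute.
    DEMO_MINUTES = 7
    VIDEO_FPS = 30                   # assumed playback frame rate
    OFFLINE_SECONDS_PER_FRAME = 60   # assumed offline render speed

    total_frames = DEMO_MINUTES * 60 * VIDEO_FPS             # 12,600 frames
    offline_days = total_frames * OFFLINE_SECONDS_PER_FRAME / 86_400

    print(f"{total_frames} frames in the demo")               # 12600
    print(f"~{offline_days:.1f} days to render offline")      # ~8.8 days, roughly a week
    print(f"real-time budget at 60 fps: {1000 / 60:.1f} ms per frame")  # 16.7 ms

In other words, a renderer that takes a full minute per frame can still produce a polished seven-minute video in about a week, while being several hundred thousand times too slow for an actual game.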
In order to prove that the technology is real, you have to let members of the tech media test it out in real time. Let someone who is skeptical take the controls: move forward and back, turn left and right, and rotate the camera, all live. Prove that you can render it on the fly, rather than relying on a ton of preprocessing. If the technology works properly, this shouldn't be hard to do. So why haven't they?
Above, I said that the reason other graphical methods didn't catch on is that they were too slow. As time passes, graphics hardware gets faster, so one might think that once it gets fast enough, some other method will overtake rasterization.
But what about situations where dramatically more computing power is already available today? For example, Pixar spends hours rendering each frame of their movies, which means they can take on the order of a hundred thousand times as long per frame as a commercial game can afford. And they can do that on seriously powerful hardware, too, as spending a million dollars on hardware to render a movie would be a relatively minor expense for them. And they still use rasterization.
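To put a rough number on that ratio, here's the same kind of calculation in Python. The hour-per-frame figure is an assumed order of magnitude, not an official Pixar number; actual render times vary widely by shot:

    # Rough ratio of an offline film render to a game's per-frame budget.
    # HOURS_PER_FILM_FRAME is an assumed order of magnitude, not a published figure.
    GAME_FPS = 60
    HOURS_PER_FILM_FRAME = 1

    game_budget_s = 1 / GAME_FPS                 # ~0.0167 s per frame at 60 fps
    film_frame_s = HOURS_PER_FILM_FRAME * 3600   # 3,600 s per frame

    print(f"game frame budget: {game_budget_s * 1000:.1f} ms")
    print(f"film render is ~{film_frame_s / game_budget_s:,.0f}x slower per frame")  # ~216,000x

Even at just one hour per frame, that's a couple hundred thousand times the time budget a 60 fps game gets, which is the gap I'm gesturing at above.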
Part of that is certainly inertia. Right now, video cards from both AMD and Nvidia are very heavily optimized for rasterization. They really aren't optimized for other graphical methods, though they are getting better at more general workloads. But if some other method were vastly better than rasterization for that sort of workload, it would still be worth it for Pixar to use it, even if that meant taking several times as long to render each frame as it would on hardware optimized for that method.
Even if some other method were better, it would take a long time for the industry to transition. It takes about three years from the time AMD or Nvidia start working on a new architecture to the time the first cards based on it come to market. If everyone were to suddenly realize that Euclideon's method was far better than rasterization and games were to switch to it, all graphics hardware released within the next three years still wouldn't be able to run such games properly.
Furthermore, even once such hardware is out, you don't want to target a game at hardware that hardly anyone owns yet. It would probably be five or six years before games could safely rely on the technology and still have a decent potential playerbase, and even then, requiring a card released within the last few years would lock out a large fraction of that playerbase.