Recommended RobbCab's comment in Why Windows will likely surpass Android in the tablet market.
about 5 hours ago
Try 5.5 hours if you’re doing tablet stuff… about 2.5 if you’re playing Skyrim on it.
about 12 hours ago on Paper maker FiftyThree raises $15 million to build the Office suite of the future 2 replies 2 recommends
Just a note on that “continues to disregard indie developers” statement in the article… Kotaku reported that Don Mattrick confirmed there will be an indie developer program on the Xbox One:
“We’re going to have an independent creator program,” Don Mattrick, Microsoft’s head of interactive entertainment (read: he’s in charge of the Xbox), told me last week. “We’re going to sponsor it. We’re going to give people tools. We’re going to give more information.”
“That is something we think—I think—is important,” he said of an indie program. “That’s how I started in the industry. There’s no way we’re going to build a box that doesn’t support that.”
21 days ago on One platform: how the Xbox One could change everything at Microsoft 3 replies 17 recommends
Looks like your link just goes to the general Microsoft Studios channel. I presume you were intending to link to this panel talk?
Yeah, basically this sort of simulation, but expanded in size to be the entire game environment, and running on a Tesla infrastructure in the cloud rather than a single GPU:
Apparently NVIDIA agrees with you Steve, and it looks like they’re planning to address the bandwidth and latency issues that Mojave3012 mentioned in a SIGGRAPH 2013 talk on interactive cloud-computed indirect lighting:
Well… it’s a “fiasco” that a number of researchers in HPC technologies are determined to solve for distributed visualization processing.
You may be right, but it appears that Microsoft’s gamble is that you are wrong, and that the issues with this sort of distributed visualization processing will be worked out during this console generation. They certainly appear to be expecting that this processing scenario is more likely to work in the next 8 years than the gaming model that uses a pure hosting/streaming environment, with all visual data delivered over the network.
Yup… this is going to be a difficult problem to solve (hence the dependency on research efforts in the HPC space). What follows is entirely speculative, as I’m a sysadmin, not a CUDA programming specialist for Tesla infrastructure or anything similar. You’re exactly right that they’re going to have to target the use of distributed technologies carefully, and come up with scenarios that can be massively parallelized easily but aren’t highly latency-dependent.
Things that immediately come to mind which could be easily parallelized would be environment modelling that doesn’t affect direct gameplay interactions. For instance, consider simulating an oceanic environment with full wind and fluid dynamics data. They could offload all of the world environment modelling that doesn’t affect direct X, Y, Z position data for valid targets in the environment. The XB1 could track user and target positioning in 3D space, relative cant, projectile locations, and other latency-sensitive critical data. Everything else about the environment (wind and wave modelling, multiple force interactions, physics accuracy, how environmental interactions ultimately affect targets, etc.) could be offloaded to a Tesla GPU environment for fluid dynamics and simulation processing.
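To make the split concrete, here’s a toy sketch of the kind of partitioning I mean. All the names and the task list are my own illustration, not any real Xbox One or Azure API:

```python
# Hypothetical sketch: divide simulation work between the local console
# (latency-critical tracking) and a cloud back end (latency-tolerant
# environment modelling). Task names are purely illustrative.

LATENCY_CRITICAL = {"player_position", "target_positions", "projectiles"}

def partition_tasks(tasks):
    """Split simulation tasks into local (latency-critical) and
    cloud-offloadable (latency-tolerant) buckets."""
    local, cloud = [], []
    for task in tasks:
        (local if task in LATENCY_CRITICAL else cloud).append(task)
    return local, cloud

tasks = ["player_position", "wind_field", "wave_model",
         "target_positions", "fluid_dynamics", "projectiles"]
local, cloud = partition_tasks(tasks)
# local: positioning/projectile tracking stays on the XB1;
# cloud: wind, wave, and fluid modelling get batch-simulated remotely
```

The interesting design question is where a given task lands when it straddles both buckets, which is exactly the boundary problem below.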
If wind and wave data are modelled in the cloud for areas that don’t directly affect targets (purely visual parts of the environment outside targets’ spheres of influence), that output might even be rendered and streamed down, with some latency-tolerant surface morphing on the XB1 to cover network delay. That could make for visuals and environment simulation far beyond what you could get by modelling the environment locally on the XB1. The difference between this sort of structure and a simple predetermined or canned environment model is that player interactions could be captured locally and sent to the cloud-based simulation to influence the model.
For areas of the game within targets’ spheres of influence relative to the player, cloud resources could simply deliver discrete environmental data events, based on the current modelling state, to the XB1 for each valid target (e.g. a force moves you -3, 2, 6 in 3D space; a force moves enemy #1 by 9, 4, 0). This could be far faster than having the XB1 simulate the entire physics and fluid dynamics environment and run the calculations for how targets are affected.
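As a sketch of how cheap that is on the console side, here’s what applying those cloud-computed displacement events might look like. The event format ("entity", (dx, dy, dz)) is my own invention:

```python
# Illustrative sketch: the console applies small, discrete force events
# computed in the cloud instead of running the full fluid simulation
# itself. Event format is hypothetical.

def apply_events(positions, events):
    """Apply cloud-computed displacement events to tracked entities."""
    for entity, (dx, dy, dz) in events:
        x, y, z = positions[entity]
        positions[entity] = (x + dx, y + dy, z + dz)
    return positions

positions = {"player": (0, 0, 0), "enemy_1": (10, 5, 0)}
events = [("player", (-3, 2, 6)), ("enemy_1", (9, 4, 0))]
apply_events(positions, events)
# player ends up at (-3, 2, 6); enemy_1 at (19, 9, 0)
```

Each event is just a handful of floats per target per tick, which is trivial next to simulating the fluid dynamics locally.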
Then the XB1 is just focused on rendering the current frame based on physics and environment data streaming from cloud resources, and on tracking target positions for projectile hit/miss calculations. The rest of it… the part that makes the environment seem “real” and dynamic… may not need to be handled by the XB1 at all.
Alternately, create a forest fire in a land-based game, and dynamically simulate how the fire moves based on player interactions and a physics model. The overall environment modelling, and even a good chunk of the rendering, doesn’t necessarily directly impact the player in terms of positioning or targeting (so it can be a bit off in terms of latency without being noticed). The player’s local interactions with the environment would have to be locally calculated, and I think for now locally rendered, but even that could change over time.
Finally, looking a bit more at CPU-based calculations… one of the issues with MMO environments is the CPU load needed for a client to track hordes of players and their interactions at once. Could this generation of console offload some of that tracking to a distributed processing environment (think of massive space battles in EVE, where you can see far more than you will actually interact with), with the console only tracking players/events that come within a certain “critically imminent” threshold for interactions? Essentially the background CPU processing is offloaded, the foreground processing is handled locally, but the whole environment is rendered for the player (potential issues: foreground and background could drift slightly out of sync due to latency, and how do you handle objects/players on that transition boundary?). You could build far more massive viewable environments by splitting processing loads in such a manner.
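The “critically imminent” threshold idea could be as simple as a distance check per entity. A speculative sketch, with a made-up radius and entity names:

```python
# Speculative sketch of proximity-based load splitting: entities inside
# the radius are simulated locally on the console; everything beyond it
# is tracked only via coarse cloud updates. All values are hypothetical.

import math

CRITICAL_RADIUS = 50.0  # arbitrary threshold in world units

def split_by_proximity(player_pos, entities):
    """Return (local, remote) entity name lists by distance to player."""
    local, remote = [], []
    for name, pos in entities.items():
        dist = math.dist(player_pos, pos)
        (local if dist <= CRITICAL_RADIUS else remote).append(name)
    return local, remote

entities = {"fighter_a": (10, 0, 5), "fighter_b": (400, 120, 0),
            "capital_ship": (30, 20, 10)}
local, remote = split_by_proximity((0, 0, 0), entities)
# fighter_a and capital_ship fall inside the radius; fighter_b doesn't
```

The hard part, as noted above, is entities hovering near the boundary: hysteresis (a wider radius for demotion than promotion) is one common way to keep them from flip-flopping between buckets every frame.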
That’s entirely theory-crafting though… I suspect that there are a number of tricks for rendering visualizations non-locally that aren’t on my radar, but which could be split out using such an infrastructure. That’s part of Microsoft’s challenge with this console infrastructure though – to figure out how they make the system resources effectively grow over time as they come up with scenarios to transfer processing to remote resources.
24 days ago 33 comments 9 recommends
25 days ago